%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                                                                 %
% CSPACK - Reference Manual -- LaTeX Source                       %
%                                                                 %
% Front Material: Title page,                                     %
%                 Copyright Notice                                %
%                 Preliminary Remarks                             %
%                 Acknowledgements                                %
%                 Table of Contents                               %
%                                                                 %
% Editor: Michel Goossens / CN-AS                                 %
% Last Mod.: 22 June 1993 10:00 mg                                %
%                                                                 %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Title page                                                       %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{titlepage}
\HTML{<PRE>}%
\notHTML{\vspace*{-23mm}}%
\notHTML{\mbox{\epsfig{file=/usr/local/lib/tex/ps/cern15.eps,height=30mm}}}%
\HTML{\mbox{}}%
\hfill
\raise8mm\hbox{\Large\bf CERN Program Library Long Writeups Q123}
\hfill\mbox{}
\begin{center}
\mbox{}\\[6mm]
\HTML{<P>\\}
\mbox{\Ptitle{CSPACK}}\\[2cm]
\HTML{<P>\\}
{\LARGE Client-Server}\\[1cm]
{\LARGE Routines and Utilities}\\[2cm]
\HTML{<P>\\}
{\LARGE Version 1.30}\\[3cm]
\HTML{<P>\\}
{\Large Application Software Group}\\[6mm]
{\Large Computing and Networks Division}\\[2cm]
\end{center}
\notHTML{\vfill}%
\HTML{<P>}
\begin{center}\Large CERN Geneva, Switzerland\end{center}
\HTML{</PRE>}%
\end{titlepage}
\Filename{H1Preface}
\HTML{<H1>Preface</H1>}
\Filename{H2Copyright}
\HTML{<H2>Copyright page</H2>}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Copyright page                                                   %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\HTML{<PRE>}
\NODOC{\thispagestyle{empty}}
\framebox[\textwidth][t]{\hfill\begin{minipage}{0.96\textwidth}%
\NODOC{\vspace*{3mm}}
\begin{center}Copyright Notice\end{center}
\NODOC{\parskip.6\baselineskip}
CERN Program Library entry \textbf{Q123}

\textbf{CSPACK -- Client Server package}

\copyright{} Copyright CERN, Geneva 1993

Copyright and any other appropriate legal protection of these computer
programs and associated documentation reserved in all countries of the
world.
These programs or documentation may not be reproduced by any method
without prior written consent of the Director-General of CERN or his
delegate. Permission for the usage of any programs described herein is
granted a priori to those scientific institutes associated with the CERN
experimental program or with whom CERN has concluded a scientific
collaboration agreement.

Requests for information should be addressed to:
\vspace*{-.5\baselineskip}
\begin{center}
\tt\begin{tabular}{l}
CERN Program Library Office \\
CERN-CN Division \\
CH-1211 Geneva 23 \\
Switzerland \\
Tel. +41 22 767 4951 \\
Fax. +41 22 767 7155 \\
Bitnet: CERNLIB@CERNVM \\
DECnet: VXCERN::CERNLIB (node 22.190) \\
Internet: [email protected]
\end{tabular}
\end{center}
\vspace*{2mm}
\end{minipage}\hfill}%end of minipage in framebox
\NODOC{\vspace{6mm}}
{\bf Trademark notice: All trademarks appearing in this guide are acknowledged as such.}
\NODOC{\vfill}
\begin{tabular}{l@{\quad}l@{\quad}>{\small\tt}l}
{\em Contact Person\/}: & Jamie Shiers /CN & (JAMIE\atsign CERNVM.CERN.CH)\\[1mm]
{\em Technical Realization\/}: & Michel Goossens /CN & (GOOSSENS\atsign CERNVM.CERN.CH)\\[2cm]
\textem{Edition -- June 1993}
\end{tabular}
\HTML{</PRE>}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Introductory material                                            %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\pagenumbering{roman}
\setcounter{page}{1}
\Filename{H2cspfront-Preliminary-remarks}
\section*{Preliminary remarks}
\par
This {\bf Complete Reference} of the CSPACK system consists of four parts:
\begin{OL}
\item An {\bf overview} of the system.
\item A {\bf step-by-step tutorial introduction} to the system.
\item A {\bf reference guide}, describing each command in detail.
\item An {\bf installation and management guide}.
\end{OL}
The \CSPACK{} system is implemented on various mainframes and personal workstations.
In particular, versions exist for IBM~VM/CMS, VAX/VMS and various Unix-like platforms, such as Apollo, Cray, DECstation~3100, Hewlett-Packard, IBM RS6000, Silicon Graphics, MIPS and SUN.
\begin{center}
\fbox{\parbox{12cm}{Throughout this manual, commands to be \textbf{entered} are \Ucom{underlined}}}
\end{center}
\index{underlining}
\index{user input}
This document has been produced using \LaTeX~\cite{bib-LATEX} with the \texttt{cernman} style option, developed at CERN. A compressed PostScript file \Lit{cspack.ps.Z}, containing a complete printable version of this manual, can be obtained by anonymous ftp as follows (commands to be typed by the user are underlined):

\vspace*{3mm}
\begin{tabular}{@{\hspace{12mm}}>{\tt}l}
\underline{ftp asis01.cern.ch}\\
Trying 128.141.201.136...\\
Connected to asis01.cern.ch.\\
220 asis01 FTP server (SunOS 4.1) ready.\\
Name (asis01:username): \underline{anonymous}\\
Password: \underline{your\_{}mailaddress}\\
ftp> \underline{cd cernlib/doc/ps.dir}\\
ftp> \underline{get cspack.ps.Z}\\
ftp> \underline{quit}\\
\end{tabular}

\Filename{H2cspfront-Acknowledgements}
\section*{Acknowledgements}
Many people have contributed to the CSPACK package. The main authors are:
Ren\'e Brun, Olivier Couet, Mike Gerard, Fr\'ed\'eric Hemmer, Burkhard Holl,
Catherine Magnin, Ben Segal, Jamie Shiers and Jonathan Wood (Rutherford Appleton Laboratory, UK).

The author is grateful to the many people who contributed to the \CSPACK{} project, through discussions or by providing code and assistance. In particular, I would like to thank Miquel Marquina of the CERN Program Library, who has ensured the smooth installation of the package on all systems.
\Filename{H2cspfront-Related-Documents}
\section*{Related Documents}
This document can be complemented by the following documents:
\begin{UL}
\item FATMEN User Guide~\cite{bib-FATMEN}
\item CMZ User Guide~\cite{bib-CMZ}
\item PATCHY User Guide~\cite{bib-PATCHY}
\item HBOOK User Guide~\cite{bib-HBOOK}
\item PAW User Guide~\cite{bib-PAW}
\item KUIP - Kit for a User Interface Package~\cite{bib-KUIP}
\item ZEBRA - Data Structure Management System~\cite{bib-ZEBRA}
\item The FATMEN Report~\cite{bib-FATREP}
\item TMS - The CERN Tape Management System~\cite{bib-TMS}
\item The MUSCLE Report~\cite{bib-MUSCLE}
\item Computing at CERN in the 1990s~\cite{bib-NGB}
\end{UL}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Tables of contents ...                                           %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\tableofcontents
%\listoffigures
\newpage
\listoftables
\RequirePackage{fix-cm}
\documentclass[conference, final]{IEEEtran}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\graphicspath{ {./images/} }
\usepackage{amsmath,amssymb}
\usepackage{hyperref}
\usepackage{tikz}
\usepackage{environ}
\usetikzlibrary{shapes,arrows,positioning,calc}

\begin{document}

\title{On Policy Gradients}
\author{
\IEEEauthorblockN{Mattis Manfred K{\"a}mmerer}
\IEEEauthorblockA{
\textit{Technische Universit{\"a}t Darmstadt} \\
\href{https://orcid.org/0000-0002-6869-2379}{orcid.org/0000-0002-6869-2379}
}
}
\maketitle

\begin{abstract}
The goal of policy gradient approaches is to find a policy in a given class of policies which maximizes the expected return. Given a differentiable model of the policy, we want to apply a gradient-ascent technique to reach a local optimum. We mainly use gradient ascent because it is theoretically well researched. The main issue is that the gradient of the expected return with respect to the policy parameters is not available, so we need to estimate it. As policy gradient algorithms also tend to require on-policy data for the gradient estimate, their biggest weakness is poor sample efficiency. For this reason, most research focuses on finding algorithms with improved sample efficiency. This paper provides a formal introduction to policy gradients that traces the development of policy gradient approaches, and should enable the reader to follow current research on the topic.
\end{abstract}

\section{Introduction}
\label{intro}
Policy gradient methods are approaches to maximize the expected return in a Markov Decision Process (MDP). Using a parameterized policy to decide the next action, they can easily incorporate prior domain knowledge, but require a lot of configuration to produce an effective agent for a specific environment. They also frequently require on-policy training and many samples to find a good policy.
In this paper, we focus on approaches to estimate the policy gradient, though we also briefly introduce the most common policy classes. Policy improvement means a step in the parameter space such that the policy under the new parameters will on average perform better than the old policy, i.e., improve its expected return. Policy gradient estimation is the term we use for the process of computing the direction in parameter space for a policy improvement. Essentially, the goal is to estimate the gradient of the expected return with respect to the policy parameters. Since this is the core problem of policy gradient methods, it is also the main topic of this paper.

In section \ref{sec:prel}, we give some preliminaries and describe the problem setup in detail. In section \ref{sec:pge}, we discuss different approaches to estimate the policy gradient. Using our insights from section \ref{sec:pge}, we derive the actor-critic framework in section \ref{sec:ac}, which harnesses value-function estimation for improved gradient updates. Then, in section \ref{sec:natural}, we introduce some gradient-ascent methods that build on the approaches given in sections \ref{sec:pge} and \ref{sec:ac}, refining the gradient estimate with the Fisher information matrix to obtain the natural gradient. Finally, in section \ref{sec:outro}, we summarize the contents presented in this paper and give a short conclusion.

\section{Preliminaries}
\label{sec:prel}
We define states $s \in \mathbb{S}$, actions $a \in \mathbb{A}$, and rewards $r \in \mathbb{R}$. A trajectory $\tau := (s_0, a_0, s_1, a_1, \dots, s_T, a_T)$ is generated by drawing $s_0 \sim \mu_0(s_0)$ according to the distribution over initial states $\mu_0(s_0)$, and successively sampling $a_t \sim \pi(a_t|s_t)$ according to the policy $\pi$ parameterized by $\theta$, and $s_{t+1} \sim p(s_{t+1}|s_t,a_t)$, until the horizon $T$ or a terminal state is reached.
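The generative process just described can be sketched for a toy finite MDP; the two-state model, the reward table, and all identifiers below are illustrative assumptions, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state, 2-action MDP (illustrative assumption, not from the paper).
mu0 = np.array([1.0, 0.0])                  # initial state distribution mu_0(s0)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[s, a, s'] = p(s' | s, a)
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])      # r(s, a)

def pi(s, theta):
    """Softmax (Gibbs) policy over the two actions in state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def sample_trajectory(theta, T=20):
    """Draw s0 ~ mu0, then a_t ~ pi(.|s_t) and s_{t+1} ~ p(.|s_t, a_t)."""
    s = rng.choice(2, p=mu0)
    traj = []
    for _ in range(T):
        a = rng.choice(2, p=pi(s, theta))
        traj.append((s, a, R[s, a]))
        s = rng.choice(2, p=P[s, a])
    return traj

theta = np.zeros((2, 2))                    # one preference per (state, action)
tau = sample_trajectory(theta)
print(len(tau), tau[0])
```

Repeated calls with different $\theta$ produce the on-policy samples that the estimators discussed later consume.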
At each time step, we receive a reward according to $r_t = r(s_t, a_t) \equiv \mathbb{E}_{s'}\left[r(s_t,a_t,s')\right]$. A trajectory is also called a roll-out or an episode, though the term episode implies that it ends in a terminal state. We assume a Markov Decision Process (MDP), meaning the probability distribution of the next state is independent of past states $s_{0:t-1}$ given the present state $s_t$ and action $a_t$,
\begin{equation}
p(s_{t+1}|s_t,a_t)=p(s_{t+1}|s_{0:t},a_{0:t}).
\end{equation}
Here we define $i:j$ with $i,j \in \mathbb{N}, i < j$ as an index over all integers from $i$ to $j$, i.e., $s_{i:j} \equiv s_i, s_{i+1}, \dots, s_j$. We assume no additional prior knowledge about the environment, so the probability of a trajectory is
\begin{equation}
p_\pi(\tau) = \mu_0(s_0) \prod_{t=0}^{T-1} p(s_{t+1}|s_t, a_t) \pi(a_t|s_t).
\end{equation}
The most frequently used policy classes in policy gradient approaches are Gibbs policies $\pi(a|s) = \frac{\exp(\phi(s,a)^T\theta)}{\sum_b \exp(\phi(s,b)^T\theta)}$ \cite{Sutton:1999:PGM:3009657.3009806,Bagnell2004LearningD} for discrete problems, and Gaussian policies $\pi(a|s) = \mathcal{N}(\phi(s,a)^T\theta_1,\theta_2)$ for continuous problems, where $\theta_2$ is an exploration parameter \cite{Williams92simplestatistical,peter:article:1996}, and $\phi(s,a)$ is the vector of basis functions on the state-action pair.

\paragraph{Policy gradient}
Our goal with respect to episodes is to maximize the expectation of the total reward, also called the expected return. The total reward up to the horizon $T$ is $\sum_{t=0}^{T} r_{t}$. We additionally introduce a discount factor $\gamma \in [0,1)$. Intuitively, this reflects the idea that the relevance of later actions declines, and it ensures that the return is finite, even for the infinite horizon $T \to \infty$. The discounted total reward is
\begin{equation}
\mathcal{R}^\tau \equiv \mathcal{R}_0^T := \sum_{t=0}^{T} \gamma^t r_t.
\label{eqn:acc-reward}
\end{equation}
Since we have only limited knowledge of the performance of the policy, we need to approximate an optimal policy by estimating a gradient. Thus, we search for $\nabla_\theta J(\theta) := \nabla_\theta \mathbb{E}_{p_\pi(\tau)}\left[\mathcal{R}^\tau\right]$ in order to make a policy gradient step according to $\theta_{k+1} = \theta_k + \alpha_k \nabla_\theta J(\theta)$, where $\alpha_k$ denotes a learning rate. Section \ref{sec:pge} shows how we can estimate $\nabla_\theta J(\theta)$.

\section{Policy Gradient Estimation}
\label{sec:pge}
In this section, we introduce methods for estimating the policy gradient.

\paragraph{Finite-difference gradients}
A simple approach for gradient estimation is to choose a small perturbation $\delta\theta$ and evaluate the policy under the slightly changed parameters, as in
\begin{equation}
\nabla_\theta J(\theta) \approx \frac{J(\theta+\delta\theta)-J(\theta-\delta\theta)}{2\delta\theta}.
\end{equation}
This is generally called the symmetric derivative and can yield a good estimate of the gradient given a small $\delta \theta$. However, finite-difference gradients suffer from the curse of dimensionality, and can require a very small $\delta\theta$. Thus, finite-difference gradients only work well in specific scenarios, but should not be dismissed merely because of their simplicity.

\paragraph{Value functions}
If we know the actual value of a state, i.e., the expected return we will receive starting from state $s_t$, this function can be used to evaluate the performance of our policy, and can be written as
\begin{equation}
V^{\pi}(s_t) := \mathbb{E}_{\substack{s_{t+1:T} \\ a_{t:T}}}\left[\mathcal{R}_t^T\right].
\label{eqn:v}
\end{equation}
In addition to the value function, we also define the state-action value function, often called the Q-function.
Instead of the expected accumulated reward starting from state $s_t$, this function gives the expected accumulated reward given that an action $a_t$ is selected in state $s_t$,
\begin{equation}
Q^{\pi}(s_t, a_t) := \mathbb{E}_{\substack{s_{t+1:T} \\ a_{t+1:T}}}\left[\mathcal{R}_t^T\right].
\label{eqn:q}
\end{equation}
As we will see, this function also gives us the true gradient of $J(\theta)$, though in general we need to estimate it. Using the value function and the Q-function, we can derive a better estimate of the policy gradient.

\paragraph{Likelihood-ratio gradients}
For this derivation, we change the perspective a bit, which requires some additional definitions. We define ${\mu_\pi}_i = \sum_{t=0}^{\infty} \gamma^t p(s_t=s_i | s_0, \pi)$ as the discounted state distribution; it does not sum to one without normalization, which can be achieved by multiplying by $(1-\gamma)$. Note that $\mu_\pi$ is equivalent to the discounted state visit count $d^\pi$ introduced by Sutton et al. \cite{Sutton:1999:PGM:3009657.3009806}. Further, we define $P_\pi$ as the transition matrix, i.e., ${P_\pi}_{i,j}=\sum\nolimits_k p(s_j|s_i,a_k)\pi(a_k|s_i)$, $r_\pi$ as the vector of mean rewards for all states, given by ${r_\pi}_i = \sum_j r(s_i,a_j)\pi(a_j|s_i)$, and $\mu_0 = \left[\mu_0(s_0),\mu_0(s_1),\ldots\right]^T$ as a vector representing the initial state distribution. Finally, we define the vector of state values $V_\pi = \left[ V_\pi(s_0), V_\pi(s_1), \ldots \right]^T$, which satisfies the Bellman equation $V_\pi = r_\pi + \gamma P_\pi V_\pi$, so we can reformulate the problem as
\begin{align}
\underset{\theta}{\text{max}}\ J(\theta) &= \mu_0^T V_\pi \\
\text{s.t.}\ V_\pi &= r_\pi + \gamma P_\pi V_\pi . \nonumber
\end{align}
Since $\mu_0^T$ does not depend on $\theta$,
\begin{align*}
\nabla_\theta J(\theta) &= \nabla_\theta \mu_0^T V_\pi = \mu_0^T \nabla_\theta V_\pi.
\end{align*}
We can replace $\nabla_\theta V_\pi$ using
\begin{align*}
\nabla_\theta V_\pi &= \nabla_\theta \left( r_\pi + \gamma P_\pi V_\pi \right) \\
\nabla_\theta V_\pi &= \nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi + \gamma P_\pi \nabla_\theta V_\pi \\
(I-\gamma P_\pi) \nabla_\theta V_\pi &= \nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi \\
\nabla_\theta V_\pi &= (I - \gamma P_\pi)^{-1} (\nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi),
\end{align*}
and find that
\begin{align*}
\mu_\pi &= \mu_0 + \gamma P_\pi^T \mu_\pi \nonumber \\
(I- \gamma P_\pi^T)\mu_\pi &= \mu_0 \nonumber \\
\mu_\pi^T &= \mu_0^T (I- \gamma P_\pi)^{-1} ,
\end{align*}
which we can substitute back into the gradient equation:
\begin{align}
\nabla_\theta J(\theta) &= \mu_0^T (I - \gamma P_\pi)^{-1} (\nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi) \nonumber \\
&= \mu_\pi^T (\nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi) \nonumber \\
&\equiv \sum\nolimits_{i,j} \mu(s_i) \nabla_\theta\pi(a_j|s_i) Q_\pi(s_i,a_j) \label{eqn:likelihood-equiv} \\
&= \sum\nolimits_{i,j} \mu(s_i) \pi(a_j|s_i) \nabla_\theta\log\pi(a_j|s_i) Q_\pi(s_i,a_j). \nonumber
\end{align}
The equivalence in \eqref{eqn:likelihood-equiv} follows because $\nabla_\theta \pi(a|s)$ appears as a common factor in both terms of $\nabla_\theta r_\pi + \gamma (\nabla_\theta P_\pi) V_\pi$; factoring it out leaves exactly $Q_\pi(s,a) = r(s,a) + \gamma \sum_{s'} p(s'|s,a) V_\pi(s')$. Then, we use $\nabla_\theta \pi(a|s) = \pi(a|s)\nabla_\theta\log\pi(a|s)$, obtained from the likelihood-ratio identity $\nabla_\theta \log p(x|\theta) = \frac{\nabla_\theta p(x|\theta)}{p(x|\theta)}$. This gives us the likelihood-ratio gradient
\begin{equation}
\nabla_\theta J(\theta) = \mathbb{E}_{\substack{\ s \sim \mu_\pi \\a \sim \pi}} \Big[\nabla_\theta{\log\pi(a|s)}Q_\pi(s,a)\Big],
\label{eqn:like-grad}
\end{equation}
which intuitively means we should increase the probability of actions that lead to higher Q-values.
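As a numerical sanity check of the likelihood-ratio form, the sampled estimator can be compared against the exact gradient. The single-state (bandit) setting, the softmax policy, and the Q-values below are a hypothetical example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

q = np.array([1.0, 3.0])          # true Q-values of two arms (illustrative)

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def exact_grad(theta):
    # nabla_theta J = sum_a nabla_theta pi(a) Q(a); for a softmax policy
    # the Jacobian d pi / d theta is diag(pi) - pi pi^T.
    p = softmax(theta)
    jac = np.diag(p) - np.outer(p, p)
    return jac @ q

def likelihood_ratio_grad(theta, n=200_000):
    # E_a[ nabla_theta log pi(a) * Q(a) ], estimated from samples;
    # for softmax, nabla_theta log pi(a) = e_a - pi.
    p = softmax(theta)
    a = rng.choice(2, size=n, p=p)
    grad_log = np.eye(2)[a] - p
    return (grad_log * q[a, None]).mean(axis=0)

theta = np.array([0.5, -0.5])
print(exact_grad(theta))
print(likelihood_ratio_grad(theta))
```

The two printed vectors agree up to sampling noise, illustrating that the score-function estimator is unbiased.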
This formulation enables us to calculate the gradient $\nabla_\theta J(\theta)$ while directly taking advantage of the MDP structure in the form of $Q_\pi(s,a)$. Of course, we do not have the true $Q_\pi(s,a)$, so we need to approximate it by $\hat{Q}_\pi(s,a)$. In all of the following sections, when we say that we sample an episode, we mean that we draw $a \sim \pi(a|s)$, starting in state $s_0 \sim \mu_0(s_0)$, and match a function estimator to our observations. Following this procedure, it can be shown that for $\lim_{k\to\infty}\alpha_k = 0$ and $\sum_{k=0}^\infty \alpha_k = \infty$ we are guaranteed to converge to a local optimum \cite{Sutton:1999:PGM:3009657.3009806}. Approximating $Q_\pi(s,a)$ by an unbiased estimator $f_w^\pi(s_t, a_t) \equiv \hat{Q}_\pi(s_t, a_t)$, Sutton et al. \cite{Sutton:1999:PGM:3009657.3009806} show that with this function approximation we still converge to a true local optimum of $J(\theta)$.

\paragraph{Episode-based updates}
A very general approach to this optimization problem is given by episodic algorithms. We take a search distribution $p(\theta|\omega)$ over the parameter space of the policy class $\pi$, and sample acting policies from that distribution; the policy class $\pi$ is most often chosen to be deterministic. Using these policies, we sample trajectories $\tau$, and update the search distribution using the returns of our sampled roll-outs,
\begin{equation}
\nabla_\omega J(\omega) \approx \sum_{t=0}^T \nabla_\omega \log p(\theta|\omega) \mathcal{R}_t^T.
\end{equation}
The resulting algorithms are black-box optimizers, and as such are widely applicable, but they cannot use any temporal information and suffer from high variance. Given these insights, we require a way to design algorithms that improve the acting policy stepwise by observing each interaction with the environment.

\paragraph{Step-based updates}
One of the first classes of algorithms developed to update a policy directly from sampled returns is REINFORCE \cite{Williams92simplestatistical}.
REINFORCE samples a complete episode, at which point we can calculate the actual state-action values by traversing backwards over the trajectory, and estimates
\begin{equation}
\nabla_\theta J(\theta) \approx \sum_{t=0}^T \nabla_\theta\log\pi(a_t|s_t) \left(Q(s_t,a_t) - b_\tau \right).
\label{eqn:reinforce}
\end{equation}
This is sometimes also called Monte-Carlo gradient estimation. However, given $b_\tau = 0$ and $r_t > 0, \forall t=0,\dots,T$, we can only increase action probabilities. Of course, we normalize to ensure $\forall s \in \mathbb{S}: \int_\mathbb{A}{\pi(a|s)da} = 1$, which means actions can only become less probable relative to other actions. We find that this introduces more variance when learning from samples \cite{Sutton:1999:PGM:3009657.3009806}, and thereby defeats part of the purpose of the approach. One way to counter the variance is to use an effective baseline $b_\tau$. Peters et al. \cite{4867} find that an estimate of the optimal baseline can be calculated by
\begin{equation}
b_\tau = \frac
{\left\langle \left(\sum_{t=0}^T \nabla_\theta \log\pi(a_t|s_t) \right)^2 \sum_{t'=0}^T \gamma^{t'} r_{t'} \right\rangle}
{\left\langle \left(\sum_{t=0}^T \nabla_\theta \log\pi(a_t|s_t) \right)^2 \right\rangle},
\end{equation}
which does not affect the unbiasedness of the estimate. Whenever we estimate a value function for updating our policy, we call the policy the actor and the estimated value function the critic. From this observation, we define a class of policy optimization methods called actor-critic methods in section \ref{sec:ac}.

\section{Actor-Critic Methods}
\label{sec:ac}
Policy gradient methods can be described in terms of two main steps, often called policy evaluation and policy improvement. For actor-critic approaches, we separate these steps from the actor component by implementing a critic. This means the actor consists only of the policy, while the critic is focused on estimating a score for the actions taken.
By that concept, observations of the environment are given to the actor only to decide the next action, and to the critic only to improve its function estimation with the respective rewards. Figure \ref{fig:ac} shows the general structure of an actor-critic algorithm. Given this definition, we can already say that the algorithms presented at the end of section \ref{sec:pge} are actor-critic approaches. \tikzset{block/.style= {draw, rectangle, align=center, minimum height=2em, minimum width=3cm}} \begin{figure} \begin{center} \begin{tikzpicture}[auto, node distance=2cm,>=latex'] \node [block, name=env, minimum width=6cm] (env) {Environment}; \node [block, above of=env, node distance=1.5cm] (critic) {Critic \\ $\hat{Q}_\pi(s,a)$}; \node [block, above of=critic] (actor) {Actor \\ policy $\pi(a|s)$}; \draw[densely dotted] ($(actor.north west)+(-1.5,0.6)$) rectangle ($(critic.south east)+(1.5,-0.15)$); \node[above of=actor, node distance=0.7cm] {Agent}; \draw [->, align=left, swap, densely dashed] (critic.north) -- node{Policy \\ Improvement} (actor.south); \draw [->, pos=0.75] ($(env.north west)!0.15!(env.north)$) |- node{$s_t$} (actor.west); \draw [->, pos=0.75] ($(env.north west)!0.15!(env.north)$) |- node{$s_t, r_t$} (critic.west); \draw [->, pos=0.25] (actor.east) -| node{$a_t$} ($(env.north east)!0.15!(env.north)$); \draw [->, pos=0.75, swap] ($(env.north east)!0.15!(env.north)$) |- node{$a_t$} (critic.east); \draw [-, swap, pos=0.14] ($(env.north west)!0.15!(env.north)$) |- node{Observation} (critic.west); \draw [-, swap, pos=0.95] (actor.east) -| node{Action} ($(env.north east)!0.15!(env.north)$); \end{tikzpicture} \end{center} \caption{A visualization inspired by Kimura et al. \cite{Kimura1998AnAO}, showing the actor-critic framework.} \label{fig:ac} \end{figure} The critic estimates a state-action value function as defined in \eqref{eqn:q}. Sutton et al. \cite{Sutton:1999:PGM:3009657.3009806}, and Konda et al. 
\cite{Konda:2003:AA:942271.942292} find that the estimation $f_w^\pi(s,a) \approx Q_\pi(s,a)$ does not affect the unbiasedness of the gradient estimate under some restrictions. Specifically, this holds for
\begin{equation}
f_w^\pi(s,a) = {\nabla_\theta \log\pi(a|s)}^T w,
\label{eqn:compatible-approximator}
\end{equation}
i.e., for $f_w^\pi(s,a)$ being a linear function parameterized by the vector $w$. Sutton et al. \cite{Sutton:1999:PGM:3009657.3009806} call this a compatible function approximator. This guarantees that the function estimator does not cause divergence, and it has enabled much recent research in reinforcement learning for continuous control problems, e.g., in humanoid robotics. Traditionally, the improvement step is often done by Monte-Carlo sampling as in REINFORCE \eqref{eqn:reinforce}, or using temporal difference (TD) learning \cite{Sutton1988}, i.e., we use the temporal difference between the critic's estimates,
\begin{equation}
\delta(s_t) = r_t + \gamma \hat{V}_\pi(s_{t+1}) - \hat{V}_\pi(s_t).
\label{eqn:td-error}
\end{equation}
However, Sutton et al. \cite{1993b} find that this is only guaranteed to be unbiased if $\int_\mathbb{A}{\pi(a|s)f_w^\pi(s,a)da} = 0, \forall s \in \mathbb{S}$. Given this assumption, the function estimator $f_w^\pi$ is limited to approximating an advantage function,
\begin{equation}
f_w^\pi(s_t, a_t) \equiv \hat{A}_{\pi}(s_t, a_t) = \hat{Q}_{\pi}(s_t, a_t) - \hat{V}_{\pi}(s_t),
\label{eqn:adv}
\end{equation}
which requires bootstrapping for $\hat{V}_\pi$. If we use temporal difference in this context, we run into a problem: as \eqref{eqn:adv} subtracts $\hat{V}_\pi(s_t)$, we would only learn immediate rewards \cite{Peters_IICHR_2003}. This would render the process biased. Sutton et al. \cite{Sutton:1999:PGM:3009657.3009806} and Konda et al. \cite{NIPS1999_1786} suggest estimating an action value function as in \eqref{eqn:q}.
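The TD error just defined turns into a value-function update in only a few lines. The three-state chain environment, step size, and discount below are illustrative assumptions for a minimal tabular TD(0) sketch, not part of the paper:

```python
import numpy as np

# Illustrative 3-state chain: deterministic transitions 0 -> 1 -> 2 -> 0,
# reward 1.0 on entering state 2, discount gamma = 0.9.
gamma, alpha = 0.9, 0.05
V = np.zeros(3)                            # tabular critic V_hat

def step(s):
    s_next = (s + 1) % 3
    r = 1.0 if s_next == 2 else 0.0
    return s_next, r

s = 0
for _ in range(20_000):
    s_next, r = step(s)
    delta = r + gamma * V[s_next] - V[s]   # TD error delta(s_t)
    V[s] += alpha * delta                  # move V(s) toward the TD target
    s = s_next

print(V)
```

Replacing the table by a linear model $\hat{V}_\pi(s) = \phi(s)^T v$ gives the bootstrapped estimates discussed in the text.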
We can approximate this $f_w^\pi$ by least-squares optimization over multiple $\hat{Q}_\pi(s,a)$ estimates obtained from roll-outs. However, Peters et al. \cite{4863} find that this approximation is highly reliant on the distribution of the training data. This comes from the realization that $\hat{V}_\pi$, being only a state value function, covers just a subspace of the true action-value function. One can compare this to approximating a parabola by a line: the approximation changes wildly depending on which part of the parabola is in the training data. An approach to solve this bootstrapping problem is to rewrite the Bellman equation using \eqref{eqn:adv} and \eqref{eqn:q}. With $\hat{A}_\pi(s,a) = f_w^\pi(s,a)$, $\hat{V}_\pi(s) = \phi(s)^T v$, and a zero-mean error term $\epsilon \equiv \epsilon(s_t,a_t,s_{t+1})$, we get
\begin{align}
&\hat{A}_\pi(s,a) + \hat{V}_\pi(s) = r(s,a) + \gamma \int_\mathbb{S} p(s'|s,a)\hat{V}_\pi(s')ds', \\
&\nabla_\theta \log \pi(a_t|s_t)^T w + \phi(s_t)^T v = r(s_t,a_t) + \gamma \phi(s_{t+1})^T v + \epsilon ,
\end{align}
which involves only linear equations to solve \cite{4863}. With these insights in mind, section \ref{sec:natural} presents the natural gradient, a refined type of gradient which fits conveniently into the actor-critic setting we just established.

\section{Natural Gradient}
\label{sec:natural}
Natural gradients were first proposed for supervised learning settings by Amari \cite{Amari:1998:NGW:287476.287477}, but have been shown to be effective in reinforcement learning by Kakade \cite{Kakade:2001} and Peters et al. \cite{4863}. When using ordinary gradient steps, we find that the steps can become very small when a plateau is reached. This can drastically slow down the learning process, and in the worst case cause algorithms to terminate prematurely. However, we can use some additional information to refine the gradient. Figure \ref{fig:nat-grad-adv} shows an example by Peters et al.
\cite{Peters_IICHR_2003} that gives a visual intuition about the difference between ``vanilla'' and natural policy gradients.
\begin{figure}
\includegraphics[width=0.485\textwidth]{nat-grad-adv}
\caption{An experiment showing where the natural gradient has a great advantage \cite{Peters_IICHR_2003}.
}\label{fig:nat-grad-adv}
\end{figure}
Using the Fisher information matrix $F_\theta$ and the gradient estimate we discussed in section \ref{sec:pge} gives us the definition
\begin{equation}
\widetilde{\nabla}_\theta J(\theta) := F^{-1}_\theta \nabla_\theta J(\theta)
\label{eqn:nat-grad}
\end{equation}
of the natural gradient. The Fisher information matrix represents the certainty we have about our estimate of the gradient, and is defined as the covariance of the gradient of the log-likelihood of a trajectory $\tau_{\pi}^T$, which, as Peters et al. \cite{4863} show, can be written as
\begin{equation}
F_\theta = \int_\mathbb{S} d^\pi(s) \int_\mathbb{A} \pi(a|s) \nabla_\theta \log{\pi(a|s)} \nabla_\theta \log{\pi(a|s)}^T dads.
\label{eqn:F}
\end{equation}
Using a value function estimator and calculating the natural gradient, we get the natural policy gradient algorithm (NPG) \cite{NIPS2017_7233}. However, if we recall the definition \eqref{eqn:like-grad} of likelihood-ratio gradients and the compatible function approximator from \eqref{eqn:compatible-approximator}, we get
\begin{equation}
\nabla_\theta J(\theta) = F_\theta w.
\label{eqn:J-equals-F}
\end{equation}
From \eqref{eqn:nat-grad} and \eqref{eqn:J-equals-F}, it follows that
\begin{equation}
\widetilde{\nabla}_\theta J(\theta) = F^{-1}_\theta \nabla_\theta J(\theta) = F_\theta^{-1} F_\theta w = w.
\end{equation}
Thus, this approach does not require an actual estimate of the Fisher information matrix, but only an estimate of $w$, with the update step $\theta_{k+1} = \theta_k + \alpha_k w$. Peters et al.
\cite{4863} present this idea and suggest LSTD-Q($\lambda$), a version of least-squares temporal difference learning \cite{Boyan:1999:LTD:645528.657618}, as well as the episodic natural actor-critic (eNAC).

\section{Conclusion}
\label{sec:outro}
In this paper, we have introduced policy gradient methods as a class of reinforcement learning algorithms. We show why policy gradient methods are effective in these environments, and we give some intuitions for the concept. Further, we show the core elements of policy gradient methods, discuss some intricacies that the estimation of the policy gradient brings, and trace the development of research aimed at improving the efficiency and stability of policy gradients. We show that we can reuse value-estimation approaches in actor-critic settings to improve the gradient estimate through better policy evaluation. This leads to the introduction of the natural gradient as a way to iterate through policy space instead of parameter space, which improves sample efficiency, especially when the gradient in parameter space is very small. From the developments in recent research, it is fair to say that policy gradient methods play a major role in reinforcement learning.

\section*{Acknowledgments}
Many thanks to Samuele Tosatto for his helpful reviews. Also, this paper would not exist without the engaging lectures on reinforcement learning by Jan Peters.

\bibliographystyle{IEEEtran}
\bibliography{bibliography}

\end{document}
If you haven't already downloaded and unzipped \href{https://libaoj.in/courses/2021f/MATH3341/zip/Math.3341.zip}{\texttt{Math.3341.zip}}, download and unzip it under \verb|H:| (H Drive if you are working on the Remote Lab). Change the current working directory by typing \verb|cd H:\Math.3341\Math.3341.Lab.08| in the Command Window, and type \verb|edit lab_08_script| in the Command Window to edit \verb|lab_08_script.m|.
%---------------------------------------------
\section{Polynomial Interpolation Routines}
%---------------------------------------------
\begin{enumerate}[(a)]
\item Fit \verb|xdata| and \verb|ydata| by an \verb|n|th order polynomial using \verb|polyfit|. Then use \verb|polyval| to evaluate the polynomial at \verb|x|.
\item Evaluate the cubic spline of \verb|xdata| and \verb|ydata| at \verb|x| using the \verb|spline| command.
\item Now use the \verb|pchip| command to find the values of the piecewise cubic Hermite interpolating polynomial at \verb|x|.
\item Make a copy of your implementation of Lagrange interpolation from Homework 5. Use your function to find the function values of the Lagrange interpolation polynomial at \verb|x|.
\item Uncomment the ``3 Plot interpolation polynomials'' section, which will create the figure comparing each of the polynomial interpolations. If you cannot get your Lagrange interpolation polynomial to work, comment out the relevant lines of code that plot it in that figure. The expected plot is shown in Figure~\ref{fig:1}.
\end{enumerate}
\section{Derivatives of Interpolation Polynomials}
\begin{enumerate}[(a)]
\item \label{enum:II1} Use \verb|polyder| to calculate the coefficients of the first derivative of the interpolation polynomial given by \verb|polyfit| that you constructed, and evaluate it at \verb|x| using \verb|polyval|.
\item Repeat \eqref{enum:II1} to find the second derivative of the interpolation polynomial.
\item Fit \verb|xdata| and \verb|ydata| using a cubic spline and store the structure of the cubic spline interpolation polynomial in \verb|cs_struct|.
\item Use slicing to extract the columns of \verb|cs_struct.coefs| which correspond to each coefficient of the piecewise cubic spline, and store each of these columns in \verb|b|, \verb|c|, and \verb|d|, respectively.
\item Use these coefficients along with \verb|xdata| and \verb|x| to evaluate the first and second derivatives of the spline using \verb|cubic_spline_der.m|. Use \verb|help cubic_spline_der| to get details of the function.
\item Uncomment the ``4 Plot derivatives'' section to generate the corresponding plots. The expected plot is shown in Figure~\ref{fig:2}.
\end{enumerate}
At the end of the day, upload \verb|lab_08_script.m|, \verb|lab_08_figure_01.pdf| and \verb|lab_08_figure_02.pdf| to Overleaf (make sure you change the captions for the figures), then recompile, and submit the generated .pdf file on WyoCourses.
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\textwidth]{../Math.3341.Lab.08.ans/lab_08_figure_01.pdf}
\caption{Polynomial Interpolation using different routines}
\label{fig:1}
\end{figure}
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\textwidth]{../Math.3341.Lab.08.ans/lab_08_figure_02.pdf}
\caption{Derivatives of Interpolation Polynomials}
\label{fig:2}
\end{figure}
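For readers working outside MATLAB, the commands used in this lab have close NumPy/SciPy analogues. The sketch below mirrors the lab steps (polynomial fit, spline, pchip, and derivatives) on toy data; the data, polynomial order, and variable names are our own illustration, not the lab's.

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Toy data standing in for the lab's xdata/ydata
xdata = np.linspace(0.0, 1.0, 6)
ydata = np.sin(2 * np.pi * xdata)
x = np.linspace(0.0, 1.0, 101)

# polyfit / polyval analogue: degree-5 polynomial through the 6 points
coeffs = np.polyfit(xdata, ydata, 5)
y_poly = np.polyval(coeffs, x)

# spline / pchip analogues
cs = CubicSpline(xdata, ydata)
y_spline = cs(x)
y_pchip = PchipInterpolator(xdata, ydata)(x)

# polyder analogue: coefficients of the first and second derivatives
d1 = np.polyder(coeffs)
d2 = np.polyder(d1)
y_d1 = np.polyval(d1, x)
y_d2 = np.polyval(d2, x)

# CubicSpline exposes its derivatives directly via the nu argument
y_spline_d1 = cs(x, 1)
y_spline_d2 = cs(x, 2)
```

Both interpolants reproduce the data exactly at the nodes, which is a quick sanity check on any interpolation routine.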
\begin{figure*}[t]
\centering
\includegraphics[width=5in]{eager_design_2}
\caption{EAGER Architecture \label{fig:eager_design} }
\end{figure*}
To experiment with the implementation of API governance, we have developed EAGER -- an architecture for implementing governance that is suitable for integration as a cloud-native feature. In this section, we give an overview of the high-level design of EAGER and describe its main components and policy language. Our goal in its design is twofold. First, we wished to verify that the integration between policy specification, API specification, deployment control, and run-time control is feasible in a cloud setting. Second, we wished to use the design as the basis for a prototype implementation that we could use to evaluate the impact of API governance empirically.

EAGER is designed to be integrated with PaaS clouds. PaaS clouds accept code that is then deployed within the platform so that it may make calls to existing services (the cloud SDK) supported by the platform. EAGER is designed to intercept all events related to application deployment within the cloud and to enforce deployment-time governance checks and logging. When a policy verification check fails, EAGER aborts the deployment of the application and logs the information necessary to perform remediation. EAGER assumes that it is integrated with the cloud and that the cloud initiates\footnote{We use the term ``initiates'' to differentiate the first clean installation of the cloud from a cloud restart.
EAGER must be able to maintain compliance across restarts, but it assumes that when the cloud is installed and suitably tested, it is in a policy compliant state.} in a compliant state ({\em i.e.} there are no policy violations when the cloud is started, before any applications are deployed). It tries to maintain the cloud in a ``governed'' state at all times. That is, with EAGER active, the cloud is automatically prevented from transitioning out of policy compliance due to a change in the applications it hosts.

Figure~\ref{fig:eager_design} illustrates the main components of EAGER (in blue) and their interactions. Solid arrows represent the interactions that take place during deployment-time, before an application has been validated for deployment. Short-dashed arrows indicate the interactions that take place during deployment-time, after an application has been successfully validated. Long-dashed arrows indicate interactions at run-time. The diagram also outlines the components of EAGER that are used to provide deployment control and run-time control. Note that some components participate in interactions related to both deployment and run-time control (e.g. the Metadata Manager).

EAGER must be invoked by the cloud whenever a user attempts to deploy an application in the cloud.
The cloud's application deployment mechanisms must be altered so that each deployment request is intercepted by EAGER, which then performs the required governance checks. If a governance check fails, EAGER will preempt the application deployment, log relevant data pertaining to the event for later analysis, and return an error. Otherwise, it proceeds with the application deployment by activating the deployment mechanisms on the user's behalf.

Architecturally, the deployment action requires three inputs: the policy specification governing the deployment, the code to be deployed, and a specification of the APIs that the code exports. EAGER assumes that cloud administrators have developed and installed policies (stored in the Metadata Manager) that are to be checked against all deployments. API specifications for the application must also be available to the governance engine. Because the API specifications are derived from the code (and are, thus, under developer control and not administrator control), our design assumes that automated tools are available to perform analysis on the application and generate API specifications in a suitable API specification language. These specifications must be present when the deployment request is considered by the platform. In the prototype implementation described in Section~\ref{sec:eager_prototype_impl}, the API specifications are generated as part of the application development process ({\em e.g.} by the build system). They may also be offered as a trusted service hosted in the cloud. In this case, developers will submit their source code to this service, which will generate the necessary API specifications in the cloud and trigger the application deployment process via EAGER.
The proposed architecture is designed not to require major changes to the existing components of the cloud, since its deployment mechanisms are likely to be web service based. However, EAGER does require integration at the service level ({\em e.g.} it must be a trusted service component in a PaaS cloud).

\subsubsection{Metadata Manager}
The Metadata Manager stores all the API metadata in EAGER. This metadata includes policy specifications, API names, versions, specifications and dependencies. It uses the dependency information to compute the dependency tree among all deployed APIs and applications. Additionally, the Metadata Manager keeps track of developers, their subscriptions to various APIs and the access credentials (API keys) issued to them. For these purposes, the Metadata Manager must logically include both a database and an identity management system.

The Metadata Manager is exposed to other components through a well-defined web service interface. This interface allows querying and updating the existing API metadata. In the proposed model, the stored metadata is updated only occasionally (when a new application is deployed or when a developer subscribes to a published API). Therefore, the Metadata Manager does not need to support a very high write throughput. This performance characteristic allows the Metadata Manager to be implemented with strong transactional semantics, which reduces the development overhead of other components that rely on the Metadata Manager.
Availability can be improved via simple replication methods.

\subsubsection{API Deployment Coordinator}
\label{sec:adc}
The API Deployment Coordinator (ADC) intercepts all application deployment requests and determines whether they are suitable for deployment, based on a set of policies specified by the cloud administrators. It receives application deployment requests via a web service interface. At a high level, the ADC is the most important entity in the EAGER deployment control strategy.

An application deployment request contains the name of the application, its version number, the names and versions of the APIs exported by the application, detailed API specifications and other API dependencies as declared by the developer. Application developers only need to specify explicitly the name and version of the application and the list of dependencies (i.e. the APIs consumed by the application). All other metadata can be computed automatically by performing introspection on the application source code.

The API specifications used to describe the web APIs should specify the operations and the schema of their inputs and outputs. Any standard API description language can be used for this purpose, as long as it clearly describes the schema of the requests and responses. For describing REST interfaces, we can use the Web Application Description Language (WADL)~\cite{hl:wadl}, Swagger~\cite{hl:swagger}, the RESTful API Modeling Language (RAML) or any other language that provides similar functionality.
When a new deployment request is received, the ADC checks whether the application declares any API dependencies. If so, it queries the Metadata Manager to make sure that all the declared dependencies are already available in the cloud. Then it inspects the enclosed application metadata to see if the current application exports any web APIs. If the application exports at least one API, the ADC makes another call to the Metadata Manager and pulls any existing metadata related to that API. If the Metadata Manager cannot locate any data related to the API in question, ADC assumes it to be a brand new API (i.e. no previous version of that API has been deployed in the cloud), and proceeds to the next step of the governance check, which is policy validation. However, if any metadata regarding the API is found, then the ADC is dealing with an API update. In this case, the ADC compares the old API specifications with the latest ones provided in the application deployment request to see if they are compatible. To perform this API compatibility verification, the ADC checks to see whether the latest specification of an API contains all the operations available in the old specification. API specifications are generated at the application developer's end and submitted to EAGER along with the application deployment request. If the latest API specification is missing at least one operation that it had previously, the ADC reports this to the user and aborts the deployment. If all the past operations are present in the latest specification, the ADC performs a type check to make sure that all past and present operations are type compatible. This is done by performing recursive introspection on the input and output types declared in the API specifications. 
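Concretely, such a recursive check can be sketched as follows. This is a hypothetical illustration: the dict-based type representation ({\em attribute} $\rightarrow$ type and optional flag) and the function names are our own assumptions, not EAGER's API. It encodes the idea that an updated input type may only add optional attributes, while an updated output type must retain every old attribute.

```python
# Hypothetical sketch of a recursive compatibility check over API message
# types. Types are modeled as dicts mapping attribute names to
# {"type": ..., "optional": bool}; this representation is an assumption.

def input_compatible(old, new):
    """New input types may drop attributes; added attributes must be optional."""
    for name, attr in new.items():
        if name not in old:
            # A new *required* input attribute would break existing clients.
            if not attr.get("optional", False):
                return False
        elif not type_compatible(old[name]["type"], attr["type"]):
            return False
    return True

def output_compatible(old, new):
    """New output types must keep every old attribute (they may add more)."""
    for name, attr in old.items():
        if name not in new:
            return False  # removing an output field breaks existing clients
        if not type_compatible(attr["type"], new[name]["type"]):
            return False
    return True

def type_compatible(old, new):
    """Primitives must match exactly; structured types recurse (covariantly here)."""
    if isinstance(old, dict) and isinstance(new, dict):
        return output_compatible(old, new)
    return old == new
```

A real implementation would also track the direction of recursion (inputs contravariant, outputs covariant); the sketch treats nested types covariantly for brevity.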
EAGER looks for type compatibility based on the following rules, inspired by Hoare logic~\cite{Hoare:1969:ABC:363235.363259} and the rules of type inheritance from object-oriented programming:
\begin{itemize}
\vspace{0.1in}
\item A new version of an input type is compatible with the old version if it contains all or fewer attributes than the old version, and any attributes unique to the new version are optional.
\vspace{0.1in}
\item A new version of an output type is compatible with the old version if it contains all or more attributes than the old version.
\vspace{0.1in}
\end{itemize}
In addition to the type checks, the ADC may also compare other parameters declared in the API specifications, such as HTTP methods, MIME types and URL patterns. We have also explored and published results on using a combination of syntactic and semantic comparison to determine the compatibility between APIs~\cite{6930607,jayathilaka2014using}. Once the API specifications have been compared without error, the ADC initiates policy validation.

\subsubsection{EAGER Policy Language and Examples}
\label{sec:policy-lang}
Policies are specified by cloud or organizational administrators using a subset of an object-oriented language (we chose Python for the prototype). We restrict the language to prevent state from being preserved across policy validations. In particular, the EAGER policy interpreter disables file and network operations, third party library calls, and intrinsics that allow state to persist across invocations. In addition, EAGER processes each policy independently of the others (i.e. each policy must be self-contained and access no external state). All other language constructs and language features can be used to specify policies in EAGER.
To accommodate built-in language APIs that the administrators trust by {\em fiat}, all module and function restrictions of the EAGER policy language are enforced through a configurable white-list. The policy engine evaluates each module and function reference against this white-list to determine whether they are allowed in the context of EAGER. Cloud administrators have the freedom and flexibility to expand the set of allowed built-in and third party modules by making changes to this white-list.

As part of the policy language, EAGER defines a set of assertions that policy writers can use to specify various checks to perform on the applications. Currently, this assertion list includes:
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single]
assert_true(condition, optional_error_msg)
assert_false(condition, optional_error_msg)
assert_app_dependency(app, d_name, d_version)
assert_not_app_dependency(app, d_name, d_version)
assert_app_dependency_in_range(app, name,\
  lower, upper, exclude_lower, exclude_upper)
\end{lstlisting}
}
In addition to these assertions, EAGER adds a function called ``compare\_versions'' to the list of available built-in functions. Policy writers can use this function to compare version number strings associated with applications and APIs. Basing the policy language on Python allows EAGER to leverage existing programming tools to edit and debug policy files, and enables administrators to implement governance policies of arbitrary complexity with ease.
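The paper does not spell out how ``compare\_versions'' is implemented; a plausible minimal sketch, with strcmp-style return values (negative, zero, positive) and numeric dotted components, might look as follows. The padding rule (so that ``3'' equals ``3.0'') is our assumption, not a documented EAGER behavior.

```python
def compare_versions(a, b):
    """Compare dotted version strings numerically.

    Returns a negative number if a < b, zero if a == b, and a positive
    number if a > b. A hypothetical sketch of the policy-language built-in;
    the real EAGER helper may differ.
    """
    pa = [int(p) for p in a.split(".")]
    pb = [int(p) for p in b.split(".")]
    # Pad the shorter version with zeros so "3" compares equal to "3.0"
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    # Python list comparison is lexicographic over the int components
    return (pa > pb) - (pa < pb)
```

With this semantics, the policy fragment `compare_versions(g_api.version, '3.0') >= 0` reads naturally as ``Geo version is 3.0 or higher.''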
In the remainder of this section we illustrate the use of the policy language through examples. The first example policy mandates that any application or mash-up that uses both the Geo and Direction APIs must adhere to certain versioning rules. More specifically, if the application uses Geo 3.0 or higher, it must use Direction 4.0 or higher. Note that the version numbers are compared using the ``compare\_versions'' function described earlier.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
g = filter(lambda dep: dep.name == 'Geo',
           app.dependencies)
d = filter(lambda dep: dep.name == 'Direction',
           app.dependencies)
if g and d:
  g_api, d_api = g[0], d[0]
  if compare_versions(g_api.version, '3.0') >= 0:
    assert_true(
      compare_versions(d_api.version, '4.0') >= 0)
\end{lstlisting}
}
\vspace{0.05in}
In the above example, \textit{app} is a special immutable logical variable available to all policy files. This variable allows policies to access information pertaining to the current application deployment request. The assert\_true and assert\_false functions allow testing for arbitrary conditions, thus greatly improving the expressive power and flexibility of the policy language.

The next example shows a policy file that mandates that all applications deployed by the ``[email protected]'' user must have role-based authentication enabled, so that only users in the ``manager'' role can access them. To carry out this check the policy accesses the security configuration specified in the application descriptor (e.g. the web.xml file for a Java application).
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
if app.owner == '[email protected]':
  roles = app.web_xml['security-role']
  constraints = app.web_xml['security-constraint']
  assert_true(roles and constraints)
  assert_true(len(roles) == 1)
  assert_true('manager' == roles[0]['role-name'])
\end{lstlisting}
}
\vspace{0.05in}
Next, we present an example policy which mandates that all deployed APIs must explicitly declare an operation that is accessible through the HTTP OPTIONS method. This policy further ensures that these operations return a description of the API in the Swagger~\cite{hl:swagger} machine-readable API description language.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
options = filter(lambda op : op.method == 'OPTIONS',
                 api.operations)
assert_true(options, 'API does not support OPTIONS')
assert_true(options[0].type == 'swagger.API',
            'Does not return a Swagger description')
\end{lstlisting}
}
\vspace{0.05in}
Returning machine-readable API descriptions from web APIs makes it easier to automate the API discovery and consumption processes. Several other research efforts confirm the need for such descriptions~\cite{Verborgh:2012:FDB:2307819.2307828,Steiner:2011:FHC:1967428.1967433}. A policy such as this can help enforce such practices, resulting in a high-quality API ecosystem in the target cloud. The policy above also shows the use of the second, optional string argument to the assert\_true function (the same is supported by assert\_false as well). This argument can be used to specify a custom error message that is returned to the application developer if his/her application violates the assertion in question.

The next example is a relatively simple policy file that prevents developers from introducing dependencies on deprecated web APIs.
Deprecated APIs are those that have been flagged by their respective authors for removal in the near future. Therefore, introducing dependencies on such APIs is not recommended. The following policy will enforce this condition in the cloud.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
deprecated = filter(
    lambda dep : dep.status == 'DEPRECATED',
    app.dependencies)
assert_false(deprecated,
    'Must not use a deprecated dependency')
\end{lstlisting}
}
\vspace{0.05in}
Our next example presents a policy that enforces governance rules in a user-aware (i.e. tenant-aware) manner. Assume a multi-tenant private PaaS cloud that is being used by members of the development team and the sales team of a company. The primary goal in this case is to ensure that applications deployed by both teams log their activities using a set of preexisting logging APIs. However, we further want to ensure that applications deployed by the sales team log their activities using a special analytics API. A policy such as the one that follows can enforce these conditions.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
if app.owner.endswith('@engineering.test.com'):
  assert_app_dependency(app, 'Log', '1.0')
elif app.owner.endswith('@sales.test.com'):
  assert_app_dependency(app, 'AnalyticsLog', '1.0')
else:
  assert_app_dependency(app, 'GenericLog', '1.0')
\end{lstlisting}
}
\vspace{0.05in}
The example below shows a policy that mandates that all HTTP GET operations exposed by APIs must support paging. APIs that do so define two input parameters named ``start'' and ``count'' on the GET call.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
for api in app.api_list:
  get = filter(lambda op : op.method == 'GET',
               api.operations)
  for op in get:
    param_names = map(lambda p : p.name,
                      op.parameters)
    assert_true('start' in param_names and
                'count' in param_names)
\end{lstlisting}
}
\vspace{0.05in}
This policy accesses the metadata of API operations that is available in the API descriptions. Since API descriptions can be auto-generated from the source code of the APIs, this policy indirectly references information pertaining to the actual API implementations.

Finally, we present an example for the POST method. The policy below mandates that all POST operations exposed by an API are secured with OAuth version 2.0.
\vspace{0.05in}
{\footnotesize
\begin{lstlisting}[language=Python, frame=single, showstringspaces=false]
for api in app.api_list:
  post = filter(lambda op : op.method == 'POST',
                api.operations)
  for op in post:
    assert_true(op.authorizations.get('oauth2'))
\end{lstlisting}
}
\vspace{0.05in}
EAGER places no restrictions on how many policy files are specified by administrators. Applications are validated against each policy file. Failure of any assertion in any policy file will cause the ADC to abort application deployment. Once an application has been checked against all applicable policies, the ADC persists the latest application and API metadata into the Metadata Manager. At this point, the ADC may report success to the user and proceed with application deployment. In a PaaS setting this deployment activity typically involves three steps:
\begin{enumerate}
\vspace{0.05in}
\item Deploy the application in the cloud application run-time (application server).
\vspace{0.05in}
\item Publish the APIs enclosed in the application and their specifications to the API Discovery Portal or catalog.
\vspace{0.05in}
\item Publish the APIs enclosed in the application to an API Gateway server.
\vspace{0.05in}
\end{enumerate}
Step $1$ is required to complete the application deployment in the cloud even without EAGER. We explain the significance of steps $2$ and $3$ in the following subsections.

\subsubsection{API Discovery Portal}
The API Discovery Portal (ADP) is an online catalog where developers can browse the available web APIs. Whenever the ADC approves and deploys a new application, it registers all the APIs exported by the application in the ADP. EAGER mandates that any developer interested in using an API first subscribe to that API and obtain the proper credentials (API keys) from the ADP. The API keys issued by the ADP can consist of an OAuth~\cite{oauth2} access token (as is typical of many commercial REST-based web services) or a similar authorization credential, which can be used to identify the developer/application that is invoking the API. This identification process is used for auditing and run-time governance in EAGER.

The API keys issued by the ADP are stored in the Metadata Manager. When a programmer develops a new application that uses one or more APIs, we can require the developer to declare those dependencies along with the API keys obtained from the ADP. The ADC verifies this information against the Metadata Manager as a part of its dependency check and ensures that the declared dependencies are correct and the specified API keys are valid. Deployment-time governance policies may further incentivize the explicit declaration of API dependencies by making it impossible to call an API without first declaring it as a dependency along with the proper API keys. These types of policies can be implemented with minor changes to the application run-time in the cloud so that it loads the API credentials from the dependency declaration provided by the application developer.

In addition to API discovery, the ADP also provides a user interface for API authors to select their own APIs and deprecate or retire them.
Deprecated APIs will be removed from the API search results of the portal, and application developers will no longer be able to subscribe to them. However, existing subscriptions and API keys will continue to work until the API is eventually retired. The deprecation serves as a courtesy notice for application developers who have built applications using the API to migrate their code to a newer, active version of the API. Once the API is retired, any applications that still have not been migrated to its latest version will cease to operate.

\subsubsection{API Gateway}
Systems such as Synapse~\cite{synapse} implement run-time governance of web services by means of an API ``proxy'' or gateway. The EAGER API Gateway plays this role: it intercepts API calls and validates the API keys contained within them. EAGER intercepts requests by blocking direct access to the APIs in the application run-time (app servers) and publishing the API Gateway address as the API endpoint in the ADP. We do so via firewall rules or router configuration that prevent the cloud app servers from receiving any API traffic from a source other than the API Gateway. Once the API Gateway validates an API call, it routes the message to the application server in the cloud platform that hosts the API. The API Gateway can be implemented via one or more (load-balanced) servers. In addition to API key validation, the API Gateway performs other functions such as monitoring, throttling (rate limiting), SLA enforcement, and run-time policy validation.
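The gateway's dispatch logic reduces to a simple pattern: validate the caller's API key against the records the ADP stored in the Metadata Manager, then forward the call to the backend app server. The sketch below is a minimal, hypothetical illustration; the in-memory tables, function name, and status-code convention are our own assumptions, not EAGER's implementation.

```python
# Hypothetical stand-ins for state EAGER keeps in the Metadata Manager:
# API keys issued by the ADP, and the routing table from published API
# endpoints to the app servers that actually host them.
VALID_KEYS = {"key-123": "[email protected]"}
BACKENDS = {"/geo": "http://appserver-1/geo"}

def handle_request(path, api_key):
    """Validate the API key, then route the call to the hosting app server."""
    if api_key not in VALID_KEYS:
        return 401, "invalid API key"        # caller never subscribed via the ADP
    if path not in BACKENDS:
        return 404, "unknown API"
    # Throttling, SLA enforcement, and run-time policy checks would run
    # here before the request is forwarded.
    return 200, "forwarded to " + BACKENDS[path]
```

Because app servers accept traffic only from the gateway, a request that fails either check never reaches the API implementation.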
\documentclass[fleqn,10pt,twoside]{gcb15submission} \usepackage{url}\urlstyle{same} \usepackage{booktabs} \usepackage{colortbl, xcolor} \usepackage{epstopdf} % figures \usepackage{tikz} \usepackage{ifthen} \usepackage{xxcolor} \usetikzlibrary{arrows} \usetikzlibrary{topaths} \usetikzlibrary{decorations.pathreplacing} \title{signalHsmm - a novel semi-Markov model of eukaryotic signal peptides} \author[1]{Micha\l{} Burdukiewicz} \author[2]{Piotr Sobczyk} \author[1]{Pawe\l{} B\l{}a\.{z}ej} \author[1]{Pawe\l{} Mackiewicz} \affil[1]{University of Wroc\l{}aw, Department of Genomics, Poland; [email protected], [email protected], [email protected]} \affil[2]{Wroc\l{}aw University of Technology, Department of Mathematics, Poland; [email protected]} \keywords{eukaryote; hidden semi-Markov models; n-gram; signal peptide prediction} \begin{abstract} Signal peptides play an essential role in the targeting of proteins to the endomembrane system and their export outside the cell. The proteins equipped with such peptides are of great importance in metabolism, maintenance of tissue structure, immune response and regulation of other organismal functions. These peptides may be recognized with adequate accuracy using learning systems. However, these are usually `black-box' models, which make it impossible to trace decision rules and identify the parameters responsible for the predictions. That is why we designed a new, more universal probabilistic model for eukaryotic signal peptides, which includes knowledge about their organization, amino acid composition and variability. The proposed approach, called signalHsmm, is based on hidden semi-Markov models (HSMMs) and uses intrinsic knowledge about signal peptides. A big advantage of the algorithm is its extensibility. By using n-grams (k-mers), we showed that the general model can yield not only better results than other software but also more information about the features of signal peptides. 
Our model showed the largest AUC = 0.97 in comparison to other signal peptide predictors even though it was trained on very small data sets. Moreover, it proved to be very stable regardless of the type of learning data set. Therefore, our model does not need to be permanently retrained as sequence databases continuously expand. It should be emphasized that our model is able to recognize signal peptides from the medically significant malaria parasites \textit{Plasmodium} and their relatives more accurately (AUC = 0.93) than popular programs (0.79). The web server of signalHsmm is available at \url{http://smorfland.uni.wroc.pl/signalhsmm}. \end{abstract} \begin{document} \flushbottom \maketitle \thispagestyle{empty} \section*{Introduction} \subsection*{Roles and features of signal peptides} Proteins of eukaryotes are encoded in nuclear genomes and are synthesized in ribosomes located in the cytosol or bound to the endoplasmic reticulum. After translation, proteins are targeted to specific subcellular compartments or exported outside the cell. The proper localization of proteins is essential for them to perform their desired functions. Information about the protein destination is included within the protein itself in short stretches of amino acid residues called targeting or sorting signals. One kind of these signals is the signal peptide, located at the N-terminus of a protein. Signal peptides are responsible for targeting proteins via the Sec61 translocation channel~\citep{2007rapoportprotein} to the endomembrane system, which includes the endoplasmic reticulum and Golgi apparatus. Such proteins can stay inside these compartments, be inserted into cellular membranes or be exported outside the cell. 
Proteins equipped with signal peptides play a crucial role in metabolism ($\beta$-galactosidase, pepsins)~\citep{1991hofmannmutations}, maintenance of tissue structure (collagen)~\citep{2001chanaberrant}, immune response (interferons, interleukins)~\citep{2005zhangalteration} and regulation of other organismal functions (prolactin, glucagon)~\citep{2010huangrole}. Moreover, passing proteins through the endomembrane system is important for their correct folding and posttranslational modifications such as glycosylation and phosphorylation. Despite the low sequence homology between signal peptides~\citep{1999ladungaphysean}, a general architecture has been proposed~\citep{1994izardsignal, 2013vossmechanism} (Fig.~\ref{fig:sparch}). It is assumed that signal peptides start with a positively charged sequence of amino acid residues, called the n-region, with a length of about 5--8 residues. It probably enforces a proper topology on the polypeptide during its translocation through the membrane, based on the positive-inside rule~\citep{1988vonheijnetopogenic}. The first region is followed by a stretch of hydrophobic amino acids (h-region) with a length of about 8--12 residues. It constitutes the core region of the signal peptide and usually forms an $\alpha$-helix. The third part of a signal peptide is the polar but uncharged c-region. It is usually 6 residues long and ends with a cleavage site, at which a signal peptidase cleaves the signal peptide during or after translocation of the protein into the endoplasmic reticulum~\citep{2002paetzelsignal}. The cleavage site is characterized by a variable amino acid composition. It typically contains small and neutral residues at positions $-3$ and $-1$~\citep{1994palzkillselection}. This site is, however, absent from some membrane proteins in which the first transmembrane domain acts both as a signal peptide and a signal anchor~\citep{1988szczesnaskorupapositive}. 
The amino acid composition and the length of these regions vary between signal peptides, which influences the efficiency of protein secretion~\citep{2006hegdethe}. \begin{figure}[ht]\centering \includegraphics[width=0.55\textwidth]{figures/SP.eps} \caption{The organization of a typical signal peptide. The regions are not drawn to scale.} \label{fig:sparch} \end{figure} Some data indicate that signal peptides may be universal. It was found, for example, that even bacterial signal peptides correctly targeted transgenic proteins to plant~\citep{2009moellera} or mammalian secretory systems~\citep{2014naganoestablishment}. On the other hand, signal peptides show great variation, and the description presented above (Fig.~\ref{fig:sparch}) refers to the most `typical' signal peptides. There are exceptionally long signal peptides, which fulfill more sophisticated roles~\citep{2009hissarchitecture}. A fragment of the signal peptide from preprolactin takes part in the regulation of prolactin secretion, whereas signal peptides of MHC class I inhibit the activity of NK cells. Signal peptides of viral origin are involved in immune evasion or the viral life cycle~\citep{2000kappposttargeting}. The signal peptide from midkine, which contributes to tumor progression, contains epitopes recognized by CD4+ T cells~\citep{2013kerzerhothe}. The functional significance of these targeting signals means that the prediction of signal peptide-containing proteins is also an important step in drug development~\citep{2005zhangalteration, 2012netoadeimproving, 2010moellerwetmilling}. \subsection*{Software predicting signal peptides} Although many experimental methods determining the subcellular localization of proteins have been devised, they are time consuming and laborious. Therefore, signal peptides became the subject of many computational programs for their prediction. 
Much of this software incorporates `black-box' models, such as neural networks~\citep{2011petersensignalp}, support vector machines~\citep{2014zhangprediction}, Bayesian networks~\citep{2012zhengsignalbnf} or k-nearest neighbours~\citep{2007shensignall}. However, these models do not provide direct biological information about the organization of signal peptides and are not able to properly predict atypical signal peptides. Although there are programs that do not share the innate flaws of `black-box' models, they also demand improvement. Some of them are based on position matrices or their variants~\citep{2014zhangprediction, 2004hillerpredisi}. Others (Phobius, Philius and SignalP 3.0) use hidden Markov models (HMMs)~\citep{2004klla, 2008reynoldstransmembrane, 2004bendtsenimproved}, which try to reflect the structure of signal peptide regions within their limited probabilistic frameworks. These HMMs, however, imply a geometric distribution for the duration of region lengths. We studied the distributions for the regions from the first work utilizing HMMs in the prediction of signal peptides~\citep{1998nielsenprediction} and found that the length distribution of every region is not geometric (Fig.~\ref{fig:reglen}). Moreover, the commonly used rigid scheme of signal peptide organization (Fig.~\ref{fig:sparch}) does not describe extremely long or short peptides. Theoretically, HMMs that describe atypical signal peptides could be developed to also consider the unusual structures, but such probabilistic frameworks have not yet been implemented. \begin{figure}[ht]\centering \includegraphics[width=0.95\textwidth]{figures/reglen.eps} \caption{Distribution of lengths of signal peptides (A) and their regions (B) expressed in the number of amino acid residues. 
The data was extracted from 2~589 signal peptide sequences derived from the UniProt database (see \textbf{Data selection} in \textbf{Methods}).} \label{fig:reglen} \end{figure} All programs used in signal peptide recognition are trained on real protein sequences. Therefore, they succeed in the recognition of peptides similar to those in the learning set but fail in the case of artificial signal peptides. Such peptides are designed to increase the effectiveness of protein secretion~\citep{2010futatsumorisugaisignal}. They are especially important in industrial applications to increase the yield of proteins. Therefore, only explicit knowledge about the organization of signal peptides allows creating sequences that will be the most efficient in the export of proteins~\citep{2013ngengineering}. Signal peptides also have an important application in gene therapy. Mimicking the natural mechanism of protein export, artificial signal peptides with tumor epitopes increase the antitumor immune response~\citep{2003heenhanced}. Such epitopes must be properly inserted into a signal peptide without decreasing its secretion properties through disruption of the regional structure. Instead of time-consuming and expensive laboratory experiments, it would be very useful to survey \textit{in silico} many artificial peptides to select the ones that would fulfill the designed role. The majority of signal peptide predicting software uses the orthogonal encoding of amino acids, in which a vector of 20 digits represents every amino acid. This method of encoding, however, does not take into account relationships between amino acids and differences in their physicochemical properties. This is a disadvantage of such signal peptide models because their regions are in fact characterized by specific features of amino acid residues and not by the simple occurrence of specific amino acids. 
In addition, such sparse encoding requires larger data sets, which hinders their management and analysis~\citep{2002linamino}. Therefore, we elaborated a new approach based on hidden semi-Markov models using a grouping of amino acids into physicochemical groups characteristic of signal peptides. The new method proved better in comparison to the current software. \section*{Methods} \subsection*{Overview} Since the functionality of signal peptides depends on the physicochemical properties of the residues in a given region, we clustered amino acids into several groups based on their characteristics. The pre-processed sequences were further analyzed by a heuristic algorithm, which determines the borders between the three characteristic signal peptide regions~\citep{1998nielsenprediction}. We refined some region recognition criteria to attune the algorithm to less typical signal peptides. Next, two models were trained to recognize proteins with and without a signal peptide. The first one was a hidden semi-Markov model, in which each of the three signal peptide regions was represented by a different hidden state. An additional fourth hidden state represented the mature protein. Each state was described by its frequencies of amino acid groups. The distribution of hidden state durations, i.e.\ the number of amino acids, was based on the empirical distribution of region lengths in the training set. Furthermore, the hidden semi-Markov model was enriched with n-grams representing signal peptide cleavage sites. The second model was a simple probabilistic approach in which no association between amino acids was assumed, and the probability of amino acid group occurrence was determined by their frequencies in mature proteins. \subsection*{Data selection} Eukaryotic protein sequences and their annotations were downloaded from the UniProt database release 2015\_06 and prepared according to the literature on the subject. 
The positive set contained 2~589 sequences with an experimentally confirmed signal peptide, including its start and cleavage site information. Sequences with more than one cleavage site were excluded from the final data set. The negative set comprised 152~272 sequences without any signal peptide annotation. Protein sequences with the ambiguous symbols X, J, Z, B and selenocysteine (U) were removed. \subsection*{Clustering of amino acids} signalHsmm is not the first software to use amino acid encoding in signal peptide prediction. BLOMAP~\citep{maetschke2005blomap} employed a similar strategy but considered only substitution matrices. We applied a different approach: we clustered amino acids using four properties relevant to the architecture of signal peptides: their hydrophobicity, frequency in $\alpha$-helices, polarity and size. High hydrophobicity is a determinant of the h-region, whose $\alpha$-helical secondary structure is probably induced by the positively charged n-region. High polarity as well as small size are important features of residues in the cleavage site~\citep{1994palzkillselection}. 
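The clustering step above can be sketched as follows. This is a toy centroid-linkage agglomeration over a handful of residues, not the Ward's-method pipeline actually used by signalHsmm; the property values are illustrative placeholders, not the real AAIndex scales.

```python
import math

# Toy property vectors (hydrophobicity, helix frequency, polarity, size).
# Illustrative values only -- NOT the AAIndex scales used in the paper.
PROPS = {
    "A": (1.8, 1.4, 0.0, 0.31), "L": (3.8, 1.2, 0.0, 0.40),
    "V": (4.2, 1.1, 0.0, 0.35), "K": (-3.9, 1.2, 1.0, 0.46),
    "R": (-4.5, 1.0, 1.0, 0.53), "S": (-0.8, 0.8, 0.3, 0.20),
    "T": (-0.7, 0.8, 0.3, 0.26),
}

def euclid(p, q):
    """Euclidean distance between two property vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def centroid(cluster):
    """Mean property vector of a cluster of residues."""
    vecs = [PROPS[a] for a in cluster]
    return tuple(sum(c) / len(vecs) for c in zip(*vecs))

def agglomerate(k):
    """Repeatedly merge the two closest clusters (by centroid distance)
    until k clusters remain -- the 'cut into four groups' step."""
    clusters = [[a] for a in PROPS]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: euclid(centroid(clusters[ij[0]]),
                                  centroid(clusters[ij[1]])),
        )
        clusters[i] = clusters[i] + clusters.pop(j)
    return [sorted(c) for c in clusters]
```

With these toy values, hydrophobic residues (L, V), basic residues (K, R) and small polar residues (S, T) end up in separate groups, mirroring the region-specific character of the real encodings.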
\begin{table}[ht] \small \centering \caption{Properties used in amino acid clustering.} \begin{tabular}{ll} \toprule Property name & Amino acid scale and its reference \\ \midrule Size & Size~\citep{dawson1972size} \\ \rowcolor[gray]{0.85}Size & Molecular weight~\citep{fasman1976handbook}\\ Size & Residue volume~\citep{1973goldsackcontribution} \\ \rowcolor[gray]{0.85}Size & Bulkiness~\citep{1968zimmermanthe} \\ Hydrophobicity & Normalized hydrophobicity scales for $\alpha$-proteins~\citep{1992cidhydrophobicity} \\ \rowcolor[gray]{0.85}Hydrophobicity & Consensus normalized hydrophobicity scale~\citep{1984eisenbergthreedimensional} \\ Hydrophobicity & Hydropathy index~\citep{1982kytea} \\ \rowcolor[gray]{0.85}Hydrophobicity & Surrounding hydrophobicity in $\alpha$-helix~\citep{1980ponnuswamyhydrophobic} \\ Polarity & Polarity~\citep{1974granthamamino} \\ \rowcolor[gray]{0.85}Polarity & Mean polarity~\citep{1988radzickainfluences} \\ Frequency in $\alpha$-helices & Signal sequence helical potential~\citep{1982argosstructural} \\ \rowcolor[gray]{0.85}Frequency in $\alpha$-helices & Normalized frequency of N-terminal helix~\citep{chou1978prediction} \\ Frequency in $\alpha$-helices & Relative frequency in $\alpha$-helix~\citep{1990prabhakaranthe} \\ \bottomrule \end{tabular} \label{tab:aaprop} \end{table} We considered in total 13 amino acid scales from the AAIndex database~\citep{2008kawashimaaaindex} (Tab.~\ref{tab:aaprop}). Selecting one scale per property yields 96 possible combinations, and we examined all of them. Based on these, we created 96 possible clusterings of amino acids using the Euclidean distance and Ward's method. Next, we cut the clusterings to create four groups of amino acids. In 31\% of the cases, the resulting groupings were identical. To compare the usefulness of the encodings, we performed a 5-fold cross-validation, training a new instance of signalHsmm on every encoding. 
We created balanced data sets by subsampling proteins without a signal peptide to equal the number of proteins with a signal peptide. The cross-validation was repeated 60 times to ensure that every protein without a signal peptide was included in the learning set with probability higher than 0.5. The very small variance of the performance measures (see, for example, Tab.~\ref{tab:perfmeas}) confirmed the credibility of the cross-validation. \subsection*{Hidden semi-Markov model} The hidden semi-Markov model (HSMM) is an extension of the hidden Markov model (HMM). Let us first briefly describe the idea of the HMM. Suppose we have a sequence of observations, e.g.\ amino acids, and we are interested in understanding the underlying cause of their occurrence. An HMM aims to answer that question assuming a specific and yet flexible structure of the problem. An HMM consists of two stochastic processes. The first is a discrete Markov chain $\{X_t\}_{t=1}^T$ on the set of so-called hidden states $\{S_1, \dots, S_n\}$. They are ``the cause'' of the observations. At every step $t$, the hidden state may change according to a transition matrix $A = (a_{i,j})_{i,j=1}^n$, where $a_{i,j} = \mathcal{P}(X_{t+1} = S_j | X_t = S_i)$. In our application, the hidden states are the signal peptide regions. The second process $\{E_t\}_{t=1}^T$ is an observation process defined on the set of possible observations $\{O_1, \dots, O_m\}$. The observations are assumed to occur independently, conditionally on the hidden state that emits them. Their distributions are given by a matrix $B$, $b_{i,k} = \mathcal{P}(E_t = O_k | X_t = S_i)$. In our case, the observations are (encoded) amino acids. The main goal in signalHsmm is to find the most probable region boundaries for a given peptide. This is achieved with the Viterbi algorithm. For a good reference on HMMs see~\citep{1989rabinera}. In the regular HMM, the hidden state duration, i.e.\ the number of observations emitted by the hidden state, has a geometric distribution. 
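The geometric form follows directly from the self-transition probability: remaining in state $S_i$ for exactly $d$ steps requires $d-1$ self-transitions followed by a single exit, so

```latex
\mathcal{P}(\text{duration in state } S_i = d) = a_{i,i}^{\,d-1}\,(1 - a_{i,i}),
\qquad d = 1, 2, \dots
```

which decays monotonically in $d$ and therefore cannot match the peaked empirical region-length distributions.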
\cite{Durbin98biologicalsequence} showed how to extend it to different distributions without a significant increase in computational complexity. Similar ideas were used for signal peptide recognition, for example by~\cite{2004klla}. However, it is still not flexible enough, because the empirical region length distributions (see Fig.~\ref{fig:reglen}) are difficult to capture in this way. % figure illustrating the idea of an HMM \begin{figure}[h] \centering \tikzstyle{block} = [draw,shape=circle, top color=green!50!white!70, bottom color=green!50!white!70 ,minimum size=4em] \tikzstyle{blockComp} = [draw,shape=circle, top color=red!50!white!70, bottom color=red!50!white!70 ,minimum size=2.5em] % diameter of semicircle used to indicate that two lines are not connected \def\radius{.7mm} \tikzstyle{branch}=[shape=rectangle, top color=white, bottom color=white ,minimum size=3pt,inner sep=0pt] \tikzstyle{branch2}=[shape=rectangle, top color=white, bottom color=white ,minimum size=3pt,inner sep=0pt] \def\n{11} \tikzstyle{line} = [draw=black, color=black!70!white!50, line width=1.5mm, -latex'] \tikzstyle{line2} = [draw=black, color=red!30!white!70, line width=1.5mm, -latex'] \tikzstyle{frame}=[text=black,above, bottom color=white, top color=blue!50!black!70 ] \tikzstyle{frameComp}=[text=black,below, bottom color=red!50!white!70, top color=red!50!white!70 ] \tikzstyle{line3} = [color=green!50!black!70, line width=1.5mm] \def\names{{"$O_{1,1}$","...","$O_{1,d_1}$", "...","$O_{k, 1}$", "...", "$O_{k, d_k}$"}}% \begin{tikzpicture}[>=latex'] % curly braces \draw [decorate, decoration={brace,amplitude=10pt},line3,xshift=0pt,yshift=0pt] (2.5,1) -- (10.5,1) node [branch,midway,yshift=0.8cm,color=black] {\textbf{Hidden states}}; % Draw blocks, inputs and outputs \node[block, text 
width=3em] at (1+2,0) (block1) {\footnotesize 1st hidden state}; \node[blockComp] at (-.5+2,-2) (komp1) {\footnotesize \pgfmathparse{\names[0]}\pgfmathresult}; \node[blockComp] at (1+2,-2) (komp2) {\footnotesize \pgfmathparse{\names[1]}\pgfmathresult}; \node[blockComp] at (2.5+2,-2) (komp3) {\footnotesize \pgfmathparse{\names[2]}\pgfmathresult}; \draw[line,->] (block1) -- (komp1); \draw[line,->] (block1) -- (komp2); \draw[line,->] (block1) -- (komp3); \draw[line3,->] (block1.east) -- +(2,0) node [branch,midway,yshift=0.3cm,xshift=-0.2cm,color=black] {\footnotesize \textbf{transition}}; \node[block, bottom color=green!50!white!80,] at (4.5+2,0) (block2) {...}; \draw[line3,->] (block2.east) -- +(2,0) node [branch,midway,yshift=0.3cm,xshift=-0.2cm,color=black] {\footnotesize \textbf{transition}}; \node[blockComp] at (4.5+2,-2) (komp4) {\footnotesize \pgfmathparse{\names[3]}\pgfmathresult}; \draw[line,->] (block2) -- (komp4) ; \node[block,text width=3em, bottom color=green!50!white!70] at (8+2,0) (block3) {\footnotesize $k^{th}$ hidden state}; \node[blockComp] at (6.5+2,-2) (komp5) {\footnotesize \pgfmathparse{\names[4]}\pgfmathresult}; \node[blockComp] at (8+2,-2) (komp6) {\footnotesize \pgfmathparse{\names[5]}\pgfmathresult}; \node[blockComp] at (9.5+2,-2) (komp7) {\footnotesize \pgfmathparse{\names[6]}\pgfmathresult}; \draw[line,->] (block3) -- (komp5); \draw[line,->] (block3) -- (komp6); \draw[line,->] (block3) -- (komp7); % \draw[<->, color=red!30!black!50, line width=1.5mm] (komp1) |- +(5,-1.5) node[line2, yshift=-0.5cm] {\textbf{Observations}} -| (komp7); \draw [color=red!50!black!70, line width=1.5mm, decorate,decoration={brace,amplitude=10pt,mirror,raise=10pt},yshift=-10pt] (1,-1.8) -- (5,-1.8) node [branch2, midway, yshift=-1cm,color=black] {\textbf{Duration of length $d_1$}}; \draw [color=red!50!black!70, line width=1.5mm, decorate,decoration={brace,amplitude=10pt,mirror,raise=10pt},yshift=-10pt] (8,-1.8) -- (12,-1.8) node [branch2, midway, 
yshift=-1cm,color=black] {\textbf{Duration of length $d_k$}}; \draw [color=red!50!black!70, line width=1.5mm, decorate,decoration={brace,amplitude=15pt,mirror,raise=10pt},yshift=-10pt] (1,-2.6) -- (12,-2.6) node [branch2, midway, yshift=-1.3cm,color=black] {\textbf{Observations, $\sum_{i=1}^k d_i = T$}}; \end{tikzpicture} \caption{General scheme of the hidden semi-Markov model.} \label{fig:hsmm} \end{figure} The model we used is the hidden semi-Markov model (HSMM)~\citep{Yu2010215}. It extends the HMM by allowing an arbitrary hidden state duration distribution (Fig.~\ref{fig:hsmm}). In addition to the matrices $A$ and $B$, the model is given by the probabilities of the duration length in the hidden states: $$\mathcal{P}(\text{duration in state} = d \,|\, \text{state is } S_i), \qquad i = 1, \dots, n, \quad d = 1, \dots, D,$$ where $D$ is the maximum allowed duration. As our data sets are of reasonable size and $D$ is small (around 30), the computational effort is not much higher than in the regular HMM. Our model has a very specific structure. The hidden states represent the signal peptide regions. Almost all entries in the transition matrix $A$ are zeros because the regions are sequential. The possible transitions are depicted as arrows in Fig.~\ref{fig:ngramext}. The probabilities of observations for the hidden states and the hidden state durations were estimated from the training data. The advantage of the HSMM model is not only better performance but also its straightforwardness: Fig.~\ref{fig:ngramext} is easy to interpret for a researcher without any mathematical background. %model structure \begin{figure}[ht]\centering \includegraphics[width=0.51\textwidth]{figures/HSMMs.eps} \caption{The diagram of the simple (A) and the extended version of signalHsmm with the n-gram cleavage site model (B).} \label{fig:ngramext} \end{figure} \subsection*{Extension with n-grams} Hidden semi-Markov models may be flexibly enhanced by adding additional hidden states. 
To improve our model, we added a few supplementary states representing specific motifs that may occur in the proximity of the cleavage site. The structure of cleavage sites, which are more conserved than other parts of the signal peptide~\citep{2004hillerpredisi}, may be reflected by n-grams (k-mers), short vectors of $n$ characters derived from input sequences. Using the biogram software~\citep{biogramPackage}, we extracted n-grams from the cleavage sites of signal peptides. The analyzed sequences were already encoded using the amino acid classification providing the best sensitivity of the general model (Tab.~\ref{tab:best}). Selected n-grams representing less common cleavage site motifs were included in the HSMM model as alternative paths at the end of the c-region (Fig.~\ref{fig:ngramext}B). \section*{Results and discussion} \subsection*{Cross-validation} \begin{figure}[ht]\centering \includegraphics[width=0.95\textwidth, height=7cm]{figures/cvres.eps} \caption{Sensitivity and specificity of amino acid encodings after cross-validation. 
1 indicates the encoding providing the best sensitivity (AUC = 0.9683, MCC = 0.8677), whereas 2 indicates the encoding providing the best specificity (AUC = 0.9338, MCC = 0.6474).} \label{fig:cvres} \end{figure} \begin{table}[ht] \small \begin{minipage}{.5\linewidth} \centering \caption{The best sensitivity (final) encoding.} \begin{tabular}{l} \toprule Groups \\ \midrule D, E, H, K, N, Q, R \\ \rowcolor[gray]{0.85}G, P, S, T, Y \\ F, I, L, M, V, W \\ \rowcolor[gray]{0.85}A, C \\ \bottomrule \end{tabular} \label{tab:best} \end{minipage} \begin{minipage}{.5\linewidth} \centering \caption{The best specificity encoding.} \begin{tabular}{l} \toprule Groups \\ \midrule A, E, K, Q, R \\ \rowcolor[gray]{0.85}D, G, N, P, S, T \\ C, H, I, L, M, V \\ \rowcolor[gray]{0.85}F, W, Y \\ \bottomrule \end{tabular} \label{tab:worst} \end{minipage} \end{table} \begin{figure}[ht]\centering \includegraphics[width=0.95\textwidth]{figures/enccomp.eps} \caption{Comparison of the amino acid encodings with the best sensitivity and the best specificity in different regions of the signal peptide and in mature proteins according to: A) the normalized values of the properties of particular amino acids (points) and B) the frequencies of amino acids in the given region.} \label{fig:enccomp} \end{figure} We used four performance measures to evaluate the results of cross-validation: specificity, sensitivity, the Matthews correlation coefficient ($\phi$ coefficient) and the area under the curve (AUC). All encodings provided very good AUC (0.93--0.97) and specificity (0.92--0.96). The classification of amino acids had the biggest impact on sensitivity, which ranged from 0.66 to 0.94. The final signalHsmm algorithm uses the encoding that yields the highest sensitivity and Matthews correlation coefficient as well as the second best AUC (Fig.~\ref{fig:cvres}). \begin{table}[ht] \small \centering \caption{Performance measures for the best encoding. 
60 repetitions of cross-validation.} \begin{tabular}{lrr} \toprule Measure & Mean & SD \\ \midrule AUC & 0.9683 & 0.0024 \\ \rowcolor[gray]{0.85}MCC & 0.8677 & 0.0049 \\ Sensitivity & 0.9406 & 0.0008 \\ \rowcolor[gray]{0.85}Specificity & 0.9269 & 0.0050 \\ \bottomrule \end{tabular} \label{tab:perfmeas} \end{table} \subsection*{Comparison of encodings} We examined in detail the encodings with the best sensitivity and the best specificity (Tab.~\ref{tab:best}, Tab.~\ref{tab:worst} and Fig.~\ref{fig:enccomp}). In both cases, group 1 tends to contain average-sized polar amino acids. In the best sensitivity encoding, all charged amino acids, both acidic and basic (including the weakly basic histidine), belong to this group. These amino acids are nearly absent from the h-region and provide a very good distinction between the regions of the signal peptide (Fig.~\ref{fig:enccomp}). In the best specificity encoding, where the polar and charged character of group 1 is not so explicit, the difference in its distribution between the regions is also less visible. The amino acids belonging to group 2 have a quite low probability of occurrence in $\alpha$-helices and are very diverse. This group includes the polar but uncharged serine and threonine as well as the hydrophobic tyrosine, and the aliphatic glycine and proline. In the best specificity encoding, the polar character of this group is emphasized by the addition of asparagine and aspartic acid. Despite these differences, the distribution of group 2 seems to be comparable in the two encodings. Both groupings have a strongly non-polar and aliphatic group 3 containing isoleucine, leucine, methionine and valine. The hydrophobic character of this group in the best sensitivity encoding is made even more pronounced by the presence of tryptophan and phenylalanine. In contrast, the more polar histidine belongs to group 3 in the best specificity encoding. 
Because of its hydrophobic character, this group dominates in the h-region in both amino acid classifications. The fourth group is the most diverse in the two encodings. In the best specificity encoding, this group comprises the aromatic amino acids phenylalanine, tryptophan and tyrosine. In contrast, group 4 in the best sensitivity encoding contains only alanine and cysteine, which are both rather small amino acids and tend to appear in $\alpha$-helices. This very distinctive composition seems to be typical of the c-region of the signal peptide. The encoding of amino acids plays a crucial role in the recognition of signal peptides but does not affect the identification of proteins without signal peptides (compare the changes in specificity and sensitivity in Fig.~\ref{fig:cvres}). The mature protein state in the HSMM model tends to have a more uniform distribution of residues than the signal peptide regions, which makes it more resistant to changes in the amino acid groupings. \subsection*{Benchmark tests} To provide a fair comparison, we trained our model on 2311 signal peptide-containing sequences deposited in UniProt until 2010 (an iteration of signalHsmm called signalHsmm-2010). This data set should correspond to the set used to train SignalP 4.1. Interestingly, our algorithm performed very well even when it was trained on a very small set including only 336 sequences collected until 1987, just after the first method predicting signal peptides was published~\citep{1986vonheijnea}. This signalHsmm-1987 was able to predict signal peptides very accurately, even better than predictors trained on richer data sets. It indicates that signalHsmm is very stable and able to recover the structure of signal peptides from even very small data sets (Tab.~\ref{tab:bench2010}). 
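The sensitivity, specificity and MCC values reported in the benchmark tables are standard functions of the confusion-matrix counts. A minimal sketch (the function name and example counts are illustrative, not taken from the paper):

```python
import math

def confusion_measures(tp, fp, tn, fn):
    """Sensitivity, specificity and the Matthews correlation coefficient
    computed from confusion-matrix counts (true/false positives/negatives)."""
    sens = tp / (tp + fn)                      # fraction of positives found
    spec = tn / (tn + fp)                      # fraction of negatives found
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sens, spec, mcc
```

A perfect classifier gives (1.0, 1.0, 1.0); MCC penalizes imbalance between the two error types, which is why it is reported alongside AUC.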
\begin{table}[ht] \small \centering \caption{Comparison of the area under the curve, sensitivity, specificity and Matthews correlation coefficient for different classifiers.} \begin{tabular}{lrrrr} \toprule Software name & AUC & Sensitivity & Specificity & MCC \\ \midrule SignalP 4.1 (no tm) \citep{2011petersensignalp} & 0.9416 & \textbf{0.9720} & 0.9112 & 0.8848 \\ \rowcolor[gray]{0.85}SignalP 4.1 (tm) \citep{2011petersensignalp} & 0.9673 & 0.9579 & \textbf{0.9766} & \textbf{0.9347} \\ PrediSi \citep{2004hillerpredisi} & 0.8949 & 0.9065 & 0.8832 & 0.7899 \\ \rowcolor[gray]{0.85}Phobius \citep{2004klla} & 0.9509 & 0.9673 & 0.9346 & 0.9024 \\ Philius \citep{2008reynoldstransmembrane} & 0.9369 & 0.9533 & 0.9206 & 0.8743 \\ \rowcolor[gray]{0.85}signalHsmm-2010 & 0.9526 & 0.9533 & 0.8832 & 0.8385 \\ signalHsmm-1987 & 0.9562 & 0.9626 & 0.8972 & 0.8617 \\ \rowcolor[gray]{0.85}signalHsmm-2010 with k-mers & \textbf{0.9679} & 0.9673 & 0.9112 & 0.8799 \\ \bottomrule \end{tabular} \label{tab:bench2010} \end{table} \begin{table}[ht] \centering \caption{Comparison of the area under the curve, sensitivity, specificity and Matthews correlation coefficient for different classifiers considering only proteins belonging to \textit{Plasmodiidae}.} \begin{tabular}{lrrrr} \toprule Software name & AUC & Sensitivity & Specificity & MCC \\ \midrule SignalP 4.1 (no tm) \citep{2011petersensignalp} & 0.8356 & 0.7745 & 0.8966 & 0.6420 \\ \rowcolor[gray]{0.85}SignalP 4.1 (tm) \citep{2011petersensignalp} & 0.7928 & 0.6471 & 0.9385 & 0.6185 \\ PrediSi \citep{2004hillerpredisi} & 0.6597 & 0.3725 & 0.9469 & 0.4028 \\ \rowcolor[gray]{0.85}Phobius \citep{2004klla} & 0.7963 & 0.6765 & 0.9162 & 0.5991 \\ Philius \citep{2008reynoldstransmembrane} & 0.7753 & 0.6176 & 0.9330 & 0.5841 \\ \rowcolor[gray]{0.85}signalHsmm-2010 & \textbf{0.9340} & \textbf{1.0000} & 0.8436 & \textbf{0.7380} \\ signalHsmm-1987 & 0.9326 & 0.9510 & \textbf{0.8631} & 0.7266 \\ \rowcolor[gray]{0.85}signalHsmm-2010 with k-mers & 0.9334 & 0.9902 & 0.7989 & 0.6767 \\ 
\bottomrule
\end{tabular}
\label{tab:bench2010plas}
\end{table}

To check the universality of our probabilistic model, we also validated it on signal peptides belonging to specific taxonomic groups. As an example, we chose the \textit{Plasmodiidae} family, which includes the malaria parasites, because of their medical significance. The testing data set extracted from the UniProt database contained 102 sequences with a signal peptide and 358 sequences without one. Interestingly, signalHsmm showed much better performance than the other algorithms (Tab.~\ref{tab:bench2010plas}). Since our algorithm uses more general decision rules, it is also able to recognize atypical signal peptides.

\subsection*{Conclusions}

The architecture of existing signal peptide prediction software is usually opaque, which makes it impossible to extract decision rules or to determine the parameters responsible for the predictions. Considering the biological context of the problem, we see a need for a transparent model of signal peptides. signalHsmm is a first step in this direction. It shows good performance while keeping the predictive algorithm interpretable and extensible. The model not only confirmed the high hydrophobicity of the h-region and the polarity of the n-region, but also found that alanine is one of the most typical amino acids in the c-region. The performance of signalHsmm indicates that the properties of signal peptides do not depend on their exact sequence but on the physicochemical features of their amino acids. Its flexibility and efficient information recovery make our model unique among similar software. signalHsmm can also be adjusted to properly model very specific signal peptides belonging to taxonomic groups that are poorly represented in databases. Moreover, our method can effectively extract information from very small data sets, which in the future may lead to new predictors specialized in the recognition of atypical signaling sequences.
\subsection*{Availability and implementation}

The signalHsmm prediction web server is available at: \url{http://smorfland.uni.wroc.pl/signalhsmm}. signalHsmm is implemented as an R package available at: \url{http://cran.r-project.org/web/packages/signalHsmm}. The stand-alone version offers prediction as well as tools to build, train and test novel signal peptide models.

\bibliography{lokalizom}

\end{document}
\documentclass{UoYCSproject} \addbibresource{REFERENCES.bib} \usepackage{tikz} \usetikzlibrary{automata, positioning, arrows} \usepackage{float} \usepackage{graphicx} \usepackage{listings} \tikzset{ ->, % makes the edges directed node distance=3cm, % specifies the minimum distance between two nodes. Change if necessary. every state/.style={thick, fill=gray!10, minimum size=0pt}, % sets the properties for each ’state’ node initial text=$ $, % sets the text that appears on the start arrow } \newenvironment{monospace}{\ttfamily\small}{\par} \BEng \supervisor{Dr. Christopher Crispin-Bailey} \dedication{ To Katie, who supported me throughout this project and the last two years of my degree, and my parents, who made this all possible and helped me grow into the person I am today. } \acknowledgements{ I would like to thank my supervisor, Dr. Chris Crispin-Bailey, for his guidance throughout this project. } \begin{document} \title{Generation of Hardware Accelerators for an FPGA System} \author{Jay Valentine} \maketitle \listoffigures \listoftables \begin{summary} \section{Motivation} As transistor densities in modern microprocessors increase, a breakdown in Dennard scaling \cite{dennard} has been observed. This has led to the phenomenon of dark silicon, which describes transistors which, due to power or temperature limitations, cannot be used to their full potential \cite{darksilicon}. This has severe consequences for the future of microprocessor architecture, especially in domains where energy consumption is of great concern, such as the rapidly growing mobile application domain. A shift away from more traditional multicore architectures to heterogeneous platforms is seen as one solution to this problem. 
One such architecture is the \textit{coprocessor-dominated architecture} (\textit{CoDA}), in which one or more general-purpose processing cores are coupled with a large number of specialised hardware accelerators, which are able to perform very specific tasks faster and with greater energy efficiency than a general-purpose core. \section{Aims} This work aims to show that a coprocessor-dominated architecture can be utilized in an embedded FPGA platform. The use of an FPGA allows the accelerators to be designed specifically with the embedded application in mind. The aim is to produce a tool that can generate hardware accelerators from an application written for the Xilinx MicroBlaze soft processor \cite{microblaze}, providing increases in performance and energy efficiency. Because the accelerators are automatically generated by the tool, rather than designed by hand, the architecture can be re-generated for each version of the application, or even across different applications, very easily. The use of an FPGA allows the architecture to be very highly specialized, as the FPGA can be re-programmed for each architecture version. In this sense, the architecture itself becomes an extension of the application software, rather than a static platform as has traditionally been the case. \section{Methodology} To obtain measurements of the system's performance and energy efficiency, two benchmark applications were used, written in C and compiled using the Xilinx MicroBlaze GNU tools. These applications were then simulated using Vivado XSim with a range of coprocessor configurations to obtain performance measurements, including application speedup. Finally, the design was synthesised using Vivado to obtain power consumption estimates and measures of FPGA resource utilization. \section{Results} It was found that for some configurations, the architecture did provide a performance improvement, with speedups of up to 1.3x measured. 
This improvement was not as significant as had been hoped. Power consumption estimates were poor, with the architecture offering no improvement over a base MicroBlaze system, and in many cases even resulting in increased power consumption. A range of causes of this poor performance were identified, and solutions to them are outlined as directions for further investigation in this area of research.

\section{Statement of Ethics}

Neither the undertaking of this project, nor its end result, is envisioned to cause any harm. All testing was automated and performed entirely using software tools; no personal data was collected, nor were humans or animals participants in testing in any way.

Some portions of code used as test applications in the evaluation of this system were sourced from authors who had released the code under open source licenses. Great care was taken to ensure that all code used for this purpose was credited properly and that the licenses under which the code was provided permitted its use in this project.

\end{summary}

\chapter{Literature Review}

\section{Moore's Law and Dennard Scaling}

In 1965, Gordon Moore predicted that the number of transistors in an integrated circuit would double approximately every two years \cite{moore}. Dennard scaling describes the way in which transistor power density remains constant as the transistors themselves shrink in size \cite{dennard}. These phenomena combined allow for exponential transistor-density increases, and this has been exploited to produce exponentially higher performance in microprocessors year on year.

\begin{figure}[h]
\center{\includegraphics[width=0.75\textwidth] {figures/moore.png}}
\caption{Moore's 1965 prediction. \cite{moore}}
\end{figure}

However, in more recent times, a breakdown in this scaling has been observed. In the past, microprocessor manufacturers were able to offset the increased energy cost of faster transistor switching, resulting in higher and higher clock speeds.
However, more recently, microprocessor clock speeds have remained relatively static. This is due to a breakdown in Dennard scaling for very small transistors, and has caused microprocessor manufacturers to instead pursue increased performance by the use of multicore designs, as shown by figure ~\ref{fig:microprocessorTrends}. \begin{figure}[h] \center{\includegraphics[width=\textwidth] {figures/cpu-speed.png}} \caption{Trends in microprocessors since 1970. \cite{karlrupp}} \label{fig:microprocessorTrends} \end{figure} \section{Dark Silicon} Dark silicon describes the way in which not all transistors in a microprocessor chip can be utilized simultaneously (usually because of power or heat limitations), leading to a percentage of the chip being under-utilized (or not utilized at all) \cite{four-horsemen}. This leads to a gap between the microprocessor's observed performance, and that predicted by extrapolating from historic performance gains. It is predicted that with 22nm transistors, 21\% of the chip will be dark silicon, with this rising to 50\% at 8nm \cite{darksilicon}. This prediction shows that dark silicon will become a serious limitation as transistor density grows, especially in areas where energy efficiency is a primary concern, as the transistors still consume resources, such as power and area, despite not being utilized. Dark silicon also becomes a limiting factor with manycore devices. The limited parallelism of most applications results in a dark silicon gap when running with manycore devices. Again, \cite{darksilicon} shows that beyond a certain number of cores the speedup achieved is negligible. This is another kind of dark silicon - the underutilization in this case is caused by the limited parallelism of the application being unable to exploit all of the cores of a device. 
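This diminishing return can be illustrated with Amdahl's law (a standard result, used here only as an illustration; the model in \cite{darksilicon} is more detailed):

\begin{equation}
S(n) = \frac{1}{(1 - p) + p/n}
\end{equation}

Where \(p\) is the fraction of the application that can be parallelized and \(n\) is the number of cores. As \(n\) grows, \(S(n)\) approaches \(1/(1-p)\); with \(p = 0.9\), for example, no number of cores can yield more than a 10x speedup, so cores beyond that point contribute almost nothing and remain dark.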
This shows that while the shift to multicore designs is a solution to the breakdown of Dennard scaling, it is not a long-term one if continued increases in performance are desired. There have been several broad responses to this phenomenon.

In \cite{four-horsemen}, Taylor gives two pessimistic predictions regarding the future of silicon utilization. The first, the 'shrinking horseman', predicts that chips will begin to shrink as a result of the utilization wall. This would lead to an increase in cost per mm\textsuperscript{2} of silicon, as design costs, test costs, marketing costs, etc.\ remain constant. As Taylor describes: ``exponentially smaller chips are not exponentially cheaper''.

The second, perhaps slightly less pessimistic, prediction in Taylor's paper is referred to as 'the dim horseman', and describes the under-clocking or infrequent use of general-purpose silicon in order to meet power budgets. While a better alternative than the shrinking of chips, this still causes issues, as the chip is no longer operating at maximum capacity. Taylor outlines several options for the use of this 'dim silicon'. The first is the use of existing multicore architectures with some cores operating at a lower clock speed, or even being turned off intermittently. However, there are other 'dim silicon' approaches that make better use of the unutilized silicon area. One approach is to increase cache sizes, as cache memory is less power-dense than a processing core would be. This has a secondary benefit as well, reducing the likelihood of cache misses and thereby decreasing the number of power-hungry off-chip accesses.

Finally, Taylor also describes 'the specialized horseman'. This approach uses the dark silicon area not for general-purpose computing, but for a large number of specialized cores, which are much more energy efficient than a general-purpose core, trading silicon area for energy efficiency.
\section{Heterogeneous Architectures} One such 'specialized horseman' approach is the use of heterogeneous architectures. These are computer architectures in which one or more general-purpose cores are coupled with special-purpose coprocessors, also known as hardware accelerators. A hardware accelerator is a specialised hardware circuit intended to perform a specific task more efficiently than a general-purpose processor. While historically hardware accelerators have been designed by hand for a specific application (e.g. encryption), this is infeasible when considering architectures with large numbers of accelerators. Thus an automated approach to generating hardware accelerators is required. This is the approach taken in \cite{high-performance-microarchitecture}. Here Razdan and Smith propose a simple hardware accelerator architecture which avoids the need for memory synchronization and reduces the overheads involved in invoking a hardware accelerator. They describe a toolchain which is able to extract instruction streams to be 'outsourced' to an accelerator core (called a \textit{programmable functional unit}, or \textit{PFU}) after code generation. \begin{figure}[hbt] \center{\includegraphics[width=0.65\textwidth] {figures/pfu.png}} \caption{Two PFU optimization examples. Both sequences of operations can be evaluated in a single cycle, while the same sequences in MIPS R2000 instructions would take multiple cycles. \cite{high-performance-microarchitecture}} \end{figure} Each PFU has at most two input operands and at most one output operand. In addition, the PFU-logic instructions (MIPS instructions that are candidates for translation into a PFU) cannot be memory-access or flow control instructions. This means that each PFU has an identical interface, and can be executed in a single cycle. 
This allows PFUs to be executed with a single instruction, \textit{expfu}, in a single cycle, maintaining the fixed-format and single-cycle instructions of the MIPS processor's RISC instruction set.

\section{Conservation Cores}

While traditionally hardware accelerators have been used to speed up certain computations, they can also be used to achieve the same computational performance as a general-purpose processor at a fraction of the energy cost. For this reason, special-purpose cores are a subject of great interest to those trying to alleviate the dark silicon problem. \cite{c-cores} introduces \textit{conservation cores} (\textit{c-cores}), which are hardware accelerators designed for this purpose.

The paper outlines a method for generating c-cores for a given application. The first step is to identify 'hot' and 'cold' portions of the application, using some form of profiling. 'Hot' code sections are those which are run frequently, and so are ideal candidates for c-cores. 'Cold' sections are run infrequently, and so are not ideal candidates, as the overhead involved in using a c-core would not be offset by the computation avoided by its use.

Once 'hot' portions are identified, a c-core can be synthesised for each. While previous hardware accelerators might have been hand-designed, such an approach is not viable if large numbers of c-cores are to be used, and so the paper outlines a method for automating the synthesis of these cores. A control-flow graph can be extracted from the code, and from this a state machine model can be constructed to perform the functionality represented in the code.

\begin{figure}[h]
\center{\includegraphics[width=1.2\textwidth] {figures/c-cores.png}}
\caption{Conservation core example. \cite{c-cores}}
\end{figure}

One of the main issues with these c-cores is that memory synchronization restricts instruction-level parallelism and requires large numbers of pipeline registers (as each memory access marks a state-boundary).
\cite{eco-cores} attempts to alleviate this issue by introducing \textit{selective de-pipelining} (SDP), in which memory accesses occur in 'fast states' while the rest of the processing done by the c-core proceeds as before, in 'slow states'. Signals can safely propagate through the 'slow states' without the need for latching on fast-state boundaries, while values loaded from memory are latched on fast-state boundaries as soon as they are available. An example of this process is shown in figure~\ref{fig:sdp}.

\begin{figure}[h]
\center{\includegraphics[width=1.2\textwidth] {figures/sdp.png}}
\caption{Selective de-pipelining example. \cite{eco-cores}}
\label{fig:sdp}
\end{figure}

The GreenDroid project \cite{greendroid} takes the ideas of both \cite{c-cores} and \cite{eco-cores} and attempts to apply them to the Android operating system and software stack. Mobile phone processors have vastly lower power budgets than traditional desktop processors, both to achieve long battery life and to reduce generated heat. Profiling of the Android software was used to identify the best portions of code to be converted into c-cores. As a result, c-cores account for over 90\% of execution time, and if left idle when not in use, this leads to a significant reduction in energy consumption.

A similar approach is taken by Arnone in \cite{arnone-thesis}. Here, Arnone outlines a method of generating accelerator cores for a stack architecture, resulting in both timing and power improvements. Two architectures for generated cores are described: composite and wave-core. Composite cores are simple state machines with instructions mapped to states, attempting to reduce logic area by reusing existing logic between states. Conversely, the wave-core architecture avoids the reuse of logic between states, reducing power density at the cost of increased logic area. In both cases the resulting accelerator core is more energy-efficient than the general-purpose processor at the same task.
Here each core is a state machine generated from a basic block in the stack architecture's assembly code. States are divided between computation states and memory-access states. Each computation state is formed of a series of HDL statements which are direct translations of instructions in the stack architecture's instruction set. Between each computation state are one or more states in which memory is being accessed. Outputs from each state are latched in registers so that they are available for the next state to operate on.

Unlike conservation cores, these accelerators involve no flow control. This means that less use can be made of the cores (as the main processor must still perform flow control), but also that the implementation of the state machines is less complex.

\chapter{System Design}

This chapter describes the architecture and design of the MicroBlaze system and the accelerator cores, and the methods of communication between them. It also describes the methods used to analyse the object code in order to extract accelerator cores, as well as the ways in which blocks to be extracted are selected automatically.

\section{System Architecture}

MicroBlaze \cite{microblaze} is a 32-bit RISC architecture, intended for implementation on an FPGA. It is highly configurable, as certain features (e.g. FPU, multiplier, barrel shifter) can be disabled if not required. This allows the architecture to be as minimal as the application requires. Because of this, MicroBlaze is often used in embedded environments where power efficiency is a significant concern.

The architecture used in this project is a heterogeneous architecture consisting of a MicroBlaze processor connected to one or more hardware accelerator cores, via a control unit. Both the accelerators and the MicroBlaze core are connected to a local block RAM. Figure~\ref{fig:systemArchitecture} shows this arrangement in detail.
\begin{figure}[H]
\center{\includegraphics[width=0.60\textwidth] {figures/architecture.png}}
\caption{System architecture.}
\label{fig:systemArchitecture}
\end{figure}

This arrangement reduces the number of signals required, as the accelerators themselves can use a simplified memory interface to communicate with the memory and the MicroBlaze core, rather than each having to separately implement the LMB and AXI protocols. The full complexity of the LMB and AXI protocols, which provide arbitration between competing master cores, is not required, as it is guaranteed that only one core (either an accelerator or MicroBlaze) will be executing at any one time.

To further simplify the design, the more advanced MicroBlaze features, such as the hardware multiplier, divider, and barrel shifter, are deactivated. This not only simplifies the system architecture, speeding up synthesis and simulation times, but also reduces the pool of instructions that need to be translated (as hardware multiply and similar instructions no longer appear in application code).

\section{Hardware Accelerator Architecture}

Each hardware accelerator is modelled as a sequential state machine, with a sequence of states determined by the structure of the code that the hardware accelerator is intended to replace. Each state machine block has a common interface, which is shown in detail in appendix~\ref{appendix:interface}. All transitions between states are performed on the rising edge of the CLK signal.
\begin{figure}[H] \centering \begin{tikzpicture} \node[state, initial] (S_START) {S\_START}; \node[state, right of=S_START] (S_000) {S\_000}; \node[state, below of=S_000] (S_001) {S\_001}; \node[state, left=3.3cm of S_001] (S_002) {S\_002}; \node[state, below=0.8cm of S_001] (S_003) {S\_003}; \node[state, left=3.3cm of S_003] (S_004) {S\_004}; \node[state, below=0.8cm of S_003] (S_END) {S\_END}; \draw (S_START) edge[above] node{CLK} (S_000); \draw (S_000) edge[loop right] node{CLK} (S_000); \draw (S_000) edge[right] node{CLK \& M\_RDY} (S_001); \draw (S_001) edge[loop right] node{CLK} (S_001); \draw (S_001) edge[above] node{CLK \& M\_RDY} (S_002); \draw (S_002) edge[above right] node{CLK} (S_003); \draw (S_003) edge[loop right] node{CLK} (S_003); \draw (S_003) edge[above] node{CLK \& M\_RDY} (S_004); \draw (S_004) edge[right] node{CLK} (S_END); \end{tikzpicture} \caption{An example abstract state machine.} \label{fig:abstractStateMachine} \end{figure} There are four kinds of states in the hardware accelerator model. The start state, which is the state the accelerator is in when activated, transfers inputs from the accelerator's register input ports into internal registers, for use during the computation. The end state is the state the accelerator moves to when the computation is complete. In this state, output values are transferred from the accelerator's internal registers to the register output ports, to be read by the MicroBlaze core. An example abstract state machine is shown in figure ~\ref{fig:abstractStateMachine}. Between the start and end states are a series of computation and wait states. A computation state is derived from a series of non-memory-access instructions (e.g. \texttt{addi} (add-immediate) or \texttt{srl} (shift-right logical)). These instructions have translations in VHDL, allowing a circuit to be synthesised which implements the same functionality, but in parallel, as shown in figure ~\ref{fig:computationState}. 
As such, while the instructions represented by a computation state might take several cycles to complete on a general-purpose processor, the hardware accelerator can always complete a computation state in a single cycle. The translations of MicroBlaze instructions used in the generation of accelerator cores can be seen in appendix ~\ref{appendix:translations}. \begin{figure}[H] \center{\includegraphics[width=0.95\textwidth] {figures/computation-state.png}} \caption{Comparison of original program instructions, VHDL translations, and final parallel circuit.} \label{fig:computationState} \end{figure} Each computation state is separated by one or more wait states, in which the accelerator either fetches data from or writes data to the local memory. Each wait state is associated with a single input/output instruction. On entering a wait state, the accelerator sets up the necessary control signals for the memory access. Once the memory access is complete, the M\_RDY signal is asserted and the accelerator continues into the next state. \section{Accessing Cores} In order for the generated hardware accelerators to be at all useful, a lightweight system to allow the MicroBlaze core to activate them and retrieve results is required. The accelerator cores are connected, via a controller module, to the general-purpose core via the AXI bus, which exposes the controller to MicroBlaze as a set of memory-mapped registers. These registers can then be written to and read from using the \texttt{sw} (store word) and \texttt{lw} (load word) instructions. Table ~\ref{table:controllerRegisters} shows the locations and purpose of these registers. \begin{table}[H] \centering \begin{tabular}{ |p{6cm}|p{7cm} } \textbf{Memory Location} & \textbf{Description} \\ HW\_ACCEL\_PORT & Control register. \\[0.05cm] HW\_ACCEL\_PORT + 4 & I/O for register R1. \\[0.05cm] ... & ... \\[0.05cm] HW\_ACCEL\_PORT + 120 & I/O for register R30. \\[0.05cm] HW\_ACCEL\_PORT + 124 & I/O for MSR register. 
\\[0.05cm]
\end{tabular}
\caption{Controller module memory-mapped registers. The symbol HW\_ACCEL\_PORT represents the memory location to which the controller is mapped.}
\label{table:controllerRegisters}
\end{table}

There is no memory-mapped register for R31 because R31 is reserved by the compiler for storing the MSR system register when transferring it to and from the accelerator cores. This means that R31 will never be used by the application code, and so can safely be assumed never to be an input or output of a basic block (and, by extension, of an accelerator core). The transfer of MSR is necessary because this system register holds information required by some instructions, such as the carry flag, and so needs to be available to accelerator cores to use and modify.

\subsection{Transaction Overview}

Before activation, the controller module is in the READY state, waiting for the MicroBlaze core to pass inputs and select a core to be activated. The controller module remains in this state as the MicroBlaze core transfers inputs by writing to the appropriate controller module registers. Once all inputs have been transferred, the required core is activated by writing a numeric ID to the special control register. The MicroBlaze core then goes into a 'sleep' mode (by executing a \texttt{mbar} instruction) while the controller module activates the core. The controller then goes into the WAITING state.

Once the core has finished its computation, it signals the controller and transfers any outputs to it, and the controller goes into the DONE state. The controller then asserts the MicroBlaze core's wakeup signal, and the MicroBlaze core reads the appropriate outputs from the memory-mapped registers. Finally, it reads from the special control register to reset the controller and the accelerator core, and the controller again goes into the READY state. The result of this read contains the MSR system register, the final output from the core.
If necessary, the MicroBlaze core then writes the new value of MSR into the system register.

\section{False Output Pruning}

In order to ensure that register coherency is maintained when an accelerator core is activated, the registers that form the inputs and outputs of the basic block from which the core is derived must be known. The naive approach to identifying the inputs and outputs of a block is to assume that any register that is written to is an output, and any register read before it is written to is an input. However, this leads to very conservative results, as temporary registers used in the basic block are assumed to be outputs, even when they are not used in subsequent blocks. This creates a large amount of unnecessary overhead, as temporary registers are transferred from a core when doing so is unnecessary.

In order to solve this problem, the analysis needs to take into account both how the registers are used (as defined by the ABI) and the context in which the block exists (i.e. what blocks come before and after it). The register usage convention, taken from the ABI, is shown in full in appendix~\ref{appendix:abi}. R3 and R4 are used for returning values from subroutines, and so must be outputs from a basic block (if written to) if that block is the last block in a subroutine (i.e. the last instruction in it is a \texttt{rtsd} instruction). However, as R5 to R12 are volatile, they are not preserved when returning from a subroutine (the caller is responsible for saving and restoring them). Therefore, if they are written to in the last block of a subroutine, they can be discarded as outputs.

Once the true inputs and outputs of the last block of a subroutine are known, the outputs of any block directly preceding it in the program-flow graph in the same subroutine can be 'pruned' in a similar way.
If a volatile register is an output of a preceding block, but is neither a return-value register (R3 or R4) nor an input of the following block, it is not a 'true' output and can be assumed to be temporary. This approach can then be applied recursively until the start of the subroutine is reached.

\section{Hardware Accelerator Selection Heuristics}

There may be a limited amount of resources available for use as hardware accelerators on any given die. Therefore, the system needs to be able to decide which code blocks should be implemented as hardware accelerators and which should be left to execute on the general-purpose core. A set of heuristics was devised to allow the system to make this decision in an automated fashion.

\subsection{Input/Output Overhead}

Each accelerator has a number of input and output registers, and these registers must be written to (in the case of inputs) and read from (in the case of outputs) by MicroBlaze when a core is used. Each read or write is a transaction on the AXI bus, which has an associated overhead. The more inputs and outputs a particular core has, the higher its overhead will be. However, because the overhead for a block is a fixed cost, it must be considered relative to the size of the block itself. For example, a block with 10 cycles of instructions and 2 cycles of overhead has a higher relative overhead than a block with 100 cycles of instructions and 10 cycles of overhead. Thus, relative I/O overhead is represented as below:

\begin{equation}
\scalebox{1.5}{$\frac{cost_{in} + cost_{out}}{cost_{in} + cost_{out} + \#cycles}$}
\end{equation}

Where \(cost_{in}\) is the cost (in cycles) of transferring inputs from MicroBlaze to the core, \(cost_{out}\) is the cost (in cycles) of transferring outputs from the core to MicroBlaze, and \(\#cycles\) is the number of cycles of instructions in the core.
If this value is greater than 0.5, the given block would have more I/O overhead than it has instructions, making it a poor candidate for translation. Therefore, relative overheads above 0.5 are undesirable, and a core becomes a better candidate the closer its relative overhead is to 0.

\subsection{Potential Parallelism}

The purpose of the hardware accelerator cores being generated is (in part) to extract parallelism out of sequential code. This is limited, however, by the maximum 'width' of any one sequence of computation instructions (i.e. instructions that operate only on registers, and not on memory). In the worst case, a single computation instruction bounded on both sides by memory-access instructions has a 'width' of 1, and cannot attain any parallelism whatsoever. The core selection process should seek to maximise potential parallelism, as represented below:

\begin{equation}
\scalebox{1.5}{$\frac{\sum\limits_{c \in C} \#c}{\#C}$}
\end{equation}

Where \(C\) is the set of computation sequences in a basic block, with each \(c \in C\) being an individual sequence. \(\#c\) is the number of instructions in a given sequence, and \(\#C\) is the number of sequences in a given block. Intuitively, this heuristic represents the 'average width' of a basic block, and is a predictor of the level of parallelism that can be extracted from the block. As such, this heuristic is expected to be proportional to the instructions-per-cycle (IPC) count of the resulting core.

There is no theoretical limit to the potential parallelism heuristic, because there can be arbitrarily long sequences of computation instructions in a basic block. Hence, to normalize this value, the maximum 'average width' in the program must be known. The potential parallelism values for all blocks can then be divided by this maximum, giving a value in the range [0, 1].
\subsection{Memory Access Density}
As a core can only access one location in memory at a time, each memory access instruction is converted into its own state. In this state, the core sets up control signals, address, and data (if writing) and waits for the memory to respond with the 'ready' signal, at which point it reads data (if reading) and continues to the next state. Because each of these wait states must be completed in sequence, a lower bound on the number of states a core can have (and therefore the number of cycles it can complete in) is imposed by the number of memory accesses in a basic block. To take this into account when selecting basic blocks for translation, the 'memory-access density' of each block needs to be calculated, as follows:
\begin{equation} \scalebox{1.5}{$\frac{\#instructions_{io}}{\#instructions_{total}}$} \end{equation}
Where \(\#instructions_{io}\) is the number of memory-access instructions in the block, and \(\#instructions_{total}\) is the total number of instructions in the block. A memory-access density of >0.5 indicates that the block contains more memory accesses than computation, and a density of <0.5 indicates the reverse. As the amount of memory access limits the parallelism that can be extracted from a block, a lower memory-access density is desirable.
\subsection{Combining Heuristics}
The three heuristics described in this section capture different attributes of a basic block, and when selecting a block to be transformed into an accelerator core it is necessary to combine them into a 'hybrid' heuristic. Furthermore, this combination can be weighted to allow the preference for blocks of a certain kind to be controlled, e.g. if it is preferred that all selected blocks have low I/O overheads.
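Anticipating the weighted sum defined below, the memory-access density and the combined selection cost might be sketched as follows. The weights, names, and example values here are illustrative assumptions, not values used by the actual tool:

```python
def memory_access_density(n_mem_instructions, n_total_instructions):
    """Fraction of a block's instructions that access memory.

    A value above 0.5 means the block contains more memory accesses
    than computation; lower values are preferable."""
    return n_mem_instructions / n_total_instructions

def hybrid_cost(h_io, h_par, h_mem, a=0.4, b=0.3, c=0.3):
    """Weighted combination of the three normalised heuristics.

    The weights must sum to 1 so the result stays in [0, 1] and
    blocks remain directly comparable during selection."""
    assert abs((a + b + c) - 1.0) < 1e-9
    return a * h_io + b * h_par + c * h_mem

# Rank two hypothetical blocks, lowest combined cost first:
blocks = {"blk_a": (0.10, 0.30, 0.20), "blk_b": (0.60, 0.80, 0.50)}
print(sorted(blocks, key=lambda name: hybrid_cost(*blocks[name])))
# ['blk_a', 'blk_b']
```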
Because each heuristic is normalised, a simple weighted sum can be used to combine them:
\begin{equation} \scalebox{1.5}{$a \cdot H_{io} + b \cdot H_{par} + c \cdot H_{mem}$} \end{equation}
Where \(H_{io}\) is the I/O overhead heuristic, \(H_{par}\) is the potential parallelism heuristic, and \(H_{mem}\) is the memory access heuristic, with weights \(a\), \(b\), and \(c\) satisfying \(a + b + c = 1\). This gives a cost in the range [0, 1] for each basic block, allowing blocks to be compared when making selections for translation.
\chapter{Experimental Methodology}
This chapter outlines how the performance of the system described in the previous chapter was measured. In addition, this chapter describes how an estimation of the energy characteristics of the system is obtained. All testing was performed in the Xilinx Vivado toolsuite.
\section{Benchmark Applications}
A range of applications was selected for use as benchmarks in the evaluation of the MicroBlaze system. These applications were selected with the intention of being representative of embedded workloads.
\begin{itemize}
\item An implementation of the SHA256 algorithm \cite{sha256}.
\item A 64-sample FFT application \cite{fft}.
\end{itemize}
It is worth noting that this is a very small number of benchmarks, nowhere near enough to understand how this system will behave in the real world. However, it is hoped that even with the small quantity of tests, a case can be made for the benefits of such a configuration in an embedded system. Each application was compiled using the Xilinx MicroBlaze GNU tools to produce a series of assembly instructions. The accelerator generation tool was then run to produce a number of state machines, generated as VHDL files. These were then used in a simulation, along with the MicroBlaze system, to run the compiled application. Core counts ranging from 1 to 45 were used. In addition, each application was run on the MicroBlaze core alone, to provide a baseline to which other measurements could be compared.
\section{Measuring Application Speedup}
Speedup is a measure of the difference in execution time between two configurations, and is calculated as below \cite{computer-arch}:
\[\frac{cycles_{old}}{cycles_{new}}\]
The number of clock cycles taken for each application to complete was measured during the simulation, and from these values a measurement of speedup could be calculated relative to the baseline execution time. This was repeated for each configuration (number of cores, method of analysis, etc.).
\subsection{Cycle Breakdowns}
In addition to measuring the overall execution time of the application, the execution time spent performing specific tasks was also measured. This provided four values, giving a breakdown of the execution time of the application as below:
\begin{itemize}
\item Execution time spent on the MicroBlaze core (not interacting with accelerator cores).
\item Execution time spent on accelerator cores.
\item Execution time spent transferring values over the AXI bus.
\item Execution time spent with the MicroBlaze core asleep after an accelerator core had finished executing.
\end{itemize}
\section{Measuring Power Consumption}
From the simulation, a Switching Activity Interchange Format (SAIF) file could be produced. This format describes the switching rates of signals in the system during the course of the simulation. This was used with the Vivado power estimation utility to provide a more accurate estimate of power consumption, taking into account the behaviour of the system. This estimate provided two measurements: one for dynamic power consumption, and one for static power consumption.
\subsection{Dynamic Power}
Dynamic power is the power consumed as capacitances are charged and discharged when transistors switch on and off \cite{power}. As such, it is expected that the dynamic power of a circuit will be related to the overall activity of the system.
A system with a high rate of transistor switching will consume more dynamic power than one in which transistor switching rates are low. It is expected that dynamic power consumption will decrease as accelerator cores are added (assuming the right blocks are selected for conversion), as the MicroBlaze core will be inactive while the cores are performing computations, reducing the overall amount of transistor switching occurring in the system.
\subsection{Static Power}
Static power, or leakage power, is power consumed by transistors even when they are in a steady state, due to the leakage of current through transistor gates or due to electron tunnelling \cite{power}. Because static power consumption is unrelated to the actual activity of the circuit, and only its size, it is expected that static power consumption will increase as more accelerator cores are added, as each core adds more components to the system.
\section{Measuring Parallelism (Instructions-Per-Clock)}
As one of the aims of this project is to extract parallel circuits from sequential code, it makes sense to measure the parallelism of the generated hardware accelerators. One such metric is instructions-per-clock (IPC). To measure IPC for each generated core, the number of cycles taken to execute the core was measured, as well as the number of instructions in the basic block from which the core was derived. IPC can then be calculated:
\begin{equation} \scalebox{1.5}{$\frac{\#instructions}{\#cycles}$} \end{equation}
This gives the average number of instructions executed per clock cycle. For comparison, a RISC processor like MicroBlaze will generally have an IPC of at most 1, executing no more than one instruction per cycle.
\chapter{Results}
This chapter outlines the results obtained from the benchmark applications used. Section~\ref{section:resultsOverview} is a brief overview of the performance and power consumption measurements observed.
Section~\ref{section:resultsBreakdowns} then shows execution-time breakdowns for the SHA256 benchmark application across all four selection heuristics, as this was the benchmark on which the system performed the best. Finally, section~\ref{section:resultsTrends} identifies trends in the attributes of generated accelerator cores.
\section{Speedup and Power Consumption}
\label{section:resultsOverview}
The SHA256 benchmark showed significant speedup, as can be seen from figure~\ref{fig:speedupSHA256}. The highest speedup, over 1.3x, was obtained by both the hybrid and overhead heuristics. Even with as many as 28 cores, the speedup gained using the overhead heuristic remained above the baseline, at 1.1x, while the hybrid heuristic dropped below the baseline at 21 cores. This indicates that the overhead heuristic is the best of the four for producing a system that offers a performance increase with a large number of cores.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/speedup-sha256.png}} \caption{Speedup provided by accelerator cores for the SHA256 benchmark, by selection heuristic.} \label{fig:speedupSHA256} \end{figure}
In contrast, accelerator cores in the FFT benchmark either slowed down the system or provided no change. When using the hybrid, overhead, or avgwidth selection heuristics with the FFT benchmark (figure~\ref{fig:speedupFFT}), only a small speedup (1.1x) was observed at the highest point. The speedup then fell below the baseline with 10 cores. The memdensity heuristic performed worse, not offering any speedup whatsoever, and even falling to 0.5x with 45 cores.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/speedup-fft.png}} \caption{Speedup provided by accelerator cores for the FFT benchmark, by selection heuristic.} \label{fig:speedupFFT} \end{figure}
It was initially hoped that the use of hardware accelerators would reduce dynamic power consumption, but figures~\ref{fig:dpowerSHA256} and~\ref{fig:dpowerFFT} show that this was not the case. The dynamic power consumption of the standalone MicroBlaze system in the SHA256 benchmark was around 500mW, while the consumption with the maximum number of cores (33) was 1900mW, regardless of selection heuristic used.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/dpower-sha256.png}} \caption{Dynamic power consumption for the SHA256 benchmark, by selection heuristic.} \label{fig:dpowerSHA256} \end{figure}
Interestingly, unlike with the SHA256 benchmark, there was a difference between the performance of the heuristics with the FFT benchmark. The overhead heuristic, which offered the best speedup, showed the worst increase in power consumption. Conversely, the memdensity heuristic offered the worst speedup, but the least increase in power consumption. This indicates that, for some applications, the core selection criteria are different depending on whether performance or energy efficiency is the primary concern.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/dpower-fft.png}} \caption{Dynamic power consumption for the FFT benchmark, by selection heuristic.} \label{fig:dpowerFFT} \end{figure}
As expected, the static power consumed by the system increased as more cores were added, as a larger system was being constructed. Figure~\ref{fig:spowerSHA256} shows that for the SHA256 benchmark, the static power estimates increased generally, regardless of selection heuristic.
The static power estimations for the FFT benchmark (figure~\ref{fig:spowerFFT}) showed a similar variance between selection heuristics to the dynamic power estimations, with the overhead heuristic again showing the worst increase in power consumption, and the memdensity heuristic showing the least increase. This indicates that the cores selected by the memdensity heuristic (preferring cores with fewer memory accesses) are more power-efficient.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/spower-sha256.png}} \caption{Static power consumption for the SHA256 benchmark, by selection heuristic.} \label{fig:spowerSHA256} \end{figure}
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/spower-fft.png}} \caption{Static power consumption for the FFT benchmark, by selection heuristic.} \label{fig:spowerFFT} \end{figure}
\section{Effect of Selection Heuristics on Performance}
\label{section:resultsBreakdowns}
While initially the avgwidth heuristic (figure~\ref{fig:breakdownAvgWidthSHA256}) halved the execution time spent on the MicroBlaze core with 2 to 8 cores, this performance improvement stopped once the 10-core mark was reached. This heuristic selects the initial cores well, but makes poor selections in the long run.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/cycles-breakdown-sha256-avgwidth-dependency.png}} \caption{Execution-time breakdown for the SHA256 benchmark using the avgwidth selection heuristic.} \label{fig:breakdownAvgWidthSHA256} \end{figure}
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/cycles-breakdown-sha256-memdensity-dependency.png}} \caption{Execution-time breakdown for the SHA256 benchmark using the memdensity selection heuristic.} \label{fig:breakdownMemDensitySHA256} \end{figure}
Conversely, the memdensity heuristic (figure~\ref{fig:breakdownMemDensitySHA256}) did not show a performance improvement until the 8-core mark, with the cycle count sharply dropping at this point. This shows that, unlike the avgwidth heuristic, this heuristic does not initially select the cores that offer the best performance improvements.
Of the three individual heuristics, the overhead heuristic (figure~\ref{fig:breakdownOverheadSHA256}) performed the best, with a sharp drop in the cycle count with 2 cores, and speedup not dropping below the baseline until 33 cores. This indicates that I/O overhead is a significant limiting factor in accelerator core performance, and that optimising core selection to reduce this overhead yields significant performance improvements.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/cycles-breakdown-sha256-overhead-dependency.png}} \caption{Execution-time breakdown for the SHA256 benchmark using the overhead selection heuristic.} \label{fig:breakdownOverheadSHA256} \end{figure}
As expected, the hybrid selection heuristic (figure~\ref{fig:breakdownHybridSHA256}) generated a configuration which performed well initially, halving the number of cycles executed on the MicroBlaze core. However, unlike the overhead heuristic, speedup fell below the baseline at 21 cores, rather than 33.
Interestingly, this indicates that the I/O overhead factor dwarfs all other factors in core selection, making it effectively the only factor determining the performance of a given system.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/cycles-breakdown-sha256-hybrid-dependency.png}} \caption{Execution-time breakdown for the SHA256 benchmark using the hybrid selection heuristic.} \label{fig:breakdownHybridSHA256} \end{figure}
\section{Trends in Generated Cores}
\label{section:resultsTrends}
This section describes trends observed in the generated hardware accelerator cores. As the overhead selection heuristic performed better than the others at selecting blocks to be translated, only the cores selected by this heuristic are considered here.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/pop-sha256-28-cores-states-overhead-dependency.png}} \caption{Distribution of states in hardware accelerator cores for the SHA256 benchmark application (28 cores).} \label{fig:statesSHA256-28} \end{figure}
Figure~\ref{fig:statesSHA256-28} shows that the state machines produced are generally small, with almost all of the cores having between 3 and 15 states. Figure~\ref{fig:ipcSHA256} shows the average instructions-per-clock measurements for the SHA256 benchmark as the number of cores selected increases. The IPC of initially-selected cores is high, but the average drops to 1 as the number of cores increases. This indicates that there are cores being selected with an IPC <1.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/avg-ipc-sha256-overhead-dependency.png}} \caption{Average instruction-per-clock for the SHA256 benchmark application, with the overhead selection heuristic.} \label{fig:ipcSHA256} \end{figure}
Figure~\ref{fig:ipcFFT} shows the average IPC for the FFT benchmark. Here the average IPC with 45 cores is twice that of the SHA256 benchmark with 28.
This indicates that the cores generated for the FFT application are more efficient than those generated for the SHA256 application, probably due to a decreased number of memory accesses.
\begin{figure}[H] \center{\includegraphics[width=0.70\textwidth] {figures/avg-ipc-fft-overhead-dependency.png}} \caption{Average instruction-per-clock for the FFT benchmark application, with the overhead selection heuristic.} \label{fig:ipcFFT} \end{figure}
\chapter{Discussion and Further Work}
\section{Discussion of Results}
The performance increases observed were not as significant as was initially hoped. As seen in the previous chapter, the best method for selecting accelerator cores for this system is by minimizing I/O overhead. This is because transferring each input/output parameter over the AXI bus is an expensive operation, costing 4 cycles per parameter. The power consumption measurements observed are also disappointing, making it clear that switching from a single MicroBlaze core to a MicroBlaze core with accelerators only increases power consumption. This is in contrast to Arnone's work in \cite{arnone-thesis} and the GreenDroid project \cite{greendroid}, which both show clear improvements in energy efficiency in moving to a coprocessor-dominated architecture. While inaccuracies in the estimation of power consumption may be to blame for this gap, it could also be due to the difference between FPGA and VLSI technologies. For example, circuits that must be implemented using standard logic components such as lookup tables in an FPGA may be optimized into more efficient implementations in a VLSI environment. However, the FPGA has the benefit of reprogrammability, so a small amount of inefficiency may be acceptable as a trade-off for being able to reprogram the architecture whenever a change is made to the software, avoiding the need for 'patching' accelerator cores as with \cite{greendroid}.
In addition, it is clear from the execution-time breakdowns shown in the previous chapter that, even with high numbers of accelerator cores, a significant amount of execution time is still spent on the MicroBlaze core. In a coprocessor-dominated architecture there is always the risk that the processor will spend as much time managing hardware accelerators as it would executing the application code in a homogeneous architecture. For this reason, methods of reducing the reliance of the accelerator cores on a general-purpose processor for management should be explored. Otherwise, a CoDA is unlikely to reduce power consumption.
\section{Further Work}
This section is split into two subsections. The first describes extensions to the architecture that could be implemented to offer better performance or energy efficiency. The second describes further work that could be done in evaluating this system and others like it.
\subsection{System Architecture Improvements}
It is evident from the results observed in testing this system that the input/output overheads are too great to offer significant performance increases. Therefore, ways of decreasing these overheads should be investigated. One such way is to avoid the transfer of registers between the MicroBlaze core and the hardware accelerators where possible, by allowing registers to be transferred between the cores themselves. For example, consider two cores which execute back-to-back, with the MicroBlaze core only waking up to transfer registers from one to the other. In this scenario, the cores can be treated as a single unit; the MicroBlaze core will wake up to perform control flow (e.g. by executing a branch instruction), but will somehow signal to the cores that they are not to transfer register values to/from the MicroBlaze core, but between themselves.
In such a case, the only AXI accesses required are those that acquire register values needed for the control-flow instruction, and those relating to controlling the cores. Such a system could dramatically reduce the number of expensive I/O operations taking place, as well as decrease the time that the MicroBlaze core spends active, improving both performance and energy efficiency. Figure~\ref{fig:directTransfer} illustrates this mechanism.
\begin{figure}[H] \center{\includegraphics[width=0.65\textwidth] {figures/direct-transfer.png}} \caption{The direct transfer mechanism.} \label{fig:directTransfer} \end{figure}
There are also improvements that could be made to the cores themselves. One such improvement involves reducing the time that cores spend waiting on memory. It is possible that a significant portion of memory accesses are not prerequisites to the computations that immediately follow them, but are instead used several cycles later. This property could be exploited to pipeline memory accesses in accelerator cores, with a memory access being made at the same time as a computation is taking place. Such an improvement would reduce the impact of memory accesses, which are relatively expensive operations, on the performance of accelerator cores. Figure~\ref{fig:pipeliningAccelerators} shows two possible scenarios in which this technique could be used.
\begin{figure}[H] \center{\includegraphics[width=0.80\textwidth] {figures/pipelining-accelerators.png}} \caption{Pipelining of independent states.} \label{fig:pipeliningAccelerators} \end{figure}
\subsection{Further Evaluation Work}
It is difficult to make any broad claims about the system designed in this project because of the small pool of test applications used. For this reason, it is necessary to further investigate the performance of this system on a wide range of applications, including those used in the real world. In addition, the limited evaluation possible in a simulation environment resulted in inaccurate estimations of power consumption.
A real-world evaluation using a physical FPGA could provide a more accurate estimation of the energy efficiency of this system. Furthermore, there are improvements that could be made to the way that cores are selected. The heuristics described provide a reasonable (and computationally inexpensive) way to select accelerator cores, but this selection is done statically, and therefore cannot incorporate any information about the runtime behaviour of the system into the selection process. A selection method that involves some form of profiling, similar to that used in the GreenDroid project \cite{greendroid}, could ensure that the blocks transformed into accelerator cores are those that are accessed frequently enough for the conversion to provide benefit. Alternatively, a less conventional approach to core selection could be taken, for example using a genetic algorithm. While this could identify configurations with better performance and power consumption characteristics than could be found with the simple heuristics outlined in this work, it would require a change in the way that the system is evaluated; FPGA simulation and synthesis is far too time-consuming a process to be amenable to such an approach. Finally, while the focus in this investigation has been increasing performance by performing more computation in a smaller number of cycles, at the same clock speed, it is possible to instead improve energy efficiency by running the accelerator cores at a slower clock speed. This would maintain the same performance as without the accelerators, but would reduce energy usage by reducing the rate at which transistors in the circuit are switching. It would be worthwhile to measure how much the clock speed of the accelerator cores could be reduced (while maintaining the same level of performance) and determine how much energy efficiency could be gained from this. 
\section{Conclusion}
It is apparent from the measurements observed that the performance and energy-efficiency increases offered by the accelerator cores are not as high as was hoped. As outlined in the previous section, it is possible that with some improvements, both to the design of the system and to its evaluation, greater performance improvements could be gained. In addition, a greater pool of test applications could help to identify trends that make some applications well-suited to this approach, and others not. However, this work does show that application object code can be analysed, basic blocks for extraction identified, and hardware accelerators generated, all in an automated toolchain. This ability to automate the generation of the architecture, coupled with the re-programmability of the FPGA, means that the design space for a system such as this can be explored with greater ease than if designs had to be made by hand and tested in a VLSI environment. In addition, a range of methods of selecting accelerator cores has been outlined, and the various traits of systems generated from these different heuristics have been measured. It seems that the desired attributes of accelerator cores can differ depending on whether performance or energy efficiency is the primary concern, and this tool provides an opportunity to explore this further with real-world evaluation. Overall, while the observed performance and energy efficiency of this system was not as initially hoped, this work makes a strong case for further exploration of this architectural model in an FPGA environment.
\appendix
\chapter{Hardware Accelerator Interface}
\label{appendix:interface}
\begin{table}[H] \centering \begin{tabular}{ |p{3cm}|p{2cm}|p{8cm}| } \textbf{Signal} & \textbf{Direction} & \multicolumn{1}{c}{\textbf{Description}} \\ CLK & I & Clock signal. \\[0.05cm] RST & I & Reset signal. Active HIGH. \\[0.05cm] SEL & I & Core-select signal.
Core is active when this is HIGH, and inactive when this is LOW. \\[0.05cm] DONE & O & Done signal. This goes HIGH once the core has finished its computation, and goes LOW when the core is reset. \\[0.05cm] M\_RDY & I & Memory-ready signal. Signals to the core that a memory transaction has completed. \\[0.05cm] M\_RD & O & Memory read strobe. Indicates that the core is reading a value from memory. \\[0.05cm] M\_WR & O & Memory write strobe. Indicates that the core is writing a value to memory. \\[0.05cm] M\_ADDR & O & Memory address. Address of word being accessed in memory. \\[0.05cm] M\_DATA\_OUT & O & Data out. Data written to memory is placed on this line. \\[0.05cm] M\_DATA\_IN & I & Data in. Data read from memory is placed on this line. \\[0.05cm] IN\_Rxx & I & One or more input register lines. Inputs to the accelerator core are written to these lines from the MicroBlaze core. \\[0.05cm] OUT\_Rxx & O & One or more output register lines. Outputs from the core are written to these lines once it has finished computation. \\[0.05cm] CARRY\_IN & I & Carry signal input. Indicates the state of the carry flag at the start of the core's execution. \\[0.05cm] CARRY\_OUT & O & Carry signal output. Indicates the state of the carry flag at the end of the core's execution. \end{tabular} \caption{Hardware accelerator input/output signals.} \label{table:acceleratorSignals} \end{table} \chapter{Instruction Translations} \label{appendix:translations} Described here are the translations of MicroBlaze instructions to VHDL statements, used to automatically generate computational circuits from sequences of computation instructions. Due to time constraints, not all instruction translations were implemented; only those found in the two benchmark applications are present. \section{ADD - Addition} \subsection{ADD} Computes the unsigned addition of rA and rB and stores the result in rD. The carry flag is set to 1 if the addition overflows, and 0 otherwise. 
\begin{lstlisting} temp := ('0' & rA) + rB; carry := temp(32); rD := temp(31 downto 0); \end{lstlisting} \subsection{ADDI} Computes the unsigned addition of rA and an immediate value and stores the result in rD. The carry flag is set to 1 if the addition overflows, and 0 otherwise. \begin{lstlisting} temp := ('0' & rA) + unsigned(to_signed(imm, 32)); carry := temp(32); rD := temp(31 downto 0); \end{lstlisting} \subsection{ADDK} Computes the unsigned addition of rA and rB and stores the result in rD. The carry flag is unaffected and retains its previous value. \begin{lstlisting} rD := rA + rB; \end{lstlisting} \subsection{ADDIK} Computes the unsigned addition of rA and an immediate value and stores the result in rD. The carry flag is unaffected and retains its previous value. \begin{lstlisting} rD := rA + unsigned(to_signed(imm, 32)); \end{lstlisting} \section{RSUB - Reverse Subtraction} \subsection{RSUBK} Computes the unsigned subtraction of rA from rB and stores the result in rD. The carry flag is unaffected and retains its previous value. \begin{lstlisting} rD := rB - rA; \end{lstlisting} \section{CMP - Compare} \subsection{CMP} Computes the unsigned subtraction of rA from rB and stores the result in rD. The MSB of rD is updated to show the actual relation of rA to rB as signed values. \begin{lstlisting} rD := rB - rA; if signed(rA) > signed(rB) then rD(31) := '1'; else rD(31) := '0'; end if; \end{lstlisting} \subsection{CMPU} Computes the unsigned subtraction of rA from rB and stores the result in rD. The MSB of rD is updated to show the actual relation of rA to rB as unsigned values. \begin{lstlisting} rD := rB - rA; if unsigned(rA) > unsigned(rB) then rD(31) := '1'; else rD(31) := '0'; end if; \end{lstlisting} \section{SEXT - Sign Extend} \subsection{SEXT8} Sign extends the 8-bit value in rA and stores it in rD, by copying bit 7 of rA into bits 31-7 of rD. 
\begin{lstlisting} rD(31 downto 7) := (others => rA(7)); rD(6 downto 0) := rA(6 downto 0); \end{lstlisting} \subsection{SEXT16} Sign extends the 16-bit value in rA and stores it in rD, by copying bit 15 of rA into bits 31-15 of rD. \begin{lstlisting} rD(31 downto 15) := (others => rA(15)); rD(14 downto 0) := rA(14 downto 0); \end{lstlisting} \section{AND - Logical AND} \subsection{AND} Performs the logical AND of the values in rA and rB and stores the result in rD. \begin{lstlisting} rD := rA AND rB; \end{lstlisting} \subsection{ANDI} Performs the logical AND of the value in rA and an immediate value and stores the result in rD. \begin{lstlisting} rD := rA AND unsigned(to_signed(imm, 32)); \end{lstlisting} \section{OR - Logical OR} \subsection{OR} Performs the logical OR of the values in rA and rB and stores the result in rD. \begin{lstlisting} rD := rA OR rB; \end{lstlisting} \subsection{ORI} Performs the logical OR of the value in rA and an immediate value and stores the result in rD. \begin{lstlisting} rD := rA OR unsigned(to_signed(imm, 32)); \end{lstlisting} \section{XOR - Logical XOR} \subsection{XOR} Performs the logical XOR of the values in rA and rB and stores the result in rD. \begin{lstlisting} rD := rA XOR rB; \end{lstlisting} \subsection{XORI} Performs the logical XOR of the value in rA and an immediate value and stores the result in rD. \begin{lstlisting} rD := rA XOR unsigned(to_signed(imm, 32)); \end{lstlisting} \section{SR - Shift Right} \subsection{SRL} Shifts the value in rA right by one bit, with a 0 being shifted into the MSB. The result is stored in rD. The bit shifted out of rA is placed in the carry flag. \begin{lstlisting} temp := '0' & rA(31 downto 1); carry := rA(0); rD := temp; \end{lstlisting} \subsection{SRA} Shifts the value in rA right by one bit, with the MSB being duplicated (keeping the sign bit). The result is stored in rD. The bit shifted out of rA is placed in the carry flag. 
\begin{lstlisting} temp := rA(31) & rA(31 downto 1); carry := rA(0); rD := temp; \end{lstlisting} \subsection{SRC} Shifts the value in rA right by one bit, with the carry flag being shifted into the MSB. The result is stored in rD. The bit shifted out of rA is placed in the carry flag. \begin{lstlisting} temp := carry & rA(31 downto 1); carry := rA(0); rD := temp; \end{lstlisting} \chapter{MicroBlaze ABI Register Usage Convention} \label{appendix:abi} \begin{table}[H] \centering \begin{tabular}{ |p{2cm}|p{3cm}|p{8cm}| } \textbf{Register} & \textbf{Type} & \multicolumn{1}{c}{\textbf{Description}} \\ R0 & Dedicated & Hardcoded 0. \\[0.05cm] R1 & Dedicated & Stack pointer. \\[0.05cm] R2 & Dedicated & Read-only small-data-area pointer. \\[0.05cm] R3-R4 & Volatile & Return values/temporaries. \\[0.05cm] R5-R10 & Volatile & Passing parameters/temporaries. \\[0.05cm] R11-R12 & Volatile & Temporaries. \\[0.05cm] R13 & Dedicated & Read-write small-data-area pointer. \\[0.05cm] R14 & Dedicated & Return address for interrupts. \\[0.05cm] R15 & Dedicated & Return address for subroutines. \\[0.05cm] R16 & Dedicated & Return address for trap (debugger). \\[0.05cm] R17 & Dedicated & Return address for exceptions. \\[0.05cm] R18 & Dedicated & Reserved for assembler. \\[0.05cm] R19-R31 & Non-volatile & Must be saved across function calls (callee-save). \end{tabular} \caption{MicroBlaze ABI register descriptions \cite{microblaze-ref}.} \label{table:abi} \end{table} \printbibliography \end{document}
\newenvironment{tentative}
{
  \vspace{-0.2in}
  \begin{quotation}
  \noindent
  \color{red}MEMORY MODEL TASK GROUP TO-DO
  \small \em
  \rule{\linewidth}{1pt}\\
}
{
  \end{quotation}
  \vspace{-0.2in}
}

\lstdefinelanguage{alloy}{
  morekeywords={abstract, sig, extends, pred, fun, fact, no, set, one, lone, let, not, all, iden, some, run, for},
  morecomment=[l]{//},
  morecomment=[s]{/*}{*/},
  commentstyle=\color{green!40!black},
  keywordstyle=\color{blue!40!black},
  moredelim=**[is][\color{red}]{@}{@},
  escapeinside={!}{!},
}

\lstset{language=alloy}
\lstset{aboveskip=0pt}
\lstset{belowskip=0pt}

\newcommand{\diagram}{(picture coming soon)}

\chapter{RVWMO Explanatory Material}
\label{sec:explanation}

This chapter provides more explanation for the RVWMO memory model, using more informal language and concrete examples. These are intended to clarify the meaning and intent of the axioms and preserved program order rules.
%In case of any discrepancy between the informal descriptions here and the formal descriptions elsewhere, the formal definitions should be considered authoritative.

\section{Why RVWMO?}
\label{sec:whynottso}

Memory consistency models fall along a loose spectrum from weak to strong. Weak memory models (e.g., ARMv7, Power, Alpha) allow more hardware implementation flexibility and arguably deliver better performance, performance per watt, power efficiency, and scalability, with lower hardware verification overheads, than strong models, at the expense of a more complex programming model.
%Models which are too weak may not even be properly analyzable with modern formal analysis techniques.
Strong models (e.g., sequential consistency, TSO) provide simpler programming models, but at the cost of imposing more restrictions on the kinds of hardware optimizations that can be performed in the pipeline and in the memory system, with some cost in power and area, and with some added hardware verification burden.
For the base ISA, RISC-V has chosen the RVWMO memory model, which is a variant of release consistency. This places it in between the two extremes of the memory model spectrum. It is not as weak as the Power memory model, and this buys back some programming model simplicity without giving up very much in terms of performance. RVWMO is also not as restrictive as RVTSO, and hence it remains weak enough to ensure that implementations can be performant and scalable without incurring huge hardware complexity overheads. RVWMO is similar to the ARMv8 memory model in this regard. As such, the RVWMO memory model enables architects to build simple implementations, aggressive implementations, implementations embedded deeply inside a much larger system and subject to complex memory system interactions, or any number of other possibilities, all while simultaneously being strong enough to support programming language memory models at high performance. The risk of a weak memory model lies in the complexity of the programming model. Buggy code which ``just worked'' on stronger implementations may well break on more aggressive implementations due to the bugs simply not manifesting on the stronger-than-necessary implementations. For these situations, though, the root cause is the bug in the original software, not the memory model itself. The risk of finding short-term bugs in code ported from other architectures is outweighed by the long-term benefits that the weak memory model delivers more generally. To mitigate this risk, some hardware implementations may choose to stick with RVTSO, and that is perfectly acceptable and perfectly compatible with the RVWMO memory model. The cost that the weak memory model imposes on such implementations is the incremental overhead of fetching instructions (e.g., {\tt fence~r,rw} and {\tt fence rw,w}) which become no-ops on that implementation. 
(These fences must remain present in the code to ensure compatibility with other more weakly-ordered RISC-V implementations.) Most software is also fully compatible with weak memory models. C/C++, Java, and Linux, to name some of the most notable and more formally analyzed examples, are all entirely compatible with weak non-atomic memory models, as all are designed to run not just on x86 but also on ARM, Power, and many other architectures. It is true that some code, e.g., code ported from x86, does sometimes (correctly or incorrectly) assume a stronger model such as TSO. For such code, the RVWMO memory model provides a means for restoring TSO to sections of code through fences and atomics with {\tt .aq} and {\tt .rl} bits in the ``A'' extension, until such code can be ported to RVWMO over time. Designers who wish to provide drop-in compatibility with x86 code can also implement the Ztso extension which enforces RVTSO. Code written for RVWMO is automatically and inherently compatible with RVTSO, but code written assuming RVTSO is not guaranteed to run correctly on RVWMO implementations. In fact, RVWMO implementations will (and should) simply refuse to run TSO-only binaries. Each implementation must therefore choose whether to prioritize compatibility with RVTSO code (e.g., to facilitate porting from x86) or whether to instead prioritize compatibility with other RISC-V cores implementing RVWMO. \section{Litmus Tests} The explanations in this chapter make use of {\em litmus tests}, or small programs designed to test or highlight one particular aspect of a memory model. Figure~\ref{fig:litmus:sample} shows an example of a litmus test with two harts. For this figure (and for all figures that follow in this chapter), we assume that {\tt s0}--{\tt s2} are pre-set to the same value in all harts. 
As a convention, we will assume that {\tt s0} holds the address labeled {\tt x}, {\tt s1} holds {\tt y}, and {\tt s2} holds {\tt z}, where {\tt x}, {\tt y}, and {\tt z} are different memory addresses. This figure shows the same program twice: on the left in RISC-V assembly, and again on the right in graphical form. \begin{figure}[h!] \centering { \tt\small \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & $\vdots$ & & $\vdots$ \\ & li t1, 1 & & li t4, 4 \\ (a) & sw t1,0(s0) & (e) & sw t4,0(s0) \\ & $\vdots$ & & $\vdots$ \\ & li t2, 2 & & \\ (b) & sw t2,0(s0) & & \\ & $\vdots$ & & $\vdots$ \\ (c) & lw a0,0(s0) & & \\ & $\vdots$ & & $\vdots$ \\ & li t3, 3 & & li t5, 5 \\ (d) & sw t3,0(s0) & (f) & sw t5,0(s0) \\ & $\vdots$ & & $\vdots$ \\ \end{tabular} } ~~~~ \diagram \caption{A sample litmus test} \label{fig:litmus:sample} \end{figure} Litmus tests are used to understand the implications of the memory model in specific concrete situations. For example, in the litmus test of Figure~\ref{fig:litmus:sample}, the final value of {\tt a0} in the first hart can be either 2, 4, or 5, depending on the dynamic interleaving of the instruction stream from each hart at runtime. However, in this example, the final value of {\tt a0} in Hart 0 will never be 1 or 3: the value 1 will no longer be visible at the time the load executes, and the value 3 will not yet be visible by the time the load executes. We analyze this test and many others below. \section{Explaining the RVWMO Rules} In this section, we provide explanation and examples for all of the RVWMO rules and axioms. \subsection{Preserved Program Order and Global Memory Order} Preserved program order represents the set of intra-hart orderings that the hart's pipeline must ensure are maintained as the instructions execute, even in the presence of hardware optimizations that might otherwise reorder those operations. 
Events from the same hart which are not ordered by preserved program order, on the other hand, may appear reordered from the perspective of other harts and/or observers. Informally, the global memory order represents the order in which loads and stores perform. The formal memory model literature has moved away from specifications built around the concept of performing, but the idea is still useful for building up informal intuition. A load is said to have performed when its return value is determined. A store is said to have performed not when it has executed inside the pipeline, but rather only when its value has been propagated to globally visible memory. In this sense, the global memory order also represents the contribution of the coherence protocol and/or the rest of the memory system to interleave the (possibly reordered) memory accesses being issued by each hart into a single total order agreed upon by all harts. The order in which loads perform does not always directly correspond to the relative age of the values those two loads return. In particular, a load $b$ may perform before another load $a$ to the same address (i.e., $b$ may execute before $a$, and $b$ may appear before $a$ in the global memory order), but $a$ may nevertheless return an older value than $b$. This discrepancy captures the reordering effects of store buffers placed between the core and memory: a younger load may read from a value in the store buffer, while an older load which appears before that store in program order may ignore that younger store and read an older value from memory instead. To account for this, at the time each load performs, the value it returns is determined by the load value axiom, not just strictly by determining the most recent store to the same address in the global memory order, as described below. 
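As an illustration only (the event encoding and names below are invented for this sketch, and initial memory is assumed to be zero), the way a load's return value is jointly determined by the global memory order and by program-order-earlier stores from the same hart can be modeled as:

```python
# Illustrative model (not normative): a load returns the value of the
# latest store, in global memory order, among (a) stores to the same
# address that precede the load in the global memory order, and
# (b) stores to the same address from the same hart that precede the
# load in program order (capturing store-buffer forwarding).
def load_value(load, stores, gmo, po):
    """gmo/po map event ids to positions in the global memory order
    and in the issuing hart's program order, respectively."""
    visible = [
        s for s in stores
        if s["addr"] == load["addr"]
        and (gmo[s["id"]] < gmo[load["id"]]
             or (s["hart"] == load["hart"]
                 and po[s["id"]] < po[load["id"]]))
    ]
    if not visible:
        return 0  # assumed initial memory value
    return max(visible, key=lambda s: gmo[s["id"]])["value"]
```

Under this sketch, a load that performs before a program-order-earlier store from its own hart drains to memory still returns that store's value, while a load on another hart performing at the same point does not.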
\subsection{Store Buffering (Load Value Axiom)} \begin{tabular}{p{1cm}|p{12cm}} & Load value axiom:\loadvalueaxiom \end{tabular} Preserved program order is {\em not} required to respect the ordering of a store followed by a load to an overlapping address. This complexity arises due to the ubiquity of store buffers in nearly all implementations. Informally, the load may perform (return a value) by forwarding from the store while the store is still in the store buffer, and hence before the store itself performs (writes back to globally visible memory). Any other hart will therefore observe the load as performing before the store. \begin{figure}[h!] \centering { \tt\small \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & li t1, 1 & & li t1, 1 \\ (a) & sw t1,0(s0) & (e) & sw t1,0(s1) \\ (b) & lw a0,0(s0) & (f) & lw a2,0(s1) \\ (c) & fence r,r & (g) & fence r,r \\ (d) & lw a1,0(s1) & (h) & lw a3,0(s0) \\ \end{tabular} } ~~~~ \diagram \caption{A store buffer forwarding litmus test} \label{fig:litmus:storebuffer} \end{figure} Consider the litmus test of Figure~\ref{fig:litmus:storebuffer}. 
When running this program on an implementation with store buffers, it is possible to arrive at the final outcome {\tt a0=1, a1=0, a2=1, a3=0} as follows:
\begin{itemize}
  \item (a) executes and enters the first hart's private store buffer
  \item (b) executes and forwards its return value 1 from (a) in the store buffer
  \item (c) executes since all previous loads (i.e., (b)) have completed
  \item (d) executes and reads the value 0 from memory
  \item (e) executes and enters the second hart's private store buffer
  \item (f) executes and forwards its return value 1 from (e) in the store buffer
  \item (g) executes since all previous loads (i.e., (f)) have completed
  \item (h) executes and reads the value 0 from memory
  \item (a) drains from the first hart's store buffer to memory
  \item (e) drains from the second hart's store buffer to memory
\end{itemize}

Therefore, the memory model must be able to account for this behavior.

To put it another way, suppose the definition of preserved program order did include the following hypothetical rule: memory access $a$ precedes memory access $b$ in preserved program order (and hence also in the global memory order) if $a$ precedes $b$ in program order and $a$ and $b$ are accesses to the same memory location, $a$ is a write, and $b$ is a read. Call this ``Rule X''. Then we get the following:
\begin{itemize}
  \item (a) precedes (b): by Rule X
  \item (b) precedes (d): by rule \ref{ppo:fence}
  \item (d) precedes (e): by the load value axiom. Otherwise, if (e) preceded (d), then (d) would be required to return the value 1. (This is a perfectly legal execution; it's just not the one in question.)
  \item (e) precedes (f): by Rule X
  \item (f) precedes (h): by rule \ref{ppo:fence}
  \item (h) precedes (a): by the load value axiom, as above.
\end{itemize}
The global memory order must be a total order and cannot be cyclic, because a cycle would imply that every event in the cycle happens before itself, which is impossible.
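The orderings derived in the bullet points above can be checked for a cycle mechanically. The following illustrative Python sketch (not part of the specification) simply transcribes those six orderings, including the hypothetical Rule X, as directed edges and detects the cycle:

```python
# Edges transcribed from the orderings derived above for the
# store-buffer litmus test, assuming the hypothetical "Rule X".
# A cycle means no total global memory order exists, so this
# candidate execution would be forbidden.
edges = [
    ("a", "b"),  # Rule X: same-address store -> load
    ("b", "d"),  # fence r,r
    ("d", "e"),  # load value axiom
    ("e", "f"),  # Rule X
    ("f", "h"),  # fence r,r
    ("h", "a"),  # load value axiom
]

def has_cycle(edges):
    """Depth-first search for a cycle in a directed graph."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    visiting, done = set(), set()
    def visit(node):
        if node in visiting:
            return True   # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(visit(n) for n in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(visit(n) for n in list(graph))
```

Dropping any one edge (for instance, either of the Rule X edges) leaves an acyclic graph, which is exactly why removing Rule X makes the observed store-buffer behavior legal.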
Therefore, the execution proposed above would be forbidden, and hence the addition of rule X would break the memory model. Nevertheless, even if (b) precedes (a) and/or (f) precedes (e) in the global memory order, the only sensible possibility in this example is for (b) to return the value written by (a), and likewise for (f) and (e). This combination of circumstances is what leads to the second option in the definition of the load value axiom. Even though (b) precedes (a) in the global memory order, (a) will still be visible to (b) by virtue of sitting in the store buffer at the time (b) executes. Therefore, even if (b) precedes (a) in the global memory order, (b) should return the value written by (a) because (a) precedes (b) in program order. Likewise for (e) and (f). \subsection{Same-Address Orderings, Part 1 (Rule~\ref{ppo:->st})} \begin{tabular}{p{1cm}|p{12cm}} & Rule \ref{ppo:->st}: \ppost \end{tabular} Same-address orderings where the latter is a store are straightforward: a load or store can never be reordered with a later store to an overlapping memory location. From a microarchitecture perspective, generally speaking, it is difficult or impossible to undo a speculatively reordered store if the speculation turns out to be invalid, so such behavior is simply disallowed by the model. Same-address load-load orderings are far more subtle; see Chapter~\ref{sec:ppo:rdw}. \begin{comment} The formal model captures this as follows: \begin{itemize} \item (a) precedes (b) in preserved program order because both are stores to the same address, and (b) is a store (Rule~\ref{ppo:->st}). Therefore, (c) cannot return the value written by (a), because (b) is a later store to the same address in both program order and the global memory order, and so returning the value written by (a) would violate the load value axiom. \item (c) precedes (d) in preserved program order because both are accesses to the same address, and (d) is a store. (c) also precedes (d) in program order. 
Therefore, (c) is not able to return the value written by (d), because neither option in the load value axiom applies. \end{itemize} \end{comment} \subsection{Fences (Rule~\ref{ppo:fence})}\label{sec:fence} \begin{tabular}{p{1cm}|p{12cm}} & Rule \ref{ppo:fence}: \ppofence \end{tabular} By default, the {\tt fence} instruction ensures that all memory accesses from instructions preceding the fence in program order (the ``predecessor set'') appear earlier in the global memory order than memory accesses from instructions appearing after the fence in program order (the ``successor set''). However, fences can optionally further restrict the predecessor set and/or the successor set to a smaller set of memory accesses in order to provide some speedup. Specifically, fences have {\tt .pr}, {\tt .pw}, {\tt .sr}, and {\tt .sw} bits which restrict the predecessor and/or successor sets. The predecessor set includes loads (resp.\@ stores) if and only if {\tt .pr} (resp.\@ {\tt .pw}) is set. Similarly, the successor set includes loads (resp.\@ stores) if and only if {\tt .sr} (resp.\@ {\tt .sw}) is set. The full RISC-V opcode encoding currently has nine non-trivial combinations of the four bits {\tt pr}, {\tt pw}, {\tt sr}, and {\tt sw}, plus one extra encoding which is expected to be added to facilitate mapping of ``acquire+release'' or TSO semantics. The remaining seven combinations have empty predecessor and/or successor sets and hence are no-ops. Of the ten non-trivial options, only six are commonly used in practice: {\tt \begin{itemize} \item fence rw,rw \item fence.tso \textrm{(i.e., a combined {\tt fence r,rw} $+$ {\tt fence rw,w})} \item fence rw,w \item fence r,rw \item fence r,r \item fence w,w \end{itemize} } We strongly recommend that programmers stick to these six, as these are the best understood. {\tt fence} instructions using any other combination of {\tt .pr}, {\tt .pw}, {\tt .sr}, and {\tt .sw} are reserved. 
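The counts above (nine non-trivial bit combinations, seven no-ops) can be confirmed by enumeration; the following small Python sketch is illustrative only and not part of the specification:

```python
from itertools import product

# Enumerate all 16 settings of the fence bits {pr, pw, sr, sw}.
# A fence whose predecessor set is empty (neither .pr nor .pw set)
# or whose successor set is empty (neither .sr nor .sw set) orders
# nothing and is therefore a no-op.
nontrivial = []
noops = []
for pr, pw, sr, sw in product((False, True), repeat=4):
    if (pr or pw) and (sr or sw):
        nontrivial.append((pr, pw, sr, sw))
    else:
        noops.append((pr, pw, sr, sw))
```

This yields nine non-trivial combinations and seven no-ops; {\tt fence.tso} is the tenth non-trivial option, encoded separately rather than as a setting of these four bits.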
Finally, we note that since RISC-V uses a multi-copy atomic memory model, programmers can reason about fences and the {\tt .aq} and {\tt .rl} bits in a thread-local manner. There is no complex notion of ``fence cumulativity'' as found in memory models which are not multi-copy atomic. \subsection{Acquire/Release Ordering (Rules~\ref{ppo:acquire}--\ref{ppo:amoload})}\label{sec:acqrel} \begin{tabular}{p{1cm}|p{12cm}} & Rule \ref{ppo:acquire}: \ppoacquire \\ %& Rule \ref{ppo:loadtoacq}: \ppoloadtoacq \\ & Rule \ref{ppo:release}: \pporelease \\ & Rule \ref{ppo:strongacqrel}: \ppostrongacqrel \\ & Rule \ref{ppo:amostore}: \ppoamostore \\ & Rule \ref{ppo:amoload}: \ppoamoload \\ \end{tabular} An {\em acquire} operation is used at the start of a critical section. The general requirement for acquire semantics is that all loads and stores inside the critical section are up to date with respect to the synchronization variable being used to protect it. In other words, an acquire operation requires load-to-load/store ordering. Acquire ordering can be enforced in one of two ways: setting {\tt .aq}, which enforces ordering with respect to just the synchronization variable itself, or with a {\tt FENCE r,rw}, which enforces ordering with respect to all previous loads. \begin{figure}[h!] \centering\small \begin{verbatim} sd x1, (a1) # Random unrelated store ld x2, (a2) # Random unrelated load li t0, 1 # Initialize swap value. again: amoswap.w.aq t0, t0, (a0) # Attempt to acquire lock. bnez t0, again # Retry if held. # ... # Critical section. # ... amoswap.w.rl x0, x0, (a0) # Release lock by storing 0. 
    sd x3, (a3)                 # Random unrelated store
    ld x4, (a4)                 # Random unrelated load
  \end{verbatim}
  \caption{A spinlock with atomics}
  \label{fig:litmus:spinlock_atomics}
\end{figure}

Consider Figure~\ref{fig:litmus:spinlock_atomics}.
Because this example uses {\tt .aq}, the loads and stores in the critical section are guaranteed to appear in the global memory order after the {\tt amoswap} used to acquire the lock.
However, assuming {\tt a0}, {\tt a1}, and {\tt a2} point to different memory locations, the loads and stores in the critical section may or may not appear after the ``random unrelated load'' at the beginning of the example in the global memory order.

\begin{figure}[h!]
  \centering\small
  \begin{verbatim}
    sd x1, (a1)                 # Random unrelated store
    ld x2, (a2)                 # Random unrelated load
    li t0, 1                    # Initialize swap value.
  again:
    amoswap.w t0, t0, (a0)      # Attempt to acquire lock.
    fence r, rw                 # Enforce "acquire" memory ordering
    bnez t0, again              # Retry if held.
    # ...
    # Critical section.
    # ...
    fence rw, w                 # Enforce "release" memory ordering
    amoswap.w x0, x0, (a0)      # Release lock by storing 0.
    sd x3, (a3)                 # Random unrelated store
    ld x4, (a4)                 # Random unrelated load
  \end{verbatim}
  \caption{A spinlock with fences}
  \label{fig:litmus:spinlock_fences}
\end{figure}

Now, consider the alternative in Figure~\ref{fig:litmus:spinlock_fences}.
In this case, even though the {\tt amoswap} does not enforce ordering with an {\tt .aq} bit, the fence nevertheless enforces that the acquire {\tt amoswap} appears earlier in the global memory order than all loads and stores in the critical section.
Note, however, that in this case, the fence also enforces additional orderings: it also requires that the ``random unrelated load'' at the start of the program appears earlier in the global memory order than the loads and stores of the critical section.
(This particular fence does not, however, enforce any ordering with respect to the ``random unrelated store'' at the start of the snippet.)
In this way, fence-enforced orderings are slightly coarser than orderings enforced by {\tt.aq}. \begin{comment} A load-acquire also appears later in the global memory order than any previous paired loads to overlapping addresses. This rule is in place primarily to ensure compatibility with C/C++ release sequences. Consider the example of Figure~\ref{fig:relseq}: \begin{figure}[h!] \centering {\tt\small \begin{tabular}{cl||cl} \multicolumn{2}{c}{Thread 0} & \multicolumn{2}{c}{Thread 1} \\ \hline (a) & x = 1; & (d) & atomic\_exchange(y, 2, memory\_order\_relaxed); \\ (bc) & atomic\_store(y, 1, & (e) & atomic\_exchange(y, 2, memory\_order\_acquire); \\ & ~~memory\_order\_release); & (f) & int a1 = x; \\ \end{tabular} } \bigskip {\tt\small \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & li t1, 1 & & li t1, 1 \\ (a) & sd t1,0(s0) & (d) & amoswap~~~ a0,t1,0(s1) \\ (b) & fence rw,w & (e) & amoswap.aq x0,a1,0(s1) \\ (c) & sd t1,0(s1) & (f) & ld~~~~~~~~ a1,0(s0) \\ \end{tabular} } ~~~~ \diagram \caption{C/C++ release sequence example} \label{fig:relseq} \end{figure} In Figure~\ref{fig:relseq}, the original C/C++ source code has a ``synchronizes-with'' relationship from (c) to (e) via (d), where the latter is part of the ``release sequence'' of (c). Therefore, RVWMO must somehow require (c) to appear before (e) in the global memory order. Without rule~\ref{ppo:loadtoacq}, (c) would be ordered before (d), but (d) would {\em not} be ordered before (e) due to ``fri; rfi'' behavior (Chapter~\ref{sec:ppo:rdw}). Rule~\ref{ppo:loadtoacq} therefore fixes the missing link by placing both the load and the store part of (d) before (e) in the global memory order. \end{comment} Release orderings work exactly the same as acquire orderings, just in the opposite direction. Release semantics require all loads and stores in the critical section to appear before the lock-releasing store (here, an {\tt amoswap}) in the global memory order. 
This can be enforced using the {\tt .rl} bit or with a {\tt fence rw,w} operation.

Likewise, the ordering between the loads and stores in the critical section and the ``random unrelated store'' at the end of the code snippet is enforced only by the {\tt fence rw,w} in the second example, not by the {\tt .rl} in the first example.

%Note that a corollary of rule~\ref{ppo:loadtoacq} is not needed for release operations because it would be redundant with rule~\ref{ppo:->st}.

By default, store-release-to-load-acquire ordering is not enforced. This facilitates the porting of code written under the TSO and/or RCpc memory models; see Chapter~\ref{sec:porting} for details. To enforce store-release-to-load-acquire ordering, use store-release-RCsc and load-acquire-RCsc operations, so that PPO rule \ref{ppo:strongacqrel} applies. The use of only store-release-RCsc and load-acquire-RCsc operations implies sequential consistency, as the combination of PPO rules \ref{ppo:acquire}--\ref{ppo:strongacqrel} implies that all RCsc accesses will respect program order.

AMOs with both {\tt .aq} and {\tt .rl} set are fully-ordered operations. Treating the load part and the store part as independent RCsc operations is not in and of itself sufficient to enforce full fencing behavior, but this subtle weak behavior is counterintuitive and not much of an advantage architecturally, especially with {\tt lr} and {\tt sc} also available. For this reason, AMOs annotated with {\tt .aqrl} are strengthened to being fully-ordered under RVWMO.

%The RVWMO memory model rules do not place any explicit restrictions on whether atomics can forward values from stores still in a store buffer, as some particularly aggressive microarchitectures may do this at times.
%Such behavior is compatible with higher-level software memory model such as the one used by C/C++ if the mappings of Chapter~\ref{sec:porting} are used.
%However, such forwarding can be prevented manually if desired by placing a {\tt fence w,r,[addr]} between the store and the load in question.

\subsection{Dependencies (Rules~\ref{ppo:addr}--\ref{ppo:success})}
\label{sec:depspart1}

\begin{tabular}{p{1cm}|p{12cm}}
  & Rule \ref{ppo:addr}: \ppoaddr \\
  & Rule \ref{ppo:data}: \ppodata \\
  & Rule \ref{ppo:ctrl}: \ppoctrl \\
  & Rule \ref{ppo:success}: \pposuccess \\
\end{tabular}

Dependencies from a load to a later memory operation in the same hart are respected by the RVWMO memory model. The Alpha memory model was notable for choosing {\em not} to enforce the ordering of such dependencies, but most modern hardware and software memory models consider allowing dependent instructions to be reordered too confusing and counterintuitive. Furthermore, modern code sometimes intentionally uses such dependencies as a particularly lightweight ordering enforcement mechanism.

Like other modern memory models, the RVWMO memory model uses syntactic rather than semantic dependencies. In other words, this definition depends on the identities of the registers being accessed by different instructions, not the actual contents of those registers. This means that an address, control, or data dependency must be enforced even if the calculation could seemingly be ``optimized away''. This choice ensures that RVWMO remains compatible with programmers who use these false syntactic dependencies intentionally to form a lightweight type of ordering mechanism.

For example, there is a syntactic address dependency from the first instruction to the last instruction in Figure~\ref{fig:litmus:address}, even though {\tt a1} XOR {\tt a1} is zero and hence has no effect on the address accessed by the second load.

\begin{verbbox}
ld a1,0(s0)
xor a2,a1,a1
add s1,s1,a2
ld a5,0(s1)
\end{verbbox}
\begin{figure}[h!]
  \centering\small
  \theverbbox
  \caption{A syntactic address dependency}
  \label{fig:litmus:address}
\end{figure}

The benefit of using dependencies as a lightweight synchronization mechanism is that the ordering enforcement requirement is limited only to the specific two instructions in question. Other non-dependent instructions may be freely reordered by aggressive implementations. One alternative would be to use a load-acquire, but this would enforce ordering for the first load with respect to {\em all} subsequent instructions. Another would be to use a {\tt fence r,r}, but this would include all previous and all subsequent loads, making this option even more expensive.

Control dependencies behave differently from address and data dependencies in the sense that a control dependency always extends to all instructions following the original target in program order. Consider Figure~\ref{fig:litmus:control1}: the instruction at {\tt next} will always execute, but it nevertheless still has a control dependency on the first instruction.

\begin{verbbox}
lw x1,0(x2)
bne x1,x0,next
sw x3,0(x4)
next: sw x5,0(x6)
\end{verbbox}
\begin{figure}[h!]
  \centering\small
  \theverbbox
  \caption{A syntactic control dependency}
  \label{fig:litmus:control1}
\end{figure}

\begin{verbbox}
lw x1,0(x2)
bne x1,x0,next
next: sw x3,0(x4)
\end{verbbox}
\begin{figure}[h!]
  \centering\small
  \theverbbox
  \caption{Another syntactic control dependency}
  \label{fig:litmus:control2}
\end{figure}

Likewise, consider Figure~\ref{fig:litmus:control2}. Even though both branch outcomes have the same target, there is still a control dependency from the first instruction in this snippet to the last. This definition of control dependency is subtly stronger than what might be seen in other contexts (e.g., C++), but it conforms with standard definitions of control dependencies in the literature.

\begin{figure}[h!]
\center { \tt\small \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline (a) & ld a0,0(s0) & (e) & ld a3,0(s2) \\ (b) & lr a1,0(s1) & (f) & sd a3,0(s0) \\ (c) & sc a2,a0,0(s1) & \\ (d) & sd a2,0(s2) & \\ \end{tabular} } ~~~~ \diagram \caption{A variant of the LB litmus test} \label{fig:litmus:successdeps} \end{figure} Finally, we highlight a unique new rule regarding the success registers written by store-conditional instructions. In certain cases, without PPO rule \ref{ppo:success}, a store conditional could in theory be made to store its own success output value as its data, in a manner reminiscent of so-called out-of-thin-air behavior. This is shown in Figure~\ref{fig:litmus:successdeps}. Suppose a hypothetical implementation could occasionally make some early guarantee that a store-conditional operation will succeed. In this case, (c) could return 0 to {\tt a2} early (before actually executing), allowing the sequence (d), (e), (f), (a), and then (b) to execute, and then (c) might execute (successfully) only at that point. This would imply that (c) writes its own success value to {\tt 0(s1)}! To rule out this bizarre behavior, PPO rule~\ref{ppo:success} says that store-conditional instructions may not return success or failure into the destination register until both the address and data for the instruction have been resolved. In the example above, this would enforce an ordering from (a) to (d), and this would in turn form a cycle that rules out the strange proposed execution. \subsection{Same-Address Load-Load Ordering (Rule~\ref{ppo:rdw})} \label{sec:ppo:rdw} \begin{tabular}{p{1cm}|p{12cm}} & Rule \ref{ppo:rdw}: \ppordw \\ \end{tabular} In contrast to same-address orderings ending in a store, same-address load-load ordering requirements are very subtle. The basic requirement is that a younger load must not return a value which is older than a value returned by an older load in the same hart to the same address. 
This is often known as ``CoRR'' (Coherence for Read-Read pairs), or as part of a broader ``coherence'' or ``sequential consistency per location'' requirement. Some architectures in the past have relaxed same-address load-load ordering, but in hindsight this is generally considered to complicate the programming model too much, and so RVWMO requires CoRR ordering to be enforced. However, because the global memory order corresponds to the order in which loads perform rather than the ordering of the values being returned, capturing CoRR requirements in terms of the global memory order requires a bit of indirection.

\begin{figure}[h!]
  \center
  {
    \tt\small
    \begin{tabular}{cl||cl}
      \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\
      \hline
          & li t1, 1    &     & li~ t2, 2 \\
      (a) & sw t1,0(s0) & (d) & lw~ a0,0(s1) \\
      (b) & fence w, w  & (e) & sw~ t2,0(s1) \\
      (c) & sw t1,0(s1) & (f) & lw~ a1,0(s1) \\
          &             & (g) & xor t3,a1,a1 \\
          &             & (h) & add s0,s0,t3 \\
          &             & (i) & lw~ a2,0(s0) \\
    \end{tabular}
  }
  ~~~~
  \diagram
  \caption{Litmus test MP+FENCE+fri-rfi-addr}
  \label{fig:litmus:frirfi}
\end{figure}

Consider the litmus test of Figure~\ref{fig:litmus:frirfi}, which is one particular instance of the more general ``fri-rfi'' pattern. The term ``fri-rfi'' refers to the sequence (d),(e),(f): (d) ``from-reads'' (i.e., reads from an earlier write than) (e), which is in the same hart, and (f) reads from (e), which is in the same hart.

From a microarchitectural perspective, outcome {\tt a0=1, a1=2, a2=0} is legal (as are various other less subtle outcomes).
Intuitively, the following would produce the outcome in question:
\begin{itemize}
\item (a), (b), (c) execute
\item (d) stalls (for whatever reason; perhaps it's stalled waiting for some other preceding instruction)
\item (e) executes and enters the store buffer
\item (f) forwards from (e) in the store buffer
\item (g), (h), and (i) execute
\item (d) unstalls and executes
\item (e) drains from the store buffer to memory
\end{itemize}
This corresponds to a global memory order of (f),(i),(a),(c),(d),(e), since (e) does not reach memory until after (d) has performed. Note that even though (f) performs before (d), the value returned by (f) is newer than the value returned by (d). Therefore, this execution is legal and does not violate the CoRR requirements even though (f) appears before (d) in global memory order.

Likewise, if two back-to-back loads return the values written by the same store, then they may also appear out-of-order in the global memory order without violating CoRR. Note that this is not the same as saying that the two loads return the same value, since two different stores may write the same value. Consider the litmus test of Figure~\ref{fig:litmus:rsw}:

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{cl||cl}
\multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\
\hline
& li t1, 1 & (d) & lw~ a0,0(s1) \\
(a) & sw t1,0(s0) & (e) & xor t2,a0,a0 \\
(b) & fence w, w & (f) & add s2,s2,t2 \\
(c) & sw t1,0(s1) & (g) & lw~ a1,0(s2) \\
& & (h) & lw~ a2,0(s2) \\
& & (i) & xor t3,a2,a2 \\
& & (j) & add s0,s0,t3 \\
& & (k) & lw~ a3,0(s0) \\
\end{tabular}
}
~~~~
\diagram
\caption{Litmus test RSW}
\label{fig:litmus:rsw}
\end{figure}

The outcome {\tt a0=1,a1=a2,a3=0} can be observed by allowing (g) and (h) to be reordered. This might be done speculatively, and the speculation can be justified by the microarchitecture (e.g., by snooping for cache invalidations and finding none) because replaying (h) after (g) would return the value written by the same store anyway.
Hence, assuming {\tt a1=a2}, (g) and (h) can be reordered. The global memory order corresponding to this execution would be (h),(k),(a),(c),(d),(g).

Executions of the above test in which {\tt a1} does not equal {\tt a2} do in fact require that (g) appears before (h) in the global memory order. Allowing (h) to appear before (g) in the global memory order would in fact result in a violation of CoRR, because then (h) would return an older value than that returned by (g). Therefore, PPO rule~\ref{ppo:rdw} forbids this CoRR violation from occurring. As such, PPO rule~\ref{ppo:rdw} strikes a careful balance: it enforces CoRR in all cases while remaining weak enough to permit the ``RSW'' and ``fri-rfi'' patterns that commonly appear in real microarchitectures.

\subsection{Atomics and LR/SCs (Atomicity Axiom)}

\begin{tabular}{p{1cm}|p{12cm}}
& Atomicity axiom: \atomicityaxiom
\end{tabular}

The RISC-V architecture decouples the notion of atomicity from the notion of ordering. Unlike architectures such as TSO, RISC-V atomics under RVWMO do not impose any ordering requirements by default. Ordering semantics are only guaranteed by the PPO rules that otherwise apply. This relaxed nature allows implementations to be aggressive about forwarding values even before a paired store has been committed to memory.

Roughly speaking, the atomicity rule states that there can be no store from another hart during the time the reservation is held. For AMOs, the reservation is held as the AMO is being performed. For successful {\tt lr}/{\tt sc} pairs, the reservation is held between the time the {\tt lr} is performed and the time the {\tt sc} is performed. In most cases, the atomicity rule states that there can be no store from another hart between the load and its paired store in global memory order.
There is one exception, however: if the paired load returns its value from a store $s$ still in the store buffer (which some implementations may permit), then the reservation may not need to be acquired until $s$ is ready to leave the store buffer, and this may occur after the paired load has already performed. Therefore, in this case, the requirement is only that no other store from another hart to an overlapping address can appear between the time that $s$ performs and the time that the paired store performs. Consider the example of Figure~\ref{fig:litmus:lateatomic}:

\begin{figure}[h!]
\centering
{
\setlength{\tabcolsep}{2mm}
\tt\footnotesize
\begin{tabular}{cl||cl}
\multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\
\hline
& li~~~~~~ t1, 2 & & li~~~~~~~~ t3, 2 \\
& li~~~~~~ t2, 1 & & li~~~~~~~~ t4, 1 \\
(a) & sd~~~~~~ t1,0(s0) & (d) & sd~~~~~~~~ t3,0(s1) \\
(b) & amoor.aq a0,t2,0(s0) & (e) & amoswap.rl x0,t4,0(s0) \\
(c) & sd~~~~~~ t2,0(s1) & & \\
\end{tabular}
}
~~
\diagram
\caption{A litmus test where the reservation for {\tt 0(s0)} may not be acquired until after the load of (b) has already completed}
\label{fig:litmus:lateatomic}
\end{figure}

The outcome {\tt 0(s0)=3, 0(s1)=2} is legal, with the global memory order of (b0),(c),(d),(e),(a),(b1), where (b0) and (b1) represent the load and store parts, respectively, of (b). The atomic operation (b) does not need to grab the reservation until (a) is ready to leave the store buffer. Therefore, although (e) is a store to the same address from another hart, and even though (e) lies between (b0) and (b1) in global memory order, this execution does not violate the atomicity axiom because (e) comes after (a) in global memory order.

\begin{verbbox}
(a) lr t0, 0(a0)
(b) sd t1, 0(a0)
(c) sc t2, 0(a0)
\end{verbbox}
\begin{figure}[h!]
\centering\small \theverbbox \caption{Store-conditional (c) may succeed on some implementations} \label{fig:litmus:lrsdsc} \end{figure} The atomicity rule does not forbid loads from being interleaved between the paired operations in program order or in the global memory order, nor does it forbid stores from the same hart from appearing between the paired operations in either program order or in the global memory order. For example, the sequence in Figure~\ref{fig:litmus:lrsdsc} is legal, and the {\tt sc} may (but is not guaranteed to) succeed. By preserved program order rule \ref{ppo:->st}, the program order of the three operations must be maintained in the global memory order. This does not violate the atomicity axiom, because the intervening non-conditional store is from the same hart as the paired load-reserved and store-conditional instructions. \begin{figure}[h!] \centering { \setlength{\tabcolsep}{1mm} \tt\footnotesize \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & li t0, 1 & & \\ (a) & amoor.aq a0,t0,0(s0) & (c) & amoadd.aq a1,x0,0(s1) \\ (b) & sd~~~~~~ a0,0(s1) & (d) & ld~~~~~~~ a2,0(s0) \\ \end{tabular} } ~~ \diagram \caption{The {\tt .aq} applies only to the load part of (a), and hence it does not order the store part of (a) before (b)} \label{fig:litmus:amoaq} \end{figure} Likewise, in the test of Figure~\ref{fig:litmus:amoaq}, the following global memory order could result in the outcome {\tt a1=1, a2=0}: (a0), (b), (c), (d), (a1). Overall, the atomicity rule ensures that non-synchronization atomic operations (e.g., incrementing a counter) can be made as efficient as possible in high-performance implementations, while simultaneously ensuring that the atomicity conditions necessary for achieving consensus are maintained. 
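The reservation-window reasoning above can be cross-checked mechanically. The following Python sketch is our own illustrative abstraction, not part of the normative model: the event encoding and the simplified `atomicity_ok` check are assumptions made purely for exposition. It accepts the global memory order of Figure~\ref{fig:litmus:lateatomic} only once the store-buffer exception is taken into account.

```python
# Simplified, illustrative check of the atomicity axiom for the
# "lateatomic" litmus test. Each event maps to (hart, kind, addr);
# (b0)/(b1) are the load and store halves of the AMO (b).
ev = {
    "a":  (0, "st", "s0"),   # sd t1,0(s0)
    "b0": (0, "ld", "s0"),   # load half of amoor.aq
    "b1": (0, "st", "s0"),   # store half of amoor.aq
    "c":  (0, "st", "s1"),   # sd t2,0(s1)
    "d":  (1, "st", "s1"),   # sd t3,0(s1)
    "e":  (1, "st", "s0"),   # amoswap.rl x0,t4,0(s0)
}

gmo = ["b0", "c", "d", "e", "a", "b1"]   # candidate global memory order

def atomicity_ok(gmo, paired_load, paired_store, forwarded_from=None):
    """No store from another hart to the same address may appear between
    the start of the reservation window and the paired store. If the
    paired load forwarded from a store still in the store buffer, the
    window only opens once that store (forwarded_from) performs."""
    start = forwarded_from or paired_load
    lo, hi = gmo.index(start), gmo.index(paired_store)
    hart, _, addr = ev[paired_load]
    return not any(
        ev[x][1] == "st" and ev[x][0] != hart and ev[x][2] == addr
        for x in gmo[lo + 1:hi]
    )

# (b0) forwards from (a), so the window is (a)..(b1): (e) is harmless.
print(atomicity_ok(gmo, "b0", "b1", forwarded_from="a"))   # True
# Without the store-buffer exception, (e) would fall inside (b0)..(b1).
print(atomicity_ok(gmo, "b0", "b1"))                       # False
```

Under this toy encoding, the same candidate order is rejected without the store-buffer exception and accepted with it, mirroring the argument in the text.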
\begin{comment} %\subsection{Atomics and Store Forwarding (Rules \ref{ppo:rmwrfi}--\ref{ppo:rfiaq})} \subsection{Atomics and Store Forwarding (Rule \ref{ppo:rfiaq})} \begin{tabular}{p{1cm}|p{12cm}} % & Rule \ref{ppo:rmwrfi}: \ppormwrfi \\ & Rule \ref{ppo:rfiaq}: \pporfiaq \\ \end{tabular} There is one exception to the rule that ``fri-rfi'' reordering from Chapter~\ref{sec:ppo:rdw} is permitted: sequences in which the first memory access is part of an AMO or {\tt lr}/{\tt sc} pair and the second is a load with its {\tt .aq} bit set. Rule~\ref{ppo:rfiaq} ensures compatibility with causality chains of the form ({\tt rf; ppo; rf; ppo;} $\dots$), and with C/C++ release sequences in particular. Consider the following variant of test MP+FENCE+fri-rfi-addr (with labels reused from the earlier example): \begin{center} \tt\footnotesize \begin{tabular}{cl||cl} \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & li t1, 1 & & li t2, 2 \\ (a) & sw t1,0(s0) & & li t3, 1 \\ (b) & fence w, w & (de) & amoor t3,a0,0(s1) \\ (c) & sw t1,0(s1) & (f) & lw.aq a1,0(s1) \\ & & (i) & lw a2,0(s0) \\ \end{tabular} ~~~~ \begin{tabular}{cl||cl} \multicolumn{4}{c}{\rm Abstracted assembly} \\ \\ \multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\ \hline & & & \\ (a) & St [x], 1 & & \\ (b) & Fence w,w & (de) & AMOOr a0, 1, [y] \\ (c) & St [y], 1 & (f) & Ld.aq a1, [y] \\ & & (i) & Ld a2, [x] \\ \end{tabular} \end{center} Programmers (and C/C++) expect the causality chain from (a) to (c) to (de) to (f) to (i) to be enforced. However, the PPO rules covered so far only enforce global memory ordering from (a) to (c) to (d) (the load of the AMOOr) to (e) (the store of the AMOOr), and from (f) to (i), but not from (e) to (f). \ref{ppo:rfiaq} fills this missing link by ensuring that the ordering from (e) to (f) is respected, and hence that the entire ordering chain from (a) to (i) is respected. 
\end{comment}

\subsection{Pipeline Dependency Artifacts (Rules~\ref{ppo:ld->st->ld}--\ref{ppo:addrpocfence})}
\label{sec:ppopipeline}

\begin{tabular}{p{1cm}|p{12cm}}
& Rule \ref{ppo:ld->st->ld}: \ppoldstld \\
& Rule \ref{ppo:addrpo}: \ppoaddrpo \\
& Rule \ref{ppo:ctrlcfence}: \ppoctrlcfence \\
& Rule \ref{ppo:addrpocfence}: \ppoaddrpocfence \\
\end{tabular}

These four ``compound dependency'' rules reflect behaviors of almost all real processor pipeline implementations, and they are added into the model explicitly to simplify the definition of the formal operational memory model and to improve compatibility with known patterns on other architectures.

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{cl}
(a) & lw a0, 0(s0) \\
(b) & sw a0, 0(s1) \\
(c) & lw a1, 0(s1) \\
\end{tabular}
}
~~~~
\diagram
\caption{Because of the data dependency from (a) to (b), (a) is also ordered before (c)}
\label{fig:litmus:addrdatarfi}
\end{figure}

Rule~\ref{ppo:ld->st->ld} states that a load cannot forward from a store until the address and data for that store are known. Consider Figure~\ref{fig:litmus:addrdatarfi}: (c) cannot be executed until the data for (b) has been resolved, because (c) must return the value written by (b) (or by something even later in the global memory order). Therefore, (c) will never execute before (a) has executed.

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{cl}
& li t1, 1 \\
(a) & lw a0, 0(s0) \\
(b) & sw a0, 0(s1) \\
& sw t1, 0(s1) \\
(c) & lw a1, 0(s1) \\
\end{tabular}
}
~~~~
\diagram
\caption{Because of the extra store between (b) and (c), (a) is no longer necessarily ordered before (c)}
\label{fig:litmus:addrdatarfi_no}
\end{figure}

If there were another store to the same address in between (b) and (c), as in Figure~\ref{fig:litmus:addrdatarfi_no}, then (c) would no longer depend on the data of (b) being resolved, and hence the dependency of (c) on (a), which produces the data for (b), would be broken.
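The forwarding argument behind Rule~\ref{ppo:ld->st->ld} can be sketched in a few lines of Python. This is purely our own illustrative abstraction of a store buffer (the data structures and helper function are assumptions, not part of the model): a load forwards from the youngest program-order-earlier store to the same address, so it inherits an ordering dependency on whatever produced that store's data.

```python
# Illustrative abstraction (our own, not normative): a load forwards
# from the youngest earlier store to the same address, and cannot
# execute until that store's address and data are resolved.

def forwarding_store(stores, load_addr):
    """Return the youngest program-order-earlier store to load_addr,
    i.e. the store the load must wait on before it can forward."""
    for st in reversed(stores):
        if st["addr"] == load_addr:
            return st
    return None

# Figure addrdatarfi: (b) sw a0,0(s1), whose data comes from load (a),
# so load (c) transitively waits on (a).
stores = [{"label": "b", "addr": "s1", "data_from": "a"}]
print(forwarding_store(stores, "s1")["data_from"])   # a

# Figure addrdatarfi_no: an extra store to 0(s1) between (b) and (c)
# becomes the forwarding source, breaking the link back to (a).
stores.append({"label": "b2", "addr": "s1", "data_from": None})
print(forwarding_store(stores, "s1")["data_from"])   # None
```

The two cases reproduce the contrast between Figures~\ref{fig:litmus:addrdatarfi} and \ref{fig:litmus:addrdatarfi_no}: the ordering on (a) exists only while (b) is the forwarding source.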
One subtle related note is that {\tt amoswap} does not contain a data dependency from its load to its store. Nor does every {\tt sc} have a data dependency on its paired {\tt lr}. Therefore, Rule~\ref{ppo:ld->st->ld} does not enforce an ordering from paired loads of this category to subsequent loads to overlapping addresses.
%Rule~\ref{ppo:loadtoacq} is therefore not quite redundant with Rule~\ref{ppo:ld->st->ld}, even when the first load and the store in this example are paired.

Rule~\ref{ppo:addrpo} makes a similar observation to the previous rule: a store cannot be performed at memory until all previous loads which might access the same address have themselves been performed. Such a load must appear to execute before the store, but it cannot do so if the store were to overwrite the value in memory before the load had a chance to read the old value.

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{cl}
& li t1, 1 \\
(a) & lw a0, 0(s0) \\
(b) & lw a1, 0(a0) \\
(c) & sw t1, 0(s1) \\
\end{tabular}
}
~~~~
\diagram
\caption{Because of the address dependency from (a) to (b), (a) is also ordered before (c)}
\label{fig:litmus:addrpo}
\end{figure}

Consider Figure~\ref{fig:litmus:addrpo}: (c) cannot be executed until the address for (b) is resolved, because it may turn out that the addresses match; i.e., that {\tt a0=s1}. Therefore, (c) cannot be sent to memory before (a) has executed and confirmed whether the addresses do indeed overlap.

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{rl}
(a) & ld a0, 0(s0) \\
& xor a1,a0,a0 \\
& bne a1, x0, critical \\
critical: & fence.i \\
(c) & ld a1, 0(s1) \\
\end{tabular}
}
~~~~
\diagram
\caption{Because of the control dependency from (a) to (c), (a) is also ordered before (c)}
\label{fig:litmus:ctrlcfence}
\end{figure}

Rule~\ref{ppo:ctrlcfence} reflects the idiom of Figure~\ref{fig:litmus:ctrlcfence} for a lightweight acquire fence: In this code snippet, (c) cannot execute until the {\tt fence.i} is cleared.
The {\tt fence.i} cannot clear until the branch has executed and drained. The branch cannot execute until it receives the value from (a) through the {\tt xor}. Therefore, (a) must be ordered before (c) in the global memory order.

\begin{figure}[h!]
\centering
{
\tt\small
\begin{tabular}{cl}
(a) & ld a0, 0(s0) \\
(b) & ld a1, 0(a0) \\
& fence.i \\
(c) & ld a1, 0(s1) \\
\end{tabular}
}
~~~~
\diagram
\caption{Because of the address dependency from (a) to (b) and the {\tt fence.i} between (b) and (c), (a) is also ordered before (c)}
\label{fig:litmus:addrpocfence}
\end{figure}

Rule~\ref{ppo:addrpocfence} and Figure~\ref{fig:litmus:addrpocfence} present a similar situation: Once again, (c) cannot execute until the {\tt fence.i} is cleared. The {\tt fence.i} cannot clear until both (a) and (b) have at least issued (even if they have not yet returned a value). Finally, (b) cannot issue until it receives its address from (a). Therefore, (a) must be ordered before (c).

\section{FENCE.I, SFENCE.VMA, and I/O Fences}

In this section, we provide an informal description of how the {\tt fence.i}, {\tt sfence.vma}, and I/O fences interact with the memory model. Instruction fetches and address translation operations (where applicable) follow the RISC-V memory model as well as the rules below.

\begin{itemize}
\item {\tt fence.i}: Conceptually, {\tt fence.i} ensures that no instructions following the {\tt fence.i} are issued until all instructions prior to the {\tt fence.i} have executed (but not necessarily performed globally). This implies that the fetch of each instruction following the {\tt fence.i} in program order appears later in the global memory order than all stores prior to the {\tt fence.i} in program order. That in turn means that instruction caches which hardware does not keep coherent with normal memory must be flushed when a {\tt fence.i} instruction is executed. ({\tt fence.i} is also used to form the patterns of Chapter~\ref{sec:ppopipeline}.)
\item {\tt sfence.vma}: Conceptually, the instruction fetch and address translation operations of each instruction following the {\tt sfence.vma} in program order appear later in the global memory order than all stores prior to the {\tt sfence.vma} in program order. This implies that stale entries in the local hart's TLBs must be invalidated.
\item Conceptually, updates to the page table made by a hardware page table walker form a paired atomic read-modify-write operation subject to the rules of the atomicity axiom.
\end{itemize}

\subsection{Coherence and Cacheability}

The RISC-V ISA defines Physical Memory Attributes (PMAs) which specify, among other things, whether portions of the address space are coherent and/or cacheable. See the privileged spec for the complete details. Here, we simply discuss how the various details in each PMA relate to the memory model:

\begin{itemize}
\item Main memory vs.\@ I/O, and I/O memory ordering PMAs: the memory model as defined applies to main memory regions. I/O ordering is discussed below.
\item Supported access types and atomicity PMAs: the memory model is simply applied on top of whatever primitives each region supports.
\item Coherence and cacheability PMAs: neither the coherence nor the cacheability PMAs affect the memory model. The RISC-V privileged specification suggests that hardware-incoherent regions of main memory are discouraged, but the memory model is compatible with hardware coherence, software coherence, implicit coherence due to read-only memory, implicit coherence due to only one agent having access, or otherwise. Likewise, non-cacheable regions may have more restrictive behavior than cacheable regions, but the set of allowed behaviors does not change regardless.
\item Idempotency PMAs: Idempotency PMAs are used to specify memory regions for which loads and/or stores may have side effects, and this in turn is used by the microarchitecture to determine, e.g., whether prefetches are legal.
This distinction does not affect the memory model.
\end{itemize}

\subsection{I/O Ordering}

For I/O, the load value axiom and atomicity axiom in general do not apply, as both reads and writes might have device-specific side effects. The preserved program order rules do not generally apply to I/O either. Instead, we informally say that memory access $a$ is ordered before memory access $b$ if $a$ precedes $b$ in program order and one or more of the following holds:
\begin{enumerate}
\item $a$ and $b$ are accesses to overlapping addresses in an I/O region
\item $a$ and $b$ are accesses to the same strongly-ordered I/O region
\item $a$ and $b$ are accesses to I/O regions, and the channel associated with the I/O region accessed by either $a$ or $b$ is channel 1
\item $a$ and $b$ are accesses to I/O regions associated with the same channel (except for channel 0)
\item $a$ and $b$ are separated in program order by a FENCE, $a$ is in the predecessor set of the FENCE, and $b$ is in the successor set of the FENCE. The predecessor and successor sets include the sets described by all eight FENCE bits {\tt .pr}, {\tt .pw}, {\tt .pi}, {\tt .po}, {\tt .sr}, {\tt .sw}, {\tt .si}, and {\tt .so}.
\item $a$ and $b$ are accesses to I/O regions, and $a$ has {\tt .aq} set
\item $a$ and $b$ are accesses to I/O regions, and $b$ has {\tt .rl} set
\item $a$ and $b$ are accesses to I/O regions, and $a$ and $b$ both have {\tt .aq} and {\tt .rl} set
\item $a$ and $b$ are accesses to I/O regions, and $a$ is an AMO that has {\tt .aq} and {\tt .rl} set
\item $a$ and $b$ are accesses to I/O regions, and $b$ is an AMO that has {\tt .aq} and {\tt .rl} set
\end{enumerate}

As described above, accesses to I/O memory require stronger synchronization than what is enforced by the RVWMO PPO rules. For such cases, {\tt FENCE} operations with {\tt .pi}, {\tt .po}, {\tt .si}, and/or {\tt .so} are needed.
For example, to enforce ordering between a write to normal memory and an MMIO write to a device register, a {\tt FENCE w,o} or stronger is needed. Even {\tt .aq} and {\tt .rl} do not enforce ordering between normal memory accesses and accesses to I/O memory. When a fence is in fact used, implementations must assume that the device may attempt to access memory immediately after receiving the MMIO signal, and subsequent memory accesses from that device to memory must observe the effects of all accesses ordered prior to that MMIO operation. \begin{verbbox} sd t0, 0(a0) fence w,o sd a0, 0(a1) \end{verbbox} \begin{figure}[h!] \centering\small \theverbbox \caption{Ordering memory and I/O accesses} \label{fig:litmus:wo} \end{figure} In other words, in Figure~\ref{fig:litmus:wo}, suppose {\tt 0(a0)} is in normal memory and {\tt 0(a1)} is the address of a device register in I/O memory. If the device accesses {\tt 0(a0)} upon receiving the MMIO write, then that load must conceptually appear after the first store to {\tt 0(a0)} according to the rules of the RVWMO memory model. In some implementations, the only way to ensure this will be to require that the first store does in fact complete before the MMIO write is issued. Other implementations may find ways to be more aggressive, while others still may not need to do anything different at all for I/O and normal memory accesses. Nevertheless, the RVWMO memory model does not distinguish between these options; it simply provides an implementation-agnostic mechanism to specify the orderings that must be enforced. Many architectures include separate notions of ``ordering'' and ``completion'' fences, especially as it relates to I/O (as opposed to normal memory). Ordering fences simply ensure that memory operations stay in order, while completion fences ensure that predecessor accesses have all completed before any successors are made visible. RISC-V does not explicitly distinguish between ordering and completion fences. 
Instead, this distinction is simply inferred from different uses of the FENCE bits. For implementations that conform to the RISC-V Unix Platform Specification, I/O devices, DMA operations, etc.\@ are required to access memory coherently and via strongly-ordered I/O channels. Therefore, accesses to normal memory regions that are shared with I/O devices can also use the standard synchronization mechanisms. Implementations which do not conform to the Unix Platform Specification and/or in which devices do not access memory coherently will need to use platform-specific mechanisms (such as cache flushes) to enforce coherency. I/O regions in the address space should be considered non-cacheable regions in the PMAs for those regions. Such regions can be considered coherent by the PMA if they are not cached by any agent. The ordering guarantees in this section may not apply beyond a platform-specific boundary between the RISC-V cores and the device. In particular, I/O accesses sent across an external bus (e.g., PCIe) may be reordered before they reach their ultimate destination. Ordering must be enforced in such situations according to the platform-specific rules of those external devices and buses. \section{Code Examples} \label{sec:mmcode} \subsection{Compare and Swap} An example using {\tt lr}/{\tt sc} to implement a compare-and-swap function is shown in Figure~\ref{cas}. If inlined, compare-and-swap functionality need only take three instructions. \begin{figure}[h!] \begin{center} \begin{verbatim} # a0 holds address of memory location # a1 holds expected value # a2 holds desired value # a0 holds return value, 0 if successful, !0 otherwise cas: lr.w t0, (a0) # Load original value. bne t0, a1, fail # Doesn't match, so fail. sc.w a0, a2, (a0) # Try to update. jr ra # Return. fail: li a0, 1 # Set return to failure. jr ra # Return. 
\end{verbatim} \end{center} \caption{Sample code for compare-and-swap function using {\tt lr}/{\tt sc}.} \label{cas} \end{figure} \subsection{Spinlocks} \label{sec:spinlock} An example code sequence for a critical section guarded by a test-and-set spinlock is shown in Figure~\ref{critical}. Note the first AMO is marked {\tt .aq} to order the lock acquisition before the critical section, and the second AMO is marked {\tt .rl} to order the critical section before the lock relinquishment. \begin{figure}[h!] \begin{center} \begin{verbatim} li t0, 1 # Initialize swap value. again: amoswap.w.aq t0, t0, (a0) # Attempt to acquire lock. bnez t0, again # Retry if held. # ... # Critical section. # ... amoswap.w.rl x0, x0, (a0) # Release lock by storing 0. \end{verbatim} \end{center} \caption{Sample code for mutual exclusion. {\tt a0} contains the address of the lock.} \label{critical} \end{figure} \section{Code Porting Guidelines} \label{sec:porting} Normal x86 loads and stores are all inherently acquire and release operations: TSO enforces all load-load, load-store, and store-store ordering by default. All TSO loads must be mapped onto {\tt l\{b|h|w|d\}; fence r,rw}, and all TSO stores must either be mapped onto {\tt amoswap.rl x0} or onto {\tt fence rw,w; s\{b|h|w|d\}}. Alternatively, TSO loads and stores can be mapped onto {\tt l\{b|h|w|d\}.aq} and {\tt s\{b|h|w|d\}.rl} assembler pseudoinstructions to facilitate forwards compatibility in case such instructions are added to the ISA one day. However, in the meantime, the assembler will generate the same fence-based and/or {\tt amoswap}-based versions for these pseudoinstructions. %However, the more correct solution to porting code from x86-TSO (which is generally overly-constrained at the assembly level compared to DRF software requirements) is to rewrite the algorithm to determine which orderings the original algorithm actually required, and then to re-code the algorithm in terms of the RVWMO memory model. 
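The TSO load and store mappings just described can be written down as a small lookup helper. The following Python sketch is illustrative only; the function names and the textual instruction templates (register placeholders {\tt rd}, {\tt rs2}, {\tt off(rs)}) are our own assumptions, and it simply emits the fence-based sequences given above.

```python
# Illustrative helper (our own, not a normative tool) emitting the
# RVWMO sequences described in the text for porting x86-TSO accesses.
# 'width' is one of the b/h/w/d size suffixes.

def port_tso_load(width):
    """TSO loads carry implicit acquire ordering: load, then fence r,rw."""
    return ["l{} rd, off(rs)".format(width), "fence r,rw"]

def port_tso_store(width, use_amo=False):
    """TSO stores carry implicit release ordering: either an amoswap.rl
    writing to x0, or fence rw,w followed by the plain store."""
    if use_amo:
        return ["amoswap.rl x0, rs2, (rs1)"]
    return ["fence rw,w", "s{} rs2, off(rs)".format(width)]

print(port_tso_load("w"))    # ['lw rd, off(rs)', 'fence r,rw']
print(port_tso_store("d"))   # ['fence rw,w', 'sd rs2, off(rs)']
```

Such a helper is, in effect, what the assembler pseudoinstruction expansion mentioned above would do until dedicated {\tt .aq}/{\tt .rl} load and store encodings exist.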
x86 atomics using the LOCK prefix are all sequentially consistent and when ported naively to RISC-V must be marked as {\tt .aqrl}.

A Power {\tt sync}/{\tt hwsync} fence, an ARM {\tt dmb} fence, and an x86 {\tt mfence} are all equivalent to a RISC-V {\tt fence rw,rw}. Power {\tt isync} and ARM {\tt isb} map to RISC-V {\tt fence.i}. A Power {\tt lwsync} maps onto {\tt fence.tso}, or onto {\tt fence rw,rw} when {\tt fence.tso} is not available. ARM {\tt dmb ld} and {\tt dmb st} fences map to RISC-V {\tt fence r,rw} and {\tt fence w,w}, respectively.

A direct mapping of ARMv8 atomics that maps unordered instructions to unordered instructions, RCpc instructions to RCpc instructions, and RCsc instructions to RCsc instructions is likely to work in the majority of cases. Mapping even unordered load-reserved instructions onto {\tt lr.aq} (particularly for LR/SC pairs without internal data dependencies) is an even safer bet, as this ensures C/C++ release sequences will be respected. However, due to a subtle mismatch between the two models, strict theoretical compatibility with the ARMv8 memory model requires that a naive mapping translate all ARMv8 store-conditional and load-acquire operations into RISC-V RCsc operations. Any atomics which are naively ported into RCsc operations may revert back to the straightforward mapping if the programmer can verify that the code is not relying on an ordering from the store-conditional to the load-acquire (as this is not common).
%ARMv8 solves the C/C++ release sequence problem of Chapter~\ref{sec:acqrel} through a rule that is different from rule~\ref{ppo:loadtoacq}.
%Therefore, strict formal compatibility requires naively-ported ARMv8 load-acquire operations to be preceded by a {\tt fence w,r,[addr]} or stronger.
%The naive translations of ARM {\tt ldapr}, {\tt ldar}, and {\tt stlr} are therefore ``{\tt fence w,r,[addr]; amoor.aq rd,[addr],x0}'', ``{\tt fence w,r,[addr]; amoor.aq.rl rd,[addr],x0}'' and ``{\tt amoswap.aq.rl} with {\tt rd=x0}'', respectively.
%In general the extra fence would not have been necessary if the original source were recompiled to RISC-V natively, because the RVWMO memory model already solves the same underlying problem just in a different way.
%Naive ports may choose whether to stick to a strict naive port or to assume that the (cheaper) mapping without the fence is more than likely sufficient.

The Linux fences {\tt smp\_mb()}, {\tt smp\_wmb()}, and {\tt smp\_rmb()} map onto {\tt fence rw,rw}, {\tt fence w,w}, and {\tt fence r,r}, respectively. The fence {\tt smp\_read\_barrier\_depends()} maps to a no-op due to preserved program order rules \ref{ppo:addr}--\ref{ppo:ctrl}. The Linux fences {\tt dma\_rmb()} and {\tt dma\_wmb()} map onto {\tt fence r,r} and {\tt fence w,w}, respectively, since the RISC-V Unix Platform requires coherent DMA. The Linux fences {\tt rmb()}, {\tt wmb()}, and {\tt mb()} map onto {\tt fence ri,ri}, {\tt fence wo,wo}, and {\tt fence rwio,rwio}, respectively.

%\begin{table}[h!]
% \begin{tabular}{|l|l|l|} % \hline % C/C++ Construct & Base ISA Mapping & `A' Extension Mapping \\ % \hline % \hline % Non-atomic load & \multicolumn{2}{l|}{\tt ld} \\ % \hline % \tt atomic\_load(memory\_order\_relaxed) & \multicolumn{2}{l|}{\tt ld} \\ % \hline % %\tt atomic\_load(memory\_order\_consume) & \multicolumn{2}{l|}{\tt ld; fence r,rw} \\ % %\hline % \tt atomic\_load(memory\_order\_acquire) & \tt fence r,r,[addr]; & \tt ld.aq \\ % & \tt ld; fence r,rw & \\ % \hline % \tt atomic\_load(memory\_order\_seq\_cst) & \tt fence rw,rw; ld; & \tt ld.aq.rl rs2=x0 \\ % & \tt fence r,rw & \\ % \hline % \hline % Non-atomic store & \multicolumn{2}{l|}{\tt sd} \\ % \hline % \tt atomic\_store(memory\_order\_relaxed) & \multicolumn{2}{l|}{\tt sd} \\ % \hline % \tt atomic\_store(memory\_order\_release) & \tt fence rw,w; sd & \tt sd.rl x0 \\ % \hline % \tt atomic\_store(memory\_order\_seq\_cst) & \tt fence rw,rw; sd & \tt sd.aq.rl x0 \\ % \hline % \hline % \tt atomic\_thread\_fence(memory\_order\_acquire) & \multicolumn{2}{l|}{\tt fence r,rw} \\ % \hline % \tt atomic\_thread\_fence(memory\_order\_release) & \multicolumn{2}{l|}{\tt fence rw,w} \\ % \hline % \tt atomic\_thread\_fence(memory\_order\_acq\_rel) & \multicolumn{2}{l|}{{\tt fence rw,rw~} or {~\tt fence rw,w; fence r,rw}} \\ % \hline % \tt atomic\_thread\_fence(memory\_order\_seq\_cst) & \multicolumn{2}{l|}{\tt fence rw,rw} \\ % \hline % \end{tabular} % \caption{Mappings from C/C++ primitives to RISC-V primitives. The atomics mapping is preferable where available.} % \label{tab:mappings} %\end{table} \begin{table}[h!] 
\begin{tabular}{|l|l|} \hline C/C++ Construct & RVWMO Mapping \\ \hline \hline Non-atomic load & \tt l\{b|h|w|d\} \\ \hline \tt atomic\_load(memory\_order\_relaxed) & \tt l\{b|h|w|d\} \\ \hline %\tt atomic\_load(memory\_order\_consume) & \multicolumn{2}{l|}{\tt l\{b|h|w|d\}; fence r,rw} \\ %\hline \tt atomic\_load(memory\_order\_acquire) & \tt l\{b|h|w|d\}; fence r,rw \\ \hline \tt atomic\_load(memory\_order\_seq\_cst) & \tt fence rw,rw; l\{b|h|w|d\}; fence r,rw \\ \hline \hline Non-atomic store & \tt s\{b|h|w|d\} \\ \hline \tt atomic\_store(memory\_order\_relaxed) & \tt s\{b|h|w|d\} \\ \hline \tt atomic\_store(memory\_order\_release) & \tt fence rw,w; s\{b|h|w|d\} \\ \hline \tt atomic\_store(memory\_order\_seq\_cst) & \tt fence rw,rw; s\{b|h|w|d\} \\ \hline \hline \tt atomic\_thread\_fence(memory\_order\_acquire) & \tt fence r,rw \\ \hline \tt atomic\_thread\_fence(memory\_order\_release) & \tt fence rw,w \\ \hline \tt atomic\_thread\_fence(memory\_order\_acq\_rel) & {\tt fence.tso} \\ \hline \tt atomic\_thread\_fence(memory\_order\_seq\_cst) & \tt fence rw,rw \\ \hline \end{tabular} \caption{Mappings from C/C++ primitives to RISC-V primitives.} % The atomics mapping is preferable where available.} \label{tab:mappings} \end{table} The C11/C++11 {\tt memory\_order\_*} primitives should be mapped as shown in Table~\ref{tab:mappings}. The {\tt memory\_order\_acquire} orderings in particular must use fences rather than atomics to ensure that release sequences behave correctly even in the presence of {\tt amoswap}. The {\tt memory\_order\_release} mappings may use {\tt .rl} as an alternative. \begin{table}[h!] 
\centering
\begin{tabular}{|l|l|}
\hline
Ordering Annotation & Fence-based Equivalent \\
\hline
\tt l\{b|h|w|d|r\}.aq & \tt l\{b|h|w|d|r\}; fence r,rw \\
\hline
\tt l\{b|h|w|d|r\}.aqrl & \tt fence rw,rw; l\{b|h|w|d|r\}; fence r,rw \\
\hline
\tt s\{b|h|w|d|c\}.rl & \tt fence rw,w; s\{b|h|w|d|c\} \\
\hline
\tt s\{b|h|w|d|c\}.aqrl & \tt fence rw,w; s\{b|h|w|d|c\} \\
\hline
\tt amo<op>.aq & \tt amo<op>; fence r,rw \\
\hline
\tt amo<op>.rl & \tt fence rw,w; amo<op> \\
\hline
\tt amo<op>.aqrl & \tt fence rw,rw; amo<op>; fence rw,rw \\
\hline
\end{tabular}
\caption{Mappings from {\tt .aq} and/or {\tt .rl} to fence-based equivalents. An alternative mapping places a {\tt fence rw,rw} after the existing {\tt s\{b|h|w|d|c\}} mapping rather than at the front of the {\tt l\{b|h|w|d|r\}} mapping.}
\label{tab:aqrltofence}
\end{table}

It is also safe to translate any {\tt .aq}, {\tt .rl}, or {\tt .aqrl} annotation into the fence-based snippets of Table~\ref{tab:aqrltofence}. These can also be used as a legal implementation of the {\tt l\{b|h|w|d\}.aq} or {\tt s\{b|h|w|d\}.rl} pseudoinstructions for as long as those instructions are not added to the ISA.

\section{Implementation Guidelines}

The RVWMO and RVTSO memory models by no means preclude microarchitectures from employing sophisticated speculation techniques or other forms of optimization in order to deliver higher performance. The models also do not impose any requirement to use any one particular cache hierarchy, nor even to use a cache coherence protocol at all. Instead, these models only specify the behaviors that can be exposed to software. Microarchitectures are free to use any pipeline design, any coherent or non-coherent cache hierarchy, any on-chip interconnect, etc., as long as the design satisfies the memory model rules. That said, to help people understand the actual implementations of the memory model, in this section we provide some guidelines on how architects and programmers should interpret the models' rules.
Both RVWMO and RVTSO are multi-copy atomic (or ``other-multi-copy-atomic''): any store value which is visible to a hart other than the one that originally issued it must also be conceptually visible to all other harts in the system. In other words, harts may forward from their own previous stores before those stores have become globally visible to all harts, but no other early intra-hart forwarding is permitted. Multi-copy atomicity may be enforced in a number of ways. It might hold inherently due to the physical design of the caches and store buffers, it may be enforced via a single-writer/multiple-reader cache coherence protocol, or it might hold due to some other mechanism. Although multi-copy atomicity does impose some restrictions on the microarchitecture, it is one of the key properties keeping the memory model from becoming extremely complicated. For example, a hart may not legally forward a value from a neighbor hart's private store buffer, unless those two harts are the only two in the system. Nor may a cache coherence protocol forward a value from one hart to another until the coherence protocol has invalidated all older copies from other caches. Of course, microarchitectures may (and high-performance implementations likely will) violate these rules under the covers through speculation or other optimizations, as long as any non-compliant behaviors are not exposed to the programmer. As a rough guideline for interpreting the PPO rules in RVWMO, we expect the following from the software perspective: \begin{itemize} \item programmers will use PPO rules \ref{ppo:->st}--\ref{ppo:amoload} regularly and actively. \item expert programmers will use PPO rules \ref{ppo:addr}--\ref{ppo:ctrl} to speed up critical paths of important data structures. %\item expert programmers will occasionally use PPO rules \ref{ppo:rdw}--\ref{ppo:rfiaq} in very aggressive code and/or as part of a longer chain of synchronization. 
\item even expert programmers will rarely if ever use PPO rules \ref{ppo:success}--\ref{ppo:addrpocfence} directly. These are included to facilitate common microarchitectural optimizations (rule~\ref{ppo:rdw}) and the operational formal modeling approach (rules \ref{ppo:success} and \ref{ppo:ld->st->ld}--\ref{ppo:addrpocfence}) described in Chapter~\ref{sec:operational}. They also facilitate the process of porting code from other architectures which have similar rules.
\end{itemize}

We also expect the following from the hardware perspective:
\begin{itemize}
\item PPO rules \ref{ppo:->st}--\ref{ppo:release} and \ref{ppo:amostore}--\ref{ppo:amoload} reflect well-understood rules that should pose few surprises to architects.
\item PPO rule \ref{ppo:strongacqrel} may not be immediately obvious to architects, but it is somewhat standard nevertheless.
\item The load value axiom, the atomicity axiom, and PPO rules \ref{ppo:addr}--\ref{ppo:ctrl} and \ref{ppo:ld->st->ld}--\ref{ppo:addrpocfence} reflect rules that most hardware implementations will enforce naturally, unless they contain extreme optimizations. Of course, implementations should make sure to double-check these rules nevertheless. Hardware must also ensure that syntactic dependencies are not ``optimized away''.
%\item PPO rules \ref{ppo:strongacqrel} and \ref{ppo:rfiaq} may not be obvious or intuitive, and hence they deserve particular attention.
%\item PPO rules \ref{ppo:strongacqrel} and \ref{ppo:rmwrfi}--\ref{ppo:rfiaq} may not be obvious or intuitive, and hence they deserve particular attention.
\item PPO rule \ref{ppo:success} is not obvious, but it is necessary to avoid certain out-of-thin-air-like behavior that appears with store-conditional success values.
\item PPO rule \ref{ppo:rdw} reflects a natural and common hardware optimization, but one that is very subtle and hence is worth double-checking carefully.
\end{itemize}

Architectures are free to implement any of the memory model rules as conservatively as they choose. For example, a hardware implementation may choose to do any or all of the following:
\begin{itemize}
\item interpret all fences as if they were {\tt fence rw,rw} (or {\tt fence iorw,iorw}, if I/O is involved), regardless of the bits actually set
\item implement all fences with {\tt .pw} and {\tt .sr} as if they were {\tt fence~rw,rw} (or {\tt fence~iorw,iorw}, if I/O is involved), as ``{\tt w,r}'' is the most expensive of the four possible normal memory orderings anyway
\item ignore any addresses passed to a fence instruction and simply implement the fence for all addresses
\item implement an instruction with {\tt .aq} set as being succeeded immediately by {\tt fence r,rw}
\item implement an instruction with {\tt .rl} set as being preceded immediately by {\tt fence rw,w}
\item enforce all same-address load-load ordering, even in the presence of patterns such as ``fri-rfi'' and ``RSW''
\item forbid any forwarding of a value from a store in the store buffer to a subsequent AMO or {\tt lr} to the same address
\item forbid any forwarding of a value from an AMO or {\tt sc} in the store buffer to a subsequent load to the same address
\item implement TSO on all memory accesses, and ignore any normal memory fences that do not include ``{\tt w,r}'' ordering
\item implement all atomics to be RCsc; i.e., always enforce all store-release-to-load-acquire ordering
\end{itemize}
%PPO rules~\ref{ppo:ld->st->ld}--\ref{ppo:addrpocfence} are not intended to impose any ordering requirements onto a processor pipeline beyond constraints which arise naturally, but extremely-optimized pipelines should be careful not to violate these rules nevertheless (or to ensure that any speculation-based optimizations do not make illegal behaviors visible to software).
Architectures which implement RVTSO can safely do the following: \begin{itemize} \item Ignore all {\tt .aq} and {\tt .rl} bits, since these are implicitly always set under RVTSO. ({\tt .aqrl} cannot be ignored, however, due to PPO rules \ref{ppo:strongacqrel}--\ref{ppo:amoload}.) \item Ignore all fences which do not have both {\tt .pw} and {\tt .sr} (unless the fence also orders I/O) \item Ignore PPO rules \ref{ppo:->st} and \ref{ppo:addr}--\ref{ppo:addrpocfence}, since these are redundant with other PPO rules under RVTSO assumptions \end{itemize} Other general notes: \begin{itemize} \item Silent stores (i.e., stores which write the same value that already exists at a memory location) do not have any special behavior from a memory model point of view. Microarchitectures that attempt to implement silent stores must take care to ensure that the memory model is still obeyed, particularly in cases such as RSW (Chapter~\ref{sec:ppo:rdw}) which tend to be incompatible with silent stores. \item Writes may be merged (i.e., two consecutive writes to the same address may be merged) or subsumed (i.e., the earlier of two back-to-back writes to the same address may be elided) as long as the resulting behavior does not otherwise violate the memory model semantics. \end{itemize} The question of write subsumption can be understood from the following example: \begin{figure}[h!] 
\centering
{
\tt\small
\begin{tabular}{cl||cl}
\multicolumn{2}{c}{Hart 0} & \multicolumn{2}{c}{Hart 1} \\
\hline
& li t1, 3 & & li t3, 2 \\
& li t2, 1 & & \\
(a) & sw t1,0(s0) & (d) & lw a0,0(s1) \\
(b) & fence w, w & (e) & sw a0,0(s0) \\
(c) & sw t2,0(s1) & (f) & sw t3,0(s0) \\
\end{tabular}
}
~~~~
\diagram
\caption{Write subsumption litmus test}
\label{fig:litmus:subsumption}
\end{figure}

As written, (a) must precede (f) in the global memory order:
\begin{itemize}
\item (a) precedes (c) in the global memory order because of rule 2
\item (c) precedes (d) in the global memory order because of the Load Value axiom
\item (d) precedes (e) in the global memory order because of rule 7
\item (e) precedes (f) in the global memory order because of rule 1
\end{itemize}

A very aggressive microarchitecture might erroneously decide to discard (e), as (f) supersedes it, and this may in turn lead the microarchitecture to break the now-eliminated dependency between (d) and (f) (and hence also between (a) and (f)). This would violate the memory model rules, and hence it is forbidden. Write subsumption may in other cases be legal, if for example there were no data dependency between (d) and (e).

\section{Summary of New/Modified ISA Features}

At a high level, PPO rules \ref{ppo:strongacqrel}, \ref{ppo:success}, and \ref{ppo:ld->st->ld}--\ref{ppo:addrpocfence} are all new rules that did not exist in the original ISA spec. Rule~\ref{ppo:rdw} and the specifics of the atomicity axiom were addressed but not stated in detail.
Other new or modified ISA details: \begin{itemize} \item There is an RCpc ({\tt .aq} and {\tt .rl}) vs.\@ RCsc ({\tt .aqrl}) distinction \item Load-release and store-acquire are deprecated \item {\tt lr}/{\tt sc} behavior was clarified %\item Fences reserve two bits for platform-specific use \end{itemize} \subsection{Possible Future Extensions} We expect that any or all of the following possible future extensions would be compatible with the RVWMO memory model: \begin{itemize} \item `V' vector ISA extensions \item A transactional memory subset of the `T' ISA extension \item `J' JIT extension \item Native encodings for {\tt l\{b|h|w|d\}.aq}/{\tt s\{b|h|w|d\}.rl} \item Fences limited to certain addresses \item Cache writeback/flush/invalidate/etc.\@ hints, but these should be considered hints, not functional requirements. Any cache management operations which are required for basic correctness should be described as (possibly address range-limited) fences to comply with the RISC-V philosophy (see also {\tt fence.i} and {\tt sfence.vma}). For example, a functional cache writeback instruction might instead be written as ``{\tt fence~rw[addr],w[addr]}''. \end{itemize} \section{Litmus Tests} These litmus tests represent some of the better-known litmus tests in the field, plus some tests that are randomly-generated, plus some tests that are generated to be particularly relevant to the RVWMO memory model. All will be made available for download once they are generated. We expect that these tests will one day serve as part of a compliance test suite, and we expect that many architects will use them for verification purposes as well. COMING SOON! \chapter{Formal Memory Model Specifications} \begin{commentary} To facilitate formal analysis of RVWMO, we present a set of formalizations in this chapter. 
Any discrepancies are unintended; the expectation is that the models will describe exactly the same sets of legal behaviors, pending some memory model changes that have not quite been added to all of the formalizations yet. As such, these formalizations should be considered snapshots from some point in time during the development process rather than finalized specifications. At this point, no individual formalization is considered authoritative, but we may designate one as such in collaboration with the ISA specification and/or formalization task groups.
\end{commentary}

\section{Formal Axiomatic Specification in Alloy}
\label{sec:alloy}

We present two formal specifications of the RVWMO memory model in Alloy (\url{http://alloy.mit.edu}). The first corresponds directly to the natural language model earlier in this chapter.

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
////////////////////////////////////////////////////////////////////////////////
// =RISC-V RVWMO axioms=

// Preserved Program Order
fun ppo : Event->Event {
  // same-address ordering
  po_loc :> Store
  // explicit synchronization
  + ppo_fence
  + Load.aq <: ^po
  + ^po :> Store.rl
  + Store.aq.rl <: ^po :> Load.aq.rl
  + ^po :> Load.sc
  + Store.sc <: ^po
  // dependencies
  + addr
  + data
  + ctrl :> Store
  + (addr+data).successdep
  // CoRR
  + rdw & po_loc_no_intervening_write
  // pipeline dependency artifacts
  + (addr+data).rfi
  + addr.^po :> Store
  + ctrl.(FenceI <: ^po)
  + addr.^po.(FenceI <: ^po)
}

// the global memory order respects preserved program order
fact { ppo in gmo }
\end{lstlisting}}
\caption{The RVWMO memory model formalized in Alloy (1/4: PPO)}
\label{fig:alloy1}
\end{figure}

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
// Load value axiom
fun candidates[r: Load] : set Store {
  (r.~gmo & Store & same_addr[r])   // writes preceding r in gmo
  + (r.^~po & Store & same_addr[r]) // writes preceding r in po
}

fun latest_among[s: set Event] : Event { s - s.~gmo }

pred LoadValue {
  all w: Store | all r: Load |
    w->r in rf <=> w = latest_among[candidates[r]]
}

fun after_reserve_of[r: Load] : Event { latest_among[r + r.~rf].gmo }

pred Atomicity {
  all r: Store.~rmw |                   // starting from the read r of an atomic,
  no x: Store & same_addr[r + r.rmw] |  // there is no write x to the same addr
    x not in same_hart[r]               // from a different hart, such that
    and x in after_reserve_of[r]        // x follows (the write r reads from) in gmo
    and r.rmw in x.gmo                  // and r follows x in gmo
}

pred RISCV_mm { LoadValue and Atomicity }
\end{lstlisting}}
\caption{The RVWMO memory model formalized in Alloy (2/4: Axioms)}
\label{fig:alloy2}
\end{figure}

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
////////////////////////////////////////////////////////////////////////////////
// Basic model of memory

sig Hart {  // hardware thread
  start : one Event
}
sig Address {}

abstract sig Event {
  po: lone Event  // program order
}

abstract sig MemoryEvent extends Event {
  address: one Address,
  aq: lone MemoryEvent,  // opcode bit
  rl: lone MemoryEvent,  // opcode bit
  sc: lone MemoryEvent,  // for AMOs with .aq and .rl, to distinguish from lr/sc
  gmo: set MemoryEvent   // global memory order
}
sig Load extends MemoryEvent {
  addr: set Event,
  ctrl: set Event,
  data: set Store,
  successdep: set Event,
  rmw: lone Store
}
sig Store extends MemoryEvent {
  rf: set Load
}
sig Fence extends Event {
  pr: lone Fence,  // opcode bit
  pw: lone Fence,  // opcode bit
  sr: lone Fence,  // opcode bit
  sw: lone Fence   // opcode bit
}
sig FenceI extends Event {}

// FENCE PPO
fun FencePRSR : Fence { Fence.(pr & sr) }
fun FencePRSW : Fence { Fence.(pr & sw) }
fun FencePWSR : Fence { Fence.(pw & sr) }
fun FencePWSW : Fence { Fence.(pw & sw) }

fun ppo_fence : MemoryEvent->MemoryEvent {
  (Load <: ^po :> FencePRSR).(^po :> Load)
  + (Load <: ^po :> FencePRSW).(^po :> Store)
  + (Store <: ^po :> FencePWSR).(^po :> Load)
  + (Store <: ^po :> FencePWSW).(^po :> Store)
}
\end{lstlisting}}
\caption{The RVWMO memory model formalized in Alloy (3/4: model of memory)}
\label{fig:alloy3}
\end{figure}

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
// auxiliary definitions
fun po_loc_no_intervening_write : MemoryEvent->MemoryEvent {
  po_loc - ((po_loc :> Store).po_loc)
}
fun RFInit : Load { Load - Store.rf }
fun rsw : Load->Load {
  ~rf.rf + (RFInit <: address.~address :> RFInit)
}
fun rdw : Load->Load { (Load <: po_loc :> Load) - rsw }
fun po_loc : Event->Event { ^po & address.~address }
fun same_hart[e: Event] : set Event { e + e.^~po + e.^po }
fun same_addr[e: Event] : set Event { e.address.~address }

// basic facts about well-formed execution candidates
fact { acyclic[po] }
fact { all e: Event | one e.*~po.~start }  // each event is in exactly one hart
fact { rf.~rf in iden }  // each read returns the value of only one write
fact { total[gmo, MemoryEvent] }  // gmo is a total order over all MemoryEvents

//rf
fact { rf in address.~address }
fun rfi : Store->Load { Store <: po_loc_no_intervening_write :> Load }

//dep
fact { addr + ctrl + data in ^po }
fact { successdep in (Store.~rmw) <: ^po }
fact { ctrl.*po in ctrl }
fact { rmw in ^po }

////////////////////////////////////////////////////////////////////////////////
// =Opcode encoding restrictions=

// opcode bits are either set (encoded, e.g., as f.pr in iden) or unset
// (f.pr not in iden).
// The bits cannot be used for anything else
fact { pr + pw + sr + sw + aq + rl + sc in iden }
fact { sc in aq + rl }
fact { Load.sc.rmw in Store.sc and Store.sc.~rmw in Load.sc }

// Fences must have either pr or pw set, and either sr or sw set
fact { Fence in Fence.(pr + pw) & Fence.(sr + sw) }

// there is no write-acquire, but there is write-strong-acquire
fact { Store & Acquire in Release }
fact { Load & Release in Acquire }

////////////////////////////////////////////////////////////////////////////////
// =Alloy shortcuts=
pred acyclic[rel: Event->Event] { no iden & ^rel }
pred total[rel: Event->Event, bag: Event] {
  all disj e, e': bag | e->e' in rel + ~rel
  acyclic[rel]
}
\end{lstlisting}}
\caption{The RVWMO memory model formalized in Alloy (4/4: Auxiliaries)}
\label{fig:alloy4}
\end{figure}
\clearpage

The second is an equivalent formulation which is slightly more complex but which is more computationally efficient. We expect that analysis tools will be built on top of this second formulation. Also included are empirical checks that the two models match. This formulation, however, does not apply when mixed-size accesses are used, nor when {\tt lr}/{\tt sc} to different addresses are used.

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
// coherence order: a total order on the writes to each address
fun co : Write->Write { Write <: ((address.~address) & gmo) :> Write }

// from-read: from a read to the coherence successors of the rf-source of the read
fun fr : Read->Write {
  ~rf.co + ((Read - Write.rf) <: address.~address :> Write)
}

// e = external; i.e., from a different hart
fun rfe : Store->Load { rf - iden - ^po - ^~po }
fun coe : Store->Store { co - iden - ^po - ^~po }
fun fre : Load->Store { fr - iden - ^po - ^~po }

pred sc_per_location { acyclic[rf + co + fr + po_loc] }
pred atomicity { no rmw & fre.coe }
pred causality { acyclic[rfe + co + fr + ppo] }

// equality checks
run RISCV_mm_com_sanity { RISCV_mm_com } for 3
check RISCV_mm_gmo_com { RISCV_mm => RISCV_mm_com } for 6
check RISCV_mm_com_gmo {
  rmw in address.~address =>  // the rf/co/fr model assumes rmw in same addr
  RISCV_mm_com =>
  rfe + co + fr in gmo =>     // pick a gmo which matches rfe+co+fr
  RISCV_mm
} for 6
\end{lstlisting}
}
\caption{An alternative, more computationally efficient but less complete axiomatic definition}
\label{fig:com}
\end{figure}
\clearpage

\section{Formal Axiomatic Specification in Herd}

See also: \url{http://moscova.inria.fr/~maranget/cats7/riscv}

(This herd model is not yet updated to account for rules \ref{ppo:amostore}--\ref{ppo:amoload} and \ref{ppo:success}, and rule {\tt r4} has been tentatively removed from RVWMO. Updates to come...)

\begin{figure}[h!]
{ \tt\bfseries\centering\footnotesize \begin{lstlisting} (*************) (* Utilities *) (*************) let fence.r.r = [R];fencerel(Fence.r.r);[R] let fence.r.w = [R];fencerel(Fence.r.w);[W] let fence.r.rw = [R];fencerel(Fence.r.rw);[M] let fence.w.r = [W];fencerel(Fence.w.r);[R] let fence.w.w = [W];fencerel(Fence.w.w);[W] let fence.w.rw = [W];fencerel(Fence.w.rw);[M] let fence.rw.r = [M];fencerel(Fence.rw.r);[R] let fence.rw.w = [M];fencerel(Fence.rw.w);[W] let fence.rw.rw = [M];fencerel(Fence.rw.rw);[M] let fence = fence.r.r | fence.r.w | fence.r.rw | fence.w.r | fence.w.w | fence.w.rw | fence.rw.r | fence.rw.w | fence.rw.rw let po-loc-no-w = po-loc \ (po-loc;[W];po-loc) let rsw = rf^-1;rf let LD-ACQ = R & (Acq|AcqRel) and ST-REL = W & (Rel|AcqRel) (*************) (* ppo rules *) (*************) let r1 = [M];po-loc;[W] and r2 = fence and r3 = [LD-ACQ];po;[M] and r4 = [R];po-loc;[LD-ACQ] and r5 = [M];po;[ST-REL] and r6 = [W & AcqRel];po;[R & AcqRel] and r7 = [R];addr;[M] and r8 = [R];data;[W] and r9 = [R];ctrl;[W] and r10 = ([R];po-loc-no-w;[R]) \ rsw and r11 = [R];(addr|data);[W];po-loc-no-w;[R] and r12 = [R];addr;[M];po;[W] and r13 = [R];ctrl;[Fence.i];po;[R] and r14 = [R];addr;[M];po;[Fence.i];po;[M] let ppo = r1 | r2 | r3 | r4 | r5 | r6 | r7 | r8 | r9 | r10 | r11 | r12 | r13 | r14 \end{lstlisting} } \caption{{\tt riscv-defs.cat}, part of a herd version of the RVWMO memory model (1/3)} \label{fig:herd1} \end{figure} \begin{figure}[h!] 
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
Total

(* Notice that herd has defined its own rf relation *)

(* Define ppo *)
include "riscv-defs.cat"

(********************************)
(* Generate global memory order *)
(********************************)

let gmo0 = (* precursor: i.e., build gmo as a total order that includes gmo0 *)
  loc & (W\FW) * FW | # Final write before any write to the same location
  ppo |               # ppo compatible
  rfe                 # first half of

(* Walk over all linear extensions of gmo0 *)
with gmo from linearisations(M\IW,gmo0)

(* Add initial writes upfront -- convenient for computing rfGMO *)
let gmo = gmo | loc & IW * (M\IW)

(**********)
(* Axioms *)
(**********)

(* Compute rf according to the load value axiom, aka rfGMO *)
let WR = loc & ([W];(gmo|po);[R])
let rfGMO = WR \ (loc&([W];gmo);WR)

(* Check equality of herd rf and of rfGMO *)
empty (rf\rfGMO)|(rfGMO\rf) as RfCons

(* Atomic axiom *)
let infloc = (gmo & loc)^-1
let inflocext = infloc & ext
let winside = (infloc;rmw;inflocext) & (infloc;rf;rmw;inflocext) & [W]
empty winside as Atomic
\end{lstlisting}
}
\caption{{\tt riscv.cat}, a herd version of the RVWMO memory model (2/3)}
\label{fig:herd2}
\end{figure}

\begin{figure}[h!]
{
\tt\bfseries\centering\footnotesize
\begin{lstlisting}
Partial

(***************)
(* Definitions *)
(***************)

(* Define ppo *)
include "riscv-defs.cat"

(* Compute coherence relation *)
include "cos-opt.cat"

(**********)
(* Axioms *)
(**********)

(* Sc per location *)
acyclic co|rf|fr|po-loc as Coherence

(* Main model axiom *)
acyclic co|rfe|fr|ppo as Model

(* Atomicity axiom *)
empty rmw & (fre;coe) as Atomic
\end{lstlisting}
}
\caption{{\tt riscv.cat}, part of an alternative herd presentation of the RVWMO memory model (3/3)}
\label{fig:herd3}
\end{figure}
% Material and Methods %\clearpage%if the chapter heading starts close to bottom of the page, force a line break and remove the vertical vspace \vspace{21.5pt} \chapter{Material and Methods} Check Final Year Project Guide for the content of Material and Methods chapter.
\documentclass[9pt,twocolumn,twoside]{../../styles/osajnl}
\usepackage{fancyvrb}
\journal{i524}

\title{HTCondor: Distributed Workflow Management System}

\author[1,*]{Niteesh Kumar Akurati}

\affil[1]{School of Informatics and Computing, Bloomington, IN 47408, U.S.A.}

\affil[*]{Corresponding author: [email protected]}

\dates{paper1, \today}

\ociscodes{HTCondor, ClassAd, Match Making, Condor-G, High-throughput Computing, Cycle Scavenging, Batch Systems}

% replace this with your url in github/gitlab
\doi{\url{https://github.com/Niteesh01/sp17-i524/blob/master/paper1/S17-IR-2001/report.pdf}}

\begin{abstract}
HTCondor is a distributed, open-source, high-throughput computing workflow management system developed by the Condor research team at the University of Wisconsin--Madison Department of Computer Sciences. It is a unique and highly sophisticated job scheduler that has evolved dynamically alongside its users and the growing popularity of distributed computing. It is used for the distributed parallelization of compute-intensive tasks. The Condor project has become popular for its two key products: the Condor high-throughput computing system, and the Condor-G agent for grid computing.\newline
\end{abstract}

\setboolean{displaycopyright}{true}

\begin{document}

\maketitle

% Condor
\section{Introduction}

An ideal computing environment provides ready access to computing power at huge scale. Over the years it has been recognized that such immense computing power can be achieved at very low cost by combining many small devices rather than using expensive supercomputers. This situation is addressed by the Condor project. The core philosophy of the Condor project is flexibility. ``Like other full-featured batch systems, HTCondor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management.
Users submit their serial or parallel jobs to HTCondor, HTCondor places them into a queue, chooses when and where to run the jobs based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion''\cite{beowulfbook-condor}.

Apart from providing the same functionality as other batch systems, HTCondor effectively harnesses wasted CPU cycles from idle desktops and workstations. HTCondor is unique in that it can manage workload both on sets of dedicated clusters, such as Beowulf clusters, and on otherwise idle desktops and workstations, a process called cycle scavenging. ``HTCondor can seamlessly integrate both dedicated resources (rack-mounted clusters) and non-dedicated desktop machines (cycle scavenging) into one computing environment''\cite{HTCondor_wikipedia}.

\section{Architecture}

At the core of the HTCondor technology lies the kernel. The kernel shown in Fig.~1 is the fundamental structure of HTCondor. A wide variety of computing environments can be constructed by making minor modifications to the kernel. The kernel workflow is as follows: a user submits a job to an agent; the agent remembers the job in persistent storage while looking for resources that are willing to run it. The matchmaker is responsible for finding potential agents and resources, which advertise themselves to it. The matchmaker introduces the agent to the resource, and the agent, once introduced, is responsible for contacting the resource and validating the match. To execute a job, the agent and the resource start new processes called the shadow and the sandbox, respectively. The shadow provides all the details necessary to execute the job, while the sandbox creates a safe environment in which to run it \cite{condor-practice}.

%\subsection{Sample Figure}
%Figure \ref{fig:The Condor Kernel}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{images/Condor_Kernel.eps}}
\caption{The HTCondor Kernel}
\end{figure}

\section{The HTCondor Software}

The HTCondor project supports a variety of computing systems deployed worldwide for various commercial and academic purposes. The two HTCondor products, the Condor high-throughput computing system and the Condor-G agent for grid computing, are very popular across many domains. Let us discuss both of these products in detail.

\subsection{HTCondor High Throughput Computing System}

HTCondor is a high-throughput distributed batch computing system. It provides job management, scheduling, resource management, and monitoring, just like other batch systems. HTCondor is prominent in areas where other batch systems are weak: high-throughput computing and opportunistic computing. The idea of high-throughput computing is to provide large amounts of fault-tolerant computing power over prolonged periods of time by effectively utilizing the resources available in the network. Opportunistic computing is the use of resources whenever they are available. High-throughput computing can be efficiently achieved through opportunistic means, and HTCondor therefore brings the two together.

%\begin{description}
Achieving high-throughput computing through opportunistic means requires several powerful and unique tools:
\begin{itemize}
%\renewcommand{\labelitemi}{\scriptsize$\square$}
\item {\bf ClassAds.} ``The ClassAd language in Condor provides an extremely flexible and expressive framework for matching resource requests (e.g.\ jobs) with resource offers (e.g.\ machines)''\cite{condor-practice}. The ClassAd mechanism makes it easy for a job to state its requirements and preferences. It also makes it easy for machines to specify the requirements and preferences for the jobs they are willing to run.
These requirements and preferences can therefore be described as powerful expressions, allowing HTCondor to adopt almost any desired policy\cite{condor-practice}.

\item {\bf Job Checkpoint and Migration.} A job checkpoint is a snapshot of the complete state of the job\cite{Checkpoint_Migration_techreport_1997}. For certain types of jobs, HTCondor can take a checkpoint of the job and later resume it: the job continues its execution exactly from where it left off at the time of checkpointing. With the help of checkpointing, HTCondor can also migrate a job from one pool of resources to another\cite{beowulfbook-condor}.

\item {\bf Remote System Calls.} Using remote system calls, HTCondor preserves the local execution environment even though jobs run on remote machines. Users need not make files available on remote workstations or access those machines via remote login. Remote system calls are a mobile sandbox mechanism of HTCondor that redirects all I/O-related calls back to the machine which submitted the job. The program behaves as if it were running on the workstation from which it was originally submitted, regardless of where it really executes\cite{condor-practice}.
\end{itemize}
%\end{description}\cite{condor-practice}

With the help of the above tools, HTCondor can also scavenge and manage wasted CPU power from otherwise idle workstations across an organization with minimal effort\cite{condor-hunter}. The same mechanisms enable preemptive-resume scheduling on compute cluster resources, which allows HTCondor to support priority-based scheduling on clusters. HTCondor can therefore be used to combine all of an organization's computing power into a single resource.

\subsection{Condor-G}

Condor-G is a combination of initiatives from the Globus and HTCondor projects. Globus is concerned with inter-domain communications and standardized access to a variety of remote batch systems\cite{Globus1997article}.
From HTCondor come the user concerns of fault tolerance, error recovery, a friendly execution environment, job submission and job allocation. ``Condor-G can be used as the reliable submission and job management service for one or more sites, HTCondor can exist both at front end and back end of a grid. The HTCondor HTC system can be used as the fabric management service for one or more sites and the Globus Toolkit can be used as the bridge between them''\cite{condor-practice}. \section{ClassAd Mechanism} The ClassAd mechanism is a unique, extremely flexible mechanism for handling jobs and resource requests. HTCondor uses matchmaking to pair an idle job with an available machine. Users use ClassAds to specify which machines should service their jobs, and administrators use them to customize the scheduling policy. HTCondor's ClassAd mechanism is similar to the classifieds in the advertising section of a newspaper: sellers advertise what they hope to sell in order to attract buyers, while buyers advertise the specifics of what they wish to purchase. A ClassAd consists of a unique set of named expressions. Each expression is called an attribute, and each attribute has a name as well as a value\cite{ClassAddstechreport2003}. \section{DAGMan} DAGMan is a problem solver: a higher-level structure built on top of the HTCondor agent. It provides a unique programming model for managing large numbers of jobs. A problem solver relies on the agent as a service for the reliable execution of jobs, so it need not worry about job failures; the agent assumes responsibility for hiding and retrying them. The DAG Manager (DAGMan) is a service responsible for executing multiple jobs with dependencies expressed in a declarative form. DAGMan is a fault-tolerant, distributed version of the traditional tool \texttt{make}. Unlike \texttt{make}, DAGMan does not depend on the file system to record the progress of a DAG.
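The declarative form mentioned above is an ordinary text file naming jobs and their dependencies. A minimal sketch of such a DAGMan input file (the job names and submit-description files here are hypothetical):

```
# Four jobs with dependencies: A runs first,
# B and C run after A completes, D runs last.
JOB A a.submit
JOB B b.submit
JOB C c.submit
JOB D d.submit
PARENT A CHILD B C
PARENT B C CHILD D
```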
Instead, DAGMan keeps a set of private logs that allow it to act in the case of crashes or failures. \section{Applications} HTCondor has been widely adopted in both commercial and academic settings. Two notable applications are described here: C.O.R.E.\ Digital Pictures and the NUG30 Optimization Challenge. \begin{itemize} \item{\bf C.O.R.E.\ Digital Pictures.} A highly successful Toronto-based computer animation studio. The studio deals primarily with photo-realistic animation, a compute-intensive process: each frame can take up to an hour, and an animation requires 30 or more such frames. Today, HTCondor manages hundreds of Linux and Silicon Graphics machines at C.O.R.E.\ Digital Pictures, and on an average day the C.O.R.E.\ animators submit 15,000 jobs to HTCondor. HTCondor has been used successfully by C.O.R.E.\ for major productions such as X-Men, Blade II and The Time Machine. \item{\bf NUG30 Optimization Challenge.} NUG30 is a quadratic assignment problem first proposed in 1968 as one of the most difficult combinatorial optimization challenges; because of its complexity it remained unsolved for 32 years. The problem was solved by four mathematicians from Argonne National Laboratory, the University of Iowa, and Northwestern University using Condor-G and several other technologies. The actual computation was managed by HTCondor's Master-Worker (MW) problem-solver environment. ``MW submitted work to Condor-G, which provided compute resources from around the world by both direct flocking to other Condor pools and by gliding in to other compute resources accessible via the Globus GRAM protocol. Remote System Calls, part of Condor's standard universe, was used as the I/O service between the master and the workers. Checkpointing was performed every 15 minutes for fault tolerance''\cite{condor-practice}. As a result, a solution to NUG30 was discovered by utilizing Condor-G in a run of less than one week.
Condor-G allowed the mathematicians to manage 2400 systems at 10 different sites seamlessly; over 95,000 CPU hours were consumed in that week\cite{condor-practice}. \end{itemize} \section{Educational Material} HTCondor is one of the most popular technologies for high-throughput distributed workflow management. More information about HTCondor can be found on its website\cite{HTCondor_website}. \section{Licensing} HTCondor is released under the open-source Apache License, Version 2.0; the software may not be used except in compliance with the License, a copy of which is available online\cite{HTCondor-License}. \section{Conclusion} HTCondor is a powerful, unique and flexible high-throughput distributed workflow management system that has evolved over the years with end users, administrators and growing demand in mind. HTCondor continues to stand out from other batch management systems through the combined power of high-throughput and opportunistic computing. %Condor % Bibliography \bibliography{references} \newpage \appendix \end{document}
\documentclass[acmtog]{acmart} \usepackage{graphicx} \usepackage{subfigure} \usepackage{natbib} \usepackage{listings} \usepackage{bm} \definecolor{blve}{rgb}{0.3372549 , 0.61176471, 0.83921569} \definecolor{gr33n}{rgb}{0.29019608, 0.7372549, 0.64705882} \makeatletter \lst@InstallKeywords k{class}{classstyle}\slshape{classstyle}{}ld \makeatother \lstset{language=C++, basicstyle=\ttfamily, keywordstyle=\color{blve}\ttfamily, stringstyle=\color{red}\ttfamily, commentstyle=\color{magenta}\ttfamily, morecomment=[l][\color{magenta}]{\#}, classstyle = \bfseries\color{gr33n}, tabsize=2 } \lstset{basicstyle=\ttfamily} % Title portion \title{Assignment 2:\\ {Geometric Modeling}} \author{Name:\quad Yang Hongdi \\ student number:\ 2019533234 \\email:\quad [email protected]} % Document starts \begin{document} \maketitle \vspace*{2 ex} \section{Introduction} In this project, an object consisting of three Bezier surfaces joined with L1 continuity is modeled, together with a B-spline surface that can be edited interactively through ImGui. Phong lighting with a single light source is also implemented. \section{Implementation Details} \subsection{Bezier curve evaluation} For Bezier curve evaluation, I use the iterative (de Casteljau) method $$Q_i = (1-t) P_i + tP_{i+1}$$ and apply it repeatedly until only one point is left. When only two points remain, the line connecting them is the tangent we need; we record it and use it later to compute the normal. \subsection{Bezier surface evaluation} To construct a Bezier surface, we need control points along two directions and a division number that determines how many surface points to evaluate. \begin{lstlisting} class BezierSurface { public: vector<vector<vec3>> control_points_m_; vector<vector<vec3>> control_points_n_; int div_num; //...member functions }; \end{lstlisting} \quad We first evaluate the Bezier curves along one direction; the resulting points serve as control points for a second Bezier curve, which we evaluate again.
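The iterative evaluation just described can be sketched as a standalone routine. This is a simplified sketch with scalar control points and hypothetical names; the project code works on \texttt{vec3} and also records the last two-point segment as the tangent.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// De Casteljau evaluation: repeatedly replace the control polygon by
// Q_i = (1 - t) * P_i + t * P_{i+1} until a single point remains.
double deCasteljau(std::vector<double> pts, double t) {
    while (pts.size() > 1) {
        for (std::size_t i = 0; i + 1 < pts.size(); ++i)
            pts[i] = (1.0 - t) * pts[i] + t * pts[i + 1];
        pts.pop_back();  // one fewer point after each interpolation pass
    }
    return pts[0];
}
```

For a cubic with control values $0,0,1,1$ the routine interpolates the endpoints and returns $1/2$ at $t=1/2$.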
Then we have the point's position, and we record the tangent in this direction.\\ We repeat this procedure in the other direction; the point's position is the same, and we obtain a second tangent. The cross product of the two tangents gives the normal at this point. \begin{lstlisting} Vertex BezierSurface::evaluate (vector<vector<vec3>>& control_points, float u, float v) { BezierCurve order1_curve(control_points.size()); for (int i = 0; i < control_points.size(); i++) { BezierCurve tempcurve(control_points[i]); order1_curve.setControlPoint(i, tempcurve.evaluate(v).position); } return order1_curve.evaluate(u); } \end{lstlisting} \subsection{BezierSurface rendering} After evaluating enough points on the surface, we add them to the object's vertex array. Each $2\times2$ block of adjacent points yields two triangles: \begin{lstlisting} int point_num = div_num + 1; for (int i = 0;i < point_num - 1; i++) { for (int j = 0; j < point_num - 1; j++) { new_object.indices.push_back(i * point_num + j + 1); new_object.indices.push_back(i * point_num + j); new_object.indices.push_back((i+1) * point_num + j); ///1st triangle new_object.indices.push_back(i * point_num + j + 1); new_object.indices.push_back((i+1) * point_num + j); new_object.indices.push_back((i+1) * point_num + j + 1); ///2nd triangle } } \end{lstlisting} With the vertex array and the index array attached to the VAO and VBO, we can render the Bezier surface. \subsection{Bezier Surface stitching} For Bezier surface stitching, I make the boundary control points of adjacent patches coincide and ensure that the control polygon edges across the shared boundary are collinear. \subsection{Bspline curve evaluation} For B-spline curve evaluation, we can use a method similar to the Bezier case. First, we define the degree to be $p$ and take $n$ control points; we then need $m$ knots satisfying $m = n + p + 1$.
To make sure the curve reaches the first and last control points, we require $p + 1$ knots equal to 0 and $p + 1$ knots equal to 1.\\ To evaluate a point, we use the formula $$Q_i = (1 - a_i)P_{i-1} + a_iP_i$$ $$a_i = \frac{t-u_i}{u_{i+p} - u_i} \qquad \text{for}\ k-p+1 \leq i \leq k,$$ where $k$ is the index with $u_k \leq t < u_{k+1}$.\\ With this formula we iteratively recompute the control points until only one point remains, recording the tangent when two control points are left (as with the Bezier curve); this yields the position and tangent of the point. \subsection{Bspline Surface construction and Rendering} Much as with the Bezier surface, we evaluate points from the two directions, obtain two tangents, and compute the normal. We can then render the surface using the vertex array and index array. \subsection{Interactive Editing} I use the Dear ImGui library for editing and include its header files: \begin{lstlisting} #include "imgui.h" #include "imgui_impl_glfw.h" #include "imgui_impl_opengl3.h" \end{lstlisting} For each control point, we can change its position and then re-evaluate the surface points. \section{Results} \begin{figure}[h] \begin{minipage}[b]{.4\linewidth} \includegraphics[scale=0.25]{beziersurface.png} \caption{beziersurface front} \end{minipage} \\ \begin{minipage}[b]{.4\linewidth} \includegraphics[scale=0.25]{beziersurface2.png} \caption{beziersurface back} \end{minipage} \\ \begin{minipage}[b]{.4\linewidth} \includegraphics[scale=0.25]{bsplinesurface.png} \caption{bsplinesurface front} \end{minipage} \\ \begin{minipage}[b]{.4\linewidth} \includegraphics[scale=0.25]{bsplinesurface2.png} \caption{bsplinesurface back} \end{minipage} \end{figure} \end{document}
\documentclass[12pt]{article} \usepackage{setspace} \usepackage{xcolor, amsmath, amsfonts, amssymb, graphicx, color, fancyhdr, lipsum, scalerel, stackengine, mathrsfs, tikz-cd, mdframed} \usepackage[amsthm]{ntheorem} \usepackage[mathscr]{euscript} %set margins \usepackage[ top = 1.0in, bottom = 1.0in, left = 1.0in, right = 1.0in]{geometry} %header stuff \setlength{\headsep}{24pt} % space between header and text \pagestyle{fancy} % set pagestyle for document \lhead{ {Peter Webb's \it A Course in Finite Group Representation Theory} } % put text in header (left side) \rhead{Nico Courts} % put text in header (right side) \cfoot{\it p. \thepage} \setlength{\headheight}{15pt} \allowdisplaybreaks %Set of Integers \newcommand*{\Z}{ \mathbb{Z} } %Set of Natural Numbers \newcommand*{\N}{ \mathbb{N} } %Set of Real Numbers \newcommand*{\R}{ \mathbb{R} } %Set of Complex Numbers \newcommand*{\C}{ \mathbb{C} } %Field \newcommand*{\F}{ \mathbb{F} } %Rationals \newcommand*{\Q}{ \mathbb{Q} } %Section break \newcommand*{\brk}{ \rule{2in}{.1pt} } \DeclareMathOperator{\Aut}{Aut} %raise that Chi! 
\DeclareRobustCommand{\Chi}{{\mathpalette\irchi\relax}} \newcommand{\irchi}[2]{\raisebox{\depth}{$#1\chi$}} %Image \DeclareMathOperator{\im}{Im} %Hom \DeclareMathOperator{\Hom}{Hom} %Coker \DeclareMathOperator{\coker}{coker} %characteristic \DeclareMathOperator{\ch}{char} %restriction \DeclareMathOperator{\Res}{Res_H} %socle %restriction \DeclareMathOperator{\Soc}{Soc} %induction \DeclareMathOperator{\Ind}{Ind_H^G} %fix tilde \let\tilde\relax \newcommand*{\tilde}[1]{\widetilde{#1}} \newtheorem{lem}{Lemma} \newtheorem{thm}{Theorem} %New environments for problem solving \newenvironment{prob}[1]{\par\smallskip \noindent\begin{mdframed}\small \textbf{Problem~\thesection.#1} \rmfamily\quad}{\end{mdframed}\medskip} \newenvironment{sol}{\noindent \textbf{Solution.} \,}{\\\hspace*{\fill}$\square$\medskip} %\onehalfspacing \begin{document} %make the title page \title{ Problems from Peter Webb's\\ \textit{A Course in Finite Group Representation Theory}\vspace{-1ex}} \author{Nico Courts} \date{} \maketitle %begin problems \section{Representations, Maschke's Theorem, and Semisimplicity} \begin{prob}{1} In Example 1.1.6, prove that there are no invariant subspaces other than the ones listed. \end{prob} \begin{sol} Recall that in this example we are considering the representation of $C_2=\Z_2$ given by $\rho:C_2\to GL(\R^2)$ via \[\rho(1)=\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix},\qquad \rho(-1)=\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}\tag{1}\label{eq:1.1.1}.\] In the example we noted that $\{0\}$, $\langle\binom{1}{0}\rangle$, $\langle\binom{0}{1}\rangle$ and $\R^2$ are all invariant subspaces. Assume that $V$ is any $\R$-subspace of $\R^2$ not listed above. Then necessarily it has $\R$-dimension one, so is spanned by an element $\binom{a}{b}\in\R^2.$ If $V$ is stable under the $C_2$ action, it must be that (in particular) the image of $\binom{a}{b}$ under nontrivial matrix in (\ref{eq:1.1.1}) above lands back in $V$. 
But then for some $\alpha\in\R$, \[\alpha\binom{a}{b}=\binom{a}{-b}\] whence either $a\ne 0$, in which case $\alpha=1$ and $b=0$, or else $a=0$ and (since $V$ is nontrivial) $b\ne 0$, whence $\alpha=-1$. But these two cases describe precisely the two one-dimensional subspaces given above, so this must in fact be a complete list. \end{sol} \begin{prob}{2} (\textit{The Modular Law}) -- Let $A$ be a ring and $U=V\oplus W$ an $A$-module that is a direct sum of $A$-modules $V$ and $W$. Show by example that if $X$ is any submodule of $U$ then it need not be the case that $X=(V\cap X)\oplus(W\cap X)$. Show that if we make the assumption that $V\subseteq X$ then it is true that $X=(V\cap X)\oplus (W\cap X).$ \end{prob} \begin{sol} Consider $A=\R$ and let $U=\langle e_1,e_2\rangle$ be $\R^2$ spanned by the standard basis vectors. Now define $V=\langle e_1+e_2\rangle$ and $W=\langle e_1\rangle$. $V\cap W=\{0\}$ and $V+W=U$, so the hypotheses are satisfied. Now let $X=\langle e_2\rangle$ and notice \[(X\cap V)\oplus (X\cap W)=\{0\}\oplus\{0\}=\{0\}\ne X.\] Now assume that $V\subseteq X$, so that $V\cap X=V.$ We want to prove that $X=V\oplus(W\cap X).$ Clearly since $V$ and $W\cap X$ are in $X$, their sum is as well; therefore $V\oplus (W\cap X)\subseteq X.$ Assume now that $x\in X$. Then $x\in U$, so $x=v+w$ for $v\in V$ and $w\in W$. Since $V\subseteq X$, $v\in X$ and the same goes for $x-v=w$. But then $x=(v)+(x-v)$ where the former part is in $V$ and the latter is in $X\cap W$. This gives us the reverse inclusion and thus equality. \end{sol} \begin{prob}{3} Suppose that $\rho$ is a finite-dimensional representation of a finite group $G$ over $\C$. Show that for each $g\in G$, the matrix $\rho(g)$ is diagonalizable. \end{prob} \begin{sol} Let $|G|=n$ and $\rho:G\to GL_k(\C)$ be a representation of degree $k<\infty$. Since $G$ has finite order, $\rho(g)^n=I_k$, so $\rho(g)$ satisfies the polynomial $x^n-1$.
Since $\C$ is algebraically closed, the eigenvalues of $\rho(g)$ all lie in $\C$, and each must be an $n^{th}$ root of unity. Furthermore $x^n-1$ has no repeated roots (it is separable), so the minimal polynomial of $\rho(g)$, which divides $x^n-1$, splits into distinct linear factors over $\C$. This is sufficient to show that $\rho(g)$ is diagonalizable. \end{sol} \begin{prob}{4} Let $\phi:U\to V$ be an $A$-module homomorphism. Show that $\phi(\Soc U)\subseteq \Soc V$ and that if $\phi$ is an isomorphism then it restricts to an isomorphism $\Soc U$ to $\Soc V$. \end{prob} \begin{sol} Recall that the image of a simple module under a module homomorphism is simple or zero. To see this, assume that $S$ is simple and $0\ne X\subseteq \phi(S)$ is a submodule. Then $\phi^{-1}(X)$ is a submodule of $U$ intersecting $S$ nontrivially, and by simplicity $\phi^{-1}(X)\cap S=S$. But then $\phi(S)\subseteq X$, so they are equal. Thus the only nonzero submodule of $\phi(S)$ is itself. Since $\Soc U$ is a sum of simple modules of $U$, $\phi(\Soc U)$ is a sum of simple modules of $V$ (some summands possibly zero). So $\phi(\Soc U)\subseteq \Soc V$, the sum of \textit{all} simple modules of $V$. Assume now that $\phi$ is an isomorphism. Then by the same argument as above $\phi^{-1}(\Soc V)\subseteq \Soc U$ and so \[\phi\circ\phi^{-1}(\Soc V)\subseteq \phi(\Soc U)\quad\Rightarrow\quad \Soc V\subseteq \phi(\Soc U)\] so they are equal. Finally, $\Soc V=\phi(\Soc U)$ and $\phi^{-1}(\Soc V)=\Soc U$ together with the injectivity of $\phi$ and its inverse give that $\phi$ restricts to an isomorphism $\Soc U\to\Soc V$. \end{sol} \begin{prob}{5} Let $U=S_1\oplus\cdots \oplus S_r$ be an $A$-module that is the direct sum of finitely many simple $S_i$. Show that if $T$ is any simple submodule of $U$ then $T\cong S_i$ for some $i$. \end{prob} \begin{sol} For each $j$ let $\pi_j:U\to S_j$ be the projection onto the $j^{th}$ summand. Since $T\ne 0$, there is some $i$ with $\pi_i(T)\ne 0$. Then $\pi_i|_T:T\to S_i$ is a nonzero homomorphism between simple modules: its kernel is a proper submodule of $T$, hence zero, and its image is a nonzero submodule of $S_i$, hence all of $S_i$. Therefore $\pi_i|_T$ is an isomorphism and $T\cong S_i$. \end{sol} \begin{prob}{6} Let $V$ be an $A$-module for some ring $A$ and suppose that $V$ is a sum $V=V_1+\cdots+V_n$ of simple submodules. Assume further that the $V_i$ are pairwise nonisomorphic. Show that the $V_i$ are the only simple submodules of $V$ and that $V=V_1\oplus\cdots\oplus V_n$ is their direct sum. \end{prob} \begin{sol} Let $W$ be any simple submodule of $V$. By Lemma 1.2.3 in the book, $V=V_{i_1}\oplus\cdots\oplus V_{i_k}$ for some subset of the $V_i$. By problem 1.5 above, $W\cong V_j$ for some $j$ appearing in this direct sum. Moreover, for any other summand $V_i$ the projection of $W$ to $V_i$ is a homomorphism between nonisomorphic simple modules, hence zero, so $W\subseteq V_j$ and by simplicity $W=V_j$. Thus the $V_i$ are the only simple submodules of $V$. That every $V_i$ occurs in the direct sum holds by the same observation: each $V_i$ is a simple submodule of $V$, so it must equal some summand $V_{i_l}$ of the decomposition. \end{sol} \begin{prob}{7} Let $G=\langle x,y| x^2=y^2=1=[x,y]\rangle$ be the Klein 4-group, $R=\F_2$, and consider the two representations $\rho_1$ and $\rho_2$ given by \[\rho_1(x)=\begin{pmatrix} 1&1&0\\0&1&0\\0&0&1 \end{pmatrix},\qquad \rho_1(y)=\begin{pmatrix} 1&0&1\\0&1&1\\0&0&1 \end{pmatrix}\] and \[\rho_2(x)=\begin{pmatrix} 1&0&0\\0&1&1\\0&0&1 \end{pmatrix},\qquad \rho_2(y)=\begin{pmatrix} 1&0&1\\0&1&0\\0&0&1 \end{pmatrix}.\] Compute the socles of these representations. Show that neither representation is semisimple. \end{prob} \begin{sol} Notice that every subrepresentation is automatically a vector subspace of $\F_2^3$, so begin by considering the one dimensional subspaces. If $\langle (a,b,c)^T\rangle$ is invariant under $\rho_1$, consider the images: \[\rho_1(x)(a,b,c)^T=(a+b,b,c)^T,\quad\rho_1(y)(a,b,c)^T=(a+c,b,c)^T\] each of which must be either $(a,b,c)^T$ or $\mathbf{0}$.
In either case we are forced to have $b=c=0$, so the only invariant one dimensional subspace is the one spanned by $(1,0,0)^T$. In the two dimensional case, consider any subspace spanned by $\mathbf{v}=(a,b,c)^T$ and $\mathbf{w}=(d,e,f)^T$. If $b\ne e$ or $c\ne f$, we must have that $\rho_1$ fixes each of these vectors, so the resulting module must not be simple. Therefore we can assume that $b=e$ and $c=f.$ But then since it must be two dimensional, $a=d+1\pmod{2}$. But then \[\mathbf{v}+\mathbf{w}=(1,0,0)^T\] which spans an invariant subspace under $\rho_1$ so this is also not simple. So there are no simple degree two subrepresentations of $\rho_1$ so the socle of $\rho_1$ is the one dimensional space spanned by $(1,0,0)^T$. Thus $\rho_1$ is clearly not semisimple. \brk Let $\mathbf{v}=(a,b,c)^T$ be any vector spanning a degree one subrepresentation of $\rho_2$. Then \[\rho_2(x)\mathbf{v}=(a,b+c,c)^T,\qquad \rho_2(y)\mathbf{v}=(a+c,b,c)^T.\] Neither can be $\mathbf{0}$ since otherwise $\mathbf{v}=\mathbf{0}$. But this implies $c=0$. Therefore there are three one-dimensional invariant subspaces spanned by $(1,0,0)^T$, $(0,1,0)^T$ and $(1,1,0)^T$. Now given vectors $\mathbf{v}=(a,b,c)^T$ and $\mathbf{w}=(d,e,f)^T$ spanning a degree 2 subrepresentation, notice that if $c=f$ then $\mathbf{v}+\mathbf{w}$ is either $\mathbf{0}$ (whence $\mathbf{v}=\mathbf{w}$) or else this sum is one of the vectors from the previous paragraph. In either case $\mathbf{v}$ and $\mathbf{w}$ don't span a simple degree two representation space, so we have $c\ne f.$ But then without loss of generality if $c=0$, $\mathbf{v}$ spans an invariant subspace so we can conclude that there are \textit{no} degree two simple subrepresentations of $\rho_2$. Thus the socle is computed as the sum of the spaces spanned by the three vectors from the first paragraph: \[\langle(1,0,0)^T,(0,1,0)^T,(1,1,0)^T\rangle=\{(a,b,0)^T\in\F_2^3\}\subsetneq\F_2^3\] so this representation is not semisimple either. 
\end{sol} \begin{prob}{8} Let $G=C_p=\langle x\rangle$ and $R=\F_p$ for some prime $p\ge 3.$ Consider the two representations $\rho_1$ and $\rho_2$ specified by \[\rho_1(x)=\begin{pmatrix} 1&1&0\\0&1&1\\0&0&1 \end{pmatrix}\quad\text{and}\quad \rho_2(x)=\begin{pmatrix} 1&1&1\\0&1&0\\0&0&1 \end{pmatrix}.\] Calculate the socles of these two representations and show that neither representation is semisimple. Show that the second representation is nevertheless the direct sum of two nonzero subrepresentations. \end{prob} \begin{sol} Consider first $\rho_1$. If $\mathbf{v}=(a,b,c)$ (lazily eschewing the transpose in this problem for notational simplicity) spans a one dimensional invariant subspace, then for some $\alpha\in\Z_p$ we have \[\alpha\mathbf{v}=\rho_1(x)\mathbf{v}=\begin{pmatrix} a+b\\b+c\\c \end{pmatrix}.\] Assume by contradiction that $c\ne 0$. Then $\alpha=1$ and $b=b+c\Rightarrow c=0$, a contradiction. Thus $c=0$. Similarly if $b\ne 0$, $a+b=a$, and we get another contradiction. Thus $b=c=0$. Any vector $(a,0,0)\in\F_p^3$ spans an invariant subspace with the trivial action by $\rho_1$ so these are all the degree one subrepresentations. Notice that if $V$ is the representation space of a degree two subrepresentation of $\rho_1$ and if $\mathbf{v}\in V$ where the third component of $\mathbf{v}$ is zero, we have that $\mathbf{v}=0$. This can be seen by considering \[\left(\rho_1(x^2)-\rho_1(x)\right)\begin{pmatrix}v_1\\v_2\\0\end{pmatrix}=\begin{pmatrix}v_1+2v_2\\v_2\\0\end{pmatrix}-\begin{pmatrix}v_1+v_2\\v_2\\0\end{pmatrix}=\begin{pmatrix}v_2\\0\\0\end{pmatrix}\] and if $v_2\ne 0$, $V$ contains one of the one-dimensional subspaces from above and is therefore not simple. Otherwise $v_2=0$ and $\mathbf{v}$ spans a degree 1 invariant subspace, so again $V$ is not simple. \textbf{Not finished.} \end{sol} \begin{prob}{9} Let $k$ be an infinite field of characteristic 2, and $G=\langle x,y\rangle\cong C_2\times C_2$ be the noncyclic group of order 4.
For each $\lambda\in k$, let $\rho_\lambda(x),\rho_\lambda(y)$ be \[\rho_\lambda(x)=\begin{pmatrix}1&0\\1&1\end{pmatrix},\quad\rho_\lambda(y)=\begin{pmatrix}1&0\\\lambda & 1\end{pmatrix},\] regarded as linear maps $U_\lambda\to U_\lambda$ where $U_\lambda$ is a $k$-vector space of dimension 2 with basis $\{e_1,e_2\}.$ \begin{itemize} \item[(a)] Show that $\rho_\lambda$ defines a representation of $G$ with representation space $U_\lambda$. \item[(b)] Find a basis for $\Soc U_\lambda$. \item[(c)] By considering the effect on $\Soc U_\lambda$, show that any $kG$-module homomorphism $\alpha:U_\lambda\to U_\mu$ has a triangular matrix $\alpha=(\begin{smallmatrix}a&0\\b&c\end{smallmatrix})$ with respect to the given bases. \item[(d)] Show that if $U_\lambda\cong U_\mu$ as $kG$-modules then $\lambda=\mu$. Deduce that $kG$ has infinitely many nonisomorphic 2-dimensional representations. \end{itemize} \end{prob} \begin{sol} Let $k,G,$ and $\rho_\lambda$ be defined as above. \subsubsection*{(a)} Some routine computations gives us \[\rho_\lambda(x)\rho_\lambda(y)=\begin{pmatrix}1&0\\\lambda+1& 1\end{pmatrix}=\rho_\lambda(y)\rho_\lambda(x)\] and \[(\rho_\lambda(x))^2=(\rho_\lambda(y))^2=I_2\] proving $\rho_\lambda$ is a group homomorphism into $GL_2(k)$, whence is a representation (which acts on $U_\lambda$ by definition). \subsubsection*{(b)} Any one dimensional subspace spanned by $(a,b)$ must have $a=0$ in order to be fixed by $x$ (referring for simplicity to the element and its action on $U_\lambda$ rather than its image under the map). 
But then the only proper nonzero invariant subspace of $U_\lambda$ is the one spanned by $e_2$, so $\{e_2\}$ is a basis for $\Soc U_\lambda$. \subsubsection*{(c)} Leveraging the result from problem 1.4, if $\alpha:U_\lambda\to U_\mu$ is a $kG$-module homomorphism, then \[\alpha(\Soc U_\lambda)=\alpha(\langle e_2\rangle)\subseteq \langle e_2\rangle=\Soc U_\mu.\] That is, $\alpha(e_2)=ce_2$ for some $c\in k$, giving us that $\alpha$ has the form \[\begin{pmatrix} a & 0\\ b & c \end{pmatrix}\] for some $a,b\in k$ with respect to the standard basis. \subsubsection*{(d)} Assume that $U_\lambda\cong U_\mu$ via $\alpha$. By the fact that the $G$ action pulls through $\alpha$, we get that \[\begin{pmatrix}1&0\\1&1\end{pmatrix} \begin{pmatrix}a&0\\b&c\end{pmatrix} =\begin{pmatrix}a&0\\a+b&c\end{pmatrix} =\begin{pmatrix}a&0\\b+c&c\end{pmatrix} =\begin{pmatrix}a&0\\b&c\end{pmatrix} \begin{pmatrix}1&0\\1&1\end{pmatrix}\] and so $a+b=b+c$, whence $a=c$; write $s$ for this common value. That the matrix is invertible means $s\ne 0.$ Using a similar computation, we know \[\begin{pmatrix}1&0\\\lambda&1\end{pmatrix} \begin{pmatrix}a&0\\b&c\end{pmatrix} =\begin{pmatrix}a&0\\\lambda a+b&c\end{pmatrix} =\begin{pmatrix}a&0\\b+\mu c&c\end{pmatrix} =\begin{pmatrix}a&0\\b&c\end{pmatrix} \begin{pmatrix}1&0\\\mu&1\end{pmatrix}\] so $\lambda a=\mu c$, that is $\lambda s=\mu s$, and since $s\ne 0$, $\lambda=\mu.$ Thus since $\rho_\lambda$ can be defined for any $\lambda\in k$, this gives us the existence of infinitely many non-isomorphic two dimensional representations, as desired. \end{sol} \begin{prob}{10} Let \[\rho_1:G\to GL(V),\qquad\rho_2:G\to GL(V)\] be two representations of $G$ on the same $R$-module $V$ that are injective as homomorphisms. (We say that such a representation is \textit{faithful}.)
Consider the three properties that \begin{itemize} \item[(1)] the $RG$-modules given by $\rho_1$ and $\rho_2$ are isomorphic, \item[(2)] the subgroups $\rho_1(G)$ and $\rho_2(G)$ are conjugate in $GL(V)$, \item[(3)] for some automorphism $\alpha\in\Aut(G)$, the representations $\rho_1$ and $\rho_2\alpha$ are isomorphic. \end{itemize} Show that (1)$\Rightarrow$(2) and that (2)$\Rightarrow$(3). Show also that if $\alpha\in\Aut(G)$ is an inner automorphism then $\rho_1$ and $\rho_1\alpha$ are isomorphic. \end{prob} \begin{sol} Let $\rho_i:G\to GL(V)$ be faithful representations acting on $_RV$. \subsubsection*{(1)$\Rightarrow$(2)} Let $V_1$ and $V_2$ be $V$ with the $RG$-module structure defined by $\rho_1$ and $\rho_2$, respectively, and assume that $V_1\cong V_2$, say via a map $\alpha$. Let $v\in V$ be arbitrary and notice that by virtue of being an $RG$-module homomorphism, \[\alpha(\rho_1(g)(v))=\rho_2(g)\alpha(v),\quad\forall g\in G.\] That is, $\alpha\circ \rho_1(g)=\rho_2(g)\circ\alpha$ as maps in $GL(V)$ (since $\alpha$ is invertible and $R$-linear) and furthermore \[\rho_1(g)=\alpha^{-1}\circ\rho_2(g)\circ\alpha\] for each $g\in G$ whence $\rho_1(G)$ is conjugate to $\rho_2(G)$ in $GL(V)$. \subsubsection*{(2)$\Rightarrow$(3)} Now assume that $T^{-1}\rho_1(G)T=\rho_2(G)$ for some $T\in GL(V)$. Since $\rho_1$ and $\rho_2$ are faithful, $\exists! h_g\in G$ for each $g\in G$ such that \[T^{-1}\rho_1(g)T=\rho_2(h_g).\] Define $\alpha\in\Aut(G)$ by $\alpha(g)=h_g$. This is a homomorphism since $\rho_2$ is injective and \[\rho_2(\alpha(gg'))=T^{-1}\rho_1(gg')T=T^{-1}\rho_1(g)TT^{-1}\rho_1(g')T=\rho_2(\alpha(g))\rho_2(\alpha(g'))=\rho_2(\alpha(g)\alpha(g'))\] and an automorphism since it is both surjective and injective. Now define the map $\varphi=T^{-1}\in GL(V)$. This map is bijective and $R$-linear, so we need only show it is $G$-linear to prove it is a module isomorphism.
But consider \[\rho_2(\alpha(g))(\varphi(v))=T^{-1}\rho_1(g)T(T^{-1}v)=T^{-1}(\rho_1(g)(v))=\varphi(\rho_1(g)(v)),\] which proves $G$-linearity, and we can conclude that $\rho_1\cong \rho_2\alpha$. \subsubsection*{Finally,} Let $\alpha\in \Aut(G)$ be an inner automorphism corresponding to conjugation by $h\in G$. But then \[\rho_1(\alpha(g))=\rho_1(h^{-1}gh)=\rho_1(h^{-1})\rho_1(g)\rho_1(h)=H^{-1}\rho_1(g)H,\] where $H=\rho_1(h)\in GL(V)$, for each $g\in G$. Thus $\rho_1\alpha$ is pointwise conjugate to $\rho_1$, and the map $H^{-1}$ gives the desired isomorphism exactly as in the argument above. \end{sol} \begin{prob}{11} One version of the Jordan-Zassenhaus Theorem asserts that for each $n$, $GL(n,\Z)=\Aut(\Z^n)$ has only finitely many conjugacy classes of subgroups of finite order. Assuming this, show that for each finite group $G$ and each integer $n$, there are only finitely many isomorphism classes of representations of $G$ on $\Z^n$. \end{prob} \begin{sol} Fix a finite group $G$ and $n\in\N$. Assume by contradiction that there were infinitely many pairwise nonisomorphic representations $\rho_i:G\to GL(n,\Z)=\Aut(\Z^n)$. Consider the subgroups $H_i=\rho_i(G)\le GL(n,\Z)$; since $G$ is finite, each $H_i$ is a finite subgroup. If $T^{-1}H_iT=H_j$ for some $T\in GL(n,\Z)$, then $g\mapsto T^{-1}\rho_i(g)T$ is a representation isomorphic to $\rho_i$ (via the map $T$) whose image is $H_j$. By Jordan-Zassenhaus there are only finitely many conjugacy classes among the $H_i$, so up to isomorphism we may assume every $\rho_i$ has image among a fixed finite list of finite subgroups. But since $G$ is finite, there are only finitely many maps from $G$ to each finite subgroup in that list, so there are only finitely many $\rho_i$ up to isomorphism. This contradicts our assumption and proves the statement. \end{sol} \begin{prob}{12} Write out a proof of Maschke's theorem in the case of representations over $\C$ along the following lines. \begin{itemize} \item[(a)] Given a representation $\rho:G\to GL(V)$ where $V$ is a vector space over $\C$, let $(-,-)$ be a positive definite Hermitian form on $V$.
Define a new form $(-,-)_1$ on $V$ by \[(v,w)_1=\frac{1}{|G|}\sum_{g\in G}(gv,gw).\] Show that $(-,-)_1$ is a positive definite Hermitian form, preserved by the action by $G$; that is $(v,w)_1=(gv,gw)_1$ always. If $W$ is a subrepresentation of $V$ then show that $V=W\oplus W^\perp$, as representations. \item[(b)] Show that any finite subgroup of $GL(n,\C)$ is conjugate to a subgroup of $U(n,\C)$ (the unitary group with matrices satisfying $A\bar A^T=I$). Show that any finite subgroup of $GL(n,\R)$ is conjugate to a subgroup of $O(n,\R)$, the orthogonal group of matrices satisfying $AA^T=I$. \end{itemize} \end{prob} \begin{sol} \textbf{(a)}\hspace{1ex} Define $(-,-)_1$ as above. This form is linear in the first coordinate since \[(a\alpha+b\beta,\gamma)_1=\frac{1}{|G|}\sum_g (ag\alpha+bg\beta,g\gamma)=\frac{1}{|G|}\sum_g\left(a(g\alpha,g\gamma)+b(g\beta,g\gamma)\right) = a(\alpha,\gamma)_1+b(\beta,\gamma)_1\] and conjugate-linear in the second coordinate by a similar argument. It is positive definite since for $v\ne 0$ the value $(v,v)_1$ is an average of the positive values $(gv,gv)$. Finally this form is Hermitian since \[(v,w)_1=\frac{1}{|G|}\sum (gv,gw)=\frac{1}{|G|}\sum\overline{(gw,gv)}=\overline{\frac{1}{|G|}\sum(gw,gv)}=\overline{(w,v)_1}.\] That this form is preserved by the action of $G$ is evident since multiplying $v$ and $w$ by $g$ simply permutes the summands in the definition. Now let $_{\C G}W\le V$. $W^\perp$ is a subspace of $V$, but we must show it is stable under the $G$-action. Let $g\in G$ and $w\in W^\perp$. For any $u\in W$ we have \[(gw,u)_1=(w,g^{-1}u)_1=0\] since $g^{-1}u\in W$ and the form is $G$-invariant, so $gw\in W^\perp$. Moreover $W\cap W^\perp=0$ since the form is positive definite. Thus $W$ and $W^\perp$ are subrepresentations of $V$ meeting trivially, and it suffices to show $V=W+W^\perp$.
To see this, pick a basis $w_1,\dots,w_k$ of $W$ that is orthonormal with respect to $(-,-)_1$ (possible since the form is positive definite) and, for $v\in V$, set $p(v)=\sum_i(v,w_i)_1 w_i\in W$. Then $(v-p(v),w_j)_1=0$ for each $j$, so $v-p(v)\in W^\perp$ and $v=p(v)+(v-p(v))\in W+W^\perp$. Hence $V=W\oplus W^\perp$ as representations. \end{sol} \begin{prob}{13} \begin{itemize} \item[(a)] Using proposition 1.2.4, show that if $A$ is a ring such that the regular representation $_AA$ is semisimple, then every finitely generated $A$-module is semisimple. \item[(b)] Extend the result of part (a) using Zorn's lemma, to show that if $A$ is a ring for which $_AA$ is semisimple then every $A$-module is semisimple. \end{itemize} \end{prob} \begin{sol} \textbf{(a)}\hspace{1ex} Let $A$ be a ring with semisimple regular representation. \end{sol} \end{document}
\documentclass[12pt]{article} \usepackage{amsfonts} \usepackage[utf8]{inputenc} \usepackage{comment} %\usepackage{pgfplots} %\pgfplotsset{width=10cm, compat=1.9} %\documentclass[border=2mm,tikz]{standalone} %\usetikzlibrary{datavisualization} \usepackage{fancyhdr} \usepackage{comment} \usepackage[a4paper, top=2.2cm, bottom=2.5cm, left=2.2cm, right=2.2cm]% {geometry} \usepackage{times} \usepackage{amsmath} \usepackage{changepage} \usepackage{amssymb} \usepackage{graphicx}% \setcounter{MaxMatrixCols}{30} \newtheorem{theorem}{Theorem} \newtheorem{acknowledgement}[theorem]{Acknowledgement} \newtheorem{algorithm}[theorem]{Algorithm} \newtheorem{axiom}{Axiom} \newtheorem{case}[theorem]{Case} \newtheorem{claim}[theorem]{Claim} \newtheorem{conclusion}[theorem]{Conclusion} \newtheorem{condition}[theorem]{Condition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{criterion}[theorem]{Criterion} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{exercise}[theorem]{Exercise} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{notation}[theorem]{Notation} \newtheorem{problem}[theorem]{Problem} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{solution}[theorem]{Solution} \newtheorem{summary}[theorem]{Summary} \newenvironment{proof}[1][Proof]{\textbf{#1.} }{\ \rule{0.5em}{0.5em}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\Z}{\mathbb{Z}} \begin{document} \title{MAT120: Integral Calculus and Differential Equations \\ BRAC University} \author{Syed Zuhair Hossain \\ St. ID - 19101573 \\ Section - 07 \\ Set-I} \date{\today} \maketitle %%%%%%%%%%%Starting Point%%%%%%%%%%%%%%% %%%%%MATH 01%%%%%%%%%%%% \section{You all have learnt the concept of finding arc lengths of curves that are bounded over some interval. 
The formula for finding the aforementioned arc length of a curve is as follows}
\begin{align*}
\text{Arc Length} = \int_{a}^{b} \sqrt{1+[f'(x)]^2} \ dx
\end{align*}
where $f'(x)$ denotes the first derivative of $f(x)$.\\
Given the function
$$f(x) = 9x^\frac{3}{2},$$
find the arc length of the function over the interval $[0,1]$.\\
\textbf{Solution}
\begin{align*}
Given, f(x) &= 9x^\frac{3}{2}\\
f'(x) &= \frac{27\sqrt{x}}{2}\\
&= \frac{27}{2}\sqrt{x}\\
\textbf{We know that,} \\
\texttt{Arc Length = $\int_{a}^{b} \sqrt{1+[f'(x)]^2} \ dx$}\\
&= \int_{0}^{1} \sqrt{1+ \left ( \frac{27}{2}\sqrt{x}\right)^2} \ dx\\
&= \int_{0}^{1} \sqrt{1+ \frac{729}{4}x} \ dx
\end{align*}
\begin{align*}
let, \\
y&= \frac{729x}{4}+1 \\
\frac{dy}{dx} &= \frac{729}{4} \Rightarrow dx = \frac{4 \ dy}{729}\\
\therefore dy &= \frac{729}{4}dx \\
\textit{lower limit,} &\\
x \rightarrow 0 &; y \rightarrow 1 \\ \\
\textit{upper limit,} &\\
x \rightarrow 1 &; y \rightarrow \frac{733}{4}
\end{align*}
\begin{align*}
\therefore & \int_{0}^{1} \sqrt{1+ \frac{729}{4}x} \ dx\\
& = \int_{1}^{\frac{733}{4}} \sqrt{y} \ \frac{4}{729} \ dy \\
& = \frac{4}{729} \int_{1}^{\frac{733}{4}} \sqrt{y} \ dy \\
& = \frac{4}{729} \left [ \frac{2y^\frac{3}{2}}{3} \right ]_{1}^{\frac{733}{4}}\\
& = \left[ \frac{8y^\frac{3}{2}}{2187}\right]_{1}^{\frac{733}{4}}\\
& = \frac{8(\frac{733}{4})^\frac{3}{2}}{2187}- \frac{8(1)^\frac{3}{2}}{2187}\\
& = \frac{733\sqrt{733}-8}{2187}\\
& = 9.070517613\\
& \approx 9.071
\end{align*}
\bigskip \bigskip \bigskip
\textbf{So, \ 9.071 is the arc length of $f(x)= 9x^\frac{3}{2}$ over the interval [0,1]}
\pagebreak
%%%%%%%%MATH 02%%%%%%%%%%%%
\section{Find the area of the surface that is generated by revolving the portion of the curve $x=2\sqrt{1-y}$ \ , $-1\leq y \leq 0$ about the y-axis.}
\ \ \ \ \ \ \textbf{\large Solution}
\begin{align*}
Given, \\
x&=2\sqrt{1-y} \ ,-1\leq y \leq 0; \\
let,\\
x&= g(y)\\
g(y) &= 2
\sqrt{1-y}\\
g'(y) &= 2 \ \ \frac{(-1)}{2\sqrt{1-y}} \ =\ \frac{-1}{\sqrt{1-y}}\\
g'(y)^2 &= \frac{1}{1-y}\\\\
\sqrt{1+g'(y)^2} \ &= \sqrt{1+ \frac{1}{1-y}}\\
&= \sqrt{\frac{2-y}{1-y}}
\end{align*}
\begin{align*}
\textit{So, here the surface area is}\\
S &= \int_{-1}^{0} 2\pi \cdot 2\sqrt{1-y} \ \frac{\sqrt{2-y}}{\sqrt{1-y}} \ dy \ = \int_{-1}^{0} 4\pi \sqrt{2-y} \ dy\\
S &= 4 \pi \int_{-1}^{0} \sqrt{2-y} \ dy\\
\end{align*}
\begin{align*}
\textit{to evaluate this integral, we can make a u-substitution,}\\
Let,&\\
u&= 2-y\\
du&= -dy\\
\textit{when } y=-1&,\ u=3 \\
\textit{when } y=0&,\ u=2
\end{align*}
\begin{align*}
S&= 4 \pi \int_{-1}^{0} \sqrt{2-y} \ dy\\
&= 4 \pi \int_{2}^{3} u^{\frac{1}{2}} \ du & \left[\because du=-dy \textit{ reverses the limits}\right]\\
&= 4 \pi \left(\frac{2}{3}\right) \left [ u^{\frac{3}{2}}\right ]_{2}^{3}\\
&= \frac{8\pi}{3} \left( 3^{\frac{3}{2}} - 2^{\frac{3}{2}} \right)\\
&= \frac{8\pi}{3} \left(3\sqrt{3} - 2\sqrt{2}\right)\\
&=19.83580907\\
&\approx19.84\\\\
& & \textsc{[Answer]}
\end{align*}
%%%%%%%%MATH 03%%%%%%%%%%%%
\section{Evaluate \ \ $\int_{0}^{\infty} \frac{(x^6-x^3)x^2}{(1+x^3)^5}\ dx$ ; \ $\left[ Use \ \beta (m,n)= \int_{0}^{\infty} \frac{x^{m-1}}{(1+x)^{m+n}} \ dx \right]$}
\textbf{Solution}
\begin{align*}
let,\\
x^3 &= u\\
\Rightarrow 3x^2 &= \frac{du}{dx}\\
\Rightarrow x^2 \, dx &= \frac{1}{3} \ du\\
\textit{lower limit,} &\\
x\rightarrow 0 &; u \rightarrow 0 \\
\textit{upper limit,} &\\
x \rightarrow \infty &; u \rightarrow \infty\\\\\\
&\therefore \int_{0}^{\infty} \frac{(x^6-x^3)x^2}{(1+x^3)^5} dx\\
&= \int_{0}^{\infty} \frac{1}{3} \ \times \ \frac{u^2-u}{(1+u)^5} du \\
&= \frac{1}{3} \left[ \int_{0}^{\infty} \frac{u^{2}}{(1+u)^{5}} du \ - \int_{0}^{\infty} \frac{u}{(1+u)^{5}} du \right]\\
&= \frac{1}{3} \left[ \int_{0}^{\infty} \frac{u^{3-1}}{(1+u)^{3+2}} du \ - \int_{0}^{\infty} \frac{u^{2-1}}{(1+u)^{3+2}} du \right]\\
&= \frac{1}{3}\left[\beta(3,2) \ - \
\beta(2,3) \right] & \left[\because \beta(m,n) = \int_{0}^{\infty} \frac{x^{m-1}}{(1+x)^{m+n}}\: dx\right]\\
&= \frac{1}{3}\left[\frac{\Gamma(3)\Gamma(2)}{\Gamma(3+2)} \ - \frac{\Gamma(2) \Gamma(3)}{\Gamma (2+3)} \right] & \left[\because \beta(m,n) = \frac{\Gamma(m) \Gamma(n)}{\Gamma (m+n)} \right]\\
&= \frac{1}{3} \left[\frac{(3-1)! \ (2-1)!}{(5-1)!} - \frac{(2-1)!\:(3-1)!}{(5-1)!}\right]\\
&= \frac{1}{3} \left[\frac{2! \ 1!}{4!} \ - \ \frac{1! \ 2! }{4!} \right]\\
&= \frac{1}{3} \times 0\\
&= 0 \\
& &\textsc{[answer]}
\end{align*}
\pagebreak
%%%%%%%%MATH 04%%%%%%%%%%%%
\section{Evaluate $\int_{0}^{\frac{\pi}{2}}\sqrt{\tan \theta} \, d\theta$}
\begin{align*}
let,\\
I &= \int_{0}^{\frac{\pi}{2}}\sqrt{\tan \theta} \, d\theta\\
I'&= \int_{0}^{\frac{\pi}{2}}\sqrt{\tan\left(\frac{\pi}{2}-\theta\right)} \, d\theta\\
&= \int_{0}^{\frac{\pi}{2}}\sqrt{\cot\theta} \, d\theta\\
2I &= \int_{0}^{\frac{\pi}{2}}(\sqrt{\tan\theta} + \sqrt{\cot\theta})d\theta\ &[\because I=I']\\
&= \int_{0}^{\frac{\pi}{2}} \left(\frac{\sqrt{\sin\theta}}{\sqrt{\cos\theta}} + \frac{\sqrt{\cos\theta}}{\sqrt{\sin\theta}}\right)d\theta\\
&= \int_{0}^{\frac{\pi}{2}} \frac{\sin\theta+\cos\theta}{\sqrt{\sin\theta \cdot \cos\theta}}d\theta\\
&= \int_{0}^{\frac{\pi}{2}}\frac{\sin\theta+\cos\theta}{\sqrt{\sin2\theta}\times \frac{1}{\sqrt{2}}} d\theta & \left[\because 2\sin A \cos A= \sin 2A \right]\\
&= \sqrt{2}\int_{0}^{\frac{\pi}{2}} \frac{\sin\theta + \cos\theta}{\sqrt{1+\sin2\theta-1}}d\theta\\
&= \sqrt{2}\int_{0}^{\frac{\pi}{2}} \frac{\sin\theta+\cos\theta}{[-(\sin^2\theta+\cos^2\theta)+2\sin\theta\cos\theta + 1]^{\frac{1}{2}}}d\theta\\
&= \sqrt{2}\int_{0}^{\frac{\pi}{2}} \frac{\sin\theta+\cos\theta}{\sqrt{-(\sin\theta-\cos\theta)^2+1}}d\theta\\
&= \sqrt{2}\int_{0}^{\frac{\pi}{2}} \frac{\sin\theta+\cos\theta}{\sqrt{1-(\sin\theta-\cos\theta)^2}}d\theta\\
let, &\\
&\sin\theta-\cos\theta=t\\
&(\cos\theta +\sin\theta)d\theta=dt\\
\textit{lower limit}\\
&\theta\rightarrow 0 ;\ t\rightarrow \sin 0-\cos 0 =-1\\
\textit{upper limit}\\
&\theta\rightarrow\frac{\pi}{2};\ t\rightarrow
\sin\frac{\pi}{2} - \cos\frac{\pi}{2} = 1\\
2I&=\sqrt{2} \int_{-1}^{1} \frac{dt}{\sqrt{1-t^2}}\\
&=\sqrt{2} \left[\sin^{-1} t\right]_{-1}^{1}\\
&= \sqrt{2} \left[\sin^{-1}(1)- \sin^{-1}(-1)\right]\\
&=\sqrt{2}\left(\frac{\pi}{2}+\frac{\pi}{2}\right)\\
&=\sqrt{2} \pi\\
\therefore I &= \frac{\pi}{\sqrt{2}}\\
& &\textsc{[answer]}
\end{align*}
%%%%%%%%MATH 05%%%%%%%%%%%%
\section{Evaluate the following indefinite integrals by using appropriate substitutions: $\int \sqrt{4x^2 - 8x + 24} \ dx$}
\textbf{Solution}
\begin{align*}
Given,&\\
&\int \sqrt{4x^2 - 8x + 24} \ dx\\
&= \int \sqrt{4(x^2-2x+6)} \ dx\\
&= 2\int \sqrt{x^2-2x+6} \ dx\\
&= 2\int \sqrt{x^2-2x+1+5} \ dx\\
&= 2\int \sqrt{(x-1)^2+5} \ dx\\
& &let,\\
& & u=x-1\\
& & \frac{du}{dx}=1\\
& & \therefore du=dx\\
&= 2\int \sqrt{u^2+5} \ du\\
&= 2\int \sqrt{u^2+(\sqrt{5})^2} \ du\\
& &\textit{As we know, for } \sqrt{a^2+u^2} \textit{ we substitute } u=a\tan(v).\\
& &\textit{So, we can apply a trigonometric substitution here,}\\
& & u=\sqrt{5}\tan(v)\\
& &\therefore v=\tan^{-1}\frac{u}{\sqrt{5}}\\
& & and \ du=\sqrt{5} \sec^2(v) \ dv \\
& &\textit{We now evaluate } \int \sqrt{u^2+5} \ du:\\
\int \sqrt{u^2+5} \ du &=\int \sqrt{5} \sec^2(v) \times \sqrt{5 \tan^2(v)+5} \ dv\\
&=\int \sqrt{5} \sec^2(v) \times \sqrt{5(\tan^2(v)+1)} \ dv\\
&=\int \sqrt{5} \cdot \sqrt{5} \sec^2(v) \cdot \sec (v) \ dv \\
&= 5\int \sec^3(v) \ dv
\end{align*}
\begin{align*}
&\textit{by applying the reduction formula,}\\
5\int \sec^3(v) \ dv &=5\left(\frac{\sec(v)\tan(v)}{2}+\frac{1}{2} \int \sec(v) \ dv\right)\\\\
&\textit{we know that,}\\
& \int \sec(v) \ dv = \ln(\tan(v)+\sec(v))+c\\
&\textit{So, we can write}\\
5\int \sec^3(v) \ dv &= \frac{5\ln(\tan(v)+\sec(v))}{2}+ \frac{5\sec(v)\tan(v)}{2}\\
& \therefore v=\tan^{-1}\left(\frac{u}{\sqrt{5}}\right),\ \textit{so} \ \tan(v)=\frac{u}{\sqrt{5}}\\
& \textit{and} \ \sec(v)=\sqrt{\frac{u^2}{5}+1}\\
\therefore \int \sqrt{u^2+5} \ du &= \frac{5\ln\left(\sqrt{\frac{u^2}{5}+1}+\frac{u}{\sqrt{5}} \right)}{2} + \frac{\sqrt{5}\, u \sqrt{\frac{u^2}{5}+1}}{2}\\
&=\frac{\sqrt{5}\sqrt{\frac{(x-1)^2}{5}+1} \ (x-1)}{2} + \frac{5\ln\left( \frac{x-1}{\sqrt{5}} + \sqrt{\frac{(x-1)^2}{5}+1}\right)}{2}\\\\
&\therefore 2\int \sqrt{x^2-2x+6} \ dx = 2 \left(
\frac{\sqrt{5}\sqrt{\frac{(x-1)^2}{5}+1} \ (x-1)}{2} + \frac{5\ln\left( \frac{x-1}{\sqrt{5}} + \sqrt{\frac{(x-1)^2}{5}+1}\right)}{2}\right)\\
&\rightarrow \int \sqrt{4x^2-8x+24} \ dx = \sqrt{5}\sqrt{\frac{(x-1)^2}{5}+1} \ (x-1) + 5\ln\left(\frac{x-1}{\sqrt{5}} + \sqrt{\frac{(x-1)^2}{5}+1}\right) + c\\\\
& &\textsc{[Answer]}
\end{align*}
\pagebreak
%%%%%%%%%%MATH 06%%%%%%%%%%%%
\section{Evaluate in terms of Gamma function}
\begin{center}
$\int_{0}^{\infty}x^6 e^{-3x}dx$
\end{center}
\textbf{Solution}
\begin{align*}
&\int_{0}^{\infty}x^6 e^{-3x}dx\\
Let,&\\
u&=3x\\
\Rightarrow x &=\frac{u}{3}\\
\frac{d}{dx}(3x)&=\frac{d}{dx}(u)\\
\Rightarrow 3 &= \frac{du}{dx}\\
\therefore dx &= \frac{du}{3} \\\\
& \therefore \int_{0}^{\infty}x^6 e^{-3x}dx\\
&= \int_{0}^{\infty} \left(\frac{u}{3}\right)^6 e^{-u} \, \frac{du}{3}\\
&= \int_{0}^{\infty} e^{-u} \ \frac{u^6}{3^6} \ \frac{du}{3}\\
&= \frac{1}{3^6}\times \frac{1}{3} \int_{0}^{\infty} e^{-u} u^6 du\\
&= \frac{1}{3^7} \int_{0}^{\infty}e^{-u} u^{7-1} du\\
&= \frac{1}{3^7}\times \Gamma(7) & \left[\because \Gamma(n)= \int_{0}^{\infty} e^{-x} x^{n-1}dx \right]\\
&=\frac{6!}{3^7}\\
&=0.329218107\\
&\approx0.33\\
& & \textsc{[Answer]}
\end{align*}
\end{document}
\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{hyperref}
\hypersetup{ colorlinks, linkcolor=red }
\begin{document}
\title{CS838 Lab 2}
\author{Sek Cheong}
\maketitle
%\begin{abstract}
%The abstract text goes here.
%\end{abstract}
\section{Introduction}
The purpose of this experiment was to design a neural network with a single hidden layer to predict protein secondary structures. The data set we used in this experiment was the \href{https://archive.ics.uci.edu/ml/datasets/Molecular+Biology+(Protein+Secondary+Structure)}{Molecular Biology (Protein Secondary Structure) Data Set} obtained from the UC Irvine Machine Learning Repository. We used a sliding window of 17 amino acids as input and trained the network to predict the secondary structure of the middle amino acid. The overall accuracy of our network was about $62\%$, obtained in fewer than 5 epochs on average.
\section{The Data Set}
The data set from UC Irvine was composed of two files, one for training and one for testing. We combined these two files into a single file. The combined file was then split into a training set, a tuning set, and a test set. The test set was a collection of every 5th example in the combined file. The tuning set was a collection of every 6th example in the combined file. The remaining examples in the combined file were used as the training set.
\section{The Experiment}
We used all the examples in the training set in each training epoch. The early-stopping criterion was based on the difference between the tuning-set accuracy of the previous epoch and that of the current epoch: when the difference was greater than or equal to $0.001$, we stopped the training.
\begin{equation}
\epsilon = accuracy(tune,epoch-1) - accuracy(tune, epoch) \geq 0.001
\end{equation}
We tested the network with various numbers of hidden units and learning rates. We also tested the network both unregularized and regularized with learning rate, momentum, and weight decay. Our findings follow.
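The split and stopping rule described above can be sketched as follows. This is our own minimal illustration, not the lab's actual code; the function names and the tie-breaking between the every-5th and every-6th rules (an example that is both goes to the test set here) are assumptions.

```python
def split_dataset(examples):
    """Split the combined file: every 5th example -> test set,
    every 6th of the rest -> tuning set, remainder -> training set.
    Counting is 1-based, matching "every 5th example"."""
    test, tune, train = [], [], []
    for i, ex in enumerate(examples, start=1):
        if i % 5 == 0:
            test.append(ex)
        elif i % 6 == 0:
            tune.append(ex)
        else:
            train.append(ex)
    return train, tune, test


def train_with_early_stop(train_epoch, tune_accuracy, max_epochs=100, eps=0.001):
    """Run training epochs; stop once tuning accuracy drops by >= eps
    relative to the previous epoch. Returns the last accepted accuracy."""
    prev = None
    for _ in range(max_epochs):
        train_epoch()
        acc = tune_accuracy()
        if prev is not None and prev - acc >= eps:
            break
        prev = acc
    return prev
```

A drop of at least $\epsilon$ on the tuning set ends training, which matches the criterion in the equation above.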
\subsection{Unregularized}
We used the following settings for our network in this experiment:
\begin{equation*}
HU=9, \eta=0.01, \epsilon=0.001, \alpha=0, \lambda=0, MaxEpoch=100
\end{equation*}
Among the experiments, the unregularized network (with all other settings being equal) gave the lowest accuracy. We were only able to obtain an average accuracy of $56.7\%$.
\subsection{Learning Rate}
The learning rate $\eta$ contributes greatly to the overall accuracy of the network's predictions. We experimented with $\eta$ from $0.00050$ to $0.09$. The following graph shows $\eta$ vs. accuracy:
\begin{center}
\includegraphics[width=3.0in]{learningrate}
\end{center}
To make the comparison meaningful, we used the same settings as in 3.1 and changed only the value of $\eta$ for each run. As the graph shows, setting $\eta$ too small or too large can adversely affect network accuracy. In our case the optimal $\eta$ was somewhere between $0.008$ and $0.009$.
\subsection{Number of Hidden Units}
The number of hidden units also plays a significant role in network accuracy. If the network has too few hidden units, it will not have the capacity needed to model the given sample distribution. If the network has too many hidden units, overfitting is likely to occur.
\begin{center}
\includegraphics[width=3.0in]{hiddenunits}
\end{center}
In our case the optimal number of hidden units was between $7$ and $10$. As the number of hidden units increased further, the network accuracy plateaued and then steadily declined.
\subsection{Momentum}
The momentum $\alpha$ also has a significant effect on the accuracy of the network. The appropriate momentum range is $0<\alpha<1$. The empirical results show that a larger momentum combined with a smaller $\eta$ yields good results.
\begin{center}
\includegraphics[width=3.0in]{momentum}
\end{center}
In our experiment, setting the momentum somewhere between $0.75$ and $0.95$ yielded the best results.
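The hyperparameters $\eta$, $\alpha$, and $\lambda$ enter the weight update in the classical way, $\Delta w = \alpha\,\Delta w_{prev} - \eta(\nabla + \lambda w)$. The sketch below is our own illustration of that standard rule, not the lab's actual training code:

```python
def sgd_step(weights, grads, velocity, eta=0.009, alpha=0.9, lam=0.0006):
    """One gradient-descent update with momentum (alpha) and weight
    decay (lam). velocity is mutated in place; returns new weights."""
    new_weights = []
    for i, (w, g) in enumerate(zip(weights, grads)):
        # Weight decay adds lam * w to the gradient; momentum reuses
        # a fraction alpha of the previous update.
        velocity[i] = alpha * velocity[i] - eta * (g + lam * w)
        new_weights.append(w + velocity[i])
    return new_weights
```

With $\alpha=0$ and $\lambda=0$ this reduces to plain gradient descent, the unregularized case above.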
\subsection{Weight Decay}
The weight decay $\lambda$ improved the accuracy to some degree. The results suggest that once $\lambda$ reached $0.00006$ the accuracy was in steady decline.
\begin{center}
\includegraphics[width=3.0in]{weightdecay}
\end{center}
%\begin{figure}
% \centering
% \includegraphics[width=2.0in]{learningrate}
% \caption{Simulation Results}
% \label{simulationfigure}
%\end{figure}
The weight decay was supposed to improve accuracy by regularizing the weight complexity of the network. When combined with the momentum $\alpha$ it did improve accuracy, but not by much; however, the training time was noticeably reduced.
\section{Conclusion}
Our neural network performed well and could consistently achieve accuracy around $62\%$. The number of hidden units can significantly affect the performance of the network. With too few hidden units the accuracy suffers, and too many hidden units do not improve accuracy and may even reduce it. The training time grows polynomially with the number of hidden units. Therefore, it is important to choose the right number of hidden units to get the best balanced result. Our best setting is as follows:
\begin{equation*}
HU=9, \eta=0.009, \epsilon=0.001, \alpha=0.90, \lambda=0.0006, MaxEpoch=150
\end{equation*}
A network with a properly tuned learning rate, momentum, and weight decay can significantly outperform a network without any regularization. The weight decay did not seem to be very useful in our experiment.
\end{document}
\documentclass[a4paper,14pt,onecolumn]{article}
\usepackage[dvips]{graphics}
\usepackage{color}
\usepackage{epsfig}
\usepackage{amsmath}
\usepackage{graphicx}
\bibliographystyle{ieeetr}
\setlength{\textwidth}{6.27in}
\setlength{\textheight}{9.69in}
\setlength{\topmargin}{0.0in}
\setlength{\oddsidemargin}{0.0in} % Customisable
\setlength{\headheight}{0.0in}
\setlength{\headsep}{0.0in}
\setlength{\topskip}{0.0in}
\fontencoding{T1} % Font specification : Times New Roman, Bold, Normal, 18
\fontfamily{cmr} % Roman
\fontseries{m} % Medium
\fontshape{n} % Upright
\fontsize{14pt}{5}
\linespread{1.5} % Vertical spacing between lines
\selectfont % Select the specified font
\begin{document}
\pagestyle{empty}
\input{title.tex}
\input{certificate}
\input{Acknowledge.tex}
\input{abstract.tex}
\pagenumbering{roman} % Lowercase roman numbering for prelim sections
\newpage
\thispagestyle{empty}
\tableofcontents % *Generate* the table of contents. No content - no table
% LATEX needs to run 2-3 times over source to get this correct
\newpage
\listoftables
\newpage
\listoffigures
\newpage
\pagenumbering{arabic} % Change to Arabic numbers for main chapters.
\section{INTRODUCTION}
Neural Networks are successful in acquiring hidden knowledge in datasets. Their biggest weakness is that the knowledge they acquire is represented in a form not understandable to humans. Researchers tried to address this problem by extracting rules from trained Neural Networks. Most of the proposed rule extraction methods required a specialized type of Neural Network; some required binary inputs and some were computationally expensive. Lack of explanation capability is one of the most important reasons why Neural Networks do not get the necessary interest in some parts of the industry.\\
In most of the real world applications (especially in safety critical applications) users want to know the reasoning behind the conclusion of a learning system or an expert system.
Extracting If-Then rules is usually accepted as the best way of extracting the knowledge represented in the Neural Network. Rules extracted from the trained net can be used for explaining the reasoning behind the output of the system. They can also be used in other systems, like expert systems, or in systems for discovering previously unknown features in the data (data mining).\\
Generalization of the system can also be improved by having a better feature representation. Quality of the rules can be measured by Accuracy, Fidelity, and Comprehensibility. Fidelity is the fraction of instances on which the Neural Network and the extracted rules give the same output. Comprehensibility is measured by the size of the rule set and by the number of antecedents in each rule.
\subsection{Basic Concepts}
\subsubsection{Data Mining}
Data mining (DM), also known as ``knowledge discovery in databases'' (KDD), is the process of discovering meaningful patterns in huge databases [3]. In addition, it is also an application that can provide significant competitive advantages for making the right decision. DM is an explorative and complicated process involving multiple iterative steps.
\subsubsection{Data Mining Tasks}
\begin{itemize}
\item Classification : Classifies a data item into one of several predefined categories.
\item Regression : Maps a data item to a real-valued prediction variable.
\item Clustering : Maps a data item into a cluster, where clusters are natural groupings of data items based on similarity metrics.
\item Association rules : Describes association relationships among different attributes.
\item Summarization : Provides a compact description for a subset of data.
\item Dependency modeling : Describes significant dependencies among variables.
\item Sequence analysis : Models sequential patterns, like time-series analysis. The goal is to model the state of the process generating the sequence or to extract and report deviations and trends over time.
\end{itemize}
\subsubsection{Classification Techniques}
\begin{itemize}
\item Decision Tree based Methods
\item Rule-based Methods
\item Memory based reasoning
\item Genetic Algorithms
\item Support Vector Machines
\item Neural Networks
\end{itemize}
\subsubsection{Classification}
Learn a method for predicting the instance class from pre-labeled (classified) instances.
\begin{figure}[h]
\begin{center}
\includegraphics[height=1in,width=2in] {classficationGraph.jpg}
\caption{Classification Graph}
\end{center}
\end{figure}
\subsubsection{Decision Trees}
An internal node is a test on an attribute. A branch represents an outcome of the test, e.g. Color=red. A leaf node represents a class label or class label distribution. At each node, one attribute is chosen to split the training examples into distinct classes as much as possible. A new instance is classified by following a matching path to a leaf node.
\begin{figure}
\begin{center}
\includegraphics[height=2in,width=2.5in] {decisiontree.jpg}
\caption{Example Tree for ``Play?''}
\end{center}
\end{figure}
\subsubsection{Neural Network}
\begin{enumerate}
\item What is a Neural Network?\\
A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. It is a biologically motivated approach to machine learning. Neural networks resemble the human brain in the following two ways:
\begin{itemize}
\item A neural network acquires knowledge through learning.
\item A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
\end{itemize}
\begin{figure}[hbp]
\begin{center}
\includegraphics[height=2in,width=2in] {Neuron.jpg}
\caption{Biological Neuron}
\end{center}
\end{figure}
\begin{figure}[hbp]
\begin{center}
\includegraphics[height=3in,width=4in] {Neuralnetwork.jpg}
\caption{Basic Architecture of Neural Network}
\end{center}
\end{figure}
\item Back propagation and Feed-Forward NN\\
Back propagation is a supervised learning method, and is a generalization of the delta rule. It requires a teacher that knows, or can calculate, the desired output for any input in the training set. It is most useful for feed-forward networks (networks that have no feedback, or simply, that have no connections that loop). The term is an abbreviation for ``backward propagation of errors''. Back propagation requires that the activation function used by the artificial neurons (or ``nodes'') be differentiable.
\begin{figure}[hbp]
\begin{center}
\includegraphics[height=4in,width=4in] {BBNeuralnetwork.jpg}
\caption{Backpropagation Neural Network}
\end{center}
\end{figure}
The ANN is composed of richly interconnected non-linear nodes that do processing in parallel. The connection weights are modifiable, allowing the ANN to learn directly from examples without requiring or providing an analytical solution to the problem. The most popular forms of learning are:
\begin{itemize}
\item Supervised learning: Patterns for which both the inputs and outputs are known are presented to the ANN. The task of the supervised learner is to predict the value of the function for any valid input object after having seen a number of training examples. ANNs employing supervised learning have been widely utilized for the solution of function approximation and classification problems.
\item Unsupervised learning: Patterns are presented to the ANN in the form of feature values. It is distinguished from supervised learning by the fact that there is no a priori output.
ANNs employing unsupervised learning have been successfully employed for data mining and classification tasks. The self-organizing map (SOM) and adaptive resonance theory (ART) constitute the most popular examples of this class. A backpropagation network (BPN) is a neural network that uses a supervised learning method and a feed-forward architecture. A BPN is one of the most frequently utilized neural network techniques for classification and prediction and is considered an advanced multiple regression analysis that can accommodate complex and non-linear data relationships.
\end{itemize}
\item Neural Network and Data Mining
\begin{figure}[hbp]
\begin{center}
\includegraphics[height=3in,width=3in] {NNdatamining.jpg}
\caption{Neural Network in Data Mining}
\end{center}
\end{figure}
\begin{itemize}
\item Can select more complex regions.
\item Can be more accurate.
\item Can also overfit the data, i.e. find patterns in random noise.
\end{itemize}
\end{enumerate}
\subsubsection{Application}
\begin{itemize}
\item Key finance decision-making problems, e.g. credit card approval.
\item Medical applications, e.g. classification of cancer patients based on symptom attributes.
\item Pattern Recognition.
\item Image Processing.
\end{itemize}
\newpage
\section{Literature Survey}
To reveal the information concealed in an ANN, researchers have proposed a number of rule extraction techniques. One of the first techniques for extracting rules from neural networks was proposed by Gallant, who was working on connectionist expert systems. In this work, each ANN node is represented as a conceptual entity.\\
Yueh-Min Huang and Chun-Min Hung [8] have addressed the problem of imbalanced class distributions. Their paper analyzes different classification algorithms that were employed to predict the creditworthiness of a bank's customers based on checking account information. A series of experiments were conducted to test the different techniques.
The objective is to determine a range of credit scores that could be used by a manager for risk management. They also present a data cleaning strategy for handling such real cases with imbalanced data distributions.\\
Wei-Sen Chen and Yin-Kuan Du have used artificial neural network (ANN) and data mining (DM) techniques to construct a financial distress prediction model. In this paper the traditional DM clustering approach is compared with the ANN approach. After this experimentation the authors state that the ANN approach obtains better prediction accuracy than the DM clustering approach. Therefore, the authors conclude that the artificial intelligence (AI) approach could be a more suitable methodology than traditional statistics for predicting the potential financial distress of a company.\\
Humar Kahramanli and Novruz Allahverdi [7] have presented a technique for mining classification rules for liver disorders using an adaptive activation function. In this study the authors first trained a neural network with an adaptive activation function. Then the rules are extracted from this trained neural network by using Opt-aiNET, an Artificial Immune System (AIS) algorithm.\\
The neuro-adaptive activation function used is as follows:\\
\begin{center}
$\phi(x) = \frac{A_1 e^{-x^2} + A_2}{1 + e^{-Bx}}$\\
\end{center}
where $A_1$, $A_2$ and $B$ are real variables which are adjusted during training. In this paper, for rule extraction, first an ANN which classifies the dataset was designed. Then the Opt-aiNET algorithm was executed to extract rules from this ANN. Finally, the extracted rules were decoded. The produced rules correctly diagnosed 192 of the 200 samples belonging to Class 0 and 135 of the 145 samples belonging to Class 1, i.e. the system achieves 96\% and 93\% correct diagnosis for Class 0 and Class 1 respectively.
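The per-class counts reported above can be turned into the quoted percentages with a quick check (plain Python; the variable names are ours):

```python
# Counts reported for the extracted rule set:
# 192 of 200 Class 0 samples and 135 of 145 Class 1 samples correct.
correct_c0, total_c0 = 192, 200
correct_c1, total_c1 = 135, 145

acc_c0 = correct_c0 / total_c0
acc_c1 = correct_c1 / total_c1
overall = (correct_c0 + correct_c1) / (total_c0 + total_c1)

print(round(acc_c0 * 100, 1))   # 96.0
print(round(acc_c1 * 100, 1))   # 93.1
print(round(overall * 100, 1))  # 94.8
```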
In summary, the system correctly diagnosed 94.8\% of all samples.\\
In paper [9] association rules were composed using the Apriori algorithm, and the transactions providing these rules were eliminated. This shrinks the database. Then an ANN was trained and Opt-aiNET was used to compose the rule set. It was observed that this method increased classification accuracy while decreasing the number of rules.\\
Humar Kahramanli and Novruz Allahverdi [1] have presented a method that uses an Artificial Immune Systems (AIS) algorithm to extract rules from a trained hybrid neural network [6]. The datasets used are the Cleveland heart disease and Hepatitis data taken from the UCI machine learning repository.\\
The performance metrics used in this algorithm are accuracy, sensitivity and specificity, which are the common performance metrics used in medical diagnosis tasks. Accuracy measures the ability of the classifier to produce accurate diagnoses. Sensitivity measures the ability of the model to identify occurrences of the target class accurately. Specificity measures the ability of the model to separate out the target class. Accuracy, sensitivity and specificity are calculated as follows:\\
\begin{center}
$Accuracy=\frac{\mbox{Total number of correctly diagnosed cases}}{\mbox{Total number of cases}}$\\[4mm]
$Sensitivity=\frac{\mbox{Total number of positive cases correctly diagnosed}}{\mbox{Total number of positive cases}}$\\[4mm]
$Specificity=\frac{\mbox{Total number of negative cases correctly diagnosed}}{\mbox{Total number of negative cases}}$\\[4mm]
\end{center}
This method achieves accuracy values of 96.4\% and 96.8\% for the Cleveland heart disease dataset and the Hepatitis dataset respectively.\\
Miguel Rocha and Paulo Cortez [10] have presented the use of Evolutionary Computation (EC) as a promising alternative for ANN optimization.
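The three diagnostic metrics defined above can be computed directly from confusion-matrix counts; a minimal sketch (the function name and counts are illustrative, not taken from the cited papers):

```python
def diagnosis_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts.

    tp/fn: positive cases diagnosed correctly/incorrectly,
    tn/fp: negative cases diagnosed correctly/incorrectly.
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # correct positives / all positives
    specificity = tn / (tn + fp)   # correct negatives / all negatives
    return accuracy, sensitivity, specificity

# Illustrative counts
acc, sens, spec = diagnosis_metrics(tp=45, tn=50, fp=5, fn=0)
print(acc, sens)  # 0.95 1.0
```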
In this paper two hybrid EC/ANN algorithms are presented: the first evolves neural topologies while the second performs simultaneous optimization of architectures and weights. Sixteen real-world tasks were used to test these strategies. \\
Richi Nayak [11] has given a novel methodology, ``Gyan'', that represents the knowledge of a trained network in the form of restricted first-order predicate rules. \\
In paper [5] the mathematical properties of several error functions are analyzed with a focus on MLP data classification. This analysis led the authors to propose two parameterized error functions for MLP training. The first one, ESMF, is a monotonic error function applicable only to two-class problems and was shown to perform as well as other functions; however, it should be extended to the general multi-class problem for a better evaluation of its capability. The second one, EExp, is an exponential-type error function able to emulate the behavior of classic error functions; in fact, by adjusting a single parameter it can implement an infinite family of error functions with different error-gradient weighting behavior.\\
Ajit Narayanan, Edward Keedwell and Dragan Savic [12] have presented a novel approach using genetic algorithms to search for symbolic rules in a trained neural network. Many rule extraction algorithms have been designed to generate classification rules from NNs that have been trained to distinguish data samples from different classes. These algorithms frequently assume that the input data attributes are discrete in order to make the rule extraction process more manageable. NeuroRule [14] is one such algorithm. A component of NeuroRule is an automatic rule generation method called rule generation (RG). Each rule is generated by RG such that it covers as many samples from the same class as possible with the minimum number of attributes in the rule condition.
RG is applied to generate rules that explain the network's output in terms of the discretized hidden unit activation values, and rules that explain the discretized activation values in terms of the discretized attributes of the input data.\\
Rule eXtraction (RX) [15] is another NN rule extraction algorithm that works on discrete data. RX recursively generates rules by analyzing the discretized hidden unit activations of a pruned network with one hidden layer. When the number of input connections to a hidden unit is larger than a certain threshold, a new NN is created and trained with the discretized activation values as the target output. \\
The generalized analytic rule extraction (GLARE) algorithm [16] does not require network pruning. It does, however, require that the continuous attributes be first converted to nominal, and then to binary, attributes before the algorithm can be applied. While the algorithm is shown to work well on discrete data sets, it does not perform as well as decision tree methods on data sets with continuous attributes. \\
The orthogonal search-based rule extraction (OSRE) method [17] assumes that the data has been 1-from-N encoded; hence its application is also limited to data with only nominal or ordinal attributes. \\
Trepan, an algorithm developed by Craven and Shavlik [18], also extracts M-of-N rules from an NN. It treats the NN as an oracle to generate additional data samples. This is an important step in Trepan as it grows a decision tree by recursive partitioning. As the tree grows, fewer and fewer training samples are available for deciding if a node should be split further. Additional samples are generated by taking into account the distribution of the existing data samples, and their labels are determined by the NN oracle.
A node in the tree becomes a leaf node if a sufficiently high proportion of its samples belong to one class or if the number of internal nodes in the tree has reached the maximum.\\
There exist algorithms that generate fuzzy rules from NNs. Nefclass [19] is one such algorithm. It works as a three-layer fuzzy NN. The difference between this type of network and standard backpropagation NNs lies in the connection weights, which represent fuzzy sets, and in the activation functions, which act as fuzzy set operators. Nefclass employs a fuzzy variant of the backpropagation algorithm to find the characteristic parameters of the membership functions.\\
Benitez [20] proposed a method for translating the knowledge embedded in an NN into a fuzzy-rule-based system.\\
Tsukimoto [21] developed an NN rule extraction algorithm that can be applied to continuous data directly. Instead of linear combinations of the inputs as the rule conditions, the rules are represented as continuous Boolean functions of the attributes. The algorithm has been tested only on the Iris data set, and the accuracy obtained was not as good as that of the decision tree method C4.5.\\
The full rule extraction algorithm (full-RE) [22] is able to extract accurate rules from the Iris data set without the need to binarize or normalize the continuous attributes of the data prior to network training. For each hidden node in the network, full-RE generates an intermediate rule that predicts its output according to the linear combinations of the input attributes. In order to obtain the final set of classification rules that do not involve network weights in their rule conditions, full-RE discretizes the input attributes and then solves a linear programming problem to select the relevant discretization boundaries.\\
A recent rule extraction algorithm that works on discrete and continuous data was proposed by Rabuñal [23].
There is, however, no provision for generating rules with separate rule conditions involving discrete and continuous attributes.\\
Rudy Setiono and Bart Baesens [2] have presented a recursive algorithm for extracting classification rules (Re-RX) from feedforward neural networks that have been trained on data sets having both discrete and continuous attributes. This algorithm shares some similarities with other existing rule extraction algorithms. It assumes that the trained network has been pruned so that irrelevant and redundant network connections and units have been removed. It also makes use of the decision tree method C4.5 to generate rules with only discrete attributes in their conditions. The novel feature of the proposed recursive algorithm lies in the rule set generated. The rules are hierarchical such that only those rules at the deepest level have rule conditions that involve linear combinations of the continuous attributes, while the conditions of all the other rules involve only the discrete attributes. We believe that such rule conditions greatly increase the comprehensibility of the rules, and hence help to open the NN ``black box'' wider.\\
\newpage
\section{System Requirements and Specification}
\subsection{Project Statement}
Mining classification rules from a database using an ANN, and optimizing the knowledge generated in terms of classification accuracy.\\
\subsection{Proposed Theme of the Project}
``Recursive Neural Network Rule Extraction for Data with Mixed Attributes'' by Rudy Setiono, Senior Member, IEEE, Bart Baesens, and Christophe Mues [IEEE Transactions on Neural Networks, Vol. 19, No. 2, February 2008, pp. 299--307].
In this paper, a recursive algorithm for extracting classification rules from feedforward neural networks that have been trained on data sets having both discrete and continuous attributes is presented. Real-world classification problems usually involve both discrete and continuous input attributes.
For such problems, all the continuous attributes must be discretized if algorithms such as NeuroRule, GLARE, and OSRE are to be used for rule extraction. The drawback of discretizing the continuous attributes is that the accuracy of the networks, and hence the accuracy of the rules extracted from the networks, may decrease. This is because discretization leads to a division of the input space into hyperrectangular regions. Each condition of the extracted rules corresponds to one of these hyperrectangular regions where all data samples are predicted to belong to one class. Clearly, a data preprocessing step that divides the input space into rectangular subregions may impose unnecessary restrictions on the NNs as classifiers. It is highly likely that the boundaries of the regions that contain data samples from the same class are nonrectangular, given that some of the data attributes are continuous.\\
\subsection{Overall Description}
The proposed system presents a justified neural network using rule extraction for classifying Pima Indian diabetes patients. The system provides recursively extracted hierarchical rules describing the hidden knowledge of the trained neural network. The recursive rule extraction algorithm proposed by Rudy Setiono is used for implementing this system.
\subsubsection{Overall Steps}
\begin{enumerate}
\item Step 1: (Data collection) The data to be used for classification is collected.
\item Step 2: (Training and testing data separation) The available data are divided into training and testing data sets of size 80\% and 20\% respectively.
\item Step 3: (Network architecture) A network architecture and a learning method are selected. Important considerations are the exact number of perceptrons and the number of layers.
\item Step 4: (Parameter tuning and weight initialization) There are parameters for tuning the network to the desired learning performance level.
Part of this step is initialization of the network weights and parameters, followed by modification of the parameters as training performance feedback is received. \\
Initialize weights and biases to random numbers distributed over a small range of values:
\begin{center}
\begin{math} \left[ \frac{-\alpha}{\sqrt{N_i}}, \frac{\alpha}{\sqrt{N_i}} \right] \end{math}
\end{center}
where $N_i$ is the number of units connected to the $i^{th}$ unit and $\alpha$ is an integer between 1 and 3.
\begin{figure}[hbp]
\begin{center}
\includegraphics[height=6in,width=6in]{blockdiagram.JPG}
\caption{ Neural Network Implementation}
\end{center}
\end{figure}
\item Step 5: (Data normalization) Transforms the application data into the type and format required by the ANN. All data must be normalized, i.e. all values of the attributes in the database are scaled to lie in the interval [0,1] or [-1,1]. Two normalization techniques are used:
\begin{enumerate}
\item Max-Min Normalization.
\item Decimal Scaling Normalization.
\end{enumerate}
\item Step 6: (Training) Training is conducted iteratively by presenting input and desired output data to the ANN. The ANN computes the outputs and adjusts the weights until the computed outputs are within an acceptable tolerance of the known outputs for the input cases.
\item Step 7: (Testing) Testing examines the performance of the network using the derived weights by measuring the ability of the network to classify the testing data correctly.
\item Step 8: (Implementation) Now a stable set of weights is obtained, and the network can reproduce the desired output for inputs like those in the training set. The network is ready to use as a stand-alone system or as part of another software system where new input data will be presented to it and its output will be a recommended decision.
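The weight initialization of Step 4 and the two normalization techniques of Step 5 can be sketched in a few lines (a hypothetical illustration; the function names and the choice of $\alpha$ are ours):

```python
import math
import random

def init_weights(n_in, n_units, alpha=2):
    """Step 4: draw weights uniformly from [-alpha/sqrt(N_i), alpha/sqrt(N_i)]."""
    bound = alpha / math.sqrt(n_in)
    return [[random.uniform(-bound, bound) for _ in range(n_in)]
            for _ in range(n_units)]

def max_min_normalize(values):
    """Step 5a: max-min normalization onto [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def decimal_scaling(values):
    """Step 5b: divide by 10^j so every normalized magnitude is below 1."""
    j = len(str(int(max(abs(v) for v in values))))
    return [v / 10 ** j for v in values]

glucose = [120.9, 85.0, 199.0, 56.0]   # illustrative attribute values
print(max_min_normalize(glucose))      # values in [0, 1]
print(decimal_scaling(glucose))        # values in (-1, 1)
```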
\end{enumerate}
\newpage
\subsubsection{Pima Indian Diabetes Dataset}
\begin{enumerate}
\item Title: Pima Indians Diabetes Database
\item Sources: Original owners: National Institute of Diabetes and Digestive and Kidney Diseases
\item Past Usage: Smith,~J.~W., Everhart,~J.~E., Dickson,~W.~C., Knowler,~W.~C., \& Johannes,~R.~S. (1988). Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In {\it Proceedings of the Symposium on Computer Applications and Medical Care} (pp. 261--265). IEEE Computer Society Press.
\item Number of Instances: 768
\item Number of Attributes: 8 plus class
\item For Each Attribute: (all numeric-valued)
\begin{enumerate}
\item Number of times pregnant
\item Plasma glucose concentration at 2 hours in an oral glucose tolerance test
\item Diastolic blood pressure (mm Hg)
\item Triceps skin fold thickness (mm)
\item 2-Hour serum insulin (mu U/ml)
\item Body mass index (weight in kg/\begin{math}(height\,in\,m)^{2}\end{math})
\item Diabetes pedigree function
\item Age (years)
\item Class variable (0 or 1)
\end{enumerate}
\item Missing Attribute Values: Yes
\item Class Distribution: (class value 1 is interpreted as ``Tested positive for diabetes'')
\begin{table}[hbp]
\begin{center}
\begin{tabular}{|l|l|}
\hline
Class Value & Number of instances \\
\hline
0 & 500 \\
1 & 268 \\
\hline
\end{tabular}
\end{center}
\caption{Class Distribution}
\end{table}
\item Brief statistical analysis:
\begin{table}[h!]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Attribute number & Mean & Standard Deviation \\ \hline
1 & 3.8 & 3.4\\ \hline
2 & 120.9 & 32.0\\ \hline
3 & 69.1 & 19.4\\ \hline
4 & 20.5 & 16.0\\ \hline
5 & 79.8 & 115.2\\ \hline
6 & 32.0 & 7.9\\ \hline
7 & 0.5 & 0.3\\ \hline
8 & 33.2 & 11.8\\ \hline
\end{tabular}
\caption{Statistical analysis of the dataset}
\end{center}
\end{table}
\end{enumerate}
\newpage
\subsubsection{Discrete and Continuous Attributes}
\begin{itemize}
\item \textbf{Discrete Attributes:} Data measurements that are not quantified on an infinitely divisible numeric scale are called discrete attributes.
\item \textbf{Continuous Attributes:} Data measurements that are quantified on an infinitely divisible numeric scale are called continuous attributes.
\end{itemize}
\subsubsection{Feed-forward Neural Network}
\textbf{Neural network training:}\\
Find optimal weights $(W,V)$ by minimizing a function that measures how well the network predicts the desired outputs (class labels).
\begin{itemize}
\item Error in prediction for the $i^{th}$ sample: \begin{math} e^{i} = (\mbox{desired output})^{i} - (\mbox{predicted output})^{i} \end{math}
\item Sum of squared error function: \begin{math} E(W,V) = \sum_{i} (e^{i})^{2} \end{math}
\item Cross-entropy error function:\\ \begin{math} E(W,V) = -\sum_{i} \left[ d_{i} \log p_{i} + (1-d_{i}) \log(1-p_{i}) \right] \end{math}\\
where $d_{i}$ is the desired output (either 0 or 1) and $p_{i}$ is the predicted output. Many optimization methods can be used:
\item Gradient descent/error back propagation
\item Conjugate gradient
\item Quasi-Newton method
\item Genetic algorithm
\item The network is considered well trained if it can predict training data and cross-validation data with acceptable accuracy.
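The two error functions listed above can be written directly (a minimal sketch; $p_i$ is assumed to be a predicted probability strictly between 0 and 1):

```python
import math

def sum_squared_error(desired, predicted):
    """E(W,V) = sum_i (e^i)^2, with e^i the prediction error on sample i."""
    return sum((d - p) ** 2 for d, p in zip(desired, predicted))

def cross_entropy_error(desired, predicted):
    """E(W,V) = -sum_i [d_i log p_i + (1 - d_i) log(1 - p_i)], d_i in {0, 1}."""
    return -sum(d * math.log(p) + (1 - d) * math.log(1 - p)
                for d, p in zip(desired, predicted))

d = [1, 0, 1]          # desired class labels
p = [0.9, 0.2, 0.8]    # predicted probabilities
print(round(sum_squared_error(d, p), 6))    # 0.09
print(round(cross_entropy_error(d, p), 4))  # 0.5516
```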
\end{itemize}
\subsubsection{Pruning}
Artificial neural networks are considered efficient computing models and universal approximators. Although the predictive accuracy of neural networks is often higher than that of other methods or human experts, it is generally difficult to understand how the network arrives at a particular decision due to the complexity of the architecture. One of the major criticisms is that they are black boxes, since no satisfactory explanation of their behavior has been offered. This is because of the complexity of the interconnections between layers and the network size [18]. As such, an optimal network size with a minimal number of interconnections gives insight into how the neural network performs. Another motivation for network simplification and pruning is related to the time complexity of learning.
\begin{enumerate}
\item \textbf{Pruning Algorithm}\\
Network pruning offers another approach for dynamically determining an appropriate network topology. Pruning techniques [11] begin by training a larger-than-necessary network and then eliminate weights and neurons that are deemed redundant. Typically, methods for removing weights involve adding a penalty term to the error function [5]. The hope is that with a penalty term added to the error function, unnecessary connections will have smaller weights, and the complexity of the network can therefore be significantly reduced. This work aims at pruning the network size both in the number of neurons and in the number of interconnections between the neurons. The pruning strategies, along with the penalty function, are described in the subsequent sections.
\item \textbf{Redundant Weight Pruning}\\
A penalty function is used for weight decay. We can then eliminate redundant weights with the following weight elimination algorithm, as suggested in the literature.
\begin{enumerate}
\item Weight Elimination Algorithm:
\begin{enumerate}
\item Let $\eta_1$ and $\eta_2$ be positive scalars such that $\eta_1 + \eta_2 < 0.5$.
\item Pick a fully connected network and train this network such that the error condition is satisfied by all input patterns. Let $(w, v)$ be the weights of this network.
\item For each input-to-hidden weight $w_{ml}$: if $|v_m \times w_{ml}| \leq 4\eta_2$, then remove $w_{ml}$ from the network.
\item For each hidden-to-output weight $v_m$: if $|v_m| \leq 4\eta_2$, then remove $v_m$ from the network.
\item If no weight satisfies the condition in step (c) or step (d), then remove the weight $w_{ml}$ with the smallest product $|v_m \times w_{ml}|$.
\item Retrain the network. If the classification rate of the network falls below an acceptable level then stop; otherwise go to step (c).
\end{enumerate}
\item Input and Hidden Node Pruning:\\
A node-pruning algorithm is presented below to remove redundant nodes in the input and hidden layers.
\begin{description}
\item[Step 1:] Create an initial network with as many input neurons as required by the specific problem description and with one hidden unit. Randomly initialize the connection weights of the network within a certain range.
\item[Step 2:] Partially train the network on the training set for a certain number of training epochs using a training algorithm. The number of training epochs, $\tau$, is specified by the user.
\item[Step 3:] Eliminate redundant weights by using the weight elimination algorithm.
\item[Step 4:] Test this network. If the accuracy of this network falls below an acceptable level, then add one more hidden unit and go to Step 2.
\item[Step 5:] If there is any input node $x_l$ with $w_{ml} = 0$ for $m = 1, 2, \ldots, h$, then remove this node.
\item[Step 6:] Test the generalization ability of the network with the test set. If the network successfully converges then terminate; otherwise, go to Step 1.
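The magnitude test at the heart of the weight elimination algorithm can be sketched as follows (a simplified, hypothetical illustration; the single \texttt{threshold} parameter stands in for the $4\eta_2$ bound, and the retraining loop is omitted):

```python
def prune_small_weights(weights, threshold):
    """Zero out connections whose magnitude does not exceed the threshold.

    `weights` is a list of rows (one row per unit); returns the pruned
    copy together with the number of connections removed.
    """
    pruned, removed = [], 0
    for row in weights:
        new_row = []
        for w in row:
            if abs(w) <= threshold:
                new_row.append(0.0)   # connection eliminated
                removed += 1
            else:
                new_row.append(w)     # connection kept
        pruned.append(new_row)
    return pruned, removed

w = [[0.8, -0.02, 0.4], [0.01, -0.9, 0.03]]
pruned, removed = prune_small_weights(w, threshold=0.05)
print(pruned)   # [[0.8, 0.0, 0.4], [0.0, -0.9, 0.0]]
print(removed)  # 3
```

In the full algorithm the test is applied to the products $|v_m \times w_{ml}|$ and each pruning pass is followed by retraining.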
\end{description}
\end{enumerate}
\end{enumerate}
\subsubsection{Decision Tree Algorithm}
\textbf{Decision tree learning}, used in statistics, data mining and machine learning, uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels.\\
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data but not decisions; rather, the resulting classification tree can be an input for decision making. This section deals with decision trees in data mining.
\subsubsection{C4.5 Algorithm}
\textbf{C4.5} is an algorithm used to generate a decision tree, developed by Ross Quinlan. C4.5 is an extension of Quinlan's earlier ID3 algorithm. The decision trees generated by C4.5 can be used for classification, and for this reason, C4.5 is often referred to as a statistical classifier.\\
\textbf{C4.5 pseudo code:}
\begin{enumerate}
\item Check for base cases.
\item For each attribute ``a'' find the normalized information gain from splitting on ``a''.
\item Let a\_best be the attribute with the highest normalized information gain.
\item Create a decision node that splits on a\_best.
\item Recurse on the sublists obtained by splitting on a\_best and add those nodes as children of the node.
\end{enumerate}
\subsubsection{Support Vector Machine}
A support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks.
Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier.
\begin{figure}[h]
\begin{center}
\includegraphics[height=3in,width=3in] {svm.jpg}
\caption{Support Vector Machine}
\end{center}
\end{figure}
\subsubsection{Support and Error}
\begin{itemize}
\item \textbf{Support:} Support refers to the number of samples correctly classified by the rule, divided by the number of samples correctly classified by the NN.
\item \textbf{Error:} Error refers to the number of samples wrongly classified by the rule, divided by the number of samples covered by the rule.
\end{itemize}
\subsection{Software Requirements}
\subsubsection{Functional Requirements}
Creating a user interface for demonstrating the classification of diabetes patients into positive or negative test results, with justification.
\begin{itemize}
\item The first step of module one is creating a feed-forward neural network with input, hidden and output layers, and training it using an appropriate training algorithm, defining the goal and the number of hidden-layer neurons needed to achieve good accuracy.
\item In the second step, a pruning process is applied to the trained network to remove unwanted and zero-valued network connections.
\item The second module involves use of a decision tree algorithm for discrete rule generation from the trained network. The support of each rule is validated using a separate C module for rule refinement.
\item The third module involves use of a support vector machine for hyperplane generation for continuous attributes.
\end{itemize}
\subsubsection{Non-Functional Requirements}
The project requires the standard dataset of Pima Indian diabetes patients.
\begin{itemize}
\item\textbf{Maintainability:} All module interfaces must be clearly defined to accommodate future technologies.
Through thoughtful and effective software engineering, all steps of the software development process will be well documented to ensure maintainability of the product throughout its lifetime. All development will be provided with good documentation.
\item \textbf{Performance:} The response time, utilization and throughput behavior of the system. Care is taken to ensure a system with comparatively high performance.
\item \textbf{Usability:} Usability is the ease of use of the system and of training its end users. The system should have qualities such as learnability, efficiency, affect, and control. The main aim of the project is to reduce server failures, achieve high server performance, and reduce programmer rework.
\item \textbf{Modifiability:} The ease with which a software system can accommodate changes to its software is modifiability. Our project is easily adaptable to changes, which helps the application withstand the needs of the users.
\item \textbf{Portability:} The ability of the system to run under different computing environments. The environment types can be either hardware or software, but are usually a combination of the two.
\item \textbf{Reusability:} The extent to which an existing application can be reused in a new application. Our application can be reused a number of times without any technical difficulties.
\item \textbf{Security:} The factors that protect the software from accidental or malicious access, use, modification, destruction, or disclosure. Security can be ensured as the project involves authenticating the users.
\end{itemize}
\subsection{Methodologies/Techniques to be Used for Knowledge Extraction}
The multilayer perceptrons used in this study make use of biases, sigmoid activation functions and one hidden layer with a variable number of nodes.
\begin{itemize}
\item Back propagation algorithm for training the ANN (gradient steepest descent algorithm).
\item C4.5 algorithm.
\item MATLAB (MATrix LABoratory) Neural Network Toolbox 4.0 from MathWorks Corp. is applied to perform the training exercises.
\item Neural network packages: Neurodimensions' Neurosolutions v3.0 and Thinkspro v1.05 by Logical Designs Consulting.
\item Support Vector Machine algorithm.
\item Credit Approval dataset from the UCI machine learning repository.
\end{itemize}
\subsection{Features}
\begin{itemize}
\item Easy user interface.
\item Online database creation for new and unknown data samples.
\item Clear justification of the test report using rules extracted from the network.
\item High accuracy on the standard dataset.
\item Can handle samples with unknown and null values.
\item Performance is good, as an advanced training algorithm is used.
\end{itemize}
\newpage
\section{Project Planning and Management}
\subsection{Project Process Management}
The V-model represents a software development process (also applicable to hardware development) which may be considered an extension of the waterfall model. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing. The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
\begin{figure}[h]
\begin{center}
\includegraphics[height=3in,width=4in] {vcycle.jpg}
\caption{V Life Cycle Model}
\end{center}
\end{figure}
\subsection{Verification Phases}
\subsubsection{Requirements Analysis}
\begin{sloppypar}
In the requirements analysis phase, the first step in the verification process, the requirements of the proposed system are collected by analyzing the needs of the user(s).
This phase is concerned with establishing what the ideal system has to perform. However, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.
\end{sloppypar}
\begin{sloppypar}
The user requirements document will typically describe the system's functional, interface, performance, data and security requirements as expected by the user. It is used by business analysts to communicate their understanding of the system to the users. The users carefully review this document, as it serves as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase.
\end{sloppypar}
\begin{sloppypar}
There are different methods for gathering requirements in both soft and hard methodologies, including interviews, questionnaires, document analysis, observation, throw-away prototypes, use cases, and static and dynamic views with users.
\end{sloppypar}
\subsubsection{System Design}
\begin{sloppypar}
Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly.
\end{sloppypar}
\begin{sloppypar}
The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows, and reports for better understanding. Other technical documentation like entity diagrams and data dictionaries will also be produced in this phase.
The documents for system testing are also prepared in this phase.
\end{sloppypar}
\subsubsection{Architecture Design}
The design of the computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all the requirements. The high-level design typically consists of the list of modules, the brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.
\subsubsection{Module Design}
The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules, and each of them is explained so that the programmer can start coding directly. The low-level design document, or program specifications, will contain a detailed functional logic of the module in pseudocode, together with:
\begin{itemize}
\item Database tables, with all elements, including their type and size.
\item All interface details with complete API references.
\item All dependency issues.
\item Error message listings.
\item Complete inputs and outputs for a module.
\end{itemize}
The unit test design is also developed in this stage.
\subsubsection{Validation Phases}
\begin{itemize}
\item Unit Testing
\item Integration Testing
\item System Testing
\item User Acceptance Testing
\end{itemize}
\subsubsection{Feasibility Study}
The feasibility study focuses on answering how well our project satisfies the following four dimensions:
\begin{enumerate}
\item Technology Feasibility
\item Economic Feasibility
\item Time Feasibility
\item Resources Feasibility
\end{enumerate}
Answering these questions can be easy for projects in established areas, where plenty of time and resources are available; as the risks increase, the feasibility study may turn out negative.
As far as our current project is concerned, we have carried out the initial risk analysis, architecture and design, so that the above questions can be accurately answered.
\begin{enumerate}
\item \textbf{Technology Feasibility:}\\The project needs knowledge of MATLAB, data mining, neural networks, the decision tree algorithm and the Weka tool. As we have enough resources for these technologies, we are able to satisfy the technological needs of the project completely, and we do not need additional software or hardware to develop the application.
\item \textbf{Financial Feasibility:}\\As this project does not need any special hardware or licensed software, and as the developers are undergoing their training, financial feasibility is not a problem for the organization. Moreover, the organization already has enough infrastructure to support the application well, so there is no need for any additional finance.
\item \textbf{Time Feasibility:}\\The project has to be completed in approximately 10 months. As we have gathered the requirements and analyzed them properly, it seems that the project will be completed in the time span provided. Therefore the project is feasible in time.
\item \textbf{Resource Feasibility:}\\The availability of hardware and software components and human resources is sufficient to engineer the project. Hence the software is feasible considering resources.
\end{enumerate}
\subsection{Cost and Effort Estimation}
\begin{itemize}
\item \textbf{Total Resources Estimation:}\\ We know that the cost of developing software, up until the point it is accepted, is only a fraction of the total cost of the system over the typical life cycle of the product. However, for the purpose of this study we will exclude maintenance costs and speak only of the development costs up until acceptance.
\item \textbf{Functions:}\\ Cost estimation based on the expected functionality of the system was first proposed by Albrecht in 1979, and has since been researched by several people.
This cost estimation relies on function points and requires the identification of all occurrences of five unique function types: external inputs, external outputs, logical internal files, external interfaces and queries. The sum of all occurrences is called the raw function count (FC). This value must be modified by the weighted rating of complexity factors, giving a technical complexity factor (TCF). The function points are given by $\mathrm{FP} = \mathrm{FC} \times \mathrm{TCF}$ for any given project.
\begin{itemize}
\item Estimation of Hardware
\begin{itemize}
\item Minimum requirements for development:
\begin{itemize}
\item Pentium 4 processor
\item 1 GB of RAM
\item 80 GB hard disk space
\end{itemize}
\item Minimum requirements for execution:
\begin{itemize}
\item Pentium 4 processor
\item 1 GB of RAM
\item 80 GB hard disk space on the server
\end{itemize}
\end{itemize}
\item Estimation of Software
\begin{itemize}
\item MATLAB 7 or a higher version
\item WEKA
\end{itemize}
\end{itemize}
\end{itemize}
\subsection{Risk Analysis and Management}
\begin{sloppypar}
Risk management is the identification, assessment, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives, whether positive or negative), followed by the coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events, or to maximize the realization of opportunities.
\end{sloppypar}
\begin{sloppypar}
Risks can come from uncertainty in financial markets, project failures (at any phase of the design, development, production, or sustainment life cycle), legal liabilities, credit risk, accidents, natural causes and disasters, as well as deliberate attack from an adversary, or events of uncertain or unpredictable root cause.
\end{sloppypar}
\begin{sloppypar}
The strategies to manage risk typically include transferring the risk to another party, avoiding the risk, reducing the negative effect or probability of the risk, or even accepting some or all of the potential or actual consequences of a particular risk. Risk management also faces difficulties in allocating resources; this is the idea of opportunity cost. Resources spent on risk management could have been spent on more profitable activities. Ideal risk management therefore minimizes spending (of manpower or other resources) while also minimizing the negative effects of risks.
\end{sloppypar}
\subsubsection{Method}
For the most part, these methods consist of the following elements, performed, more or less, in the following order:
\begin{itemize}
\item Identify, characterize, and assess threats.
\item Assess the vulnerability of critical assets to specific threats.
\item Determine the risk (i.e., the expected likelihood and consequences of specific types of attacks on specific assets).
\item Identify ways to reduce those risks.
\item Prioritize risk reduction measures based on a strategy.
\end{itemize}
\subsubsection{Risk management activities as applied to project management}
In project management, risk management includes the following activities:
\begin{itemize}
\item Planning how risk will be managed in the particular project. Plans should include risk management tasks, responsibilities, activities and budget.
\item Assigning a risk officer: a team member other than the project manager who is responsible for foreseeing potential project problems. A typical characteristic of a risk officer is healthy skepticism.
\item Maintaining a live project risk database. Each risk should have the following attributes: opening date, title, short description, probability and importance. Optionally a risk may have an assigned person responsible for its resolution and a date by which the risk must be resolved.
\item Creating an anonymous risk-reporting channel. Each team member should have the possibility to report risks that he or she foresees in the project.
\item Preparing mitigation plans for risks that are chosen to be mitigated. The purpose of the mitigation plan is to describe how this particular risk will be handled: what, when, by whom and how it will be done to avoid the risk or minimize the consequences if it becomes a liability.
\item Summarizing planned and faced risks, the effectiveness of mitigation activities, and the effort spent on risk management.
\end{itemize}
\subsubsection{Project Scheduling (timeline chart)}
\begin{table}[hbp]
\begin{center}
\begin{tabular}{ | l | p{5cm} | p{5cm} | l |}
\hline
Index & Stage & Tasks & Time \\ \hline
1 & Requirement Gathering & Requirements understanding, use case design, functional requirements & 7 weeks\\ \hline
2 & Analysis & Design or select architecture, framework development, use case realization, data set preprocessing & 7 weeks\\ \hline
3 & Design & Develop design model, MATLAB network model designing, system designing & 11 weeks\\ \hline
4 & Implementation & Environment setup, production rollout & 5 weeks\\ \hline
5 & Testing & Goal matching, fact table analysis, accuracy mapping, rule refinement & 4 weeks\\ \hline
6 & Deployment & Integration of modules and deployment of application & 2 weeks\\ \hline
\end{tabular}
\caption{Plan of Execution}
\end{center}
\end{table}
\textbf{Gantt chart:}\\
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3in,width=4in]{gantt.jpg}
\caption{Gantt chart}
\end{center}
\end{figure}
\textbf{Timeline chart:}\\
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3in,width=4in]{timeline.jpg}
\caption{Timeline chart}
\end{center}
\end{figure}
\newpage
\section{ANALYSIS AND DESIGN}
Design modeling uses a combination of text and diagrammatic forms to depict the requirements for data, function and behavior in a way that is relatively easy to understand and, more importantly, straightforward to review for correctness, completeness and consistency.
\begin{sloppypar}
The Unified Modeling Language (UML) is a graphical language for visualizing, specifying, constructing and documenting the artifacts of a software-intensive system. UML gives a standard way to write a system's blueprints, covering conceptual things, such as business processes and system functions, as well as concrete things, such as classes written in a specific programming language, database schemas and reusable software components.
\end{sloppypar}
Here we have included the following UML diagrams:
\subsection{Use Case Diagram}
A use case diagram is a type of behavioral diagram defined by the Unified Modeling Language (UML) and created from a use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actors. The roles of the actors in the system can also be depicted.\\
\begin{figure}[h!]
\begin{center}
\includegraphics[height=4in,width=2.5in]{UseCaseForReport.jpg}
\caption{Use cases}
\end{center}
\end{figure}
\newpage
\subsection{Activity Diagram}
An activity diagram is a diagram that shows activities and actions to describe workflows.
In the Unified Modeling Language, an activity diagram represents the business and operational step-by-step workflows of components in a system, and shows the overall flow of control. Activity diagrams are graphical representations of workflows of stepwise activities and actions, with support for choice, iteration and concurrency.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=5in,width=4in]{state.jpg}
\caption{Activity diagram}
\end{center}
\end{figure}
\newpage
\section{IMPLEMENTATION AND USER INTERFACES}
\subsection{Network Architecture}
The first step in training a feedforward network is to create the network object. The function \texttt{newff} creates a feedforward network; it requires four inputs and returns the network object.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3.5in,width=5.5in]{NNArchi.jpg}
\caption{ANN Architecture in MATLAB}
\end{center}
\end{figure}
\subsection{Network Setup}
Network training involves a number of attributes that are essential to achieve higher accuracy and an optimal network:
\begin{itemize}
\item \textbf{Network learning rate:} 0.1
\item \textbf{Number of epochs:} 1, 83525
\item \textbf{Network mean square error:} 0.1
\item \textbf{Goal achieved:} 0.099999
\end{itemize}
\newpage
\subsection{Training of the Neural Network}
The training process requires a set of examples of proper network behavior: network inputs \texttt{p} and target outputs \texttt{t}. During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function \texttt{net.performFcn}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3in,width=5in]{NNTrain.jpg}
\caption{ANN Training Process in MATLAB}
\end{center}
\end{figure}
\subsection{Training Graph}
The result of training is shown in the following figure.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3in,width=5in]{NNTraing.jpg}
\caption{ANN Training Graph in MATLAB}
\end{center}
\end{figure}
\subsection{Input \& Target Correlation Graph}
The following graph shows the deviation between the target and the actual output.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3.5in,width=5in]{NNDev.jpg}
\caption{Deviation Graph in MATLAB}
\end{center}
\end{figure}
\subsection{Weka Decision Tree Statistics}
Statistical information generated by Weka on the dataset using the C4.5 algorithm.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=3in,width=5in]{Wekatree.jpg}
\caption{Weka Decision tree Statistics}
\end{center}
\end{figure}
\subsection{Decision Tree Generated by the Weka Tool}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=2in,width=5in]{WekaDecTree.jpg}
\caption{Decision tree generated by Weka tool}
\end{center}
\end{figure}
\subsection{GUI of the System}
A GUI tool for classifying a user's test as positive or negative, with justification of the NN output.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=5in,width=5in]{GUI.jpg}
\caption{Graphical User Interface}
\end{center}
\end{figure}
\subsection{Code}
\subsubsection{Main classification module with GUI}
\begin{itemize}
\item \textbf{index.jsp}
\begin{verbatim}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Page - Diabetis Prediction Model</title>
<link rel="stylesheet" href="styles/style.css" type="text/css"
  media="screen" />
</head>
<body>
<div id="wrapper">
<div id="header">
<h1>Diabetis Prediction Model</h1>
<div id="nav">
<ul>
<li><a href="index.jsp" id="nav1">Home</a></li>
<li><a href="page.jsp" id="nav3">Neuro-Lab</a></li>
<li><a href="about.jsp" id="nav4">About</a></li>
<li><a href="contact.jsp" id="nav5">Contact</a></li>
</ul>
</div>
<div class="clear"></div>
</div>
<div id="middlebar-small">
<h2>Neuro Lab</h2>
</div>
<div id="body">
<div id="body-inner">
<div id="sidebar">
<h3 class="icon-feature">Authorised domain For Doctors.</h3>
<hr></br></br>
<p>Your valuable & Authorise feedback about Dibetes prediction will
lead us to improve the overall model.</br>
It will help us to</br>
</p>
<ul>
<li>To improve Regional Accuracy.</li>
<li>To Normliase our Dataset.</a></li>
<li>To Collect more Dataset on Dibetic Patients.</li>
</ul>
</br>
<form id="form2" name="form2" method="post" action="SQLops_main.jsp">
<ul class="Dform">
<li><label for="Select Name">Name</label><font>:</font>&nbsp
<select name="Uname">
<option value="Dr.A">Dr. A</option>
<option value="Dr.B">Dr. B</option>
<option value="Dr.C">Dr. C</option>
<option value="Dr.D">Dr.
D</option>
</select>
</li>
<li><label for="password">password</label><font>:</font>&nbsp
<input name="password" type="password" /></li>
<li><label for="Select Name">Outcome</label><font>:</font>&nbsp
<select name="selectResult">
<option value="1">Right Prediction</option>
<option value="0">Wrong Prediction</option>
</select>
</li>
</br>
<li><center><input type="submit" value="Send" class="button" /></center>
</ul>
<hr>
</form>
</br><center><h3>Success!!</h3></center>
</div>
<div id="main-content">
<form id="form1" name="form1" method="post" action="rules.jsp" >
<h1>Check Out for Following symptoms</h1>
</br><hr></br>
<ul class="form">
<li><label>1. Number of times pregnant</label><B>:</B>&nbsp
<input name="ntp" type="text" value="3"/>&nbsp <font>*(0-17)</font></li>
<li><label>2. Plasma glucose concentration</label><B>:</B>&nbsp
<input name="pgc" type="text" value="66" />&nbsp <font>*(57-197)</font></li>
<li><label>3. Diastolic blood pressure (mm Hg)</label><B>:</B>&nbsp
<input name="dbp" type="text" value="45" />&nbsp <font>*(30-110)</font></li>
<li><label>4. Triceps skin fold thickness (mm)</label><B>:</B>&nbsp
<input name="tsft" type="text" value="39" />&nbsp <font>*(7-63)</font></li>
<li><label>5. 2-Hour serum insulin (mu U/ml)</label><B>:</B>&nbsp
<input name="si" type="text" value="145" />&nbsp <font>*(0-846)</font></li>
<li><label>6. Body mass index</label><B>:</B>&nbsp
<input name="bmi" type="text" value="43.8" />&nbsp <font>*(0-67.1)&nbsp </font></li>
<li><label>7. Diabetes pedigree function</label><B>:</B>&nbsp
<input name="dpf" type="text" value="0.45" />&nbsp <font>*(0.078-2.42)</font></li>
<li><label>8.
Age</label><B>:</B>&nbsp
<input name="age" type="text" value="34" />&nbsp <font>*(21-81)</font></li>
</br></br>
</ul>
<center><input type="submit" value="Submit Query" class="button" /></center>
</form>
</br>
<center><h2>Tested *Negative for Diabetes</h2></center>
</br></br>
<form id="form1" name="form2" method="post" action="justfy.jsp" >
<center><input type="submit" value="Justify Your Prediction" class="button" /><center>
</br>
<ul>
<li><b>Plasma glucose concentration is Less than 155.</b></li>
<li><b>Age of the pantient is Greater than 25 Years.</b></li>
<li><b>Pantient is pregnant less than 11 times.</b></li>
<li><b>Diastolic blood pressure is less than 53 (mm hg)</b></li>
</ul>
</form>
</div>
</div>
</div>
<div id="footer">
<div id="footer-right">
<a href="index.jsp">Home</a> |
<a href="page.jsp">Neuro-Lab</a> |
<a href="about.jsp">About</a> |
<a href="contact.jsp">Contact</a>
</div>
Copyright &copy; 2012 Neural Network Group
</div>
</div>
</body>
</html>
\end{verbatim}
\end{itemize}
\subsubsection{Module for feed-forward NN}
\begin{verbatim}
function net = newff(pr,s,tf,btf,blf,pf)
if nargin < 2
  net = newnet('newff');
  return
end
% Defaults
if nargin < 4, btf = 'trainlm'; end
if nargin < 5, blf = 'learngdm'; end
if nargin < 6, pf = 'mse'; end
% Error checking
if (~isa(pr,'double')) | ~isreal(pr) | (size(pr,2) ~= 2)
  error('Input ranges is not a two column matrix.')
end
if any(pr(:,1) > pr(:,2))
  error('Input ranges has values in the second column larger in the values in the same row of the first column.')
end
if isa(s,'cell')
  if (size(s,1) ~= 1)
    error('Layer sizes is not a row vector of positive integers.')
  end
  for i=1:length(s)
    si = s{i};
    if ~isa(si,'double') | ~isreal(si) | any(size(si) ~= 1) | any(si<1) | any(round(si) ~= si)
      error('Layer sizes is not a row vector of positive integers.')
    end
  end
  s = cell2mat(s);
end
if (~isa(s,'double')) | ~isreal(s) | (size(s,1) ~= 1) | any(s<1) | any(round(s) ~= s)
  error('Layer sizes is not a row vector of positive
integers.')
end
% More defaults
Nl = length(s);
if nargin < 3, tf = {'tansig'}; tf = [tf(ones(1,Nl))]; end
% Architecture
net = network(1,Nl);
net.biasConnect = ones(Nl,1);
net.inputConnect(1,1) = 1;
[j,i] = meshgrid(1:Nl,1:Nl);
net.layerConnect = (j == (i-1));
net.outputConnect(Nl) = 1;
net.targetConnect(Nl) = 1;
% Simulation
net.inputs{1}.range = pr;
for i=1:Nl
  net.layers{i}.size = s(i);
  net.layers{i}.transferFcn = tf{i};
end
% Performance
net.performFcn = pf;
% Adaption
net.adaptfcn = 'trains';
net.inputWeights{1,1}.learnFcn = blf;
for i=1:Nl
  net.biases{i}.learnFcn = blf;
  net.layerWeights{i,:}.learnFcn = blf;
end
% Training
net.trainfcn = btf;
% Initialization
net.initFcn = 'initlay';
for i=1:Nl
  net.layers{i}.initFcn = 'initnw';
end
net = init(net);
\end{verbatim}
\subsubsection{NN training module}
\begin{verbatim}
function [net,tr,Y,E,Pf,Af]=train(net,P,T,Pi,Ai,VV,TV)
if nargin < 2
  error('Not enough input arguments.');
end
if ~isa(net,'network')
  error('First argument is not a network.');
end
if net.hint.zeroDelay
  error('Network contains a zero-delay loop.')
end
switch nargin
case 2
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P);
  VV = []; TV = [];
case 3
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T);
  VV = []; TV = [];
case 4
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T,Pi);
  VV = []; TV = [];
case 5
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T,Pi,Ai);
  VV = []; TV = [];
case 6
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T,Pi,Ai);
  if isempty(VV)
    VV = [];
  else
    if ~hasfield(VV,'P'), error('VV.P must be defined or VV must be [].'), end
    if ~hasfield(VV,'T'), VV.T = []; end
    if ~hasfield(VV,'Pi'), VV.Pi = []; end
    if ~hasfield(VV,'Ai'), VV.Ai = []; end
    [err,VV.P,VV.T,VV.Pi,VV.Ai,VV.Q,VV.TS,matrixForm] = ...
      trainargs(net,VV.P,VV.T,VV.Pi,VV.Ai);
    if length(err)
      disp('???
Error with validation vectors using ==> train')
      error(err)
    end
  end
  TV = [];
case 7
  [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T,Pi,Ai);
  if isempty(VV)
    VV = [];
  else
    if ~hasfield(VV,'P'), error('VV.P must be defined or VV must be [].'), end
    if ~hasfield(VV,'T'), VV.T = []; end
    if ~hasfield(VV,'Pi'), VV.Pi = []; end
    if ~hasfield(VV,'Ai'), VV.Ai = []; end
    [err,VV.P,VV.T,VV.Pi,VV.Ai,VV.Q,VV.TS,matrixForm] = ...
      trainargs(net,VV.P,VV.T,VV.Pi,VV.Ai);
    if length(err)
      disp('??? Error with validation vectors using ==> train')
      error(err)
    end
  end
  if isempty(TV)
    TV = [];
  else
    if ~hasfield(TV,'P'), error('TV.P must be defined or TV must be [].'), end
    if ~hasfield(TV,'T'), TV.T = []; end
    if ~hasfield(TV,'Pi'), TV.Pi = []; end
    if ~hasfield(TV,'Ai'), TV.Ai = []; end
    [err,TV.P,TV.T,TV.Pi,TV.Ai,TV.Q,TV.TS,matrixForm] = ...
      trainargs(net,TV.P,TV.T,TV.Pi,TV.Ai);
    if length(err)
      disp('??? Error with test vectors using ==> train')
      error(err)
    end
  end
end
if length(err), error(err), end
% TRAIN NETWORK
% -------------
% Training function
trainFcn = net.trainFcn;
if ~length(trainFcn)
  error('Network "trainFcn" is undefined.')
end
% Delayed inputs, layer targets
Pc = [Pi P];
Pd = calcpd(net,TS,Q,Pc);
Tl = expandrows(T,net.hint.targetInd,net.numLayers);
% Validation and Test vectors
doValidation = ~isempty(VV);
doTest = ~isempty(TV);
if (doValidation)
  VV.Pd = calcpd(net,VV.TS,VV.Q,[VV.Pi VV.P]);
  VV.Tl = expandrows(VV.T,net.hint.targetInd,net.numLayers);
end
if (doTest)
  TV.Pd = calcpd(net,TV.TS,TV.Q,[TV.Pi TV.P]);
  TV.Tl = expandrows(TV.T,net.hint.targetInd,net.numLayers);
end
% Train network
net = struct(net);
flatten_time = (net.numLayerDelays == 0) & (TS > 1) & (~strcmp(trainFcn,'trains'));
if flatten_time
  Pd = seq2con(Pd); Tl = seq2con(Tl);
  if (doValidation)
    VV.Pd = seq2con(VV.Pd); VV.Tl = seq2con(VV.Tl);
    VV.Q = VV.Q*VV.TS; VV.TS = 1;
  end
  if (doTest)
    TV.Pd = seq2con(TV.Pd); TV.Tl = seq2con(TV.Tl);
    TV.Q = TV.Q*TV.TS; TV.TS = 1;
  end
  [net,tr,Ac,El] = ...
feval(trainFcn,net,Pd,Tl,Ai,Q*TS,1,VV,TV);
  Ac = con2seq(Ac,TS); El = con2seq(El,TS);
else
  [net,tr,Ac,El] = feval(trainFcn,net,Pd,Tl,Ai,Q,TS,VV,TV);
end
net = class(net,'network');
% Network outputs, errors, final inputs
Y = Ac(net.hint.outputInd,net.numLayerDelays+[1:TS]);
E = El(net.hint.targetInd,:);
Pf = Pc(:,TS+[1:net.numInputDelays]);
Af = Ac(:,TS+[1:net.numLayerDelays]);
% FORMAT OUTPUT ARGUMENTS
% -----------------------
if (matrixForm)
  Y = cell2mat(Y); E = cell2mat(E);
  Pf = cell2mat(Pf); Af = cell2mat(Af);
end
% CLEAN UP TEMPORARY FILES
% ------------------------
nntempdir = fullfile(tempdir,'matlab_nnet');
if exist(nntempdir)
  rmpath(nntempdir);
  tf = dir(nntempdir);
  for i=1:length(tf)
    if (~tf(i).isdir)
      delete(fullfile(nntempdir,tf(i).name));
    end
  end
  rmdir(nntempdir)
end
% ============================================================
function [s2] = expandrows(s,ind,rows)
s2 = cell(rows,size(s,2));
s2(ind,:) = s;
% ============================================================
function [err,P,T,Pi,Ai,Q,TS,matrixForm] = trainargs(net,P,T,Pi,Ai);
% Check signals: all matrices or all cell arrays
% Change empty matrices/arrays to proper form
switch class(P)
case 'cell', matrixForm = 0; name = 'cell array'; default = {};
case {'double','logical'}, matrixForm = 1; name = 'matrix'; default = [];
otherwise, err = 'P must be a matrix or cell array.'; return
end
if (nargin < 3)
  T = default;
elseif ((isa(T,'double')|isa(T,'logical')) ~= matrixForm)
  if isempty(T)
    T = default;
  else
    err = ['P is a ' name ', so T must be a ' name ' too.']; return
  end
end
if (nargin < 4)
  Pi = default;
elseif ((isa(Pi,'double')|isa(Pi,'logical')) ~= matrixForm)
  if isempty(Pi)
    Pi = default;
  else
    err = ['P is a ' name ', so Pi must be a ' name ' too.']; return
  end
end
if (nargin < 5)
  Ai = default;
elseif ((isa(Ai,'double')|isa(Ai,'logical')) ~= matrixForm)
  if isempty(Ai)
    Ai = default;
  else
    err = ['P is a ' name ', so Ai must be a ' name ' too.']; return
  end
end
% Check Matrices, Matrices -> Cell
% Arrays
if (matrixForm)
  [R,Q] = size(P); TS = 1;
  [err,P] = formatp(net,P,Q); if length(err), return, end
  [err,T] = formatt(net,T,Q,TS); if length(err), return, end
  [err,Pi] = formatpi(net,Pi,Q); if length(err), return, end
  [err,Ai] = formatai(net,Ai,Q); if length(err), return, end
% Check Cell Arrays
else
  [R,TS] = size(P);
  [R1,Q] = size(P{1,1});
  [err] = checkp(net,P,Q,TS); if length(err), return, end
  [err,T] = checkt(net,T,Q,TS); if length(err), return, end
  [err,Pi] = checkpi(net,Pi,Q); if length(err), return, end
  [err,Ai] = checkai(net,Ai,Q); if length(err), return, end
end
\end{verbatim}
\section{TESTING}
\subsection{NN Result with 12 Hidden Nodes}
\begin{table}[hbp]
\begin{center}
\begin{tabular}{ | p{4cm} | p{4cm} | p{4cm} | p{3cm} |}
\hline
Training + validation & Correctly classified & Wrongly classified & \% Accuracy \\ \hline
316 & 283 & 33 & 89.55\% \\ \hline
Testing samples & Correctly classified & Wrongly classified & \% Accuracy\\ \hline
216 & 162 & 54 & 75\%\\ \hline
\end{tabular}
\caption{Fact table with 12 hidden nodes}
\end{center}
\end{table}
\subsection{NN Result with 20 Hidden Nodes}
\begin{table}[hbp]
\begin{center}
\begin{tabular}{ | p{4cm} | p{4cm} | p{4cm} | p{3cm} |}
\hline
Training + validation & Correctly classified & Wrongly classified & \% Accuracy \\ \hline
366 & 357 & 9 & 97.54\% \\ \hline
Testing samples & Correctly classified & Wrongly classified & \% Accuracy\\ \hline
166 & 121 & 45 & 72.89\%\\ \hline
\end{tabular}
\caption{Fact table with 20 hidden nodes}
\end{center}
\end{table}
\newpage
\subsection{Rules Table}
\begin{table}[hbp]
\begin{center}
\begin{tabular}{ | l | l | l | l | l | p{1cm}| }
\hline
Rule No. & No.
of samples & Correctly classified & Wrongly classified & \% Support & \% Error \\ \hline
R1 & 62 & 62 & 0 & 17.32 & 0 \\ \hline
R2 & 2 & 2 & 0 & 0.56 & 0 \\ \hline
R3 & 45 & 42 & 3 & 12.57 & 6.67 \\ \hline
R4 & 4 & 3 & 1 & 1.12 & 25 \\ \hline
R5 & 97 & 79 & 18 & 27.09 & 18.56 \\ \hline
R6 & 52 & 30 & 22 & 14.52 & 42.31 \\ \hline
R7 & 5 & 5 & 0 & 1.4 & 0 \\ \hline
R8 & 31 & 25 & 6 & 8.66 & 19.35 \\ \hline
R9 & 8 & 8 & 0 & 2.23 & 0 \\ \hline
Total & 358 & 301 & 57 & \, & \,\\ \hline
R10 & 52 & 45 & 7 & 14.52 & 13.45 \\ \hline
\end{tabular}
\caption{Rules Table}
\end{center}
\end{table}
\subsection{Rules Generated Using Discrete Attributes}
\begin{verbatim}
Rule R1:  If D2<=154 and D2<=100 and D5<=25, then predict class 0.
Rule R2:  If D2<=154 and D2<=100 and D5>25 and D1<=10 and D3<=52,
          then predict class 1.
Rule R3:  If D2<=154 and D2<=100 and D5>25 and D1<=10 and D3>52,
          then predict class 0.
Rule R4:  If D2<=154 and D2<=100 and D5>25 and D1>10, then predict class 1.
Rule R5:  If D2<=154 and D2>100 and D5<=28, then predict class 0.
Rule R6:  If D2<=154 and D2>100 and D5>28 and D5<=58 and D3<=76,
          then predict class 0.
Rule R7:  If D2<=154 and D2>100 and D5>28 and D5<=58 and D3>76 and D4<=21,
          then predict class 0.
Rule R8:  If D2<=154 and D2>100 and D5>28 and D5<=58 and D3>76 and D4>21,
          then predict class 1.
Rule R9:  If D2<=154 and D2>100 and D5>58, then predict class 0.
Rule R10: If D2>154, then predict class 1.
\end{verbatim}
\newpage
\section{APPLICATION}
\subsection{Business Applications}
The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning in about 1984 with the adaptive channel equalizer. This device, an outstanding commercial success, is a single-neuron network used in long-distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier, and a risk analysis system.
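The discrete-attribute rules R1--R10 listed in the Testing section translate directly into an executable decision procedure. The sketch below is written in Python purely for illustration; the arguments \texttt{d1}--\texttt{d5} stand for the attributes D1--D5 exactly as they appear in the rule listing (the mapping of D1--D5 back to the original dataset attributes is not specified there, so none is assumed here):

```python
def predict(d1, d2, d3, d4, d5):
    """Apply rules R1-R10 from the extracted rule set.

    Returns 1 (predicted positive) or 0 (predicted negative).
    d1..d5 correspond to attributes D1..D5 of the rule listing.
    """
    if d2 > 154:                      # R10
        return 1
    if d2 <= 100:
        if d5 <= 25:                  # R1
            return 0
        if d1 > 10:                   # R4
            return 1
        return 1 if d3 <= 52 else 0   # R2 / R3
    # here 100 < d2 <= 154
    if d5 <= 28:                      # R5
        return 0
    if d5 > 58:                       # R9
        return 0
    if d3 <= 76:                      # R6
        return 0
    return 1 if d4 > 21 else 0        # R8 / R7
```

Note that the rule conditions partition the input space, so exactly one rule fires for every sample; this is what allows the extracted rule set to act as a complete, transparent substitute for the pruned network on these attributes.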
Neural networks have been applied in many other fields since the DARPA report was written. A list of some applications mentioned in the literature follows.
\subsection{Credit Card Activity Checking}
Neural networks are used to spot unusual credit card activity that might possibly be associated with the loss of a credit card.
\subsection{Financial}
Real estate appraisal, loan advising, mortgage screening, corporate bond rating, credit-line use analysis, portfolio trading programs, corporate financial analysis, and currency price prediction.
\subsection{Medical and Banking}
Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency-room test advisement, and detection of transactions that are likely to be fraudulent.
\section{FUTURE ENHANCEMENTS}
\begin{itemize}
\item Rule optimization using a genetic algorithm.
\item Extending Re-RX to multi-group classification problems.
\item The base algorithm can be changed from C4.5 to C5.0.
\item Replacing the hyperplane by a technique that can handle inseparability.
\end{itemize}
\newpage
\section{CONCLUSION}
We have developed a new NN rule extraction algorithm, Re-RX, that is capable of extracting classification rules from NNs trained using both discrete and continuous attributes. The novel characteristic of the algorithm lies in its hierarchical nature: discrete variables are considered before continuous variables, in a recursive way. We have illustrated the working of the algorithm using three real-world data sets. More specifically, we have considered application scoring, where classification models are built to distinguish good payers from defaulters based on the characteristics provided at the time of application. Two key requirements of application scoring models are performance and interpretability.
Especially the latter is becoming more and more important, since financial regulators and law enforcement bodies require all risk management models of a financial institution to be validated. Using real-life application scoring data sets, we have demonstrated that Re-RX is able to extract accurate and interpretable classification rules. There is an obvious limitation to the applicability of the proposed algorithm, however. When the class labels of the data are best described as a nonlinear function of the continuous inputs, the extracted rules will not be able to preserve the accuracy of the NN. This is a consequence of the property of Re-RX of generating only linear classifiers involving the continuous attributes, in order to improve the comprehensibility of the overall rule set. It is possible to preserve the accuracy, for example by having an NN or another nonlinear classifier in place of the linear hyperplane; however, such rules would not be as comprehensible as Re-RX-generated rules. The issue of accuracy versus complexity and comprehensibility of the rules generated from trained NNs certainly deserves further research.
\newpage
\appendix
\section{Recursive Neural Network Rule Extraction for Data With Mixed Attributes [Re-RX] Algorithm}
\rule{6.3in}{.02in}
\\Algorithm \emph{Re-RX($S$, $D$, $C$)}\\
\rule{6.3in}{.02in}
\\Input: a set of data samples $S$ having discrete attributes $D$ and continuous attributes $C$.\\
Output: a set of classification rules.\\
\begin{enumerate}
\item Train and prune an NN using the data set $S$ and all of its attributes $D$ and $C$.
\item Let $\acute{D}$ and $\acute{C}$ be the sets of discrete and continuous attributes still present in the network, respectively. Let $\acute{S}$ be the set of data samples that are correctly classified by the pruned network.
\item If $\acute{D} = \emptyset$, then generate a hyperplane to split the samples in $\acute{S}$ according to the values of their continuous attributes $\acute{C}$ and stop.\\
Otherwise, using only the discrete attributes $\acute{D}$, generate the set of classification rules $R$ for the data set $\acute{S}$.
\item For each rule $R_i$ generated: if support$(R_i) > \delta_1$ and error$(R_i) > \delta_2$, then
\begin{itemize}
\item Let $S_i$ be the set of data samples that satisfy the condition of rule $R_i$, and let $D_i$ be the set of discrete attributes that do not appear in the rule condition of $R_i$.
\item If $D_i = \emptyset$, then generate a hyperplane to split the samples in $S_i$ according to the values of their continuous attributes $C_i$ and stop.
\item Otherwise, call \emph{Re-RX($S_i$, $D_i$, $C_i$)}.
\end{itemize}
\end{enumerate}
\rule{6.3in}{.02in}
\newpage
\begin{thebibliography}{00}
\bibitem{1} H. Kahramanli and N. Allahverdi, ``Extracting rules for classification problems: AIS based approach,'' \emph{Expert Systems with Applications}, 2009, article in press.
\bibitem{2} R. Setiono, B. Baesens, and C. Mues, ``Recursive neural network rule extraction for data with mixed attributes,'' \emph{IEEE Trans. Neural Netw.}, vol. 19, no. 2, pp. 299--307, Feb. 2008.
\bibitem{3} J. Han and M. Kamber, \emph{Data Mining: Concepts and Techniques}. San Francisco, CA, USA: Morgan Kaufmann, 2001.
\bibitem{4} W.-S. Chen and Y.-K. Du, ``Using neural networks and data mining techniques for the financial distress prediction model,'' \emph{Expert Systems with Applications}, vol. 36, pp. 4075--4086, 2009.
\bibitem{5} L. M. Silva, J. Marques de Sá, and L. A. Alexandre, ``Data classification with multilayer perceptrons using a generalized error function,'' \emph{Neural Networks}, vol. 21, pp. 1302--1310, 2008.
\bibitem{6} H. Kahramanli and N. Allahverdi, ``Design of a hybrid system for the diabetes and heart diseases,'' \emph{Expert Systems with Applications}, vol. 35, pp. 82--89, 2008.
\bibitem{7} H. Kahramanli and N. Allahverdi, ``Mining classification rules for liver disorders,'' \emph{International Journal of Mathematics and Computers in Simulation}, issue 1, vol. 3, 2009.
\bibitem{8} Y.-M. Huang, C.-M. Hung, and H. C. Jiau, ``Evaluation of neural networks and data mining methods on a credit assessment task for class imbalance problem,'' \emph{Nonlinear Analysis: Real World Applications}, vol. 7, pp. 720--747, 2006.
\bibitem{9} H. Kahramanli and N. Allahverdi, ``A new method for composing classification rules: AR+OPTBP,'' in \emph{Proc. 5th International Advanced Technologies Symposium (IATS'09)}, Karabuk, Turkey, May 13--15, 2009.
\bibitem{10} M. Rocha, P. Cortez, and J. Neves, ``Evolution of neural networks for classification and regression,'' \emph{Neurocomputing}, vol. 70, pp. 2809--2816, 2007.
\bibitem{11} R. Nayak, ``Generating rules with predicates, terms and variables from the pruned neural networks,'' \emph{Neural Networks}, 2009, article in press.
\bibitem{12} S. I. Gallant, ``Connectionist expert systems,'' \emph{Communications of the ACM}, vol. 31, no. 2, pp. 152--169, 1988.
\bibitem{13} R. Setiono and H. Liu, ``Symbolic representation of neural networks,'' \emph{IEEE Computer}, vol. 29, no. 3, pp. 71--77, Mar. 1996.
\bibitem{14} R. Setiono, ``Extracting rules from neural networks by pruning and hidden-unit splitting,'' \emph{Neural Comput.}, vol. 9, no. 1, pp. 205--225, 1997.
\bibitem{15} A. Gupta, S. Park, and S. M. Lam, ``Generalized analytic rule extraction for feedforward neural networks,'' \emph{IEEE Trans. Knowl. Data Eng.}, vol. 11, no. 6, pp. 985--991, Nov./Dec. 1999.
\bibitem{16} T. A. Etchells and J. P. G. Lisboa, ``Orthogonal search-based rule extraction (OSRE) for trained neural networks: A practical and efficient approach,'' \emph{IEEE Trans. Neural Netw.}, vol. 17, no. 2, pp. 374--384, Mar. 2006.
\bibitem{17} M. Craven and J. Shavlik, ``Extracting tree-structured representations of trained networks,'' in \emph{Advances in Neural Information Processing Systems (NIPS)}, vol. 8. Cambridge, MA: MIT Press, 1996, pp. 24--30.
\bibitem{18} D. Nauck and R. Kruse, ``Neuro-fuzzy methods in fuzzy rule generation,'' in \emph{Fuzzy Sets in Approximate Reasoning and Information Systems}. Norwell, MA: Kluwer, 1999, pp. 305--334.
\bibitem{19} J. M. Benitez, J. L. Castro, and I. Requena, ``Are artificial neural networks black boxes?,'' \emph{IEEE Trans. Neural Netw.}, vol. 8, no. 5, pp. 1156--1164, Sep. 1997.
\bibitem{20} H. Tsukimoto, ``Extracting rules from trained neural networks,'' \emph{IEEE Trans. Neural Netw.}, vol. 11, no. 2, pp. 377--389, Mar. 2000.
\bibitem{21} I. A. Taha and J. Ghosh, ``Symbolic interpretation of artificial neural networks,'' \emph{IEEE Trans. Knowl. Data Eng.}, vol. 11, no. 3, pp. 448--463, May/Jun. 1999.
\bibitem{22} J. R. Rabunal, J. Dorado, A. Pazos, J. Pereira, and D. Rivero, ``A new approach to the extraction of ANN rules and to their generalization capacity through GP,'' \emph{Neural Comput.}, vol. 16, no. 7, pp. 1483--1523, 2004.
\end{thebibliography}
\end{document}
% $Id: ExecComponents.tex,v 1.1 2008/01/31 18:04:16 dconway Exp $ %\chapter{\label{chapter:Engine}Components of the GMAT Engine} %\chapauthor{Darrel J. Conway}{Thinking Systems, Inc.} \setcounter{footnote}{0} \section*{Overview of Chapters~\ref{chapter:Moderator} through \ref{chapter:Publisher}} Mission modeling is performed in GMAT through the core numerical engine designed for the system. This part of the architectural specification describes the classes that make up the core engine components: the Moderator, the Factory Manager, the Configuration Manager, the Publisher, and Sandboxes. The purpose of each of these components is summarized in Table~\ref{table:EngineComponents}. \begin{table}[H] \begin{center} \caption{\label{table:EngineComponents}Key Elements of the GMAT Engine} \setlength\extrarowheight{2pt} \begin{tabular}{|p{1.4in}|p{1in}|p{3.4in}|} \hline Component & Notes & Description \\ \hline \hline Moderator & Singleton & Controls the program flow in the Engine.\\ Factory Manager & Singleton & Responsible for the creation of objects and Mission Control Sequence commands used in the flight dynamics model.\\ Configuration Manager & Singleton & Stores and retrieves user objects created by the Factory Manager.\\ Publisher & Singleton & Passes data generated during a run to the Subscribers that present these data to users.\\ Sandbox & Multiple copies allowed\footnotemark & The component that manages initialization and execution of the Mission Control Sequence when a mission is run.\\ \hline \end{tabular} \end{center} \end{table} \footnotetext{While GMAT is designed to allow more than one Sandbox, the current implementation only uses a single Sandbox.} \section*{Contents of the Chapters} Each component of the engine is described in a separate chapter, structured on the following outline: \begin{description} \item[Overview] The introductory text for each chapter contains an overview of the component and its role in the GMAT engine. 
\item[Design Principles] This section describes the motivation behind the component, along with the principles followed while designing it. It includes a description of patterns or other criteria used in the component design, the roles and responsibilities handled by the component, and other design features that will help you understand how the component fits into the GMAT engine. \item[Design] The design section presents the design for the component. It starts with the class diagram for the component, followed by descriptions of representative methods and attributes of the class selected to help you understand its implementation. It also provides an explanation of how the class satisfies the roles and responsibilities identified in the preceding section, through the use of activity and sequence diagrams. Sometimes the details of these descriptions are better placed in other portions of the design specification; when that happens, a summary is provided in the chapter along with a reference to the detailed description. \item[Usage and Modification] This section of the chapter provides additional tips about how to use or change the component, and includes text describing best practices for working with it. \end{description} % \section{\label{ModeratorOverview}The Moderator} % % The core executive for GMAT is the Moderator. The Moderator controls program flow, creating % components through the factory manager that are then managed in the configuration manager and %using % these components to model missions in a sandbox. % % \begin{figure}[htb] % \begin{center} % \includegraphics[370,346]{Images/TheModerator.png} % \caption{The Moderator in its Environment} % \label{figure:ModeratorClassDiagram} % \end{center} % \end{figure} % % Figure ~\ref{figure:ModeratorClassDiagram} shows the Moderator, the classes it interacts with, and % many of its key internal structures. 
The interactions between the Moderator and other elements of % GMAT's engine were presented in Chapter~\ref{chapter:TopLevel}. The sequence diagrams presented % there describe at a high level the interfaces to the Moderator and their usage when constructing a % model. The methods shown in Figure~\ref{figure:ModeratorClassDiagram} present these interfaces in % more detail. The following paragraphs describe these interfaces and the internal data members %used % by the Moderator. % % \subsection{\label{section:ModeratorAttributes}Attributes and Methods of the Moderator Class} % % The Moderator is a large class. The following lists of attributes and methods provide an overview % of the features of the class. Interested readers should browse the code in the % base/executive/Moderator.cpp and base/executive/Moderator.hpp, or generate the full API using % Doxygen\cite{doxygen}, to see the full implementation details of the Moderator. % % \subparagraph{\textit{Class Attributes}} % % The key data members of the Moderator are the pointers to other engine elements and some core %model % components, as are listed here: % % \begin{itemize} % \item \textbf{static Moderator *instance}: The singleton instance of the Moderator. % \item \textbf{std::vector<Sandbox*> sandboxes}: The sandboxes managed by the moderator. % \item \textbf{std::vector<GmatCommand*> commands}: The Mission Control Sequences managed by this % Moderator. The current implementation manages one Mission Control Sequence per Sandbox. % \item \textbf{static GuiInterpreter* theGuiInterpreter}: A pointer to the GuiInterpreter %singleton. % \item \textbf{static ScriptInterpreter* theScriptInterpreter}: A pointer to the ScriptInterpreter % singleton. % \item \textbf{ConfigManager* theConfigManager}: A pointer to the configuration manager singleton. % \item \textbf{FactoryManager* theFactoryManager}: A pointer to the factory manager singleton. 
% \item \textbf{FileManager* theFileManager}: A pointer to the file manager, a helper class used to % manage file processing. % \item \textbf{Publisher* thePublisher}: A pointer to the Publisher singleton. % \item \textbf{SolarSystem* theSolarSystemInUse}: A pointer to the SolarSystem instance used when % running a mission. % \item \textbf{std::string theCurrentPlanetarySource}: An identifier for the current default % planetary ephemerides source. % \item \textbf{RunState runState}: The current state of the engine. % \end{itemize} % % \subparagraph{\textit{Engine Execution Methods}} % % The methods implemented in the Moderator are broken into two categories: those used to run the % engine, and those used to manipulate the configured objects and commands. The following list % describes the methods that fall into the first group. % % \begin{itemize} % \item \textbf{bool Initialize(bool isFromGui = false)}: Initializes the Moderator at the start of % the program. The argument to this method, isFromGui, is set to true if the Moderator was created %in % GMAT's GUI; otherwise, it should be set to false. Moderator initialization is used to create the % other singleton classes used in GMAT, some default model elements, and, when launched from the %GUI, % a default Mission Control Sequence. % \item \textbf{void Finalize()}: Cleans up memory when the GMAT engine is closed down. % \item \textbf{StringArray GetListOfFactoryItems(Gmat::ObjectType type)}: Returns a list of item % object types, identified by generic type, that can be created in the Factory subsystem. This %method % is used to determine the types of concrete objects that can be created of a given abstract type -- % for example, the list of available GmatCommands can be generated by calling this method and % requesting the list of Gmat::COMMAND items. % \item \textbf{bool LoadDefaultMission()}: Sets up the default mission that appears when GMAT is % started. 
% \item \textbf{void ClearAllSandboxes()}: Instructs each sandbox to clear its data and local object % maps. % \item \textbf{Integer RunMission(Integer sandboxNum = 1)}: This is the entry point for running a % mission in GMAT. The method loads the configured objects into the Sandbox, sets the Mission %Control % Sequence, instructs the Sandbox to initialize itself, and then to execute the control sequence. % \item \textbf{Integer ChangeRunState(const std::string \&state, Integer sandboxNum = 1)}: Sets % GMAT's RunState to IDLE, PAUSED, or RUNNING, based on the state description contained in the state % string. The sandboxNum parameter is not yet used; when multiple Sandboxes are implemented, each % Sandbox will have its own RunState. % \item \textbf{RunState GetUserInterrupt()}: Retrieves the RunState from the Modeartor so that the % Sandbox can take appropriate actions in response to state changes generated by the user. This % method is used when the Sandbox polls the Moderator for user messages, and provides a time slice %for % the GUI to process user actions like selection of the Stop or Pause buttons on the toolbar. % \item \textbf{RunState GetRunState()}: Returns the current RunState from the Moderator. % \item \textbf{bool InterpretScript(const std::string \&filename, bool readBack, const std::string % \&newPath)}: This method clears the current configuration and Mission Control Sequence from %memory, % and then reads in and parses a script file, building a new configuration and Mission Control % Sequence based on the contents of the script file. % \item \textbf{bool SaveScript(const std::string \&filename, WriteMode mode = Gmat::SCRIPTING)}: % Writes the current configuration and Mission Control Sequence to a script file, formatted in the % mode specified by the mode flag. 
% \item \textbf{std::string GetScript(WriteMode mode = Gmat::SCRIPTING)}: Writes the current % configuration and Mission Control Sequence to a string, formatted in the mode specified by the %mode % flag. % \item \textbf{Integer RunScript(Integer sandboxNum = 1)}: Runs the current mission in the %specified % Sandbox. % \end{itemize} % % \subparagraph{\textit{Configured Object and Command Methods}} % % The Moderator has methods to handle object and command creation for all of the user accessible % components of the flight dynamics model, and to access these components after they have been % created. These facilities are used by the Interpreters during model configuration. Future %releases % may also provide access to these methods from the Sandbox during a run to support object creation % during a mission. % % The configured objects are handled by type in the current implementation. That means that there %are % methods with the names CreateSpacecraft and GetSpacecraft for Spacecraft objects, CreateBurn and % GetBurn for Burn objects, and so forth for all of the configured objects created or accessed %through % the Moderator. The methods used for these functions are all similar in form, so for the purposes %of % this chapter, they are referenced using the markers <Object> and <Object*> in the following list. % % The methods used by the Moderator to create and access objects and commands are listed here: % % \begin{itemize} % \item \textbf{<Object*> Create<Object>(const std::string \&type, const std::string \&name)}: %Creates % an object of a given type, and assigns the specified name to that object. % \item \textbf{<Object*> Get<Object>(const std::string \&name)}: Retrieves a named object from the % configuration. % \item \textbf{GmatCommand* InterpretGmatFunction(const std::string \&functionFilename)}: Builds a % command list for a GmatFunction\footnote{GmatFunctions are not yet fully implemented.}. 
% \item \textbf{GmatCommand* CreateCommand(const std::string \&type, const std::string \&name, bool % \&retFlag)}: Creates a GmatCommand and returns it. This method does not add commands created with % it to the Mission Control Sequence. % \item \textbf{GmatCommand* AppendCommand(const std::string \&type, const std::string \&name, bool % \&retFlag, Integer sandboxNum = 1)}: Creates a GmatCommand, appends it to the Mission Control % Sequence, and returns it. % \item \textbf{GmatCommand* AppendCommand(GmatCommand *cmd, Integer sandboxNum = 1)}: Appends an % existing GmatCommand to the Mission Control Sequence. % \item \textbf{GmatCommand* GetFirstCommand(Integer sandboxNum = 1)}: Returns the head of the %Mission % Control Sequence for the specified Sandbox. % \item \textbf{GmatCommand* DeleteCommand(GmatCommand *cmd, Integer sandboxNum = 1)}: Removes a % command from the Mission Control Sequence linked list. The removed command is returned to the % caller for processing or deletion; the method does not call the destructor on the command. % \item \textbf{bool InsertCommand(GmatCommand *cmd, GmatCommand *prevCmd, Integer sandboxNum~=~1)}: % Inserts a command into the Mission Control Sequence. % \item \textbf{GmatBase* GetConfiguredObject(const std::string \&name)}: Retrieves a configured % object from the configuration. % \item \textbf{bool ClearResource()}: Deletes all objects from the configuration and clears all of % the Sandboxes. % \item \textbf{bool ClearCommandSequence(Integer sandboxNum = 1)}: Deletes all commands from the % Mission Control Sequence for the specified Sandbox. % \end{itemize} % % \subsection{\label{section:ModeratorStates}States of the Moderator} % % The Moderator tracks the current state of the system using a parameter named runState, which is %set % to a value in the RunState enumeration (see Table~\ref{table:RunStateEnum}) defined in the Gmat % namespace. The engine states tracked in the Moderator are the IDLE, RUNNING, and PAUSED states. 
% The transitions % % \begin{figure}[htb] % \begin{center} % \includegraphics[260,231]{Images/ModeratorStateTransitions.png} % \caption{State Transitions in the Moderator} % \label{figure:ModeratorStateTransitions} % \end{center} % \end{figure} % % Figure ~\ref{figure:ModeratorStateTransitions} shows the run state transitions tracked by the % Moderator. The Moderator is created with the run state set to the IDLE state. Most of the time, % the Moderator remains in the IDLE state, processing messages from users and managing the internal % components of the GMAT engine\footnote{Many of the activities performed by the Moderator in the %IDLE % state are described in Chapter~\ref{chapter:TopLevel}. Additional Moderator interactions with the % other engine components are described in the appropriate sections of this document.}. % % When a user executes a Mission Control Sequence, the Moderator transitions to the RUNNING state. %In % this state, the Moderator performs very limited processing while the control of the system is % managed by the sandbox that is running the mission. The sandbox polls the Moderator for user % activity at convenient points during the mission run. This polling allows the Moderator to %respond % to user actions that either terminate the mission early or pause the mission. % % If the user presses the pause button on the GUI, the Moderator transitions into the PAUSED state % when the sandbox polls for state status. This activity stops the mission run, but maintains data %so % that the run can be resumed from the point of the stop. The user tells the Moderator to resume %the % run by pressing the run button on the GUI. When the Moderator receives the run message, it % transitions back into the RUNNING state and tells the sandbox to resume the run. % % The user can terminate a run early by pressing the stop button on the GUI during a run. 
This %action % always causes the Moderator to transition from its current state - either RUNNING or PAUSED -- %into % the IDLE state. % % \section{\label{section:FactManOverview}The Factory Manager} % % Users configure GMAT by creating and using components designed to model elements or features of a % satellite system in space. All of these user elements are created in GMAT's factory subsystem. %The % interface into this system is the FactoryManager, a singleton class that routes requests for % specific objects into the Factory that is responsible for creating that type of object. The %Factory % classes are described in Chapter~\ref{chapter:Factories}; this section describes how the % FactoryManager interacts with the Factories and the rest of the GMAT engine to produce requested % objects. % % \subsection{\label{section:FactManAttributes}Attributes and Methods of the Factory Manager} % % The Factory Manager is a singleton object, and has a fairly simple internal structure, as can be % seen in the class diagram shown in Figure~\ref{figure:FactManClassDiagram}. The following lists % describe the class members shown in the figure. % % \begin{figure}[htb] % \begin{center} % \includegraphics[455,257]{Images/TheFactoryManager.png} % \caption{The Factory Manager in its Environment} % \label{figure:FactManClassDiagram} % \end{center} % \end{figure} % % \subparagraph{\textit{Class Attributes}} % % The Factory Manager maintains a list of factories, and uses those factories to generate the data % needed to respond to requests from the rest of GMAT. There are three data members in the % FactoryManager: % % \begin{itemize} % \item \textbf{StringArray entireList}: Array used to store the list of items requested. % \item \textbf{std::list<Factory*> factoryList}: The array of registered factories managed by the % Factory Manager. % \item \textbf{static FactoryManager* onlyInstance}: The singleton pointer for the manager. 
% \end{itemize} % % \subparagraph{\textit{Methods}} % % The Factory Manager implements methods that are used to register factories that are managed, %access % the lists of objects those factories can create, and access the creation methods in the factories. % Like the Moderator, the Factory Manager provides methods that access objects by specific type. % Those methods are described here generically, using the notation <Object> rather than explicitly % calling out each type of object, as was done for the Moderator methods. The Factory Manager also % contains methods to access the class lists and object creation methods through generic calls. % % The methods supported by the Factory Manager are: % % \begin{itemize} % \item \textbf{static FactoryManager* Instance()}: Access method used to get the singleton pointer. % \item \textbf{bool RegisterFactory(Factory* fact)}: Method used to register a factory with the % Factory Manager % \item \textbf{GmatBase* CreateObject(const Gmat::ObjectType generalType, const std::string \\ % \&ofType, const std::string \&withName = "")}: Generic method used to create an object. % \item \textbf{<Object*> Create<Object>(const std::string \&ofType, const std::string \&withName = % "")}: Method used to create an object of a specific ObjectType. Calls made through the <Object> % methods do not need to type cast the returned objects to the appropriate object types. % \item \textbf{StringArray GetListOfItems(Gmat::ObjectType byType)}: Method used to find the list %of % classes that can be created for a specific ObjectType. This public method calls into the private % GetList() method to find the requested data. % \item \textbf{StringArray GetListOf<Object>()}: Method used to find the list of classes that can %be % created for an ObjectType. These methods do not need the object type specifier. 
% \item \textbf{Factory* FindFactory(Gmat::ObjectType ofType, const std::string \&forType)}: Method % used to find the factory that creates instances of a specific class -- the ``forType'' class, with %a % specific object type. % \item \textbf{StringArray GetList(Gmat::ObjectType ofType)}: Method used to find the list of %classes % that can be created for a specific ObjectType. % \end{itemize} % % \subsection{\label{section:FactoryRegistration}Registering Factories with the Factory Manager} % % The Factory Manager registers factories at run time, rather than through a static list created at % compile time. This feature gives GMAT the ability to register factories created by users without % forcing a recompilation of the rest of the system. The current implementation does not support % registration of user created factories at run time; that is a planned extension of the system. %The % Factory Manager interfaces for this extension do exist in the current implementation. % % \begin{figure}[htb] % \begin{center} % \includegraphics[345,210]{Images/FactoryRegistration.png} % \caption{Registering a Factory} % \label{figure:FactoryRegistration} % \end{center} % \end{figure} % % The factory registration process, shown in Figure~\ref{figure:FactoryRegistration}, is % straightforward. The process starts with the Moderator. The Moderator retrieves a pointer to the % Factory that needs to be registered, either through object creation during initialization, or % through the user factory registration process\footnote{The user registration process will be % implemented in a future build.}. In the illustrated example, the factory created is the % CommandFactory. The Moderator creates the CommandFactory during initialization. It takes the % pointer to the created factory, and passes it to the Factory Manager using the RegisterFactory % method. The Factory Manager receives the pointer, and adds it to the list of managed factories by % pushing it onto the std::vector of factories. 
This completes the Factory registration process. % % \subsection{\label{section:CreatingObjectsWithFactMan}Object Creation through the Factory Manager} % % The Factory Manager routes requests for user objects to specific factories, and passes the created % objects back to the calling routine. The procedure followed for this process is shown in % Figure~\ref{figure:FactoryCreation}. % % \begin{figure}[htb] % \begin{center} % \includegraphics[430,363]{Images/FactoryObjectCreation.png} % \caption{Creating an Object Using the Factory Subsystem} % \label{figure:FactoryCreation} % \end{center} % \end{figure} % % The process starts when the Moderator receives a message requesting a user object of a given type, % optionally with a given name. In the example shown in the figure, the request was for a Propagate % command. The Moderator receives the request through a call to the CreateCommand() method, %typically % with the syntax ``CreateCommand(``Propagate'',``'')'' for commands. Named objects receive the % desired name as the second parameter in this call. The Moderator calls the Factory Manager's % CreateCommand() method with the same parameters. % % The Factory Manager calls its FindFactory() method, which locates the factory that can create % objects of the desired type. This method works by asking each registered factory for its list of % creatable types. FindFactory() checks each list returned from the registered factories until it % finda a factory that can create the requested object type -- in this example, it searches until it % finds a factory that can create a ``Propagate'' object. The CommandFactory creates Propagate % objects, so that is the factory returned from the FindFactory() method. % % The factory found above is then called and asked for an object of the target type, with the name % passed in the second parameter of the call. 
The factory calls the constructor for that type of % object -- a Propagate command object in this case -- and returns the pointer to the new object. % This pointer is returned from the Factory Manager to the Moderator\footnote{The Moderator may % perform other actions as well -- for instance, named objects may be passed to the Configuration % Manager, and commands may be inserted into the Mission Control Sequence. The Moderator actions % taken are discussed in Chapter~\ref{chapter:TopLevel}; this discussion is only intended to provide %a % view into the process followed by the Factory Manager.}, and from the Moderator to the function %that % made the call that initiated this sequence, completing the requested task. % % \section{\label{section:ConfigManOverview}The Configuration Manager} % % Objects created by the Factory subsystem are stored in one of two places. Commands are stored in % the Mission Control Sequence. The other objects created in the factories are stored in the % configuration, a vector of pointers to GmatBase derivatives that is accessed by object name. Most % of these objects are stored directly in this vector. Some objects are stored as objects contained % in an object in the configuration. An example of the latter form of object management is a %planet, % stored in a solar system object, which is then managed by the Configuration Manager. % % \subsection{\label{section:ConfigManAttributes}Attributes and Methods of the Configuration %Manager} % % \begin{figure}[htb] % \begin{center} % \includegraphics[405,286]{Images/TheConfigurationManager.png} % \caption{The Configuration Manager in its Environment} % \label{figure:ConfigManClassDiagram} % \end{center} % \end{figure} % % The structure of the Configuration Manager is shown in Figure~\ref{figure:ConfigManClassDiagram}. % The attributes and methods shown in this figure are described below. 
% % \subparagraph{\textit{Class Attributes}} % % The configuration is the core element managed by the Configuration Manager. It is stored in a % vector of GmatBase pointers named objects. A related component, the mapping object map, keeps %track % of the names of the configured objects and the associated object pointers so that retrieval by %name % can be managed easily. All of the Configuration Manager attributes are listed here: % % \begin{itemize} % \item \textbf{static ConfigManager* theConfigManager}: The Configuration Manager singleton. % \item \textbf{std::vector<GmatBase*> objects}: The configuration. % \item \textbf{StringArray listOfItems}: Array used to store lists of named items that are returned % from some of the Configuration Manager interfaces. % \item \textbf{std::map<std::string, GmatBase*> mapping}: A mapping of object names to the %pointers % to the configured objects. This construct simplifies retrieval of objects from the configuration. % \item \textbf{bool objectChanged}: A flag indicating if an object in the configuration has %changed. % \item \textbf{SolarSystem *defaultSolarSystem}: The default solar system object. % \item \textbf{SolarSystem *solarSystemInUse}: The current solar system selected for editing or for %a % run. % \end{itemize} % % \subparagraph{\textit{Methods}} % % The Configuration Manager provides interfaces designed to manage insertion, access, and deletion %of % objects in the configuration, along with utility method that facilitate object renaming and %queries % into the associations between objects. % % Like the Moderator and Factory Manager, the Configuration Manager has methods to add and retrieve % objects of specific types. These methods are listed here using the generic ``<Object>'' notation, % rather than listing all of the specific methods. Interested users should refer to the source code % or Doxygen generated documentation to see the specific methods incorporated in the Configuration % Manager. 
The methods supported by the Configuration Manager are listed here:

\begin{itemize}
\item \textbf{static ConfigManager* Instance()}: Method used to access the Configuration Manager singleton.
\item \textbf{void Add<Object>(<Object*> obj)}: Method used to add an object of a specific type to the configuration.
\item \textbf{<Object*> Get<Object>(const std::string \&name)}: Used to access an object, of a specific type, with a specified name.
\item \textbf{void SetDefaultSolarSystem(SolarSystem *ss)}: Sets the pointer for the default solar system model.
\item \textbf{void SetSolarSystemInUse(SolarSystem *ss)}: Sets the pointer for the current solar system model. In the current build of GMAT, the default and current solar system models are the same; this method is provided to support specification of multiple solar system models in a future build.
\item \textbf{bool SetSolarSystemInUse(const std::string \&name)}: Sets the current solar system by specifying the name of a configured solar system model. This method is provided to support specification of multiple solar system models in a future build.
\item \textbf{StringArray\& GetListOfAllItems()}: Retrieves a list of the names of all of the objects in the configuration.
\item \textbf{StringArray\& GetListOfItems(Gmat::ObjectType itemType)}: Retrieves a list of the names of all of the objects in the configuration of a specific type.
\item \textbf{StringArray\& GetListOfItemsHas(Gmat::ObjectType type, const std::string \&name, bool includeSysParam = true)}: Searches the configuration to determine if a specific named object is used by other configured objects. For each such usage, the name of the object that uses the named object is added to the returned list.
\item \textbf{GmatBase* GetItem(const std::string \&name)}: Retrieves an object from the configuration and returns its pointer.
\item \textbf{bool RenameItem(Gmat::ObjectType itemType, const std::string \&oldName, const std::string \&newName)}: Renames an object in the configuration.
\item \textbf{bool RemoveAllItems()}: Clears the configuration so that a new mission can be constructed without preserving previously created objects.
\item \textbf{bool RemoveItem(Gmat::ObjectType type, const std::string \&name)}: Removes a single object from the configuration.
\item \textbf{bool ReconfigureItem(GmatBase *newobj, const std::string \&name)}: Resets the pointer to a configured object with a new object pointer.
\item \textbf{bool HasConfigurationChanged()}: Reports the value of the objectChanged flag.
\item \textbf{void ConfigurationChanged(bool tf)}: Sets the value of the objectChanged flag.
\item \textbf{std::string AddClone(const std::string \&name)}: Clones an object and adds the cloned object to the configuration.
\item \textbf{std::string GetNewName(const std::string \&name, Integer startCount)}: Creates a new, unique object name based on a current name.
\item \textbf{void AddObject(GmatBase* obj)}: Adds an object to the configuration and the object map. This private method is the one that actually performs the insertion into the configuration vector.
\end{itemize}

\section{\label{section:SandboxOverview}The Sandbox}

GMAT's model contains descriptions of the components that are used during a run and the sequence of events that needs to be executed in order to perform the run. These pieces are assembled and executed in the GMAT Sandbox. The Sandbox is created by the Moderator. It is passed the solar system configuration for the run, the spacecraft and related objects used in the run, and the Mission Control Sequence that fires for the run.
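This hand-off relies on the Sandbox keeping its own clones of the configured objects in a local object map, so that a run never mutates the configuration itself. A minimal sketch follows, assuming a simplified GmatBase with a virtual Clone(); the names mirror the Sandbox members described below, but the types are stand-ins, not GMAT's actual code.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical base type with a virtual Clone(), standing in for GmatBase.
struct GmatBase {
    explicit GmatBase(const std::string &name) : name_(name) {}
    virtual ~GmatBase() = default;
    virtual GmatBase *Clone() const { return new GmatBase(name_); }
    const std::string &GetName() const { return name_; }
private:
    std::string name_;
};

// Sketch of the Sandbox hand-off: each configured object is cloned into a
// local object map, so the run operates on copies of the configuration.
class Sandbox {
public:
    bool AddObject(GmatBase *obj) {
        if (!obj) return false;
        objectMap_[obj->GetName()] = std::unique_ptr<GmatBase>(obj->Clone());
        return true;
    }
    GmatBase *GetInternalObject(const std::string &name) const {
        auto it = objectMap_.find(name);
        return (it == objectMap_.end()) ? nullptr : it->second.get();
    }
    void Clear() { objectMap_.clear(); }   // delete the clones for a new run
private:
    std::map<std::string, std::unique_ptr<GmatBase>> objectMap_;
};
```

Because the clones are owned by the Sandbox, clearing the Sandbox (as the Moderator does between runs) destroys the local copies without touching the configured originals.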
\begin{figure}[htb]
\begin{center}
\includegraphics[432,307]{Images/TheSandbox.png}
\caption{The Sandbox in its Environment}
\label{figure:SandboxClassDiagram}
\end{center}
\end{figure}

The Sandbox manages objects internally through base class pointers, making its interfaces and internal data structures relatively straightforward. The structure of the Sandbox is shown in Figure~\ref{figure:SandboxClassDiagram} and described in the following paragraphs.

\subsection{\label{section:SandboxAttributes}Attributes and Methods of the Sandbox Class}

The Sandbox contains the data members necessary for maintaining a local copy of the model and running a mission using the model. It includes members used to communicate with the Moderator and Publisher, so that the processes outside of the Sandbox can control the mission run and receive data resulting from that run. The class attributes that enable this functionality are listed here.

\subparagraph{\textit{Class Attributes}}

\begin{itemize}
\item \textbf{std::map<std::string, GmatBase *> objectMap}: The local object map in the Sandbox, filled by cloning objects from the configuration.
\item \textbf{SolarSystem *solarSys}: The solar system model used when running the Mission Control Sequence in the Sandbox.
\item \textbf{CoordinateSystem *internalCoordSys}: The reference coordinate system used when performing conversions.
\item \textbf{Publisher *publisher}: The GMAT publisher.
\item \textbf{GmatCommand *sequence}: The Mission Control Sequence.
\item \textbf{GmatCommand *current}: The current command in the Mission Control Sequence. This pointer is used to traverse the linked list during a run.
\item \textbf{Moderator *moderator}: The GMAT Moderator.
\item \textbf{runMode state}: The current state of the Sandbox. The Sandbox state model is described in Section~\ref{section:StatesintheSandbox}.
\item \textbf{Integer interruptCount}: A counter used to track polling between the Sandbox and the Moderator, so that user interrupts can be processed during a run.
\item \textbf{Integer pollFrequency}: The frequency used by the Sandbox to poll the Moderator.
\item \textbf{std::vector<PhysicalModel *> transientForces}: A vector of forces that can be turned on or off in a force model during integration.
\end{itemize}

\subparagraph{\textit{Public Methods}}

The Moderator controls model loading and the state of the Sandbox through the following public methods.

\begin{itemize}
\item \textbf{bool AddObject(GmatBase *obj)}: Adds objects to the local object map.
\item \textbf{bool AddCommand(GmatCommand *cmd)}: Sets the Mission Control Sequence.
\item \textbf{bool AddSolarSystem(SolarSystem *ss)}: Sets the solar system model used in a run.
\item \textbf{bool AddSubscriber(Subscriber *sub)}: Sets subscribers for the run. This method registers the subscriber with the Publisher, and places the Subscriber in the local object map.
\item \textbf{bool SetInternalCoordSystem(CoordinateSystem *ss)}: Sets the internal coordinate system used by the Sandbox.
\item \textbf{bool SetPublisher(Publisher *pub = NULL)}: Sets the Publisher pointer for the Sandbox.
\item \textbf{GmatBase* GetInternalObject(std::string name, Gmat::ObjectType type = Gmat::UNKNOWN\_OBJECT)}: Retrieves an object from the local object map by name.
\item \textbf{bool Initialize()}: Initializes the Sandbox.
\item \textbf{bool Execute()}: Runs the Mission Control Sequence.
\item \textbf{bool Interrupt()}: Polls the Moderator to determine if a user action requires a state change in the Sandbox.
\item \textbf{void Clear()}: Deletes clones from the local object map, clears the Sandbox's Mission Control Sequence and transient force vectors, and sets the Sandbox state to IDLE.
\end{itemize}

\subparagraph{\textit{Private Methods}}

The Sandbox uses the following methods internally to initialize and run a mission.

\begin{itemize}
\item \textbf{void InitializeInternalObjects()}: Sets up object references in the Sandbox's internal coordinate system and solar system.
\item \textbf{void BuildReferences(GmatBase *obj)}: Helper method that sets all internal references for the input object.
\item \textbf{void SetRefFromName(GmatBase *obj, const std::string \&oName)}: Finds a named object and sets its reference on the input object.
\item \textbf{void BuildAssociations(GmatBase *obj)}: Finds referenced objects that need to be associated with the input object through cloning, creates the clones, and hands the cloned object to the owner.
\item \textbf{SpacePoint* FindSpacePoint(const std::string \&spName)}: Finds a SpacePoint by name and returns it.
\end{itemize}

\subsection{\label{section:StatesintheSandbox}State in the Sandbox}

The Sandbox maintains state information in the ``state'' member variable. This member is set to one of the values of the runState enumeration, an internal enumeration designed to track the status of the model in the Sandbox. This enumeration is listed in Table~\ref{table:SandboxRunMode}.

\begin{table}[H]
\begin{center}
\caption{\label{table:SandboxRunMode}Values for the Sandbox runState Enumeration}
\setlength\extrarowheight{2pt}
\begin{tabular}{|p{1.7in}|p{4in}|}
\hline
Identifier & Description \\
\hline
\hline
IDLE & The Sandbox is waiting for the Moderator to prompt for a new run. Initialized to 7001. \\
INITIALIZED & The Sandbox has successfully initialized the local object map and the Mission Control Sequence.\\
RUNNING & A Mission Control Sequence is executing.\\
PAUSED & The Moderator has paused a Mission Control Sequence run. The Sandbox is ready to resume the run when the Moderator triggers the run. \\
STOPPED & The Mission Control Sequence was interrupted by the Moderator, and will not be resumed.\\
RESET & The Sandbox is being reset for a new run. This state is not used in the current implementation.\\
\hline
\end{tabular}
\end{center}
\end{table}

Figure~\ref{figure:SandboxStateDiagram} shows the state transitions that the Sandbox uses over the course of a typical session with GMAT. The Sandbox state is initialized to the IDLE state when the Sandbox is created by the Moderator. It remains in this state until the sandbox is needed for a mission run. The sequence of events resulting in a mission run was shown in Figure~\ref{figure:RunningBasicScript} of Chapter~\ref{chapter:TopLevel}. These events cause the sandbox transitions shown in the figure presented here.

\begin{figure}[htb]
\begin{center}
\includegraphics[455,188]{Images/SandboxStateTransitions.png}
\caption{State Transitions in the Sandbox}
\label{figure:SandboxStateDiagram}
\end{center}
\end{figure}

When the Moderator receives a message to run a mission, it passes the configured objects and the Mission Control Sequence into the sandbox. The sandbox adds the objects to the local object map and sets the sequence pointer in response to these actions, remaining in the IDLE state. The Moderator triggers the transition from the IDLE state to the INITIALIZED state by calling the Initialize() method. This method queries each object in the sandbox for referenced objects, and sets the local pointers to those references on each object. If all of the local references were set successfully, the sandbox transitions to the INITIALIZED state.

\label{section:SandboxLateBinding}This binding of the reference object pointers happens inside of the sandbox so that the reference pointers can be set to the local clones in the sandbox.
This late binding feature is part of the GMAT design, and is incorporated in that design so that the system can support multiple sandboxes, and so that the system can distribute processing to multiple processors in future builds.

When the sandbox completes initialization, all of the components in the sandbox have their local references set, and the system is ready to run a Mission Control Sequence. There are two possible transitions from the INITIALIZED state. The Moderator can add additional elements to the sandbox. If any additional element is added, the sandbox returns to the IDLE state, ensuring that initialization runs again so that object references are set for all of the objects in the sandbox.

The more interesting transition happens in response to a call of the Execute() method from the Moderator. When the Moderator calls the Execute() method, the sandbox transitions into the RUNNING state. The sandbox starts executing the commands in the Mission Control Sequence, and runs these commands until either the mission runs to completion, an error occurs in the run, or the user interrupts processing by pressing the stop or pause button on the GUI.

If the user presses the pause button on the GUI, the sandbox transitions into the PAUSED state, maintaining the information required to resume the run. From the PAUSED state, the user can either terminate the run by pressing the stop button or resume the run by pressing the run button on the GUI. Pressing the stop button transitions the sandbox into the STOPPED state. Pressing the run button causes a transition back into the RUNNING state, and execution of the Mission Control Sequence resumes from the point where the pause message was received by the sandbox.

Once the run has terminated, either by running to completion, running to an error in the sequence, or through a user interrupt generated by the stop button, the sandbox enters the STOPPED state.
In the STOPPED state, all of the objects in the sandbox remain in their post-execution state. This feature allows some limited user access to the post-run data; for example, the user can retrieve data at the end of execution of each command using the Command Summary panel in the GUI. The sandbox's local objects remain in this post-run state until the sandbox is cleared by the Moderator.

The Moderator clears a sandbox by calling the Clear() method. The Clear() method destroys all of the objects in the sandbox's local object map, releases the Mission Control Sequence pointer, and transitions the sandbox into the IDLE state, closing the state transition loop shown in Figure~\ref{figure:SandboxStateDiagram}.

When GMAT closes down, the Moderator destroys the sandboxes in the system. This last transition moves each sandbox from the IDLE state to the final state shown in the figure.

\subsection{\label{section:SandboxInterruptPolling}Interrupt Polling in the Sandbox}

One feature of the sandbox state transitions described above is the interaction between the Sandbox and the rest of GMAT to process user interrupts. In the current implementation, the sandbox takes over execution when a Mission Control Sequence is running. Each command is executed in order, and the sandbox moves from one command to the next in the list of commands. Commands are executed by calling a command-specific Execute() method.

Between each call to these Execute() methods, the sandbox is given the opportunity to query the Moderator for user interrupts. The sandbox performs this operation periodically. The Moderator, in turn, queries the GuiInterpreter for user interrupts, and the GuiInterpreter queries the GUI. The GUI processes any pending messages, and, if the user has pressed the pause or stop button, sends the interrupt through the GuiInterpreter to the Moderator.
The sandbox retrieves this interrupt from the Moderator, and triggers the corresponding state transition, as described above.

\section{\label{section:PublisherOverview}The Publisher}

During a run, users can monitor progress through the Mission Control Sequence by watching the trajectory evolve in OpenGL windows, watching parameters change value in plot windows, and, on some systems, watching data flow into output files. GMAT uses a publish and subscribe system to drive all of these pipelines from the Sandbox to user-viewable data sources. The engine drives the data transfer from the Sandbox to the output objects using a singleton called the Publisher.

Data generated during a run is sent from the Sandbox containing the run to the Publisher singleton. The Publisher receives the Sandbox data, and sends the data to each Subscriber registered with the Publisher. Subscribers, described in Chapter~\ref{chapter:Factories}, contain common interfaces that the Publisher uses to push data received from the Sandbox to the Subscribers. Note that a key element of the publish and subscribe algorithm implemented in GMAT is that data flows in one direction: data is generated in the Sandbox, passed to the Publisher, and then passed from the Publisher to the Subscribers.

The Publisher class attributes and interfaces that enable this functionality are listed here.

\subsection{\label{section:PublisherAttributes}Attributes and Methods of the Publisher}

\begin{figure}[htb]
\begin{center}
\includegraphics[445,227]{Images/ThePublisher.png}
\caption{The Publisher in its Environment}
\label{figure:PublisherClassDiagram}
\end{center}
\end{figure}

Figure~\ref{figure:PublisherClassDiagram} shows the attributes and methods incorporated in the Publisher.
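The one-way data flow described above can be sketched with simplified types. The interfaces below are hypothetical minimal stand-ins (a single ReceiveData hook rather than GMAT's actual Subscriber signatures), intended only to illustrate the registration and push pattern.

```cpp
#include <list>
#include <vector>

// Hypothetical minimal Subscriber interface: the Publisher pushes data in
// through ReceiveData; data never flows back the other way.
struct Subscriber {
    virtual ~Subscriber() = default;
    virtual bool ReceiveData(const double *data, int count) = 0;
};

class Publisher {
public:
    static Publisher *Instance() {              // singleton access point
        static Publisher instance;
        return &instance;
    }
    bool Subscribe(Subscriber *s)   { subs_.push_back(s); return true; }
    bool Unsubscribe(Subscriber *s) { subs_.remove(s);    return true; }
    // Push one buffer of data to every registered Subscriber.
    bool Publish(const double *data, int count) {
        bool ok = true;
        for (Subscriber *s : subs_) ok = s->ReceiveData(data, count) && ok;
        return ok;
    }
private:
    Publisher() = default;
    std::list<Subscriber *> subs_;              // registered receivers
};

// A toy Subscriber that simply records everything it receives.
struct RecordingSubscriber : Subscriber {
    std::vector<double> received;
    bool ReceiveData(const double *data, int count) override {
        received.insert(received.end(), data, data + count);
        return true;
    }
};
```

The Sandbox plays the role of the caller of Publish(); each registered Subscriber (a plot, an OpenGL view, a file writer) decides for itself what to do with the pushed buffer.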
\subparagraph{\textit{Class Attributes}}

The Publisher maintains a collection of Subscribers that are registered to receive data and information about the current state of GMAT, in order to manage the data transfer tasks during a run. The following attributes are built into the Publisher to perform this maintenance:

\begin{itemize}
\item \textbf{static Publisher *instance}: The Publisher singleton.
\item \textbf{std::list<Subscriber*> subs}: The current list of Subscribers supported by the Publisher.
\item \textbf{Integer providerCount}: The number of registered data providers.
\item \textbf{Integer currentProvider}: The current data provider sending data to the publisher.
\item \textbf{std::vector<StringArray> objectMap}: List of the names of objects providing data.
\item \textbf{std::vector<StringArray> elementMap}: List of the names of the data elements provided to the Publisher.
\item \textbf{Gmat::RunState runState}: The current run state for GMAT. The Publisher tracks the current run state so that Subscribers can act differently based on the current state of the system.
\end{itemize}

\noindent Note that the Publisher currently supports only one set of these data structures, mapping to a single Mission Control Sequence. When GMAT is extended to support multiple Sandboxes, these data structures may be changed to add support for multiple Sandboxes.

\subparagraph{\textit{Methods}}

Interfaces into the Publisher are provided through the following methods:

\begin{itemize}
\item \textbf{static Publisher* Instance()}: Method used to access the Publisher singleton.
\item \textbf{bool Subscribe(Subscriber *s)}: The registration interface for the Subscribers.
\item \textbf{bool Unsubscribe(Subscriber *s)}: The method used to remove a Subscriber from the list of data receivers.
\item \textbf{bool UnsubscribeAll()}: Removes all Subscribers from the list of data receivers.
\item \textbf{bool Publish(Integer id, Real *data, Integer count)}: Passes an array of Real data from the data source identified by id number to the Subscribers.
\item \textbf{bool Publish(Integer id, char *data, Integer count = 0)}: Passes an array of character data from the data source identified by id number to the Subscribers.
\item \textbf{bool Publish(Integer id, Integer *data, Integer count)}: Passes an array of Integer data from the data source identified by id number to the Subscribers.
\item \textbf{bool FlushBuffers()}: Prompts each registered Subscriber to process all received data immediately, rather than wait for an input buffer to be filled.
\item \textbf{bool NotifyEndOfRun()}: Tells each registered Subscriber that the Mission Control Sequence has finished running.
\item \textbf{Integer RegisterPublishedData(const StringArray\& owners, const StringArray\& elements)}: Registers data providers with the Publisher. The returned value is the id for the registered data provider.
\item \textbf{const StringArray\& GetStringArrayParameter(const std::string\& type)}: Accesses the strings in the objectMap and elementMap members.
\item \textbf{void SetRunState(const Gmat::RunState state)}: Sets the run state for the Publisher.
\item \textbf{inline Gmat::RunState GetRunState()}: Accesses the current run state.
\item \textbf{void UpdateProviderID(Integer newId)}: Resets the ids for the Subscribers.
\end{itemize}

\subsection{\label{section:PublisherStates}State Transitions in the Publisher}

\begin{figure}[htb]
\begin{center}
\includegraphics[359,231]{Images/PublisherStateTransitions.png}
\caption{State Transitions in the Publisher}
\label{figure:PublisherStateDiagram}
\end{center}
\end{figure}

Subscribers have the option to act differently based on the current status of the run -- for example, the OpenGL Subscriber can be set to show all of the published data, or only data published when the Mission Control Sequence is not running iterations through a Solver. The Subscribers track this information through state data maintained by the Publisher. The states tracked by the Publisher are shown in Figure~\ref{figure:PublisherStateDiagram}.

The Publisher starts in the IDLE state, and returns to this state when a Mission Control Sequence stops running. The Publisher moves from the IDLE state to the RUNNING state when a user starts execution of a Mission Control Sequence.

The RUNNING state is the nominal state for the Publisher when a Mission Control Sequence is being executed. The Publisher moves into the SOLVING state whenever a Solver loop is searching for a solution -- that is, whenever a Target command is targeting, an Optimize command is optimizing, or an Iterate command is iterating over a set of variables. The Publisher remains in the SOLVING state until the Solver finds a solution; at that point the Publisher returns to the RUNNING state.

Users can interrupt the Mission Control Sequence by pressing either the Pause or Stop button on the GUI. When the Pause button is pressed, the Publisher transitions into the PAUSED state. If the mission resumes (i.e., if the user presses the Run button for the paused mission), the Publisher returns to its previous state -- either RUNNING or SOLVING, based on the state of the Mission Control Sequence. If the user presses the Stop button, the Publisher moves into the IDLE state.
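The Publisher state transitions described in this subsection can be sketched as a small state machine. This is an illustrative simplification -- the enumerators mirror the states named in the text, but the transition methods and the remembered resume state are hypothetical, not GMAT's actual implementation.

```cpp
// Sketch of the Publisher run-state transitions: IDLE <-> RUNNING <-> SOLVING,
// with PAUSED remembering which of the two active states to resume into.
enum class RunState { IDLE, RUNNING, SOLVING, PAUSED };

class RunStateMachine {
public:
    RunState State() const { return state_; }
    void StartRun()    { if (state_ == RunState::IDLE)    state_ = RunState::RUNNING; }
    void EnterSolver() { if (state_ == RunState::RUNNING) state_ = RunState::SOLVING; }
    void SolverDone()  { if (state_ == RunState::SOLVING) state_ = RunState::RUNNING; }
    void Pause() {
        if (state_ == RunState::RUNNING || state_ == RunState::SOLVING) {
            resumeState_ = state_;            // remember where to resume
            state_ = RunState::PAUSED;
        }
    }
    void Resume() { if (state_ == RunState::PAUSED) state_ = resumeState_; }
    void Stop()   { state_ = RunState::IDLE; } // Stop always returns to IDLE
private:
    RunState state_ = RunState::IDLE;
    RunState resumeState_ = RunState::RUNNING;
};
```

Storing the pre-pause state is what lets a resumed mission return to either RUNNING or SOLVING, matching the behavior described above for the Run button.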
% LaTeX file for resume
% This file uses the resume document class (res.cls)

\documentclass[margin, letterpaper]{res}
\usepackage{hyperref}
\usepackage{multicol}
\hypersetup{colorlinks=true, linkcolor=blue, citecolor=blue, urlcolor=blue}
\usepackage{inputenc}
\usepackage{fontenc}
\usepackage{fouriernc}
% the margin option causes section titles to appear to the left of body text
\textwidth=5.3in % increase textwidth to get smaller right margin

\begin{document}

\name{\Large{Duk Gyoo Kim}\\[12pt]} % the \\[12pt] adds a blank line after name
%\address{467 Uris Hall, Cornell University\\Ithaca, NY 14853 USA}
%\address{Phone: (607) 793-6539\\ E-mail: \href{mailto:[email protected]}{[email protected]} }

\begin{resume}

\section{Contact Information}
Office: L7, 3-5, Room 225, 68161 Mannheim, Germany\\
Phone: +49-621-181-1797 (Office), +49-157-7482-7036 (Mobile)\\
Email: \href{mailto:[email protected]}{[email protected]}\\
Webpage: \url{https://kimdukgyoo.github.io}

\section{Academic Positions}
\textbf{Assistant Professor}, Department of Economics, University of Mannheim, 2017--\\
\textbf{Postdoctoral Scholar}, California Institute of Technology, 2015--2017\\
(* Dan Searle Fellowship in Economics / Mentor: Thomas R. Palfrey)

\section{Affiliations}
\textbf{Affiliate Member}, CESifo, Munich, 2020--

\section{Fields}
\textbf{Primary}: Public Economics, Political Economy, Experimental Economics\\
\textbf{Secondary}: Microeconomic Theory, Behavioral Economics

\section{Education}
\textbf{Ph.D. in Economics,} Cornell University, Aug. 2015\\
(* Fulbright Graduate Study Award / Advisers: Stephen Coate and Robert H. Frank)\\
\textbf{M.A. in Economics,} Yonsei University, Seoul, Korea, Aug. 2010\\
\textbf{B.A. in Business Administration,} Yonsei University, Seoul, Korea, Feb.
2008

\section{Publications}
\begin{enumerate}
\item Vaccination Lottery, \emph{Economics Letters}, Volume 208, November 2021, 110059
\item Multilateral Bargaining with Proposer Selection Contests (with Sang-Hyun Kim), \emph{Canadian Journal of Economics}, 2021, Volume 54, No. 4
\item Big and Small Lies (with Franziska Heinicke and Diogo Geraldes), \emph{Journal of Behavioral and Experimental Economics}, 2021, Volume 91, 101666
\item Recognition without Replacement in Legislative Bargaining, \emph{Games and Economic Behavior}, 2019, Volume 118, 161--175
\item A Theory of FAQs: Public Announcements with Rational Ignorance (with Yeochang Yoon), \emph{Journal of Economic Behavior \& Organization}, 2019, Volume 158, 560--574
\item Positional Concern and Low Demand for Redistribution of the Poor, \emph{European Journal of Political Economy}, 2019, Volume 56, 27--38
\item Population Uncertainty in Voluntary Contributions of Public Goods, \emph{Journal of Economic Behavior \& Organization}, 2018, Volume 145, 218--231
\item The Second-Tier Trap: Theory and Experimental Evidence, \emph{International Journal of Economic Theory}, 2018, Volume 14, Issue 4, 323--349
\item Response time in choosing the most or least preferred option, \emph{Economics Bulletin}, 2016, Volume 36, No. 1, 595--600
\end{enumerate}
%\begin{enumerate}
%\item Guide to Effective Retirement Pension Choices: An Experimental Design (with Seura Ha, Sang-Hyun Kim, and Euncheol Shin), \emph{Journal of Financial Regulation and Supervision}, 2019, Vol 6(2), 79--141 (in Korean)
%\item Why Are the Poor Conservative? (with Paul Moon Sub Choi), \textit{The Korean Journal of Economics}, 2015, Vol 22(1), 15--24
%\end{enumerate}

\section{Working Papers}
\begin{enumerate}
\item Multilateral Bargaining over the Division of Losses (with Wooyoung Lim), CESifo Working Paper No.
8011, Resubmitted to \emph{Journal of Economic Theory}
\item Penalty Lottery
\item Clustering Standard Errors at the ``Session'' Level, CESifo Working Paper No. 8386, Resubmitted to \emph{Journal of the Economic Science Association}
\item ``One Bite at the Apple'': Legislative Bargaining without Replacement
\item Probability Matching and Strategic Decision Making (with Hee Chun Kim), Resubmitted to \emph{Journal of Behavioral and Experimental Economics}
\item Collective Proofreading and the Optimal Voting Rule (with Jinhyuk Lee and Euncheol Shin), Resubmitted to \emph{Global Economic Review}
\end{enumerate}

\section{Work in Progress}
\begin{enumerate}
\item Paradoxes of Network in Bargaining (with Joosung Lee)
%\item Political Blurring over Non-linear Tax Schedules (with Felix Bierbrauer)
\item Distributive Politics on Public Bads (with Andrzej Baranski-Madrigal)
\item Viable Nash Equilibria: An Experiment (with Daehong Min and John Wooders)
\end{enumerate}

\section{Fellowships, Grants, Honors, and Awards}
\begin{itemize}
\item University of Mannheim, SFB 884 ``The political economy of reforms'' (financed by German Research Foundation), Project A7 (2018--2021, with Hans Peter Gr{\"u}ner).
\item Dan Searle Postdoctoral Fellowship in Economics, Oct. 2015--Sept. 2017.
\item East Asia Program Research Travel Grant, May 2015.
\item Einaudi Center International Research Travel Award, Mar. 2015.
\item Charles Koch Foundation Dissertation Research Grant, 2014--2015.
\item The Howard and Abby Milstein Graduate Teaching Award, 2014.
\item Cornell Graduate School Research Travel Grant, Nov. 2014.
\item Hayek Fund for Scholars, The Institute of Humane Studies, 2014, 2015, 2016, 2017.
\item The Institute of Humane Studies Conference and Research Grant, Sept. 2014.
\item Cornell Population Center Rapid Grant, 2014, 2015.
\item Cornell Graduate School Conference Travel Grant, 2013, 2014.
\item Science of Philanthropy Initiative PhD grant, Mar.
2014 \item Fulbright Graduate Study Award, Aug. 2010--Jul. 2012 %\item Brain Korea 21 Research Assistantship, Ministry of Education, South Korea, August 2008 to February 2010 %\item Citi-KIF (Korean Institute of Finance) financial research paper competition grant (\$3,500), June 2009 %\item Citi-KAIST financial research paper competition grant (\$3,000), March 2008 %\item \textit{Maeil} Business and Economy Newspaper 22nd Economic Thesis Award, October 2007 %\item Provost's Honor, UC San Diego (Exchange Program), Fall 2006 quarter %\item Dean's Honor of Distinguished Students, Yonsei University, Spring 2005, and Fall 2005 \end{itemize} \section{Conference/ Seminar Presentations (Last five years)} \textbf{2021}: International Workshop for Lab and Field Experiments, SAET Conference, Bargaining: Experiments, Empirics, and Theory workshop, ESA 2021 Global Around-the-Clock Meetings, GAMES 2021, KER 2021\\% (*scheduled)\\ \textbf{2020}: HeiKaMaxY Workshop, Mannheim/ZEW Experimental seminar, Reading Experimental and Behavioral Economics Workshop, Econometric Society World Congress, Korean Economic Review International Conference, Yonsei University, ESA Global Around-the-Clock Virtual Conference, Econometric Society Winter Meeting\\ \textbf{2019}: Asia-Pacific Economic Science Association Meeting, New York University Abu Dhabi, University of Mannheim, HeiKaMaX Workshop, Max Planck Institute for Tax Law and Public Finance, International Meeting on Experimental and Behavioral Social Sciences, North American Summer Meeting of the Econometric Society, Economics Science Association World Meeting, Korean Economic Review International Conference, University of British Columbia, University of Victoria, University of Massachusetts Amherst, Queen's University, Carleton University, Concordia University, Canadian Public Economics Group Conference, McMaster University, Florida State University, Southern Economic Association Meeting, University of Toronto, University of Western Ontario, 
University of Guelph, Bilkent University\\ \textbf{2018}: Royal Economic Society Conference, HeiKaMaX Workshop, Social Choice and Welfare Conference, Yonsei University, Asian Meeting of the Econometric Society, Foundations of Risk and Uncertainty Conference, Economic Science Association World Meeting, Stony Brook International Conference on Game Theory, European Association of Law and Economics Conference, Nordic Conference on Behavioral and Experimental Economics, HKUST Experimental Workshop, Edinburgh Business School, HeiKaMaxY Workshop\\ \textbf{2017}: Max Planck Institute for Tax Law and Public Finance, University of Mannheim, Korea Informational Society Development Institute, Korea Institute of Public Finance, Korea Institute of International Economic Policy, Korea Energy Economics Institute, University of Massachusetts Amherst, Western Political Science Association Conference, Kyung Hee University, SKKU Junior Faculty Research Conference, North American Meeting of the Econometric Society, Western Economics Association International Annual Meetings, Yonsei Young Economists Workshop, KAEA-KIPF Conference, Yonsei University, Midwest Economic Theory Conference, HeiKaMaxY Workshop%\\ %\textbf{2016}: Chapman University, Southwest Experimental and Behavioral Economics Workshop Annual Conference, Canadian Economics Association Annual Conference, Caltech, Western Economics Association International Annual Meetings, Sogang University, Seoul National University, KDI School of Public Policy and Management, KAEA-KEA International Conference, Cornell University, Canadian Public Economics Group Conference, North-American ESA Conference, Sonoma State University, Midwest Economic Theory Conference%\\ %\textbf{2015}: The 52nd Annual Meetings of the Public Choice Society, The 85th Annual Conference of the Southern Economic Association\\ %\textbf{2014}: Canadian Economics Association Annual Conference, The Institute of Humane Studies Summer Research Colloquium, KAEA-KEA 
International Conference, The 9th Economics Graduate Student Conference at Washington University in St. Louis, Canadian Public Economics Group Conference, The 2nd Science of Philanthropy Initiative Conference \section{Other Activities} \begin{itemize} \item Referee service: Canadian Journal of Economics, China Economic Review, Economic Journal, European Economic Review, European Journal of Political Economy, International Journal of Game Theory, Journal of Economic Behavior and Organization, Journal of the Economic Science Association, Journal of Public Economics, PLOS ONE, Social Choice and Welfare, Southern Economic Journal, Scientific Reports %\item Departmental service: Junior recruiting committee, University of Mannheim, 2017--2018 \item CES Visiting Scholar, Ludwig Maximilian University of Munich, Dec 2019 \item Visiting Scholar, Department of Economics, McMaster University, Fall 2019 \item Short Visits: School of Economics, Sogang University, Korea, July 2016 \end{itemize} \section{Citizenship} South Korea (EU Blue Card)% (J-1 Visa)%South Korea (F-1 Visa since 2012, J-1 from 2010 to 2012) \section{Affiliations} American Economic Association, European Economic Association, Econometric Society, Korean-American Economic Association \section{Languages} Korean (native), English (fluent) \section{References} \textbf{Thomas R. Palfrey} \href{mailto:[email protected]}{[email protected]}\\ Flintridge Foundation Professor of Economics and Political Science, Caltech\\\\ \textbf{Stephen Coate} \href{mailto:[email protected]}{[email protected]}\\ Kiplinger Professor of Public Policy, Cornell University\\\\ %\textbf{Robert H.
Frank} \href{mailto:[email protected]}{[email protected]}\\ %Henrietta Johnson Louis Professor of Management and Professor of Economics\\ %Johnson Graduate School of Management, Cornell University\\\\ \textbf{Hans Peter Gr\"{u}ner} \href{mailto:[email protected]}{[email protected]}\\ Professor of Economics, University of Mannheim\\\\ \textbf{Wooyoung Lim} \href{mailto:[email protected]}{[email protected]}\\ Associate Professor of Economics, Hong Kong University of Science and Technology \begin{flushright} Last Updated: \today \end{flushright} \end{resume} \end{document}
\documentclass[journal]{IEEEtran} %All packages used goes here: \usepackage{lipsum} \usepackage{graphicx} \begin{document} \title{Your Title\\ New Line} \author{Student Name - \textit{Student number}\\ Student Name - \textit{Student number}\\ Student Name - \textit{Student number}\\ } \markboth{EEE1006F Semester Project - Electrical Engineering Department - UCT - \today}% {Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for IEEE Journals}% <- leave this as is \maketitle \begin{abstract} The abstract goes here. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{T}{his} \lipsum[10] % <- remove this nonsense text \subsection{Subsection Heading Here} Subsection text here. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Figures: See: https://www.overleaf.com/learn/latex/Inserting_Images %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Here is how you would add an image. First upload the image on the left of your screen by pressing on the up arrow next to the folder icon. \begin{figure}[h!] \centering % <- centre the image \includegraphics[width=8cm]{example.jpg} % <- set the size and file path/name \caption{Example of an image} \label{fig:circ} %<- give the figure a name to refer to later \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %References: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Refer to Figure \ref{fig:circ} like this. Refer to Equation \ref{Eq:E} like this. Refer to Table \ref{tab:cells} like this. 
Cite a reference like this: blah blah blah \cite{examplecite} % <- see the name at the end under bibliography %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Paragraphs and pages: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \newpage %<- new page Blah\\\\ % <- use this at the end of a paragraph to create a paragraph space Blah\\ % <- use this at the end of a paragraph to create a new paragraph without space %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Maths: See: https://www.codecogs.com/latex/eqneditor.php %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% In line maths is done like this $P=I^{2}R$, otherwise use: \begin{equation} E=mc^{2} \label{Eq:E} %<- give the equation a name to refer to later \end{equation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Tables: See: https://www.tablesgenerator.com/ and https://www.overleaf.com/learn/latex/Tables %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{table}[h!] \begin{center} \caption{Table name} \label{tab:cells} %<- give the table a name to refer to later; the label must come after the caption \begin{tabular}{ |c|c|c| } \hline cell1 & cell2 & cell3 \\ \hline cell4 & cell5 & cell6 \\ \hline cell7 & cell8 & cell9 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Subsubsection Heading Here} Subsubsection text here. \lipsum[10] % <- remove this nonsense text \section{Conclusion} The conclusion goes here. \lipsum[10] % <- remove this nonsense text \begin{thebibliography}{1} %Add your references below (I'm not too fussed if this isn't perfect) \bibitem{examplecite} H.~Kopka and P.~W. Daly, \emph{A Guide to \LaTeX}, 3rd~ed.\hskip 1em plus 0.5em minus 0.4em\relax Harlow, England: Addison-Wesley, 1999. \end{thebibliography} \end{document}
\chapter{Result} Programs consist of assignments and expressions. Each evaluated expression returns a \emph{result}. The idea of a result: \begin{lstlisting}[language=haskell] data Value = ... data Type = None | Int | Float | String | Regex newtype Term = Term (Value, Type) data Result = Succ Term | Fail Term \end{lstlisting} What distinguishes a \emph{result} in Intentio from its counterpart in most other languages is that it carries a \emph{succ/fail} prefix. \begin{example}[Some results] \begin{lstlisting} succ "foo" # succ String foo succ 1.25 # succ Float 1.25 fail none # fail None none \end{lstlisting} \end{example}
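As a hedged sketch of how such results might be represented and printed in full Haskell (this is my own elaboration of the schematic listing above, not the actual Intentio implementation; the value is simplified to plain text):

```haskell
-- Hypothetical sketch, not the actual Intentio runtime: a
-- succ/fail-prefixed result and a renderer matching the examples
-- in this chapter.
data Type = None | Int | Float | String | Regex deriving Show

-- The value is simplified to its textual form for brevity.
data Term = Term [Char] Type

data Result = Succ Term | Fail Term

-- Render a result the way the examples in this chapter print it.
render :: Result -> [Char]
render (Succ (Term v t)) = "succ " ++ show t ++ " " ++ v
render (Fail (Term v t)) = "fail " ++ show t ++ " " ++ v
```

Here \lstinline{render (Succ (Term "foo" String))} yields the string \lstinline{"succ String foo"}, matching the examples above.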
\section{Introduction} \label{sec:intro} % Where are we right now with infrastructure independent communication? The communication technologies developed and deployed in the last decades are integral parts of our daily life and are used by mobile phones, computers, or smart applications in homes and cities. Today's smartphones, however, depend heavily on the availability of telecommunication infrastructures, such as Wi-Fi or cellular technology (e.g., 3G, LTE, or the upcoming 5G standard). Yet there are situations in which communication infrastructure is either unavailable or available only at high cost, e.g., in remote areas~(\cite{gardner2011serval}), in the agricultural sector~(\cite{elijah2018overview}), as a result of disasters~(\cite{manoj2007communication}), or due to political censorship~(\cite{liu2015performance}). Furthermore, in countries with less evolved infrastructures, e.g., due to low population densities or economic reasons, cellular networks often cannot be used at all or cannot be established in an economically feasible manner. In such cases, low-cost communication technologies would enable people to communicate with each other~(\cite{kayisire2016ict}). However, while modern infrastructure-independent technologies do exist, they are often accessible only to advanced users due to regulations, high costs, or technical complexity. To make these technologies accessible to a broad user base, they need to be integrated into devices already familiar to users. % Short LoRa Intro We propose to use LoRa wireless technology as a communication enabler in such situations. LoRa (\emph{Lo}ng-\emph{Ra}nge) is a long-range, low-power network protocol designed for the Internet of Things to support low data rate applications~(\cite{hornbuckle2010fractional}). It consists of a proprietary physical layer using Chirp Spread Spectrum (CSS) modulation in the freely usable ISM bands at 433, 868, or 915 MHz, depending on the global region.
The additional MAC layer protocol LoRaWAN defines a hierarchical topology: a set of gateways receives messages from end devices and forwards them to a central server that processes the data. While LoRa itself has to be licensed from Semtech and implemented in specific hardware, it is independent of LoRaWAN and can thus be used in a device-to-device manner. % Our contributions In this paper, we present an approach to equip existing mobile devices with LoRa technology by distributing small System-on-a-Chip (SoC) devices supporting multiple Radio Access Technologies (RATs). Several commercial off-the-shelf microcontroller units (MCUs) are available that support Wi-Fi, Bluetooth, and LoRa. We propose to use these low-cost devices to upgrade existing smartphones, laptops, and other mobile devices for long range infrastructure-less communication. To reach this goal, we present a custom firmware for Arduino-SDK compatible boards, called \textit{rf95modem}. Existing mobile devices can be connected to a board through a serial connection, Wi-Fi, or Bluetooth. As a general solution, we propose to use modem AT commands as the interface for application software. This interface can then be exposed through different communication channels and used by application software without requiring LoRa-specific device drivers. Since these boards are cheap and require neither laying new cables nor setting up communication towers, they can either be distributed to people living in high-risk areas beforehand or handed out by first responders in the event of a crisis. To demonstrate the functionality of our implementation, we first present a cross-platform mobile application for device-to-device messaging. This re-enables basic infrastructure-less communication capabilities in disasters. Second, we present an integration of our implementation into disruption-tolerant networking (DTN) software.
Although the low data rates of LoRa are not sufficient to support multimedia applications, sensor data, e.g., in agricultural applications or environmental monitoring, as well as context information for further DTN routing decisions can be transmitted through the LoRa channel. To illustrate the benefits of our approach, both the developed device-to-device messaging app and our DTN integration are tested through experimental evaluations in an urban and a rural area. To summarize, we make the following contributions: \begin{itemize} \item We present a novel free and open source modem firmware implementation for LoRa-enabled MCUs, featuring a device-driver-independent way of using LoRa via serial, Bluetooth LE, and Wi-Fi interfaces. \item We present a novel device-to-device LoRa chat application for a) Android and iOS smartphones and b) traditional computers. \item We present a freely available and open source integration of long range communication into delay-tolerant networking software. \item We experimentally evaluate the proposed approach by conducting field tests in an urban environment as well as in a rural area and performing energy measurements of multiple devices. \item The presented \textit{rf95modem} software\footnote{\url{https://github.com/gh0st42/rf95modem/}, MIT License}, the device-to-device chat application\footnote{\url{https://github.com/umr-ds/BlueRa}, MIT License}, the integration into DTN7\footnote{\url{https://github.com/dtn7/dtn7-go}, GNU General Public License v3.0} and the experimental evaluation code fragments\footnote{\url{https://github.com/umr-ds/hoechst2020lora}} are freely available. \end{itemize}
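To make the proposed modem-style interface concrete, the following minimal sketch shows how application software might frame and parse AT-command lines over an arbitrary byte stream (serial, Bluetooth LE, or TCP). The command names \texttt{AT+TXHEX} and \texttt{+RX} are hypothetical placeholders chosen for illustration only, not the actual rf95modem command set; see the project repository for the real interface.

```python
# Sketch of an AT-command framing layer, as an application might use it
# over any byte stream. Command names here are hypothetical placeholders.

def frame_tx(payload: bytes) -> bytes:
    """Encode a transmit request as a hex-encoded AT command line."""
    return b"AT+TXHEX=" + payload.hex().encode("ascii") + b"\r\n"

def parse_rx(line: bytes):
    """Decode an unsolicited receive notification; return None otherwise."""
    if not line.startswith(b"+RX:"):
        return None
    return bytes.fromhex(line[len(b"+RX:"):].strip().decode("ascii"))

if __name__ == "__main__":
    print(frame_tx(b"hello"))           # b'AT+TXHEX=68656c6c6f\r\n'
    print(parse_rx(b"+RX:776f726c64"))  # b'world'
```

Because the framing is plain text over a byte stream, the same application code works regardless of whether the board is reached via serial, Bluetooth LE, or Wi-Fi, which is the point of the device-driver-independent design.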
\addtocontents{toc}{\protect\enlargethispage{\baselineskip}}\chapter{Language and structure} \label{ch:language and structure} In \chapref{ch:information and agents}, I described informational spaces as well as agents and their interactions. Among other things, I introduced the infon space $({\cal I}, \odot)$ which will supply the semantic contents of utterances in Equilibrium Semantics. Assessing utterances requires certain operations on words and parse trees analogous to the operation $\odot$ on infons and so I now define two systems similar to $({\cal I}, \odot)$. One serves as the language to be analyzed and the other provides the syntactic contents. \section{Language} \label{sec:language definition} Assume a finite set of words $\cal W$. For example, $\cal W$ might include $\{\Expression{Bill}, \Expression{ran}\}$. A string on $\cal W$ of length $n$ is a function from $\{1, 2, \ldots, n\}$ to $\cal W$. We usually write just a list of words such as $\Expression{Bill}$, $\Expression{Bill ran}$, or $\Expression{ran Bill}$. The unique string of length $0$, the empty string, is denoted by $e$. The unique zero string $0$ can be thought of as an illegitimate string. Both $e$ and $0$ are always members of $\cal W$. The concatenation $\cdot$ of two nonzero strings is defined in the usual way\footnote{See \citet[164--166]{wall:ml} for example.} and $w \cdot w'$ is written $ww'$. For example, $\Expression{Bill} \cdot \Expression{Bill} = \Expression{Bill Bill}$ and $\Expression{ran} \cdot \Expression{Bill} = \Expression{ran Bill}$. The concatenation of any string with $0$ is $0$. ${\cal W}^{\ast}$ is the set of all strings on ${\cal W}$ and is called the free monoid on $\cal W$. It is an infinite set and has an identity $e$ and a zero $0$. \is{grammar} Assume $G$ is a context-free grammar (CFG).\footnote{A context-free grammar is a grammar whose rules all have the form $\mathrm{A} \to w$.
That is, there is no (linguistic) context surrounding $\mathrm{A}$ and $w$ so that the rule can be freely applied. See \citet[Chapter~9]{wall:ml}.} Here is a simple example: \[\mathrm{S} \to \hbox{NP VP};\ \ \ \ \hbox{NP} \to \hbox{N};\ \ \ \ \hbox{VP} \to \hbox{V};\ \ \ \ \hbox{N} \to \hbox{Bill};\ \ \ \ \hbox{V} \to \hbox{ran}\] Here, ``S'' stands for sentence, ``NP'' for noun phrase, ``VP'' for verb phrase, ``N'' for noun, and ``V'' for verb. The only sentence generated by this CFG is $\Expression{Bill ran}$. I will use ``parse'' and ``parse tree'' to refer also to the subtrees of phrases with the root node being any relevant nonterminal symbol. A tree such as $[_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]]$ would count as a parse or parse tree of the noun phrase \Expression{Bill}. I now use concatenation to define an operation that yields exactly the subsentential expressions and sentences of the language $\cal L$ by forming an appropriate proper subset of ${\cal W}^{\ast}$. Define the following modification of concatenation: $w \circ_G w' = w \cdot w' = ww'$ when $ww'$ is a substring of some string in ${\cal W}^{\ast}$ that has at least one parse by $G$ and $w \circ_G w' = 0$ otherwise. The set formed by freely generating all strings from ${\cal W}$ by this special concatenation operation is the language $\cal L$. In our little example, ${\cal L} = \{e, 0, \Expression{Bill}, \Expression{ran}, \Expression{Bill ran}\}$. For instance, the string \Expression{ran Bill} has been dropped from ${\cal W}^{\ast}$ because it is not a substring of any string parseable by $G$.\footnote{Consider the sentence \Expression{I handed you the salt}. Then the string \Expression{you the} is a member of the corresponding $\cal L$ even though it is not a legitimate phrase because it is a substring of the whole sentence which would be parseable by the corresponding $G$. 
I owe this example to Tom Wasow.\ia{Wasow, Thomas}} This operation is called \emph{grammatical concatenation} and is abbreviated to $\circ$. It is associative but not commutative and has $e$ as its identity and the zero element $0$. In other words, $({\cal L}, \circ)$ is a monoid with a zero. A sentence $\varphi$ of $\cal L$ is made up of individual words $\varphi_1 \circ \varphi_2 \circ \ldots \circ \varphi_n = \varphi_1 \varphi_2 \ldots \varphi_n$ for some natural number $n$. For the sentence $\varphi = \Expression{Bill ran}$, $\varphi_1 = \Expression{Bill}$, $\varphi_2 = \Expression{ran}$, and $\varphi = \varphi_1 \circ \varphi_2 = \varphi_1\varphi_2$. \section{Algebraic system of trees} \label{sec:algebraic system of trees} So far, I have defined two algebraic systems $({\cal I}, \odot)$ and $({\cal L}, \circ)$ that capture the structure of infons and linguistic expressions. One describes the world and the other language. The third system involves a new way to describe the grammar $G$ as a system of trees with a product operation. Each of the five rules of the CFG above can be re-described as a tree. For example, the tree corresponding to the first rule is $[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]]$ and the tree corresponding to the fourth rule is $[_{\mathrm{N}}\, \mathrm{Bill}]$. Thus, $G$ can be expressed either as a set of rules or as a set of trees. However, we cannot define the desired operation on these trees directly and a little work is required to get them into the appropriate form. The product operation is defined in two stages. First, an intuitive substitution or merging operation on parse trees is specified as follows. A tree such as $t' = [_{\mathrm{X}}\, \ldots ]$ can be substituted into $t = [_{\mathrm{Z}} [_{\mathrm{X}}\, ] \ldots ]$ to form $t'' = [_{\mathrm{Z}} [_{\mathrm{X}}\, \ldots ] \ldots ]$ where the $\ldots$ from $t'$ have been entered into $t$ because the outer category X of $t'$ matched an inner category X of $t$. 
If there is more than one X in $t$ that matches the X in $t'$, then the leftmost one is substituted into. This operation is denoted $\lhd$ and we write $t \lhd t' = t''$. It is neither associative nor commutative and is identical to the substitution operation defined in \citet{joshi:tag} and \citet{joshi:tag2}. Consider now the following merging of the second and fourth tree above: $[_{\mathrm{NP}} [_{\mathrm{N}}\, ]] \lhd [_{\mathrm{N}}\, \mathrm{Bill}] = [_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]]$. This can be informally described by saying that we have merged the simple tree $[_{\mathrm{N}}\, \mathrm{Bill}]$ as far as it could go up the parse tree of the whole sentence without encountering another branch. It cannot go any further because the tree encountered is $[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]]$ which has another branch involving the verb phrase. I call this procedure \emph{chaining}, so we say that simple lexical trees are chained as far up as possible. It is possible to chain the third and fifth trees in the same way to get $[_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]]$. Thus, we are left with just two maximally chained trees from the original five trees and we also have the first tree -- $[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]]$. These are: \[[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]];\ \ \ \ [_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]];\ \ \ \ [_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]]\] %After we have obtained such maximally chained trees and unchained trees, we divide them into two groups of chained and unchained trees. In our example, the two groups contain $[_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]$ and $[_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]]$ on the one hand and $[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]]$ on the other.
The chained trees, also called elementary trees, are collected and given names as follows: \[t_1 = [_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]]\] \[t_2 = [_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]]\] The subscripts of $t_1$ and $t_2$ are determined by the sentence being considered, namely, \Expression{Bill ran}, so that the tree involving the first word \Expression{Bill} is indexed by 1 and the tree involving the second word \Expression{ran} is indexed by 2. In more complex sentences, there will be more than nine elementary trees and then the indexes will be written (10), (11), (12), and so on. Keeping track of the indexes in this way makes it easier to describe the operation below. The unchained trees -- just $[_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]]$ in our case -- are left in $G$. Now, I describe a more complex product operation $\star_{G, u}$ on these trees. It is parametrized by $G$ as $\circ_G$ was and by $u$ as $\odot_{u}$ was. The utterance situation is needed because the sentence being parsed enters via $u$ and it is on the basis of the sentence that the trees can be properly indexed as explained above. Because the chosen CFG is so simple, there is just one nontrivial product: \begin{eqnarray*} t_1 \star_{G, u} t_2 & = & [_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]] \star_{G, u} [_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]] \\ & = & ([_{\mathrm{S}}[_{\mathrm{NP}}\, ][_{\mathrm{VP}}\, ]] \lhd [_{\mathrm{NP}} [_{\mathrm{N}}\, \mathrm{Bill}]]) \lhd [_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]] \\ & = & [_{\mathrm{S}}[_{\mathrm{NP}}\, \mathrm{Bill}][_{\mathrm{VP}}\, ]] \lhd [_{\mathrm{VP}}[_{\mathrm{V}}\, \mathrm{ran}]] \\ & = & [_{\mathrm{S}}[_{\mathrm{NP}}\, \mathrm{Bill}][_{\mathrm{VP}}\, \mathrm{ran}]] \\ & = & t_{12} \end{eqnarray*} In other words, the tree product draws upon relevant trees in $G$ such as $[_{\mathrm{S}}[_{\mathrm{NP}}\, ]\allowbreak\relax[_{\mathrm{VP}}\, ]]$ to enable them to be merged or substituted. 
In this product, only one such tree in $G$ was introduced but in general there could be more, both to the left of $t_1$ or $t_2$. This is why the operation is parametrized by $G$. We now add two special trees. The first is the empty tree, $t_e$, and the second is a tree $t_0$. The former serves as the identity of the set of trees obtained via this product and the latter as the zero of the set. That is, $t \star_{G,u} t_e = t_e \star_{G,u} t = t$ for all $t$ and $t \star_{G,u} t_0 = t_0 \star_{G,u} t = t_0$ for all $t$, the latter being true by definition. When two incompatible trees are multiplied, for example $t_1 \star_{G,u} t_{12}$, the result is stipulated to be $t_0$. Note the special vector index ``12'' of the product $t_{12}$. First, this index should not be confused with an elementary tree index such as (12) which would be expressed as $t_{(12)}$. In the case of $t_{12}$, the vector index has two components; in the case of $t_{(12)}$ the vector index has just one component. Second, it may seem at first sight that we could have obtained the product $t_2 \star_{G,u} t_1 = t_{12}$ in the same way, by first substituting the verb and then the noun. But instead the product $t_2 \star_{G,u} t_1 = t_0$ by stipulation. The basic rule is that the subscript of the first multiplicand must be strictly lower than the subscript of the second multiplicand to potentially yield a nonzero tree. When higher-level trees are multiplied in a more complex setting, the rule is that the first component of the first vector index must be strictly less than the first component of the second vector index. For example, a tree labeled $t_{34}$ would have a lower vector index than one labeled $t_{5}$ because $3 < 5$. So $t_{34} \star_{G,u} t_5$ could potentially be nonzero. In the reverse order, the product is always $t_0$. There are thus five trees in the operation table for this sentence with respect to this CFG, the three above and $t_e$ and $t_0$. 
All combinations of these five trees yield one of these five trees. This gives us closure for the operation. The multiplication table for $\star_{G,u}$ is shown in Figure~\ref{fig:operation table}. Observe that $t_1 \star_{G,u} t_2 = t _{12} \neq t_2 \star_{G,u} t_1$. \begin{figure} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c|c|c|c|c|c|c} \multicolumn{1}{c}{$\star_{G,u}$} & \multicolumn{1}{c}{$t_e$} & \multicolumn{1}{c}{$t_1$} & \multicolumn{1}{c}{$t_2$} & \multicolumn{1}{c}{$t_{12}$} & \multicolumn{1}{c}{$t_0$} \\[.5ex] \cline{2-6} $t_e$ & $t_e$ & $t_1$ & $t_2$ & $t_{12}$ & $t_0$ & \qquad \\[.5ex] \cline{2-6} $t_1$ & $t_1$ & $t_0$ & $t_{12}$ & $t_0$ & $t_0$ & \qquad \\[.5ex] \cline{2-6} $t_2$ & $t_2$ & $t_0$ & $t_0$ & $t_0$ & $t_0$ & \qquad \\[.5ex] \cline{2-6} $t_{12}$ & $t_{12}$ & $t_0$ & $t_0$ & $t_0$ & $t_0$ & \qquad \\[.5ex] \cline{2-6} $t_0$ & $t_0$ & $t_0$ & $t_0$ & $t_0$ & $t_0$ & \qquad \\[.5ex] \cline{2-6} \end{tabular} \caption{The operation table for $\star_{G,u}$} \label{fig:operation table} \end{figure} The basic rules for forming the product of trees are as follows: \begin{enumerate} \item Merge or substitute via $\lhd$ if possible. \item Otherwise, successively introduce trees from $G$ to the left of either or both multiplicands until one or more substitutions via $\lhd$ are possible. For example, if two trees $t'$ and $t''$ have to be introduced in that order to the left of $t_i$ in a product $t_i \star_{G,u} t_j$, it would be as follows: $(t'' \lhd (t' \lhd t_i)) \lhd t_j$. \item If the above fails, the result is $t_0$. \end{enumerate} This procedure always gives a unique result as only trees such as Z $\to$ X Y from $G$ can left multiply a product like $t_i \star_{G,u} t_j$ where $t_i = [_{\mathrm{X}}\, \ldots ]$ and $t_j = [_{\mathrm{Y}}\, \ldots ]$ and we stipulate that there cannot be another rule $\mathrm{Z}' \to$ X Y with $\mathrm{Z}' \neq$ Z in the CFG. 
Like $\lhd$, the operation $\star_{G,u}$ is neither associative nor commutative. Full parsing may be done in either of two ways: by successive application of compatible rules in the CFG to yield S or by successive application of the $\star_{G,u}$ operation to yield a tree like $t_{12}$. All other combinations will ultimately result in $t_0$. In more complex CFGs, there will be multiple trees like $t_{12}$ that may be the end result of combining subtrees, which corresponds to multiple parses for the sentence. The word order of the sentence is automatically taken into account by the product operation owing to the indexing procedure so the yield of successive applications of $\star_{G,u}$ is guaranteed to match it. Consider the algebraic system $({\cal T},\star_{G,u})$ where $\cal T$ is the set of five trees. This captures the relevant subset of the CFG for this sentence and so is an equivalent way to express a context-free grammar sentence by sentence. Each sentence corresponds to a separate algebraic system, all of which can in principle be combined into a larger system but this is unnecessary in practice. If we start with the elementary trees $t_e$, $t_1$, $t_2$, and $t_0$, we can freely generate the whole set $\cal T$. %In general, there will be an infinite number of trees in $\cal T$ but only a finite number of elementary trees and one may work with just these elements. I will assume the grammar $G$ for $\cal L$ can be rewritten as a system of trees $({\cal T},\star_{G,u})$ for each sentence in the manner described above. For convenience, I will drop the parameters from the notation and write just $\star$ henceforth and also frequently write $t_it_j$ instead of $t_i \star t_j$. \section{Summary of assumptions}\label{sec:4.3} Three algebraic systems have been constructed: $({\cal I}, \odot_{u})$, $({\cal L}, \circ_G)$, $({\cal T}, \star_{G,u})$ or, more simply, $({\cal I}, \odot)$, $({\cal L}, \circ)$, $({\cal T}, \star)$. 
The second system is a monoid with a zero, but the first and third have just an identity and a zero. As the parameter $u$ is fixed at the outset, each sentence is identifiable, and so the corresponding subsystem $({\cal T}, \star)$ is also identifiable. In \chapref{ch:picture of communication}, I sketched what communication looks like in the small and in the large and briefly mentioned the games of partial information that arise. The interaction between speaker and addressee can be partly described by these partial information games, and they lead to two more monoidal systems $({\cal G}, \otimes)$ and $({\cal G}', \otimes')$, the first involving semantic games and the second involving syntactic games. These are introduced in the context of the sentence $\varphi = \Expression{Bill ran}$ that we will consider in detail in \chapref{ch:defining communication games}. A peculiarity of all these systems is that every element of each system that is not an identity element is what is called a zero divisor, that is, a nonzero element for which another nonzero element exists such that their product is zero. Also, there is just an operation of multiplication, and the zeros are stipulated rather than arising as identities of a second operation of addition, as happens in rings.
\section{Natural sources of market power}
\section{Appendix B: Overcoming Background Galaxy Contamination}\label{sec:appB}
In this appendix, the probability of failed host galaxy associations for nearby transients with large host offsets due to interloping background galaxies is quantified, and used to evaluate the minimum number of potential hosts that should be stored in a {\tt DIAObject} record. For simplicity, this appendix uses the effective radius as the separation distance (Section \ref{ssec:options_Re}). The 10-year {\tt Object} catalogs will include $\sim$4 billion galaxies with $i<25$ mag across the $\sim$20000 $\rm deg^2$ main survey area, known as the ``gold'' sample, particularly in the context of weak lensing studies \citedsp{LPM-17}. However, the 10-year coadded depths will detect galaxies down to 5$\sigma$ limiting magnitudes of 26.1, 27.4, 27.5, 26.8, 26.1, and 24.9 mag in filters {\it ugrizy}; this is $\sim$3 times as many as in the ``gold'' sample, or $\sim$10 billion galaxies. This high density of background galaxies complicates the process for associating large nearby host galaxies with their transients, especially the rare transients in their outskirts. Consider a transient at $3R_e$ from the center of a nearby galaxy with $z=0.01$ and an effective radius of $R_e = 10$ kpc ($\sim$50 arcsec). In order for this transient to be associated with its true host, the separation distance for all {\tt Objects} within a radius of at least $3R_e$, and thus an area of at least $A_{3R_e} = \pi (3R_e)^2 = 0.0052$ $\rm deg^2$, would need to be considered. Based on the final 10-year number of detected galaxies (10 billion) and the total survey area (20000 deg$^2$), that is $\sim (10^{10} / 20000) \times 0.0052 \approx 2600$ galaxies. Furthermore, the true host galaxy must have a lower separation distance than the $N$ nearest background galaxies, where $N$ is the number of potential hosts that will be listed in the {\tt DIAObject} parameters {\tt potentialHost} and {\tt potentialHostSeparation}.
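For concreteness, the interloper count quoted above can be reproduced in a few lines; the inputs (10 billion galaxies, 20000 deg$^2$, $R_e \approx 50$ arcsec) are from the text, and the variable names are mine.

```python
import math

# Inputs from the text above.
n_galaxies = 1e10        # ~10 billion galaxies at the 10-year coadded depth
survey_area_deg2 = 2e4   # ~20000 deg^2 main survey area
r_e_arcsec = 50.0        # effective radius of the nearby host (~50 arcsec)

# Sky area enclosed within 3 R_e, converted from arcsec to degrees.
area_3re_deg2 = math.pi * (3.0 * r_e_arcsec / 3600.0) ** 2

# Mean background surface density times the search area.
n_interlopers = (n_galaxies / survey_area_deg2) * area_3re_deg2
```

With the rounded $\sim$50 arcsec radius this gives $\approx 0.0055$ $\rm deg^2$ and $\approx 2700$ galaxies, consistent with the $0.0052$ $\rm deg^2$ and $\sim$2600 quoted in the text, which presumably use the exact angular scale at $z=0.01$ rather than the rounded radius.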
\begin{figure}[h]
\begin{center}
\includegraphics[width=8cm]{hg_Mr_dist.png}
\includegraphics[width=8cm]{hg_Mr_vs_z.png}
\includegraphics[width=8cm]{hg_rad_dist.png}
\includegraphics[width=8cm]{hg_rad_vs_z.png}
\includegraphics[width=8cm]{hg_z_dist.png}
\includegraphics[width=8cm]{hg_rad_vs_Mr.png}
\caption{{\it Left:} Histograms of the synthesized absolute $r$-band intrinsic magnitude (top) and radius (middle) for catalog galaxies, and the simulated galaxy redshifts (bottom). {\it Right:} Correlation with redshift of the synthesized intrinsic magnitude (top) and radius (middle), and the approximate relation between radius and intrinsic magnitude (bottom). \label{fig:simcat}}
\end{center}
\end{figure}
To investigate the probability of host-association failure for nearby transients, we simulate a mock catalog of randomly distributed background interloper galaxies. It is based on the same LSST-like mock galaxy catalog used by \citet{2018AJ....155....1G} for studies of photometric redshifts. For this experiment the catalog is limited to galaxies with at least a 5$\sigma$ detection in the {\it griz} bands at the projected 10-year depths listed above. The catalog contains redshifts and apparent {\it ugrizy} magnitudes, which are used to approximate intrinsic absolute $r$-band magnitudes (without $K$-correction, simply using a distance modulus based on redshift and $M_r=m_r-\mu$). The absolute magnitudes are used to synthesize approximate galaxy radii based on the relationship between absolute magnitude and radius for late-type galaxies defined in Figure 3 of \citet{2003MNRAS.343..978S}. These synthesized radii are over-estimates because late-types are generally larger than early-types, and because this magnitude-radius relation was defined for SDSS galaxies at lower redshifts than the LSST high-$z$ galaxies it is being applied to.
This is a deliberate choice to overestimate, because it will result in upper limits on the rate of interlopers, which will allow for conservative estimates about the effect of background interlopers. The characteristics of this crude galaxy catalog are illustrated in Figure \ref{fig:simcat}. Consider again the large nearby galaxy with $R_e=10$ kpc at a redshift of $z=0.01$, for which the sky area within $3R_e$ is $A(3R_e)=0.0052$ $\rm deg^2$ and contains $\sim$2600 background galaxies. From the simulated catalog described above, 100 sets of background galaxies are randomly selected. For each set, the fraction of the nearby galaxy's area that is covered by the area of interloping background galaxies is calculated: $f_A = \sum{A(3R_{e,{\rm bkg}})} / 0.0052$. This fraction is equivalent to the probability that a transient at $3R_e$ from this large nearby host will be within $3R_{e,{\rm bkg}}$ of (i.e., will be closer to) a background interloper. The probability that this transient is closer to $N$ interlopers than to its true host is $P_{\rm fail} = (f_A)^N$. This is the probability of a failed host association, where ``failed'' means that the {\tt DIAObject} record of the $N$ galaxies with the lowest separation distances does not include the true host. For this large nearby galaxy, average values of $f_A$ and $P_{\rm fail}$ from the 100 simulated background sets are $f_A = 0.181\pm0.008$ and $P_{\rm fail} (N=3) = 0.006\pm0.0008$. Since this probability of failure is based on the sky density of background galaxies, it is independent of the radius and redshift of the nearby galaxy. However, it does depend on the factor applied to the effective radius (i.e., the offset of the transient considered) and $N$, the number of potential hosts recorded in the {\tt DIAObject} catalog. Figure \ref{fig:pfail} shows the probability of failure as a function of $R_e$ and $N$.
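The failure probability used above is just $P_{\rm fail} = (f_A)^N$, so the quoted value is easy to check (the function and variable names here are mine):

```python
# Mean fraction of the 3 R_e search area covered by interloping background
# galaxies, averaged over the 100 simulated background sets (from the text).
F_A = 0.181

def p_fail(f_a, n):
    """Probability that all N stored nearest neighbors outrank the true host."""
    return f_a ** n

# Storing N = 3 potential hosts per DIAObject record:
p3 = p_fail(F_A, 3)  # 0.181**3 ~ 0.0059, matching the quoted 0.006
```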
\textbf{These are upper limits on the probability of failure} because they are derived from upper estimates of galaxy radii and the sky density of the final 10-year LSST galaxy catalog, as described above. To assist in the interpretation of Figure \ref{fig:pfail}, the following describes several conclusions drawn from points in this plot:
\begin{itemize}
\item Transients with low host offsets, $1R_e$, are closer to (within $1R_{e,{\rm bkg}}$ of) one background interloper $\sim2\%$ of the time (purple point at $1R_e$). (However, given the high surface brightness of nearby galaxies, such a background galaxy might not be detected.)
\item Transients with high host offsets, $5R_e$, are closer to (within $5R_{e,{\rm bkg}}$ of) one background interloper $\sim50\%$ of the time (purple point at $5R_e$), and closer to six background interlopers $>1\%$ of the time (magenta point at $5R_e$).
\item In order to achieve a $1\%$ probability of failure for transients offset by $3R_e$ from large nearby galaxies, the {\tt DIAObject} catalog record should include the $N=3$ galaxies with the lowest separation distances (green point at $3R_e$).
\end{itemize}
The above experiment can be summarized in two main take-away points for reducing the probability of failure in associating nearby, large-offset transients with their true hosts:
\begin{enumerate}
\item The separation distances for all galaxies within at least $4R_e \approx 200$ arcsec should be calculated and considered.
\item The 3 galaxies with the lowest separation distances should be included in the {\tt DIAObject} catalog record.
\end{enumerate}
Adopting these recommendations would cause up to $1\%$ of the transients at $3R_e$ from large nearby galaxies to experience a failed host association, where the true host is not listed in the {\tt DIAObject} record.
Since $3R_e$ encompasses $\sim99\%$ of a galaxy's light, and most transient types are distributed in proportion to the light, \textbf{the upper limit on the host association failure rate} for nearby transients should be $\sim0.01\%$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{hg_P_vs_Re.png}
\caption{The probability that a transient will fail to be associated with a large ($R_e=10$ kpc) nearby host galaxy due to background interlopers, as a function of the transient's offset in effective radii from galaxies in the vicinity (including the true host), where ``failure'' means the true host's separation distance is not in the top $N$ nearest galaxies. \textbf{This is a ``worst-case scenario'' because it applies to a very large nearby galaxy, and all background galaxy radii estimates are upper limits} (as described in the text). Error bars show the standard deviation from the 100 randomly-generated sets of background galaxies. \label{fig:pfail}}
\end{center}
\end{figure}
\foldertitle{qreport}{Quick-Report Functions}{qreport/Contents} \paragraph{Quick-report functions}\label{quick-report-functions} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \href{qreport/qplot}{\texttt{qplot}} - Quick report. \item \href{qreport/qstyle}{\texttt{qstyle}} - Apply styles to graphics object and its descendants. \end{itemize} \paragraph{Getting on-line help on qreport functions}\label{getting-on-line-help-on-qreport-functions} \begin{verbatim} help qreport help qreport/function_name \end{verbatim}
\subsection{Clipboard} \begin{class}{ClipboardModule} \clsdiagram[width=0.80\textwidth]{resources/Classes/Modules/Clipboard/ClipboardModule.png} \clsdcl{public class ClipboardModule : IModule} \clsdsp{The ClipboardModule is responsible for recording all clipboard related user interactions.} \end{class} \subsection*{Events} \begin{absclass}{ClipboardEvent} \clsdiagram{resources/Classes/Modules/Clipboard/Events/ClipboardEvent.png} \clsdcl{public abstract class ClipboardEvent : Event} \clsdsp{A generic clipboard event which all specific ClipboardEvents inherit from.} \begin{attributes} \attribute{public string ClipboardText}{The text in the clipboard.} \end{attributes} \end{absclass} \begin{class}{ClipboardCopyEvent} \clsdiagram[scale=0.9]{resources/Classes/Modules/Clipboard/Events/ClipboardCopyEvent.png} \clsdcl{public class ClipboardCopyEvent : ClipboardEvent} \clsdsp{A copy into clipboard user interaction.} \end{class} \begin{class}{ClipboardCutEvent} \clsdiagram[scale=0.9]{resources/Classes/Modules/Clipboard/Events/ClipboardCutEvent.png} \clsdcl{public class ClipboardCutEvent : ClipboardEvent} \clsdsp{A cut into clipboard user interaction.} \end{class} \begin{class}{ClipboardPasteEvent} \clsdiagram[scale=0.9]{resources/Classes/Modules/Clipboard/Events/ClipboardPasteEvent.png} \clsdcl{public class ClipboardPasteEvent : ClipboardEvent} \clsdsp{A paste from clipboard user interaction.} \end{class} \subsection*{Producers} \begin{class}{ClipboardCopyEventProducer} \clsdiagram[scale = 0.9]{resources/Classes/Modules/Clipboard/Producers/ClipboardCopyEventProducer.png} \clsdcl{public class ClipboardCopyEventProducer : DefaultEventQueue<ClipboardCopyEvent>} \clsdsp{Provides a single-writer-multiple-reader queue for ClipboardCopyEvent} \begin{methods} \begin{method}{public void StartCapture(IClipboardWindowMessageSink windowMessageSink, INativeClipboard nativeClip)}{Starts capturing clipboard copy events} \begin{parameters} \para{IClipboardWindowMessageSink 
windowMessageSink}{Required for capturing ClipboardCopyEvent. Creates a window procedure to receive window messages.} \para{INativeClipboard nativeClip}{Required for getting clipboard text from the clipboard} \end{parameters} \end{method} \begin{method}{public void StopCapture()}{Stops capturing clipboard copy events} \end{method} \end{methods} \end{class} \begin{class}{ClipboardCutEventProducer} \clsdiagram[scale = 0.8]{resources/Classes/Modules/Clipboard/Producers/ClipboardCutEventProducer.png} \clsdcl{public class ClipboardCutEventProducer : DefaultEventQueue<ClipboardCutEvent>} \clsdsp{Provides a single-writer-multiple-reader queue for ClipboardCutEvent} \begin{methods} \begin{method}{public void StartCapture(IClipboardWindowMessageSink windowMessageSink, INativeClipboard nativeClip)}{Starts capturing clipboard cut events} \begin{parameters} \para{IClipboardWindowMessageSink windowMessageSink}{Required for capturing ClipboardCutEvent. Creates a window procedure to receive window messages.} \para{INativeClipboard nativeClip}{Required for getting clipboard text from the clipboard} \end{parameters} \end{method} \begin{method}{public void StopCapture()}{Stops capturing clipboard cut events} \end{method} \end{methods} \end{class} \begin{class}{ClipboardPasteEventProducer} \clsdiagram[scale = 1]{resources/Classes/Modules/Clipboard/Producers/ClipboardPasteEventProducer.png} \clsdcl{public class ClipboardPasteEventProducer : DefaultEventQueue<ClipboardPasteEvent>} \clsdsp{Provides a single-writer-multiple-reader queue for ClipboardPasteEvent} \begin{methods} \begin{method}{public void StartCapture(INativeClipboard nativeClip)}{Starts capturing clipboard paste events} \begin{parameters} \para{INativeClipboard nativeClip}{Required for capturing ClipboardPasteEvent and getting clipboard text from the clipboard} \end{parameters} \end{method} \begin{method}{public void StopCapture()}{Stops capturing clipboard paste events} \end{method} \end{methods} \end{class}
\documentclass[12pt]{article}
\pdfoutput=1
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{esvect}
\usepackage[toc,page]{appendix}
\usepackage{leftidx}
\usepackage{color}
\usepackage{framed, color}
\usepackage{multirow}
\usepackage{pdfpages}
\usepackage{multicol}
\usepackage{wrapfig,lipsum,booktabs}
\usepackage[utf8]{inputenc}
\usepackage{mathtools,hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=cyan,
    filecolor=cyan,
    urlcolor=red,
    citecolor=red,
}
\usepackage{cleveref}
\usepackage{commath}
\usepackage{enumitem}
\usepackage{amssymb}
\renewcommand{\qedsymbol}{$\blacksquare$}
%%%%%%%%%%% User Defined Commands. (macros)
\definecolor{mgreen}{RGB}{25,147,100}
\definecolor{shadecolor}{rgb}{1,.8,.1}
\definecolor{shadecolor2}{RGB}{245,237,0}
\definecolor{orange}{RGB}{245,37,100}
%%%%%%%%%%% Graphical Packages
\usepackage{pgfplots}
\usetikzlibrary{patterns}
\usepackage{mdframed}
\usepackage{adjustbox}
\usepackage{tcolorbox}
%\usepackage{graphics}
\usepackage{tikz,ifthen,fp,calc}
\usepackage{caption}
\usepackage{subcaption}
\usetikzlibrary{plotmarks}
\usepackage{graphicx}
%%%%%%%%%%%%%%%%% Theorem Styles
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{prop}{Proposition}[section]
\newtheorem{corr}{Corollary}[section]
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{definition}
\newtheorem{remark}{Remark}[section]
\newtheorem{fact}{Fact}[section]
\usepackage[english]{babel}
\usepackage{babel,blindtext}
\newtheorem{corollary}{Corollary}[theorem]
\newtheorem{exmp}{Example}[section]
\usepackage{fullpage}
\usepackage{amsfonts}
\usepackage{lscape}
\usepackage{bbm}
\usepackage{todonotes}
\usepackage{cite}
\usepackage{verbatim}
\usepackage{bm}
\DeclareMathOperator*{\argmax}{arg\,max}
\usepackage[margin=1in]{geometry}
\providecommand{\keywords}[1]{\textbf{\textit{Keywords:---}} #1}
\usepackage[T1]{fontenc}
\usepackage{authblk}
\newcommand*{\affaddr}[1]{#1} % No op here. Customize it for different styles.
\newcommand*{\affmark}[1][*]{\textsuperscript{#1}}
%\newcommand*{\email}[1]{\texttt{#1}}
\title{\textbf{Mahalanobis distance explained}}
\author{}
\date{}
\begin{document}
\maketitle
\section{Diagonalizability}
Let $\mathbf{X}$ be a matrix whose rows are variables and whose columns are observations centered about the mean. Then the covariance of $\mathbf{X}$ is given by
\begin{equation}
\mathbf{{\Large{\Sigma} }}= cov(\mathbf{X}) = \frac{1}{n-1}\mathbf{ XX^\text{T}}
\end{equation}
Assume we want a mapping $\mathbf{M}$ so that $\mathbf{Y=M^\text{T}X}$ has a covariance that is diagonal. Then:
\begin{itemize}
\item Assume there is no observation with all zero values.
\item Assume no two observations are identical, i.e.\ the columns are linearly independent.
\item In short, exclude the (rare) degeneracies that would make the covariance matrix singular.
\end{itemize}
Then, since the covariance matrix is symmetric and (under these assumptions) positive definite, it is diagonalizable, i.e. it can be written as
\begin{equation}\label{eq:eigendecomposition}
\mathbf{{\Large{\Sigma} }} = \mathbf{U^{-1} D U = U^\text{T} D U}
\end{equation}
or equivalently
\begin{equation}\label{eq:eigendecomposition2}
\mathbf{U {\Large{\Sigma} } U^\text{T}} = \mathbf{D}
\end{equation}
where $\mathbf{D}$ is diagonal and, in this case due to the properties of the covariance matrix, the matrix $\mathbf{U}$ is orthogonal, i.e. $\mathbf{U^{-1} = U^\text{T}}$, i.e. $\mathbf{UU^\text{T}} = \mathbf{I}$. The columns of $\mathbf{U^{-1}}$ are eigenvectors of the covariance matrix, and the entries of $\mathbf{D}$ are its eigenvalues, which are also the variances of the decorrelated variables obtained from $\mathbf{X}$.
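The claim that the eigenvector matrix diagonalizes the covariance is easy to verify numerically. The NumPy sketch below follows the convention of Eq.~(\ref{eq:eigendecomposition}), $\mathbf{\Sigma} = \mathbf{U^\text{T} D U}$, with rows of $\mathbf{X}$ as variables; it is illustrative only, and the variable names are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are variables, columns are observations; center each variable.
X = rng.standard_normal((3, 1000))
X = X - X.mean(axis=1, keepdims=True)

n = X.shape[1]
sigma = X @ X.T / (n - 1)               # covariance matrix

# eigh returns sigma = V diag(w) V^T, so with U = V^T we recover
# sigma = U^T D U as in the text, and U sigma U^T = D.
w, V = np.linalg.eigh(sigma)
U = V.T
D = np.diag(w)

assert np.allclose(U.T @ D @ U, sigma)  # the eigendecomposition
assert np.allclose(U @ sigma @ U.T, D)  # U diagonalizes the covariance

# Y = U X then has diagonal covariance with the eigenvalues on the diagonal.
Y = U @ X
cov_Y = Y @ Y.T / (n - 1)
assert np.allclose(cov_Y, D)
```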
Assume we want to rotate the data, or represent them in a coordinate system where the variables are orthogonal, i.e. the collinearity is removed. This is what PCA does. Now let's pretend we do not know that. Let's say we want to transform $\mathbf{X}$ into $\mathbf{Y}$ by a mapping $\mathbf{M^\text{T}}$, i.e. $\mathbf{Y= M^\text{T}X}$, so that the covariance of $\mathbf{Y}$ is a diagonal matrix, $\mathbf{\hat{D}}$. \\
\pagebreak
So, we want $cov(\mathbf{Y}) = \frac{1}{n-1}\mathbf{Y}\mathbf{Y^\text{T}}$ to be diagonal. Let's take a look:
\begin{equation}\label{eq:diagonalize}
\begin{aligned}
\mathbf{\hat{D}} = cov({\mathbf{Y}}) &= \frac{1}{n-1}\mathbf{Y}\mathbf{Y^\text{T}}\\
&= \frac{1}{n-1}\mathbf{M^\text{T}X}\mathbf{(M^\text{T}X)^\text{T}}\\
&= \frac{1}{n-1} \mathbf{M^\text{T}}\mathbf{XX^\text{T}}\mathbf{ M}\\
&= \mathbf{M^\text{T}}\mathbf{\Sigma}\mathbf{M}
\end{aligned}
\end{equation}
So, we arrived at $\mathbf{\hat{D}} = \mathbf{M^\text{T}}\mathbf{\Sigma}\mathbf{M}$. Hence, if you choose $\mathbf{M} = \mathbf{U^\text{T}}$, so that $\mathbf{M^\text{T}} = \mathbf{U}$, then by Eq.~(\ref{eq:eigendecomposition}) the covariance of $\mathbf{Y}$ is $\mathbf{U}\mathbf{\Sigma}\mathbf{U^\text{T}} = \mathbf{D}$, which is diagonal, and you have $\mathbf{\hat{D}} = \mathbf{D}$.\\
Since the decomposition given by Eq.~(\ref{eq:eigendecomposition}) is the eigendecomposition of $\mathbf{\Sigma}$, we can see this is what PCA does.
\section{Equivalency}
\theoremstyle{definition}
\begin{definition}
Suppose the random vectors $\mathbf{v}$ and $\mathbf{w}$ are drawn from a distribution whose associated covariance matrix is given by $\mathbf{\Sigma}$.
Then define the Mahalanobis distance as follows:
\begin{equation}\label{eq:mahabDist}
d = \mathbf{(v - w)^\text{T} \Sigma^{-1} (v - w)}
\end{equation}
\end{definition}
Let's take a look:
\begin{equation}\label{eq:equivalency}
\begin{aligned}
d &= \mathbf{(v - w)^\text{T} \Sigma^{-1} (v - w)} \\
&= \mathbf{(v - w)^\text{T} (U^\text{T} D U)^{-1} (v - w)} \\
&= \mathbf{(v - w)^\text{T} (U^{-1} D^{-1} U^{-T}) (v - w)} \\
&= \mathbf{(v - w)^\text{T} (U^\text{T} D^{-1} U) (v - w)} \\
&= \mathbf{(U(v - w))^\text{T} D^{-1} (U (v - w))}
\end{aligned}
\end{equation}
Notice:
\begin{itemize}
\item The diagonal entries of $\mathbf{D}$ are the eigenvalues of the covariance matrix, i.e.\ the variances along the principal directions. So, if the data were rescaled by these variances, this distance would reduce to a Euclidean distance.
\item Let's look at the last term in the above equation:
\begin{equation}
\mathbf{ U (v - w)} = \mathbf{ U v} - \mathbf{ U w}
\end{equation}
Each of these terms is a mapping of $\mathbf{v}$ or $\mathbf{w}$ into the PCA space of the data $\mathbf{X}$.
\end{itemize}
Suppose $\mathbf{x}$ is a vector and we wish to represent it in the column space of a matrix $\mathbf{A} = [\mathbf{A}_1, \mathbf{A}_2, \dots, \mathbf{A}_N]$, where each $\mathbf{A}_i$ is a column of $\mathbf{A}$. So, we are looking for constants $y_1, y_2, \dots, y_N$ so that
\[\mathbf{x} = y_1 \mathbf{A}_1 + y_2 \mathbf{A}_2 + \dots + y_N \mathbf{A}_N = \mathbf{Ay} \]
Hence, $\mathbf{y = A^{-1} x}$. So, $\mathbf{y}$ is the mapping of $\mathbf{x}$ into the column space of $\mathbf{A}$, just like $\mathbf{Uv}$, which is the mapping of $\mathbf{v}$ into the column space of $\mathbf{U^{-1}}$, whose columns are eigenvectors of the covariance matrix, i.e. the PCA basis.
\iffalse
P.S. I had to choose where to put the exponent \{-1\} to indicate inverse of the matrices, whether to the left or right of $\mathbf{\Sigma}$.
Either way would have made some parts easier, but some other parts harder to follow.\\ Now, the question is, when you mentioned M. Distance takes care of scales and collinearity, where, in what context you learned that. Did the context refer to this definition of distance as M. distance, or they were talking about the distance between a point and distribution? \fi \end{document}
\chapter{The Collatz Tree} \section{The Connection between Groups and Graphs} \label{sec:groups_graphs} Let $(a_k)$ be a numerical sequence with $a_k=g^{(k)}(m)$, then a reversion produces an infinite number of sequences of reversely-written Collatz members \cite{Ref_Klisse_2010}. \par\medskip Let $S$ be a set containing two elements $q$ and $r$, which are bijective functions over $\mathbb{Q}$: \begin{equation} \begin{array}{l} q(x)=2x \\ r(x)=\frac{1}{3}(x-1) \end{array} \end{equation} Let a binary operation be the right-to-left composition of functions $q\circ r$, where $q\circ r(x)=q(r(x))$. Composing functions is an associative operation. All compositions of the bijections $q$ and $r$ and their inverses $q^{-1}$ and $r^{-1}$ are again bijective. The set, whose elements are all these compositions, is closed under that operation. It forms a free group $F$ of rank 2 with respect to the free generating set $S$, where the group's binary operation $\circ$ is the function composition and the group's identity element is the identity function $id_{\mathbb{Q}}=e$. We call $e$ an \textit{empty string}. $F$ consists of all expressions (strings) that can be concatenated from the generators $q$ and $r$. The corresponding Cayley graph $Cay(F,S)=G$ is a regular tree whose vertices have four neighbors \cite[p.~66]{Ref_Loeh}. A tree is called \textit{regular} or \textit{homogeneous} when every vertex has the same degree, in this case, $d(v)=4$ for every vertex $v$ in $G$. The Cayley graph's set of vertices is $V(G)=F$, and its set of edges is $E(G)=\left\{\left\{f,f\circ s\right\}\mid f\in F,s\in\left(S\cup S^{-1}\right)\setminus\left\{e\right\}\right\}$ \cite[p.~57]{Ref_Loeh}. More precisely, the vertices are \textit{labeled} by the elements (strings) of $F$. 
\par\medskip In conformance with graph-theoretical precepts \cite{Ref_Bondy_Murty}, \cite{Ref_Bonnington_Little}, \cite{Ref_Bender_Williamson} we specify a subgraph $H$ of $G$ as a triple $\left(V(H),E(H),\psi_{H}\right)$ consisting of a set $V(H)$ of vertices, a set $E(H)$ of edges, and an incidence function $\psi_{H}$. The latter is, in our case, the restriction $\psi_{G}\vert_{E(H)}$ of the Cayley graph's incidence function to the set of edges that only join vertices, which are labeled by a string over alphabet $\{r,q\}$ without the inverses: $E(H)=\left\{\left\{f,f\circ s\right\}\mid f\in F,s\in S\setminus\left\{e \right\}\right\}$. \par\medskip This subgraph corresponds to the monoid $S^*$, which is freely generated by $S$. This follows related thoughts \cite{Ref_Truemper_2014} that examine the Collatz problem in terms of a free semigroup on the set $S^{-1}$ of inverse generators. Note that this semigroup is not to be confused with an \textit{inverse semigroup} ``in which every element has a unique inverse'' \cite[p.~26]{Ref_Almeida}, \cite[p.~22]{Ref_Loeh}. \par\medskip Let $Y^X=\{f\mid f\text{ is a map }X\rightarrow Y\}$ be the set of functions, which in category theory is referred to as the \textit{exponential object} for any sets $X$, $Y$. The evaluation function $ev:Y^X\times X\to Y$ sends the pair $(f,x)$ to $f(x)$. For a detailed description of this concept, see \cite[p.~127]{Ref_Johnsonbaugh}, \cite[p.~155]{Ref_MacLane_Birkhoff}, \cite[p.~54]{Ref_Novak_etal} and \cite[p.~188]{Ref_Pellissier}. We define the evaluation function $ev_{S^*}:S^*\times\{1\}\rightarrow\mathbb{Q}$ that evaluates an element of $S^*$, id est a composition of $q$ and $r$, for the given input value $1$. Furthermore, we define the corestriction ${ev^0_{S^*}}$ of $ev_{S^*}$ to $\mathbb{N}$.
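The evaluation function $ev_{S^*}$ can be sketched in a few lines of code; here compositions are written as strings over $\{q,r\}$ applied right to left, and exact rational arithmetic stands in for $\mathbb{Q}$ (the string encoding is my own device, not part of the construction above).

```python
from fractions import Fraction

def q(x):
    return 2 * x                   # q(x) = 2x

def r(x):
    return (Fraction(x) - 1) / 3   # r(x) = (x - 1)/3, exact over Q

def ev(word, x=Fraction(1)):
    """Evaluate a composition such as 'qrqqqq' (applied right to left) at x."""
    for ch in reversed(word):
        x = q(x) if ch == "q" else r(x)
    return x

def in_T(word):
    """Does the composition send 1 to a natural number?"""
    v = ev(word)
    return v.denominator == 1 and v >= 1

# q o r o q^4 sends 1 to 10, and r o q^6 sends 1 to 21, ...
assert ev("qrqqqq") == 10
assert ev("rqqqqqq") == 21
# ... but the concatenation q r q^4 r q^6 leaves the natural numbers.
assert not in_T("qrqqqqrqqqqqq")
```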
Since a corestriction of a function restricts the function's codomain \cite[p.~3]{Ref_Helemskii}, the function $ev^0_{S^*}$ operates on a subset $T\subset S^*$ that contains only those compositions of $q$ and $r$ which return a natural number when inputting the value $1$. \par\medskip The set $T$ does not form a monoid under function composition: for example, $ev_{S^*}(qrq^4,1)=10$ and $ev_{S^*}(rq^6,1)=21$, but the composition $qrq^4rq^6$ does not lie in $T$, because the evaluation $ev_{S^*}(qrq^4rq^6,1)$ yields a value outside the codomain $\mathbb{N}$. However, each element of this set labels a vertex of a tree $H_{T}\subset H$, which is a proper subtree of $H$. \par\medskip Let $U\subset T$ be a subset of $T$ that contains no reduced word with two or more successive characters $r$. The corresponding tree $H_{U}\subset H_{T}$ reflects Collatz sequences as demonstrated in figure~\ref{fig:1}. \begin{remark} When talking about trees having a root (``rooted trees''), another important concept should be explained: the \textbf{level of a vertex}, often called the \textbf{depth of a vertex}, is the length of the path from the root to this vertex \cite[p.~804]{Ref_Rosen}. In other words, it is the vertex's distance (the number of edges in the path) from the root. The \textbf{height of a vertex} is its level plus one, $height(v)=level(v)+1$, see \cite[p.~169]{Ref_Makinson}. \end{remark} % trim=left top right bottom \begin{figure} \includegraphics[trim=2.3cm 5.8cm 5.9cm 4.8cm, width=1.00\textwidth,page=1]{figures/caytree.pdf} \caption{Small section of $H_T$ with darkly highlighted subtree $H_U$} \label{fig:1} \end{figure} \section{Defining the Tree} The starting point for specifying our tree is $H_U$. Due to its significance, we first concretize $H_U$ by definition~\ref{def:H_U} below, which establishes four essential characteristics.
\pagebreak \begin{definition} The graph $H_U$ possesses the following key properties: \begin{itemize} \item \mbox{\boldmath$H_U$} \textbf{is a directed graph (digraph):} Fundamentally, in the more general case of an undirected graph, regarded as a triple $(V,E,\psi)$, the incidence function maps an edge to an unordered vertex pair $\psi : E\rightarrow\{X\subseteq V:\left|X\right|=2\}$. In a digraph, the set $V\times V$ represents ordered vertex pairs. Accordingly, the incidence function is defined more specifically, namely as a mapping of the edges to that set $\psi : E\rightarrow\{(v,w)\in V\times V:v\neq w\}$, see \cite[p.~15]{Ref_Korte_Vygen}. \item \mbox{\boldmath$H_U$} \textbf{is a rooted tree:} According to Rosen \cite[p.~747]{Ref_Rosen}, a rooted tree is ``a tree in which one vertex has been designated as the root and every edge is directed away from the root.'' Notably, this definition treats the directionality as an inherent part of rooted trees, unlike Mehlhorn and Sanders \cite[p.~52]{Ref_Mehlhorn_Sanders}, for example, who distinguish between undirected and directed rooted trees. \par\smallskip \textit{Note: As long as we do not stipulate that vertices may collapse, it is guaranteed that the graph is a tree.} \item \mbox{\boldmath$H_U$} \textbf{is an out-tree:} There is exactly one path from the root to every other node \cite[p.~52]{Ref_Mehlhorn_Sanders}, which means that edge directions go from parents to children \cite[p.~108]{Ref_Du_Ko_Hu}. This property is implied in Rosen's definition of a rooted tree as well, which states that ``every edge is directed away from the root.'' An out-tree is sometimes designated an \textit{out-arborescence} \cite[p.~108]{Ref_Du_Ko_Hu}. \item \mbox{\boldmath$H_U$} \textbf{is a labeled tree:} For defining a labeled graph, Ehrig et al. \cite[p.~23]{Ref_Ehrig_etal} use a label alphabet consisting of a vertex label set and an edge label set.
Since we only label the vertices, in our case the specification of a vertex label set $L_V$ together with the vertex label function $l_V:V\rightarrow L_V$ is sufficient. Originally, we said vertex labels are strings over the alphabet $S=\{q,r\}$, through which the free monoid $S^*$ is generated. We label $H_U$ by defining $l_{V(H_U)}(v)=ev^0_{S^*}(l_{V(G)}(\iota(v)),1)$, where $\iota:V(H_U)\hookrightarrow V(G)$ is the inclusion map \cite[p.~142]{Ref_Childs} from the set of vertices of $H_U$ into the set of vertices of the previously defined Cayley graph $G$. \end{itemize} \label{def:H_U} \end{definition} \par\medskip We define a tree $H_C$ by taking the tree $H_U$ as a basis and contracting, for every vertex $v\in V(H_U)$ satisfying $2\mid l_{V(H_U)}(v)$, the incoming edge. We attach the label of the parent of $v$ to the new vertex, which results from replacing (merging) the two overlapping vertices that the contracted edge used to connect. Visually, we obtain $H_C$ by contracting all edges in $H_U$ that have an even-labeled target vertex, which (due to contraction) gets ``merged into its parent.'' Edge contraction is occasionally referred to as \textit{collapsing an edge}. For more details and examples on edge contraction, see Voloshin \cite[p.~27]{Ref_Voloshin} and Loehr \cite{Ref_Loehr}. \par\medskip The tree $H_C$ is a \textit{minor of $H_U$}, since it can be obtained from $H_U$ ``by a sequence of any vertex deletions, edge deletions and edge contractions'' \cite[p.~32]{Ref_Voloshin}. Contracting the edges along a path of adjacent (in our case even-labeled) vertices is called \textit{path contraction}. \par\medskip A small section of the tree $H_C$ is shown in figure~\ref{fig:2}. Other definitions of the same tree exist, see for example Conrow \cite{Ref_Conrow} or Bauer \cite[p.~379]{Ref_Bauer}.
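The contraction can be made concrete with a small numeric sketch. The Python fragment below is our own illustration (the function name and the search bound \texttt{max\_alpha} are arbitrary choices): an odd label $v$ remains a child of $w$ in $H_C$ exactly when $3v+1=w\cdot 2^{\alpha}$ for some $\alpha\geq 1$, where $\alpha$ counts the edges of the uncontracted path in $H_U$ from $w$ down to $v$.

```python
def children_in_HC(w, max_alpha=12):
    """Enumerate (v, alpha) pairs with 3*v + 1 == w * 2**alpha:
    the odd-labeled vertices that stay adjacent to w in H_C after
    all even-labeled vertices of H_U are merged into their parents."""
    kids = []
    for alpha in range(1, max_alpha + 1):
        numerator = w * 2**alpha - 1
        if numerator % 3 == 0:
            v = numerator // 3
            if v != w:  # skip the trivial cycle at the root w = 1
                kids.append((v, alpha))
    return kids

# The first children of w = 5 as shown in figure 2:
print([v for v, _ in children_in_HC(5)][:4])  # [3, 13, 53, 213]
```

A label divisible by $3$ yields no children at all, since $w\cdot 2^{\alpha}-1\equiv 2\pmod{3}$ in that case, matching the leaves of $H_C$.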
\begin{figure} \includegraphics[width=1.00\textwidth]{figures/h_c.png} \caption{Small section of $H_C$ (the trivial cycle is omitted)} \label{fig:2} \end{figure} \section{Relationship of successive nodes in \mbox{$H_C$}} Let $v_1$ and $v_{n+1}$ be two vertices of $H_C$, where $v_1$ is reachable from $v_{n+1}$ with $level(v_1)-level(v_{n+1})=n$. Hence, a path $(v_{n+1},\ldots,v_1)$ exists between these two vertices. Theorem~\ref{theo:1} specifies the following relationship between $v_1$ and $v_{n+1}$. \par\medskip \begin{theorem} \label{theo:1} $l_{V(H_C)}(v_{n+1})=3^nl_{V(H_C)}(v_1)\prod_{i=1}^{n}\left(1+\frac{1}{3l_{V(H_C)}(v_{i})}\right)2^{-\alpha_i}$. To improve readability, we omit the vertex label function and write shortly:\\ $v_{n+1}=3^nv_1\prod_{i=1}^{n}\left(1+\frac{1}{3v_{i}}\right)2^{-\alpha_i}$. The value $\alpha_i\in\mathbb{N}$ is the number of edges that have been contracted between $v_i$ and $v_{i+1}$ in $H_U$. \end{theorem} To demonstrate the relationship established by theorem~\ref{theo:1}, example~\ref{ex:vertices} runs through a concrete path in $H_C$. \par\medskip \begin{example} \label{ex:vertices} The two vertices $v_1=45$ and $v_{1+3}=v_4=5$ are connected via the path $(5,13,17,45)$, see figure~\ref{fig:2}. Furthermore, one can retrace in figure~\ref{fig:3} the uncontracted path between these two nodes within $H_U$. When applied to this example, theorem~\ref{theo:1} produces the following: \begin{center} $5=v_{1+3}=3^3\cdot45\cdot\left(1+\frac{1}{3\cdot45}\right)\cdot2^{-3} \cdot\left(1+\frac{1}{3\cdot17}\right)\cdot2^{-2} \cdot\left(1+\frac{1}{3\cdot13}\right)\cdot2^{-3}$ \end{center} \end{example} \begin{proof} \label{proof:1} This relationship of successive nodes can be proven by induction. 
For the base case, we set $n=1$ and retrieve \begin{center} $v_{1+1}=3v_1\left(1+\frac{1}{3v_1}\right)2^{-\alpha_1} =\left(3v_1+1\right)2^{-\alpha_1}=v_2$ \end{center} The path from $v_2$ to $v_1$ can accordingly be expressed by a string $rq\cdots q$ of $S^*$, because $v_1=r\circ q^{\alpha_1}\left(v_2\right)$. For the step case, we pass from $n$ to $n+1$, which leads to \begin{equation*} \begin{array}{cl} v_{n+2} & =3^{n+1}v_1\prod_{i=1}^{n+1}\left(1+\frac{1}{3v_i}\right)2^{-\alpha_i}\\ & =3^{n+1}v_1\left(1+\frac{1}{3v_{n+1}}\right)2^{-\alpha_{n+1}}\prod_{i=1}^{n}\left(1+\frac{1}{3v_i}\right)2^{-\alpha_i}\\ & =3\left(1+\frac{1}{3v_{n+1}}\right)2^{-\alpha_{n+1}}3^nv_1\prod_{i=1}^{n}\left(1+\frac{1}{3v_i}\right)2^{-\alpha_i}\\ & =3\left(1+\frac{1}{3v_{n+1}}\right)2^{-\alpha_{n+1}}v_{n+1}\\ & =\left(3v_{n+1}+1\right)2^{-\alpha_{n+1}} \end{array} \end{equation*} In this case the path from $v_{n+2}$ to $v_{n+1}$ is likewise expressible by a string $rq\cdots q$ of $S^*$, since $v_{n+1}=r\circ q^{\alpha_{n+1}}\left(v_{n+2}\right)$. \end{proof} Even though the tree may theoretically contain two or more identically labeled vertices, it is essential to emphasize that we only consider such paths $(v_{n+1},\ldots,v_1)$ whose vertices are all labeled differently. Later in section~\ref{sec:cycles}, we even require that identically labeled nodes are one and the same. In order to determine successive nodes correctly using theorem~\ref{theo:1}, we must consider the halting conditions. These are specified in definition~\ref{def:halting_condition}. \begin{definition} \label{def:halting_condition} When determining successive nodes starting at $v_1$ according to theorem~\ref{theo:1}, we halt if one of the following two conditions is fulfilled: \begin{enumerate} \item $v_{n+1}=1$ \item $v_{n+1}\in\{v_1,v_2,\ldots,v_n\}$ \end{enumerate} If the first condition applies, the Collatz conjecture is true for the specific sequence. 
When the second condition is fulfilled, the sequence has led to a cycle. For every starting node except the root node (labeled with $1$), such a cycle would consequently falsify the Collatz conjecture. Let us consider the example $v_1=13$, where the algorithm halts after two iterations, because the first condition is met: \[ v_{n+1}=3^2\cdot13\cdot\left(1+\frac{1}{3\cdot13}\right)\left(1+\frac{1}{3\cdot5}\right)\cdot2^{-7}=1 \] If we examine the case $v_{1}=1$, we realize that the algorithm finishes after the first iteration, since both halting conditions are true. The sequence stops because the final node labeled with $1$ is reached. Furthermore, the sequence has led to a cycle: \[ v_{n+1}=3\cdot\left(1+\frac{1}{3}\right)2^{-2}=1 \] The trivial cycle is the only sequence for which both conditions are fulfilled. \end{definition} \noindent Theorem~\ref{theo:1} can be used for specifying the condition of a cycle as follows: \begin{equation} \label{eq:func_cycle} \begin{array}{l} v_{1}=3^nv_1\prod_{i=1}^{n}\left(1+\frac{1}{3v_i}\right)2^{-\alpha_i} \\[\medskipamount] 2^{\alpha_1+\cdots+\alpha_n}=\prod_{i=1}^{n}\left(3+\frac{1}{v_i}\right) \end{array} \end{equation} A similar condition has been formulated by Hercher \cite{Ref_Hercher} and Roosendaal \cite{Ref_Roosendaal_2020}. Taking a first look at equation~\ref{eq:func_cycle}, we recognize the trivial cycle for $n=1$. One might easily come to the false conclusion that the term only results in a natural number for this trivial cycle, since we are multiplying fractions. The following counterexample, starting at $v_1=31$, disproves this assumption: \begin{equation*} 20480=\left(3+\frac{1}{31}\right)\left(3+\frac{1}{47}\right) \left(3+\frac{1}{71}\right)\left(3+\frac{1}{107}\right)\left(3+\frac{1}{161}\right)\left(3+\frac{1}{121}\right)\left(3+\frac{1}{91}\right)\left(3+\frac{1}{137}\right)\left(3+\frac{1}{103}\right) \end{equation*} According to the OEIS \cite{Ref_OESIS}, the integer $v_1=31$ is called \textit{self-contained}.
The term self-contained is based on the fact that the node $v_{n+1}=v_{10}=155$ is divisible by the starting node $v_1=31$. Moreover, $v_{10}$ results from applying one and the same function (in this case the Collatz function) using $v_1$ as input, see also Guy \cite[p.~332]{Ref_Guy}. In such a case equation~\ref{eq:func_cycle} leads to a natural number, but not necessarily to a cycle. A cycle only occurs if the term results in a power of two. One example is the trivial cycle. We find another case when we choose the factor $5$ instead of $3$: \begin{center} $128=2^7=\left(5+\frac{1}{13}\right)\left(5+\frac{1}{33}\right) \left(5+\frac{1}{83}\right)$ \end{center} The above example shows that non-trivial cycles can be found if we generalize the Collatz conjecture by replacing the factor $3$ with the variable $k$. We study this generalized form and the occurrence of cycles in section~\ref{sec:cycles}. A detailed elaboration of this divisibility and a deeper understanding of the tree $H_C$ are needed in order to make progress towards a proof of the Collatz conjecture. %\par\medskip\noindent %Generally, for any variant $kx+1$ it applies that if $v_1\mid v_{n+1}$, then the product is natural: %\begin{equation*} % \prod_{i=1}^n\left(k+\frac{1}{v_i}\right)\in \mathbb{N} %\end{equation*} \begin{figure} \includegraphics[width=1.00\textwidth]{figures/h_u.png} \caption{Section of $H_U$ containing the path from $5$ to $45$} \label{fig:3} \end{figure} \section{Relationship of sibling nodes in \mbox{$H_C$}} In a rooted tree, vertices that have the same parent are called ``siblings'' \cite[p.~702]{Ref_Johnsonbaugh}, \cite[p.~747]{Ref_Rosen}. Sibling vertices accordingly have the same level. \par\medskip Let $w$ be a vertex from which a path exists to the vertex $v_1$, and let $v_2$ be the immediate right-sibling of $v_1$; then $l_{V\left(H_C\right)}\left(v_2\right)=4\cdot l_{V\left(H_C\right)}\left(v_1\right)+1$.
This fact has been expressed differently by Kak \cite{Ref_Kak_2014} as follows: ``If an odd number $a$ leads to another odd number (after several applications of the Collatz transformation) $b$, then $4a+1$ also leads to $b$.'' \par\medskip Applied to our approach, consider $w$ as the parent of $v_1$ and $v_2$. Suppose, in $H_U$, a path consisting of $n+1$ edges goes from $w$ to $v_1$. Then we can straightforwardly show that $n$ edges in $H_U$ have been contracted between the nodes $w$ and $v_1$, and $n+2$ edges between $w$ and $v_2$ (for simplicity we again omit writing the label function): \begin{equation*} \begin{array}{l} v_1=\frac{w\cdot2^n-1}{3} \\[\medskipamount] v_2=\frac{w\cdot2^{n+2}-1}{3}=4\cdot v_1+1 \end{array} \end{equation*} For example, $n=3$ edges in $H_U$ have been contracted between $w=5$ and $v_1=13$, and $n+2=5$ edges between $w$ and $v_2=53$, where in $H_C$ the vertex $v_2$ is the right-sibling of $v_1$ and these two sibling vertices are immediate children of $w$. \section{A vertex's \mbox{$n$}-fold left-child and right-sibling in \mbox{$H_C$}} Referring to the ``left-child, right-sibling representation'' of rooted trees \cite[p.~246]{Ref_Cormen_Leiserson_Rivest_Stein}, the function $\textit{left-child}:V\rightarrow V$ returns the leftmost child of a vertex $v$. Nesting this function $n$ times leads to the definition of a vertex's $n$-fold left-child, which is given by $\textit{left-child}^n(v)$. As shown in figure~\ref{fig:2}, for example, $\textit{left-child}^3(13)=7$. \par\medskip The function $\textit{right-sibling}:V\rightarrow V$ points to the sibling of a vertex $v$ immediately to its right \cite[p.~246]{Ref_Cormen_Leiserson_Rivest_Stein}. If this function is nested $n$ times, we get a vertex's $n$-fold right-sibling, defined by $\textit{right-sibling}^n(v)$. One example, also shown in figure~\ref{fig:2}, is $\textit{right-sibling}^2(113)=1813$. \par\medskip Let $w$ be a vertex in $H_C$ and $v_0$ the left-child of $w$.
The $n$-fold right-sibling of $v_0$ can be calculated as follows: \begin{equation} \label{eq:nfold_right_sibling} v_n=\textit{right-sibling}^n(v_0)=\frac{1}{3}\left(w\cdot2^{2n+\pi_3(w\bmod 3)}-1\right) \end{equation} The function $\pi_3$ is the self-inverse permutation (involution): \begin{equation} \label{eq:pi_3} \pi_3=\left(\begin{array}{cc} 1 & 2\\ 2 & 1 \end{array}\right) \end{equation} We consider permutations of the set $\{1,2\}$ and not of $\{0,1,2\}$, due to the fact that $w\bmod 3$ cannot be zero. A node $w$ in $H_C$ that is labeled by an integer divisible by $3$ is a leaf; such a node has no left-child and, more generally, no children at all. \par\medskip \noindent When setting $n=0$, we trivially retrieve the left-child of $w$: \begin{center} $v_0=\textit{left-child}(w)=\frac{1}{3}\left(w\cdot2^{\pi_3(w\bmod 3)}-1\right)$ \end{center} \begin{example} \label{ex:siblings} Let us refer to figure~\ref{fig:2} again and pick out $w=5$. Then the left-child of $w$ is $v_0=3$ and its threefold right-sibling is $v_3=213$: \begin{equation*} \begin{array}{l} v_0=\frac{1}{3}\left(5\cdot2^{\pi_3(5\bmod 3)}-1\right)=3 \\[\medskipamount] v_3=\frac{1}{3}\left(5\cdot2^{2\cdot3+\pi_3(5\bmod 3)}-1\right)=213 \end{array} \end{equation*} \end{example} \section{Left-child and right-sibling in the \mbox{$5x+1$} variant of \mbox{$H_C$}} In the following we take a look at the $5x+1$ variant of $H_C$. We name this graph $H_{C,5}$ and note that it is not a tree and, moreover, that not all of its vertices are reachable from the root.
We define the permutation $\pi_5$ as follows: \begin{center} $\pi_5=\left(\begin{array}{cccc} 1 & 2 & 3 & 4\\ 4 & 3 & 1 & 2 \end{array}\right)$ \end{center} Next, by letting $w$ be a vertex in $H_{C,5}$ and $v_0$ the left-child of $w$, we obtain the $n$-fold right-sibling of $v_0$ by a function that is slightly different from the one defined by equation~\ref{eq:nfold_right_sibling}: \begin{equation} \label{eq:nfold_right_sibling_5} v_n=\textit{right-sibling}^n(v_0)=\frac{1}{5}\left(w\cdot2^{4n+\pi_5(w\bmod 5)}-1\right) \end{equation} Analogously to equation~\ref{eq:pi_3}, only permutations of the set $\{1,2,3,4\}$ without zero need to be considered, since $w\bmod 5$ cannot be zero. Otherwise, if $w\equiv 0\pmod{5}$, which means that $w$ is labeled by an integer divisible by $5$, then the node $w$ has no successor in $H_{C,5}$. \par\medskip \noindent By setting $n=0$, the function (given above by equation~\ref{eq:nfold_right_sibling_5}) returns the left-child of $w$: \begin{center} $v_0=\textit{left-child}(w)=\frac{1}{5}\left(w\cdot2^{\pi_5(w\bmod 5)}-1\right)$ \end{center} \begin{figure} \includegraphics[width=1.00\textwidth]{figures/h_c5b.png} \caption{Section of the graph $H_{C,5}$ starting at its root (without branches that reflect a subsequence containing the trivial cycle)} \label{fig:4} \end{figure} Figure~\ref{fig:4} illustrates a small section of $H_{C,5}$ starting at its root. The particularly interesting thing about the graph $H_{C,5}$ is that it contains three cycles: the trivial cycle $1,3$ starting from the root and the two non-trivial cycles $43,17,27$ and $83,33,13$. To be precise, three cycles are known (as will become apparent later in section~\ref{sec:non_trivial_cycles}); on the basis of present knowledge, the existence of further cycles cannot be ruled out with certainty.
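The statements of the last two sections can be checked numerically. The Python sketch below is our own illustration, not part of the formal development: it bundles $\pi_3$ and $\pi_5$ into one lookup table, implements the $n$-fold right-sibling formulas for both variants (the common exponent shape $(k-1)n+\pi_k(w\bmod k)$ is our own generalization of the two displayed formulas), and searches for the cycles of $H_{C,5}$ by iterating the contracted $5x+1$ step.

```python
from fractions import Fraction

# The permutations pi_3 on {1, 2} and pi_5 on {1, 2, 3, 4}.
PI = {3: {1: 2, 2: 1}, 5: {1: 4, 2: 3, 3: 1, 4: 2}}

def right_sibling_n(w, n, k=3):
    """n-fold right-sibling of left-child(w) in the kx+1 variant;
    n = 0 yields left-child(w) itself."""
    assert w % k != 0, "labels divisible by k have no children"
    numerator = w * 2**((k - 1) * n + PI[k][w % k]) - 1
    assert numerator % k == 0
    return numerator // k

def next_odd(v, k=5):
    """Contracted kx+1 step: multiply by k, add 1, divide out all twos."""
    v = k * v + 1
    while v % 2 == 0:
        v //= 2
    return v

def cycle_from(v, k=5, limit=1000):
    """Follow successors of v until a label repeats; return that cycle."""
    seen = []
    while v not in seen:
        seen.append(v)
        v = next_odd(v, k)
        if len(seen) > limit:
            raise RuntimeError("no cycle found within the limit")
    return seen[seen.index(v):]

def cycle_product(cycle, k=5):
    """Exact product of (k + 1/v) over a cycle; a power of two for true cycles."""
    product = Fraction(1)
    for v in cycle:
        product *= k + Fraction(1, v)
    return product
```

For instance, the example values of figure~\ref{fig:2} are reproduced by \texttt{right\_sibling\_n(5, 0)} $=3$ and \texttt{right\_sibling\_n(5, 3)} $=213$, while \texttt{cycle\_from(17)} returns the non-trivial cycle through $43$, $17$, $27$ and \texttt{cycle\_product([13, 33, 83])} evaluates to $128=2^7$, as in the displayed identity.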
\section{Tables} \begin{frame} \frametitle{Tables} \framesubtitle{Insertion Of Tables} \begin{exampleblock}{New packages in this section} \begin{itemize} \item longtable \end{itemize} \end{exampleblock} \begin{block}{New commands in this section} \begin{itemize} \item \color{unibablueI}\textbackslash begin\color{black}\{tabular\} \dots ~\color{unibablueI}\textbackslash end\color{black}\{tabular\} \item \color{unibablueI}\textbackslash begin\color{black}\{table\} \dots ~\color{unibablueI}\textbackslash end\color{black}\{table\} \item \color{unibablueI}\textbackslash begin\color{black}\{longtable\} \dots ~\color{unibablueI}\textbackslash end\color{black}\{longtable\} \item \color{unibablueI}\textbackslash begin\color{black}\{tabbing\} \dots ~\color{unibablueI}\textbackslash end\color{black}\{tabbing\} \item \color{nounibaredI}$|$\color{black} \item \color{nounibaredI}\& \color{black} \item \color{nounibaredI}\textbackslash hline\color{black} \item \color{nounibaredI}\textbackslash multicolumn\color{black}\{\}\{\}\{\} \end{itemize} \end{block} \end{frame} \begin{frame} \frametitle{Tables} \framesubtitle{``table'' \& \glqq tabular\grqq} \textbf{Structure:}\\[2mm] \color{unibablueI}\begin{ttfamily}\textbackslash begin\color{black}\{table\}\color{nounibagreenI}[position]\color{black}\\ \color{unibablueI}\textbackslash begin\color{black}\{tabular\}\{\textit{Definition of columns}\}\\ \textit{Table content}\\ \color{unibablueI}\textbackslash end\color{black}\{tabular\}\\ \color{nounibaredI}\textbackslash caption\color{black}\{caption\}\\ \color{nounibaredI}\textbackslash label\color{black}\{tab:bsptab1\}\\ \color{unibablueI}\textbackslash end\color{black}\{table\}\\ ~\\ \end{ttfamily} \begin{block}{Reminder: positioning in most \LaTeX -- environments} \color{nounibagreenI}[h]\color{black}~or \color{nounibagreenI}[H]\color{black}~= At this very position\\ \color{nounibagreenI}[t]\color{black}~= On top of the page\\ \color{nounibagreenI}[b]\color{black}~= On bottom of the 
page\\ \color{nounibagreenI}[p]\color{black}~= Positioning on an own page \end{block} \end{frame} \begin{frame} \frametitle{Tables} \framesubtitle{Definition Of Columns} Here you can define how the columns should be aligned and how the vertical lines should be set:\\[3mm] \begin{tabbing}[H]p{column width}xxx\=\kill \textbf{Commands:}\\ l \>= left-justified\\ c \>= centered\\ r \>= right-justified\\ p\{column width\} \>= a left-justified column with defined width\\ \color{nounibaredI}$|$\color{black} \>= sets a vertical line at this position\\ \end{tabbing} \end{frame} \begin{frame} \frametitle{Tables} \framesubtitle{View From Inside} \begin{ttfamily} \input{content/listings/tabular_escaped.tex} \end{ttfamily} \end{frame} \begin{frame} \frametitle{Tables} \framesubtitle{Table Content} Here you fill the defined columns with content.\\[3mm] \begin{tabbing}[H]p{column width}xxx\=\kill \textbf{Commands:}\\ \color{nounibaredI}\&\color{black} \>= horizontal separation of rows\\ \color{nounibaredI}\textbackslash \textbackslash \color{black} \>= new line\\ \color{nounibaredI}\textbackslash hline\color{black} \>= sets a horizontal line\\[2mm] \color{nounibaredI}\textbackslash multicolumn\color{black}\{column number\}\{column alignment\}\{text\}\\[2mm] \>= Combines as many columns as you like.\\ \end{tabbing} \end{frame} \begin{frame}[t] \frametitle{Tables} \framesubtitle{Example Tabular} \begin{footnotesize} \begin{ttfamily} \input{content/listings/tabular_escaped.tex} \end{ttfamily} \end{footnotesize} \begin{tabular}{c|p{40mm}|lr|c} \multicolumn{5}{c}{E-Sports Championship Franconia} \\ \hline \hline Number & Place & Player 1 & Player 2 & Result \\ \hline 1 & N\"urnberg & Wolf & Lamm & 23:10 \\ \hline 2 & Bamberg & Meyer & Beyer & \\ \hline 3 & Zirndorf & Brandst. & Brauer & 21:21 \\ \hline \end{tabular} \end{frame} \begin{frame}{Tables} \framesubtitle{Longtable -- Table With Line Break} \bigskip „tabular“ shows the table on one page. 
If it does not fit on the page, the remainder is cut off.\\ For tables longer than one page, an environment is needed that splits the table across pages.\\ \textbf{Solution: {\ttfamily longtable}}\\ {\ttfamily longtable} allows a line break in the table. Moreover, {\ttfamily longtable} is an environment, so the {\ttfamily table}-environment is not needed anymore!\\[3mm] \begin{ttfamily} \color{unibablueI}\textbackslash begin\color{black}\{longtable\}\{\textit{definition of columns}\}\\ \textit{table content}\\ \color{nounibaredI}\textbackslash caption\color{black}\{caption\}\\ \color{nounibaredI}\textbackslash label\color{black}\{tab:bsptab2\}\\ \color{unibablueI}\textbackslash end\color{black}\{longtable\} \end{ttfamily} \end{frame} \begin{frame} \frametitle{Indentations With „tabbing“} \begin{block}{Control} \begin{itemize} \item[\begin{ttfamily}\color{nounibaredI}\textbackslash =\end{ttfamily}]\color{black}set a tab position \item[\begin{ttfamily}\color{nounibaredI}\textbackslash $>$\end{ttfamily}]select a tab position \end{itemize} \end{block} \begin{columns} \begin{column}{.5\textwidth} \begin{ttfamily}{\scriptsize \input{content/listings/tabbing_escaped.tex}} \end{ttfamily} \end{column} \begin{column}{.5\textwidth} \begin{tabbing} Employ\=ee:\\ A \> Daniel\\ B \> Martin\\ C \> Linus\\ xxx\=xxx\=xxxxxxx\kill \> Committees\\ \>\> Tests\\ \>\> Mails\\ \end{tabbing} \end{column} \end{columns} If you use the command \begin{ttfamily}\color{nounibaredI}\textbackslash kill\color{black}\end{ttfamily}, the rest of the line is not shown. In this way you can do the formatting without showing the respective text. \end{frame}
\chapter{Model Building as Software Development} \noindent Developing a statistical model in Stan means writing a Stan program and is thus a kind of software development process. Developing software is hard. Very hard. So many things can go wrong because there are so many moving parts and combinations of parts. Software development practices are designed to mitigate the problems caused by the inherent complexity of writing computer programs. Unfortunately, many methodologies veer off into dogma, bean counting, or both. A couple of books we can recommend that provide solid, practical advice for developers are \citep{HuntThomas:99} and \citep{McConnell:2004}. This section tries to summarize some of their advice. \section{Use Version Control} Version control software, such as Subversion or Git, should be in place before starting to code.% % \footnote{Stan started using Subversion (SVN), then switched to the much more feature-rich Git package. Git does everything SVN does and a whole lot more. The price is a steeper learning curve. For individual or very-small-team development, SVN is just fine.} % It may seem like a big investment to learn version control, but it's well worth it to be able to type a single command to revert to a previously working version or to get the difference between the current version and an old version. It's even better when you need to share work with others, even on a paper---work can be done independently and then automatically merged. See \refchapter{software-development} for information on how Stan itself is developed. \section{Make it Reproducible} Rather than entering commands on the command-line when running models (or entering commands directly into an interactive programming language like R or Python), try writing scripts to run the data through the models and produce whatever posterior analysis you need. Scripts can be written for the shell, R, or Python.
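As a toy sketch in Python (the function, data, and seed value here are invented for illustration and are not part of Stan or its interfaces), such a script might take an explicit seed, use a private random number generator rather than global state, and record the seed alongside the result so the run can be replayed:

```python
import random
import time

def run_analysis(data, seed=None):
    """Toy stand-in for running data through a model.  Everything the run
    depends on is passed in or recorded; nothing is read from global state."""
    if seed is None:
        seed = int(time.time())      # time-based fallback seed
    rng = random.Random(seed)        # private generator, reproducible by seed
    noisy_mean = sum(data) / len(data) + rng.gauss(0, 0.01)
    # Record the seed with the result so the analysis can be reproduced later.
    return {"seed": seed, "estimate": noisy_mean}

result = run_analysis([1.0, 2.0, 3.0], seed=20130622)
replay = run_analysis([1.0, 2.0, 3.0], seed=result["seed"])
```

Because the seed travels with the output, rerunning the script with the recorded seed yields an identical result.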
Whatever language a script is in, it should be self-contained and not depend on global variables having been set, other data being read in, etc. See \refchapter{reproducibility} for complete information on reproducibility in Stan and its interfaces. \subsection{Scripts are Good Documentation} It may seem like overkill if running the project is only a single line of code, but the script provides not only a way to run the code, but also a form of concrete documentation for what is run. \subsection{Randomization and Saving Seeds} Randomness defeats reproducibility. MCMC methods are conceptually randomized. Stan's samplers involve random initializations as well as randomization during each iteration (e.g., Hamiltonian Monte Carlo generates a random momentum in each iteration). Computers are deterministic. There is no real randomness, just pseudo-random number generators. These operate by generating a sequence of random numbers based on a ``seed.'' Stan (and other languages like R) can use time-based methods to generate a seed based on the time and date, or seeds can be provided to Stan (or R) in the form of integers. Stan writes out the seed used to generate the data as well as the version number of the Stan software so that results can be reproduced at a later date.% % \footnote{This also requires fixing compilers and hardware, because floating-point arithmetic does not have an absolutely fixed behavior across operating systems, hardware configurations, or compilers.} \section{Make it Readable} Treating programs and scripts like other forms of writing for an audience provides an important perspective on how the code will be used. Not only might others want to read a program or model, the developer will want to read it later. One of the motivations of Stan's design was to make models self-documenting in terms of variable usage (e.g., data versus parameter), types (e.g., covariance matrix vs. unconstrained matrix) and sizes. A large part of readability is consistency.
This holds particularly for naming and layout, not only of programs themselves, but also of the directories and files in which they're stored. Readability of code is not just about comments---it is also about naming and organization for readability. It is surprising how often the solution to a debugging or design problem occurs when trying to explain enough about the problem to someone else to get help. This can be on a mailing list, but it works best person-to-person. Finding the solution to your own problem when explaining it to someone else happens so frequently in software development that the listener is called a ``rubber ducky,'' because they only have to nod along.% % \footnote{Research has shown an actual rubber ducky won't work. For some reason, the rubber ducky must actually be capable of understanding the explanation.} \section{Explore the Data} Although this should go without saying, don't just fit data blindly. Look at the data you actually have to understand its properties. If you're doing a logistic regression, is it separable? If you're building a multilevel model, do the basic outcomes vary by level? If you're fitting a linear regression, see whether such a model makes sense by scatterplotting $x$ vs. $y$. \section{Design Top-Down, Code Bottom-Up} Software projects are almost always designed top-down from one or more intended use cases. Good software coding, on the other hand, is typically done bottom-up. The motivation for top-down design is obvious. The motivation for bottom-up development is that it is much easier to develop software using components that have been thoroughly tested. Although Stan has no built-in support for either modularity or testing, many of the same principles apply. The way the developers of Stan themselves build models is to start as simply as possible, then build up. This is true even if we have a complicated model in mind as the end goal, and even if we have a very good idea of the model we eventually want to fit.
Rather than building a hierarchical model with multiple interactions, covariance priors, or other complicated structure, start simple. Build just a simple regression with fixed (and fairly tight) priors. Then add interactions or additional levels. One at a time. Make sure that these do the right thing. Then expand.

\section{Fit Simulated Data}

One of the best ways to make sure your model is doing the right thing computationally is to generate simulated (i.e., ``fake'') data with known parameter values, then see if the model can recover these parameters from the data. If not, there is very little hope that it will do the right thing with data from the wild.

There are fancier ways to do this, where you can do things like run $\chi^2$ tests on marginal statistics or follow the paradigm introduced in \citep{CookGelmanRubin:2006}, which involves interval tests.

\section{Debug by Print}

Although Stan does not have a stepwise debugger or any unit testing framework in place, it does support the time-honored tradition of debug-by-printf.%
%
\footnote{The ``f'' is not a typo --- it's a historical artifact of the name of the \code{printf} function used for formatted printing in C.}

Stan supports print statements with one or more string or expression arguments. Because Stan is an imperative language, variables can have different values at different points in the execution of a program. Print statements can be invaluable for debugging, especially for a language like Stan with no stepwise debugger.

For instance, to print the value of variables \code{y} and \code{z}, use the following statement.
%
\begin{stancode}
print("y=", y, " z=", z);
\end{stancode}
%
This print statement prints the string ``y='' followed by the value of \code{y}, followed by the string `` z='' (with the leading space), followed by the value of the variable \code{z}. Each print statement is followed by a new line. The specific ASCII character(s) generated to create a new line are platform specific.
Arbitrary expressions can be used. For example, the statement
%
\begin{stancode}
print("1+1=", 1+1);
\end{stancode}
%
will print ``1+1=2'' followed by a new line.

Print statements may be used anywhere other statements may be used, but their behavior in terms of frequency depends on how often the block they are in is evaluated.

\section{Comments}\label{comments-programming.section}

\subsection{Code Never Lies}

The machine does what the code says, not what the documentation says. Documentation, on the other hand, might not match the code. Code documentation easily rots as the code evolves if the documentation is not well maintained.

Thus it is always preferable to write readable code as opposed to documenting unreadable code. Every time you write a piece of documentation, ask yourself if there's a way to write the code in such a way as to make the documentation unnecessary.

\subsection{Comment Styles in Stan}

Stan supports \Cpp-style comments with \code{//} for line comments and \code{/*} and \code{*/} as block comment wrappers. The recommended style is to use line-based comments for short comments on the code or to comment out one or more lines of code. Bracketed comments are then reserved for long documentation comments. The reason for this convention is that bracketed comments cannot be nested inside other bracketed comments.

\subsection{What Not to Comment}

When commenting code, it is usually safe to assume that you are writing the comments for other programmers who understand the basics of the programming language in use. In other words, don't comment the obvious. For instance, there is no need to have comments such as the following, which add nothing to the code.
%
\begin{stancode}
y ~ normal(0, 1);  // y has a standard normal distribution
\end{stancode}
%
A Jacobian adjustment for a hand-coded transform might be worth commenting, as in the following example.
%
\begin{stancode}
exp(y) ~ normal(0, 1);
// adjust for change of vars: y = log | d/dy exp(y) |
target += y;
\end{stancode}
%
It's an art form to empathize with a future code reader and decide what they will or won't know (or remember) about statistics and Stan.

\subsection{What to Comment}

It can help to document variable declarations if variables are given generic names like \code{N}, \code{mu}, and \code{sigma}. For example, some data variable declarations in an item-response model might be usefully commented as follows.
%
\begin{stancode}
int<lower=1> N;  // number of observations
int<lower=1> I;  // number of students
int<lower=1> J;  // number of test questions
\end{stancode}
%
The alternative is to use longer names that do not require comments.
%
\begin{stancode}
int<lower=1> n_obs;
int<lower=1> n_students;
int<lower=1> n_questions;
\end{stancode}
%
Both styles are reasonable and which one to adopt is mostly a matter of taste (mostly because sometimes models come with their own naming conventions which should be followed so as not to confuse readers of the code familiar with the statistical conventions).

Some code authors like big blocks of comments at the top explaining the purpose of the model, who wrote it, copyright and licensing information, and so on. The following bracketed comment is an example of a conventional style for large comment blocks.
%
\begin{stancode}
/*
 * Item-Response Theory PL3 Model
 * -----------------------------------------------------
 * Copyright: Joe Schmoe <[email protected]>
 * Date: 19 September 2012
 * License: GPLv3
 */
data {
  // ...
\end{stancode}
%
The use of leading asterisks helps readers understand the scope of the comment. The problem with including dates or other volatile information in comments is that they can easily get out of synch with the reality of the code. A misleading comment or one that is wrong is worse than no comment at all!
\section{\module{quopri} --- Encode and decode MIME quoted-printable data} \declaremodule{standard}{quopri} \modulesynopsis{Encode and decode files using the MIME quoted-printable encoding.} This module performs quoted-printable transport encoding and decoding, as defined in \rfc{1521}: ``MIME (Multipurpose Internet Mail Extensions) Part One''. The quoted-printable encoding is designed for data where there are relatively few nonprintable characters; the base64 encoding scheme available via the \refmodule{base64} module is more compact if there are many such characters, as when sending a graphics file. \indexii{quoted-printable}{encoding} \index{MIME!quoted-printable encoding} \begin{funcdesc}{decode}{input, output} Decode the contents of the \var{input} file and write the resulting decoded binary data to the \var{output} file. \var{input} and \var{output} must either be file objects or objects that mimic the file object interface. \var{input} will be read until \code{\var{input}.read()} returns an empty string. \end{funcdesc} \begin{funcdesc}{encode}{input, output, quotetabs} Encode the contents of the \var{input} file and write the resulting quoted-printable data to the \var{output} file. \var{input} and \var{output} must either be file objects or objects that mimic the file object interface. \var{input} will be read until \code{\var{input}.read()} returns an empty string. \end{funcdesc} \begin{seealso} \seemodule{mimify}{General utilities for processing of MIME messages.} \end{seealso}
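As a quick illustration of the interface, the following sketch runs under a modern Python 3 interpreter (where the module still exists, but the file arguments must be binary file objects); the in-memory \texttt{io.BytesIO} buffers stand in for real files.

```python
import io
import quopri

# Encode: the non-ASCII byte 0xE9 (e-acute in Latin-1) becomes an =XX escape.
plain = io.BytesIO(b"caf\xe9 au lait\n")
encoded = io.BytesIO()
quopri.encode(plain, encoded, quotetabs=False)
assert b"=E9" in encoded.getvalue()

# Decode: reading the encoded stream back recovers the original bytes.
encoded.seek(0)
decoded = io.BytesIO()
quopri.decode(encoded, decoded)
assert decoded.getvalue() == b"caf\xe9 au lait\n"
```

Under the Python version documented here, the same calls accept ordinary text-mode file objects, as described above.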
%!TEX root = ../CombinatoricsNotes.tex

\section{The Littlewood-Offord problem}
\lect{1}{18}
% \marginnote{Lecture 4: Monday, January 18, 2016.}

\begin{problem}[\cite{LO1943}]
Given $(z_1,\dotsc,z_n)$ complex numbers, with $|z_i| \geq 1$. Consider the $2^n$ possible sums formed by the $z_i$'s. How many sums can have pairwise distances less than 1 from each other?

If $z_1 = z_2 = \dotsc =z_n=1$, then we get ${n \choose \floor{n/2}}$ equal sums.

Given $(z_1,\dotsc,z_n)$ and a subset of indices $A\subset [n]$, let $Z_A = \sum_{i\in A} z_i$, where we define $Z_\emptyset = 0$ and $Z_{\{i\}} = z_i$. We say $\F \subset \P([n])$ is \defn{Littlewood-Offord}[Littlewood-Offord!sets of numbers] (LO) with respect to $(z_1,\dotsc,z_n)$ if $|Z_A- Z_B| < 1$ for all $A,B \in \F$. In this notation, we are looking for the largest collection $\F$ such that $\F$ is LO with respect to $(z_1,\dotsc,z_n)$.
\end{problem}

\begin{conjecture*}
If $(z_1,\dotsc,z_n) \in \C^n$ with $|z_i|\geq 1$, and $\F$ is LO with respect to $(z_1,\dotsc,z_n)$, then $|\F| \leq {n \choose \floor{n/2}}$.
\end{conjecture*}

\begin{remark}
Littlewood and Offord were not combinatorialists; instead they were interested in studying the roots of random polynomials.
\end{remark}

\begin{theorem}[\cite{erdos1945}]
If $x_1,\dotsc,x_n$ are real, $|x_i| \geq 1$ for every $i$, and $\F$ is LO with respect to $(x_1,\dotsc,x_n)$, then $|\F| \leq { n\choose \floor{n/2}}$.
\end{theorem}

\begin{proof}
We'll change notation to $X_A$ instead of $Z_A$ for this real case. If $x_1,\dotsc,x_n \geq 1$, then $\F$ is Sperner. Suppose $A\subsetneq B$; then
\[ |X_A - X_B| = X_B - X_A = X_{B\setminus A} \geq |B\setminus A| \geq 1. \qquad \checkmark \]
The general problem for reals can be reduced to this case of $x_i>0$ as follows. Suppose that $\F$ is LO with respect to $(x_1,\dotsc,x_n)$. Then we will construct a family $\F'$ which is LO with respect to \\$(x_1,\dotsc, x_{i-1},-x_i, x_{i+1}, \dotsc, x_n)$ with $|\F'| = |\F|$.
This will suffice to prove the theorem.

Let $\F' = \{ A \symd \{i\}: A\in \F\}$\sidenote[][]{We remove $i$ if it's there, and add it otherwise.}. Clearly $|\F'| = |\F|$. We've reduced all sums by $x_i$, as shown by the following. If $i \in A$, then $X'_{A \symd \{i\}} = \left(\sum_{j\in A} x_j \right)-x_i$. If $i \not \in B$, then $X'_{B \symd \{i\}} = \left( \sum_{j\in B} x_j \right) - x_i$. Hence, all sums are still within $1$ of each other, completing the proof.
\end{proof}

\begin{definition}
A chain $A_1 \subset A_2 \subset \dotsc \subset A_k$ in $\P([n])$ is \defn{symmetric}[symmetric!chain] if $|A_{i+1}| = |A_i| +1$ for $i=1,\dotsc,k-1$. Additionally, we require $|A_1| + |A_k| = n$.
\end{definition}

Note that in particular, a symmetric chain intersects $[n]^{(\floor{n/2})}$. An example of symmetric chains is shown in \cref{fig:sym_chains}.

\begin{figure}[ht]
\begin{center}
\begin{tikzcd}[column sep=tiny]
{[3]}^{(3)} & &\{1,2,3\}\arrow[green,dash]{ld} \\
{[3]}^{(2)} &\{1,2\} \arrow[dash,green]{d}& \{1,3\} \arrow[dash,blue]{rd} & \arrow[dash,red]{ld} \{2,3\} \\
\mathbf{[3]^{(1)}} & \mathbf{\{1\}} \arrow[dash,green]{rd} & \mathbf{\{2\}}&\mathbf{\{3\}}\\
{[3]}^{(0)} & & \emptyset
\end{tikzcd}
\begin{tikzcd}[column sep=tiny]
&\{1,2,3\}\arrow[red,dash]{rd} \\
\{1,2\} \arrow[dash,green]{d}& \{1,3\} \arrow[dash,blue]{rd} & \arrow[dash,red]{ld} \{2,3\} \\
\mathbf{\{1\}} \arrow[dash,green]{rd} & \mathbf{\{2\}}&\mathbf{\{3\}}\\
& \emptyset
\end{tikzcd}
\end{center}
\caption{\textit{Left:} We draw our favorite $\P([n])$, partitioned into symmetric chains, which thus each intersect $[n]^{(\floor{n/2})}$. \textit{Right:} A partition of $\P([n])$ into chains which intersect $[n]^{(\floor{n/2})}$, which we could obtain from the proof of \cref{thm:sperner}. But these chains are not symmetric.}
\label{fig:sym_chains}
\end{figure}

Our previous method, when proving Sperner's theorem, didn't actually guarantee symmetric chains; see \cref{fig:sym_chains}.
We will do this now, which provides a new proof of Sperner's theorem.
\pagebreak

\begin{theorem}
\label{thm:sym_part}
$\P([n])$ can be partitioned into symmetric chains.
\end{theorem}

\begin{proof}[Proof by induction on $n$.]
The base case $n=1$ is immediate.

Induction step: Let $C_1,C_2,\dotsc,C_L$ be symmetric chains forming a partition of $\P([n-1])$. Consider $C_i = (A_1,A_2,\dotsc,A_k)$. Form
\begin{align*}
C_i' &= (A_1,A_2,\dotsc,A_k,A_k\cup \{n\})\\
C_i'' &= (A_1 \cup \{n\}, A_2 \cup \{n\}, \dotsc, A_{k-1} \cup \{n\}).
\end{align*}
% \improvement{Add picture of $n=3$ of $C_i'$ and $C_i''$.}
\begin{figure}
\begin{center}
\begin{tikzcd}[column sep=tiny]
% {[n]}^{(3)} & &\{1,2,3\}\arrow[green,dash]{ld} \\
\phantom{{[n]}^{(3)}}& & \\
{[2]}^{(2)} &\{1,2\} \arrow[dash,OliveGreen]{d}& \\
\mathbf{[2]^{(1)}} & \mathbf{\{1\}} \arrow[dash,OliveGreen]{rd} & \mathbf{\{2\}}&\\
{[2]}^{(0)} & & \emptyset
\end{tikzcd}\hfill
\begin{tikzcd}[column sep=tiny]
{[3]}^{(3)} & &\{1,2,3\}\arrow[DarkGreen,dash]{ld} \\
{[3]}^{(2)} &\{1,2\} \arrow[dash,DarkGreen]{d}& \{1,3\} \arrow[dash,LightGreen]{rd} & \arrow[dash,red]{ld} \{2,3\} \\
\mathbf{[3]^{(1)}} & \mathbf{\{1\}} \arrow[dash,DarkGreen]{rd} & \mathbf{\{2\}}&\mathbf{\{3\}}\\
{[3]}^{(0)} & & \emptyset
\end{tikzcd}
\end{center}
\caption{An illustration of the induction step, from $n=2$ on the left to $n=3$ on the right. On the left, two chains: $C_1 = (\emptyset, \{1\}, \{1,2\})$ in green, and $C_2 = (\{2\})$. We create $C_1' = (\emptyset, \{1\},\{1,2\},\{1,2,3\})$ in dark green, and $C_1'' = (\{3\}, \{1,3\})$ in light green. We also create $C_2' = (\{2\},\{2,3\})$ in red, and $C_2'' = ()$, an empty chain.%
}%
\end{figure}

Given that $C_i$ is a symmetric chain on $\P([n-1])$, it is easy to see that $C_i'$ and $C_i''$ are both symmetric chains on $\P([n])$.
Moreover,
\[ \{C_1', \dotsc, C_L'\}\cup \{C_1'', \dotsc, C_L''\} \]
forms a partition of $\P([n])$, as follows: For any set $A\in \P([n])$, if $n\not \in A$, then $A\in C_i$ for some $i$, and hence $A\in C_i'$. If $n\in A$, then $A\setminus \{n\} \in C_i = (A_1,A_2,\dotsc,A_k)$ for some $i$, so $A\setminus \{n\} = A_j$ for some $j$. If $j=k$, then $A\in C_i'$, otherwise $A\in C_i''$. In any case, if $A\in \P([n])$, $A$ is in one of these chains. Finally, $A$ cannot be in two chains, otherwise we have that $\{C_i\}$ was not a partition of $\P([n-1])$.
\end{proof}

Any partition of $\P([n])$ into symmetric chains consists of ${n\choose \floor{n/2}}$ chains, because each chain contains exactly one set in $[n]^{(\floor{n/2})}$. In the proof, we took a partition into ${n-1 \choose \floor{(n-1)/2}}$ chains, and seem to have created $2 {n-1 \choose \floor{(n-1)/2}} \neq {n\choose \floor{n/2}}$ chains. The catch is that $C_i''$ is an empty chain if $k=1$.

\newthought{Let us count} the sizes of symmetric chains. Suppose $C_1,\dotsc,C_L$ is a partition of $\P([n])$ into symmetric chains, where $L = {n\choose \floor{n/2}}$.

How many chains are there of length $r=n+1$ in the partition? Exactly one: the chain which contains the empty set, and must therefore also contain the set of all $n$ elements. What about $r=n$? Zero. What about $r= n-1$? There are $n-1$ such chains, because each must start at a $1$-element set and end at an $(n-1)$-element set. There are $n$ $1$-element sets, but one is already in the maximal chain, so we are left with $n-1$.

What about $r= (n+1)-2i$? These are the symmetric chains which start at the $i$th level $[n]^{(i)}$ and go to the $(n-i)$th level $[n]^{(n-i)}$. The result is ${n\choose i} - {n\choose i-1}$. That is because there are ${n\choose i}$ elements in $[n]^{(i)}$, but ${n\choose i-1}$ of them are part of longer chains, the number of which is the number of elements in the level $[n]^{(i-1)}$.
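The inductive construction above is explicit enough to check mechanically. The following sketch (written in Python, purely as an aside to these notes) builds the partition of $\P([n])$ via the $C_i'$, $C_i''$ rule and verifies both the total chain count and the counts by length just derived.

```python
from collections import Counter
from math import comb

def symmetric_chains(n):
    """Partition P([n]) into symmetric chains by the inductive C', C'' rule."""
    chains = [[frozenset(), frozenset({1})]]  # base case n = 1
    for m in range(2, n + 1):
        new = []
        for c in chains:
            # C' extends the chain by adjoining m to its top set.
            new.append(c + [c[-1] | {m}])
            # C'' adjoins m to every set except the top; drop it if empty.
            cpp = [a | {m} for a in c[:-1]]
            if cpp:
                new.append(cpp)
        chains = new
    return chains

n = 6
chains = symmetric_chains(n)
assert len(chains) == comb(n, n // 2)         # number of chains
assert sum(len(c) for c in chains) == 2 ** n  # together they cover P([n])
for c in chains:
    sizes = [len(a) for a in c]
    assert sizes == list(range(sizes[0], sizes[-1] + 1))  # consecutive levels
    assert sizes[0] + sizes[-1] == n                      # symmetric
# Exactly C(n,i) - C(n,i-1) chains of length (n+1) - 2i.
lengths = Counter(len(c) for c in chains)
for i in range(n // 2 + 1):
    expected = comb(n, i) - (comb(n, i - 1) if i > 0 else 0)
    assert lengths.get(n + 1 - 2 * i, 0) == expected
```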
Let us formulate analogues of these ideas to solve the Littlewood-Offord problem in $\R^d$.

\begin{theorem}[\cite{KLEITMAN1970}]
\label{thm:kleit70}
Let $(x_1,\dotsc,x_n)$ be vectors in $\R^d$, with $\|x_i\| \geq 1$. As before, define for $A\subset [n]$, $X_A = \sum_{i\in A} x_i$, and say that $\F\subset \P([n])$ is \defn{LO}[Littlewood-Offord!sets of vectors] with respect to $(x_1,\dotsc,x_n)$ if for each $A,B\in \F$, we have $\|X_A - X_B\| < 1$. Let $\F$ be LO with respect to $(x_1,\dotsc,x_n)$. Then $|\F| \leq { n\choose \floor{n/2}}$.
\end{theorem}

\begin{proof}
We will say $\Cset\subset \P([n])$ is \defn{sparse}[sparse!set of indices of vectors] (with respect to $(x_1,\dotsc,x_n)$) if $\|X_A-X_B\| \geq 1$ for all $A,B\in \Cset$ with $A\neq B$. It is enough to show that $\P([n])$ can be partitioned into ${n\choose \floor{n/2}}$ sparse sets\sidenote{An LO family may contain at most one element of each sparse set.}.

We say that a partition $C_1,\dotsc,C_L$ of $\P([n])$ is \defn{symmetric}[symmetric!partition] if it contains exactly ${n\choose i} - {n\choose i-1}$ ``chains'' of order $(n+1)-2i$ for $i=0,1,\dotsc,\floor{n/2}$, and no chains with sizes not congruent to $n+1 \pmod 2$. \marginnote{Here, ``chain'' simply means a set $C_i$ for some $i\in [L]$.} In particular, $L= {n\choose \floor{n/2}}$.

We will show that $\P([n])$ has a symmetric partition into sparse sets. Then we will have found $L = {n\choose \floor{n/2}}$ sparse sets, completing the proof.

Let's proceed by induction on $n$. For $n=1$, we have $X_\emptyset = 0$, and $X_{\{1\}} = x_1$. Then $\{\emptyset, \{1\}\}$ is sparse because $\|X_\emptyset - X_{\{1\}}\| = \|x_1\| \geq 1$.

Induction step: Let $C_1,C_2,\dotsc, C_L$ be a symmetric partition of $\P([n-1])$ into sets sparse with respect to $(x_1,\dotsc,x_{n-1})$. Let $C_i = \{A_1,\dotsc,A_k\}$.
We'd like to form sparse sets
\begin{align*}
C_i' &= \{A_1,A_2,\dotsc,A_k,A_k \cup \{n\}\}\\
C_i'' &= \{A_1\cup \{n\}, A_2\cup\{n\},\dotsc,A_{k-1}\cup \{n\}\}.
\end{align*}
Our sets $A_1,\dotsc,A_k$ do not have an ordering this time, and we may choose any set to be $A_k$. We will have to use this freedom; consider translating the sums in some chain $C_i = (A_1,\dotsc,A_k)$ by adding the vector $x_n$. We want $C_i'$ to be sparse, but then we need $X_{A_k\cup\{n\}}$ far from each $X_{A_j}$, which need not happen in general, as illustrated in \cref{fig:LO_proof_prob}.
\begin{marginfigure}[-3cm]
\input{graphics/figLOproofv2}
% \includegraphics[scale=.5]{graphics/LOproof.pdf}
\caption{With the choice of $A_k$ shown here, we have that $X_{A_k \cup \{n\}} = X_{A_k} + x_n$ (in red) is near to $X_{A_1}$ (in blue), so $C_i'$ is not a sparse chain, even though $C_i$ is a sparse chain (all the start points of the arrows are far from each other).}
\label{fig:LO_proof_prob}
\end{marginfigure}
% \improvement{Draw better figure, show that the distance is less than 1 in that one spot.}

Assume WLOG that $x_n = (\alpha,0,0,\dotsc,0)$ only has non-zero first coordinate with $\alpha>0$. Let $p(v)$ denote the first coordinate of $v$. Then $p$ is a linear function, and $p(v) \leq \|v\|$ for each $v$. Let $A_k$ in the construction above be chosen so that $p(X_{A_k})\geq p(X_{A_j})$ for each $j$\sidenote{So we are choosing $A_k$ to be the furthest in the direction $x_n$. Then if we add $x_n$, we are only moving it further away from the others, so the conflict in \cref{fig:LO_proof_prob} doesn't occur.}. Then to show that $C_i'$ is sparse, it suffices to see that
\begin{align*}
\|X_{A_k \cup\{n\}} - X_{A_j}\| &\geq p( X_{A_k \cup \{n\}} - X_{A_j})\\
&= p(X_{A_k}) + p(x_n) - p(X_{A_j})\geq p(x_n)\geq 1.
\end{align*}
Hence, we follow the same procedure as in \cref{thm:sym_part} to conclude that indeed we did produce a symmetric partition.
% \improvement{Add more words to that proof, 2.2.}
\end{proof}

\lect{1}{20}
% \marginnote{Lecture 5: Wednesday, January 20, 2016.}

\newthought{Let us now consider} $\Z_p:=\Z/p\Z$, the integers modulo a prime $p$. Let $x_1,\dotsc,x_n\in \Z_p\setminus\{0\}$. We will say $\Cset \subset \P([n])$ is \defn{sparse}[sparse!set of indices of integers modulo $p$] with respect to $x_1,\dotsc,x_n$ if $X_A\neq X_B$ for all $A,B\in \Cset$ with $A\neq B$. Let us define
\[ \Sigma(z):= | \{A\in \P([n]): X_A=z\}|. \]
We are interested in $\max \Sigma(z)$. First, if $x_1=\dotsb=x_n=1$, then we see that we may achieve ${n \choose \floor{n/2}}$. On the other hand, since there are $2^n$ possible sums and only $p$ values, we have by the pigeonhole principle that $\max \Sigma(z) \geq 2^n/p$. To determine $\max \Sigma(z)$ we will use a partition into sparse sets, just as in the proof of \cref{thm:kleit70}.

\begin{theorem}
Given $x_1,\dotsc,x_n\in \Z_p\setminus\{0\}$ for prime $p$, with $n\leq p-1$, there exists a symmetric partition of $\P([n])$ into sparse sets with respect to $x_1,\dotsc,x_n$.
\end{theorem}

\begin{proof}
By induction on $n$. If $\Cset = \{A_1,\dotsc, A_k\}$ is sparse with respect to $x_1,\dotsc,x_{n-1}$, we want to show that
\begin{align*}
\Cset' &= \{A_1,\dotsc, A_k, A_{\ell} \cup \{n\}\},\\
\Cset'' &= \{A_1\cup \{n\}, A_2\cup\{n\}, \dotsc, A_k \cup \{n\}\}\setminus \{A_\ell \cup \{n\}\}
\end{align*}
are sparse for some $\ell$. Set $Y_i = X_{A_i}$ and consider the sums $Y_1,\dotsc,Y_k$. We wish to find $1\leq \ell\leq k$ such that $Y_\ell+x_n \not\in \{Y_1,\dotsc,Y_k\}$. Note that $k\leq (n-1)+1 \leq p-1$.

Suppose there is no such $\ell$. Then, WLOG, $Y_1 + x_n = Y_2$, $Y_2 + x_n = Y_3$, \ldots, $Y_j + x_n = Y_1$ for some $j$. Then $Y_1 + j x_n = Y_1$, so $jx_n=0$ in $\Z_p$. Since $p$ is prime and $x_n \neq 0$, this forces $p \mid j$, which is a contradiction, since $1 \leq j\leq k\leq p-1$.
In other words, if $A\subset \Gamma$ for some finite group $\Gamma$ and $x+A \subset A$ for some element $x\in \Gamma$, then $A$ is a union of cosets of the cyclic subgroup of $\Gamma$ generated by $x$.
\end{proof}

\begin{corollary}
Let $x_1,\dotsc,x_n\in \Z_p\setminus\{0\}$.
~\begin{enumerate}
\item If $n\leq p-1$, then $\Sigma(z) \leq {n \choose \floor{n/2}}$.
\item (Cauchy-Davenport). Let $S(x_1,\dotsc,x_n) = \{ X_A: A\in \P([n])\}$. Then $|S(x_1,\dotsc,x_n)| \geq \min\{ p,n+1\}$.
\end{enumerate}
\end{corollary}

\begin{proof}
~
\begin{enumerate}
\item A symmetric partition has exactly ${n\choose \floor{n/2}}$ parts, and within each part there is at most one set with corresponding sum $z$.
\item If $n\leq p-1$, then we have a symmetric partition, which contains a sparse set of size $n+1$, and the sums of its members are $n+1$ distinct values. If $n\geq p$, then $S(x_1,\dotsc,x_n) = \Z_p$: applying the previous case to any $p-1$ of the elements already yields $p$ distinct sums, i.e., all of $\Z_p$.\qedhere
\end{enumerate}
\end{proof}
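Both parts of the corollary are small enough to confirm by brute force. The sketch below (Python, purely as an illustration outside these notes; the choices of $p$ and of the $x_i$ are arbitrary) enumerates all $2^n$ subset sums modulo $p$ and checks both conclusions.

```python
from collections import Counter
from itertools import combinations
from math import comb

def subset_sums_mod_p(xs, p):
    """Count how many index sets A give each subset sum X_A mod p."""
    sums = Counter()
    n = len(xs)
    for r in range(n + 1):
        for A in combinations(range(n), r):
            sums[sum(xs[i] for i in A) % p] += 1
    return sums

p = 7
for xs in [[1, 1, 1], [1, 2, 3], [3, 5, 6, 2], [4, 4, 4, 4, 4, 4]]:
    assert all(x % p != 0 for x in xs) and len(xs) <= p - 1
    n = len(xs)
    sigma = subset_sums_mod_p(xs, p)
    # Part 1: max Sigma(z) <= C(n, floor(n/2)).
    assert max(sigma.values()) <= comb(n, n // 2)
    # Part 2 (Cauchy-Davenport-type bound): |S| >= min(p, n + 1).
    assert len(sigma) >= min(p, n + 1)
```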
\chapter{Evaluation of Results}
\label{ch:results-evaluation}

\section{Methodology}

In order to evaluate the proposed solutions to the two questions posed in \cref{ch:intro}, the following methodologies have been used.

\begin{enumerate}
\item To evaluate how error detection has been improved, the number of actual errors that each existing tool detects in a schema is compared against the number detected by the proposed solution.
\item To evaluate how error reporting has been improved, the number of elements that make up the error messages of each existing tool and of our solution is compared. In addition, a survey is carried out among users familiar with the existing tools.
\item To evaluate to what extent shapes can be translated to domain object models, we collect all the shapes available on GitHub, reduce the set to those that fit the micro compact syntax, and try to generate objects for those that are syntactically and semantically valid. In this way we can approximate what percentage can be translated.
\end{enumerate}

\section{Datasets}

To test the above methodologies we use two types of datasets: real and synthetic. For the real dataset, since we are not aware of any published collection of shapes, we use the Google BigQuery service to download all open-source-licensed files with the \texttt{.shex} extension that existed on GitHub as of March 17, 2019. For the synthetic tests, schemas are designed to contain the errors described, taking the previous work into account.

\section{Results}

After evaluating error detection with this methodology on the synthetic dataset, the results in \cref{tb:errors-results} are obtained. From the table we derive \cref{fig:results-chart}, where row 13 has been removed so that the differences between the remaining rows are visible at scale.

\begin{table}
\centering
\caption[Results produced for synthetic tests 1-13]{Unit results of detection of syntactic (syn.), semantic (sema.) and warning (warn.) errors produced for synthetic tests 1-13.
In addition, the last row includes the aggregate sum of each column. Each test corresponds to a synthetic shape expression that contains errors or warnings; the type and number of errors in each synthetic shape match the expected values in the table.}
\label{tb:errors-results}
\resizebox{\textwidth}{!}{\begin{tabular}{c|ccc|ccc|ccc|ccc|ccc}
\hline
\multirow{2}{*}{\rotatebox[origin=c]{90}{Test}} & \multicolumn{3}{c|}{Expected} & \multicolumn{3}{c|}{ShEx-Lite} & \multicolumn{3}{c|}{rdfshape} & \multicolumn{3}{c|}{Shaclex} & \multicolumn{3}{c}{ShEx.js} \\ \cline{2-16}
 & syn. & sema. & warn. & syn. & sema. & warn. & syn. & sema. & warn. & syn. & sema. & warn. & syn. & sema. & warn. \\ \hline
\multicolumn{1}{c|}{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\multicolumn{1}{c|}{2} & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\multicolumn{1}{c|}{3} & 0 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\multicolumn{1}{c|}{4} & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
\multicolumn{1}{c|}{5} & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 2 & 0 \\
\multicolumn{1}{c|}{6} & 0 & 3 & 0 & 0 & 3 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 2 & 0 \\
\multicolumn{1}{c|}{7} & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
\multicolumn{1}{c|}{8} & 2 & 0 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
\multicolumn{1}{c|}{9} & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\
\multicolumn{1}{c|}{10} & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
\multicolumn{1}{c|}{11} & 0 & 2 & 1 & 0 & 2 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 2 & 1 \\
\multicolumn{1}{c|}{12} & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
\multicolumn{1}{c|}{13} & 0 & 200 & 200 & 0 & 200 & 200 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 200 & 0 \\ \hline
\multicolumn{1}{c|}{Aggregated} & 5 & 210 & 207 & 5 & 209 & 205 & 4 & 6 & 1 & 4 & 6 & 1 & 4 & 208 & 1
\end{tabular}}
\end{table}

\begin{figure}
\includegraphics[width=\textwidth]{images/results-chart.png}
\centering
\caption[Bar chart for \cref{tb:errors-results}]{Bar chart for \cref{tb:errors-results}. Row 13 has been removed so that the remaining results stay on a comparable scale.}
\label{fig:results-chart}
\end{figure}

From the results obtained and displayed, it can be seen that ShEx-Lite and ShEx.js are the only systems that detect multiple errors at the same time, so systems like RDFShape, based on Shaclex, or YASHE could benefit from the procedures proposed in this work.

Another important observation is that when a schema contains a syntactic error, no system is able to report semantic errors or warnings. This is expected: if the syntactic analysis phase does not complete, semantic analysis cannot be performed at all.

Regarding the second point of our methodology, \cref{tb:errors-q-results} shows the results of evaluating the error messages (\cref{fig:err-non-def-pref-2,fig:err-non-def-pref-3,fig:err-non-def-pref-4}) produced by the different systems against the good practices defined in \cite{heeren2005top}.
\begin{figure}
\begin{lstlisting}[numbers=left,basicstyle=\ttfamily\scriptsize]
error[E007]: prefix not defined
 --> shape_with_error_cause_pref_not_defined.shex:17:3
   |
 17|  non_existing:label xsd:string +;
   |  ^ the prefix `non_existing` has not been defined
\end{lstlisting}
\caption[Semantic error produced by ShEx-Lite for an undefined prefix]{Semantic error produced by ShEx-Lite for an undefined prefix.}
\label{fig:err-non-def-pref-2}
\end{figure}

\begin{figure}
\begin{lstlisting}[numbers=left,basicstyle=\ttfamily\scriptsize]
Prefix `non_existing` is not defined
\end{lstlisting}
\caption[Semantic error produced by RDFShape and YASHE for an undefined prefix]{Semantic error produced by RDFShape and YASHE for an undefined prefix.}
\label{fig:err-non-def-pref-3}
\end{figure}

\begin{figure}[t]
\begin{lstlisting}[numbers=left,basicstyle=\ttfamily\scriptsize]
error parsing input schema: Parse error; unknown prefix "non_existing"
\end{lstlisting}
\caption[Semantic error produced by RDFShape for an undefined prefix]{Semantic error produced by RDFShape for an undefined prefix.}
\label{fig:err-non-def-pref-4}
\end{figure}

\begin{table}
\centering
\caption[Comparison of information provided in error and warning messages]{Comparison of information provided in error and warning messages by ShEx-Lite, RDFShape (shaclex), YASHE and ShEx.js.
A $\times$ symbol indicates that the system provides that feature; a blank cell indicates that it does not.} \label{tb:errors-q-results} \begin{tabular}{ccccccc} \hline\hline System & \begin{tabular}[c]{@{}c@{}}Source\\ File\end{tabular} & \begin{tabular}[c]{@{}c@{}}Line\\ Number\end{tabular} & \begin{tabular}[c]{@{}c@{}}Column\\ Number\end{tabular} & \begin{tabular}[c]{@{}c@{}}Code\\ Snippet\end{tabular} & \begin{tabular}[c]{@{}c@{}}Message\\ Title\end{tabular} & \begin{tabular}[c]{@{}c@{}}Message\\ Description\end{tabular} \\ \hline ShEx-Lite & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ RDFShape & & $\times$ & $\times$ & & $\times$ & \\ YASHE & & $\times$ & $\times$ & & $\times$ & \\ ShEx.js & & & & & $\times$ & \end{tabular} \end{table} From the results, we conclude that all the existing tools, although they focus mainly on data validation with Shape Expressions, would benefit greatly from the methodologies described in this document. Regarding the translation of schemas to object models, executing the translation on the dataset obtained from GitHub yields the results shown in \cref{tb:real-datase-results}. \begin{table} \centering \caption[Values obtained after compiling all the elements of our real case dataset with the ShEx-Lite system]{Values obtained after compiling all the elements of our real case dataset with the ShEx-Lite system. The dataset contains a total of 1,612 files, of which only 19.4\% contain no errors and only 2.5\% contain neither errors nor warnings. Of the files without errors, we were able to convert 29.4\% to domain model objects.
Of those with neither errors nor warnings, we were able to translate almost 50\%.} \label{tb:real-datase-results} \begin{tabular}{l|c|cc} ShEx Files & 1,612 & 100\% & \\ \cline{1-3} Without Errors & 313 & 19.417\% & \\ \cline{1-3} Without Warnings & 41 & 2.543\% & \\ \cline{1-3} WE Transformed & 92 & \multicolumn{1}{c|}{5.707\%} & 29.393\% \\ \hline WW Transformed & 20 & \multicolumn{1}{c|}{1.241\%} & 48.780\% \end{tabular} \end{table} The first thing that strikes us in these values is the large number of shapes on GitHub that contain errors. This is explained by the fact that we only admit a subset of the syntax; we can therefore consider these 313 shapes to be the ones correctly expressed in ShEx micro Compact Syntax. However, of these 313 shapes in compact syntax, 86.9\% contain some unused resource that generates warnings. On the one hand, this indicates that no existing system warns about such issues, and that almost 90\% of Shape Expressions users would benefit from the analysis methods developed in this work. Furthermore, our system is capable of translating almost 30\% of the schemas that have no errors, and almost 50\% of the shapes that contain neither errors nor warnings. This tells us that the higher the quality of a shape, the more likely it is to be compatible with object-oriented languages.
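As a quick sanity check, the percentages in \cref{tb:real-datase-results} follow directly from the raw counts reported there (plain arithmetic, shown here in Python):

```python
# Raw counts from the real-case dataset table.
total          = 1612   # ShEx files in the dataset
no_errors      = 313    # files without errors
no_warnings    = 41     # files with neither errors nor warnings
we_transformed = 92     # error-free files translated to object models
ww_transformed = 20     # error- and warning-free files translated

def pct(part, whole):
    """Percentage rounded to three decimals, as in the table."""
    return round(part / whole * 100, 3)

print(pct(no_errors, total))                 # 19.417 %
print(pct(no_warnings, total))               # 2.543 %
print(pct(we_transformed, total))            # 5.707 %
print(pct(we_transformed, no_errors))        # 29.393 %
print(pct(ww_transformed, total))            # 1.241 %
print(pct(ww_transformed, no_warnings))      # 48.78 %
print(pct(no_errors - no_warnings, no_errors))  # ~86.9 % with warnings
```

The last line also recovers the 86.9\% of warning-generating shapes mentioned in the discussion.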
\documentclass[11pt]{article} \usepackage{setspace} \usepackage{pxfonts} \usepackage{graphicx} \usepackage{geometry} \geometry{letterpaper,left=.5in,right=.5in,top=0in,bottom=.75in,headsep=5pt,footskip=20pt} \title{Problem Set 3 -- Integrate-and-fire neuron model} \author{Computational Neuroscience Summer Program} \date{June, 2011} \begin{document} \maketitle In this problem set you will be building a simple integrate-and-fire neuron. You should assume a specific membrane capacitance of $c_m = 10$ nF/mm\textsuperscript{2}, a specific membrane resistance of $r_m = 1$ M$\Omega\cdot$mm\textsuperscript{2}, a resting membrane potential of $E = -70$ mV, a reset potential of $V_{reset} = -80$ mV, an action potential threshold of $V_{threshold} = -55$ mV, and a cell surface area of $A = 0.025$ mm\textsuperscript{2}. Write up your results in a text editor of your choosing. Include any relevant figures, your Matlab code, and any other calculations related to the problem set. You may work individually or in groups, but each student should hand in their own report. \subsection*{Equations} \begin{center} \begin{tabular}{ll} $C_m = A \cdot c_m$ & $R_m = \frac{r_m}{A}$\tabularnewline $\tau_m = c_m \cdot r_m$ & $V_\infty = E + R_mI_{ext}$\tabularnewline $V(t) = V_\infty + (V(0) - V_\infty)e^\frac{-t}{\tau_m}$ & $r_{isi} = (\tau_m \mathrm{ln}(\frac{R_mI_{ext} + E - V_{reset}}{R_mI_{ext} + E - V_{threshold}}))^{-1}$\tabularnewline $\tau_m\frac{dV}{dt} = E - V(t-1) + R_mI_{ext}$ & \tabularnewline \end{tabular} \end{center} \subsection*{Problems} \paragraph{1.} Model an integrate-and-fire neuron using the equations above and the following rule: when the neuron's membrane voltage exceeds $V_{threshold}$, set the voltage in that timestep to $V_{peak} = 40$ mV, and in the next timestep set the voltage to $V_{reset}$. Set $dt = 0.1$ ms. Apply a square pulse of 0.5 nA from $t = 250$ ms until $t = 750$ ms in your simulation. 
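As an illustration only (the assignment itself asks for Matlab code), the simulation loop described in Problem 1 can be sketched with forward-Euler integration; the parameter values below are the ones stated above, and the plotting is omitted:

```python
# Integrate-and-fire neuron, forward-Euler integration (SI units).
A     = 0.025            # cell surface area, mm^2
R_m   = 1e6 / A          # membrane resistance r_m / A = 40 MOhm
C_m   = 10e-9 * A        # membrane capacitance c_m * A = 0.25 nF
tau_m = R_m * C_m        # membrane time constant = 10 ms
E, V_reset, V_thr, V_peak = -70e-3, -80e-3, -55e-3, 40e-3
dt = 0.1e-3              # 0.1 ms timestep

V, spikes, just_spiked = E, 0, False
n_steps = int(1.0 / dt)                         # simulate 1 s
for k in range(n_steps):
    t = k * dt
    I_ext = 0.5e-9 if 0.25 <= t < 0.75 else 0.0  # 0.5 nA square pulse
    if just_spiked:                  # reset one timestep after the peak
        V, just_spiked = V_reset, False
        continue
    V += dt / tau_m * (E - V + R_m * I_ext)      # Euler step
    if V > V_thr:                    # fire: clamp to the peak this step
        V, spikes, just_spiked = V_peak, spikes + 1, True

rate = spikes / 0.5                  # spikes per second during the pulse
```

With these parameters $V_\infty = E + R_m I_{ext} = -50$ mV during the pulse, which is above threshold, so the neuron fires repeatedly.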
Use Matlab's subplot command to plot the membrane voltage over time in the top panel and $I_{ext}$ in the bottom panel (use the same time scale for the horizontal axis of both plots). No text is required for this question; just include a plot. \paragraph{2.} Compute the average firing rate (spikes per second) of the integrate-and-fire neuron for the pulse interval you used in question 1 (500 ms). Now plot simulated firing rate vs $r_{isi}$ for several values of $I_{ext}$ (use $I_{ext}$ between 0 and 1 nA). How does the firing rate of the modeled neuron compare to the estimated firing rate given by $r_{isi}$? Note: the equation for $r_{isi}$ only holds if $V_\infty > V_{threshold}$; otherwise, $r_{isi} = 0$. \paragraph{3.} Starting from 0 nA, gradually increase the amount of external current injected into the integrate-and-fire neuron in steps of 0.01 nA. Keep the pulse duration constant at 500 ms. What is the smallest amount of current you can inject which will still result in an action potential? Is there a maximum firing rate this neuron can achieve? Why or why not? \paragraph{4. Challenge problem.} Compute firing rate as a function of pulse duration, $I_{duration}$, using 20 durations between 10 and 500 ms. Repeat this for several different values of $I_{ext}$ (try using 10 log-spaced values between 0.1 and 5 nA). Explain what you see. In particular, are the firing rate curves smooth or jagged? Why? \paragraph{5.} Vary the resting potential, specific capacitance, specific resistance, and surface area variables. How do increases or decreases in these values affect the firing rate of the integrate-and-fire neuron? Explain (try to stay at or under 1-2 sentences per variable). Include plots for each of these variables. \end{document}
\maketitle \section*{Motivation: Why covering spaces?} \todo[inline]{Topologists have a love-hate relationship with topological spaces.} One way to understand spaces is to associate an algebraic invariant to them. \begin{equation*} \begin{tikzcd} \mbox{Topological Space} \ar[rr] && \mbox{Algebraic invariant} \\ \mbox{For example: Knot} \ar[r] & \mbox{Knot projection} \ar[r] & \mbox{Knot invariant} \end{tikzcd} \end{equation*} where a knot invariant can be a number, polynomial, vector space, chain complex, etc. It is rarely possible to compute an invariant directly from the definition. Instead, there are two main strategies in algebraic topology for computing these algebraic invariants: \begin{enumerate} \item Break a space $X$ up into smaller spaces, \begin{align*} X = X_1 \cup \dots \cup X_n. \end{align*} This is called \emph{excision}. \item Map another space $Y$ into $X$ and recover information about $X$ using $Y$. \begin{align*} Y \longrightarrow X \end{align*} This is where covering spaces come in. A covering map $Y \rightarrow X$ provides us with information about the fundamental group/first homotopy group $\pi_1(X)$. We will show later on that if $\pi_1(Y) \triangleleft \pi_1(X)$, then there is a short exact sequence of groups \begin{equation*} 1 \rightarrow \pi_1(Y) \rightarrow \pi_1(X) \rightarrow \Gal(Y|X) \rightarrow 1. \end{equation*} \end{enumerate}
\documentclass[Physics.tex]{subfiles} \begin{document} \chapter{Gravitational Field} Newton's \sldef{law of universal gravitation} states that every particle in the universe attracts every other particle with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them, i.e. \begin{equation}\left|\mathbf{F}\right| = G\frac{m_{1}m_{2}}{r^{2}}\end{equation} where \(G\) is the gravitational constant. The law of gravitation is an inverse-square law, i.e. the magnitude of the force is inversely proportional to the square of some quantity -- in this case the square of the separation between the particles. This equation holds only for point masses, but a large mass at a large distance can be assumed, without losing too much accuracy, to be a point mass, so the equation remains reasonably accurate in that case. The region in space where an object exerts a gravitational force on another object is the \sldef{gravitational field} of the first object. A gravitational field can be represented by field lines showing the resultant direction of the gravitational force acting on a mass placed at any point in the field. Where the field lines are closer together, the field is stronger, and vice versa. The \sldef{gravitational field strength} \(\mathbf{g}\) at a point in a gravitational field is the gravitational force per unit mass acting on a body placed at that point. \begin{equation}\left|\mathbf{g}\right| = G\frac{Mm}{mr^{2}} = G\frac{M}{r^{2}}\end{equation} Near the surface of the Earth, \(\mathbf{g}\) is approximately constant, as \(r\) does not vary very appreciably. However, \(\mathbf{g}\) varies across different points on the Earth's surface: the Earth is not a perfect sphere (it bulges at the equator), its density is not uniform, and it rotates about an axis through its poles, so for an object not at the poles, part of the gravitational force must provide the centripetal acceleration.
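As a quick check of the field-strength formula, substituting standard textbook values for the Earth into \(|\mathbf{g}| = GM/r^{2}\) recovers the familiar surface value (the numerical constants below are standard reference values, not taken from this chapter):

```python
# Standard reference values for the Earth (not from this chapter).
G = 6.674e-11   # gravitational constant, N m^2 kg^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # mean radius of the Earth, m

g = G * M / r**2   # field strength at the surface, N/kg
# g comes out close to the familiar 9.81 N/kg
```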
An object is only truly weightless when it experiences no gravitational force at all. Otherwise, it experiences apparent weightlessness -- it simply does not feel its weight as it experiences no normal force. The \sldef{gravitational potential energy} \(U\) of a mass at a point in a gravitational field is the work done by an external force in bringing the mass from infinity to that point without acceleration. \begin{equation}U = -G\frac{Mm}{r}\end{equation} At infinity, \(U = 0\). The negative sign arises from the fact that the gravitational force is attractive in nature. The \sldef{gravitational potential} \(\Phi\) at a point in a gravitational field is the work done per unit mass by an external force in bringing a test mass from infinity to that point without acceleration.\begin{equation}\Phi = -G\frac{M}{r}\end{equation} The \sldef{escape speed} is the minimum speed with which a mass should be launched from a planet's surface to escape the planet's gravitational field. It is the speed at which the mass's kinetic energy exactly cancels its gravitational potential energy, giving it zero total energy, the same total energy it would have if it were stationary at infinity. An \sldef{equipotential} line or surface is formed by all the points that have the same potential. Objects moving along an equipotential neither lose nor gain energy. \sldef{Kepler's 3rd law} states that the square of an object's orbital period is proportional to the cube of its orbital radius. A \sldef{geostationary satellite} of a planet is a satellite that orbits the planet such that it is always above the same point on the planet. This means that the satellite must orbit above the planet's equator in the direction of the planet's axial rotation, and the period of the orbit must equal the period of the rotation. Geostationary satellites are often used for communication.
They are useful because they always stay over the same point, so there is no need to adjust satellite dishes; they are also high above the Earth and so can see large areas of its surface. However, also due to their high altitude, the images they take tend to have low spatial resolution, and because they must orbit over the equator, they are of limited use for latitudes greater than about \SI{70}{\degree} N or S. \end{document}
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). 
\usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} 
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. 
\def\br{\hspace*{\fill} \\* } % Math Jax compatability definitions \def\gt{>} \def\lt{<} % Document parameters \title{SciRadar} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} 
\expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname 
PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} 
\expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines 
due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \hypertarget{science-radar}{% \section{Science Radar}\label{science-radar}} An application for the automatic detection of research fronts. \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}1}]:} \PY{o}{\PYZpc{}}\PY{k}{colors} lightbg \PY{o}{\PYZpc{}}\PY{k}{matplotlib} inline \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k+kn}{from} \PY{n+nn}{commands}\PY{n+nn}{.}\PY{n+nn}{network\PYZus{}generation} \PY{k}{import} \PY{o}{*} \PY{k+kn}{from} \PY{n+nn}{commands}\PY{n+nn}{.}\PY{n+nn}{network\PYZus{}analysis} \PY{k}{import} \PY{o}{*} \PY{k+kn}{from} \PY{n+nn}{commands}\PY{n+nn}{.}\PY{n+nn}{europe\PYZus{}pmc\PYZus{}harvester} \PY{k}{import} \PY{o}{*} \PY{k+kn}{from} \PY{n+nn}{commands}\PY{n+nn}{.}\PY{n+nn}{burst\PYZus{}detection} \PY{k}{import} \PY{o}{*} \PY{k+kn}{from} \PY{n+nn}{utils}\PY{n+nn}{.}\PY{n+nn}{fulltext\PYZus{}downloader} \PY{k}{import} \PY{n}{download\PYZus{}fulltext} \PY{k+kn}{from} \PY{n+nn}{utils}\PY{n+nn}{.}\PY{n+nn}{bioportal\PYZus{}api} \PY{k}{import} \PY{n}{annotate\PYZus{}text} \PY{k+kn}{from} \PY{n+nn}{utils}\PY{n+nn}{.}\PY{n+nn}{europe\PYZus{}pmc\PYZus{}api} \PY{k}{import} \PY{n}{get\PYZus{}paper\PYZus{}keywords} \PY{k+kn}{from} \PY{n+nn}{utils} \PY{k}{import} \PY{n}{mongodb\PYZus{}access} \PY{k+kn}{from} \PY{n+nn}{joblib} \PY{k}{import} \PY{n}{delayed}\PY{p}{,} \PY{n}{Parallel} \PY{k+kn}{import} \PY{n+nn}{logging} \PY{k+kn}{import} \PY{n+nn}{click\PYZus{}log} \PY{k+kn}{import} \PY{n+nn}{matplotlib} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pyplot} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{from} \PY{n+nn}{bs4} \PY{k}{import} \PY{n}{BeautifulSoup}
\PY{k+kn}{from} \PY{n+nn}{time} \PY{k}{import} \PY{n}{time} \PY{k+kn}{from} \PY{n+nn}{sklearn}\PY{n+nn}{.}\PY{n+nn}{feature\PYZus{}extraction}\PY{n+nn}{.}\PY{n+nn}{text} \PY{k}{import} \PY{n}{TfidfVectorizer} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{cluster} \PY{k}{import} \PY{n}{hierarchy} \PY{k+kn}{from} \PY{n+nn}{sklearn}\PY{n+nn}{.}\PY{n+nn}{cluster} \PY{k}{import} \PY{n}{DBSCAN} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] /usr/lib/python3.6/site-packages/tqdm/autonotebook/\_\_init\_\_.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) " (e.g. in jupyter console)", TqdmExperimentalWarning) \end{Verbatim} \hypertarget{recolecciuxf3n-de-datos}{% \subsection{Data collection}\label{recolecciuxf3n-de-datos}} \hypertarget{paruxe1metros-para-la-recolecciuxf3n-de-datos}{% \subsubsection{Parameters for data collection}\label{paruxe1metros-para-la-recolecciuxf3n-de-datos}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}2}]:} \PY{n}{dataset\PYZus{}name} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}} \PY{n}{keywords} \PY{o}{=} \PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ZIKA}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ZIKAV}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{pmids} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{n}{include\PYZus{}citing\PYZus{}papers} \PY{o}{=} \PY{k+kc}{True} \PY{n}{output\PYZus{}path} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{data/sciradar}\PY{l+s+s1}{\PYZsq{}} \PY{n}{start\PYZus{}year} \PY{o}{=} \PY{l+m+mi}{2010} \PY{n}{end\PYZus{}year} \PY{o}{=} \PY{l+m+mi}{2018} \PY{n}{mongo\PYZus{}config} \PY{o}{=} \PY{p}{\PYZob{}} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{publications}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mongodb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{l+m+mi}{27017} \PY{p}{\PYZcb{}} \end{Verbatim} \hypertarget{creaciuxf3n-de-directorio-de-salida}{% \subsubsection{Creation of the output directory}\label{creaciuxf3n-de-directorio-de-salida}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}3}]:} \PY{n}{output\PYZus{}path} \PY{o}{=} \PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{/}\PY{l+s+s1}{\PYZsq{}} \PY{k}{if} \PY{o+ow}{not} \PY{n}{output\PYZus{}path}\PY{o}{.}\PY{n}{endswith}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{else} \PY{n}{output\PYZus{}path} \PY{n}{output\PYZus{}path} \PY{o}{+}\PY{o}{=} \PY{n}{dataset\PYZus{}name} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{/}\PY{l+s+s1}{\PYZsq{}} \PY{k}{if} \PY{o+ow}{not} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{exists}\PY{p}{(}\PY{n}{output\PYZus{}path}\PY{p}{)}\PY{p}{:} \PY{n}{os}\PY{o}{.}\PY{n}{makedirs}\PY{p}{(}\PY{n}{output\PYZus{}path}\PY{p}{)} \end{Verbatim} \hypertarget{recolecciuxf3n-de-datos-desde-europepmc}{% \subsubsection{\texorpdfstring{Data collection from \href{https://europepmc.org/}{EuropePMC}}{Data collection from EuropePMC}}\label{recolecciuxf3n-de-datos-desde-europepmc}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{k}{def} \PY{n+nf}{get\PYZus{}paper\PYZus{}fulltext}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{isfile}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{reference} \PY{o}{=} \PY{n}{json}\PY{o}{.}\PY{n}{load}\PY{p}{(}\PY{n}{f}\PY{p}{)} \PY{k}{return} \PY{n}{reference} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{fullText}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{download\PYZus{}fulltext}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)} \PY{k}{return} \PY{n}{reference} \PY{k}{def} \PY{n+nf}{annotate\PYZus{}reference}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{isfile}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{reference} \PY{o}{=} \PY{n}{json}\PY{o}{.}\PY{n}{load}\PY{p}{(}\PY{n}{f}\PY{p}{)} \PY{k}{return} \PY{n}{reference} \PY{k}{if} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{fullText}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{strip}\PY{p}{(}\PY{p}{)} \PY{o}{!=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:}
\PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{annotations}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{annotate\PYZus{}text}\PY{p}{(}\PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{fullText}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{strip}\PY{p}{(}\PY{p}{)}\PY{p}{)} \PY{k}{else}\PY{p}{:} \PY{n}{text} \PY{o}{=} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{title}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{text} \PY{o}{+}\PY{o}{=} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{abstractText}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{abstractText}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{reference} \PY{k}{else} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{annotations}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{annotate\PYZus{}text}\PY{p}{(}\PY{n}{text}\PY{p}{)} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{fp}\PY{p}{:} \PY{n}{json}\PY{o}{.}\PY{n}{dump}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{fp}\PY{p}{,} \PY{n}{default}\PY{o}{=}\PY{n}{default}\PY{p}{,} \PY{n}{indent}\PY{o}{=}\PY{l+m+mi}{4}\PY{p}{,} \PY{n}{sort\PYZus{}keys}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \PY{k}{def} \PY{n+nf}{annotate\PYZus{}citations}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{index}\PY{p}{,} \PY{n}{citation} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:} 
\PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{annotations}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation}\PY{p}{:} \PY{k}{return} \PY{n}{text} \PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{context}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation} \PY{o+ow}{and} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{context}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{strip}\PY{p}{(}\PY{p}{)} \PY{o}{!=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{text} \PY{o}{=} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{context}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{o}{.}\PY{n}{strip}\PY{p}{(}\PY{p}{)} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{title}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation}\PY{p}{:} \PY{n}{text} \PY{o}{+}\PY{o}{=} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ }\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{title}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{if} \PY{n}{text} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{k}{return} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{index}\PY{p}{]}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{annotations}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{annotate\PYZus{}text}\PY{p}{(}\PY{n}{text}\PY{p}{)} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{update\PYZus{}document}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{)} \PY{k}{def} \PY{n+nf}{get\PYZus{}citations\PYZus{}keywords}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{)}\PY{p}{:} \PY{k}{for} \PY{n}{index}\PY{p}{,} \PY{n}{citation} \PY{o+ow}{in}
\PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{keywords}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation}\PY{p}{:} \PY{k}{return} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{source}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation} \PY{o+ow}{and} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{source}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{MED}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{and} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{in} \PY{n}{citation}\PY{p}{:} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{index}\PY{p}{]}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{keywords}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{get\PYZus{}paper\PYZus{}keywords}\PY{p}{(}\PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{update\PYZus{}document}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{)} \PY{k}{def} \PY{n+nf}{default}\PY{p}{(}\PY{n}{o}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n+nb}{type}\PY{p}{(}\PY{n}{o}\PY{p}{)} \PY{o+ow}{is} \PY{n}{datetime}\PY{o}{.}\PY{n}{date} \PY{o+ow}{or} \PY{n+nb}{type}\PY{p}{(}\PY{n}{o}\PY{p}{)} \PY{o+ow}{is} \PY{n}{datetime}\PY{o}{.}\PY{n}{datetime}\PY{p}{:} \PY{k}{return} \PY{n}{o}\PY{o}{.}\PY{n}{isoformat}\PY{p}{(}\PY{p}{)} \PY{k}{def} \PY{n+nf}{get\PYZus{}references\PYZus{}from\PYZus{}mongo}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n+nb}{list}\PY{p}{(}\PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}all}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} 
\PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{)}\PY{p}{)} \PY{k}{def} \PY{n+nf}{get\PYZus{}citation\PYZus{}context}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pmcid}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{not} \PY{o+ow}{in} \PY{n}{reference}\PY{p}{:} \PY{k}{return} \PY{n}{nxml\PYZus{}file\PYZus{}name} \PY{o}{=} \PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{xml/}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pmcid}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.nxml}\PY{l+s+s1}{\PYZsq{}} \PY{k}{if} \PY{o+ow}{not} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{isfile}\PY{p}{(}\PY{n}{nxml\PYZus{}file\PYZus{}name}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{nxml\PYZus{}file\PYZus{}name}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{r}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{as} \PY{n}{xml\PYZus{}file}\PY{p}{:} \PY{n}{xml\PYZus{}text} \PY{o}{=} \PY{n}{xml\PYZus{}file}\PY{o}{.}\PY{n}{read}\PY{p}{(}\PY{p}{)} \PY{n}{xml\PYZus{}text} \PY{o}{=} \PY{n}{xml\PYZus{}text}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+se}{\PYZbs{}n}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}sup\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZam{}lt;}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}/sup\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} 
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZam{}gt;}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}italic\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}/italic\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{soup} \PY{o}{=} \PY{n}{BeautifulSoup}\PY{p}{(}\PY{n}{xml\PYZus{}text}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{xml}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{has\PYZus{}change} \PY{o}{=} \PY{k+kc}{False} \PY{k}{for} \PY{n}{index}\PY{p}{,} \PY{n}{citation} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}} \PY{o+ow}{not} \PY{o+ow}{in} \PY{n}{citation} \PY{o+ow}{or} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{==} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{k}{return} \PY{n}{cites\PYZus{}pmid} \PY{o}{=} \PY{n}{citation}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{ref\PYZus{}list\PYZus{}elem} \PY{o}{=} \PY{n}{soup}\PY{o}{.}\PY{n}{find}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ref\PYZhy{}list}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{k}{if} \PY{n}{ref\PYZus{}list\PYZus{}elem}\PY{p}{:} \PY{n}{pub\PYZus{}id\PYZus{}elem} \PY{o}{=} \PY{n}{ref\PYZus{}list\PYZus{}elem}\PY{o}{.}\PY{n}{find}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{pub\PYZhy{}id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{p}{\PYZob{}}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{pub\PYZhy{}id\PYZhy{}type}\PY{l+s+s2}{\PYZdq{}}\PY{p}{:} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{pmid}\PY{l+s+s2}{\PYZdq{}}\PY{p}{\PYZcb{}}\PY{p}{,} \PY{n}{text}\PY{o}{=}\PY{n}{cites\PYZus{}pmid}\PY{p}{)} \PY{k}{if} \PY{n}{pub\PYZus{}id\PYZus{}elem}\PY{p}{:} 
\PY{n}{citation\PYZus{}elem} \PY{o}{=} \PY{n}{pub\PYZus{}id\PYZus{}elem}\PY{o}{.}\PY{n}{parent} \PY{k}{if} \PY{n}{citation\PYZus{}elem}\PY{p}{:} \PY{n}{ref\PYZus{}elem} \PY{o}{=} \PY{n}{citation\PYZus{}elem}\PY{o}{.}\PY{n}{parent} \PY{k}{if} \PY{n}{ref\PYZus{}elem}\PY{o}{.}\PY{n}{has\PYZus{}attr}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{n}{ref\PYZus{}id} \PY{o}{=} \PY{n}{ref\PYZus{}elem}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{id}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{citation} \PY{o}{=} \PY{n}{soup}\PY{o}{.}\PY{n}{find}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{xref}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{p}{\PYZob{}}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{rid}\PY{l+s+s1}{\PYZsq{}}\PY{p}{:} \PY{n}{ref\PYZus{}id}\PY{p}{\PYZcb{}}\PY{p}{)} \PY{k}{if} \PY{n}{citation} \PY{o+ow}{is} \PY{o+ow}{not} \PY{k+kc}{None} \PY{o+ow}{and} \PY{n}{citation}\PY{o}{.}\PY{n}{findParent}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{p}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{n}{citation\PYZus{}text} \PY{o}{=} \PY{n}{citation}\PY{o}{.}\PY{n}{findParent}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{p}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{o}{.}\PY{n}{text}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZlt{}}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{citation}\PY{o}{.}\PY{n}{text} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZgt{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{[}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s1}{]}\PY{l+s+s1}{\PYZsq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cites\PYZus{}pmid}\PY{p}{)}\PY{p}{)} \PY{n}{citation\PYZus{}text} \PY{o}{=} \PY{n}{citation\PYZus{}text}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{[}\PY{l+s+s1}{\PYZsq{}} \PY{o}{+} \PY{n}{citation}\PY{o}{.}\PY{n}{text} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{]}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} 
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{[}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s1}{]}\PY{l+s+s1}{\PYZsq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cites\PYZus{}pmid}\PY{p}{)}\PY{p}{)} \PY{n}{citation\PYZus{}text} \PY{o}{=} \PY{n}{citation\PYZus{}text}\PY{o}{.}\PY{n}{replace}\PY{p}{(}\PY{n}{citation}\PY{o}{.}\PY{n}{text} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{,}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{[}\PY{l+s+si}{\PYZob{}\PYZcb{}}\PY{l+s+s1}{]}\PY{l+s+s1}{\PYZsq{}}\PY{o}{.}\PY{n}{format}\PY{p}{(}\PY{n}{cites\PYZus{}pmid}\PY{p}{)}\PY{p}{)} \PY{n}{reference}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{references}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{index}\PY{p}{]}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{context}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{citation\PYZus{}text} \PY{n}{has\PYZus{}change} \PY{o}{=} \PY{k+kc}{True} \PY{k}{if} \PY{n}{has\PYZus{}change}\PY{p}{:} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{update\PYZus{}document}\PY{p}{(}\PY{n}{mongo\PYZus{}host}\PY{p}{,} \PY{n}{mongo\PYZus{}port}\PY{p}{,} \PY{n}{database}\PY{p}{,} \PY{n}{collection}\PY{p}{,} \PY{n}{reference}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{references} \PY{o}{=} \PY{n}{get\PYZus{}references\PYZus{}from\PYZus{}mongo}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{k}{if} \PY{n+nb}{len}\PY{p}{(}\PY{n}{references}\PY{p}{)} \PY{o}{==} \PY{l+m+mi}{0}\PY{p}{:} \PY{n}{click}\PY{o}{.}\PY{n}{secho}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Harvesting 
papers}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{fg}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{yellow}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{references} \PY{o}{=} \PY{n}{harvest\PYZus{}papers\PYZus{}metadata}\PY{p}{(}\PY{n}{keywords}\PY{p}{,} \PY{n}{pmids}\PY{p}{,} \PY{n}{start\PYZus{}year}\PY{p}{,} \PY{n}{end\PYZus{}year}\PY{p}{,} \PY{n}{include\PYZus{}citing\PYZus{}papers}\PY{o}{=}\PY{n}{include\PYZus{}citing\PYZus{}papers}\PY{p}{)} \PY{n}{click}\PY{o}{.}\PY{n}{secho}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Getting fulltext}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{fg}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{yellow}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{references} \PY{o}{=} \PY{n}{Parallel}\PY{p}{(}\PY{n}{n\PYZus{}jobs}\PY{o}{=}\PY{l+m+mi}{25}\PY{p}{)}\PY{p}{(}\PY{n}{delayed}\PY{p}{(}\PY{n}{get\PYZus{}paper\PYZus{}fulltext}\PY{p}{)}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)} \PY{k}{for} \PY{n}{reference} \PY{o+ow}{in} \PY{n}{tqdm}\PY{p}{(}\PY{n}{references}\PY{p}{)}\PY{p}{)} \PY{k}{if} \PY{o+ow}{not} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{exists}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{n}{os}\PY{o}{.}\PY{n}{makedirs}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{click}\PY{o}{.}\PY{n}{secho}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Annotating papers}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{fg}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{yellow}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{Parallel}\PY{p}{(}\PY{n}{n\PYZus{}jobs}\PY{o}{=}\PY{l+m+mi}{25}\PY{p}{)}\PY{p}{(}\PY{n}{delayed}\PY{p}{(}\PY{n}{annotate\PYZus{}reference}\PY{p}{)}\PY{p}{(}\PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)} \PY{k}{for} \PY{n}{reference} \PY{o+ow}{in} \PY{n}{tqdm}\PY{p}{(}\PY{n}{references}\PY{p}{)}\PY{p}{)} 
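\end{Verbatim}

The harvesting cell above is resumable: \texttt{get\_paper\_fulltext} first looks for a previously saved record under \texttt{json/} and only downloads when no cached copy exists, so an interrupted run picks up where it left off. A stdlib-only sketch of that cache-or-fetch pattern (the \texttt{fetch} callback is a placeholder for \texttt{download\_fulltext}):

```python
import json
import os

def cached_or_fetch(reference, output_path, fetch):
    """Return the cached record if present, otherwise fetch and enrich it."""
    cache_file = os.path.join(output_path, 'json', reference['id'] + '.json')
    if os.path.isfile(cache_file):
        # Already processed in an earlier run: reuse the saved record.
        with open(cache_file) as f:
            return json.load(f)
    # First time this paper is seen: download and attach its full text.
    reference['fullText'] = fetch(reference)
    return reference
```

\texttt{annotate\_reference} later writes the enriched record to the same \texttt{json/} directory, which is what lets a re-run skip completed papers.

\begin{Verbatim}[commandchars=\\\{\}]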
\PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{save\PYZus{}directory}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{json/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{publications}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{dataset\PYZus{}name}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mongodb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{references} \PY{o}{=} \PY{n}{get\PYZus{}references\PYZus{}from\PYZus{}mongo}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{Parallel}\PY{p}{(}\PY{n}{n\PYZus{}jobs}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{(}\PY{n}{delayed}\PY{p}{(}\PY{n}{get\PYZus{}citation\PYZus{}context}\PY{p}{)}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{reference}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{p}{)} \PY{k}{for} \PY{n}{reference} \PY{o+ow}{in} \PY{n}{tqdm}\PY{p}{(}\PY{n}{references}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{reference\PYZus{}cursor} \PY{o}{=} 
\PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}all}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{Parallel}\PY{p}{(}\PY{n}{n\PYZus{}jobs}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{(}\PY{n}{delayed}\PY{p}{(}\PY{n}{annotate\PYZus{}citations}\PY{p}{)}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{reference}\PY{p}{)} \PY{k}{for} \PY{n}{reference} \PY{o+ow}{in} \PY{n}{tqdm}\PY{p}{(}\PY{n}{reference\PYZus{}cursor}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{reference\PYZus{}cursor} \PY{o}{=} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}all}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} 
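\end{Verbatim}

These cells all use joblib's \texttt{Parallel(n\_jobs=...)} with \texttt{delayed} to fan one I/O-bound, per-paper function out over many workers while keeping results in input order. The same shape can be sketched with the standard library alone (\texttt{ThreadPoolExecutor} is a stand-in here, not the notebook's actual mechanism):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=16):
    """Apply func to every item concurrently, preserving input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map yields results in the order of items, like joblib does.
        return list(pool.map(func, items))
```

Threads are appropriate here because the per-paper work (HTTP downloads, MongoDB updates) is dominated by waiting on I/O rather than CPU.

\begin{Verbatim}[commandchars=\\\{\}]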
\PY{n}{Parallel}\PY{p}{(}\PY{n}{n\PYZus{}jobs}\PY{o}{=}\PY{l+m+mi}{16}\PY{p}{)}\PY{p}{(}\PY{n}{delayed}\PY{p}{(}\PY{n}{get\PYZus{}citations\PYZus{}keywords}\PY{p}{)}\PY{p}{(}\PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{host}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{port}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{database}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{collection}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{,} \PY{n}{reference}\PY{p}{)} \PY{k}{for} \PY{n}{reference} \PY{o+ow}{in} \PY{n}{tqdm}\PY{p}{(}\PY{n}{reference\PYZus{}cursor}\PY{p}{)}\PY{p}{)} \end{Verbatim} \hypertarget{anuxe1lisis-de-redes}{% \subsection{Network analysis}\label{anuxe1lisis-de-redes}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{k}{if} \PY{o+ow}{not} \PY{n}{os}\PY{o}{.}\PY{n}{path}\PY{o}{.}\PY{n}{exists}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{:} \PY{n}{os}\PY{o}{.}\PY{n}{makedirs}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \hypertarget{generaciuxf3n-red-de-co-autoruxeda}{% \subsubsection{Co-authorship network generation}\label{generaciuxf3n-red-de-co-autoruxeda}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{coauthorship\PYZus{}networks} \PY{o}{=} \PY{n}{generate\PYZus{}co\PYZus{}authorship\PYZus{}networks\PYZus{}incremental}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{2018}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{o}{=}\PY{n}{output\PYZus{}path} \PY{o}{+}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{o}{=}\PY{n}{mongo\PYZus{}config}\PY{p}{,} \PY{n}{save}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{use\PYZus{}cache}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{coauthorship\PYZus{}analysis} \PY{o}{=} \PY{n}{analyse\PYZus{}networks}\PY{p}{(}\PY{n}{coauthorship\PYZus{}networks}\PY{p}{,} \PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/co\PYZhy{}authorship\PYZhy{}json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{clustering\PYZus{}coefficient}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} \PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} 
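\end{Verbatim}

\texttt{analyse\_networks} is project code, but the quantity plotted in this cell, the average clustering coefficient, is standard: for each node, the fraction of its neighbour pairs that are themselves connected, averaged over all nodes. A minimal pure-Python version for an undirected adjacency-list graph, shown only to make the plotted metric concrete:

```python
def average_clustering(adj):
    """Average local clustering coefficient of an undirected graph.

    adj maps each node to a list of its neighbours (edges listed in
    both directions).
    """
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbours contribute 0
        # Count neighbour pairs that are directly connected (closed triangles).
        links = sum(1 for i, a in enumerate(nbrs)
                    for b in nbrs[i + 1:] if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)
```

\begin{Verbatim}[commandchars=\\\{\}]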
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{density}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} \PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{p}{]} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{similarity\PYZus{}year\PYZus{}before}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} 
\PY{n+nb}{list}\PY{p}{(}\PY{n}{coauthorship\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{p}{]}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{coauthorship\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2017\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \hypertarget{generaciuxf3n-red-de-co-citaciuxf3n}{% \subsubsection{Co-citation network generation}\label{generaciuxf3n-red-de-co-citaciuxf3n}} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}4}]:} \PY{n}{co\PYZus{}citation\PYZus{}networks} \PY{o}{=} \PY{n}{get\PYZus{}co\PYZus{}citation\PYZus{}network\PYZus{}incremental}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{2018}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{o}{=}\PY{n}{output\PYZus{}path} \PY{o}{+}
\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{o}{=}\PY{n}{mongo\PYZus{}config}\PY{p}{,} \PY{n}{save}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{use\PYZus{}cache}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] Generating network for range 2010-1 to 2010-1 Generating network for range 2010-1 to 2010-2 Generating network for range 2010-1 to 2010-3 Generating network for range 2010-1 to 2010-4 Generating network for range 2010-1 to 2010-5 Generating network for range 2010-1 to 2010-6 Generating network for range 2010-1 to 2010-7 Generating network for range 2010-1 to 2010-8 Generating network for range 2010-1 to 2010-9 Generating network for range 2010-1 to 2010-10 Generating network for range 2010-1 to 2010-11 Generating network for range 2010-1 to 2010-12 Generating network for range 2010-1 to 2011-1 Generating network for range 2010-1 to 2011-2 Generating network for range 2010-1 to 2011-3 Generating network for range 2010-1 to 2011-4 Generating network for range 2010-1 to 2011-5 Generating network for range 2010-1 to 2011-6 Generating network for range 2010-1 to 2011-7 Generating network for range 2010-1 to 2011-8 Generating network for range 2010-1 to 2011-9 Generating network for range 2010-1 to 2011-10 Generating network for range 2010-1 to 2011-11 Generating network for range 2010-1 to 2011-12 Generating network for range 2010-1 to 2012-1 Generating network for range 2010-1 to 2012-2 Generating network for range 2010-1 to 2012-3 Generating network for range 2010-1 to 2012-4 Generating network for range 2010-1 to 2012-5 Generating network for range 2010-1 to 2012-6 Generating network for range 2010-1 to 2012-7 Generating network for range 2010-1 to 2012-8 Generating network for range 2010-1 to 2012-9 Generating network for range 2010-1 to 2012-10 Generating network for range 2010-1 to 2012-11 Generating network for range 2010-1 to 2012-12 Generating network for range 2010-1 
to 2013-1 Generating network for range 2010-1 to 2013-2 Generating network for range 2010-1 to 2013-3 Generating network for range 2010-1 to 2013-4 Generating network for range 2010-1 to 2013-5 Generating network for range 2010-1 to 2013-6 Generating network for range 2010-1 to 2013-7 Generating network for range 2010-1 to 2013-8 Generating network for range 2010-1 to 2013-9 Generating network for range 2010-1 to 2013-10 Generating network for range 2010-1 to 2013-11 Generating network for range 2010-1 to 2013-12 Generating network for range 2010-1 to 2014-1 Generating network for range 2010-1 to 2014-2 Generating network for range 2010-1 to 2014-3 Generating network for range 2010-1 to 2014-4 Generating network for range 2010-1 to 2014-5 Generating network for range 2010-1 to 2014-6 Generating network for range 2010-1 to 2014-7 Generating network for range 2010-1 to 2014-8 Generating network for range 2010-1 to 2014-9 Generating network for range 2010-1 to 2014-10 Generating network for range 2010-1 to 2014-11 Generating network for range 2010-1 to 2014-12 Generating network for range 2010-1 to 2015-1 Generating network for range 2010-1 to 2015-2 Generating network for range 2010-1 to 2015-3 Generating network for range 2010-1 to 2015-4 Generating network for range 2010-1 to 2015-5 Generating network for range 2010-1 to 2015-6 Generating network for range 2010-1 to 2015-7 Generating network for range 2010-1 to 2015-8 Generating network for range 2010-1 to 2015-9 Generating network for range 2010-1 to 2015-10 Generating network for range 2010-1 to 2015-11 Generating network for range 2010-1 to 2015-12 Generating network for range 2010-1 to 2016-1 Generating network for range 2010-1 to 2016-2 Generating network for range 2010-1 to 2016-3 Generating network for range 2010-1 to 2016-4 Generating network for range 2010-1 to 2016-5 Generating network for range 2010-1 to 2016-6 Generating network for range 2010-1 to 2016-7 Generating network for range 2010-1 to 2016-8 
Generating network for range 2010-1 to 2016-9 Generating network for range 2010-1 to 2016-10 Generating network for range 2010-1 to 2016-11 Generating network for range 2010-1 to 2016-12 Generating network for range 2010-1 to 2017-1 Generating network for range 2010-1 to 2017-2 Generating network for range 2010-1 to 2017-3 Generating network for range 2010-1 to 2017-4 Generating network for range 2010-1 to 2017-5 Generating network for range 2010-1 to 2017-6 Generating network for range 2010-1 to 2017-7 Generating network for range 2010-1 to 2017-8 Generating network for range 2010-1 to 2017-9 Generating network for range 2010-1 to 2017-10 Generating network for range 2010-1 to 2017-11 Generating network for range 2010-1 to 2017-12 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{co\PYZus{}citation\PYZus{}analysis} \PY{o}{=} \PY{n}{analyse\PYZus{}networks}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{,} \PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/co\PYZhy{}citation.json}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{clustering\PYZus{}coefficient}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} \PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} 
\PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{density}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} \PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} 
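The curves plotted above are read from the output of \texttt{analyse\PYZus{}networks}, whose internals the notebook does not show. As a rough, stdlib-only sketch (the function names below are hypothetical, not the project's actual implementation), the two metrics plotted here for an undirected network can be derived from an edge list as follows; note that graph-tool's \texttt{global\PYZus{}clustering(g)} returns a \texttt{(value, stderr)} pair, which would explain the \texttt{[0]} indexing on \texttt{clustering\PYZus{}coefficient} above.

```python
from itertools import combinations

def density(n_vertices, edges):
    # Undirected density: fraction of possible vertex pairs actually linked.
    possible = n_vertices * (n_vertices - 1) / 2
    return len(edges) / possible if possible else 0.0

def global_clustering(edges):
    # Global clustering coefficient (transitivity).
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    closed = triples = 0
    for centre, nbrs in adj.items():
        for a, b in combinations(sorted(nbrs), 2):
            triples += 1            # a-centre-b is a connected triple
            if b in adj.get(a, set()):
                closed += 1         # the triple is closed (part of a triangle)
    # `closed` counts each triangle three times (once per centre vertex),
    # so closed / triples equals the usual 3 * triangles / triples.
    return closed / triples if triples else 0.0

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(density(4, edges))            # 4 of 6 possible pairs -> 0.666...
print(global_clustering(edges))     # one triangle, five triples -> 0.6
```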
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{dates} \PY{o}{=} \PY{n+nb}{list}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{keys}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{p}{]} \PY{n}{values} \PY{o}{=} \PY{p}{[}\PY{n}{analysis}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{similarity\PYZus{}year\PYZus{}before}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{k}{for} \PY{n}{analysis} \PY{o+ow}{in} \PY{n+nb}{list}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}analysis}\PY{o}{.}\PY{n}{values}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{p}{]}\PY{p}{]} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{n}{figsize}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{18}\PY{p}{,} \PY{l+m+mi}{16}\PY{p}{)}\PY{p}{,} \PY{n}{dpi}\PY{o}{=} \PY{l+m+mi}{80}\PY{p}{,} \PY{n}{facecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{w}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{edgecolor}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{k}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{dates}\PY{p}{,} \PY{n}{values}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xticks}\PY{p}{(}\PY{n+nb}{list}\PY{p}{(}\PY{n}{dates}\PY{p}{)}\PY{p}{,} \PY{p}{[}\PY{n}{date}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ to }\PY{l+s+s1}{\PYZsq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{k}{for} \PY{n}{date} \PY{o+ow}{in} \PY{n}{dates}\PY{p}{]}\PY{p}{,} \PY{n}{rotation}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vertical}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2014\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{g} \PY{o}{=} 
\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2014\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{g} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{n}{label\PYZus{}largest\PYZus{}component}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{)} \PY{n}{ee}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y} \PY{o}{=} \PY{n}{hits}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{g}\PY{o}{.}\PY{n}{edge\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{weight}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{x}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{x}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{y}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{y}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{y}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2015\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{g} \PY{o}{=} 
\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2015\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{g} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{n}{label\PYZus{}largest\PYZus{}component}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{)} \PY{n}{ee}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y} \PY{o}{=} \PY{n}{hits}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{g}\PY{o}{.}\PY{n}{edge\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{weight}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{x}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{x}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{y}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{y}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{y}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2016\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{g} \PY{o}{=} 
\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2016\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{g} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{n}{label\PYZus{}largest\PYZus{}component}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{)} \PY{n}{ee}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y} \PY{o}{=} \PY{n}{hits}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{g}\PY{o}{.}\PY{n}{edge\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{weight}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{x}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{x}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{y}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{y}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{y}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2017\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{g} \PY{o}{=} 
\PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2017\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{g} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{k}{lambda} \PY{n}{v}\PY{p}{:} \PY{n}{g}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{citedByCount}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{10}\PY{p}{)} \PY{n}{g} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{n}{label\PYZus{}largest\PYZus{}component}\PY{p}{(}\PY{n}{g}\PY{p}{)}\PY{p}{)} \PY{n}{ee}\PY{p}{,} \PY{n}{x}\PY{p}{,} \PY{n}{y} \PY{o}{=} \PY{n}{hits}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{g}\PY{o}{.}\PY{n}{edge\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{weight}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{)} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{x}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{x}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{x}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{graph\PYZus{}draw}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vertex\PYZus{}fill\PYZus{}color}\PY{o}{=}\PY{n}{y}\PY{p}{,} \PY{n}{vertex\PYZus{}size}\PY{o}{=}\PY{n}{prop\PYZus{}to\PYZus{}size}\PY{p}{(}\PY{n}{y}\PY{p}{,} \PY{n}{mi}\PY{o}{=}\PY{l+m+mi}{5}\PY{p}{,} \PY{n}{ma}\PY{o}{=}\PY{l+m+mi}{15}\PY{p}{)}\PY{p}{,} \PY{n}{vcmap}\PY{o}{=}\PY{n}{matplotlib}\PY{o}{.}\PY{n}{cm}\PY{o}{.}\PY{n}{gist\PYZus{}heat}\PY{p}{,} \PY{n}{vorder}\PY{o}{=}\PY{n}{y}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{g} 
\PY{o}{=} \PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2014\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{g\PYZus{}most} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{k}{lambda} \PY{n}{v}\PY{p}{:} \PY{n}{g}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{citedByCount}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{l+m+mi}{1}\PY{p}{)}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}most}\PY{o}{.}\PY{n}{purge\PYZus{}vertices}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}most}\PY{o}{.}\PY{n}{save}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/co\PYZhy{}citation\PYZus{}most\PYZus{}cited\PYZus{}2014.xml.gz}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{references}\PY{p}{,} \PY{n}{index} \PY{o}{=} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}references\PYZus{}annotations}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{publications}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mongodb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{27017}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2017}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{)} \PY{n}{most\PYZus{}cited} \PY{o}{=} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}references\PYZus{}count}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{publications}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mongodb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{27017}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2017}\PY{p}{,} \PY{l+m+mi}{12}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{most\PYZus{}cited}\PY{p}{)}\PY{p}{)} 
\PY{n}{to\PYZus{}cluster} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{n}{pmids\PYZus{}to\PYZus{}cluster} \PY{o}{=} \PY{p}{[}\PY{p}{]} \PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{pmid} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{index}\PY{p}{)}\PY{p}{:} \PY{k}{if} \PY{n}{pmid} \PY{o+ow}{in} \PY{n}{most\PYZus{}cited}\PY{p}{:} \PY{n}{to\PYZus{}cluster}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{references}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{n}{pmids\PYZus{}to\PYZus{}cluster}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n+nb}{str}\PY{p}{(}\PY{n}{pmid}\PY{p}{)}\PY{p}{)} \PY{n}{t0} \PY{o}{=} \PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vectorizing}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{vectorizer} \PY{o}{=} \PY{n}{TfidfVectorizer}\PY{p}{(}\PY{n}{use\PYZus{}idf}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \PY{n}{X} \PY{o}{=} \PY{n}{vectorizer}\PY{o}{.}\PY{n}{fit\PYZus{}transform}\PY{p}{(}\PY{n}{to\PYZus{}cluster}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{n\PYZus{}samples: }\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{, n\PYZus{}features: }\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{n}{X}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{n}{X} \PY{o}{=} \PY{n}{X}\PY{o}{.}\PY{n}{todense}\PY{p}{(}\PY{p}{)} \PY{n}{threshold} \PY{o}{=} \PY{l+m+mf}{0.5} \PY{n}{Z} \PY{o}{=} \PY{n}{hierarchy}\PY{o}{.}\PY{n}{linkage}\PY{p}{(}\PY{n}{X}\PY{p}{,} \PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{average}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{metric}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{cosine}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{C} \PY{o}{=} \PY{n}{hierarchy}\PY{o}{.}\PY{n}{fcluster}\PY{p}{(}\PY{n}{Z}\PY{p}{,} \PY{n}{threshold}\PY{p}{,} \PY{n}{criterion}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{distance}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{dn} \PY{o}{=} \PY{n}{hierarchy}\PY{o}{.}\PY{n}{dendrogram}\PY{p}{(}\PY{n}{Z}\PY{p}{,} \PY{n}{labels}\PY{o}{=}\PY{n}{pmids\PYZus{}to\PYZus{}cluster}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)} 
\PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n+nb}{set}\PY{p}{(}\PY{n}{C}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{n}{pmid\PYZus{}cluster\PYZus{}map} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{p}{)} \PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{label} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{C}\PY{p}{)}\PY{p}{:} \PY{n}{pmid\PYZus{}cluster\PYZus{}map}\PY{p}{[}\PY{n}{pmids\PYZus{}to\PYZus{}cluster}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]} \PY{o}{=} \PY{n}{label} \PY{n}{g\PYZus{}grouped} \PY{o}{=} \PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2017\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{group\PYZus{}prop} \PY{o}{=} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{new\PYZus{}vertex\PYZus{}property}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{int}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{group\PYZus{}prop} \PY{k}{for} \PY{n}{v} \PY{o+ow}{in} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertices}\PY{p}{(}\PY{p}{)}\PY{p}{:} \PY{n}{pmid} \PY{o}{=} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{label}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{=} \PY{n}{pmid\PYZus{}cluster\PYZus{}map}\PY{p}{[}\PY{n}{pmid}\PY{p}{]} \PY{k}{if} \PY{n}{pmid} \PY{o+ow}{in} \PY{n}{pmid\PYZus{}cluster\PYZus{}map} \PY{k}{else} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{n}{g\PYZus{}grouped} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g\PYZus{}grouped}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{k}{lambda} \PY{n}{v}\PY{p}{:} 
\PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{o+ow}{and} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o+ow}{is} \PY{o+ow}{not} \PY{k+kc}{None}\PY{p}{)}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{purge\PYZus{}vertices}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{save}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/clustered\PYZus{}2010\PYZus{}2017.xml.gz}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor} }]:} \PY{n}{references}\PY{p}{,} \PY{n}{index} \PY{o}{=} \PY{n}{mongodb\PYZus{}access}\PY{o}{.}\PY{n}{get\PYZus{}references\PYZus{}annotations}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{publications}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{mongodb}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{27017}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{,} \PY{l+m+mi}{2018}\PY{p}{,} \PY{l+m+mi}{1}\PY{p}{)} \PY{n}{t0} \PY{o}{=} \PY{n}{time}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{vectorizing}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{vectorizer} \PY{o}{=} \PY{n}{TfidfVectorizer}\PY{p}{(}\PY{n}{use\PYZus{}idf}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)} \PY{n}{X} \PY{o}{=} \PY{n}{vectorizer}\PY{o}{.}\PY{n}{fit\PYZus{}transform}\PY{p}{(}\PY{n}{references}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{n\PYZus{}samples: }\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{, n\PYZus{}features: }\PY{l+s+si}{\PYZpc{}d}\PY{l+s+s2}{\PYZdq{}} \PY{o}{\PYZpc{}} \PY{n}{X}\PY{o}{.}\PY{n}{shape}\PY{p}{)} \PY{n}{db} 
\PY{o}{=} \PY{n}{DBSCAN}\PY{p}{(}\PY{n}{metric}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{cosine}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{eps}\PY{o}{=}\PY{l+m+mf}{0.5}\PY{p}{)}\PY{o}{.}\PY{n}{fit}\PY{p}{(}\PY{n}{X}\PY{p}{)} \PY{n}{core\PYZus{}samples\PYZus{}mask} \PY{o}{=} \PY{n}{np}\PY{o}{.}\PY{n}{zeros\PYZus{}like}\PY{p}{(}\PY{n}{db}\PY{o}{.}\PY{n}{labels\PYZus{}}\PY{p}{,} \PY{n}{dtype}\PY{o}{=}\PY{n+nb}{bool}\PY{p}{)} \PY{n}{core\PYZus{}samples\PYZus{}mask}\PY{p}{[}\PY{n}{db}\PY{o}{.}\PY{n}{core\PYZus{}sample\PYZus{}indices\PYZus{}}\PY{p}{]} \PY{o}{=} \PY{k+kc}{True} \PY{n}{labels} \PY{o}{=} \PY{n}{db}\PY{o}{.}\PY{n}{labels\PYZus{}} \PY{c+c1}{\PYZsh{} Number of clusters in labels, ignoring noise if present.} \PY{n}{n\PYZus{}clusters\PYZus{}} \PY{o}{=} \PY{n+nb}{len}\PY{p}{(}\PY{n+nb}{set}\PY{p}{(}\PY{n}{labels}\PY{p}{)}\PY{p}{)} \PY{o}{\PYZhy{}} \PY{p}{(}\PY{l+m+mi}{1} \PY{k}{if} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{o+ow}{in} \PY{n}{labels} \PY{k}{else} \PY{l+m+mi}{0}\PY{p}{)} \PY{n}{unique\PYZus{}labels} \PY{o}{=} \PY{n+nb}{set}\PY{p}{(}\PY{n}{labels}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{unique\PYZus{}labels}\PY{p}{)}\PY{p}{)} \PY{n}{pmid\PYZus{}cluster\PYZus{}map} \PY{o}{=} \PY{n+nb}{dict}\PY{p}{(}\PY{p}{)} \PY{k}{for} \PY{n}{i}\PY{p}{,} \PY{n}{label} \PY{o+ow}{in} \PY{n+nb}{enumerate}\PY{p}{(}\PY{n}{labels}\PY{p}{)}\PY{p}{:} \PY{n}{pmid\PYZus{}cluster\PYZus{}map}\PY{p}{[}\PY{n}{index}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]} \PY{o}{=} \PY{n}{label} \PY{n}{g\PYZus{}grouped} \PY{o}{=} \PY{n}{co\PYZus{}citation\PYZus{}networks}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{2010\PYZhy{}1 to 2017\PYZhy{}12}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{n}{group\PYZus{}prop} \PY{o}{=} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{new\PYZus{}vertex\PYZus{}property}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{int}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} 
\PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]} \PY{o}{=} \PY{n}{group\PYZus{}prop} \PY{k}{for} \PY{n}{v} \PY{o+ow}{in} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertices}\PY{p}{(}\PY{p}{)}\PY{p}{:} \PY{n}{pmid} \PY{o}{=} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{label}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{=} \PY{n}{pmid\PYZus{}cluster\PYZus{}map}\PY{p}{[}\PY{n}{pmid}\PY{p}{]} \PY{k}{if} \PY{n}{pmid} \PY{o+ow}{in} \PY{n}{pmid\PYZus{}cluster\PYZus{}map} \PY{k}{else} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{n}{g\PYZus{}grouped} \PY{o}{=} \PY{n}{GraphView}\PY{p}{(}\PY{n}{g\PYZus{}grouped}\PY{p}{,} \PY{n}{vfilt}\PY{o}{=}\PY{k}{lambda} \PY{n}{v}\PY{p}{:} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o}{\PYZgt{}} \PY{o}{\PYZhy{}}\PY{l+m+mi}{1} \PY{o+ow}{and} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{vertex\PYZus{}properties}\PY{p}{[}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{group}\PY{l+s+s1}{\PYZsq{}}\PY{p}{]}\PY{p}{[}\PY{n}{v}\PY{p}{]} \PY{o+ow}{is} \PY{o+ow}{not} \PY{k+kc}{None}\PY{p}{)}\PY{o}{.}\PY{n}{copy}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{purge\PYZus{}vertices}\PY{p}{(}\PY{p}{)} \PY{n}{g\PYZus{}grouped}\PY{o}{.}\PY{n}{save}\PY{p}{(}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/clustered\PYZus{}2010\PYZus{}2017.xml.gz}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] vectorizing n\_samples: 31702, n\_features: 15004 \end{Verbatim} \hypertarget{generaciuxf3n-red-de-co-aparaciuxf3n-de-las-anotaciones}{% 
\subsubsection{Annotation Co-occurrence Network Generation}\label{generaciuxf3n-red-de-co-aparaciuxf3n-de-las-anotaciones}}

\begin{Verbatim}[commandchars=\\\{\}]
{\color{incolor}In [{\color{incolor} }]:} \PY{n}{cooccurrence\PYZus{}networks} \PY{o}{=} \PY{n}{get\PYZus{}co\PYZus{}occurrence\PYZus{}network\PYZus{}incremental}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{zika}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{l+m+mi}{2010}\PY{p}{,} \PY{l+m+mi}{2018}\PY{p}{,} \PY{n}{output\PYZus{}path}\PY{o}{=}\PY{n}{output\PYZus{}path} \PY{o}{+} \PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{gt/}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{mongo\PYZus{}config}\PY{o}{=}\PY{n}{mongo\PYZus{}config}\PY{p}{,} \PY{n}{save}\PY{o}{=}\PY{k+kc}{True}\PY{p}{,} \PY{n}{use\PYZus{}cache}\PY{o}{=}\PY{k+kc}{True}\PY{p}{)}
\end{Verbatim}

\hypertarget{anuxe1lisis-de-ruxe1faga}{%
\subsection{Burst Analysis}\label{anuxe1lisis-de-ruxe1faga}}

\hypertarget{ruxe1faga-de-palabras-utilizando-las-anotaciones}{%
\subsubsection{Word Bursts Using the Annotations}\label{ruxe1faga-de-palabras-utilizando-las-anotaciones}}

\hypertarget{ruxe1faga-de-palabras-utilizando-el-texto-completo}{%
\subsubsection{Word Bursts Using the Full Text}\label{ruxe1faga-de-palabras-utilizando-el-texto-completo}}

\hypertarget{ruxe1faga-de-palabras-utilizando-tuxedtulo-y-abstract}{%
\subsubsection{Word Bursts Using Title and Abstract}\label{ruxe1faga-de-palabras-utilizando-tuxedtulo-y-abstract}}

\hypertarget{ruxe1faga-de-palabras-utilizando-palabras-clave}{%
\subsubsection{Word Bursts Using Keywords}\label{ruxe1faga-de-palabras-utilizando-palabras-clave}}

\hypertarget{ruxe1faga-de-palabras-utilizando-mesh}{%
\subsubsection{Word Bursts Using MeSH Terms}\label{ruxe1faga-de-palabras-utilizando-mesh}}

\hypertarget{detecciuxf3n-de-frentes-de-investigaciuxf3n}{%
\subsection{Research Front Detection}\label{detecciuxf3n-de-frentes-de-investigaciuxf3n}}

\hypertarget{anuxe1lisis-del-cambio-en-los-autores}{%
\subsubsection{Analysis of Change in Authors}\label{anuxe1lisis-del-cambio-en-los-autores}}

\hypertarget{anuxe1lisis-del-cambio-en-la-interdisciplinariedad}{%
\subsubsection{Analysis of Change in Interdisciplinarity}\label{anuxe1lisis-del-cambio-en-la-interdisciplinariedad}}

\hypertarget{anuxe1lisis-de-ruxe1faga-de-palabras}{%
\subsubsection{Word Burst Analysis}\label{anuxe1lisis-de-ruxe1faga-de-palabras}}

% Add a bibliography block to the postdoc
\end{document}
%% Introduction %%
% Subject: Development of a program with which the technology files for SPACE
% can be created quickly.
%
% Main question: What are the most important problems encountered during
% development and how were they solved?
%
% Background questions:
% - What is the environment in which the program was developed?
% - Which design methods were applied?
% - Why does this program help with the development of technology files? (Why is it optimal?)
%
% Core questions:
% - technical feasibility
% - flexibility
% - reliability
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Introduction}
\label{chap:introduction}

A layout-to-circuit extractor like SPACE\footnote{\emph{SPACE} is the
layout-to-circuit extractor developed at the Circuits and Systems group of the
Electrical Engineering faculty. SPACE has been quite successful so far. It is used
by several companies and universities, including Agilent (Hewlett Packard) and
Level One (Intel).} must (of course) be able to handle all kinds of processes.
In SPACE, each of these processes is described by a set of \emph{technology files}.
The contents of these files vary from simple layer specifications to complex
capacitance tables.

Until now, the technology files were maintained by hand. This means that for each
process all the technology files had to be entered manually, a time-consuming and
error-prone task. Also, the user must be familiar with the file format of each of
the technology files, which makes process entry and maintenance a specialized
task. \emph{SPOCK} is intended to alleviate the problems described above by
presenting the user with a graphical user interface in which the required data
can be entered.

\bigskip
\noindent
The main goal of this thesis is to identify the problems encountered during the
design and implementation of SPOCK and to describe the solutions that were used
to solve these problems.
\bigskip
\noindent
Chapter \ref{chap:demands} presents the required functionality of the application
as well as some imposed constraints. Chapter \ref{chap:design} discusses and
explains the final design. The user interface plays an important role in this
application; some standard and custom user interface elements are therefore
presented in Chapter \ref{chap:uidesign}. Chapter \ref{chap:language} describes
the configuration file language. As will become clear later, the configuration
file describes the technology files supported by the application.
%Chapter \ref{chap:environment}) describes the tools used during development. Some
%comments on the testing and debugging process will be given in Chapter
%\ref{chap:testing}.
The final chapter contains some concluding remarks.
% Created 2017-02-17 Fri 16:30 % Intended LaTeX compiler: xelatex \documentclass[presentation]{beamer} \usepackage{graphicx} \usepackage{grffile} \usepackage{longtable} \usepackage{wrapfig} \usepackage{rotating} \usepackage[normalem]{ulem} \usepackage{amsmath} \usepackage{textcomp} \usepackage{amssymb} \usepackage{capt-of} \usepackage{hyperref} \usetheme{UMMISCO} \author{Author} \date{\today} \title{Title of my talk} \hypersetup{ pdfauthor={Author}, pdftitle={Title of my talk}, pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs 25.2.1 (Org mode 9.0.5)}, pdflang={English}} \begin{document} \maketitle \begin{frame}{Outline} \tableofcontents \end{frame} \section{Slide 1} \label{sec:org7d01809} \begin{frame}[label={sec:org7ace24f}]{Slide 1} A science committed to a sustainable future \end{frame} \end{document}
\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{enumerate}
\usepackage[usenames, dvipsnames]{color}
\usepackage[margin=1.25in]{geometry}
\usepackage{hyperref}
\usepackage{setspace}
\doublespacing
\begin{document}
\begin{flushright}
\vspace{1.1cm}
{\bf\Huge Hubble Space Telescope Spectra Database}
\rule{0.25\linewidth}{0.5pt}
\vspace{0.5cm}
%Put Authors
Justin Ely
\linebreak
%Put Author's affiliations
\footnotesize{605.411. Principles of Database Systems \\}
% Date here below
11 August, 2017
\end{flushright}
\noindent\rule{\linewidth}{1.0pt}
%------------------------------------------------------------------------------
\section{Introduction}
The Hubble Space Telescope (HST) is one of the most productive scientific
instruments of all time, taking many observations each day and producing large
quantities of data. Unfortunately, the data storage for these archives grew
organically over more than 25 years of operations. As a result, the data is
stored in a collection of ASCII files, binary files, and distributed databases
with a large amount of redundancy and inefficiency.

\subsection{Scope and Purpose of Document}
This document describes the design and implementation of a database system to
store and serve data for the Hubble Space Telescope. Contained within are ER
diagrams, operational rules, a sample implementation, and more to demonstrate
applicability.

\subsection{Project Objective}
This project aims to take a fresh look at storing this rich data archive with a
focus on efficiency, integrity, and fast access, to help improve operations and
use by the community. Though this example, by restriction of scope, applies to
only a fraction of the available data, the new approach can be expanded and
applied to the entire archive.
%------------------------------------------------------------------------------
\section{System Requirements}
The requirements for deploying and running this database project are driven by
the needs of the RDBMS employed. For this project, PostgreSQL (postgres) was the
RDBMS chosen. A full listing of the most up-to-date technical requirements should
be checked at \href{https://www.postgresql.org/}{https://www.postgresql.org/},
but an overview is provided here.

\subsection{Hardware Requirements}
PostgreSQL supports many architectures including x86, ARM, PowerPC, and SPARC.
For development and testing of this project, an x86 Intel Core i7-4790K was
employed.

The database, as implemented at the time of this document, had a total size of
313728 bytes, or just over 300KB. This is the very floor of free memory required
to deploy the given database, but it represents just a small sample of the
available data. A full production database would likely be many GB if the entire
history of the telescope were to be ingested. The system used during development
and test had 24GB of RAM and a 120GB SSD.

\subsection{Software Requirements}
The postgres DBMS is well established and well supported. It will run on Linux,
OSX, and Windows, as well as many other UNIX varieties. Installing PostgreSQL
requires GNU make and an ISO/ANSI C compiler. The versions used for development
were GNU Make 4.1 and GCC 5.4.0, on an Ubuntu 16.04 64-bit system.

\subsection{Functional Requirements}
This application supports the storage and retrieval of HST spectral datasets and
associated metadata. This includes adding and tracking proposals, investigators,
and targets with each new proposal cycle. It also supports adding observations,
their sub-exposures, and associated metadata as they are taken each day. Finally,
it supports statistical investigations and aggregations on the ingested data.
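As an illustration of these three functional areas (proposal entry, daily exposure ingest, and aggregation), the sketch below walks a deliberately cut-down, hypothetical version of the schema through each one. SQLite stands in for PostgreSQL, the tables are simplified, and the sample values are invented for the example; this is not the production DDL.

```python
import sqlite3

# Simplified stand-in tables; names mirror the report, columns are trimmed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE investigator (id INTEGER PRIMARY KEY, lname TEXT);
CREATE TABLE proposal (proposid INTEGER PRIMARY KEY,
                       pi INTEGER REFERENCES investigator(id),
                       title TEXT);
CREATE TABLE exposure (rootname TEXT PRIMARY KEY,
                       proposid INTEGER REFERENCES proposal(proposid),
                       exptime REAL);
""")

# 1. Proposal cycle: track a new investigator and their accepted proposal.
conn.execute("INSERT INTO investigator VALUES (1, 'Kaastra')")
conn.execute("INSERT INTO proposal VALUES (13184, 1, 'Sample title')")

# 2. Daily ingest: add exposures as they are taken.
conn.executemany("INSERT INTO exposure VALUES (?, ?, ?)",
                 [("lc7001b3q", 13184, 120.0), ("lc7001b4q", 13184, 60.0)])

# 3. Aggregation: total observed time per proposal.
row = conn.execute("SELECT proposid, SUM(exptime) FROM exposure "
                   "GROUP BY proposid").fetchone()
print(row)  # (13184, 180.0)
```

The real schema places the observation entity between proposal and exposure; it is omitted here only to keep the sketch short.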
\subsection{Database Requirements}
Postgres is an ORDBMS that has been in use for more than 20 years and has wide
community support. Its status as a reliable free and open-source tool is why it
was chosen for this project. Postgres v9.5.7 was used as the RDBMS during
development, but this is not known to be a hard dependency and the functionality
is likely backwards and forwards compatible with many other versions.

%------------------------------------------------------------------------------
\section{Database Design Description}
\subsection{Design Rationale}
The design for the following ER model comes largely from a top-down approach:
taking the existing, largely flat archive of data and breaking it into smaller
logical units to reduce duplication and increase performance.

The intent of this database is to present the information in as useful a manner
as possible, one that can be extended into many other uses. To this end, as many
entities as possible are non-identifying. This lets each one stand a little
better on its own, and allows useful information retrieval without understanding
another table. Fortunately, this is possible for nearly all tables, with the
notable exception of the detector entity, which needs its parent entity for
unique identification. Most entities also contained a unique attribute without
needing to add a unique id using the DBMS. The exceptions to this were the
investigator, target, and description entities.

The entity with the most keys is the exposure, which maps most closely to the
original source of data. Many of the other entities are chunks of data that
repeat for many exposure occurrences, which is also why it has the most links to
or from other entities.

\subsection{E/R Model}
\subsubsection{Entities}
Investigator is an entity that represents a person with an accepted scientific
proposal to use the telescope.
This entity gives each investigator a unique integer ID, and tracks their
firstname, middlename, and lastname.

Proposal stores an accepted proposal for the telescope. Each proposal has a
unique integer ID (that has been assigned by another system), as well as the
title. A foreign key that references an investigator ID tracks which proposals
were submitted by which investigator. An investigator may have many proposals,
but a proposal may have only one investigator, so the proposal is a child entity
of investigator.

Target is an entity which holds each unique astronomical target that has been
observed. Each has a unique ID, as well as its common name and position in right
ascension and declination (ra and dec). It is important not to use name or
position by themselves as the unique identifier, as objects can move
(e.g. Jupiter) and will need multiple positions, and a particular line of sight
(ra and dec) may contain many different objects. Therefore, name or position
alone cannot uniquely determine the target.

Description holds individual human-given descriptions that can pertain to
various astronomical observations. Each entry has an ID and a description.
Descriptions can be any string description of an observed target or objects in
the field. Examples include, but are not limited to: planet, star, galaxy,
nebula, bright, dim, etc.

The instrument entity holds entries for the scientific instruments that can be
used in an observation. This entity uses the instrument's name as a unique
primary key, and holds pertinent metadata about the instrument such as
sensitivity, manufacturer, installation date, and status.

The exptime\_count entity holds a running total of the cumulative exposure time
that has been observed for a given instrument. This table is populated by a
trigger that is discussed in a later section.

The child entity of instrument is the detector entity. Each scientific
instrument can have multiple detectors.
This entity contains a compound primary key made of the detector name and the
foreign key instrument ID. The name of the detectors is not unique, but the
instrument name + detector name is a unique combination. The entity also
contains detector-level metadata like the number of pixels and detector type.

An exposure entity represents a unique activation of a detector on the
telescope. This entity represents a particular atomic "measurement" and the
settings used. Each exposure has a unique ID and records the date and time of
the observation (MJD), as well as the configuration used: exptime, observation
mode, central wavelength, and optical element in the beampath. It also includes
3 foreign keys that link to the IDs for the instrument, detector, and parent
observation.

An observation entity is a parent to the exposure entity. This entity is needed
to group together many individual exposures and link them to proposals. This
grouping is needed as the observations listed in the proposal may in practice be
broken up into multiple sub-measurements (exposures) due to orbital constraints
on continuous time segments.

The file entity links each exposure to the physical file on disk that contains
measurements and metadata. The primary key is the filename, which must be unique
to avoid confusion, and a foreign key links it back to the exposure it informs.

The reference\_file entity tracks the various permutable calibration reference
files that control how the exposure data is reduced into its final form. Each
reference file can apply to many individual exposures, or none, and each
exposure will need many reference files.

\subsubsection{Relationships}
Observes is a relationship that links targets to proposals. This relationship is
necessary because a proposal may have multiple targets, and a target may be
observed in multiple proposals.

Describes is a relationship that maps targets to descriptions with their
respective IDs.
This relationship is necessary to resolve the many-to-many relationship that
exists between these two entities.

Applies\_to relates reference files and exposures. Many reference files will
apply to multiple exposures, and exposures will need to link to any number of
reference files.

\subsubsection{E/R Diagram}
\begin{figure}[h!]
\caption{ERD in IE notation for the HST spectra database.}
\centering
\includegraphics[width=.9\textwidth]{hst_spectra_erd.png}
\end{figure}

\subsection{Relational Model}
\subsubsection{Data Dictionary}

\hrulefill

APPLIES\_TO entity definition:
\begin{verbatim}
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 obs    | character(9)          |
 ref    | character varying(30) |
Foreign-key constraints:
    "applies_to_obs_fkey" FOREIGN KEY (obs) REFERENCES exposure(rootname)
    "applies_to_ref_fkey" FOREIGN KEY (ref) REFERENCES reference_file(name)
\end{verbatim}

\hrulefill

DESCRIPTION entity definition:
\begin{verbatim}
   Column    |         Type          | Modifiers
-------------+-----------------------+--------------------------------------
 id          | integer               | not null default
                                       nextval('description_id_seq'::regclass)
 description | character varying(80) |
Indexes:
    "description_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "targ_map" CONSTRAINT "targ_map_descripid_fkey"
        FOREIGN KEY (descripid) REFERENCES description(id)
\end{verbatim}

\hrulefill

DETECTOR entity definition:
\begin{verbatim}
 Column  |         Type          | Modifiers
---------+-----------------------+-----------
 name    | character(4)          | not null
 inst    | character(4)          | not null
 npixels | integer               |
 type    | character varying(20) |
Indexes:
    "detector_pkey" PRIMARY KEY, btree (name, inst)
Check constraints:
    "detector_type_check" CHECK (type::text = 'mama'::text OR
        type::text = 'ccd'::text OR type::text = 'xdl'::text OR
        type::text = 'digicon'::text OR type::text = 'cathode'::text)
Foreign-key constraints:
    "detector_inst_fkey" FOREIGN KEY (inst) REFERENCES instrument(name)
Referenced by:
    TABLE "exposure" CONSTRAINT "exposure_detector_fkey"
        FOREIGN KEY (detector, instrument) REFERENCES detector(name, inst)
\end{verbatim}

\hrulefill

EXPOSURE entity definition:
\begin{verbatim}
   Column   |         Type          | Modifiers
------------+-----------------------+-----------
 rootname   | character(9)          | not null
 obsname    | character(9)          |
 instrument | character(4)          | not null
 detector   | character(4)          | not null
 mjd        | double precision      |
 exptime    | double precision      |
 obsmode    | character varying(80) |
 cenwave    | integer               |
 opt_elem   | character(6)          |
Indexes:
    "exposure_pkey" PRIMARY KEY, btree (rootname)
Check constraints:
    "exposure_cenwave_check" CHECK (cenwave > 0)
    "exposure_exptime_check" CHECK (exptime > 0::double precision)
    "exposure_mjd_check" CHECK (mjd > 0::double precision)
    "exposure_obsmode_check" CHECK (obsmode::text = 'ACCUM'::text OR
        obsmode::text = 'TIME-TAG'::text)
Foreign-key constraints:
    "exposure_detector_fkey" FOREIGN KEY (detector, instrument)
        REFERENCES detector(name, inst)
    "exposure_obsname_fkey" FOREIGN KEY (obsname)
        REFERENCES observation(rootname)
Referenced by:
    TABLE "applies_to" CONSTRAINT "applies_to_obs_fkey"
        FOREIGN KEY (obs) REFERENCES exposure(rootname)
    TABLE "file" CONSTRAINT "file_exposure_fkey"
        FOREIGN KEY (exposure) REFERENCES exposure(rootname)
Triggers:
    add_exptime BEFORE INSERT ON exposure
        FOR EACH ROW EXECUTE PROCEDURE log_exptime()
\end{verbatim}

\hrulefill

EXPTIME\_COUNT entity definition:
\begin{verbatim}
   Column   |       Type       | Modifiers
------------+------------------+-----------
 instrument | character(4)     |
 count      | double precision |
Check constraints:
    "exptime_count_count_check" CHECK (count >= 0::double precision)
Foreign-key constraints:
    "exptime_count_instrument_fkey" FOREIGN KEY (instrument)
        REFERENCES instrument(name)
\end{verbatim}

\hrulefill

FILE entity definition:
\begin{verbatim}
  Column  |         Type          | Modifiers
----------+-----------------------+-----------
 name     | character varying(30) | not null
 exposure | character(9)          |
 fpath    | character varying(50) |
Indexes:
    "file_pkey" PRIMARY KEY, btree (name)
Foreign-key constraints:
    "file_exposure_fkey" FOREIGN KEY (exposure) REFERENCES exposure(rootname)
\end{verbatim}

\hrulefill

INSTRUMENT entity definition:
\begin{verbatim}
  Column   |         Type          | Modifiers
-----------+-----------------------+-----------
 name      | character(4)          | not null
 maker     | character varying(40) |
 installed | double precision      |
 active    | boolean               |
Indexes:
    "instrument_pkey" PRIMARY KEY, btree (name)
Check constraints:
    "instrument_installed_check" CHECK (installed > 0::double precision)
Referenced by:
    TABLE "detector" CONSTRAINT "detector_inst_fkey"
        FOREIGN KEY (inst) REFERENCES instrument(name)
    TABLE "exptime_count" CONSTRAINT "exptime_count_instrument_fkey"
        FOREIGN KEY (instrument) REFERENCES instrument(name)
\end{verbatim}

\hrulefill

INVESTIGATOR entity definition:
\begin{verbatim}
 Column |         Type          | Modifiers
--------+-----------------------+-----------------------------------------
 id     | integer               | not null default
                                  nextval('investigator_id_seq'::regclass)
 fname  | character varying(30) |
 mname  | character varying(30) |
 lname  | character varying(30) |
Indexes:
    "investigator_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "proposal" CONSTRAINT "proposal_pi_fkey"
        FOREIGN KEY (pi) REFERENCES investigator(id)
\end{verbatim}

\hrulefill

OBSERVATION entity definition:
\begin{verbatim}
      Table "public.observation"
   Column   |     Type     | Modifiers
------------+--------------+-----------
 rootname   | character(9) | not null
 proposalid | integer      |
Indexes:
    "observation_pkey" PRIMARY KEY, btree (rootname)
Foreign-key constraints:
    "observation_proposalid_fkey" FOREIGN KEY (proposalid)
        REFERENCES proposal(proposid)
Referenced by:
    TABLE "exposure" CONSTRAINT "exposure_obsname_fkey"
        FOREIGN KEY (obsname) REFERENCES observation(rootname)
\end{verbatim}

\hrulefill

PROPOSAL entity definition:
\begin{verbatim}
  Column  |         Type          | Modifiers
----------+-----------------------+-----------
 proposid | integer               | not null
 pi       | integer               | not null
 title    | character varying(80) |
Indexes:
    "proposal_pkey" PRIMARY KEY, btree (proposid)
Check constraints:
    "proposal_proposid_check" CHECK (proposid > 0)
Foreign-key constraints:
    "proposal_pi_fkey" FOREIGN KEY (pi) REFERENCES investigator(id)
Referenced by:
    TABLE "observation" CONSTRAINT "observation_proposalid_fkey"
        FOREIGN KEY (proposalid) REFERENCES proposal(proposid)
\end{verbatim}

\hrulefill

REFERENCE\_FILE entity definition:
\begin{verbatim}
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 name   | character varying(30) | not null
 type   | character varying(30) |
Indexes:
    "reference_file_pkey" PRIMARY KEY, btree (name)
Referenced by:
    TABLE "applies_to" CONSTRAINT "applies_to_ref_fkey"
        FOREIGN KEY (ref) REFERENCES reference_file(name)
\end{verbatim}

\hrulefill

DESCRIBES entity definition:
\begin{verbatim}
  Column   |  Type   | Modifiers
-----------+---------+-----------
 targetid  | integer |
 descripid | integer |
Foreign-key constraints:
    "targ_map_descripid_fkey" FOREIGN KEY (descripid)
        REFERENCES description(id)
    "targ_map_targetid_fkey" FOREIGN KEY (targetid)
        REFERENCES target(id)
\end{verbatim}

\hrulefill

TARGET entity definition:
\begin{verbatim}
 Column |         Type          | Modifiers
--------+-----------------------+------------------------------------
 id     | integer               | not null default
                                  nextval('target_id_seq'::regclass)
 name   | character varying(30) |
 ra     | double precision      |
 dec    | double precision      |
Indexes:
    "target_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "targ_map" CONSTRAINT "targ_map_targetid_fkey"
        FOREIGN KEY (targetid) REFERENCES target(id)
\end{verbatim}

\hrulefill

\subsubsection{Integrity Rules}
Integrity rules for this database are enforced on the tables themselves, through
primary and foreign keys, unique/not null constraints, and specific data
constraints. Much of the concern with this database is keeping referential
integrity between all the linked entities.
A strong set of primary and foreign keys keeps this together by ensuring that
the linkages will exist with each insertion.

Additionally, specific column constraints were used throughout for individual
data integrity. Typical examples constrain columns to a few known values, as in
the detector.type attribute. Other constraints simply make sure the data is
within known ranges, like enforcing non-negative values for exposure.exptime.
Since a negative exposure time is physically impossible, and this attribute is
often used in calculations, catching an improper value early is very valuable.

\subsubsection{Operational Rules}
Operational rules are rather permissive for this application, as it is designed
to be flexible and permit the aggregation of disparate, but linked, data. As
such, operations are permitted so long as they do not violate the data integrity
rules built into the tables. For example, a proposal may not be added into the
system without either creating a new investigator or linking to an existing one,
due to the foreign key constraint. Similarly, an exposure cannot be inserted
without a linked observation, which in turn cannot be inserted without an
existing proposal. The hierarchy of keys and constraints enforces the
operational rules, but anything else is permitted.

\subsubsection{Operations}
A common operation that will be performed on this database is the insertion of
the proposals from a recent round of approvals. This operation involves
inserting any new investigators, inserting new proposals, then reading the
target table and linking or inserting as needed.

\subsection{Security}
For this database, security is a minor concern. All data used is public-domain,
so access control is not required. In addition, the database is an abstract
layer on top of existing data products, designed to be more efficient and easy
to use, but it is not the sole point of truth. Therefore, the database can
easily be re-created from the source should any data be lost.
In addition, the database is not currently hosted, so preventing malicious
access is a minor concern.

\subsection{Database Backup and Recovery}
Backup will be accomplished through the PG\_DUMP task. This is a utility
included with PostgreSQL that dumps the entire database contents into plaintext
suitable for re-creating the entire state of the database. This backup is
performant at these small scales, but has not been tested with a larger
database. However, given the infrequent additions to the database in a
production environment, a low-performance dump will not hamper operations.

\subsection{Using Database Design or CASE Tool}
Entities and the ERD were created with \href{https://draw.io}{draw.io}, and
database administration was done through the command-line tools provided by
PostgreSQL. While many more fully-featured tools may have made these tasks
easier, I prefer to learn directly with the tools themselves. Activities like
installing the DBMS, creating users, adding tables, inserting data, and all
other activities were done by hand.

\subsection{Other Possible E/R Relationships}
An additional E/R relationship that could be useful to implement would be to add
an institution entity and a works\_for relationship to the existing investigator
entity. This would allow additional reporting, like which institutions around
the world were more or less successful at proposing in a given year.

I also considered a completely different ERD where the tables much more closely
resembled the existing file structures as produced by the mission. This produced
an ERD centered almost entirely on observations, and suffered from a great deal
of data redundancy and difficult queries.
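The design described above, including its trigger-maintained aggregate, can be prototyped outside PostgreSQL. The sketch below re-creates a cut-down version of the instrument, exptime\_count, and exposure tables and the add\_exptime trigger in SQLite (a stand-in engine used only for illustration; the sample rows are invented), showing the running total update on each insert.

```python
import sqlite3

# Cut-down stand-in for the report's schema, so the trigger behaviour
# can be exercised without a PostgreSQL server.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite FKs are off by default
conn.executescript("""
CREATE TABLE instrument (
    name TEXT PRIMARY KEY
);
CREATE TABLE exptime_count (
    instrument TEXT REFERENCES instrument(name),
    count REAL CHECK (count >= 0)
);
CREATE TABLE exposure (
    rootname TEXT PRIMARY KEY,
    instrument TEXT NOT NULL REFERENCES instrument(name),
    exptime REAL CHECK (exptime > 0)
);
-- SQLite analogue of the log_exptime() procedure + add_exptime trigger
CREATE TRIGGER add_exptime BEFORE INSERT ON exposure
BEGIN
    UPDATE exptime_count
       SET count = count + NEW.exptime
     WHERE instrument = NEW.instrument;
END;
""")

conn.execute("INSERT INTO instrument VALUES ('COS')")
conn.execute("INSERT INTO exptime_count VALUES ('COS', 0.0)")
# Each exposure insert fires the trigger and bumps the running total.
conn.execute("INSERT INTO exposure VALUES ('lc7001b3q', 'COS', 120.5)")
conn.execute("INSERT INTO exposure VALUES ('lc7001b4q', 'COS', 79.5)")

total = conn.execute(
    "SELECT count FROM exptime_count WHERE instrument = 'COS'").fetchone()[0]
print(total)  # 200.0
```

SQLite folds the procedure body directly into the trigger, whereas PostgreSQL separates the plpgsql function from the trigger declaration; the aggregate-on-insert behaviour is the same.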
%------------------------------------------------------------------------------
\section{Implementation Description}
\subsection{Data Dictionary}
\begin{verbatim}
spec=# \dt+
                          List of relations
 Schema |      Name      | Type  | Owner  |    Size    | Description
--------+----------------+-------+--------+------------+-------------
 public | applies_to     | table | justin | 120 kB     |
 public | description    | table | justin | 8192 bytes |
 public | detector       | table | justin | 8192 bytes |
 public | exposure       | table | justin | 8192 bytes |
 public | exptime_count  | table | justin | 40 kB      |
 public | file           | table | justin | 40 kB      |
 public | instrument     | table | justin | 8192 bytes |
 public | investigator   | table | justin | 8192 bytes |
 public | observation    | table | justin | 8192 bytes |
 public | proposal       | table | justin | 8192 bytes |
 public | reference_file | table | justin | 40 kB      |
 public | targ_map       | table | justin | 8192 bytes |
 public | target         | table | justin | 8192 bytes |
\end{verbatim}

\subsection{Advanced Features}
This database contains stored functions and triggers to perform automatic data
aggregation as inserts occur. The stored procedure updates a row in a table that
stores the total accumulated exposure time for a given instrument. The trigger
causes the procedure to run each time a new exposure is added to the exposure
table. This way, when new data is taken, the exposure time is automatically
added to a running total in another entity, and the aggregate stats are always
up-to-date. The features are defined as follows.
\begin{verbatim}
-- stored procedure
CREATE FUNCTION log_exptime() RETURNS trigger AS $log_exptime$
    BEGIN
        UPDATE exptime_count
            SET count = (count + NEW.exptime)
            WHERE instrument = NEW.instrument;
        RETURN NEW;
    END;
$log_exptime$ LANGUAGE plpgsql;

-- DB trigger
CREATE TRIGGER add_exptime BEFORE INSERT ON exposure
    FOR EACH ROW EXECUTE PROCEDURE log_exptime();
\end{verbatim}

\subsection{Queries}
\subsubsection{Describing a target}
This query would be very common, as astronomical names are not terribly
descriptive as to what the source is. Not only that, but a field of view can be
very large, so multiple different objects may be visible. Therefore, mapping an
individual target through all available descriptions is useful to understand
what is being looked at.

\begin{verbatim}
spec=# SELECT description FROM target
       JOIN targ_map ON target.id=targ_map.targetid
       JOIN description ON descripid=description.id
       WHERE name = 'NGC-7469';
 description
-------------
 NLR
 BLR
 SEYFERT
 NUCLEUS
(4 rows)
\end{verbatim}

\subsubsection{Find applicable reference files for an exposure}
Each exposure has been calibrated from the raw data down to a usable form. It is
common to need to back-track and understand what was applied to the data, so
mapping an exposure to all applicable reference material is useful to fully
understand the data.
\begin{verbatim}
spec=# SELECT ref FROM exposure
       JOIN applies_to ON rootname=obs
       WHERE rootname='lc7001b3q';
         ref
----------------------
 xab1551cl_flat.fits
 u1t1616ql_wcp.fits
 1811831fl_spwcs.fits
 14o2013ql_xwalk.fits
 14o2013rl_ywalk.fits
 0561933ll_tds.fits
 lc7001050_asn.fits
 0bn1606sl_disp.fits
 zas1615jl_spot.fits
 1811827rl_lamp.fits
 x6q17586l_1dx.fits
 wc318317l_pha.fits
 x1u1459il_brf.fits
 0561933jl_phot.fits
 yae1249sl_bpix.fits
 s7g1700gl_dead.fits
 zbn1927gl_gsag.fits
 x1u1459gl_geo.fits
\end{verbatim}

\subsubsection{Which investigator has the most observing time}
This query may be of use for general stats, and/or to help in investigating bias
among proposal selection committees.

\begin{verbatim}
spec=# SELECT fname, mname, lname, SUM(exptime) FROM investigator
       JOIN proposal ON id=pi
       JOIN observation ON proposid=observation.proposalid
       JOIN exposure ON observation.rootname=obsname
       GROUP BY fname, mname, lname
       ORDER BY sum DESC;
   fname   | mname |  lname   |     sum
-----------+-------+----------+--------------
 Jelle     |       | Kaastra  |        23615
 Nahum     |       | Arav     |        23085
 Francesco |       | Palla    | 13856.199951
 Reginald  | J.    | Dufour   |       2634.5
 Guy       |       | Worthey  |          165
 Charles   | R.    | Proffitt |          0.1
(6 rows)
\end{verbatim}

\subsubsection{Rank investigators by number of successful proposals}
Similar to the query above, this instead ranks all investigators by the number
of successful proposals. This would be used as another metric for general stats
or helping to understand bias/demographics/usage/etc.

\begin{verbatim}
SELECT fname, mname, lname, COUNT(*) FROM investigator
JOIN proposal ON id=pi
GROUP BY fname, mname, lname
ORDER BY count DESC;
   fname   | mname |  lname   | count
-----------+-------+----------+-------
 Nahum     |       | Arav     |     2
 Guy       |       | Worthey  |     1
 Jelle     |       | Kaastra  |     1
 Charles   | R.    | Proffitt |     1
 Francesco |       | Palla    |     1
 Reginald  | J.    | Dufour   |     1
(6 rows)
\end{verbatim}

\subsubsection{Find which instrument is most used}
This is a very common query to determine which instrument is most widely used by
raw exposure time. This information is important for understanding the needs of
the community, as well as tracking the degradation of the instrument being used.

\begin{verbatim}
spec=# SELECT * FROM exptime_count ORDER BY count DESC LIMIT 1;
 instrument | count
------------+-------
 COS        | 46700
(1 row)
\end{verbatim}

%------------------------------------------------------------------------------
\section{CRUD Matrix}
\begin{center}
\begin{tabular}{||c c c c c c||}
\hline
 & applies\_to & description & detector & exposure & exptime\_count \\ [0.5ex]
\hline\hline
F1 & & & & & \\
\hline
F2 & R & & R & C & U \\
\hline
F3 & & & & D & U \\
\hline
\end{tabular}
\end{center}

\begin{center}
\begin{tabular}{||c c c c c c c||}
\hline
 & file & instrument & investigator & observation & proposal & reference\_file \\ [0.5ex]
\hline\hline
F1 & & & R & & C & \\
\hline
F2 & C & R & & R & & RC \\
\hline
F3 & D & & & & & \\
\hline
\end{tabular}
\end{center}

\subsection{List of Entity Types}
\begin{itemize}
\item applies\_to
\item description
\item detector
\item exposure
\item exptime\_count
\item file
\item instrument
\item investigator
\item observation
\item proposal
\item reference\_file
\item targ\_map
\item target
\end{itemize}

\subsection{List of Functions}
\begin{itemize}
\item F1: Insert a new proposal.
\item F2: Insert a new exposure.
\item F3: Delete an exposure.
\end{itemize}

%------------------------------------------------------------------------------
\section{Concluding Remarks}
This project was an interesting exploration into how to re-organize and optimize
an existing collection of data into a new database format. I learned much from
this project with regard to actual SQL capabilities and DBMS peculiarities.
The capabilities of triggers and stored procedures were new to me, but
their usefulness was immediately obvious, especially for a project like
this, where many derived attributes can be calculated on the fly as rows
are inserted.  This can save outside users from having to learn too much
syntax, if common calculations are made automatic.  It can also help
performance in a large database by ``pre-calculating'' certain things that
are more intensive to compute.  Unfortunately, I didn't implement very many
in this project, but that would be a key expansion in a follow-up on this
topic.

PostgreSQL is a powerful database, and I found no issues when using it for
typical SQL such as INSERT, UPDATE, SELECT, etc.  However, I don't like the
built-in DBA tools as much.  Their syntax breaks sharply from typical SQL,
and it feels far more like switching languages than the corresponding tasks
do in tools like MySQL or SQLite.  Though it's good to be exposed to other
tools, in the future I will likely stick to MySQL where possible, as I
vastly prefer its DBA syntax.

\pagebreak
\appendix
%------------------------------------------------------------------------------
\section{DDL, INSERT, SELECT Statements}
The DDL, INSERT, and SELECT statements necessary to create the database are
over 2000 lines long.  Including them would make this report unnecessarily
long, so they have been hosted in a public GitHub repository for viewing.
They can be found at
\href{https://github.com/justincely/hst\_db\_project}{https://github.com/justincely/hst\_db\_project}.

\end{document}
\chapter*{Information about this report}

\vspace{\fill}

\textbf{Contact information}

\begin{tabularx}{\textwidth}{N{2.5cm}X}
	Author: & \AuthorFirstName \AuthorLastName \\
	 & MSE Student \\
	 & HES-SO//Master \\
	 & Switzerland \\
	Email: & \email{\AuthorEmail}
\end{tabularx}

\vspace{\fill}

\textbf{Declaration of honor}

{\renewcommand{\arraystretch}{2}
\begin{tabularx}{\textwidth}{N{2.5cm}X}
	& I, the undersigned, \Author, hereby declare that the submitted work
	is the result of my own personal work. I certify that I have not
	resorted to plagiarism or other forms of fraud. All sources of
	information used and all quotations from other authors have been
	clearly indicated. \\
	Place, date: & \underline{\hspace{7cm}} \\
	Signature: & \underline{\hspace{7cm}}
\end{tabularx}
}

\vspace{\fill}

\textbf{Validation}

Accepted by the HES-SO//Master (Switzerland, Lausanne) on a proposal from:

\vspace{0.5cm}

\Advisor, Thesis project advisor

\Expert, \ExpertLab, Main expert

\vspace{1cm}

Place, date: \underline{\hspace{8cm}}

\vspace{3cm}

{
\renewcommand{\arraystretch}{1.5}
\begin{tabularx}{\textwidth}{X X}
	\Advisor & \Dean\\
	Advisor & Dean, HES-SO//Master\\
\end{tabularx}
}
\section{Applications: the \glsentrylong{paf}}
\label{sec:paf}

An application that interacts with the blockchain is some kind of program
that runs on a user's computer. But what does that application actually
do?

For starters, \glspl{app} that make use of the ledger's scripting
functionality will need to create appropriate \gls{zerepoch-core}
programs. We discuss how we enable this for Haskell applications in
\cref{sec:zerepoch-tx}.

However, even very simple applications have some clear needs. Consider one
of the simplest possible \glspl{app}:

\paragraph{Metadata Poster}

Metadata Poster does nothing except occasionally submit transactions to
the chain. The transactions which it submits do not move any substantive
amount of value; their purpose is simply to post a transaction to the
chain with some metadata payload that can later be checked by another
application.\footnote{
  This may seem like a silly example, but many real proposed applications
  in supply-chain management are essentially just Metadata Poster!
}

\medskip
Firstly, Metadata Poster needs to communicate with a number of other
components:
\begin{itemize}
\item It needs to submit transactions, so it must talk to a \gls{node} or
  a \gls{wallet-backend}.
\item It needs to acquire funds to pay fees, so it must talk to a
  \gls{wallet-backend}.
\item It must talk to its users, so it must expose some kind of API or
  talk to a graphical interface like a \gls{wallet-frontend}.
\end{itemize}

Moreover, Metadata Poster may care about some of the rollback issues
discussed in \cref{req:app-rollback}.

Finally, there are a number of operational issues common to \glspl{app}:
\begin{itemize}
\item It may need to be distributed to end users and receive updates.
\item It may need to synchronize its state between multiple instances
  (e.g. desktop and mobile).
\item It may need to have its state backed up by systems administrators.
\item It may need to provide logging and monitoring for production usage.
\end{itemize} It is clear that there is a lot of complexity in the \gls{off-chain} part of a \gls{app}. Enough that we probably cannot simply leave this in the hands of application developers. The \glsfirst{paf} is our response to this problem: a disciplined framework for writing \glspl{app} that eases many of these problems. \subsection{Requirements} \begin{requirement}[Backups] \label{req:app-backups} \Glspl{app} need to be easy to back up, if they are to be used in production. \end{requirement} \begin{requirement}[Monitoring] \label{req:app-monitoring} \Glspl{app} need to be easy to monitor, if they are to be used in production. \end{requirement} \begin{requirement}[Synchronization] \label{req:app-synch} It should be possible to synchronize the state of an \gls{app} between instances on multiple machines, e.g. a mobile and a desktop instance. This is quite important for consumer-type users. \end{requirement} \begin{requirement}[Reproducibility] \label{req:app-reproducibility} \Gls{app} behaviour should be reliable and reproducible on different environments and devices. For example, \glspl{app-inst} that have had their state synchronized (\cref{req:app-synch}) should behave identically. \end{requirement} \begin{requirement}[Distribution] \label{req:app-dist} \Glspl{app} need to be distributed to users somehow. Different users may have different needs here, for example: \begin{itemize} \item A consumer user may want to download an \gls{app} from a centrally managed ``app store'' in their \gls{wallet-frontend}. \item A business user may want to download a native application via their usual package manager, or directly from the author. \end{itemize} \end{requirement} \begin{requirement}[Flexible, self-describing application endpoints] \label{req:app-client-interfaces} \Glspl{app} will want to expose endpoints to users which trigger the functionality of the application. 
These need to be accessible to both server-side headless consumers, and also to graphical \glspl{wallet-frontend} which mediate interaction with end-users. Ideally, these endpoints would be \emph{self-describing} so that we can have at least basic generic interfaces in e.g. a \gls{wallet-frontend}. \end{requirement} \begin{requirement}[Chain data access] \label{req:app-chain-data} \Glspl{app} need to access some historical data about the chain. In particular, due to \cref{req:ledger-utxo-size} the \glspl{datum} for \glspl{script-output} are not stored in the \gls{utxo} set, but rather in the transaction that creates the output. \Glspl{app} will need to know about these \glspl{datum}, so we must provide some way of tracking this information from the chain and making it available. \todompj{Should link to wherever we discuss the whole issue with storing datums in detail.} \end{requirement} \begin{requirement}[Rollback resistance] \label{req:app-rollback} Rollbacks can cause serious problems for agents (not just applications) trying to take conditional actions. It would be nice if we could mitigate these for \glspl{app}, but that may not always be possible. Here are two scenarios we might care about. \paragraph{Incoherent choice} \label{para:incoherent-choice} Suppose that Alice promises to send 10 \gls{bcc} to Bob, provided that Bob sends 20 \gls{bcc} to Carol (perhaps Alice is holding Bob's collateral for a loan from Carol, which Bob is now repaying). The following events occur: \begin{enumerate} \item Bob pays 20 \gls{bcc} to Carol in transaction T1. \item Alice observes T1, and proceeds to pay 10 \gls{bcc} to Bob in transaction T2. \item A rollback occurs. After the rollback, T1 and T2 go back into the mempool, but T1 is now invalid. \item T2 alone is reapplied. \end{enumerate} As a result, Alice ends up making the payment to Bob without Bob paying Carol, so Bob gets away with all the money! 
Alice ends up committed to an action that she would only have chosen to
take under the old history of the chain, and which she would not have
chosen under the new history.

\paragraph{Incomplete reapplication}
\label{para:incomplete-reapplication}

Suppose as a variant of the previous scenario that Alice promises to send
10 \gls{bcc} to both Bob and Carol, provided that some off-chain event
happens. The following events occur:
\begin{enumerate}
\item The off-chain event occurs.
\item Alice pays 10 \gls{bcc} to Bob in transaction T1.
\item Alice pays 10 \gls{bcc} to Carol in transaction T2.
\item A rollback occurs. After the rollback, T1 and T2 go back into the
  mempool, but T1 is now invalid.
\item T2 alone is reapplied.
\end{enumerate}
As a result, Alice ends up only paying Carol and not Bob. Alice ends up
\emph{partially} taking an action that she still wants to take, and would
need to reconstruct the missing parts to get back to the state she wants
to be in.
\end{requirement}

\begin{requirement}[Testing and emulation]
\label{req:app-emulation}
Users need to be able to test their \glspl{app} in an environment that
mirrors the real one as closely as possible. However, the real environment
is very complex, featuring a multi-agent, distributed system with a number
of tricky behaviours: network issues, rollbacks, etc.

It is therefore desirable to provide some kind of emulated testing harness
which users can use to test their \glspl{app} locally, but which allows
control and simulation of real issues.

Moreover, this is important for us during development, as it allows us to
mock up the system that we expect without having to wait for other
components to be ready.
\end{requirement}

\subsection{Lifecycle of a \glsentrytext{app}}

The lifecycle of an \gls{app} is as follows:
\begin{itemize}
\item \Glspl{app} are authored and compiled with the \gls{zerepoch-sdk}
  using the \gls{app-api} for interacting with other components.
\item \Glspl{app} are distributed via some means to be decided, but
  manually in the interim.
\item \Glspl{app} are installed into an instance of the \gls{pab}. The
  \gls{pab} just knows about the compiled \gls{app-exe} provided by the
  \gls{app}.
\item An \gls{app} can be instantiated into an \gls{app-inst} by running
  the \gls{app-exe} and providing any parameters that it needs. There can
  be multiple \glspl{app-inst} per \gls{app}, and they are managed by the
  \gls{pab}.
\item The \gls{pab} manages and handles the requirements of the
  \gls{app-inst} throughout its lifecycle, including interaction with
  external clients such as \glspl{wallet-frontend}.
\end{itemize}
The major component here is the \gls{pab}.

\subsection{The \glsentrylong{pab}}
\label{sec:pab}

\fbox{
  \begin{minipage}{\textwidth}
    WARNING: this component is under heavy development, so this will
    likely evolve and may not represent the current state of things.
  \end{minipage}
}
\medskip

A key component of the \gls{paf} is the \glsfirst{pab}. This is a backend
service (like the \gls{wallet-backend}) that intermediates between
\glspl{app}, the \gls{node}, the \gls{wallet-backend}, and users
(including the \gls{wallet-frontend}).

The \gls{pab} will be run in contexts similar to those of the
\gls{wallet-backend}, e.g. backing a graphical user wallet (e.g.
\gls{klarity}), or on a server that runs \glspl{app} as part of a larger
system.

The purpose of the \gls{pab} is to:
\todompj{Do this in prose?
Also all the provisions here should be expanded and moved to requirements} \begin{itemize} \item Provide a standardized environment for \glspl{app} to run in (\cref{req:app-reproducibility,req:app-monitoring}) \item Provide disciplined state management (\cref{req:app-backups,req:app-synch,req:app-rollback}) \item Present discoverable interfaces to the external clients (\cref{req:app-client-interfaces}) \item Track information from the chain for use by contracts (\cref{req:app-chain-data}) \item Work in an emulated environment (\cref{req:app-emulation}) \end{itemize} The \gls{pab} is a series of components which produce/consume events, and a message bus. Some of the components have additional complexity, e.g. the application management component needs to manage the state of \glspl{app-inst}. \subsubsection{Node client} The \gls{pab} needs to talk to the \gls{node}, primarily because it needs to populate the \gls{chain-index}, but it also needs to watch the stream of incoming transactions and rollbacks, and notify the \glspl{app-inst} of changes to transactions that they are interested in. \subsubsection{\Glsentrytext{wallet-backend} client} The \gls{pab} needs to talk to the \gls{wallet-backend} for a number of things: \begin{itemize} \item Coin selection/transaction balancing \item Transaction signing and submission \item Address creation \end{itemize} You might think that since the \gls{pab} has a node client itself, it could do its own transaction submission, and only rely on the \gls{wallet-backend} for signing. However, transactions made by the \gls{pab} will likely use outputs ``owned'' by the \gls{wallet-backend} (e.g. those selected by coin selection from the user's outputs). Hence it is important that the \gls{wallet-backend} knows about such outputs, so that it does not attempt to spend them somewhere else. 
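To make the delegated coin-selection step concrete, here is a minimal,
hypothetical sketch of largest-first coin selection. It is purely
illustrative (the function name, the largest-first strategy, and the
plain-integer UTxO values are assumptions for this sketch; it is not the
\gls{wallet-backend}'s actual algorithm): given the values of a user's
unspent outputs and a target amount, it picks outputs until the target is
covered and computes the change.

```python
def select_coins(utxos, target):
    """Pick unspent output values (largest first) until `target` is covered.

    Returns the selected values and the change; raises if the funds held
    in `utxos` cannot cover `target`. Illustrative sketch only.
    """
    selected, total = [], 0
    for value in sorted(utxos, reverse=True):
        if total >= target:
            break
        selected.append(value)
        total += value
    if total < target:
        raise ValueError("insufficient funds")
    return selected, total - target  # outputs to spend, change to return
```

The point of routing this through the \gls{wallet-backend} is exactly the
one made above: the backend then knows which outputs are earmarked for the
transaction and will not spend them elsewhere.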
\subsubsection{Concurrency}

\Glspl{app-inst} managed by the \gls{pab} spend most of their time waiting
for changes to the blockchain, user input, or the passage of time. When
they are not waiting, they are making requests to services managed by the
\gls{pab}, for example the \gls{chain-index} or the \gls{wallet-backend}.
% In addition, the persistence story for \glspl{app-inst} involves
% persisting their incoming events, so this is a good fit.

We are currently using an event-sourced architecture here. However, we
plan to switch to a simpler database model.
% We hope that this will make backups and synchronization easier
% (\cref{req:app-backups,req:app-synch}).

\subsubsection{Application management}

\Glspl{app-inst} need to be managed, created, destroyed, fed with events,
etc.
\begin{itemize}
\item Create \glspl{app-inst}
\item Instantiate and run \glspl{app-exe} in a sandbox
\item Handle communication with the \gls{app-exe}
\item Mediate requests to \glspl{pab-services} by the \gls{app-inst}
\item Manage/dump/load \glspl{app-inst} state
\item Create/destroy \glspl{app-inst}
\item Handle rollbacks
\end{itemize}

\subsubsection{\Glsentrytext{chain-index}}

Applications need to access \glspl{datum} for outputs (see
\cref{req:app-chain-data}), so we need some kind of system that monitors
the chain and records (at least) the \glspl{datum}.

\subsubsection{Client interface}

The client interface exists for external clients (other programs),
including graphical \glspl{wallet-frontend}, to talk to. It should expose
some of the application endpoints and the \gls{app-inst} management
functionality.

\subsubsection{Logging and monitoring}

To satisfy \cref{req:app-monitoring}.

\subsection{Emulators}

In order to satisfy \cref{req:app-emulation}, we need to write emulators
for quite a number of components. At present, we have (or expect to have)
emulators for:
\begin{itemize}
\item The \gls{node} using our ledger extensions. In the long run we
  should be able to use the real Charles \gls{node}.
\item The parts of the \gls{wallet-backend} that we need. In the long run we will be able to use the real \gls{wallet-backend}. \item Basic \gls{wallet-frontend} functionality, such as displaying balances and interacting with \glspl{app-inst}. In the long run we \emph{might} be able to use the real \gls{wallet-frontend}, but this seems unlikely as it is quite heavyweight. Having our own component here has the advantage that we can reuse it in the \gls{zerepoch-playground}. \end{itemize} We also need libraries to bind all of these into an overall, multi-agent simulation, and to allow users to write tests that exercise particular series of events in this simulation. \subsection{The \glsentrytext{zerepoch-playground}} \label{sec:zerepoch-playground} The \gls{zerepoch-playground} provides a Web environment for getting started with the \gls{zerepoch-platform}. The authoring experience in the \gls{zerepoch-playground} is fairly limited (one file only), but it has the best support for specifying ad-hoc scenarios and visualizing the results. Over time we hope to unify the experiences of working locally and working in the \gls{zerepoch-playground}, by: \begin{itemize} \item Improving the authoring experience in the \gls{zerepoch-playground} (multiple files etc.) \item Improving the visualization experience locally (sharing components with the \gls{zerepoch-playground}) \item Allowing distribution of simple \glspl{app} directly from the \gls{zerepoch-playground}. \end{itemize} \subsection{Application design} \label{sec:application-design} \todompj{Talk about state machines and our ideas for handling rollbacks.} \glspl{app} are distributed applications whose state is spread across multiple processes. One of those processes is the blockchain, or (operationally) the set of Bcc \glspl{node} that verify transactions. Here the state of the \gls{app} takes the form of \glspl{script-output}. 
There is no one-to-one correspondence between \glspl{app} and script
outputs, or even \glspl{address}.

\subsubsection{On-chain}

To reason about the behavior of the on-chain parts of \glspl{app} we use a
type of state machine called constraint-emitting machines (\gls{cem}):
state machines that produce constraints on the next transition at every
step. This approach has been published in
\cite{DBLP:conf/isola/Chakravarty0MMM20a}. The \gls{zerepoch-sdk} offers
support for writing \glspl{cem} in Haskell.

An advantage of writing \glspl{script} as \glspl{cem} over directly
writing the \gls{validator} function in Haskell is that the constraints
can be used not only to verify the spending transaction on-chain, but also
to construct it off-chain. Building a transaction that spends an output
with a hand-written \gls{validator} often involves code that is very
similar, but not quite identical, to the validator code itself. With
\glspl{cem} we can capture exactly this overlap and reduce duplication.

\subsubsection{Off-chain}

\Glspl{app} react to events that happen either on the blockchain or
outside the \gls{zerepoch-platform}. The following types of events can be
reacted to:
\begin{itemize}
\item Wall clock time progresses.
\item The status of a transaction changes as a result of transaction
  validation or a rollback.
\item The set of unspent outputs at an address changes as a result of a
  transaction status change.
\item Input is provided to the \gls{app} from outside the system.
\end{itemize}
The meaning of these interactions is described by the following Petri
nets.

\paragraph{Slot change}

For each slot $s$ there is a place $p_s$. A \emph{clock} transition $c_s$
takes a token from $p_s$ and places it in $p_{s+1}$ to signal that slot
$s + 1$ has begun. Any other transition that removes a token from $p_s$ is
expected to put it back immediately, so that there is always exactly one
token in $p_s$ after it has been filled for the first time.
See \ref{fig:petri-net-time} for an illustration.

\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto]
% Styles are from https://texample.net/tikz/examples/nodetutorial/
\tikzstyle{place-time}=[circle,thick,draw=blue!75,fill=black!20,minimum size=7mm]
\tikzstyle{transition}=[rectangle,thick,draw=black!75, fill=black!20,minimum size=4mm]

\node [place-time,tokens=1] (p1) [label=above:$p_1$] {};
\node [transition] (t1) [right of=p1,label=above:$c_1$] {};
\node [place-time] (p2) [right of=t1,tokens=0,label=above:$p_2$] {};
\node [transition] (t2) [right of=p2,label=$c_2$] {};
\node [place-time] (pn) [right of=t2,tokens=0,label=above:$p_n$] {};

\path (p1) edge [->] (t1);
\path (t1) edge [->] (p2);
\path (p2) edge [->] (t2);
\path (t2) edge [->,dashed] (pn);
\end{tikzpicture}
\caption{Petri net modeling the passage of time as observed by \glspl{app}}
\label{fig:petri-net-time}
\end{figure}

\paragraph{Transaction status change}

The status of a transaction changes multiple times after it has been sent
to the node. Transactions start out in the node's \emph{mempool}. Then
their status changes to \emph{tentatively confirmed} or to
\emph{rejected}. Finally, a transaction that is tentatively confirmed can
revert back to \emph{mempool}, or it can become \emph{permanently
confirmed} when enough blocks have been added to make it irreversible.

For each of the four states of a transaction there is one place in the
Petri net, as shown in \ref{fig:petri-net-txn}.
% TODO: Should probably use colored tokens here for different transactions
% that we can distinguish.
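As a cross-check of this transition structure, the four statuses and their
legal moves can be written down directly. The following Python sketch is
purely illustrative (it is not part of the \gls{zerepoch-platform}; the
function and table names are invented for this example):

```python
# Legal status transitions from the text: a mempool transaction is either
# validated or rejected; a tentatively confirmed one can be rolled back
# into the mempool or become permanently confirmed ("committed"); the two
# terminal states admit no further moves.
TRANSITIONS = {
    "mempool":   {"confirmed", "invalid"},
    "confirmed": {"mempool", "committed"},
    "committed": set(),
    "invalid":   set(),
}

def advance(status, new_status):
    """Move a transaction to new_status, rejecting impossible transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status
```

For example, `advance("confirmed", "mempool")` models a rollback, while
trying to leave the `committed` state raises an error, mirroring
irreversibility.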
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto]
% Styles are from https://texample.net/tikz/examples/nodetutorial/
\tikzstyle{place-status}=[circle,thick,draw=red!75,fill=black!20,minimum size=7mm]
\tikzstyle{transition}=[rectangle,thick,draw=black!75, fill=black!20,minimum size=4mm]

\node [place-status,tokens=1] (mempool) [label=left:$\mathsf{mempool}$] {};
\node [transition] (confirm) [below right of=mempool] {};
\node [transition] (rollback) [above right of=mempool,label=above:$\mathsf{rollback}$] {};
\node [place-status] (confirmed) [below right of=rollback,tokens=0,label=below right:$\mathsf{confirmed}$] {};
\node [transition] (commit) [right of=confirmed] {};
\node [place-status] (committed) [right of=commit,tokens=0,label=above:$\mathsf{committed}$] {};
\node [transition] (reject) [below of=mempool] {};
\node [place-status] (invalid) [left of=reject,label=$\mathsf{invalid}$] {};

\path (mempool) edge [->] (confirm);
\path (confirm) edge [->] (confirmed);
\path (confirmed) edge [->] (rollback);
\path (rollback) edge [->] (mempool);
\path (confirmed) edge [->] (commit);
\path (commit) edge [->] (committed);
\path (mempool) edge [->] (reject);
\path (reject) edge [->] (invalid);
\end{tikzpicture}
\caption{Petri net for the status of transactions}
\label{fig:petri-net-txn}
\end{figure}

\paragraph{Address change}

The set of unspent outputs at an address is modified by transactions that
spend and produce outputs. Therefore, whenever the status of a transaction
changes, the status of its inputs and outputs changes also. We represent
the outputs at each address with two Petri nets, one for unspent outputs
and one for spent outputs. There is one place each for outputs that are in
the mempool, tentatively confirmed, permanently confirmed, or rejected.
% TODO: Think about how to do it properly. What level of detail do we need
% here? Maybe we don't need an extra petri net (address change is just a
% function of tx change). Or maybe we should include tx dependencies via
% their outputs as well.

\paragraph{Endpoint}

Users and other applications may call endpoints on the \gls{app}.
Endpoints are places $e_1, \ldots, e_n$ in the Petri net (see
\ref{fig:petri-net-endpoint}).

\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto]
% Styles are from https://texample.net/tikz/examples/nodetutorial/
\tikzstyle{place-endpoint}=[circle,thick,draw=yellow!75,fill=black!20,minimum size=7mm]
\tikzstyle{transition}=[rectangle,thick,draw=black!75, fill=black!20,minimum size=4mm]

\node [place-endpoint,tokens=1] (ep1) [label=left:$e_1$] {};
\node [place-endpoint] (ep2) [right of=ep1,label=below right:$e_2$] {};
\end{tikzpicture}
\caption{Petri net for endpoints. The token in $e_1$ signifies that input
is available to be consumed by this contract.}
\label{fig:petri-net-endpoint}
\end{figure}

\paragraph{Apps}

Given the places for time, transaction status, and endpoints, we can
describe \glspl{app} as sequences of transitions. The states of the app
are represented by places. \ref{fig:zerepoch-app-net} shows an \gls{app}
with four possible states, $s_1$ through $s_4$. As soon as a transaction
is confirmed, the state can progress from $s_1$ to $s_2$. After that,
endpoint $e_1$ becomes \emph{active}, meaning that the app can make
progress as soon as input is provided. The next state depends on which of
two possible events happens first: the endpoint being called by the user,
or the clock reaching slot ten.

The app transitions (green) involve queries to the \gls{chain-index},
transaction submission, etc. These requests are not shown in the Petri
net. We still record their responses, however, in order to meet the
replayability requirements (see \ref{req:app-synch} and
\ref{req:app-reproducibility}).
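The app of \ref{fig:zerepoch-app-net} can also be paraphrased as a small
event loop. The following Python sketch is purely illustrative (the real
implementation is in Haskell, and the event encoding as plain tuples is an
assumption made for this example); folding the recorded event stream
through it deterministically reproduces the final state, which is exactly
the replayability property mentioned above:

```python
def run_app(events):
    """Fold a stream of events into the app's state: s1 -> s2 once a
    transaction is confirmed; from s2, whichever comes first of the
    endpoint call (-> s3) or the start of slot ten (-> s4) wins.
    """
    state = "s1"
    for event in events:
        if state == "s1" and event == ("tx", "confirmed"):
            state = "s2"
        elif state == "s2" and event == ("endpoint", "e1"):
            state = "s3"
        elif state == "s2" and event == ("slot", 10):
            state = "s4"
        # all other events leave the state unchanged
    return state
```

Replaying the same event log always yields the same state, regardless of
which machine performs the replay.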
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.8cm,>=stealth',bend angle=45,auto]
% Styles are from https://texample.net/tikz/examples/nodetutorial/
\tikzstyle{place-endpoint}=[circle,thick,draw=yellow!75,fill=black!20,minimum size=7mm]
\tikzstyle{place-status}=[circle,thick,draw=red!75,fill=black!20,minimum size=7mm]
\tikzstyle{place-time}=[circle,thick,draw=blue!75,fill=black!20,minimum size=7mm]
\tikzstyle{place-app}=[circle,thick,draw=green!75,fill=black!20,minimum size=7mm]
\tikzstyle{place-writer}=[circle,thick,draw=black!50,fill=black!10,minimum size=4mm]
\tikzstyle{transition-app}=[rectangle,thick,draw=green!75,fill=green!20,minimum size=4mm]

\node [place-status] (confirmed) [label=above:$\mathsf{confirmed}$] {};
\node [place-app] (s1) [below of=confirmed,tokens=1,label=below:$s_1$] {};
\node [transition-app] (t1) [above right of=s1] {};
\node [place-writer] (w1) [below=0.5cm of t1] {\code{w}};
\path (confirmed) edge [->,bend right] (t1);
\coordinate[yshift=0,left=1cm of confirmed.west] (aux1);
\path (aux1) edge [->,dashed] (confirmed);
\path (t1) edge [->,bend right] (confirmed);
\path (s1) edge [->] (t1);
\path (t1) edge [->] (w1);
\node [place-app] (s2) [right of=t1,label=below:$s_2$] {};
\path (t1) edge [->] (s2);
\node [transition-app] (t2) [above right of=s2] {};
\node [place-writer] (w2) [below=0.5cm of t2] {\code{w}};
\path (t2) edge [->] (w2);
\node [transition-app] (t3) [below right of=s2] {};
\node [place-writer] (w3) [below=1.2cm of t3] {\code{w}};
\path (t3) edge [->] (w3);
\node [place-app] (s3) [right of=t2,label=below:$s_3$] {};
\path (s2) edge [->] (t2);
\path (t2) edge [->] (s3);
\node [place-endpoint] (e1) [above left of=t2,label=above:$e_1$] {};
\coordinate[yshift=0,left=1cm of e1.west] (aux2);
\path (aux2) edge [->,dashed] (e1);
\path (e1) edge [->, bend right] (t2);
\path (t2) edge [->, bend right] (e1);
\node [place-app] (s4) [right of=t3,label=below:$s_4$] {};
\path (s2) edge [->] (t3);
\path (t3) edge [->] (s4);
\node [place-time] (p10) [below left of=t3,tokens=0,label=below:$p_{10}$] {};
\coordinate[yshift=0,left=1cm of p10.west] (aux3);
\coordinate[yshift=0,right=1cm of p10.east] (aux4);
\path (aux3) edge [->,dashed] (p10);
\path (p10) edge [->, bend right] (t3);
\path (t3) edge [->, bend right] (p10);
\end{tikzpicture}
\caption{
  Petri net for an \gls{app} (green) that waits for a transaction to be
  confirmed and then waits for slot number ten to begin, or for user
  input. The app emits values of type \code{w} on every transition.
}
\label{fig:zerepoch-app-net}
\end{figure}

\paragraph{Observable state}

\Glspl{app} need to be able to notify the outside world of changes. To
this end, the application can emit values of some user-defined type
\code{w} whenever one of its transitions fires. In the Haskell library,
this is realised using the \code{Writer w} effect, with a
\code{Monoid w} constraint. Clients of the \gls{app} can subscribe to
receive updates whenever the accumulated total of all emitted values
changes.

\paragraph{Extensions}

There are some possible extensions of the basic model of apps and events.
For example, we could model an on-chain state machine (in a Zerepoch
script) as a Petri net. Then we could describe the interactions of
multiple \glspl{app-inst} with that state machine, including possible race
conditions. The Petri net model is simple, but it scales easily over
multiple machines and contracts.
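The observable-state mechanism can be illustrated outside Haskell as well.
In this hypothetical Python sketch (the class and method names are
invented for the example), the emitted values form a monoid -- lists under
concatenation, with the empty list as identity -- and subscribers are
notified with the accumulated total after each emission, mirroring the
\code{Writer w} behaviour described above:

```python
class ObservableContract:
    """Illustrative stand-in for the Writer-style observable state: each
    transition emits a list (the monoid), and subscribers see the running
    accumulated total after every emission."""

    def __init__(self):
        self.total = []            # mempty for the list monoid
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def emit(self, w):
        self.total = self.total + w    # mappend
        for callback in self.subscribers:
            callback(self.total)
```

Because the update is just the monoid operation, clients never have to
reconstruct history themselves: each notification carries the full
accumulated value.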
\documentclass[11pt,english,singlespacing,headsepline,consistentlayout]{auxiliary/si-msc-thesis}
\usepackage[utf8]{inputenc} % Required for inputting international characters
\usepackage[T1]{fontenc} % Output font encoding for international characters
\usepackage{float}
\usepackage{mathpazo}
\usepackage{setspace}
\usepackage{listings}
\usepackage{caption}
\usepackage{subcaption}
\usepackage[ruled]{algorithm2e}
\setlength\parindent{0pt}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% MARGIN SETTINGS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\geometry{paper=a4paper,
inner=1.5cm,
outer=1.5cm,
bindingoffset=0cm,
top=1.5cm,
bottom=1.5cm,
footnotesep=1cm,
%showframe, % Uncomment to show how the type block is set on the page
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% THESIS INFORMATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\thesistitle{Instrumentation and\\ Performance Analysis\\of Distributed Systems\\ with Freud}
%comment to not have subtitle
%\thesissubtitle{}
\author{Stefano Taillefert}
\monthyear{May 2021}
\supervisor{Prof.
Antonio Carzaniga}
% if you need to add or remove co-supervisors go into titlepage.tex and comment correspondingly
\cosupervisorone{None}
\cosupervisortwo{None}
\cosupervisorthree{None}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% FRONT MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

\frontmatter
\pagestyle{plain}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% TITLE PAGE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\input{auxiliary/titlepage}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% ABSTRACT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{abstract}
Engineering software systems for performance is a particularly complex task. The performance of a software system depends on many factors, such as the algorithmic nature of the software, the features of the execution environment (including memory, storage, and CPU) and the complex interactions between components, whether internal or external.

A common way to support performance analysis is to create models of the system. This can be done, for example, with traditional profilers, which characterize the distribution of the CPU time expenditures on functions and methods. Freud is a software performance analysis tool that derives similar but more expressive models called performance annotations. In essence, such annotations characterize a relevant performance metric (e.g. CPU time) as a function of one or more input parameters or features. In particular, Freud derives performance annotations automatically based on measurements of a software system.

The goal of this project, called \emph{Jung}, is to extend Freud to instrument and collect data from \emph{distributed} systems.
This means augmenting the existing implementation to instrument and collect performance metrics from a number of distributed components. Jung applies this instrumentation and collects data through a communication framework, and subsequently links and merges independent component traces using causal relations. The resulting unified trace is then fed into the Freud statistical analyzer to produce meaningful annotations. Jung can therefore be used to support the performance engineering of applications that use and connect to external resources, such as databases and other services.
\end{abstract}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% DEDICATION & ACKNOWLEDGEMENTS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%\input{ded-acks}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% PREAMBLE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\tableofcontents
%\listoffigures
%\listoftables

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% MAIN MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\mainmatter
\pagestyle{thesis}

%\input{introduction}
\chapter{Introduction}
Freud~\cite{freud} is a software performance analysis tool that derives performance annotations from measurements of running systems. What does that mean? Freud requires the system in binary form, although it can also benefit from the source code for additional analysis. In essence, Freud performs a dynamic analysis of the system, meaning that it uses information derived from the execution of the system under workloads chosen by the performance engineer.
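To make the idea of dynamic analysis concrete, here is a minimal Python sketch of the kind of instrumentation involved: a wrapper records each call's arguments and wall-clock time into a trace. All names here are hypothetical illustrations; Freud itself instruments binaries rather than using language-level wrappers.

```python
import functools
import time

def instrument(trace):
    """Record each call's name, arguments and wall-clock time into `trace`.

    A toy stand-in for real instrumentation: Freud injects measurement code
    into binaries via Pin, and Jung injects it into the source code.
    """
    def wrap(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            start = time.perf_counter()
            result = fn(*args)
            elapsed = time.perf_counter() - start
            trace.append({"fn": fn.__name__, "args": args, "time": elapsed})
            return result
        return wrapper
    return wrap

trace = []

@instrument(trace)
def busy(n):
    # stand-in workload whose cost grows with the input parameter n
    return sum(range(n))

busy(1000)
busy(2000)
```

Running the workload leaves one trace entry per call, pairing the observed metric (time) with the input parameters, which is exactly the raw material the statistical analysis consumes.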
Given the program executable, compiled to include debugging symbols, Freud instruments the system to measure a series of metrics, such as running time and memory usage, while also tracing the function calls and their parameters. Freud then analyzes this data (offline) to produce significant statistical models that characterize the performance metrics as functions of one or more input parameters (or ``features'').

The goal of this project is to extend Freud to collect and analyze data from a \emph{distributed} software system. This includes desktop or server applications that might use external services, typically through remote procedure calls (RPC), or web application components that query external servers such as databases. To highlight the relation with the Freud tool and technique, we decided to name this project Jung --- after the influential Swiss psychiatrist and psychoanalyst Carl Jung\footnote{\url{https://en.wikipedia.org/wiki/Carl_Jung}}, who was a colleague and collaborator of Sigmund Freud.

In essence, Jung instruments a distributed system so that each distributed component measures various performance metrics of interest. The measurements are initially stored as separate traces within each component. Jung then takes care of collecting and merging these separate traces so as to assemble a unified trace that can then be processed through the statistics module of Freud. In particular, the instrumentation provided by Jung traces related calls \emph{across} distributed components using unique identifiers. These relations between calls amount to causal relations and are therefore used as a basis for the merge operation.

Freud uses a powerful binary instrumentation framework called Pin~\cite{PIN}. In the initial prototype we developed during this project, we opted for a simpler solution, and therefore decided to inject our instrumentation directly within the source code, manually.
A fully automatic instrumentation could be added in future developments to reduce the overhead of using Jung.

\chapter{Performance analysis}
% What is it, problem context, existing solutions in general
Performance engineering amounts to measuring, modeling, and ultimately improving the performance of software systems. Performance engineering is crucial for many software systems. For example, a company such as Google that runs global-scale services and applications is very concerned about performance. The response time of applications, such as GMail, can severely influence their adoption and therefore their success or failure. The same is definitely true for web search and many other applications. Even relatively minor speed-ups or slow-downs can make a significant difference in usability and therefore profitability. In sum, performance engineering is important.

Performance engineering is also difficult. Many factors affect performance in many often subtle ways. There is the algorithmic complexity, but there are also other factors, including hardware features and, crucially, the interactions between components within applications, and across applications. For example, a cache system (memory or otherwise) can significantly improve the performance of a system. However, the mixed use of the same caching system by many components or systems can drastically reduce its effectiveness overall.

Performance analysis is primarily a type of dynamic analysis, and therefore it is based on the measurement of running systems. Almost every kind of application can be instrumented to collect a variety of performance metrics. Consistent with the description of Freud, we use the term \emph{metric} to denote the values of performance indicators such as execution time (or ``wall clock'' time), actual CPU time or cycles, memory usage, lock holding time, and many more.
These metrics can be used in various ways and are chosen by the performance engineer according to the type of application and the type of analysis. For example, lock-holding time is not a very significant metric for a single-threaded application, but it might be of crucial importance for a highly parallel application such as most modern client-side and server-side applications.

\begin{figure}[htb]
\centering
\includegraphics[width=0.65\textwidth]{clientLog.png}
\caption{An example of a performance trace (in this case, produced by Jung)}
\label{fig:clientLog}
\end{figure}

% Performance analysts have several tools at their disposal to perform instrumentation, measurement, and summary analysis.
For example, there are a lot of existing tools and solutions out there---one of which will be discussed in section \ref{sec:freud}---that can \emph{instrument} the executables: a dedicated tool injects some special code directly in the compiled binary file, following some particular compiler flags. This generated code then measures different parameters, keeping track for example of starting and ending times, memory allocations, and so on. At the end, an information dump is produced to summarize the data, which can then be analyzed in various ways. In this project we focus on a specific problem and context, namely the collection and combination of measurement traces for the analysis of distributed systems.

\chapter{Project design}

\section{Freud}\label{sec:freud}
% Brief description, intro to Freud and PIN, what they do and how
This project follows in the steps of Freud, a tool developed by Daniele Rogora, a USI PhD student. This software has multiple components; the first two (\texttt{freud-dwarf} and \texttt{freud-pin}) are tasked with instrumenting the executable via a special tool called PIN \cite{PIN}, developed by Intel. This library---as explained earlier---inserts the instructions necessary to collect the data during the execution.
Basically, coding this part means writing code that will write some code to put into other code. The other part of Freud, which is the one we will be interacting with, is \texttt{freud-statistics}. As the name implies, this module (based on the popular \texttt{R} library\footnote{\url{https://www.r-project.org/}}) processes the statistics, resulting in a regression or clustering, given the output of the instrumentation.

\section{Requirements and analysis}\label{sec:requirements}
% Goals, what we needed to implement, refer to the plan and list of tasks, what's the idea
The main idea was to start small and build features incrementally, to ensure that we always had a minimal working prototype. The main objective of Jung is to instrument multiple systems, collect the data and format it. To achieve this, we divided the project into three main components:
\begin{itemize}
\item The actual instrumentation library, with an API to track the various metrics
\item The merger, which has to collect all the log files from the different systems and merge them into a single coherent trace
\item The dumper, which is tasked with creating the binary output that will be passed to \texttt{freud-statistics}
\end{itemize}
The main milestones of the development were the following: first we had to develop a simple distributed application, based on an RPC library, to be used as an initial test environment. For this part we chose to put together a relatively simple program that sends some text messages back and forth; the server has some delays set up in answering some queries to simulate a long computation. Following that, we needed to develop an instrumentation for the client side, server side, and --- crucially --- the RPC library: the idea was to measure some meaningful statistics on all sides involved.
The metrics we settled on are:
\begin{itemize}
\item Execution time
\item Memory usage
\item Major and minor page faults
\item Lock holding and waiting time
\item Possible memory leaks (experimental)
\end{itemize}
The execution time is computed and accounted for separately between client (total, end-to-end, synchronous), server, and network. All other metrics are computed and accounted for on both the client and server sides. All the chosen metrics except for the memory leak estimate are supported by \texttt{freud-statistics}. Possible memory leaks are detected by tracing and matching the number of \texttt{malloc} and \texttt{free} calls. However, this computation can only offer an approximate leak detector.

Then, we had to devise a method to save and retrieve the measurement logs from all the components. This has been done by dumping the collected information in a text file with some special encoding (see section \ref{sec:issues} for details) for data types and more complex values. The next step was to design an algorithm to merge the logs from all the systems into a single coherent trace. This was tricky because we had to correctly identify all the remote calls in order to trace back which server execution corresponded to which client function. We don't want to charge someone with the computing costs of someone else, therefore we assigned a unique ID to each RPC call, which lets us trace back which client function requested which server resource.

Integrating said trace into the existing statistics tool (\texttt{freud-statistics}) to derive the performance annotations was the last objective for the coding part. For this we had to rely on the help of its creator, Daniele, to fully understand how to format the data. There were a few hiccups due to some technical issues, but the process concluded successfully.

Unfortunately, the only abandoned part of the project was to identify some third-party non-trivial distributed applications and analyze them with the created tool.
Since Jung doesn't work on the binary executable, we needed access to the source code of the application we wanted to analyze. This heavily reduced the spectrum of possible targets, so, also considering the little time left, we decided to abandon this part.

Last but not least, we had to write the report, prepare the poster and the presentation. Due to the current situation, we as a class decided not to hold any presentations this year, therefore this point got reduced to the document you're currently reading. Have a pizza: this is still on hold but will definitely be completed sooner or later. Every project needs a celebratory pizza at the end.

\chapter{Implementation}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{schema.png}
\caption{Jung's general functioning scheme}
\label{fig:schema}
\end{figure}

\section{Technologies and tools used}
We used gRPC \cite{gRPCdocs} as the RPC library, since it seemed pretty straightforward and simple to use. There are plenty of examples in the documentation and a lot of languages are supported, including C++. On the other hand, Jung is independent of any library: its functions can be used directly from the code, even in a custom RPC implementation.

The choice of language was pretty easy as well: Freud is built entirely in C/C++, so using the same language would increase compatibility. The only choice we had to make was regarding the version: we went with C++17 to benefit from the \texttt{filesystem::exists()} function, which we used to check for the existence of previous log files.

To simplify development and testing, we also packaged an auto-building Docker image (\url{https://hub.docker.com/repository/docker/steeven9/jung}) with the example server in it. This way we could keep the server running on another machine and, for example, test the network times from another remote location.
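The trace-merging idea introduced in section \ref{sec:requirements}, joining client- and server-side log entries through the unique id attached to each RPC call, can be illustrated with a short Python sketch. The field names here are hypothetical; the real merger is written in C++ and works on Jung's text log files.

```python
def merge_traces(client_entries, server_entries):
    """Join client- and server-side log entries on their shared RPC id.

    A sketch of the merge step: the unique identifier assigned to every RPC
    call is the causal link between the two independent traces.
    """
    by_id = {entry["rpc_id"]: entry for entry in server_entries}
    merged = []
    for c in client_entries:
        s = by_id.get(c["rpc_id"], {})
        merged.append({
            "rpc_id": c["rpc_id"],
            "client_fn": c["fn"],
            "server_fn": s.get("fn"),
            # network time: end-to-end client time minus the server's own time
            "net_time": c["time"] - s.get("time", 0.0),
        })
    return merged

client = [{"rpc_id": 1, "fn": "do_stuff", "time": 1.5}]
server = [{"rpc_id": 1, "fn": "handle_stuff", "time": 1.2}]
merged = merge_traces(client, server)
```

Separating the server's own time from the client's end-to-end time is also what allows network time to be accounted for separately, as described earlier.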
\section{Issues}\label{sec:issues}
The only major problem we had---which we believe to be very common---was that we made some design choices in order to initially simplify the job, but then, with the addition of new features or to fix certain problems, we had to change our approach. The direct consequence was that we had to rewrite some large parts of the code and data structures to adapt to the new requirements.

One instance of the aforementioned problem is that we had to develop a particular encoding to represent the function parameters, since we had to convert from data (the running code) to text (the log file) and vice versa. More specifically, we chose to represent the values as \texttt{name\=type\&value} (e.g. \texttt{asd\=int\&12} for an integer named \texttt{asd} with value 12); this added a supplementary layer of encoding/decoding when saving and reading the data to/from the text files.

When we first started testing the integration with \texttt{freud-statistics}, since we were unable to see what was in the binary file, we had trouble checking whether we were dumping the metrics correctly, resulting in some weird-looking statistics. With Daniele's help, we managed to make sense of the output and find the offending code parts to patch.

And of course, we had our fair share of miscellaneous issues with pointers, segmentation faults and general weird behaviors. As a positive note, in the process we somehow created a program that frees memory as you run it:
\begin{figure}[H]
\centering
\includegraphics{memUsage.png}
\caption{A very peculiar memory usage}
\label{fig:memUsage}
\end{figure}

Another problem we ran into was that initially we were unable to get \texttt{freud-statistics} to find a regression, even though our data was clearly a good candidate for it. It turns out we simply didn't have a good enough p-value, which was easily fixed by getting more samples (details follow in the next chapter).
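The parameter encoding described above can be sketched as a small encode/decode round trip. This is a Python illustration with hypothetical function names; Jung's actual implementation is in C++.

```python
def encode_param(name, typ, value):
    """Serialise one function parameter as name=type&value (e.g. asd=int&12)."""
    return f"{name}={typ}&{value}"

def decode_param(token):
    """Parse a name=type&value token back into its three parts.

    Splitting on the first '=' and the first '&' keeps the scheme unambiguous
    as long as names contain neither separator.
    """
    name, rest = token.split("=", 1)
    typ, raw = rest.split("&", 1)
    value = int(raw) if typ == "int" else raw
    return name, typ, value
```

The type tag is what lets the reader side reconstruct a typed value from the plain text of the log file.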
\chapter{Evaluation}
% Code validation, analysis of results and practical applications, personal experience (?)
Referring to the task list in section \ref{sec:requirements}, we can assert that we successfully implemented all the required features. The results the library provides are accurate (to a certain extent) and repeatable.

To test Jung's functionality we created a demo program that simulates a real use case of this library: a client program (\texttt{jung\_client}) sends some requests to a server (you guessed it, \texttt{jung\_server}) which computes the results and sends them back. To make it more realistic, the server takes roughly a number of seconds equal to the parameter received (with some added random margin), since it simulates a computation. Being a single-threaded server, this has been achieved by making the server thread sleep for the given input. On the other hand, the client uses a \texttt{for} loop to send the requests, so the parameter's value increases gradually; the maximum number of iterations has been set to 20 and the number of points to 5, but those can easily be changed in the code (\texttt{NUM\_MSG} and \texttt{NUM\_POINTS} respectively).

Alas, time and coding constraints limited the amount of testing we could do, meaning that the performance analysis has been tested only with a few scenarios.
The first one is illustrated in the plot below:
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{analysis.png}
\caption{The first plot produced by \texttt{freud-statistics}}
\label{fig:analysis1}
\end{figure}
As we can see, our data fits nicely in a quadratic regression, as expected; our client program can be represented with the following algorithm:
\begin{algorithm}[H]
\For{$i \gets 0$ \textbf{to} NUM\_MSG} {
\For{$j \gets 0$ \textbf{to} NUM\_POINTS} {
\For{$k \gets 0$ \textbf{to} $i$} {
sleep($k + $ random\_margin())\;
}
}
}
\caption{Quadratic simulation algorithm}
\end{algorithm}
\vspace{5mm}
Another scenario can be simulated by making the RPC calls a constant number of times instead of depending on the parameter, which produces the linear result in the plot below:
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{analysis2.png}
\caption{The second plot produced by \texttt{freud-statistics}}
\label{fig:analysis2}
\end{figure}
And the corresponding elementary algorithm:
\begin{algorithm}[H]
\SetAlgoLined
\For{$i \gets 0$ \textbf{to} NUM\_MSG} {
\For{$j \gets 0$ \textbf{to} NUM\_POINTS} {
sleep($i + $ random\_margin())\;
}
}
\caption{Linear simulation algorithm}
\end{algorithm}
\vspace{5mm}

\section{Testing}
Due to the complexity and particular nature of the software, we decided not to implement any automated tests. As opposed to test-driven development, we had a more unstructured way of iterating over the code, since the functionality had to be checked by hand by comparing the produced log files with the program. Therefore, the effective coverage is 0\%, but as someone once said:
\begin{quote}
\centering
\textit{"Tests don't prove correctness"}
\end{quote}

\section{Demo}
In this section you can find instructions on how to run a demo program and analyze the obtained data. To install and compile all the required software, simply run the \texttt{install.sh} script from the Jung repository.
In case of failure, please refer to the READMEs of each project for more information.

First, start the server:

\texttt{\$ ./jung\_server}

Then, in another shell, run the client:

\texttt{\$ ./jung\_client}

\textit{Note: if you run the server on another machine, you can pass the \texttt{---target=HOSTNAME}
% three dashes in the sentence above render correctly, dunno why; leave it like that
argument to the client.}

Then, with both the client and server log files in the root folder, merge the traces:

\texttt{\$ ./trace\_merge}

This will produce the binary data (under \texttt{symbols/}) and a summary of the execution (\texttt{trace\_log.txt}). We can then run Freud's analysis tool:

\texttt{\$ cd ../freud/freud-statistics}

\texttt{\$ ./freud\_statistics 3 0 0 do\_stuff ../../Jung/symbols/do\_stuff/}

\textit{Note: this is an example; refer to Freud's documentation for details and usage.}

This should find a nice regression and plot it. That's it!

\chapter{Conclusions}
% Results wrt objectives, limitations
We consider the obtained results pretty satisfying: we managed to realize a working application that complies with the requirements and could be used in the field to measure actual systems (with some limitations, of course). As the evaluation shows, Jung correctly tracks and aggregates metrics, producing an output which can be interpreted by another tool. Different behaviors are correctly picked up and produce different outputs when analyzed, as expected.

From a more personal point of view, the whole project was a great learning experience, both in terms of development and project management; in addition to refreshing my C++ skills, I experienced carrying out a large-ish project on my own, with deadlines and constraints. I would like to add that the way my advisor and I organized the work was very helpful: small steps and weekly meetings allowed us to iterate quickly over the code and to stay relatively on track with the schedule.
As a matter of fact, the only discarded feature, testing on third-party applications, was not mandatory for the completion of the project.

\subsection*{Acknowledgements}
I'd like to thank Antonio, my advisor, for the opportunity and for guiding me throughout this project; Daniele, Freud's author, for his help understanding his amazing software; my friends Sasha and Brites for their incredible support; and last but not least everyone that supported me during these tiring times. Thank you!

\section{Future work and possible developments}
While this project can be considered a success, there are still a number of improvements that could be implemented. For example, the data dumping could be handled better via a separate thread: as of now, every function writes the buffered metrics to the log file when it finishes executing. While this is barely noticeable on a small scale, it could add a significant overhead under large loads or in performance-critical systems.

A crucial missing part is an actual instrumentation like Freud's, either through PIN or another similar library, allowing the user to work on the executables instead of having to change the source code. Other possible improvements would be to support more complex features as parameters (like structs), to automatically infer the type of a parameter (as opposed to having to specify it manually), and to export the data in a more universal format, such as CSV, to be able to use it in a wider variety of analysis tools.

%\appendix
%\input{appendix1}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% BIBLIOGRAPHY %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newpage
\bibliographystyle{abbrv}
\bibliography{biblio}

\end{document}
% !TeX spellcheck = en_US
% !TeX encoding = UTF-8
\documentclass[11pt, a4paper]{article}
\usepackage{graphics, graphicx}
\usepackage{fancyvrb, enumerate}
\usepackage{amsmath, amssymb, amscd, amsfonts}
\usepackage{geometry}
\usepackage{multirow}
\usepackage{url}
\usepackage{tikz}
\usepackage{listings, listing}
\usepackage{color}
\usepackage{mathptmx}
\usepackage{apacite}
\usepackage[style=iso]{datetime2}
\usetikzlibrary{shapes, arrows, calc, positioning}
\definecolor{codegreen}{rgb}{0, 0.6, 0}
\definecolor{codegray}{rgb}{0.5, 0.5, 0.5}
\definecolor{codepurple}{rgb}{0.58, 0, 0.82}
\definecolor{backcolour}{rgb}{0.95, 0.95, 0.92}
\lstdefinestyle{mystyle}
{
backgroundcolor=\color{backcolour},
commentstyle=\color{codegreen},
keywordstyle=\color{magenta},
numberstyle=\tiny\color{codegray},
stringstyle=\color{codepurple},
basicstyle=\footnotesize,
breakatwhitespace=false,
breaklines=true,
captionpos=b,
keepspaces=true,
numbers=left,
numbersep=5pt,
showspaces=false,
showstringspaces=false,
showtabs=false,
tabsize=2,
frame=single
}
\lstset{style=mystyle}
\tikzstyle{decision} = [diamond, draw, fill=blue!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt]
\tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=2em]
\tikzstyle{line} = [draw, -latex']
\tikzstyle{cloud} = [draw, ellipse, fill=red!20, node distance=5em, minimum height=2em]
\tikzset
{
-|-/.style=
{
to path=
{
(\tikztostart) -| ($(\tikztostart)!#1!(\tikztotarget)$) |- (\tikztotarget)
\tikztonodes
}
},
-|-/.default=0.5,
|-|/.style=
{
to path=
{
(\tikztostart) |- ($(\tikztostart)!#1!(\tikztotarget)$) -| (\tikztotarget)
\tikztonodes
}
},
|-|/.default=0.5,
}
\geometry
{
top = 20mm,
bottom = 20mm,
left = 20mm,
right = 20mm
}
\title{Helixco Cavity}
\author{Jaewoong Lee}
\date{\today}
\begin{document}
\maketitle
\newpage
\tableofcontents
\listoftables
\listoffigures
\newpage

\section{Introduction}
\subsection{Dental Cavity}
Dental cavity is
one of the most common bacterial infections in humans. \textit{Streptococcus mutans} in the acquired enamel pellicle plays a main role in human dental cavities \cite{cavity1, cavity2}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4 \linewidth]{figures/molar.png}
\caption{Sagittal and cross-sectional sections through a permanent molar \protect \cite{cavity1}}
\label{fig:molar}
\end{figure}

\subsection{Microbiome}
The microbiome refers to the genomes of the microbial symbionts that live inside and on humans \cite{microbiome1}. The microbiome is highly personalized and affects human health.

\section{Materials}
\subsection{16S rRNA Analysis}

\section{Methods}
\subsection{t-SNE}
t-SNE is a dimension-reduction algorithm which visualizes high-dimensional data by giving each data point a location in a two-dimensional map \cite{tsne1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4 \linewidth]{figures/mnist.png}
\caption{Visualizations of handwritten digits from the MNIST data set \protect \cite{tsne1}}
\label{fig:mnist}
\end{figure}

\subsection{Programming Methods}
\subsubsection{Docker}
Docker provides lightweight Linux containers for consistent development and deployment \cite{docker1}.
\subsubsection{QIIME 2}
QIIME 2 is a powerful, extensible, and decentralized microbiome analysis package with a focus on data and analysis transparency.
\subsubsection{Scikit-learn}
Scikit-learn provides simple and efficient tools for predictive data analysis \cite{sklearn1, sklearn2}.

\section{Results}
\subsection{t-SNE with Every Bacterium}
\begin{figure}[htbp]
\centering
$\begin{array}{cc}
\includegraphics[width=0.4 \linewidth]{figures/step14/NC.png} &
\includegraphics[width=0.4 \linewidth]{figures/step14/SP.png} \\
\mbox{(a) Normal vs. Cavity} & \mbox{(b) Saliva vs. Plaque}
\end{array}$
\end{figure}

\section{Discussion}

\bibliographystyle{apacite}
\bibliography{reference}
\end{document}
\chapter{Change title for chapter 3 in chapters/03.tex} Add content in chapters/03.tex
\documentclass[12pt,a4paper]{article}
\usepackage{graphicx,amsmath,amssymb,amsthm, boxedminipage, bm}
\usepackage{float}
\usepackage[body={14.64cm, 24.62cm}, centering, dvipdfm]{geometry}
\newtheorem{theorem}{Theorem}%[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newcommand{\scalar}[2]{\ensuremath{\langle #1, #2\rangle}}
\newcommand{\floor}[1]{\left\lfloor #1 \right\rfloor}
\newcommand{\ceil}[1]{\left\lceil #1 \right\rceil}
\newcommand{\norm}[1]{\|#1\|}
\newcommand{\pfrac}[2]{\left(\frac{#1}{#2}\right)}
\newcommand{\nth}[1]{#1\textsuperscript{th}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\R}{\mathbb{R}}
\newcounter{exercise}
\newcommand{\exercise}{\addtocounter{exercise}{1}\noindent\textbf{Exercise \arabic{section}.\arabic{exercise}. }}
\date{
2015 -
%month -
%date
}
\author{Dumb Ways to Die}
\title{Algorithms and Complexity\\
Homework Assignment %Add Number Here
\\
}
\begin{document}
\maketitle
\setcounter{section}{
%Add Number Here
}
\section{Add Title Here}
\input{
%Add Your File Name
}
\end{document}
\section{Siamese Multi-Object Tracking and Attention}
\label{sec:SiamMOTandAttention}
% ##############################################################################
\subsection{Motivation}
During several evaluation runs and our manual inspection of the tracker performance, we noticed a ubiquitous pattern. We recall that the scenes on which we trained and tested our tracker were captured by a static camera. Consequently, several video sequences contained multiple vehicles standing still due to a traffic jam or a red light, viewed at an angle in the range of $30$ to $60$ degrees (\figtext{}~\ref{fig:UADETRACPartialOcclusion}), which frequently resulted in partial occlusion. However, what we considered even more problematic was the inability of an axis-aligned \gls{bbox} to properly delineate the vehicle. The angle under which a car was visible caused its \gls{bbox} to capture a great portion of the neighboring vehicles even without severe occlusion.
% ------------------------------------------------------------------------------
\begin{figure}[!t] \centerline{\includegraphics[width=0.7\linewidth]{figures/siamese_tracking/uadetrac_partial_occlusion_red_light.pdf}} \caption[Partial occlusion in the \uadetrac{} dataset]{An example of a situation where multiple vehicles are standing still at a crossroad. In this scenario, even though only a slight degree of occlusion is present, the biggest issues are caused by the need to delineate \glspl{roi} using axis-aligned \glspl{bbox}.
This inevitably captures the neighboring vehicles, increasing the likelihood of drifting to the semantic background due to the presence of similar interference (distractors).} \label{fig:UADETRACPartialOcclusion} \end{figure}
% ------------------------------------------------------------------------------
The situations described above reminded us of the \gls{siammask}~\cite{wang2019siammask} single-object tracker, which predicts a segmentation mask alongside the usual single-object Siamese tracking outputs. This prediction was subsequently exploited to produce a rotated \gls{bbox} instead of an axis-aligned one. Even though the evaluation benchmarks only consider axis-aligned predictions, the rotated region served to enhance the discriminative power of the tracker, primarily when dealing with partial occlusion. In \figtext{}~\ref{fig:UADETRACPartialOcclusion}, a rotated \gls{bbox} would very likely lead to improved tracking accuracy. This approach proved successful for general object tracking and spawned the follow-up work \gls{siammaske}~\cite{chen2019rotbboxes}, which refined the original formulation by fitting an ellipse to the predicted mask for even better accuracy. However, there is a lack of datasets providing rotated annotations, and the \uadetrac{} dataset is no exception. As a result, we sidestepped this approach and searched for an alternative solution that would enhance the discriminative power of the tracker when faced with partial occlusion. One such approach was the use of attention~\cite{vaswani2017attention}, especially spatial attention, which we found effective during our survey research~\cite{ondrasovic2021siamese}.
Apart from the attention mechanism, we also recalled a more general formulation of the convolution operation, dubbed deformable convolution~\cite{dai2017dcnn}, which has been shown to significantly benefit object detection tasks due to their semi-dense prediction requirements. In what follows, we discuss these two methods (\sectiontext{}~\ref{ssec:Attention} and \sectiontext{}~\ref{ssec:DeformableCNNs}) as a foundation for our subsequent experiments that yielded a positive outcome.
% ##############################################################################
\subsection{Attention Mechanism}
\label{ssec:Attention}
The attention mechanism in its now-prevalent scaled dot-product form was introduced by Vaswani~\etal{}~\cite{vaswani2017attention}. Its development was spurred by encoder-decoder architectures that capture a complete input sequence in a single vector, an approach that struggles to hold on to information from the beginning of the sequence and to encode long-range dependencies. To address this, the attention module computes the degree of relevance between ``queries'' and ``keys'' in order to retrieve ``values'' in adequate proportions. The concept of ``queries, keys, and values'' comes from information retrieval systems. Let us provide a demonstrative example based on a YouTube video search. Assume a specific query signaling the demand to retrieve a particular YouTube video. The system maps this query against a set of keys represented by various features, \egtext{}, video title, description, upload time, etc. These keys are directly associated with the candidate videos stored within the database. The output of this operation is a set of values, \ietext{}, found videos, that best match the given query. Attention exploits deep learning to learn transformations of the input (not necessarily the same input in all cases) into three separate vector spaces, each dedicated to a different purpose.
The first space captures the query; it should therefore represent the features that best describe the query to facilitate information retrieval. Its natural counterpart is the key vector space, which is trained to represent each value as accurately as possible so that the search can be initiated reliably. Last but not least, the value vector space extracts the features that are most useful for the task at hand. These do not need to capture features pertinent to the search; for that purpose, there are the two other mappings. For a more concrete demonstration, we will use scaled dot-product attention. The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. Each query is used to compute a dot product with all the keys. These products are divided by $\sqrt{d_k}$, which provides a temperature scaling for the subsequent softmax transformation that yields the weights used to retrieve the values (\figtext{}~\ref{fig:ScaledDotProductAttention}). For optimal performance, it is reasonable to compute the attention function for the whole set of queries simultaneously, as they can easily be stored in a matrix, denoted by $\mtx{Q}$. Analogously, keys and values can also be packed into matrices, $\mtx{K}$ and $\mtx{V}$, respectively. Thus, the attention can be formulated as a function of queries, keys, and values:
\begin{equation} \label{eq:ScaledDotProductAttention} \func{attention}{\mtx{Q}, \mtx{K}, \mtx{V}} = \func{softmax}{\frac{\mtx{Q} \mtx{K}^T}{\sqrt{d_k}}} \mtx{V}. \end{equation}
% ------------------------------------------------------------------------------
\begin{figure}[!t] \centerline{\includegraphics[width=0.15\linewidth]{figures/siamese_tracking/scaled_dot_product_attention.pdf}} \caption[Scaled dot-product attention]{An example of the input transformation performed by the scaled dot-product attention module. The pair of queries and keys is used to produce the probability distribution over the individual values for the final weighted sum.
\externalsrc{\cite{vaswani2017attention}}} \label{fig:ScaledDotProductAttention} \end{figure}
% ------------------------------------------------------------------------------
The two most prominent variants of attention are the additive attention~\cite{bahdanau2016additiveattention} and the multiplicative (dot-product) attention, the latter being identical to the one described above except for the temperature scaling. For completeness, we experimented with both approaches and observed differences in performance. The two are similar in theory; in practice, however, dot-product attention is much faster and more space-efficient, as it can be implemented using highly optimized matrix multiplication. On the other hand, additive attention outperforms unscaled dot-product attention for larger values of $d_k$, since large dot products tend to push the softmax function into regions of extremely small gradients. In our work, we also exploited the notion of self-attention. Since attention was first targeted at natural language translation, let us provide an example from this area. Originally, attention was computed between the input and output sentences. In self-attention, attention is computed with respect to the sentence itself. In the case of computer vision, spatial self-attention represents a weight map over a $2$D feature map indicating the importance of each feature element. Analogously, channel self-attention may be used to attribute importance to individual channels, as they are often not equally informative. Moreover, attention yields more interpretable models as a by-product~\cite{vaswani2017attention}.
% ##############################################################################
\subsection{Deformable Convolutional Neural Networks}
\label{ssec:DeformableCNNs}
\glspl{dcnn}~\cite{dai2017dcnn} have gained popularity and are being applied to numerous computer vision tasks, \egtext{}, object segmentation (dense predictions) and object detection (semi-dense predictions).
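As a compact reference for the scaled dot-product attention of \eqtext{}~\ref{eq:ScaledDotProductAttention} in the previous subsection, the operation can be sketched in a few lines of NumPy. This is purely illustrative; the dimensions and random inputs are our own toy assumptions, not part of any tracker implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the maximum for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v).
    d_k = Q.shape[-1]
    # Dot-product similarity, temperature-scaled by sqrt(d_k).
    weights = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
    # Each output row is a convex combination of the value rows.
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 queries of dimension d_k = 8
K = rng.normal(size=(6, 8))    # 6 keys of dimension d_k = 8
V = rng.normal(size=(6, 16))   # 6 values of dimension d_v = 16
out = attention(Q, K, V)
print(out.shape)  # (4, 16)
```

Note that each row of the softmax output sums to one, so every retrieved value is a convex combination of the stored values, exactly as the weighted-sum interpretation above suggests.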
As \gls{vot} revolves around similar requirements for pixel-wise precision, we contemplated using this advancement. Although \glspl{cnn} (\sectiontext{}~\ref{ssec:ConvolutionalNeuralNetworks},~p.~\pageref{ssec:ConvolutionalNeuralNetworks}) are an excellent tool for many deep learning tasks involving image processing, they are still limited in their capability to model a wide range of geometric transformations. To compensate, practitioners apply a broad range of data augmentation techniques (\egtext{}, rotation, translation, scaling, shearing, and cropping) to provide the necessary samples of a particular transformation during training. However, such an approach is limited to tailor-made transformations that may not cover the entire set of possibilities the model may face at test time. The first work to learn spatial transformations from the training data in a deep learning fashion is known under the name of \glspl{stn}~\cite{jaderberg2016stn}. It warps the feature map via a global parametric transformation, such as an affine transformation. In the realm of convolutional operations, there is the atrous convolution~\cite{holschneider1990atrousconv}, which enhances the standard convolution by expanding the receptive field while maintaining the same number of parameters through greater sampling offsets. However, these offsets are fixed. An obvious successor of this approach is the active convolution~\cite{jeon2017activeconv}, which treats convolution offsets as learnable parameters instead of constants. Still, in this setting, the learned offsets are shared across different spatial locations. Thus, the most general approach is to determine the offsets at each location independently and then proceed as usual. This is where deformable convolution (\figtext{}~\ref{fig:StandardVsDeformableCNN}) comes into play.
In concrete terms, a $2$D convolution consists of sampling the input features $\vect{x}$ using a regular offset grid $\mset{R}$, which defines the receptive field size and dilation, followed by a summation of the sampled values weighted by $\vect{w}$. For example, a standard $3 \times 3$ convolution with dilation $1$ would employ offsets given by \begin{equation} \label{eq:StandardConvolutionOffsetGrid} \mset{R} = \cbrackets{ \rbrackets{-1, -1}, \rbrackets{-1, 0}, \dots, \rbrackets{0, 1}, \rbrackets{1, 1} }. \end{equation} Then, the value at each location $\vect{p}_0$ of the output feature map $\vect{y}$ is calculated as \begin{equation} \label{eq:StandardConvolutionOutputCalc} \func{\vect{y}}{\vect{p}_0} = \sum_{\forall \vect{p}_n \in \mset{R}} \func{\vect{w}}{\vect{p}_n} \cdot \func{\vect{x}}{\vect{p}_0 + \vect{p}_n}, \end{equation} where the locations in $\mset{R}$ are iterated over by $\vect{p}_n$. The deformable convolution extends the standard one by augmenting the original sampling grid $\mset{R}$ with additional offsets $\cbrackets{\Delta \vect{p}_n \ |\ n = 1, \dots, \msetsize{R}}$ (\figtext{}~\ref{fig:DeformableCNN}). Thus, \eqtext{}~\ref{eq:StandardConvolutionOutputCalc} is reformulated as \begin{equation} \label{eq:DeformableConvolutionOutputCalc} \func{\vect{y}}{\vect{p}_0} = \sum_{\forall \vect{p}_n \in \mset{R}} \func{\vect{w}}{\vect{p}_n} \cdot \func{\vect{x}}{\vect{p}_0 + \vect{p}_n + \Delta \vect{p}_n}. \end{equation} Nonetheless, one needs to keep in mind that the sampling offsets now become fractional and thus have to be handled accordingly.
One approach is to employ bilinear interpolation, where the value of the input feature map $\vect{x}$ at a fractional position $\vect{p}$ is determined by \begin{equation} \label{eq:DeformableConvolutionBilinear} \func{\vect{x}}{\vect{p}} = \sum_{\forall \vect{q}} \func{G}{\vect{q}, \vect{p}} \cdot \func{\vect{x}}{\vect{q}}, \end{equation} in which $\vect{q}$ enumerates all integral locations and $\func{G}{\cdot}$ represents the bilinear interpolation kernel. The interpolation can be implemented efficiently owing to the sparsity of $\func{G}{\cdot}$, which is non-zero only for the few integral locations adjacent to $\vect{p}$. The performance overhead is negligible compared to the reaped benefits of adaptive sampling locations capable of covering very complicated transformations (\figtext{}~\ref{fig:SamplingLocationsDeformableCNN}).
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=0.6\linewidth]{figures/siamese_tracking/dcn_standard_vs_deformable.png}} \caption[Standard vs. deformable convolution]{Visualization of the difference between the fixed \imgpartdesc{a} and adaptive \imgpartdesc{b} receptive fields. Stacking multiple deformable convolutions results in a profound amplification of the deformation, allowing the transformation to capture diverse shapes that would otherwise be very coarsely approximated by a standard convolution. \externalsrc{\cite{dai2017dcnn}}} \label{fig:StandardVsDeformableCNN} \end{figure}
% ------------------------------------------------------------------------------
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=0.6\linewidth]{figures/siamese_tracking/deformable_convolution.pdf}} \caption[\gls{dcnn}]{Illustration of a $3 \times 3$ deformable convolution operation. Unlike the standard convolution operation used in neural networks, this one employs one additional step of predicting variable offsets instead of using a fixed rectangular grid.
\externalsrc{\cite{dai2017dcnn}}} \label{fig:DeformableCNN} \end{figure}
% ------------------------------------------------------------------------------
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=0.7\linewidth]{figures/siamese_tracking/dcn_sampling_locations.png}} \caption[Various sampling locations in \glspl{dcnn}]{Deformable convolution is effective at learning appropriate sampling locations reflecting the underlying transformation. \imgpartdesc{a} shows the regular sampling grid of a standard convolution; \imgpartdesc{b} is an example of an irregularly deformed sampling region; \imgpartdesc{c} and \imgpartdesc{d} represent the expected patterns corresponding to scaling and rotation operations, respectively. \externalsrc{\cite{dai2017dcnn}}} \label{fig:SamplingLocationsDeformableCNN} \end{figure}
% ------------------------------------------------------------------------------
The original paper~\cite{dai2017dcnn}, in which \glspl{dcnn} were introduced, showed that learning dense spatial transformations with \glspl{cnn} for sophisticated vision tasks such as object detection and semantic segmentation is not only feasible but also effective.
% ##############################################################################
\subsection{Modulated Deformable Convolutional Neural Networks}
\label{ssec:ModulatedDeformableCNNs}
The follow-up paper by Zhu~\etal{}~\cite{zhu2018mdcnn} aptly described its contribution as ``more deformable, better results''. In this spirit, we describe \glspl{mdcnn}, an extension of \glspl{dcnn}. Since this amounts to a slight modification of an already introduced equation, we avoid unnecessary repetition. Thus, let $\vect{p}_0$, $\vect{p}_n$ and $\Delta \vect{p}_n$ have the same meaning as in \eqtext{}~\ref{eq:DeformableConvolutionOutputCalc}.
Then, the modified equation becomes \begin{equation} \label{eq:ModulatedDeformableConvolutionOutputCalc} \func{\vect{y}}{\vect{p}_0} = \sum_{\forall \vect{p}_n \in \mset{R}} \func{\vect{w}}{\vect{p}_n} \cdot \func{\vect{x}}{\vect{p}_0 + \vect{p}_n + \Delta \vect{p}_n} \cdot \Delta \vect{m}_n, \end{equation} where $\Delta \vect{m}_n \in \rbrackets{0, 1}$ is the modulation scalar for the given location. Thus, there are two types of learnable parameters: the already described offsets, given by the $\Delta \vect{p}_n$ term, and the new modulation (weighting) coefficients, represented by the term $\Delta \vect{m}_n$. This simple extension allows the network not only to sample features in a non-regular fashion if needed, but also to apply a distinct weight to each sampling location, further adaptively intensifying the deforming effect. Although the weights of the underlying convolutional layer can already be tweaked to a large extent in order to apply different weights to different features, the inclusion of an additional weighting coefficient provides more \glspl{dof}, making the transformation more versatile. From an implementation standpoint, \glspl{dcnn} as well as \glspl{mdcnn} have their learnable offsets (and modulation coefficients, if used) set to zero during initialization. This produces no deformable effect, so the convolution initially behaves as usual in terms of location sampling. The modulation aspect, however, is slightly different: since the sigmoid function is commonly adopted to project the modulation weights into the $\rbrackets{0, 1}$ interval, each location is initially multiplied by $\func{sigmoid}{0} = \nicefrac{1}{2}$.
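To make \eqtext{}~\ref{eq:ModulatedDeformableConvolutionOutputCalc} concrete, the following NumPy sketch evaluates a modulated deformable convolution at a single output location $\vect{p}_0$, including the bilinear interpolation required for fractional offsets. It is a didactic single-channel toy (the feature map, kernel weights, offsets, and modulation values are invented), not an efficient implementation:

```python
import numpy as np

def bilinear(x, p):
    # Bilinear interpolation of the 2-D map x at fractional position p = (row, col).
    r0, c0 = int(np.floor(p[0])), int(np.floor(p[1]))
    dr, dc = p[0] - r0, p[1] - c0
    def at(r, c):
        # Clip to the map borders (zero-padding would be an alternative choice).
        r = min(max(r, 0), x.shape[0] - 1)
        c = min(max(c, 0), x.shape[1] - 1)
        return x[r, c]
    return ((1 - dr) * (1 - dc) * at(r0, c0) + (1 - dr) * dc * at(r0, c0 + 1)
            + dr * (1 - dc) * at(r0 + 1, c0) + dr * dc * at(r0 + 1, c0 + 1))

def modulated_deform_conv_at(x, w, p0, offsets, modulation):
    # Regular 3x3 grid R, augmented by learned offsets Delta p_n
    # and weighted by modulation scalars Delta m_n.
    R = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    y = 0.0
    for pn, dpn, mn, wn in zip(R, offsets, modulation, w.ravel()):
        pos = (p0[0] + pn[0] + dpn[0], p0[1] + pn[1] + dpn[1])
        y += wn * bilinear(x, pos) * mn
    return y

x = np.arange(25, dtype=float).reshape(5, 5)   # toy input feature map
w = np.full((3, 3), 1.0 / 9.0)                 # toy kernel (simple averaging)
zero_off = [(0.0, 0.0)] * 9                    # initialization: no deformation
ones_mod = [1.0] * 9                           # initialization: full modulation
# With zero offsets and unit modulation this reduces to a standard convolution.
print(modulated_deform_conv_at(x, w, (2, 2), zero_off, ones_mod))
```

With zero offsets and unit modulation, the result equals the plain $3 \times 3$ average around the center, which mirrors the initialization behavior described above: the deformable machinery is a strict generalization of the standard convolution.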
% ##############################################################################
\subsection{Deformable Siamese Attention}
\label{ssec:DeformableSiameseAttention}
The two independent ideas above led us to experiment with a self-attention mechanism aimed at enhancing feature selection in both the spatial and channel domains. These experiments resulted in slight improvements, for the reasons outlined in the motivation section. Supporting the claim that our ideas were based on properly identified causes, a recently published work demonstrates the effectiveness of the very same approach. Yu~\etal{}~\cite{yu2021dsa} formulated their \gls{dsa}, which covers both of our suggestions above and additionally introduces the notion of cross-attention as an enhancement to the self-attention itself. What primarily motivated the introduction of the cross-attention is that the exemplar and search region features in Siamese trackers are computed separately, yet they may frequently complement each other. It is reasonable to assume that multiple objects appear at the same time even in \gls{sot}, let alone \gls{mot}. Consequently, it is of paramount importance for the search branch to have as much information as possible about the exemplar during the computation of the response map for better discrimination. By the same token, the exemplar features may be enhanced by information from the search features. To this end, cross-attention serves this purpose reliably at an acceptable computational cost.
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/siammot_attention_training.pdf}} \caption[\gls{siammot} with attention]{Our proposal to incorporate attention into the \gls{siammot} pipeline.
This diagram shows the relationships between the individual parts of the framework during the training phase.} \label{fig:SiamMOTWithAttention} \end{figure}
% ------------------------------------------------------------------------------
Considering their contribution and the promising outcomes for \gls{sot} demonstrated on the \gls{siamrpn} framework (\sectiontext{}~\ref{ssec:TrackingUsingSiameseNetworks},~p.~\pageref{ssec:TrackingUsingSiameseNetworks}), we decided to implement their proposed module into the \gls{siammot} tracker as described in their paper (\figtext{}~\ref{fig:SiamMOTWithAttention}). However, with the prospect of greater improvement, we adopted modulated \glspl{dcnn} instead of pure \glspl{dcnn}, because \glspl{mdcnn} retain all the advantages of the standard deformable convolution while additionally learning a modulation (weighting) for the individual elements of the feature map with the underlying features taken into account. For our purposes, this seemed to intensify the spatial attention effect: the deformable part not only chooses features using irregular sampling patterns, but the network is also allowed to weight them differently. We conjectured that such an extension would either have no dramatic effect or influence the performance only positively.
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/deformable_siamese_attention.pdf}} \caption[\gls{dsa} diagram]{The \gls{dsa} extension introduces two sub-modules for both the exemplar and the search branch. The self-attention is further divided into two operations, namely spatial and channel attention. The very same attention network is used to process both features independently.
Notice how the channel attention is computed only once as part of the self-attention process and then directly fused with the channel self-attention of the other branch, creating the cross-attention effect, which is the strongest of all three according to the authors. \externalsrc{\cite{yu2021dsa}}} \label{fig:DeformableSiameseAttention} \end{figure}
% ------------------------------------------------------------------------------
\subsubsection{Self-Attention}
Self-attention is computed on the exemplar and the search branch independently. This operation can easily be executed since the exemplar and search tensors differ only in width and height. The following description of the self-attention computation conforms to the established attention principles regarding ``queries, keys and values'' introduced in \sectiontext{}~\ref{ssec:Attention}. For a better understanding of the computation, see the diagram in \figtext{}~\ref{fig:DeformableSiameseAttention}. Let $\mtx{X} \in \setrnnn{C}{H}{W}$ be the input features. To produce the query features $\mtx{Q}$ and the key features $\mtx{K}$, such that $\mtx{Q}, \mtx{K} \in \setrnnn{C^{\prime}} {H}{W}$ and $C^{\prime} = \frac{1}{4}C$, where $C^{\prime}$ is the reduced number of channels, two separate $1 \times 1$ convolution layers are applied. The obtained features are then reshaped into $\mtx{\bar{Q}}, \mtx{\bar{K}} \in \setrnn{C^{\prime}}{N}$, where $N = H \times W$. The spatial self-attention $\mtxsubsup{A}{S}{S} \in \setrnn{N}{N}$ is produced via matrix multiplication and a column-wise softmax operation as \begin{equation} \label{eq:SpatialSelfAttention} \mtxsubsup{A}{S}{S} = \func{softmax_{col}}{\mtx{\bar{Q}}^T \mtx{\bar{K}}}. \end{equation} The authors used $C^{\prime} = \frac{1}{8}C$, but in \gls{mot}, the number of objects to track is often much greater, and the computation graph grows dramatically with the number of channels.
Furthermore, in our case, $C = 128$ (by \gls{siammot} design), and we considered $32$ attention channels to be the bare minimum. Meanwhile, an analogous sequence of operations is adopted to produce the value features: a $1 \times 1$ convolution layer, this time without channel reduction, transforms the input features $\mtx{X}$ into the value features, which are reshaped into $\mtx{\bar{V}} \in \setrnn{C}{N}$. At this point, we have matched the queries with the keys and computed the values. We may proceed to the weighted selection from the values and incorporate the attention into the features as follows: \begin{equation} \label{eq:SpatialSelfAttentionApplication} \mtxsubsup{\bar{X}}{S}{S} = \alpha \mtx{\bar{V}} \mtxsubsup{A}{S}{S} + \mtx{\bar{X}}, \end{equation} where $\alpha$ is a learnable scalar parameter, $\mtx{\bar{X}} \in \setrnn{C}{N}$ denotes the input features reshaped accordingly, and $\mtxsubsup{\bar{X}}{S}{S} \in \setrnn{C}{N}$. The outputs $\mtxsubsup{\bar{X}}{S}{S}$ are then reshaped back to the original size, specifically $\mtxsubsup{X}{S}{S} \in \setrnnn{C}{H}{W}$. In our experience, the parameter $\alpha$ is very useful for training stabilization. The corresponding channel self-attention $\mtxsubsup{A}{C}{S}$ and the channel-wise attentional features $\mtxsubsup{X}{C}{S}$ are obtained similarly. Due to space limitations, and because the upcoming formulation of the cross-attention exploits the channel self-attention, we omit a detailed description. We merely point out that the ``queries, keys and values'' for the channel self-attention are produced directly from the input features, with no $1 \times 1$ convolutions whatsoever. The final self-attentional features are generated by an element-wise sum of the partial spatial and channel self-attentions, $\mtxsubsup{X}{S}{S}$ and $\mtxsubsup{X}{C}{S}$, respectively.
\subsubsection{Cross-Attention}
Let $\mtx{Z} \in \setrnnn{C}{h}{w}$ and $\mtx{X} \in \setrnnn{C}{H}{W}$ denote the exemplar and search region features, respectively.
The following description introduces the computation of the cross-attention from the perspective of the search branch. First, the exemplar features $\mtx{Z}$ are reshaped into $\mtx{\bar{Z}} \in \setrnn{C}{n}$, where $n = h \times w$. Then, the cross-attention from the exemplar branch is computed. We emphasize that the channel attention is reused; therefore, the computation below also serves as a recipe for computing the channel self-attention. We compute the channel cross-attention $\mtxsup{A}{C} \in \setrnn{C}{C}$ as \begin{equation} \label{eq:CrossAttentionCalc} \mtxsup{A}{C} = \func{softmax_{row}}{\mtx{\bar{Z}} \mtx{\bar{Z}}^T}. \end{equation} The real benefit comes from the merging stage, where the attention computed above is merged with the features of the other branch, in this case the search branch, as \begin{equation} \label{eq:CrossAttentionMerge} \mtxsup{\bar{X}}{C} = \gamma \mtxsup{A}{C} \mtx{\bar{X}} + \mtx{\bar{X}}, \end{equation} where $\gamma$ is a learnable scalar parameter. The merged features $\mtxsup{\bar{X}}{C}$ once again have to be reshaped, so the features $\mtxsup{X}{C} \in \setrnnn{C}{H}{W}$ constitute the final output. Finally, the self-attentional features are combined with the cross-attentional features using an element-wise sum. The cross-attention from the perspective of the exemplar branch can be obtained using a similar sequence of operations. In total, there are six steps that involve addition for the purpose of feature merging (\figtext{}~\ref{fig:DeformableSiameseAttention}). Once the attention is applied, the corresponding response map is altered as expected: the discriminative power of the tracker is enhanced by an appropriate suppression of the semantic background. As \figtext{}~\ref{fig:DSAAttentionVisualization} shows, the activations in the search regions, as viewed through the response map, differ when the \gls{dsa} module is included in the computation, making the tracker less prone to drifting to the scene or to semantic background objects.
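The chain of reshapes and matrix products above is easy to get wrong, so we summarize the spatial self-attention (\eqtext{}~\ref{eq:SpatialSelfAttention}, \eqtext{}~\ref{eq:SpatialSelfAttentionApplication}) and the channel cross-attention (\eqtext{}~\ref{eq:CrossAttentionCalc}, \eqtext{}~\ref{eq:CrossAttentionMerge}) in a NumPy sketch. The $1 \times 1$ convolutions are mimicked by random projection matrices, the value projection is replaced by the identity, and all sizes are toy assumptions rather than the actual tracker configuration:

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
C, H, W, h, w = 16, 6, 6, 3, 3
C_r = C // 4                       # reduced channel count C' = C / 4
X = rng.normal(size=(C, H * W))    # search features, already reshaped to C x N
Z = rng.normal(size=(C, h * w))    # exemplar features, reshaped to C x n

# --- Spatial self-attention on the search branch ---
Wq = rng.normal(size=(C_r, C))     # stand-in for the 1x1 query convolution
Wk = rng.normal(size=(C_r, C))     # stand-in for the 1x1 key convolution
Q_bar, K_bar = Wq @ X, Wk @ X      # C' x N each
V_bar = X                          # toy identity stand-in for the value projection
A_s = softmax(Q_bar.T @ K_bar, axis=0)   # N x N, column-wise softmax
alpha = 1.0                        # learnable scalar in the real model
X_s = alpha * (V_bar @ A_s) + X    # C x N spatial self-attentional features

# --- Channel cross-attention from the exemplar into the search branch ---
A_c = softmax(Z @ Z.T, axis=1)     # C x C, row-wise softmax
gamma = 1.0                        # learnable scalar in the real model
X_c = gamma * (A_c @ X) + X        # merged into the search features

print(X_s.shape, X_c.shape)  # (16, 36) (16, 36)
```

Both outputs retain the $C \times N$ shape of the input, which is what allows the attentional features to be reshaped back to $C \times H \times W$ and fused by element-wise summation as described above.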
\subsubsection{Deformable Convolution Phase}
The attention is finalized by processing the obtained feature tensors with another layer of modulated deformable convolution. We replaced the plain deformable convolution with its modulated counterpart, which is our modification with respect to the original formulation (\figtext{}~\ref{fig:DeformableSiameseAttention}, top yellow boxes). The resulting features, whose shape is identical to the input shape, are used to compute the response map. As a result, this extension can easily be integrated into an existing pipeline thanks to its ability to preserve tensor shapes.
% ------------------------------------------------------------------------------
\begin{figure}[t] \centerline{\includegraphics[width=0.5\linewidth]{figures/siamese_tracking/dsa_attention_visualization.pdf}} \caption[\gls{dsa} attention visualization]{Visualization of response (confidence) maps. The first column represents the search image, the second column represents the activation levels without the \gls{dsa} module, whereas the third column clearly demonstrates the improved target-background discriminability in the computed attentional features. \externalsrc{\cite{yu2021dsa}}} \label{fig:DSAAttentionVisualization} \end{figure}
% ------------------------------------------------------------------------------
% ##############################################################################
\subsection{Experimental Evaluation and Discussion}
\label{ssec:DSAExperimentalEvaluation}
The inclusion of the \gls{dsa} module substantially increased the consumption of \gls{gpu} \gls{vram} during training, as each proposal requires its corresponding attentional features. The number of proposals is $256$ by default, but we decreased it to $160$ to allow for bigger minibatches. We used this lower number of proposals in all the experiments so that as many hyperparameter settings as possible were shared between them.
In contrast, the inference phase is not as strongly affected, since the number of ``computed attentions'' is given by the number of tracked objects. Unlike the original model, which allowed a batch size of $24$, the \dsamodel{} model utilized at most $4$. As \figtext{}~\ref{fig:SiamMOTWithAttention} shows, this extension is directly incorporated into the architecture, right before applying the cross-correlation. We trained the entire model in an end-to-end fashion. We also tried to train the architecture in two stages, \ietext{}, first freezing the attention while training the rest of the model, and then freezing the rest while fine-tuning just the attention. However, this training regime was detrimental to the overall performance; thus, all the results discussed below are based on joint training. The following plots (\figtext{}~\ref{fig:OrigVsDSA_160RPN_MOTA_MOTP}, \figtext{}~\ref{fig:OrigVsDSA_160RPN_Prec_Rec}, \figtext{}~\ref{fig:OrigVsDSA_160RPN_GA_MOTA_MOTP}, and \figtext{}~\ref{fig:OrigVsDSA_160RPN_GA_Prec_Rec}) share the very same pattern. The original implementation with a minimum number of modifications (circles) is compared against the very same model extended with the \gls{dsa} module (squares). We chose $2$D diagrams to show the change in performance using two complementary metrics. One pair of metrics is \gls{mota} vs. \gls{motp} (the most important one), and the second pair is precision vs. recall. Models with a matching number of training iterations have the same color. Concerning the interpretation, the closer a particular data point lies to the upper-right corner, the better. Increasing \gls{mota} while increasing \gls{motp} is desired. The same applies to the \gls{pr} plot. However, we have to keep in mind that the \gls{motp} metric is evaluated only with respect to properly matched detections. If a tracker makes very few predictions, its \gls{motp} score may be high.
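For reference, the two primary metrics can be computed as follows. This is a hedged sketch of the standard CLEAR MOT definitions; the argument names are illustrative, and whether the per-match value is a distance or an overlap-based quantity depends on the evaluation protocol.

```python
def clear_mot_scores(fn, fp, idsw, gt, match_dists):
    """CLEAR MOT scores (a sketch; argument names are illustrative).

    fn, fp, idsw, gt -- per-frame counts of false negatives, false positives,
                        identity switches, and ground-truth objects
    match_dists      -- one value per properly matched detection
    """
    # MOTA aggregates all three error types relative to the ground-truth count.
    mota = 1.0 - (sum(fn) + sum(fp) + sum(idsw)) / sum(gt)
    # MOTP averages only over matched detections -- hence a tracker that
    # predicts very little can still obtain a favorable MOTP score.
    motp = sum(match_dists) / len(match_dists)
    return mota, motp
```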
The official \gls{clear} evaluation (\sectiontext{}~\ref{sec:EvaluatingMOT},~p.~\pageref{sec:EvaluatingMOT}) prioritizes \gls{mota} and \gls{motp} scores above other scores (\tabletext{}~\ref{tab:OtherCLEARMetrics},~p.~\pageref{tab:OtherCLEARMetrics}). In addition, the \gls{mota} score has the leading priority within the ranking. \gls{pr} plots are our subjective choice for examining the object detection performance more deeply. \figtext{}~\ref{fig:OrigVsDSA_160RPN_MOTA_MOTP} shows how the \gls{dsa} extension affects the tracking as measured by \gls{mota} and \gls{motp}. As we can see, at $15\ 000$ training iterations, the best performance is reached with our extension included. This result is not surpassed by any other model. We believe that the model starts to overfit the training set after $15\ 000$ iterations. A noteworthy observation is the existence of clusters. Except for the best performance, the inclusion of \gls{dsa} probably makes the model more conservative, so it captures fewer objects. This claim is further supported in \figtext{}~\ref{fig:OrigVsDSA_160RPN_Prec_Rec}, in which the value of recall is often lower, except for the best data point. Nevertheless, we can observe that the best performance with our extension maintained its position on the \gls{pr} plot compared to the baseline model. Thus, at $15\ 000$ iterations, the \gls{dsa} extension achieves the best combination of \gls{mota} and \gls{motp} while maintaining the precision and recall scores. % ------------------------------------------------------------------------------ \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/tracker_cmp_160_2x12_vs_160_2x2_DSA_MOTA_MOTP.pdf}} \caption[\gls{dsa} evaluation - primary metrics]{Comparison of the baseline model (circles) against the \dsamodel{} model (squares) over the entire training lifetime using a complementary pair of \gls{mota} and \gls{motp} metrics.
The top-performing \dsamodel{} model (purple square) shows considerable improvement at this benchmark. The existence of multiple clusters depicts the overall effect of the attention mechanism upon the tracker, which in this case is slightly inferior. However, the batch size for the original model is $24$ whilst for the \dsamodel{} version it is just $4$.} \label{fig:OrigVsDSA_160RPN_MOTA_MOTP} \end{figure} % ------------------------------------------------------------------------------ % ------------------------------------------------------------------------------ \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/tracker_cmp_160_2x12_vs_160_2x2_DSA_rec_prec.pdf}} \caption[\gls{dsa} evaluation - secondary metrics]{Comparison of the baseline model (circles) against the \dsamodel{} model (squares) over the entire training lifetime using a well-known \gls{pr} plot that is often used to evaluate object detectors. The top-performing \dsamodel{} model is only capable of maintaining its object detection abilities compared to the baseline model. We can also observe the existence of clusters reflecting the effect of attention. However, the batch size for the original model is $24$ whilst for the \dsamodel{} version it is just $4$.} \label{fig:OrigVsDSA_160RPN_Prec_Rec} \end{figure} % ------------------------------------------------------------------------------ To make our analysis more complete, we have to take into account the difference in batch size. Due to the insufficient memory of our two \glspl{gpu}, we employed the already introduced \gls{ga}. Mathematically speaking, applying \gls{ga} is similar to a multi-\gls{gpu} setup. However, we know that batch normalization layers are particularly sensitive to small minibatches, so in practice, this claim does not hold. To remedy this, we adopted ``frozen'' versions of the batch normalization layers.
Thus, these layers maintained their weights from the pre-training phase on the ImageNet dataset. Performance issues aside, we wanted to compare how our extension would perform with the same batch size, regardless of how inferior its performance would be. The relative difference was our main concern. The number of training iterations was higher due to having minibatches containing just a single pair of frames for siamese training. These minibatches were then accumulated $12$ times. As \figtext{}~\ref{fig:OrigVsDSA_160RPN_GA_MOTA_MOTP} demonstrates, using bigger minibatches favors the performance of our model the most. Even though we were unable to replicate the training of the original authors, who used eight powerful \texttt{NVIDIA V100} \glspl{gpu}, we believe that the \gls{dsa} performance would maintain its edge over the baseline model. Not only does the expansion of minibatches yield better \gls{mota} and \gls{motp} combinations, but it also creates a big cluster of data points in the upper-right corner of the \gls{pr} plot (\figtext{}~\ref{fig:OrigVsDSA_160RPN_GA_Prec_Rec}). We consider this result significantly better compared to our previous experiment, in which we just managed to maintain the \gls{pr} performance, not improve it. Additionally, training with \gls{ga} is more accessible to ordinary users, as they may not have at least $11$GB of \gls{gpu} \gls{vram} available. For this configuration, $6$GB would suffice. The only downside is the threefold increase in training time. % ------------------------------------------------------------------------------ \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/tracker_cmp_160_2x12_vs_160_2x2_DSA_GA_MOTA_MOTP.pdf}} \caption[\gls{dsa} evaluation with \gls{ga} - primary metrics]{Comparison of the baseline model (circles) against the \dsamodel{} model (squares) when using \gls{ga} with batch size of $24 = 12 \times 2$.
The adoption of the \gls{dsa} module shows significant improvement over the original model owing to bigger minibatches. It dominates the upper-right corner thanks to the best combinations of \gls{mota} and \gls{motp} performance scores.} \label{fig:OrigVsDSA_160RPN_GA_MOTA_MOTP} \end{figure} % ------------------------------------------------------------------------------ % ------------------------------------------------------------------------------ \begin{figure}[t] \centerline{\includegraphics[width=\linewidth]{figures/siamese_tracking/tracker_cmp_160_2x12_vs_160_2x2_DSA_GA_rec_prec.pdf}} \caption[\gls{dsa} evaluation with \gls{ga} - secondary metrics]{Comparison of the baseline model (circles) against the \dsamodel{} model (squares) when using \gls{ga} with batch size of $24 = 12 \times 2$. Although these results are not what typical object detection \gls{pr} plots show, we can still observe that the \gls{dsa} yields a group of points in the upper-right corner, showing a consistent gain in performance.} \label{fig:OrigVsDSA_160RPN_GA_Prec_Rec} \end{figure} % ------------------------------------------------------------------------------ \tabletext{}~\ref{tab:OrigVsDSAScores} summarizes the best scores for an exact quantitative comparison, supporting the observations from the discussed plots. We can see that the \gls{dsa} extension brings improvement in all aspects. Although the third row shows performance closer to the baseline model, we believe the cause is the use of six times smaller minibatches. The setup involving \gls{ga} thus supports the claim that, had more powerful hardware been used, the performance gap could have been wider. \begin{table}[!t] \centering \begin{tabular}{lllllll} \toprule \textbf{model} & \textbf{batch size} & \textbf{grad.
accum.} & \textbf{MOTA} & \textbf{MOTP} & \textbf{precision} & \textbf{recall} \\ \midrule \dsamodel{} & $24$ & \checkmark & $0.7625$ & $0.1548$ & $0.9260$ & $0.8315$ \\ original & $24$ & \checkmark & $0.7429$ & $0.1533$ & $0.9137$ & $0.8230$ \\ \midrule \dsamodel{} & $4$ & & $0.7274$ & $0.1639$ & $0.8843$ & $0.8401$ \\ original & $24$ & & $0.7263$ & $0.1556$ & $0.8825$ & $0.8415$ \\ \bottomrule \end{tabular} \caption[\gls{dsa} extension performance table comparison]{Comparison of the top-performing models that were trained and tested on the \uadetrac{} dataset. The models were chosen in accordance with their \gls{mota} and \gls{motp} pair of scores, conforming to the official \gls{clear} rules. The middle separator divides the table into configurations with and without the use of \gls{ga}.} \label{tab:OrigVsDSAScores} \end{table} Overall, we can say that this type of experiment eventually led to the improvement of the underlying \gls{siammot} model. We consider this extension applicable since it consistently improves the tracking performance. However, there are increased \gls{gpu} \gls{vram} demands, especially during the training. The inference is not substantially affected unless the tracker is required to track hundreds of objects. For every additional object, the \gls{dsa} extension introduces a constant computational overhead. This model can run in inference mode on a laptop \gls{gpu} with $4$GB of memory. As for the change in \gls{fps}, the impact on performance is negligible as the real-time speed is preserved. Specifically, the original model inference runs on average at $26.49$ \gls{fps}, whereas the \dsamodel{} version achieves $24.16$ \gls{fps}, incurring an approximately $9$\% reduction in speed (more in \tabletext{}~\ref{tab:InferenceSpeedComparison}).
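The gradient-accumulation regime used in the experiments above (micro-batches of $2$ accumulated $12$ times into an effective batch of $24$) can be sketched as follows. This is a minimal sketch with illustrative names; the batch-normalization caveats discussed earlier still apply, since accumulation enlarges only the gradient estimate, not the per-forward-pass statistics.

```python
def sgd_step_with_accumulation(grad_fn, theta, micro_batches, lr):
    """One optimizer step whose gradient is accumulated over several
    micro-batches, approximating a single step on the concatenated batch."""
    acc = 0.0
    for mb in micro_batches:
        # Average the per-micro-batch gradients, as one large batch would.
        acc += grad_fn(theta, mb) / len(micro_batches)
    return theta - lr * acc
```

For losses that average over samples and equal-sized micro-batches, this is mathematically identical to one step on the full batch, which is why accumulation mimics a multi-\gls{gpu} setup at the cost of more iterations.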
\subsection{Required Steps} Adding a built-in function to the Go language requires a few more steps than just adding support within the compiler. While it would technically be enough to support the translation between Go code and the compiled binary, a developer would have no way of discovering that such a function exists and could be used. For a complete implementation, the following steps are necessary: \begin{itemize} \item Adding the Godoc\autocite{godoc} that describes the function and its usage \item Adding type-checking support in external packages for tools like Gopls\footnote{Gopls is Go's official language server implementation\autocite{gopls}.} \item Adding the implementation within the internal\footnote{ `internal' packages can only be imported by other packages that are rooted at the parent of the `internal' directory. This is used to make specific packages not importable in order to decrease the potential API surface\autocite{internal-packages}. } package of the compiler \begin{itemize} \item Adding the \gls{ast} node type \item Adding type-checking for that node type \item Adding the \gls{ast} traversal (`walk') for that node type, translating it to \gls{ast} nodes that the compiler already knows and can translate to built-in runtime calls or \gls{ssa} \end{itemize} \end{itemize} The Go source code that is relevant for this thesis can be classified into three different types. One is the Godoc --- the documentation for the new built-in functions. The other two are the `public' and the `private' implementation of these built-ins. The `private' implementation is located within the \textit{src/cmd/compile/internal} package\autocite{internal-packages}. Because it is an internal package, it can only be used by the packages in \textit{src/cmd/compile}, which contain the implementation of the compiler itself. When calling \begin{bashcode} $> go build . \end{bashcode} the compiler is invoked indirectly through the main `go' binary.
To invoke the compiler directly, \begin{bashcode} $> go tool compile \end{bashcode} can be used. Everything that is not in \textit{src/cmd/compile} is referred to as the `public' part of the compiler in this thesis. The `public' parts are used by external tools, for example Gopls, for type-checking, source code validation, and analysis. \subsection{Adding the Godoc} In Go, documentation is generated directly from comments within the source code \autocite{godoc}. This also applies to built-in functions in the compiler, which have a function stub to document their behaviour\autocite{godoc-builtin}, but no implementation, as that is done in the compiler\autocite{builtin-impl}. The documentation for built-ins should be as short and precise as possible. The usage of `Type' and `Type1' has been decided based on other built-ins like `append' and `delete'. The function headers are derived from their Haskell counterparts, adjusted to the Go nomenclature. \begin{code} \gofilerange{../work/go/src/builtin/builtin.go}{begin-newbuiltins}{end-newbuiltins}% \caption{Godoc for the new built-in functions\autocite{new-builtins-godoc}} \end{code} \subsection{Public packages} To enable tooling support for the new built-in functions, they have to be registered in the `go/*' packages. The only package that is affected by new built-ins is `go/types'. \begin{quote} Note that the `go/*` family of packages, such as `go/parser` and `go/types`, have no relation to the compiler.
Since the compiler was initially written in C, the `go/*` packages were developed to enable writing tools working with Go code, such as `gofmt` and `vet`.\autocite{compiler-readme} \end{quote} In the `types' package, the built-ins have to be registered as such and as `predeclared' functions: \begin{code} \gofilerange{../work/go/src/go/types/universe.go}{start-builtin}{end-builtin}% \gofilerange{../work/go/src/go/types/universe.go}{start-predeclared}{end-predeclared}% \caption{Registering new built-in functions\autocite{new-builtins-universe}} \end{code} This registration defines the type of the built-in --- they are all expressions, as they return a value --- and the number of arguments. After that, the type-checking and its associated tests have been implemented, but are not shown here. The implementation can be located in the `src/go/types' package in the files `builtins.go', `builtins\_test.go' and `universe.go'. See the git diff\autocite{ba-go1-14-thesis-diff} to view the changes that have been made. This concludes the type-checking for external tools. `gopls' can be compiled against these changed public packages\footnote{ See Appendix~\ref{appendix:build-gopls} for instructions on how to build Gopls. } and will then return errors if the wrong types are used. For example, when trying to prepend an integer to a string slice: \begin{gocode} package main import "fmt" func main() { fmt.Println(prepend(3, []string{"hello", "world"})) } \end{gocode} Gopls will report a type-checking error: \begin{bashcode} $ gopls check main.go /tmp/playground/main.go:6:22-23: cannot convert 3 (untyped int constant) to string \end{bashcode} \subsection{Private packages} \input{chapters/42_functions.tex}
\subsection{User Layer Driver Software} The user layer driver software implements an interface between the ARM top-level software and the driver for the programmable logic. It is implemented in C. It is supposed to handle the entire communication with the driver so that the hardware is only abstractly visible to the ARM top-level software.\\ The ARM top-level software sees the network as a Python class which has a method to load new image data, taking a numpy array as input and returning a finish signal as output. This method should call the user layer driver software, which handles the communication between user space and kernel space. In a similar way, each IP should be a class in Python. \\ Requirements of the user layer driver software: \begin{itemize} \item Communication with the kernel space drivers \item Use a Python wrapper to communicate with the ARM top-level software \item Easy-to-use interface from the top level \item No knowledge of the hardware should be necessary to use the interface \item Data encapsulation to prevent the top-level software from corrupting the memory \end{itemize} \subsubsection{File Tree of User Layer Driver Software} \todo{Would be nice if we have something similar as in \ref{SEC:ARM-TOP-SW-FILE-TREE}}
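The Python-class interface described in this section could look roughly like the following sketch. All names, including the device path, are hypothetical placeholders, and the FPGA computation is stubbed out; the real implementation would call into the C user layer driver instead.

```python
class NetworkAccelerator:
    """Hypothetical Python wrapper as seen by the ARM top-level software."""

    def __init__(self, device="/dev/eggnet"):  # device node name is an assumption
        self._device = device
        self._result = None

    def load_image(self, image):
        """Load new image data (a 2-D array) and return a finish signal.

        In the real system this would hand the buffer to the user layer
        driver, which forwards it to the kernel-space driver; here the FPGA
        computation is replaced by a plain Python placeholder.
        """
        self._result = sum(map(sum, image))  # placeholder for the IP output
        return True  # 'finish' signal

    def result(self):
        return self._result
```

This keeps the hardware abstract, as required: the top-level software only ever touches the class methods, never the device node directly.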
\chapter{PLCore \ac{RTTI} Classes} \section{PLCore::Object Class} \subsection{Methods} \paragraph{<bool> IsInstanceOf(<string>)} Write \emph{<RTTI object>:IsInstanceOf(<string>)} in order to check whether the object is an instance of a given class. Class name (with namespace) as first parameter. Returns \emph{true} if the object is an instance of the class or one of its derived classes, else \emph{false}. \paragraph{SetAttribute(<string>, <string>)} Write \emph{<RTTI object>:SetAttribute(<string>, <string>)} in order to set an attribute value. Attribute name as first parameter, attribute value as second parameter. \paragraph{SetAttributeDefault(<string>)} Write \emph{<RTTI object>:SetAttributeDefault(<string>)} in order to set an attribute to its default value. Attribute name as first parameter. \paragraph{CallMethod(<string>, <string>)} Write \emph{<RTTI object>:CallMethod(<string>, <string>)} in order to call a method. Method name as first parameter, parameters as string (e.g. ``Param0='x' Param1='y' '') as second parameter. \paragraph{SetValues(<string>)} Write \emph{<RTTI object>:SetValues(<string>)} in order to set multiple attribute values as a string at once. String containing attributes and values as first parameter (e.g. ``Name='Bob' Position='1 2 3' ''). \paragraph{SetDefaultValues()} Write \emph{<RTTI object>:SetDefaultValues()} in order to set all attributes to their defaults. \paragraph{<string> ToString()} Write \emph{<RTTI object>:ToString()} in order to get the object as string. Returns string representation of object. \paragraph{FromString(<string>)} Write \emph{<RTTI object>:FromString(<string>)} in order to set the object from string. String representation of object as first parameter. \subsection{Signals} \paragraph{SignalDestroyed} Object destroyed signal. When this signal is emitted the object is already in the destruction phase and parts may already be invalid. It is best to, e.g., only update our object pointer.
\section{PLCore::CoreApplication Class} \subsection{Methods} \paragraph{<RTTI object> GetApplicationContext()} Write \emph{<RTTI object>:GetApplicationContext()} in order to receive the application context. \paragraph{Exit(<integer>)} Write \emph{<RTTI object>:Exit(<integer>)} in order to exit the application. Return code for application as first parameter (usually 0 means no error). \section{PLCore::FrontendApplication Class} \subsection{Methods} \paragraph{<RTTI object> GetFrontend()} Write \emph{<RTTI object>:GetFrontend()} in order to get the frontend this application is running in. \section{PLCore::Frontend Class} \subsection{Methods} \paragraph{Redraw()} Write \emph{<RTTI object>:Redraw()} in order to redraw the frontend. \paragraph{Ping()} Write \emph{<RTTI object>:Ping()} in order to give the frontend a chance to process \ac{OS} messages. \paragraph{RedrawAndPing()} Write \emph{<RTTI object>:RedrawAndPing()} in order to redraw the frontend and give the frontend a chance to process \ac{OS} messages. \paragraph{<string> GetTitle()} Write \emph{<RTTI object>:GetTitle()} in order to receive the frontend title. \paragraph{SetTitle(<string>)} Write \emph{<RTTI object>:SetTitle(<string>)} in order to set the frontend title. \paragraph{<integer> GetX()} Write \emph{<RTTI object>:GetX()} in order to receive the x position of the frontend (in screen coordinates). \paragraph{<integer> GetY()} Write \emph{<RTTI object>:GetY()} in order to receive the y position of the frontend (in screen coordinates). \paragraph{<integer> GetWidth()} Write \emph{<RTTI object>:GetWidth()} in order to receive the width of the frontend. \paragraph{<integer> GetHeight()} Write \emph{<RTTI object>:GetHeight()} in order to receive the height of the frontend. \paragraph{<bool> GetToggleFullscreenMode()} Write \emph{<RTTI object>:GetToggleFullscreenMode()} in order to request whether it's allowed to toggle the fullscreen mode using hotkeys.
\emph{true} if it's possible to toggle the fullscreen mode using hotkeys, else \emph{false}. \paragraph{SetToggleFullscreenMode(<bool>)} Write \emph{<RTTI object>:SetToggleFullscreenMode(<bool>)} in order to set whether it's allowed to toggle the fullscreen mode using hotkeys. \emph{true} as first parameter to allow it, else \emph{false}. \paragraph{<bool> GetFullscreenAltTab()} Write \emph{<RTTI object>:GetFullscreenAltTab()} in order to request whether it's allowed to use Alt-Tab if fullscreen mode is used. \emph{true} if it's possible to use Alt-Tab if fullscreen mode is used, else \emph{false}. \paragraph{SetFullscreenAltTab(<bool>)} Write \emph{<RTTI object>:SetFullscreenAltTab(<bool>)} in order to set whether it's allowed to use Alt-Tab if fullscreen mode is used. \emph{true} as first parameter to allow it, else \emph{false}. \paragraph{<bool> IsFullscreen()} Write \emph{<RTTI object>:IsFullscreen()} in order to request whether or not the frontend is currently fullscreen. Returns \emph{true} if the frontend is currently fullscreen, else \emph{false}. \paragraph{SetFullscreen(<bool>)} Write \emph{<RTTI object>:SetFullscreen(<bool>)} in order to set whether or not the frontend is currently fullscreen. \emph{true} as first parameter if the frontend is currently fullscreen, else \emph{false}. \paragraph{<bool> IsMouseOver()} Write \emph{<RTTI object>:IsMouseOver()} in order to request whether or not the mouse cursor is currently over the frontend. Returns \emph{true} if the mouse cursor is currently over the frontend, else \emph{false}. \paragraph{<integer> GetMousePositionX()} Write \emph{<RTTI object>:GetMousePositionX()} in order to request the current mouse cursor X position inside the frontend, negative value if the mouse cursor isn't currently over the frontend.
\paragraph{<integer> GetMousePositionY()} Write \emph{<RTTI object>:GetMousePositionY()} in order to request the current mouse cursor Y position inside the frontend, negative value if the mouse cursor isn't currently over the frontend. \paragraph{<bool> IsMouseVisible()} Write \emph{<RTTI object>:IsMouseVisible()} in order to request whether or not the mouse cursor is currently visible. Returns \emph{true} if the mouse cursor is currently visible, else \emph{false}. \paragraph{SetMouseVisible(<bool>)} Write \emph{<RTTI object>:SetMouseVisible(<bool>)} in order to set the mouse cursor visibility. \emph{true} as first parameter if the mouse cursor shall be visible. \paragraph{SetTrapMouse(<bool>)} Write \emph{<RTTI object>:SetTrapMouse(<bool>)} in order to trap the mouse inside the frontend. \emph{true} as first parameter if the mouse should be trapped inside the frontend, else \emph{false}. \section{PLCore::ApplicationContext Class} \subsection{Methods} \paragraph{<string> GetExecutableFilename()} Write \emph{<RTTI object>:GetExecutableFilename()} in order to receive the absolute path of application executable (native path style, e.g. on Windows: "C:\textbackslash MyApplication\textbackslash x86\textbackslash Test.exe"). \paragraph{<string> GetExecutableDirectory()} Write \emph{<RTTI object>:GetExecutableDirectory()} in order to receive the directory of application executable (native path style, e.g. on Windows: "C:\textbackslash MyApplication\textbackslash x86"). \paragraph{<string> GetAppDirectory()} Write \emph{<RTTI object>:GetAppDirectory()} in order to receive the directory of application (native path style, e.g. on Windows: "C:\textbackslash MyApplication"). \paragraph{<string> GetStartupDirectory()} Write \emph{<RTTI object>:GetStartupDirectory()} in order to receive the current directory when the application constructor was called (native path style, e.g. on Windows: "C:\textbackslash MyApplication\textbackslash x86"). 
\paragraph{<string> GetLogFilename()} Write \emph{<RTTI object>:GetLogFilename()} in order to receive the absolute path to log file, empty if log has not been opened (native path style). \paragraph{<string> GetConfigFilename()} Write \emph{<RTTI object>:GetConfigFilename()} in order to receive the absolute path to config file, empty if no config is used (native path style).
%%============================ %% Appendix 4: List of Symbols %%============================ \documentclass[../dissertation.tex]{subfiles} \begin{document} %%===================== %% Fundamental Notation %%===================== \def\tabletitle{Fundamental Notation} \subsection{\tabletitle} \begin{indextable}{\tabletitle} $:=$ & defined to be; for example $a := b$ means ``$a$ is defined to be $b$'' & p.\pageref{sym0:def} \\ $\pv$ & Cauchy principal value; given by { \begin{teqn} \pv\int_{\mathbb R} f(x) \, \mathrm{d}x := \lim_{\varepsilon \searrow 0} \int_{|x|>\varepsilon} f(x) \, \mathrm{d}x \end{teqn} } & p.\pageref{sym0:pv} \\ \textit{a.e.} & abbreviation for almost everywhere & p.\pageref{sym:ae}\\ $\mathbb R$ & the set of all real numbers & p.\pageref{sym:Reals} \\ $\mathbb C$ & the set of all complex numbers & p.\pageref{sym:Complex} \\ $\mathbb Z$ & the set of all integers & \\ $\chi_A$ & characteristic function on a set $A$; given by {\begin{teqn} \chi_A(x) := \begin{cases} 1, & x \in A \\ 0, & \text{otherwise} \end{cases} \end{teqn}} & p.\pageref{sym:chi} \\ $m(\dotarg)$ & Lebesgue measure on $\mathbb R$ & p.\pageref{sym:lebesguemeasure} \\ $\mathcal F$, $\hat{(\dotarg)}$ & Fourier transform defined as \begin{teqn} \hat f(\xi) := \(\mathcal F f\)(\xi) = \int_{\mathbb R} e^{-i x \xi} f(x) \, \mathrm{d}x \end{teqn} & p.\pageref{sym:fourier} \\[-1\baselineskip] $\mathcal F^{-1}$, $\check{(\dotarg)}$ & inverse Fourier transform defined as \begin{teqn} \check g(x) := \(\mathcal F^{-1}g\)(x) = \frac{1}{2 \pi} \int_{\mathbb R} e^{i x \xi} g(\xi) \, \mathrm{d}\xi \end{teqn} & p.\pageref{sym:fourier} \\ $\mathscr S(\mathbb R)$ & space of all Schwartz class functions on $\mathbb R$ & p.\pageref{sym3:schwartz} \\ $\Res_{z=c}f$ & complex residue of a function $f$ at the pole $z = c$ & p.\pageref{sym1:res} \\ $(\dotarg\pm i0)$ & implied limit of $(\dotarg \pm i \varepsilon)$ as $\varepsilon \searrow 0$ & p.\pageref{sym:i0} \\ $f^+$ & lower boundary $f^+(x) :=
\lim_{y\searrow0} f(x+ i y)$ of a function $f$ analytic on $S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$f^-$ & upper boundary $f^-(x) := \lim_{y\nearrow\delta} f(x+ i 2y)$ of a function $f$ analytic on $S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$\lesssim$ & $q \lesssim s$ means there exists some fixed constant $C$ so that $q \leq C\,s$; the constant $C$ is commonly referred to as ``the implied constant'' & p.\pageref{sym:lesssim} \\
$\lesssim_k$ & $q \lesssim_k s$ means there exists some constant $C := C(k)$ depending only on the parameter $k$ so that $q \leq C \, s$; the constant $C$ is commonly referred to as ``the implied constant'' & p.\pageref{sym2:lesssimdep} \\
$\log_+ t$ & the function given by $\max\big\{ 0, \, \log(t) \big\}$ & p.\pageref{sym:logplus} \\
$\inn{x}$ & short-hand notation for $\big(1 + |x|^2\big)^{1/2}$ & p.\pageref{sym:xbracket} \\
$L^{p,s}(\mathbb R)$ & space of measurable functions with
\begin{teqn} \|f\|_{L^{p,s}} := \left(\int_{\mathbb R} \inn{x}^{sp} |f(x)|^p \, \mathrm{d}x \right)^{1/p} < \infty \end{teqn}
& p.\pageref{defn2:Lps} \\
$\langle\dotarg\rangle L^\infty(\mathbb R)$ & space of measurable functions with
\[ \|f\|_{\inn{\dotarg} L^\infty} := \esssup_{x\in \mathbb R} \left| \inn{x}^{-1} f(x) \right| < \infty \]
& p.\pageref{defn2:wLp} \\
$B_Y(y_0, r)$ & the open ball $\{ y \in Y ~:~ \|y - y_0\|_Y < r \}$ in the metric space $Y$ with radius $r$ centered at $y_0 \in Y$ & p.\pageref{sym:ball} \\
$Y\to Z$ & a map from a space $Y$ to a space $Z$ & p.\pageref{sym:mapsto} \\
$Y \toitself$ & a map from a space $Y$ into itself & p.\pageref{sym:toitself} \\
$\|\dotarg\|_{Y\to Z}$ & the induced operator norm for an operator with domain $Y$ and co-domain $Z$ & p.\pageref{sym:opnorm}
\end{indextable}
\newpage
%===================
% Chapter 1 Notation
%===================
\def\tabletitle{Chapter 1 Notation} % \def\tabletitle{Chapter 0 Notation}
\subsection{\tabletitle}
\begin{indextable}{\tabletitle}
$\delta$ & depth
of stratified fluids\textemdash{}typically taken to be $\delta=1$ & p.\pageref{sym:delta} \\
$\mathcal S_\delta$ & the complex strip $\{z \in \mathbb C ~:~ 0 < \im z < 2\delta\}$ & p.\pageref{sym:Sdelta} \\
$f^+$ & lower boundary $f^+(x) := \lim_{y\searrow0} f(x+ i y)$ of a function $f$ analytic on $\mathcal S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$f^-$ & upper boundary $f^-(x) := \lim_{y\nearrow\delta} f(x+ i 2y)$ of a function $f$ analytic on $\mathcal S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$L_\delta$ & operator on functions analytic in the complex strip $\mathcal S_\delta$; given by
{ \begin{teqn} L_\delta (\Psi) := \frac{1}{i} \frac{\partial}{\partial x} \Psi^+ - \zeta \left(\Psi^+ - \Psi^-\right) = u \Psi^+ \end{teqn} }
& p.\pageref{eq0:SpecProb} \\
$\lambda$ & a spectral parameter for the linear spectral problem \eqref{eq0:SpecProb} & p.\pageref{sym:zeta} \\
$\zeta$ & a spectral parameter for \eqref{eq0:SpecProb} commonly parameterized by $\lambda$ as $\displaystyle \zeta(\lambda; \delta) = \frac{\lambda}{1-e^{-2\delta\lambda}}$ & p.\pageref{sym:zeta} \\
$\lambda(\zeta)$ & inverse of the map $\lambda \to \zeta(\lambda)$ & p.\pageref{sym:lambda} \\
$\inn{x}$ & short-hand notation for $\big(1 + |x|^2\big)^{1/2}$ & p.\pageref{sym:xbracket} \\
$B_Y(y_0, r)$ & the open ball $\{ y \in Y ~:~ \|y - y_0\|_Y < r \}$ in the metric space $Y$ with radius $r$ centered at $y_0 \in Y$ & p.\pageref{sym:ball} \\
$M_1$, $M_e$, $N_1$, $N_e$ & depending on context, either Jost solutions or analytic extensions of solutions to the integral equations \eqref{eq0:JostIE} & p.\pageref{defn0:jost}, p.\pageref{eq0:JostIE} \\
$M_1^+$, $M_e^+$, $N_1^+$, $N_e^+$ & depending on context, either solutions to the integral equations \eqref{eq0:JostIE} or lower boundary values of the Jost solutions & p.\pageref{eq0:JostIE}, p.\pageref{defn0:jost} \\
$r(\lambda; \delta)$ & reflection coefficient; given by
{ \begin{teqn} r(\lambda; \delta) =
\frac{b(\lambda; \delta)}{a(\lambda; \delta)}, \end{teqn}
where
\begin{talign}
b(\lambda) &:= \frac{i}{1-2\delta\zeta(-\lambda)} \int_{\mathbb R} e^{-i\lambda x} u(x) \, M_1^+(x; \lambda,\delta) \, \mathrm{d}x \\
a(\lambda) &:= 1 + \frac{i}{1-2\delta \zeta(\lambda)} \int_{\mathbb R} u(x) \, M_1^+(x; \lambda,\delta) \, \mathrm{d}x
\end{talign} }
& p.\pageref{sym0:reflection} \\
$\mathscr D$ & the direct scattering map for the Intermediate Long Wave (ILW) equation; maps ILW initial data $u$ to the corresponding reflection coefficient $r$ & p.\pageref{sym0:DSM} \\
$G_L$, $G_R$ & formal Green's functions corresponding to the linear spectral problem \eqref{eq0:SpecProb} & p.\pageref{eq0:GFL}, p.\pageref{eq0:GFR}\\
$\alpha(\lambda; \delta)$ & residue of $e^{iz\xi}/p(\xi)$ at the $\xi=0$ pole; given by
\begin{teqn} \alpha(\lambda; \delta) = \frac{1}{1-2\delta\zeta(\lambda; \delta)} \end{teqn}
& p.\pageref{sym0:residues} \\
$\beta(\lambda; \delta)$ & $e^{iz\lambda}$ times the residue of $e^{iz\xi}/p(\xi)$ at the $\xi=\lambda$ pole; given by
\begin{teqn} \beta(\lambda; \delta) = \frac{1}{1-2\delta\zeta(-\lambda; \delta)} \end{teqn}
& p.\pageref{sym0:residues} \\
$K^+$ & non-residue term resulting from shifting the integration contours of $G_L^+$ and $G_R^+$ & p.\pageref{sym0:K} \\
$T_{\star, \lambda, u}$ & bounded operators on $\inn{\dotarg}L^\infty(\mathbb R)$ given by
\begin{teqn} T_{\star, \lambda, u} f(x) := \big[ G_\star^+(\dotarg; \lambda) \big] * (u\,f)(x), \end{teqn}
where $\star = L$ or $R$ & p.\pageref{eq0:Tstar}
\end{indextable}
\newpage
%===================
% Chapter 2 Notation
%===================
\def\tabletitle{Chapter 2 Notation} % \def\tabletitle{Chapter 1 Notation}
\subsection{\tabletitle}
\begin{indextable}{\tabletitle}
$\mathscr D$ & the direct scattering map for the Intermediate Long Wave (ILW) equation; maps ILW initial data $u$ to the corresponding reflection coefficient $r$ & p.\pageref{sym0:DSM} \\
$\star$ & used as a placeholder for both $L$ and
$R$; for example, if a statement contains the notation ``$G_\star$ ($\star = L \text{ or } R$),'' then it is equally true (or not true) for both $G_L$ and $G_R$ & p.\pageref{rmk1:StarNotation} \\
$f^+$ & lower boundary $f^+(x) := \lim_{y\searrow0} f(x+ i y)$ of a function $f$ analytic on $S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$f^-$ & upper boundary $f^-(x) := \lim_{y\nearrow\delta} f(x+ i 2y)$ of a function $f$ analytic on $S_\delta$, where $x, y \in \mathbb R$ & p.\pageref{sym:bndries} \\
$\delta$ & depth of stratified fluids\textemdash{}typically taken to be $\delta=1$ & p.\pageref{sym:delta} \\
$\lambda$ & a spectral parameter for the linear spectral problem \eqref{eq0:SpecProb} & p.\pageref{sym:zeta} \\
$\zeta$ & a spectral parameter for \eqref{eq0:SpecProb} commonly parameterized by $\lambda$ as $\displaystyle \zeta(\lambda; \delta) = \frac{\lambda}{1-e^{-2\delta\lambda}}$ & p.\pageref{sym:zeta} \\
$\lambda(\zeta)$ & inverse of the map $\lambda \to \zeta(\lambda)$ & p.\pageref{sym:lambda} \\
$\zeta^*$ & nonlinear reflection $\zeta\big(-\lambda(\zeta)\big)$ & p.\pageref{sym:zetastar} \\
$G_\star^+$ & lower boundary value of the Green's function whose contour of integration is $\Gamma_\star$; given by
\begin{teqn} G_\star^+(x; \lambda, \delta) := \frac{1}{2\pi} \int_{\Gamma_\star} e^{i x \xi} \, \frac{1}{p(\xi; \lambda, \delta)} \, \mathrm{d}\xi \end{teqn}
& p.\pageref{sym:GFbndry} \\
${\Gamma_L}$ & contour along the real line which bypasses the roots of $p$ from below & p.\pageref{sym:Gamma} \\
${\Gamma_R}$ & contour along the real line which bypasses the roots of $p$ from above & p.\pageref{sym:Gamma} \\
$p$ & Fourier symbol of the Green's functions; given by
\[ p(\xi; \lambda, \delta) = \xi - \zeta(\lambda) \( 1- e^{-2\delta \xi} \) \]
and commonly denoted as $p(\xi; \lambda, \delta)$, $p(\xi; \zeta, \delta)$, $p(\xi; \lambda)$, $p(\xi; \zeta)$, and $p(\xi)$.
& p.\pageref{sym:GFintegrand} \\
$\Res_{z=c}f$ & complex residue of a function $f$ at the pole $z = c$ & p.\pageref{sym1:res} \\
$\alpha(\lambda; \delta)$ & residue of $e^{iz\xi}/p(\xi)$ at the $\xi=0$ pole; given by
{\begin{teqn} \alpha(\lambda; \delta) = \frac{1}{1-2\delta\zeta(\lambda; \delta)} \end{teqn}}
& p.\pageref{sym:alphabeta} \\
$\beta(\lambda; \delta)$ & $e^{iz\lambda}$ times the residue of $e^{iz\xi}/p(\xi)$ at the $\xi=\lambda$ pole; given by
{\begin{teqn} \beta(\lambda; \delta) = \frac{1}{1-2\delta\zeta(-\lambda; \delta)} \end{teqn}}
& p.\pageref{sym:alphabeta} \\
$K^+$ & non-residue term resulting from shifting the integration contours of $G_L^+$ and $G_R^+$ & p.\pageref{sym1:K} \\
$\log_+ t$ & the function given by $\max\big\{ 0, \, \log(t) \big\}$ & p.\pageref{sym:logplus} \\
$\mathcal R_\delta$ & the complex strip $\{z \in \mathbb C ~:~ -\pi/\delta \leq \im z \leq \pi/\delta \}$ about the real line
% about the real axis given by
% \begin{teqn}
% := \{z \in \mathbb C ~:~ -\pi/\delta \leq \im z \leq \pi/\delta \}
% \end{teqn}
& p.\pageref{sym1:Rcal} \\
$\mathpzc R_\star$ & sum of the residues of $e^{iz\xi}/p(\xi)$ at the $\xi=0$ and $\xi=\lambda$ poles & p.\pageref{sym1:ressum} \\
$\delta_c$ & Dirac delta-function centered at $x=c$ & p.\pageref{sym:dirac} \\
$W_k$ & $k^{\text{th}}$ branch ($k \in \mathbb Z$) of the complex Lambert $W$ function & p.\pageref{sym1:Wk} \\
$\Sigma_{c}$ & the integration contour $\mathbb R + i c \, \pi$ & p.\pageref{sym1:SigRealLine} \\
$\Sigma(R,~c)$ & the integration contour $(-R, R) + i c \,\pi$, where $R > 0$ and $(-R, R):= \{x \in \mathbb R ~:~ -R < x < R\}$ & p.\pageref{sym1:SigR} \\
$K_\zeta$ & the integral function given by
{ \begin{teqn} K_\zeta(x) := \int_{\mathbb R} \frac{e^{ix \xi}}{\xi - \zeta\left(1-e^{-2\xi}\right)+i\pi} \, \mathrm{d}\xi \end{teqn} }
& p.\pageref{sym1:Kzeta} \\
$K_q$ & the integral function given by
{ \begin{teqn} K_q(x) := \int_{\mathbb R} \frac{e^{ix\xi} \chi\left( 2^{-q} x \xi\right)} {\xi -
\zeta\left(1-e^{-2\xi}\right)+i\pi} \, \mathrm{d}\xi \end{teqn} }
& p.\pageref{sym1:Kq} \\
\end{indextable}
\newpage
%===================
% Chapter 3 Notation
%===================
\def\tabletitle{Chapter 3 Notation} % \def\tabletitle{Chapter 2 Notation}
\subsection{\tabletitle}
\begin{indextable}{\tabletitle}
$\inn{x}$ & short-hand notation for $\big(1 + |x|^2\big)^{1/2}$ & p.\pageref{sym2:xbracket} \\
$L^{p,s}(\mathbb R)$ & space of measurable functions with
\[ \|f\|_{L^{p,s}} := \left(\int_{\mathbb R} \inn{x}^{sp} |f(x)|^p \, \mathrm{d}x \right)^{1/p} < \infty \]
& p.\pageref{defn2:Lps} \\
$\langle\dotarg\rangle L^\infty(\mathbb R)$ & space of measurable functions with
\begin{teqn} \|f\|_{\inn{\dotarg} L^\infty} := \esssup_{x\in \mathbb R} \left| \inn{x}^{-1} f(x) \right| \end{teqn}
finite & p.\pageref{defn2:wLp} \\
$L_\xi^p(\mathbb R)$ & space of measurable functions which are $L^p$ integrable with respect to the variable $\xi$; similar subscript notation is used for other function spaces & p.\pageref{sym2:Lpxi} \\
$\star$ & used as a placeholder for both $L$ and $R$; for example, if a statement contains the notation ``$G_\star$ ($\star = L \text{ or } R$),'' then it is equally true (or not true) for both $G_L$ and $G_R$ & p.\pageref{rmk1:StarNotation} \\
$T_{\star, \lambda, u}$ & bounded operator on $\inn{\dotarg}L^\infty(\mathbb R)$ given by
{ \begin{teqn} T_{\star, \lambda, u} f(x) := \big[ G_\star^+(\dotarg; \lambda) \big] * (u\,f)(x) \end{teqn} }
depending on context, $T_{\star, \lambda, u}$ is sometimes denoted by $T_\star$, $T_{\star, \lambda}$, or $T_\lambda$ & p.\pageref{eqn2:Tdefn} \\
$X$ & space of potentials $u$ with $\nm{\inn{\dotarg}^{4} u}_{L^2} < \infty$ & p.\pageref{defn2:X} \\
$\lesssim_k$ & $q \lesssim_k s$ means there exists some constant $C := C(k)$ depending only on the parameter $k$ so that $q \leq C \, s$; the constant $C$ is commonly referred to as ``the implied constant'' & p.\pageref{sym2:lesssimdep} \\
$\chi_\pm$ & the characteristic functions $\chi_-:=
\chi_{(-\infty, 0)}$, $\chi_+:= \chi_{(0, \infty)}$ on the respective intervals $(-\infty, 0)$ and $(0, \infty)$ & p.\pageref{sym2:chipm} \\
$G$ & as specified in Remark \ref{rmk2:notation}, $G(x,\lambda)$ and $G(\lambda)$ are occasionally used as shorthand notations for $G_\star^+(x; \lambda)$ & p.\pageref{rmk2:notation} \\
$G_h(\lambda)$ & the difference quotient of $G_\star^+$ with respect to $\lambda$; given by
\begin{teqn} G_h(\lambda) := \frac{G(\lambda+h) - G(\lambda)}{h} \end{teqn}
& p.\pageref{sym2:Gh} \\
$\ds\left(\frac{1}{p_\lambda(\xi)}\right)_{h}$ & the difference quotient of $1/p$ with respect to $\lambda$;
\begin{teqn} \left(\frac{1}{p_\lambda(\xi)}\right)_{h} := \frac{1}{h}\, \left[ \frac{1}{p(\xi; \lambda +h)} -\frac{1}{p(\xi; \lambda)} \right] \end{teqn}
& p.\pageref{sym2:pdiffquot} \\
$\Res_{z=c}f$ & complex residue of a function $f$ at the pole $z = c$ & p.\pageref{eq2:gzerores}
\end{indextable}
\newpage
%===================
% Chapter 4 Notation
%===================
\def\tabletitle{Chapter 4 Notation} % \def\tabletitle{Chapter 3 Notation}
\subsection{\tabletitle}
\begin{indextable}{\tabletitle}
$\mathcal S_1$ & the complex strip $\mathcal S_1 := \{z \in \mathbb C ~:~ 0 < \im z < 2 \}$ above the real axis; that is, $\mathcal S_\delta$ with $\delta = 1$ & p.\pageref{thm3:main_result} \\
$G_\star$ & analytic continuation of $G_\star^+$ to the analytic strip $\mathcal S_1$ & p.\pageref{thm3:main_result} \\
$G_\star^-$ & the upper boundary value of $G_\star$ defined as a distribution in the sense that
\begin{teqn} G_\star^- * f = \lim_{y\nearrow 2} G_\star^+(\dotarg+iy)*f \end{teqn}
for $f \in L^1(\mathbb R) \cap L^p(\mathbb R)$ ($1< p \leq 2$) & p.\pageref{eq3:lim} \\
$K$ & analytic continuation of $K^+$ to the strip $\mathcal S_1$ & p.\pageref{thm3:main_result} \\
$\mathfrak C$ & a portion of $K$ whose limit as $y\nearrow 2$ defines a continuous operator; given by
\begin{teqn} \mathfrak C(x,y) := \frac{1}{2\pi} e^{-\pi|x|} \, e^{- \sign(x) \, i \pi y } \int_{\mathbb R} e^{ix \xi}
\rho\big(\xi, y, \sign(x)\big) \, \mathrm{d}\xi, \end{teqn}
where $x \in \mathbb R$ and $y\in [0, 2]$ & p.\pageref{sym:mathfrakC} \\
$\rho$ & a function given by
{ \begin{teqn} \rho\big(\xi, y, c; \lambda\big) :=
\begin{cases}
\dfrac{e^{-y\xi}}{p(\xi; \lambda) + i \,c\, \pi}, & \xi > 0\\ \\
\dfrac{1}{\zeta(\lambda)} \dfrac{ \big(\zeta(\lambda)- \xi - c \, i\pi \big) e^{(2-y)\xi} } {p(\xi; \lambda) + i\, c \, \pi}, & \xi < 0.
\end{cases} \end{teqn} }
& p.\pageref{eq0:smallR} \\
$\mathpzc R_\star$ & sum of the residues of $e^{iz\xi}/p(\xi)$ at the $\xi=0$ and $\xi=\lambda$ poles & p.\pageref{eq3:mathpzcR}\\
$\mathcal E_{\varepsilon}$ & a family of convolution operators given by
\begin{talign}
\left( \mathcal E_\varepsilon f \right)(x) &:= \frac{e^{-i \pi (2-\varepsilon)}}{2 \pi i} \int_{-\infty}^x \frac{e^{-\pi|x-x'|}}{(x-x') - i\varepsilon} f(x') \, \mathrm{d}x' \\
&\quad + \frac{e^{i \pi (2-\varepsilon)}}{2 \pi i} \int_x^{\infty} \frac{e^{-\pi|x-x'|} }{(x-x') - i\varepsilon} f(x') \, \mathrm{d}x'
\end{talign}
& p.\pageref{sym:almostExpCauchy} \\
$E_{\varepsilon}$ & exponentially weighted Cauchy transform; given by
\begin{teqn} E_{\varepsilon} f(x) := \frac{1}{2\pi i} \int_{\mathbb R} \frac{e^{-\pi|x-x'|}}{(x-x') - i \varepsilon} f(x') \, \mathrm{d}x' \end{teqn}
& p.\pageref{sym:expCauchy} \\
$E$ & exponentially weighted Hilbert transform; given by
\begin{teqn} Ef(x) := \frac{1}{2\pi i} \pv \int_{\mathbb R} \frac{e^{-\pi|x-x'|}}{x-x'} f(x') \, \mathrm{d}x', \end{teqn}
where $\pv \int (\dotarg) \, \mathrm{d}\mu$ denotes a principal value integral.
& p.\pageref{sym:ExpHil} \\
$\mathscr S(\mathbb R)$ & space of all Schwartz class functions on $\mathbb R$ & p.\pageref{sym3:schwartz} \\
$\mathpzc E_\varepsilon, ~\mathpzc P_\varepsilon$ & two families of convolution operators given by
\[ \mathpzc E_\varepsilon(y) := \frac{1}{2\pi i} \frac{y}{y^2 + \varepsilon^2} e^{-\pi|y|}, \quad \text{and} \quad \mathpzc P_\varepsilon(y) := \frac{1}{\pi} \frac{\varepsilon}{y^2 + \varepsilon^2} e^{-\pi|y|} \]
& p.\pageref{sym:badCauchy} \\
$E^{(\varepsilon)}$ & truncated exponentially weighted Hilbert transform; given by
\begin{teqn} E^{(\varepsilon)}f(x) := \frac{1}{2\pi i} \int_{|x'|\geq \varepsilon} \frac{e^{-\pi|x'|}}{x'} f(x-x') \, \mathrm{d}x' \end{teqn}
by definition, $(Ef)(x) = \lim_{\varepsilon \searrow 0} E^{(\varepsilon)}f(x)$ & p.\pageref{sym:truncExpHil} \\
$P_\varepsilon$ & Poisson kernel; given by $P_\varepsilon(y) = \frac{1}{\pi}\frac{\varepsilon}{y^2+\varepsilon^2}$ & p.\pageref{sym3:poisson} \\
$E^*$ & the maximal operator associated with the Cauchy transform $E_{\varepsilon}$; given by
\begin{teqn} E^*f(x) := \( E_{\varepsilon} \)^* f(x) := \sup_{\varepsilon>0} \left\{\left|E_{\varepsilon} f(x)\right|\right\}. \end{teqn}
& p.\pageref{sym:maxCauchyT} \\
$M$ & the Hardy-Littlewood maximal operator; given by
\begin{teqn} M f(x) = \sup_{r > 0} \left\{ \frac{1}{m\big(B(0, r)\big)} \int_{B(0, r)} |f(x - x')| \, \mathrm{d}x' \right\}.
\end{teqn}
& p.\pageref{sym:hardy} \\
$m_E$ & Fourier multiplier for the exponentially weighted Hilbert transform; given by
\begin{teqn} m_E(\xi) = \frac{1}{\pi} \arctan(\xi/\pi) \end{teqn}
& p.\pageref{sym3:Emult} \\
$L^{p,\infty}$ & weak $L^p$ space; also denoted weak-$L^p$ & p.\pageref{sym:weakLp} \\
\end{indextable}
\newpage
%===================
% Chapter 5 Notation
%===================
\def\tabletitle{Chapter 5 Notation} % \def\tabletitle{Chapter 4 Notation}
\subsection{\tabletitle}
\begin{indextable}{\tabletitle}
$M_1$, $M_e$, $N_1$, $N_e$ & depending on context, either Jost solutions or analytic extensions of solutions to the integral equations \eqref{eq4:JostIE} & p.\pageref{defn4:jost}, p.\pageref{eq4:JostIE} \\
$X$ & space of potentials $u$ with $\nm{\inn{\dotarg}^{4} u}_{L^2} < \infty$ & p.\pageref{defn2:X} \\
$c_0$ & a strictly positive constant chosen in Proposition \ref{prop4:exist} to ensure the existence and uniqueness of Jost solutions for every potential $u \in X$ with $\|u\|_X < c_0$ & p.\pageref{prop4:exist} \\
$T_{\star, \lambda, u}$ & bounded operator on $\inn{\dotarg}L^\infty(\mathbb R)$ given by
\begin{teqn} T_{\star, \lambda, u} f(x) := \big[ G_\star^+(\dotarg; \lambda) \big] * (u\,f)(x) \end{teqn}
depending on context, $T_{\star, \lambda, u}$ is sometimes denoted by $T_\star$, $T_{\star, \lambda}$, or $T_\lambda$ & p.\pageref{prop4:exist} \\
$a$, $b$, $\breve{a}$, $\breve{b}$ & coefficients for the scattering equations \eqref{eq3:M1N} and \eqref{eq3:N1M}; given by
\begin{talign}
a(\lambda) &:= 1 + i \alpha(\lambda) \, \int_{\mathbb R} u(x) \, M_1^+(x; \lambda, u) \, \mathrm{d}x \\
b(\lambda) &:= i \beta(\lambda) \, \int_{\mathbb R} e^{-ix\lambda} \, u(x) \, M_1^+(x; \lambda, u) \, \mathrm{d}x \\
\breve{a}(\lambda) &:= 1 + \alpha(\lambda) \int_{\mathbb R} u(x) \, N_1(x; \lambda, u) \, \mathrm{d}x \\
\breve{b}(\lambda) &:= i \beta(\lambda) \int_{\mathbb R} e^{-ix\lambda} \, u(x) \, N_1(x; \lambda, u) \, \mathrm{d}x
\end{talign} &
p.\pageref{prop4:SD} \\
$\alpha(\lambda; \delta)$ & residue of $e^{iz\xi}/p(\xi)$ at the $\xi=0$ pole; given by
{\begin{teqn} \alpha(\lambda; \delta) = \frac{1}{1-2\delta\zeta(\lambda; \delta)} \end{teqn}}
& p.\pageref{sym:alphabeta} \\
$\beta(\lambda; \delta)$ & $e^{iz\lambda}$ times the residue of $e^{iz\xi}/p(\xi)$ at the $\xi=\lambda$ pole; given by
{\begin{teqn} \beta(\lambda; \delta) = \frac{1}{1-2\delta\zeta(-\lambda; \delta)} \end{teqn}}
& p.\pageref{sym:alphabeta} \\
$\mathscr D$ & direct scattering map for the ILW equation; given by $\mathscr D: B_X(0, c_0) \ni u \mapsto r \in L_\lambda^\infty(\mathbb R)$ & p.\pageref{sym4:dsm} \\
$r$ & reflection coefficient; given by $r(\lambda) = b(\lambda) / a(\lambda)$ & p.\pageref{sym4:dsm}
\end{indextable}
\end{document}
\par
\section{Driver programs for the {\tt IVL} object}
\label{section:IVL:drivers}
\par
This section contains brief descriptions of two driver programs.
\par
%=======================================================================
\begin{enumerate}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
testIO msglvl msgFile inFile outFile
\end{verbatim}
This driver program reads an {\tt IVL} object from {\tt inFile} and writes it out to {\tt outFile}.
\par
\begin{itemize}
\item
The {\tt msglvl} parameter determines the amount of output ---
taking {\tt msglvl >= 3} means the {\tt IVL} object is written
to the message file.
\item
The {\tt msgFile} parameter determines the message file --- if {\tt
msgFile} is {\tt stdout}, then the message file is {\it stdout},
otherwise a file is opened with {\it append} status to receive any
output data.
\item
The {\tt inFile} parameter is the input file for the {\tt IVL}
object. It must be of the form {\tt *.ivlf} or {\tt *.ivlb}.
The {\tt IVL} object is read from the file via the
{\tt IVL\_readFromFile()} method.
\item
The {\tt outFile} parameter is the output file for the {\tt IVL}
object. It must be of the form {\tt *.ivlf} or {\tt *.ivlb}.
The {\tt IVL} object is written to the file via the
{\tt IVL\_writeToFile()} method.
\end{itemize}
%-----------------------------------------------------------------------
\item
\begin{verbatim}
testExpand msglvl msgFile inIVLfile inEqmapFile outIVLfile
\end{verbatim}
This program tests the expand function.
One application is the symbolic factorization, where we need the
adjacency structure of the factor matrix.
We could compute it directly from the original graph, or we could
compute the adjacency structure of the compressed graph and then
expand it into the full adjacency structure.
The second method is usually faster.
\par
\begin{itemize}
\item
The {\tt msglvl} parameter determines the amount of output ---
taking {\tt msglvl >= 3} means the {\tt IVL} object is written
to the message file.
\item
The {\tt msgFile} parameter determines the message file --- if {\tt
msgFile} is {\tt stdout}, then the message file is {\it stdout},
otherwise a file is opened with {\it append} status to receive any
output data.
\item
The {\tt inIVLfile} parameter is the input file for the first,
unexpanded {\tt IVL} object.
It must be of the form {\tt *.ivlf} or {\tt *.ivlb}.
The {\tt IVL} object is read from the file via the
{\tt IVL\_readFromFile()} method.
\item
The {\tt inEqmapFile} parameter is the input file for the map from
uncompressed vertices to compressed vertices.
It must be of the form {\tt *.ivf} or {\tt *.ivb}.
The {\tt IV} object is read from the file via the
{\tt IV\_readFromFile()} method.
\item
The {\tt outIVLfile} parameter is the output file for the second,
expanded {\tt IVL} object.
It must be of the form {\tt *.ivlf} or {\tt *.ivlb}.
The {\tt IVL} object is written to the file via the
{\tt IVL\_writeToFile()} method.
\end{itemize}
%-----------------------------------------------------------------------
\end{enumerate}
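A hypothetical usage sketch (the file names below are placeholders, and the driver executables are assumed to have been built from the SPOOLES source tree):

```
testIO 3 stdout adj.ivlf adj.ivlb
testExpand 2 msg.out adjCompressed.ivlf eqmap.ivf adjFull.ivlf
```

The first call echoes the {\tt IVL} object to {\it stdout} while converting it from formatted ({\tt .ivlf}) to binary ({\tt .ivlb}) form; the second expands a compressed adjacency structure using the equivalence map in {\tt eqmap.ivf}.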
% $Id$
%
% Earth System Modeling Framework
% Copyright 2002-2020, University Corporation for Atmospheric Research,
% Massachusetts Institute of Technology, Geophysical Fluid Dynamics
% Laboratory, University of Michigan, National Centers for Environmental
% Prediction, Los Alamos National Laboratory, Argonne National Laboratory,
% NASA Goddard Space Flight Center.
% Licensed under the University of Illinois-NCSA License.

%\subsection{Description}

%ESMF data types, such as Fields, FieldBundles, Arrays and ArrayBundles, are used
%to exchange data between Components through States. In the simplest
%scenario the producer Component or Coupler can compute the full data set
%required by the consumer Component. However, memory constraints or otherwise
%the nature of the algorithm, may require that the final calculation be
%performed right before the data is consumed.

%ESMF provides the concept of Attachable Methods that allows a producer
%component to associate user defined methods with the data objects it provides.
%The final calculation, while defined by the producer Component, is deferred
%until the consumer Component requires its execution.

%The current implementation of Attachable Methods is limited to the ESMF State
%class. States are a general container class for Fields, FieldBundles, Arrays
%and ArrayBundles. States provide the most general interface to Attachable
%Methods.

ESMF allows user methods to be attached to Components and States. Providing this capability supports a more object-oriented style of model design. Attachable methods on Components can be used to implement the concept of generic Components, where the specialization requires attaching methods with well-defined names. These methods are then called by the generic Component code.

Attaching methods to States can be used to supply data operations along with the data objects inside a State object.
This is useful when a producer Component supplies not only a data set but also the associated processing functionality, which can be more efficient than providing all of the possible sets of derived data.
\RequirePackage[l2tabu, orthodox]{nag} % tells out-of-date packages
\documentclass[10pt, letterpaper]{scrartcl}
\usepackage{amssymb,amsfonts}
\usepackage{amsthm} % if using theorems and proofs
\usepackage[cmex10]{amsmath}
\usepackage[usenames,dvipsnames,svgnames]{xcolor} % use like \textcolor{red}{..} or {\color{red} ...}
\usepackage{graphicx}
\usepackage[margin=1in]{geometry} % can also add 'nohead', or leave blank
%\usepackage{url}
\usepackage{cancel} % for canceling terms
\usepackage{booktabs}
\usepackage{pdfpages} % for including pdfs
\usepackage{algpseudocode} % algorithmicx
\usepackage{wrapfig}
%\usepackage{listings}
\usepackage[numbered,framed]{matlab-prettifier}
\lstdefinestyle{mat}{
  frame=single,
  language=matlab,
  showstringspaces=false,
  % keywordstyle=\bfseries\color{blue},
  % commentstyle=\color{green},
  % stringstyle=\color{magenta},
}
\lstset{
  style = Matlab-editor,
  basicstyle = \mlttfamily,
  escapechar = ",
  mlshowsectionrules = true,
}
%% use with: \lstinputlisting{myODEfunction.m}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern} % important!
\usepackage{hyperref}
\hypersetup{pdfauthor={Stephen Becker},
  pdftitle={},
  colorlinks=true,
  citecolor=MidnightBlue,
  urlcolor=Bittersweet,
}

% -- Input macros and such --
\IfFileExists{custom_headers.tex}{
\input{custom_headers}}{
\input{../custom_headers}}{
}
\let\oldenumerate\enumerate
\renewcommand{\enumerate}{
  \oldenumerate
  \setlength{\itemsep}{1em}
  \setlength{\parskip}{1em}
  \setlength{\parsep}{0pt}
}
\usepackage{enumitem}
%\setlist{noitemsep} % See http://mirrors.fe.up.pt/pub/CTAN/macros/latex/contrib/enumitem/enumitem.pdf
% and now, set par indent to zero
\newlength{\savedparindent}
\setlength{\savedparindent}{\parindent}
\setlist[enumerate]{listparindent=\savedparindent}
%\setlength{\parindent}{0pt} % Jan 2017, HW 3/4, comment this out.
\usepackage{xspace}
%\newcommand{\collaboration}{\textbf{Collaboration Allowed}\xspace}
%\newcommand{\nocollaboration}{\textbf{No Collaboration}\xspace}

\renewcommand{\T}{\mathbb{T}}
\newcommand{\fn}{\widehat{f}_n} % Fourier coefficients
\newcommand{\spi}{\frac{1}{\sqrt{2\pi}}}
\renewcommand{\H}{\mathcal{H}}
\renewcommand{\phi}{\varphi}
\DeclareMathOperator{\ran}{ran} % ker already defined, and with lower case
\newcommand*\Laplace{\mathop{}\!\mathbin\bigtriangleup}
\renewcommand{\SS}{\mathcal{S}} % Schwartz space

% My solution environment
%%\newenvironment{solution}{\setlength{\parindent}{\savedparindent}{\bfseries Solution}:}{}
\makeatletter
\newcommand\solParagraph{\@startsection{paragraph}{4}{\z@}%
%  {-3.25ex \@plus -1ex \@minus -0.2ex}%
  {-.55ex \@plus -1ex \@minus -0.2ex}%
  {0.01pt}%
  {\raggedsection\normalfont\sectfont\nobreak\size@paragraph}%
}
\makeatother
\newenvironment{solution}{\setlength{\parindent}{\savedparindent}\solParagraph{Solution:}}{}
%\newenvironment{solution}{\emph{Solution}:}{}
\newenvironment{instructions}{}{}
\newenvironment{SolnComment}{}{}
\usepackage{comment}
%\def\solutions{1} % define solutions % !!! COMMENT or UNCOMMENT THIS LINE
% NOTE: \begin{solution} should have NO SPACES BEFORE OR AFTER IT,
% Can give very weird errors, hiding text, etc.
% See http://tex.stackexchange.com/a/91431/4603
\ifdefined\solutions
  \newcommand{\solTitle}[1]{#1}
  \excludecomment{instructions}
\else
  \excludecomment{solution} % this line hides the solutions
  \newcommand{\solTitle}[1]{}
  \excludecomment{SolnComment}
\fi

\title{Homework 7 \solTitle{Selected Solutions} \\APPM 4720/5720 Spring 2019 \\ Randomized Algorithms}
%\author{Stephen Becker}
\date{}

\begin{document}
\maketitle
\vspace{-6em}
\textbf{Due date}: Friday, Mar 1 2019

Theme: Regression \hfill Instructor: Stephen Becker
%Dr.\ Becker
%\vspace{1em}
\begin{instructions}
\paragraph{Instructions}
Collaboration with your fellow students is allowed and in fact recommended, although direct copying is not allowed. Please write down the names of the students that you worked with. The internet is allowed for basic tasks only, not for directly looking for solutions.

An arbitrary subset of these questions will be graded.

%\paragraph{Reading}
%Read the rest of chapters 9--11 in [BV2004].
%%Read chapter 3.1, 3.2 and 3.3 in [BV2004].
%%Students are \textbf{strongly advised} to skim appendices A and C in [BV2004] to look for unfamiliar material (and read in more detail if there is unfamiliar material)
%
%%\vspace{1em}
%%\textbf{Special instructions this week}:
%
\end{instructions}

\begin{enumerate}[align=left, leftmargin=*, label=\sffamily\bfseries Problem \arabic*:]
\item \ [READING] Read section 1.1 ``Puzzle 1: Finding Missing Numbers'' from \href{http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.440.5344&rep=rep1&type=pdf}{``Data Streams: Algorithms and Applications''} by S.\ Muthukrishnan (Foundations and Trends® in Theoretical Computer Science, 2005). Read more if you are interested.

For fun, here's a variant of the \href{https://en.wikipedia.org/wiki/Hat_puzzle}{``hat puzzle,''} slightly related to the types of ideas Muthukrishnan thinks about.
Suppose there are $100$ people, all in a line and facing the front of the line (so the person at the back can see everyone in front of them, the next person can see everyone except the person behind them, and the person in front can't see anyone). Each person has a red or blue hat on, and they don't know what color their hat is. The game is as follows: starting with the person at the back of the line, each person in turn says either ``red'' or ``blue.'' The goal is to devise a strategy so that as many people as possible say the color that corresponds to their own hat (i.e., beat the 50\% success rate of random guessing). What's the best strategy? \textbf{Deliverable}: none required. \item \ [CODING] Faster least squares. Load the MNIST data we used previously, which gives a matrix $X$ which is $784\times 3000$. Make $5$ copies of this matrix (adding columns), then transpose it to get a new matrix (call it $A$ to follow our usual convention), which is now $M \times N$ with $M = 15000$ and $N=784$. Construct a random vector $x$ and compute $b=Ax+z$ where $z$ is a small amount of noise. Compute the least squares estimator $x_\text{LS} = \argmin_{x}\, \|Ax-b\|_2$. See code below: \begin{lstlisting} load MNIST_subsampled % loads X which is 784 x 3000 A = repmat( X, 1, 5 )'; [M,N] = size(A); xSignal = randn(N,1); b = A*xSignal + 1*randn(M,1); xLS = A\b; % least squares estimator \end{lstlisting} % A = gallery('condex',M); % in python, see https://github.com/macd/rogues For at least three types of sketches $S$ (Gaussian, count sketch, some type of fast Johnson-Lindenstrauss, Haar, sub-sampling, very sparse), solve the following sketched least-squares problem: $$ \hat{x} = \argmin_{x} \, \| SAx - Sb\|_2 $$ How accurate is $\hat{x}$ compared to $x_\text{LS}$? Is the sketched approach actually faster? \textbf{Deliverable}: investigate the accuracy and speed of sketched regression as a function of the number of rows $m$ in $S$. 
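A minimal Python/NumPy sketch of the comparison (the assignment's code is Matlab; this is a hypothetical scaled-down stand-in, using a synthetic Gaussian $A$ instead of the MNIST matrix and only the Gaussian sketch, so it runs quickly):

```python
# Hypothetical scaled-down sketched least-squares experiment: compare the
# exact LS solution with the solution of the sketched problem
# min_x || S A x - S b ||_2 for a Gaussian sketch S with m rows.
import numpy as np

rng = np.random.default_rng(0)
M, N, m = 2000, 50, 400            # reduced from 15000 x 784; m = sketch rows

A = rng.standard_normal((M, N))    # stand-in for the repeated MNIST matrix
x_signal = rng.standard_normal(N)
b = A @ x_signal + 0.1 * rng.standard_normal(M)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)      # exact least squares

S = rng.standard_normal((m, M)) / np.sqrt(m)      # Gaussian sketch
x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

rel_err = np.linalg.norm(x_hat - x_ls) / np.linalg.norm(x_ls)
print(rel_err)
```

For the actual deliverable, wrap the sketch-and-solve step in a loop over $m$, record wall-clock times, and repeat for the other sketch types.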
Make plots of error and time vs $m$, as in Fig.~\ref{fig:1}.

Bonus (not required): do your accuracy results change significantly if you change to a different matrix $A$? You might hypothesize that this would not affect the Gaussian sketch, but might affect other sketches. Is this true?

Bonus (not required): try a leverage-score sampling approach. You'll first need to look into the approximate SVD.

\begin{figure}
    \centering
    \includegraphics[width=2in]{Figs/HW6_accuracy.png}
    \includegraphics[width=2in]{Figs/HW6_time.png}
    \caption{Examples of the type of output we want for problem 2.}
    % Made figures with: https://www.mathworks.com/matlabcentral/fileexchange/38499-xkcdify
    % and then updated the font.
    % Font from: https://github.com/ipython/xkcd-font/blob/master/xkcd-script/font/xkcd-script.ttf
    \label{fig:1}
\end{figure}

\emph{Hint:} Matlab users, you can use \texttt{sketch.m} from the Github repository \\
\href{https://github.com/stephenbeckr/randomized-algorithm-class/blob/master/Code/}{github.com/stephenbeckr/randomized-algorithm-class/blob/master/Code/} (and if someone wants to make a python or Julia version, we can host that too).

\end{enumerate}
\end{document}
\subsection{Pivotal supplier index (PSI)}
\section{Towards a Reproducible Thesis}\label{sec:reproducibility}
% \subsection{Motivations}\label{sec:motivations}
In some respects, the practice of writing software has diverged from the motivations of an academic researcher. The latter seeks to generate new knowledge and may write a set of example scripts/programs to demonstrate some novel idea or method. By contrast, the motivations of a software engineer are related to resiliency. Not only must they ensure the code works as expected given the myriad of ways users may interact with it, but they must also write the code in a manner that supports maintaining it into the future. Much of the work of writing ``good software'' is concerned with writing appropriate documentation to express the intended usage and the logic underlying architectural decisions. Without proper context and an understandable architecture, new ideas that are implemented in programs are unlikely to be adopted.

There are many ways to write a functioning program that demonstrates a proof-of-concept, but creating something that is \emph{user-friendly}, and that scales to different computational environments and resources, requires an entirely different approach. Decisions made early in the software design cycle have lasting impacts on future features and functionality. Rigor is added to libraries through the writing of \emph{unit tests}, and eventually \emph{functional tests}, which validate individual components and entire workflows, respectively. The practice of \emph{continuous integration} ensures that the download and installation process is predictable and reproducible by running the requisite steps (and tests) in an ephemeral environment, as an independent verification that programs execute as expected. Code that runs only on one's own computer is impractical, since any thorough review of the results requires validation by an independent third party.
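As a minimal illustration of the kind of unit test described above (a hypothetical example, not drawn from the thesis repository), a single component is validated in isolation, and a CI service would re-run a file like this in a fresh environment on every change:

```python
# Hypothetical unit test: validate one component in isolation.
def trapezoid(f, a, b, n=100):
    """Trapezoid-rule quadrature of f on [a, b] with n subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

def test_trapezoid_linear_exact():
    # The rule is exact for linear integrands: the integral of 2x on [0,1] is 1.
    assert abs(trapezoid(lambda x: 2 * x, 0.0, 1.0) - 1.0) < 1e-12

test_trapezoid_linear_exact()
print("ok")
```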
Having continuous integration (and deployment for packaging the software), tests, and documentation allows a repository of code to be functionally and practically accessible to the larger research community. This thesis is concerned not only with a demonstration of novel mathematical content---showcasing new ways to make inferences from noisy data in a novel Data-Consistent framework---it also serves to set a precedent for guaranteeing that the results presented are \textbf{fully reproducible}. In mathematics, reproducibility is ensured through the use of proofs, which motivate the original work presented here. However, as the title of this thesis suggests, much of the work involves the computational implementation of the novel research into Data Consistent Inversion, studying the impact of using computers to perform the task of making conclusions based on data. Mathematics is implemented on computers through software. We are therefore concerned with verifying and validating the expected functionality of that software, which aligns with our training as mathematicians; we care deeply about making sure things are rigorous. In short, we want to make sure that theory aligns with practice, and that both live up to high standards of intellectual scrutiny.

Every computational result, illustrative figure, table, and plot presented in this thesis is associated with the scripts that generate it, and these are included in the publicly available GitHub repository. With a minimal set of instructions, everything (including this document itself) can be reproduced. The specific tools and frameworks through which the reproduction of these results can be accomplished change over time. The GitHub repository \citep{github} for this dissertation provides a number of pathways for generating the results, including ones that do not require any installation of software on local or remote resources.
\FloatBarrier
\documentclass[conference]{IEEEtran}
\IEEEoverridecommandlockouts
% The preceding line is only needed to identify funding in the first footnote. If that is unneeded, please comment it out.
\usepackage{cite}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{algorithmic}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{url}
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
\begin{document}

\title{\huge Metamorphic Testing in Cross-Language Sentiment Analysis of Social Media}

\author{\IEEEauthorblockN{Boyang Yan}
\IEEEauthorblockA{\textit{School of Computing and Information Technology} \\
\textit{University of Wollongong}\\
Wollongong, Australia\\
[email protected]}
\and
\IEEEauthorblockN{Xiaoxia Pu}
\IEEEauthorblockA{\textit{School of Computing and Information Technology} \\
\textit{University of Wollongong}\\
Wollongong, Australia\\
[email protected]}
\and
\IEEEauthorblockN{Xudong Zhang}
\IEEEauthorblockA{\textit{School of Computing and Information Technology} \\
\textit{University of Wollongong}\\
Wollongong, Australia\\
[email protected]}
\and
\IEEEauthorblockN{Helene Tran}
\IEEEauthorblockA{\textit{School of Computing and Information Technology} \\
\textit{University of Wollongong}\\
Wollongong, Australia\\
[email protected]}
}

\maketitle

\begin{abstract}
\end{abstract}

\begin{IEEEkeywords}
sentiment analysis, machine translation
\end{IEEEkeywords}

\section{Introduction}
\nocite{*}
Sentiment analysis is useful for analyzing large amounts of data relating to personal opinions \cite{Senn:2009}. It can be used in an e-business context: for example, a business manager can analyze customers' attitudes, i.e., whether they like or dislike a product or service. Governments can also use sentiment analysis to analyze citizens' perspectives. In short, sentiment analysis is coming into widespread use.
This semester, I first focused on conducting a literature review of the current state of Chinese sentiment analysis research. A review article published last year was identified as relevant (Haiyun, 2017). As Haiyun notes, English-language sentiment analysis research has undergone major developments in recent years, while Chinese sentiment analysis research has not evolved significantly. Today, many English-language sentiment analysis theories have been developed, and many translation tools are available. However, little work brings the two fields together to support non-English sentiment analysis. The purpose of this research is therefore to develop a method for finding which machine translation service, combined with which English sentiment analysis service, can produce reliable sentiment analysis results for non-English speakers who do not have sentiment analysis tools for their own language. Chinese research alone, however, does not sufficiently represent the non-English-speaking world, so more information in Hindi, Arabic, Spanish, Portuguese, and Japanese needs to be identified. As a result, this research proposes a testing model for finding the best combined mode of English sentiment analysis tools and translation tools.

Several search terms were used for this literature review, such as sentiment analysis, text mining, machine translation, text segmentation, text classification, machine learning, deep learning, neural network, metamorphic testing, and adaptive random testing. The review draws on two key publishers, IEEE and ACM; the key author is Zhiquan (George) Zhou. All references come from books, research articles, review articles, and websites from the last ten years.

This research consists of three components. Firstly, measuring machine translation service quality based on sentiment analysis results from both sides. Secondly, testing sentiment analysis service quality.
Thirdly, finding the best compound mode for a machine translation service and a sentiment analysis service. In the following sections, I discuss testing methods based on metamorphic testing, machine translation testing, sentiment analysis testing, and a literature review of experiment support.

\section{Machine Translation}
I have found two research articles about testing models for machine translation. Somers (2005, pp.\ 127--133) argues that a round-trip translation method can be used to assess the quality of machine translation. For example, to test an English-to-Chinese translation tool: first, use the tool to translate the English test data into Chinese; second, translate the Chinese output back into English; finally, compare the similarity of the two English data sets.

Another article uses a third-party language to test the quality of machine translation (Pesu, 2017). For example, to test an English-to-Chinese translation tool: first, randomly choose an intermediate third-party language; second, translate the English test data into the third-party language, and then translate from the third-party language into Chinese. These two steps form one path. The other path is to translate the English test data directly into Chinese. Finally, compare the similarity of the results of the two paths. The main finding of that article is that Google Translate is the best machine translation service, with a tendency to produce better results for European languages; this conclusion was obtained using the ANOVA statistical method. In my own experiments, I also found Google Translate to be the best machine translation service compared with Yandex and Baidu.

These three testing models can be regarded as three different metamorphic relations, so I have also found a research article (Cao, 2013) about the effectiveness of metamorphic relations.
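As an aside, the round-trip relation can be sketched in a few lines of Python; `translate` here is a hypothetical placeholder for a real MT service call (a real test would query the Google, Yandex, or Baidu APIs), so the sketch only illustrates the shape of the test:

```python
# Sketch of the round-trip metamorphic relation (Somers, 2005):
# translate English -> pivot -> English, then score the similarity of the
# original and round-tripped text.
from difflib import SequenceMatcher

def translate(text, src, dst):
    # Hypothetical placeholder for an MT service call; a real test would
    # query an actual translation API here.
    return text

def round_trip_similarity(text, pivot="zh"):
    forward = translate(text, "en", pivot)
    back = translate(forward, pivot, "en")
    return SequenceMatcher(None, text.lower(), back.lower()).ratio()

score = round_trip_similarity("The weather is nice today.")
print(score)  # 1.0 for the identity placeholder; a real service scores lower
```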
After reading the details of Cao's article, I found that two-way ANOVA is also suitable for part 3 of my research (finding the best compound mode for a machine translation service and a sentiment analysis service). A disadvantage of all three testing models is that each involves some noise: the round-trip model involves noise from the back-translation path, and the third-language model involves noise from the third-language translation path. My model involves noise from the sentiment analysis tools, but none from the translation tool itself; since it tests the translation tool from an entirely different aspect, I consider it preferable to the other two models.

\section{Sentiment Analysis Testing}
This semester, I finished experiments testing the Google, Yandex, and Baidu machine translation services, as well as experiments comparing the Google and Baidu sentiment analysis tools. For sentiment analysis, I have only tested overall performance so far; next semester, I plan to test the sentiment analysis tools component by component. To this end, I have found a book (Liu, 2012) about sentiment analysis. According to Liu's book, text sentiment analysis has roughly five steps:
\begin{enumerate}
\item Text segmentation, to separate a sentence into a list of keywords.
\item Filtering, to remove keywords that add no value to sentiment analysis, such as ``is,'' ``but,'' and ``it.''
\item Stemming, to convert all inflections to their root words.
\item Feature extraction, using the root words as features to indicate positiveness or negativeness.
\item Classification, training a classifier to predict polarity.
\end{enumerate}
I will base my testing of sentiment analysis on these five steps next semester.

\subsection{Testing Step 1 (Sentiment Analysis)}
Ramos (2003, pp.
133--142) claims that tf-idf, an abbreviation of term frequency--inverse document frequency, is a word-ranking method: a numerical statistic designed to reflect how important a word is to a document in a collection or corpus. I think tf-idf is useful for testing the first step of sentiment analysis (text segmentation): I can compare the keyword ranking produced by Baidu text segmentation with the one produced by Google text segmentation on the same test data. If the two rankings differ, I may be able to assess the quality of the text segmentation.

\subsection{Testing Method}
According to Sethi (2017), testing techniques fall into two categories, static testing and dynamic testing, and dynamic testing is divided into three further categories: functional testing, structural testing, and non-functional testing. In my research project, I focus on functional testing.

\subsection{Metamorphic Testing}
The proposed research is based on metamorphic testing, which tests functional correctness. I have read a research article (Zhou, 2016) that explains metamorphic testing well; my testing model is based on metamorphic testing, so this article is useful for me. As George (2016) says, metamorphic testing (MT) is a property-based software testing method developed for automated test case generation and automated result verification, based on the effects of some expected properties of the target program. These properties, recognized as metamorphic relations (MRs), serve as essential relations among the inputs and outcomes of multiple executions of the target program.

An example helps to explain metamorphic testing. To test the correctness of a calculator, one can input $1 + 1$ and check that the result is $2$. In this example, we can easily judge whether the output is correct.
However, it is not as easy to make this judgment quickly if we input $\sin(3.7)$ and the calculator produces some output. It is generally acknowledged that $\sin(3.7^\circ) = \sin(3.7^\circ + 360^\circ)$. In metamorphic testing, $3.7$ is called the (source) test case and $3.7 + 360$ the follow-up test case; a metamorphic relation is the relationship among the two input test cases and the two outputs, and metamorphic testing is based on such relations. The two outputs must satisfy an existing mathematical relation; in this example, the relation is equality. An advantage of metamorphic testing is that it can automate both result verification and test case generation; a disadvantage is that it cannot detect memory leaks or some other insensitive failure situations. Nevertheless, metamorphic testing is appropriate for testing translation tools and sentiment analysis tools.

\subsection{Effectiveness of Metamorphic Relations}
I have read an article written by Zhou in 2013; my main purpose in reading it was to find the best of the three testing models. The article is based on white-box testing, where source code is available. Its most important conclusion is that metamorphic relations whose executions have a greater distance (dissimilarity) have a better chance of detecting failures: MRs with very different initial and follow-up executions are more likely to detect failures than those with similar initial and follow-up executions. The notion of ``difference'' is defined through the coverage Manhattan distance (CMD), frequency Manhattan distance (FMD), and frequency Hamming distance (FHD), in the context of adaptive random testing (ART); the CMD metric, based on branch-coverage execution profiles, shows the best fault-detection effectiveness. This article is well suited to finding the most effective metamorphic relations in a white-box setting, but it is not suitable for black-box testing.
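Stepping back to the sine example above, the metamorphic relation can be stated as a short executable test (using Python's `math` module, with the relation written in degrees as in the text):

```python
# The calculator example as an executable metamorphic test: we cannot easily
# verify sin(3.7 degrees) directly, but the periodicity relation
# sin(x) = sin(x + 360 degrees) must hold for any correct implementation.
import math

def mr_sine_period(x_degrees):
    initial = math.sin(math.radians(x_degrees))          # source test case
    follow_up = math.sin(math.radians(x_degrees + 360))  # follow-up test case
    return math.isclose(initial, follow_up, abs_tol=1e-12)

ok = mr_sine_period(3.7)
print(ok)  # True for math.sin
```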
The reason is that in black-box testing no source code is available, so the program's execution distance cannot be calculated. In my testing model, no source code is available either, so I found another research article (Henard, 2016) discussing the difference between black-box testing and white-box testing.

\subsection{White-box vs.\ Black-box}
Henard (2016) studied the difference between white-box and black-box testing. Two of their findings are useful in my research: black-box and white-box testing differ only slightly in performance (at most a 4\% difference in fault detection rate), and their overlap is very high; the first 10\% of the prioritized test data already agree on at least 60\% of the faults found. This article gave me a good sense of how similar white-box and black-box testing are, and I still have the opportunity to compare which of the three models is better.

\section{Experiment Support} \label{sec:label}
Stanisic (2015) wrote a review article about experiment workflow. Although not related to my research topic, it is very useful for improving the research workflow, especially for writing experiment notes. Git is a version control system that can roll back to older versions of your writing. Emacs Org mode is powerful for writing; Org-ref adds citations, cross-references, indexes, glossaries, and BibTeX utilities to Org mode. Org mode can also easily be converted to LaTeX and HTML formats, and one can even write Emacs Lisp for any needed function.

In my research, I need to collect a lot of data, so I have read a website that gave me some ideas about how to manage experiment data. In my opinion, if you write your own programs, for example in C++ or Python, the best approach is the CSV format. In both C++ and Python, there are many free libraries for reading and writing CSV.
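As a sketch of that CSV workflow (Python standard library only; `io.StringIO` stands in for a results file on disk):

```python
# Write experiment results with the standard csv module, then read them
# back for analysis; all values come back as strings.
import csv
import io

rows = [("sketch", "m", "rel_error"), ("gaussian", 400, 0.006)]

buf = io.StringIO()            # stands in for a file on disk
csv.writer(buf).writerows(rows)

buf.seek(0)
back = list(csv.reader(buf))
print(back[1])  # -> ['gaussian', '400', '0.006']
```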
In C++, I found a library named libxl (http://www.libxl.com/) for reading and writing Excel files, but it is not free. In Python, there is an open-source library named openpyxl for reading and writing Excel 2010 xlsx/xlsm/xltx/xltm files, but I found it unsuitable for operating on large amounts of data because it is very slow. Microsoft Excel itself, however, has many built-in functions for operating on data, and CSV files can be converted back to Excel format. If an Excel built-in function suits your research project, using it is the better way, since it is not necessary to rewrite the function yourself. For example, you can write your own code to collect data into a CSV file, convert the CSV file to Excel to draw graphics, and, when Excel lacks a suitable built-in function, convert back to CSV.

To sum up, the research literature indicates that sentiment analysis is coming into widespread use, but measuring machine translation service quality remains a critical point, which lies in finding the best compound mode for a machine translation service and a sentiment analysis service. With functional testing as the main testing method, metamorphic testing of cross-language sentiment analysis services is needed; metamorphic testing is appropriate for testing both translation tools and sentiment analysis tools. Several studies have indicated that black-box testing and white-box testing may both play a part in the effectiveness of metamorphic relations.

\section*{Acknowledgment}

\bibliographystyle{IEEEtran}
\bibliography{library}

\end{document}
\begin{Definition}{facets}
Let $\cell\subset \R^d$ be a polyhedron. We call the lower-dimensional polyhedra constituting its boundary \define{facet}s. A facet of dimension zero is called a \define{vertex}, a facet of dimension one an \define{edge}, and a facet of codimension one is called a \define{face}.
\end{Definition}

\begin{Definition}{mesh}
A \define{mesh} $\mesh$ is a nonoverlapping subdivision of the domain $\domain$ into polyhedral \define{cell}s denoted by $\cell$, for instance simplices, quadrilaterals, or hexahedra. The faces of a cell are denoted by $\face$, the vertices by $\vertex$. Cells are typically considered open sets. A mesh $\mesh$ is called regular if each face $\face \subset \d\cell$ of a cell $\cell\in\mesh$ is either a face of another cell $\cell'$, that is, $\overline{\face} = \overline{\cell} \cap \overline{\cell'}$, or a subset of $\d\domain$.
\end{Definition}

\begin{remark}
For this introduction, we will assume that $\domain$ is indeed the union of mesh cells, which means that its boundary consists of a finite union of planar faces. The more general case of a mesh approximating the domain is deferred to a later discussion.
\end{remark}

\begin{Definition}{finite-element}
With a mesh cell $\cell$, we associate a finite dimensional \define{shape function} space $\shapespace(\cell)$ of dimension $n_\cell$. The term \define{node functional} denotes a linear functional on this space. A set of node functionals $\{\nodal_\cell^i\}_{i=1,\dots,n_\cell}$ is called \define{unisolvent} on $\shapespace(\cell)$ if for any vector $\vu = (u_1,\dots,u_{n_\cell})^T$ there exists a unique $u\in \shapespace(\cell)$ such that
\begin{gather}
  \nodal_\cell^i(u) = \vu_i,\quad i=1,\dots,n_\cell.
\end{gather}
A \define{finite element} is a set of shape function spaces $\shapespace(\cell)$ for all $\cell\in\mesh$ together with a unisolvent set of node functionals.
\end{Definition}

\begin{Notation}{dofs}
If the node functionals $\nodal^i$ are unisolvent on $\shapespace(\cell)$, then there is a basis $\{p_k\}$ of $\shapespace(\cell)$ such that
\begin{gather}
  \nodal^i(p_k) = \delta_{ik}.
\end{gather}
We refer to $\{p_k\}$ as the \define{shape function basis} and use the term \define{degrees of freedom} for both the node functionals and the basis functions.
\end{Notation}

\begin{Definition}{node-topology}
Node functionals can be associated with the cell $\cell$ or with one of its lower-dimensional boundary facets. We call this association the \define{topology} of the finite element.
\end{Definition}

\begin{Definition}{fe-space}
The \define{finite element space} on the mesh $\mesh$, denoted by $V_\mesh$, is a subset of the concatenation of all shape function spaces,
\begin{gather}
  V_\mesh \subset \bigl\{ f\in L^2(\domain) \big| f_{|\cell} \in \shapespace(\cell) \bigr\}.
\end{gather}
The \define{degrees of freedom} of $V_\mesh$ are the union of all node functionals, where node functionals associated with a boundary facet are identified among all cells sharing that facet. The resulting dimension is
\begin{gather}
  n = \dim V_\mesh \le \sum n_\cell.
\end{gather}
\end{Definition}

\begin{figure}[tp]
  \begin{center}
    \includegraphics[width=.5\textwidth]{graph/concatenation.tikz}
  \end{center}
  \caption{Identification of node functionals. The node functionals on shared edges (separated for presentation purposes) are distinguished locally as belonging to their respective cells, but identical global indices are assigned to all nodes in a single circle. Thus, all associated shape functions obtain the same coefficient in the global basis representation of a finite element function $u$.}
  \label{fig:nodes-identification}
\end{figure}

\begin{Notation}{global-local}
When we enumerate the degrees of freedom of $V_\mesh$, we obtain a global numbering of degrees of freedom $\nodal^i$ with $i=1,\dots,n$.
For each mesh cell, we have a local numbering $\nodal_\cell^j$ with $j=1,\dots,n_\cell$. By construction of the finite element space, there is a unique $i$ such that $\nodal_\cell^j(f) = \nodal^i(f)$ for all cells $\cell$ and local indices $j$. The converse is not true due to the identification process.
\end{Notation}

\begin{Definition}{local-global}
We refer to the mapping between $\nodal^i$ and $\nodal_\cell^j$ as the mapping between global and local indices
\begin{gather}
  \iota: (\cell, j) \mapsto i.
\end{gather}
It induces a ``natural'' basis $\{v_i\}$ of $V_\mesh$ by
\begin{gather}
  v_{i|\cell} = p_{\cell,j},
\end{gather}
where $\{p_{\cell,j}\}$ is the shape function basis on $\cell$ and $j$ is the local index with $\iota(\cell,j) = i$. For each $\nodal^i$, we define $\mesh(\nodal^i)$ as the set of cells $\cell$ sharing the node functional $\nodal^i$, and
\begin{gather}
  \domain\left(\nodal^i\right) = \bigcup_{\cell\in \mesh(\nodal^i)} \cell.
\end{gather}
\end{Definition}

\begin{Lemma}{fe-support}
The support of the basis function $v_i\in V_\mesh$ is
\begin{gather*}
  \operatorname{supp}(v_i) \subset \domain\left(\nodal^i\right).
\end{gather*}
\end{Lemma}

\begin{Lemma}{mesh-continuity}
Let $\mesh$ be a subdivision of $\domain$, and let $u$ be a function on $\domain$ such that $u_{|\cell} \in C^1(\overline{\cell})$ for each $\cell\in\mesh$. Then,
\begin{gather}
  u\in H^1(\domain) \quad \Longleftrightarrow\quad u\in C(\overline\domain).
\end{gather}
\end{Lemma}

\begin{Lemma}{nodal-continuity}
We have $V_\mesh\subset C(\overline{\domain})$ if and only if for every facet $F$ of dimension $d_F < d$ there holds that
\begin{enumerate}
\item the traces of the spaces $\shapespace(\cell)$ on $F$ coincide for all cells $\cell$ having $F$ as a facet,
\item the node functionals associated to the facet are unisolvent on this trace space.
  \end{enumerate}
\end{Lemma}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Shape function spaces on simplices}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{Definition}{barycentric-coordinates}
  A simplex $\cell\subset \R^d$ with vertices
  $\vertex_0,\dots,\vertex_d$ is described by a set of $d+1$
  \define{barycentric coordinates}
  $\vlambda = (\lambda_0,\dots,\lambda_d)^T$ such that
  \begin{xalignat}2
    0\le\lambda_i &\le 1& i&=0,\dots,d;\\
    \lambda_i(\vertex_j) &= \delta_{ij}& i,j&=0,\dots,d;\\
    \sum \lambda_i(\vx) &= 1,
  \end{xalignat}
  and there holds
  \begin{gather}
    \cell = \Bigl\{\vx\in\R^d \Big|
    \vx = \sum \vertex_k\lambda_k,\quad
    \lambda_k \ge 0,\quad
    \sum \lambda_k = 1
    \Bigr\}.
  \end{gather}
\end{Definition}

\begin{todo}
  Properties of simplicial coordinates, e.g. $\{\lambda_i=0\}$ is the
  facet opposite to $\vertex_i$.
\end{todo}

\begin{Lemma}{barycentric-affine}
  There is a matrix $B_T\in \R^{(d+1)\times d}$ and a vector
  $b_T\in\R^{d+1}$, such that
  \begin{gather}
    \vlambda = B_T\vx + b_T.
  \end{gather}
\end{Lemma}

\begin{Corollary}{barycentric-interpolation}
  The barycentric coordinates $\lambda_0,\dots,\lambda_d$ are the
  linear Lagrange interpolating functions for the points
  $\vertex_0,\dots,\vertex_d$. In particular, $\lambda_k \equiv 0$ on
  the facet not containing $\vertex_k$.
\end{Corollary}

\begin{example}
  We can use barycentric coordinates to define interpolating
  polynomials on simplicial meshes easily, as in
  Table~\ref{tab:barycentric-shapes}.
\begin{table}[tp] \centering \begin{tabular}{|c|l|} \hline Degrees of freedom & Shape functions \\\hline \adjustbox{valign=center,margin=3pt}{\includegraphics[width=2cm]{mixed/fig/p1-p.tikz}} & {\begin{minipage}[b]{6cm} \begin{gather*} \phi_i = \lambda_i, \quad i=0,1,2 \end{gather*} \end{minipage}} \\\hline \adjustbox{valign=center,margin=3pt}{\includegraphics[width=2cm]{mixed/fig/p2-p.tikz}} & {\begin{minipage}[b]{6cm} \begin{xalignat*}2 \phi_{ii} &= 2\lambda_i^2 - \lambda_i, &i&=0,1,2\\ \phi_{ij} &= 4\lambda_i\lambda_j &j&\neq i \end{xalignat*} \end{minipage}} \\\hline \adjustbox{valign=center,margin=3pt}{\includegraphics[width=2cm]{mixed/fig/p3-p.tikz}} & {\begin{minipage}[b]{6cm} \begin{xalignat*}2 \phi_{iii} &= \tfrac12 \lambda_i(3\lambda_i-1)(3\lambda_i-2) &i&=0,1,2\\ \phi_{ij} &= \tfrac92\lambda_i\lambda_j(3\lambda_j-1) &j&\neq i\\ \phi_0 &= 27\lambda_0\lambda_1\lambda_2 \end{xalignat*} \end{minipage}} \\\hline \end{tabular} \caption{Degrees of freedom and shape functions of simplicial elements in terms of barycentric coordinates} \label{tab:barycentric-shapes} \end{table} \end{example} \begin{remark} The functions $\lambda_i(x)$ are the shape functions of the linear $P_1$ element on $T$. They allow us to define basis functions on the cell $T$ without use of a reference element $\widehat T$. Note that $\lambda_i\equiv 0$ on the face opposite to the vertex $x_i$. \end{remark} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Shape functions on tensor product cells} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{Definition}{tensor-product-polynomials} The space of \define{tensor product polynomials} of degree $k$ in $d$ dimensions, denoted as $\Q_k$ consists of polynomials of degree up to $k$ in each variable. 
  Given a basis for one-dimensional polynomials
  $\{p_i\}_{i=0,\dots,k}$, a natural basis for $\Q_k$ is the
  \define{tensor product basis}
  \begin{gather}
    \label{eq:fem-intro:1}
    p_{i_1,\dots,i_d}(\vx) = p_{i_1}\otimes \dots\otimes p_{i_d}(\vx)
    = \prod_{k=1}^d p_{i_k}(x_k).
  \end{gather}
\end{Definition}

\begin{remark}
  Note that the basis functions of $\Q_k$ can be written as products
  of univariate polynomials, but that a general polynomial in this
  space, being a linear combination of these basis functions, does not
  have this structure.
\end{remark}

\begin{Lemma}{tensor-product-node-functionals}
  Let $\{\nodal_j\}$ be a set of one-dimensional node functionals dual
  to the one-dimensional basis $\{p_i\}$ such that
  \begin{gather}
    \nodal_j(p_i) = \delta_{ij}.
  \end{gather}
  Then, a dual basis for $\{p_{i_1,\dots,i_d}\}$ is obtained by
  defining on the tensor product basis of $\Q_k$
  \begin{gather}
    \label{eq:fem-intro:2}
    \nodal_{j_1,\dots,j_d}(p_{i_1,\dots,i_d})
    = \nodal_{j_1} \otimes \dots\otimes \nodal_{j_d}
    (p_{i_1}\otimes\dots\otimes p_{i_d})
    = \prod_{k=1}^d \nodal_{j_k}(p_{i_k}).
  \end{gather}
\end{Lemma}

\begin{proof}
  It is a theorem in linear algebra that a linear functional on a
  vector space is uniquely defined by its values on a basis of the
  space. Thus,~\eqref{eq:fem-intro:2} uniquely defines the node
  functionals $\nodal_{j_1,\dots,j_d}$. The duality property follows
  from the fact that
  \begin{gather*}
    \nodal_{j_1,\dots,j_d}(p_{i_1,\dots,i_d})
    = \prod_{k=1}^d \delta_{i_k,j_k},
  \end{gather*}
  which is one if and only if all index pairs match and zero in all
  other cases.
\end{proof}

\begin{example}
  Let a basis $\{p_i\}$ of the univariate space $\P_k$ be defined by
  Lagrange interpolation in $k+1$ points $t_j \in [0,1]$. A basis of
  the $d$-dimensional space $\Q_k$ is then obtained by all possible
  products
  \begin{gather*}
    p_{i_1,\dots,i_d}(\vx) = \prod_{k=1}^d p_{i_k}(x_k).
  \end{gather*}
  The node functionals following the construction above are obtained
  by
  \begin{gather*}
    \nodal_{j_1,\dots,j_d}(p_{i_1,\dots,i_d})
    = \prod_{k=1}^d p_{i_k}(t_{j_k}).
  \end{gather*}
  Finally, we have to convert the term on the right into an
  expression, which can be applied to any polynomial in $\Q_k$. To
  this end, we observe that
  \begin{gather*}
    \prod_{k=1}^d p_{i_k}(t_{j_k})
    = p_{i_1,\dots,i_d}(t_{j_1},\dots,t_{j_d}).
  \end{gather*}
  Therefore, we conclude that the tensor product node functionals
  resulting from this construction are
  \begin{gather*}
    \nodal_{j_1,\dots,j_d}(p) = p(t_{j_1},\dots,t_{j_d}).
  \end{gather*}
\end{example}

\begin{Example*}{q2}{The space $\Q_2$}
  \begin{center}
    \includegraphics[width=.3\textwidth]{graph/shape0}
    \includegraphics[width=.3\textwidth]{graph/shape1}
    \includegraphics[width=.3\textwidth]{graph/shape2}
    \includegraphics[width=.3\textwidth]{graph/shape3}
    \includegraphics[width=.3\textwidth]{graph/shape4}
    \includegraphics[width=.3\textwidth]{graph/shape5}
    \includegraphics[width=.3\textwidth]{graph/shape6}
    \includegraphics[width=.3\textwidth]{graph/shape7}
    \includegraphics[width=.3\textwidth]{graph/shape8}
  \end{center}
\end{Example*}

\begin{Lemma}{tensor-product-trace}
  The trace of the $d$-dimensional tensor product polynomial space
  $\Q_k$ on the $\delta$-dimensional facets of the reference cube
  $\refcell = (0,1)^d$ is the $\delta$-dimensional space $\Q_k$. The
  traces from two cells sharing the same face coincide, if the mapping
  is continuous. Therefore, continuity can be achieved by unisolvent
  sets of node functionals on the face.
\end{Lemma}

\begin{proof}
  The claim follows by keeping $d-\delta$ variables constant in the
  tensor product basis in~\eqref{eq:fem-intro:1}.
\end{proof} \begin{Example*}{cg-q2}{Continuous basis functions} \begin{center} \includegraphics[height=.20\textwidth]{graph/cgbasis1-02} \includegraphics[height=.20\textwidth]{graph/cgbasis1-03} \includegraphics[height=.20\textwidth]{graph/cgbasis1-15} \includegraphics[height=.20\textwidth]{graph/cgbasis1-16} \includegraphics[height=.20\textwidth]{graph/cgbasis1-07} \includegraphics[height=.20\textwidth]{graph/cgbasis1-13} \includegraphics[height=.20\textwidth]{graph/cgbasis1-18} \includegraphics[height=.20\textwidth]{graph/cgbasis1-22} \includegraphics[height=.20\textwidth]{graph/cgbasis1-17} \includegraphics[height=.20\textwidth]{graph/cgbasis1-23} \includegraphics[height=.20\textwidth]{graph/cgbasis1-20} \includegraphics[height=.20\textwidth]{graph/cgbasis1-24} \end{center} \end{Example*} \begin{Example*}{dg-q2}{Discontinuous basis functions} \begin{center} \includegraphics[height=.20\textwidth]{graph/dgbasis1-08} \includegraphics[height=.20\textwidth]{graph/dgbasis1-15} \includegraphics[height=.20\textwidth]{graph/dgbasis1-20} \includegraphics[height=.20\textwidth]{graph/dgbasis1-27} \includegraphics[height=.20\textwidth]{graph/dgbasis1-07} \includegraphics[height=.20\textwidth]{graph/dgbasis1-19} \includegraphics[height=.20\textwidth]{graph/dgbasis1-23} \includegraphics[height=.20\textwidth]{graph/dgbasis1-30} \includegraphics[height=.20\textwidth]{graph/dgbasis1-26} \includegraphics[height=.20\textwidth]{graph/dgbasis1-33} \includegraphics[height=.20\textwidth]{graph/dgbasis1-24} \includegraphics[height=.20\textwidth]{graph/dgbasis1-22} \end{center} \end{Example*} \begin{example} As a second example, we choose $d=2$ and the univariate space $\P_2$ with node functionals \begin{gather} \nodal_0(p) = p(0), \quad \nodal_1(p) = \int_0^1 p(t) \dt, \quad \nodal_2(p) = p(1), \end{gather} that is, a mixture of Lagrange interpolation and orthogonality on the interval $[0,1]$. 
  The matching basis polynomials are
  \begin{gather}
    p_0(t) = 3(1-t)^2 - 2(1-t),
    \quad p_1(t) = 6 t(1-t),
    \quad p_2(t) = 3t^2-2t.
  \end{gather}
  Following the construction of the previous example, we obtain
  \begin{gather}
    \begin{aligned}
      \nodal_{00}(p) &= p(0,0), &\nodal_{02}(p) &= p(0,1),\\
      \nodal_{20}(p) &= p(1,0), &\nodal_{22}(p) &= p(1,1).
    \end{aligned}
  \end{gather}
  Then,
  \begin{gather}
    \nodal_{01}(p_{01}) = \nodal_{01}(p_0\otimes p_1)
    = p_0(0)\int_0^1 p_1(y)\dy
    = \int_0^1 p_{01}(0,y) \dy.
  \end{gather}
  Thus, applied to a general polynomial $p$, the node functional
  $\nodal_{01}(p) = \int_0^1 p(0,y)\dy$ is the integral over the left
  edge of the reference square. By the same construction,
  $\nodal_{21}$ is the integral over the right edge. $\nodal_{10}$ and
  $\nodal_{12}$ are the integrals over the bottom and top edge,
  respectively. Finally,
  \begin{gather}
    \nodal_{11}(p_{11})
    = \int_0^1 p_1(x)\dx \int_0^1 p_1(y) \dy
    = \int_0^1\int_0^1 p_{11}(x,y) \dx\dy.
  \end{gather}
  Thus, the tensor product of two line integrals becomes the integral
  over the area.
\end{example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{The Galerkin equations and Céa's lemma}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{Definition*}{galerkin-approximation}{Galerkin approximation}
  Let $u\in V$ be determined by the weak formulation
  \begin{gather*}
    a(u,v) = f(v) \qquad\forall v\in V,
  \end{gather*}
  where $V$ is a suitable function space including boundary
  conditions. The \define{Galerkin approximation}, also called
  \define{conforming approximation} of this problem reads as follows:
  choose a subspace $V_n\subset V$ of dimension $n$ and find $u_n\in
  V_n$, such that
  \begin{gather*}
    a(u_n,v_n) = f(v_n) \qquad\forall v_n\in V_n.
  \end{gather*}
  We will refer to this equation as the \define{discrete problem}.
\end{Definition*}

\begin{Corollary*}{galerkin-equations}{Galerkin equations}
  After choosing a basis $\{v_i\}$ for $V_n$, the Galerkin equations
  are equivalent to a linear system
  \begin{gather}
    \mata \vu = \vf,
  \end{gather}
  with $\mata\in\R^{n\times n}$ and $\vf\in \R^n$ defined by
  \begin{gather}
    a_{ij} = a(v_j, v_i),
    \qquad
    f_i = f(v_i).
  \end{gather}
\end{Corollary*}

\begin{Lemma}{discrete-lax-milgram}
  If the assumptions of the Lax-Milgram lemma hold for $a(.,.)$ on
  $V$, then they also hold on $V_n\subset V$. In particular, unique
  solvability of the Galerkin equations is implied.
\end{Lemma}

\begin{Lemma*}{cea}{Céa}
  Let $a(.,.)$ be a bounded and elliptic bilinear form on the Hilbert
  space $V$, with continuity constant $M$ and ellipticity constant
  $\alpha$. Let $u \in V$ and $u_n\in V_n \subset V$ be the solution
  to the weak formulation and its Galerkin approximation
  \begin{gather*}
    \begin{aligned}
      a(u,v) &= f(v) & \qquad\forall v&\in V,\\
      a(u_n,v_n) &= f(v_n) & \qquad\forall v_n&\in V_n,
    \end{aligned}
  \end{gather*}
  respectively. Then, there holds
  \begin{gather}
    \norm{u-u_n}_V \le \frac{M}{\alpha} \inf_{v_n\in V_n}\norm{u-v_n}_V.
  \end{gather}
\end{Lemma*}

\begin{Lemma}{fe-matrix}
  For a finite element discretization of Poisson's equation with the
  space $V_\mesh$, the Galerkin equations can be computed using the
  following formulas:
  \begin{alignat*}3
    a_{ij} &= \int\limits_\domain \nabla v_j \cdot \nabla v_i \dx
    &&= \int\limits_{\domain(\nodal^i)} \nabla v_j \cdot \nabla v_i \dx
    &&= \sum_{\cell\in\mesh(\nodal^i)}\int\limits_\cell
    \nabla v_j \cdot \nabla v_i \dx\\
    f_{i} &= \int\limits_\domain f v_i \dx
    &&= \int\limits_{\domain(\nodal^i)} f v_i \dx
    &&= \sum_{\cell\in\mesh(\nodal^i)}\int\limits_\cell f v_i \dx
  \end{alignat*}
\end{Lemma}

\begin{Algorithm*}{matrix-assembling}{Assembling the matrix}
  \begin{enumerate}
  \item Start with a matrix $\mata = 0 \in \R^{n\times n}$
  \item Loop over all cells $\cell\in\mesh$
  \item On each cell $\cell$, compute a cell matrix
    $\mata_\cell \in \R^{n_\cell\times n_\cell}$ by integrating
    \begin{gather}
      a_{\cell,ij} = \int_\cell \nabla p_{\cell,j}\cdot\nabla p_{\cell,i}\,dx,
    \end{gather}
    where $\{p_{\cell,i}\}$ is the shape function basis.
  \item Assemble the cell matrices into the global matrix by
    \begin{gather}
      a_{\iota(\cell,i),\iota(\cell,j)}
      = a_{\iota(\cell,i),\iota(\cell,j)} + a_{\cell,ij}
      \qquad i,j = 1,\dots,n_\cell.
    \end{gather}
  \end{enumerate}
\end{Algorithm*}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Mapped finite elements}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{Definition}{mapped-mesh}
  A mapped mesh $\mesh$ is a set of cells $\cell$, which are defined
  by a single \define{reference cell} $\refcell$ and individual smooth
  mappings
  \begin{gather}
    \begin{split}
      \Phi_\cell \colon \refcell &\to \R^d\\
      \Phi_\cell(\refcell) &= \cell.
    \end{split}
  \end{gather}
  The definition extends to small sets of reference cells, for
  instance for triangles and quadrilaterals.
\end{Definition} \begin{Example}{mapping-linear} Let the reference triangle $\refcell$ be defined by \begin{gather} \refcell = \left\{ \begin{pmatrix} \refx\\\refy \end{pmatrix} \middle| \refx,\refy >0, \refx+\refy < 1 \right\}. \end{gather} Then, every cell $\cell$ spanned by the vertices $\vertex_0$, $\vertex_1$, and $\vertex_2$ is obtained by mapping $\refcell$ by the \putindex{affine mapping} \begin{gather} \Phi_\cell(\refvx) = \begin{pmatrix} X_1-X_0 & X_2 - X_0 \\ Y_1-Y_0 & Y_2 - Y_0 \end{pmatrix} \begin{pmatrix} \refx \\ \refy \end{pmatrix} + \begin{pmatrix} X_0 \\ Y_0 \end{pmatrix} =: \matb_\cell \refvx + \vb_\cell \end{gather} \end{Example} \begin{Example}{mapping-bilinear} The reference cell for a quadrilateral is the reference square $\refcell = (0,1)^2$. Every quadrilateral $\cell$ spanned by the vertices $\vertex_0$ to $\vertex_3$ is then obtained by the \putindex{bilinear mapping} \begin{gather} \Phi_\cell(\refvx) = \vertex_0 (1-\refx)(1-\refy) + \vertex_1 \refx(1-\refy) + \vertex_2 (1-\refx)\refy + \vertex_3 \refx\refy \end{gather} \end{Example} \begin{Definition}{mapped-fe} Mapped shape functions $\{p_i\}$ on a mesh cell $\cell$ are defined by a set of shape functions $\{\refp_i\}$ on the reference cell $\refcell$ through \define{pull-back} \begin{gather} \begin{split} p_i(\vx) &= \refp_i\left(\Phi^{-1}(\vx)\right) = \refp_i(\refvx),\\ \nabla p_i(\vx) &= \nabla\Phi^{-T}(\refvx)\refgrad\refp_i(\refvx) \end{split} \end{gather} \end{Definition} \begin{Lemma}{mapped-norms-affine} Let $\refcell$ be the reference triangle and let $\cell$ be a triangular mesh cell with mapping $\vx = \Phi_\cell(\refvx) = \matb \refvx + \vb$. Let there hold $u(\vx) = \refu(\refvx)$. 
Then, $u\in H^k(\cell)$ if and only if $\refu\in H^k(\refcell)$ and we have with some constant $c$ the estimates \begin{gather} \begin{split} \snorm{\refu}_{k,\refcell} &\le c \norm{\matb}^k (\det \matb)^{-\nicefrac12} \snorm{u}_{k,\cell},\\ \snorm{u}_{k,\cell} &\le c \norm{\matb^{-1}}^k (\det \matb)^{\nicefrac12} \snorm{\refu}_{k,\refcell}. \end{split} \end{gather} \end{Lemma} \begin{Lemma}{shape-regular-transformation} For a cell $\cell$, let $R$ be the radius of the circumscribed circle and $\rho$ the radius of the inscribed circle. Then, \begin{gather} \norm{\matb} \le c R, \qquad \norm{\matb^{-1}} \le c \rho^{-1}. \end{gather} \end{Lemma} \begin{proof} \begin{figure} \centering \begin{tikzpicture} \def\scale{3} %% reference cell \def\mid{\scale*0.2928932188} \def\outermid{\scale*0.5} \def\outerrad{\scale*0.7071067811} \node (mid) at (\mid,\mid) {$\refvx_0$}; \coordinate (x1) at (\scale*1,0); \node[right = \scale*0.01 of x1] (x1name) {$\refvx_1$}; \coordinate (x2) at (0,\scale*1); \node[above = \scale*0.01 of x2] (x2name) {$\refvx_2$}; \coordinate (x3) at (0,0); \node[left = \scale*0.01 of x3] (x3name) {$\refvx_3$}; \coordinate (xc) at (\outermid,\outermid); \node[above = \scale*0.01 of xc] (xcname) {$\refvx_c$}; \draw (x1)--(x2)--(x3)--(x1); % incircle \draw (mid) circle (\mid); \draw (mid) -- (\mid,0); \node[below = \scale*0.01 of mid,xshift=0.5em] (rho) {$\refrho$}; % circumcircle \draw (xc) circle (\outerrad); \draw (xc) -- (\outermid+\outerrad,\outermid); \node[above right = \scale*0.01 and \scale*0.25 of xc] (R) {$\reference{R}$}; \end{tikzpicture} \caption{Visualization of the inscribed and circumscribed circle of the reference cell $\refcell$.} \end{figure} Let us define $S_{\refrho}\coloneqq \{\refvy\in\R^d ~\mid~ \abs{\refvy}=\refrho \}$. For $\refvy\in S_{\refrho}$ there holds $\refvx\coloneqq \refvxo + \refvy\in \overline{\refcell}$ and $\vx=\matb\refvx + \vb = \matb(\refvxo+\refvy)+\vb$. 
  Additionally, we have $\vxo = \matb \refvxo + \vb$, which yields
  \begin{align*}
    \abs{\vx-\vxo} = \abs{\matb\refvy}\leq 2R.
  \end{align*}
  Now consider the operator norm of $\matb$,
  \begin{align*}
    \norm{\matb}
    = \sup_{\refvy\in\R^d\setminus\{0\}}
    \frac{\abs{\matb\refvy}}{\abs{\refvy}}
    = \sup_{\abs{\refvy}=1} \abs{\matb\refvy}
    = \sup_{\refvy\in S_{\refrho}} \frac{\abs{\matb\refvy}}{\refrho}
    \leq \frac{2R}{\refrho}.
  \end{align*}
  Since $\refrho$ depends only on $\refcell$ but not on $\cell$, we
  get $\norm{\matb}\leq cR$ with $c$ depending on $\refrho$.

  The same argument can be applied for the second estimate. We now
  define $S_\rho\coloneqq \{\vy\in\R^d~\mid~\abs{\vy}=\rho \}$. For
  $\vy\in S_\rho$ there holds $\vx\coloneqq\vxo+\vy\in\overline{\cell}$,
  and we let $\refvx=\matb^{-1}(\vx-\vb)$ and
  $\refvxo=\matb^{-1}(\vxo-\vb)$, which yields
  \begin{align*}
    \abs{\refvx - \refvxo} = \abs{\matb^{-1} \vy}\leq 2\reference{R}.
  \end{align*}
  Analogous to the first estimate we obtain
  \begin{align*}
    \norm{\matb^{-1}}
    = \sup_{\vy\in S_\rho} \frac{\abs{\matb^{-1} \vy}}{\rho}
    \leq \frac{2\reference{R}}{\rho}.
  \end{align*}
  Since $\reference{R}$ depends only on $\refcell$ but not on $\cell$,
  we have $\norm{\matb^{-1}}\leq c\rho^{-1}$ where $c$ depends on
  $\reference{R}$.
\end{proof}

\begin{Assumption}{mapping-decomposition}
  For more general mappings $\Phi\colon \refcell\to \cell$, we make
  the assumption that they can be decomposed into three factors,
  \begin{gather}
    \Phi = \Phi_O \circ \Phi_S \circ \Phi_W,
  \end{gather}
  where $\Phi_O$ is a combination of translation and rotation,
  $\Phi_S$ is a scaling with a characteristic length $h_\cell$, and
  $\Phi_W$ is a warping function not changing the characteristic
  length.
\end{Assumption}

\begin{example}
  We construct the inverse of $\Phi$ in two dimensions by the
  following three steps, using as $h_\cell$ the length of the longest
  edge of $\cell$.
  \begin{enumerate}
  \item Choose $\Phi_O$ as the rigid body movement which maps the
    longest edge to the interval $(0,h_\cell)$ on the $x$-axis and the
    cell itself to $\cell_O$ in the positive half plane. This mapping
    has the structure
    \begin{gather*}
      \Phi^{-1}_O (\vx) = \mats \vx - \mats \vertex_0,
    \end{gather*}
    where $\mats$ is an orthogonal matrix and $\vertex_0$ is the
    vertex moved to the origin.
  \item Choose the scaling
    \begin{gather*}
      \Phi^{-1}_S (\vx) = \tfrac1{h_\cell} \vx,
    \end{gather*}
    such that the longest edge of the resulting cell $\cell_S$ equals
    the interval $(0,1)$ on the $x$-axis.
  \item Warp the cell $\cell_S$ into the reference cell $\refcell$ by
    the mapping $\Phi^{-1}_W$. This operation leaves the longest edge
    untouched. For triangles, it is the uniquely defined linear
    transformation mapping the vertex not on the longest edge to
    $(0,1)$. For quadrilaterals, it is a bilinear transformation.
  \end{enumerate}
  In the first step, we have assumed that the cell is convex, which is
  always true for triangles. For nonconvex quadrilaterals, it can be
  shown that the determinant of $\nabla\Phi$ changes sign inside the
  cell, such that these cells are not useful for computations.

  The idea of this decomposition is that we separate mappings changing
  the position, size, and shape of the cells.
\end{example}

\begin{Lemma*}{scaling-1}{Scaling lemma}
  Let the typical length of a cell $\cell$ be $h_\cell$. Assume there
  are constants $0 < M_\cell, m_\cell, d_\cell, D_\cell$, such that
  \begin{gather}
    \begin{split}
      \norm{\nabla\Phi_W(\refvx)} \le M_\cell, \\
      \norm{\nabla\Phi_W^{-1}(\refvx)} \le m_\cell^{-1} , \\
      d^2_\cell \le \det \nabla\Phi_W(\refvx) \le D^2_\cell,
    \end{split}
  \end{gather}
  for all $\refvx\in\refcell$.
  Then, for $k=0,1$ and a constant $c$
  \begin{gather}
    \begin{split}
      \snorm{\refu}_{k,\refcell} &\le c \frac{M_\cell}{d_\cell}
      h_\cell^{k-\nicefrac d2} \snorm{u}_{k,\cell},\\
      \snorm{u}_{k,\cell} &\le c \frac{D_\cell}{m_\cell}
      h_\cell^{\nicefrac d2-k} \snorm{\refu}_{k,\refcell}.
    \end{split}
  \end{gather}
  This extends to higher derivatives under assumptions on higher
  derivatives of $\Phi_\cell$.
\end{Lemma*}

\begin{proof}
  By the chain rule,
  $\nabla \Phi_\cell = \nabla \Phi_O \nabla \Phi_S \nabla \Phi_W$. By
  construction, $\nabla\Phi_O$ is an orthogonal matrix, such that
  \begin{gather*}
    \norm{\nabla\Phi_O} = \norm{\nabla\Phi_O^{-1}} = 1.
  \end{gather*}
  Since it preserves angles and lengths, $\det \nabla\Phi_O = 1$.
  Since $\Phi_S$ is a multiple of the identity, we have
  \begin{gather*}
    \norm{\nabla\Phi_S} = h_\cell,
    \quad\norm{\nabla\Phi_S^{-1}} = \frac1{h_\cell},
    \quad\det\nabla\Phi_S = h_\cell^d.
  \end{gather*}
  By change of variables, we have
  \begin{gather*}
    \int_\cell u^2 \dvx
    = \int_{\refcell} \refu^2 \abs{\det\nabla\Phi_\cell} \dvxref
    = \int_{\refcell} \refu^2
    \det\nabla\Phi_S\det\nabla\Phi_O\det\nabla\Phi_W \dvxref ,
  \end{gather*}
  such that the case $k=0$ is proven immediately by
  \begin{gather*}
    h_\cell^d d_\cell^2 \int_{\refcell} \refu^2 \dvxref
    \le \int_\cell u^2 \dvx
    \le h_\cell^d D_\cell^2 \int_{\refcell} \refu^2 \dvxref.
  \end{gather*}
  By the chain rule, we have
  \begin{gather*}
    \refgrad\refu(\refvx) = \nabla\Phi^T \nabla u(\vx)
    = \nabla\Phi_W^T \nabla\Phi_S^T \nabla\Phi_O^T \nabla u(\vx),
  \end{gather*}
  such that there holds
  \begin{gather}
    \begin{split}
      \abs{\refgrad\refu(\refvx)} &\le \norm{\nabla\Phi_W} h_\cell
      \abs{\nabla u},\\
      \abs{\nabla u(\vx)} &\le \norm{\nabla\Phi_W^{-1}} h_\cell^{-1}
      \abs{\refgrad \refu}.
    \end{split}
  \end{gather}
  Integrating these estimates over $\refcell$ and $\cell$,
  respectively, as in the case $k=0$ proves the estimates for $k=1$.
\end{proof}

\begin{Remark}{simple-mappings}
  We have $d_\cell = D_\cell$ if and only if the mapping is affine.
  The quotient $M_\cell/m_\cell$ measures how much the shape of the
  mesh cell deviates from the reference cell.
For instance, it is one for squares. \end{Remark} %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End:
% Copyright 2011-2015 David Hadka. All Rights Reserved. % % This file is part of the MOEA Framework User Manual. % % Permission is granted to copy, distribute and/or modify this document under % the terms of the GNU Free Documentation License, Version 1.3 or any later % version published by the Free Software Foundation; with the Invariant Section % being the section entitled "Preface", no Front-Cover Texts, and no Back-Cover % Texts. A copy of the license is included in the section entitled "GNU Free % Documentation License". \chapter{Example: Knapsack Problem} In this chapter, we will walk through a complete example of creating a new optimization problem and solving it using the MOEA Framework. This example serves as a review of the topics learned thus far. We will also introduce several new concepts such as constraint handling. The problem we will be solving is the multiobjective version of the knapsack problem. The knapsack problem (discussed in much detail at \webpage{http://en.wikipedia.org/wiki/Knapsack_problem}) is a famous combinatorial problem that involves choosing which items to place in a knapsack to maximize the value of the items carried without exceeding the weight capacity of the knapsack. More formally, we are given $N$ items. Each item has a profit, $P(i)$, and weight, $W(i)$, for $i = {1, 2, \ldots, N}$. Let $d(i)$ represent our decision to place the $i$-th item in the knapsack, where $d(i)=1$ if the item is put into the knapsack and $d(i)=0$ otherwise. If the knapsack has a weight capacity of $C$, then the knapsack problem is defined as: \begin{equation*} \text{Maximize} \; \sum_{i=1}^{N} d(i)*P(i) \; \text{such that} \; \sum_{i=1}^{N} d(i)*W(i) \leq C \end{equation*} The summation on the left (which we are maximizing) calculates the total profit we gain from the items placed in the knapsack. The summation on the right side is a constraint that ensures the items placed in the knapsack do not exceed the weight capacity of the knapsack. 
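Before moving on to the multiobjective variant, the single-knapsack model above is simple enough to evaluate directly. The following stand-alone sketch computes the profit sum and checks the weight constraint for a given decision vector $d$. It is purely illustrative and not part of the MOEA Framework API; the class and method names are invented for this example, and flagging infeasible selections with \java{-1} is an arbitrary choice for the sketch (the framework's own constraint handling is discussed later in this chapter).

```java
/**
 * Illustrative stand-alone evaluation of the single-objective
 * knapsack model; not part of the MOEA Framework API.
 */
public class KnapsackCheck {

    /**
     * Returns the total profit sum d(i)*P(i) of the selected items,
     * or -1 if sum d(i)*W(i) exceeds the capacity C.
     *
     * @param d        d[i] is true if item i is placed in the knapsack
     * @param profit   profit[i] is the profit P(i) of item i
     * @param weight   weight[i] is the weight W(i) of item i
     * @param capacity the weight capacity C of the knapsack
     */
    public static int evaluate(boolean[] d, int[] profit, int[] weight,
            int capacity) {
        int totalProfit = 0;
        int totalWeight = 0;

        for (int i = 0; i < d.length; i++) {
            if (d[i]) {
                totalProfit += profit[i];
                totalWeight += weight[i];
            }
        }

        // infeasible selections are merely flagged in this sketch
        return (totalWeight <= capacity) ? totalProfit : -1;
    }

    public static void main(String[] args) {
        // profits and weights of knapsack 1 from the 5-item example
        // data file introduced later in this chapter
        int[] profit = {57, 94, 59, 83, 82};
        int[] weight = {94, 74, 77, 74, 29};

        // select items 2, 4 and 5: weight 74+74+29 = 177 <= 251
        boolean[] d = {false, true, false, true, true};

        System.out.println(evaluate(d, profit, weight, 251)); // prints 259
    }
}
```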
The multiobjective knapsack problem that we will be solving in this section is very similar, except that we now have $2$ knapsacks to hold the items. Additionally, the profit and weights vary depending on which knapsack is holding each item. For example, an item will have a profit of $\$25$ and a weight of $5$ pounds in the first knapsack, but will have a profit of $\$15$ and a weight of $8$ pounds in the second knapsack. (It may seem unusual that the weight changes, but that is how the problem is defined in the literature.) Thus, profit is now defined by $P(i,j)$ and weight by $W(i,j)$, where the $j = {1, 2}$ term is the knapsack index. Lastly, each knapsack defines its own capacity, $C_1$ and $C_2$. Combining all of this, the multiobjective knapsack problem is formally defined as: \begin{equation*} \begin{array}{l} \text{Maximize} \; \sum_{i=1}^{N} d(i)*P(i,1) \; \text{such that} \; \sum_{i=1}^{N} d(i)*W(i,1) \leq C_1 \; \text{and}\\ \text{Maximize} \; \sum_{i=1}^{N} d(i)*P(i,2) \; \text{such that} \; \sum_{i=1}^{N} d(i)*W(i,2) \leq C_2 \end{array} \end{equation*} Once we have a firm understanding of the optimization problem, we can now work on solving this problem. You can find all of the code for this example in the \folder{examples/org/moeaframework/examples/ga/knapsack} folder in the source code distribution. \section{Data Files} We begin by developing a way to store all of the information required by the knapsack problem --- profits, weights, capacities --- in a text file. This will let us quickly generate and run new inputs for this problem. Fortunately, two researchers, Eckart Zitzler and Marco Laumanns, have already created a file format for multiobjective knapsack problems at \webpage{http://www.tik.ee.ethz.ch/sop/download/supplementary/testProblemSuite/}. For example, a simple $5$ item problem instance would appear as follows. 
\begin{lstlisting}[language=plaintext]
knapsack problem specification (2 knapsacks, 5 items)
=
knapsack 1:
 capacity: +251
 item 1:
  weight: +94
  profit: +57
 item 2:
  weight: +74
  profit: +94
 item 3:
  weight: +77
  profit: +59
 item 4:
  weight: +74
  profit: +83
 item 5:
  weight: +29
  profit: +82
=
knapsack 2:
 capacity: +190
 item 1:
  weight: +55
  profit: +20
 item 2:
  weight: +10
  profit: +19
 item 3:
  weight: +97
  profit: +20
 item 4:
  weight: +73
  profit: +66
 item 5:
  weight: +69
  profit: +48
\end{lstlisting}

We will re-use this file format in this example.  One advantage is
that you can download any of the example knapsack problems from
\webpage{http://www.tik.ee.ethz.ch/sop/download/supplementary/testProblemSuite/}
and solve them with the program we are writing.  Go ahead and save
this example input file to \file{knapsack.5.2}.  We will then load and
solve this data file later in this chapter.

\section{Encoding the Problem}

The next step is to decide upon the encoding for the decision
variables.  Observe that we are deciding which items to place in the
knapsacks.  Recalling \chptref{chpt:representations}, the bit string
representation works well for situations where we are making many
yes/no decisions.  For example, if $N=5$, we can represent the
decision to include each item using a bit string with $5$ bits.  Each
bit in the string corresponds to an item, and is set to \java{1} if
the item is included and \java{0} if the item is excluded.  For
instance, the bit string \java{00110} would place items 3 and 4 inside
the knapsacks, excluding the rest.

\section{Implementing the Problem}

Having decided upon an encoding, we can now implement the knapsack
problem as shown below.
\begin{lstlisting}[language=Java]
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.moeaframework.core.Problem;
import org.moeaframework.core.Solution;
import org.moeaframework.core.variable.BinaryVariable;
import org.moeaframework.core.variable.EncodingUtils;
import org.moeaframework.util.Vector;
import org.moeaframework.util.io.CommentedLineReader;

/**
 * Multiobjective 0/1 knapsack problem.
 */
public class Knapsack implements Problem {

    /**
     * The number of sacks.
     */
    private int nsacks;

    /**
     * The number of items.
     */
    private int nitems;

    /**
     * Entry {@code profit[i][j]} is the profit from including
     * item {@code j} in sack {@code i}.
     */
    private int[][] profit;

    /**
     * Entry {@code weight[i][j]} is the weight incurred from
     * including item {@code j} in sack {@code i}.
     */
    private int[][] weight;

    /**
     * Entry {@code capacity[i]} is the weight capacity of sack
     * {@code i}.
     */
    private int[] capacity;

    /**
     * Constructs a multiobjective 0/1 knapsack problem instance
     * loaded from the specified file.
     *
     * @param file the file containing the knapsack problem
     *        instance
     * @throws IOException if an I/O error occurred
     */
    public Knapsack(File file) throws IOException {
        this(new FileReader(file));
    }

    /**
     * Constructs a multiobjective 0/1 knapsack problem instance
     * loaded from the specified input stream.
     *
     * @param inputStream the input stream containing the knapsack
     *        problem instance
     * @throws IOException if an I/O error occurred
     */
    public Knapsack(InputStream inputStream) throws IOException {
        this(new InputStreamReader(inputStream));
    }

    /**
     * Constructs a multiobjective 0/1 knapsack problem instance
     * loaded from the specified reader.
     *
     * @param reader the reader containing the knapsack problem
     *        instance
     * @throws IOException if an I/O error occurred
     */
    public Knapsack(Reader reader) throws IOException {
        super();
        load(reader);
    }

    /**
     * Loads the knapsack problem instance from the specified
     * reader.
     *
     * @param reader the file containing the knapsack problem
     *        instance
     * @throws IOException if an I/O error occurred
     */
    private void load(Reader reader) throws IOException {
        Pattern specificationLine = Pattern.compile(
                "knapsack problem specification \\((\\d+) knapsacks, (\\d+) items\\)");
        Pattern capacityLine = Pattern.compile(" capacity: \\+(\\d+)");
        Pattern weightLine = Pattern.compile(" weight: \\+(\\d+)");
        Pattern profitLine = Pattern.compile(" profit: \\+(\\d+)");

        CommentedLineReader lineReader = null;
        String line = null;
        Matcher matcher = null;

        try {
            lineReader = new CommentedLineReader(reader);

            line = lineReader.readLine(); // problem specification
            matcher = specificationLine.matcher(line);

            if (matcher.matches()) {
                nsacks = Integer.parseInt(matcher.group(1));
                nitems = Integer.parseInt(matcher.group(2));
            } else {
                throw new IOException("knapsack data file "
                        + "not properly formatted: invalid specification "
                        + "line");
            }

            capacity = new int[nsacks];
            profit = new int[nsacks][nitems];
            weight = new int[nsacks][nitems];

            for (int i = 0; i < nsacks; i++) {
                line = lineReader.readLine(); // line containing "="
                line = lineReader.readLine(); // knapsack i
                line = lineReader.readLine(); // the knapsack capacity
                matcher = capacityLine.matcher(line);

                if (matcher.matches()) {
                    capacity[i] = Integer.parseInt(matcher.group(1));
                } else {
                    throw new IOException("knapsack data file "
                            + "not properly formatted: invalid capacity line");
                }

                for (int j = 0; j < nitems; j++) {
                    line = lineReader.readLine(); // item j
                    line = lineReader.readLine(); // the item weight
                    matcher = weightLine.matcher(line);

                    if (matcher.matches()) {
                        weight[i][j] = Integer.parseInt(matcher.group(1));
                    } else {
                        throw new IOException("knapsack data file "
                                + "not properly formatted: invalid weight line");
                    }

                    line = lineReader.readLine(); // the item profit
                    matcher = profitLine.matcher(line);

                    if (matcher.matches()) {
                        profit[i][j] = Integer.parseInt(matcher.group(1));
                    } else {
                        throw new IOException("knapsack data file "
                                + "not properly formatted: invalid profit line");
                    }
                }
            }
        } finally {
            if (lineReader != null) {
                lineReader.close();
            }
        }
    }

    @Override
    public void evaluate(Solution solution) {
        boolean[] d = EncodingUtils.getBinary(solution.getVariable(0));
        double[] f = new double[nsacks];
        double[] g = new double[nsacks];

        // calculate the profits and weights for the knapsacks
        for (int i = 0; i < nitems; i++) {
            if (d[i]) {
                for (int j = 0; j < nsacks; j++) {
                    f[j] += profit[j][i];
                    g[j] += weight[j][i];
                }
            }
        }

        // check if any weights exceed the capacities
        for (int j = 0; j < nsacks; j++) {
            if (g[j] <= capacity[j]) {
                g[j] = 0.0;
            } else {
                g[j] = g[j] - capacity[j];
            }
        }

        // negate the objectives since Knapsack is maximization
        solution.setObjectives(Vector.negate(f));
        solution.setConstraints(g);
    }

    @Override
    public String getName() {
        return "Knapsack";
    }

    @Override
    public int getNumberOfConstraints() {
        return nsacks;
    }

    @Override
    public int getNumberOfObjectives() {
        return nsacks;
    }

    @Override
    public int getNumberOfVariables() {
        return 1;
    }

    @Override
    public Solution newSolution() {
        Solution solution = new Solution(1, nsacks, nsacks);
        solution.setVariable(0, EncodingUtils.newBinary(nitems));
        return solution;
    }

    @Override
    public void close() {
        // do nothing
    }

}
\end{lstlisting}

It is not vitally important to understand all of the code.  Much of the code is for loading the data file discussed in the previous section.  The key sections of the code you should pay attention to are the \java{evaluate} method starting on line 168 and the \java{newSolution} method on line 219.  Starting with the \java{newSolution} method, notice how line 220 creates a solution using the three-argument constructor, \java{new Solution(1, nsacks, nsacks)}.
The three-argument constructor is used when defining problems with constraints.  In this example, we are defining a problem with \java{1} decision variable, \java{nsacks} objectives, and \java{nsacks} constraints --- one objective and one constraint for each knapsack.  Then on line 221 we set the one decision variable to be a bit string (binary encoding) with \java{nitems} bits.

The \java{evaluate} method on line 168 is where the knapsack equations from the beginning of this chapter are calculated.  We extract the bit string from the solution we are evaluating on line 169.  When a bit is set to \java{1}, the corresponding item is placed in both knapsacks.  Lines 175-182 sum up the profit and weight in each knapsack.  Lines 185-191 then check if any of the weights exceeds the capacity of a knapsack.  If the weight is within the capacity, the constraint is satisfied and we set the constraint value to 0 (line 187).  However, if the capacity is exceeded, the constraint is violated and we set the constraint to a non-zero value (line 189).  To reiterate, constraints that are satisfied have a value of zero; violated constraints have non-zero values (either positive or negative).

Lastly, we set the objective values on line 194 and the constraint values on line 195.  Note on line 194 how we negate the objective values.  This is because we are trying to maximize the objectives (the profits), whereas the MOEA Framework minimizes objectives by default.  See \sectref{sect:maximizing} for additional details on maximizing objectives.

\section{Solving the Problem}

With the problem implemented in Java, we can now solve the multiobjective knapsack problem using the optimization algorithms provided by the MOEA Framework.  In this example, we will use the NSGA-II algorithm as shown below.
\begin{lstlisting}[language=Java]
import java.io.File;
import java.io.IOException;
import java.io.InputStream;

import org.moeaframework.Executor;
import org.moeaframework.core.NondominatedPopulation;
import org.moeaframework.core.Solution;
import org.moeaframework.util.Vector;

/**
 * Example of binary optimization using the {@link Knapsack}
 * problem
 */
public class KnapsackExample {

    /**
     * Starts the example running the knapsack problem.
     *
     * @param args the command line arguments
     * @throws IOException if an I/O error occurred
     */
    public static void main(String[] args) throws IOException {
        // solve using NSGA-II
        NondominatedPopulation result = new Executor()
                .withProblemClass(Knapsack.class, new File("knapsack.5.2"))
                .withAlgorithm("NSGAII")
                .withMaxEvaluations(50000)
                .distributeOnAllCores()
                .run();

        // print the results
        for (int i = 0; i < result.size(); i++) {
            Solution solution = result.get(i);
            double[] objectives = solution.getObjectives();

            // negate objectives to return them to their maximized
            // form
            objectives = Vector.negate(objectives);

            System.out.println("Solution " + (i+1) + ":");
            System.out.println("    Sack 1 Profit: " + objectives[0]);
            System.out.println("    Sack 2 Profit: " + objectives[1]);
            System.out.println("    Binary String: "
                    + solution.getVariable(0));
        }
    }

}
\end{lstlisting}

Here, we are using the \java{Executor} to configure and solve the Knapsack problem.  Please refer to \chptref{chpt:executor} for more details.  You can now run this example code.  If all goes well, you will see output similar to:

\begin{lstlisting}[language=plaintext]
Solution 1:
    Sack 1 Profit: 259.0
    Sack 2 Profit: 133.0
    Binary String: 01011
\end{lstlisting}

In this case, only one Pareto optimal solution was found.  You can see the profits for each knapsack as well as identify which items were selected in this solution from the binary string being displayed.
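As a sanity check, we can re-evaluate the reported binary string by hand against the data file.  The following standalone sketch does not use the MOEA Framework at all; the class name \java{KnapsackCheck} is our own, and the hard-coded profit, weight, and capacity tables are copied from the \file{knapsack.5.2} data at the start of this chapter.

```java
/**
 * Hand-check of a binary string against the knapsack.5.2 data.
 * Illustrative only; not part of the MOEA Framework.
 */
public class KnapsackCheck {

    // Tables copied from the knapsack.5.2 data file: row 0 is sack 1,
    // row 1 is sack 2; column i is item i+1.
    static final int[][] PROFIT = {
        {57, 94, 59, 83, 82},
        {20, 19, 20, 66, 48}
    };
    static final int[][] WEIGHT = {
        {94, 74, 77, 74, 29},
        {55, 10, 97, 73, 69}
    };
    static final int[] CAPACITY = {251, 190};

    /**
     * Returns {profit sack 1, profit sack 2, violation sack 1,
     * violation sack 2}; a violation of 0 means the constraint
     * is satisfied.
     */
    static int[] evaluate(String bits) {
        int nsacks = CAPACITY.length;
        int[] profit = new int[nsacks];
        int[] weight = new int[nsacks];

        for (int i = 0; i < bits.length(); i++) {
            if (bits.charAt(i) == '1') {
                // a selected item counts against every sack
                for (int j = 0; j < nsacks; j++) {
                    profit[j] += PROFIT[j][i];
                    weight[j] += WEIGHT[j][i];
                }
            }
        }

        int[] result = new int[2 * nsacks];
        for (int j = 0; j < nsacks; j++) {
            result[j] = profit[j];
            // constraint value: 0 when feasible, overweight otherwise
            result[nsacks + j] = Math.max(0, weight[j] - CAPACITY[j]);
        }
        return result;
    }

    public static void main(String[] args) {
        int[] r = evaluate("01011");
        System.out.println("Sack 1 Profit: " + r[0]);   // 259
        System.out.println("Sack 2 Profit: " + r[1]);   // 133
        System.out.println("Feasible: " + (r[2] == 0 && r[3] == 0));
    }
}
```

Running this reproduces the profits from the NSGA-II output above (259 and 133) and confirms that the string \java{01011} keeps both knapsacks within capacity (weights 177 of 251 and 152 of 190).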
\section{Conclusion} This chapter walked you through a complete example of defining a new problem and solving it using the MOEA Framework. You should now have a general understanding of using the MOEA Framework. We recommend walking through the other examples in the \folder{examples} folder provided in the source code distribution.
\chapter{Motivation}\label{sec-motivation}

Satellites in orbit around Earth are used in many areas and disciplines, including space science, Earth observation, meteorology, climate research, telecommunication, navigation and human space exploration. They offer a unique resource for collecting scientific data, commercial opportunities and various essential applications and services, which lead to unrivalled possibilities for research and exploitation. However, in the past decades, with increasing space activities, a new and unexpected hazard has started to emerge: space debris.

\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{fig/motivation/CataloguedSpaceDebrisOverTime}
\caption{Catalogued space debris over time~\cite{wright2010current}}
\label{moti-CataloguedSpaceDebrisOverTime}
\end{figure}

Figure \ref{moti-CataloguedSpaceDebrisOverTime} shows how the debris population has changed over nearly 60 years. The y-axis gives the number of objects; each point represents the monthly count of objects in Earth orbit. The x-axis is time. Space debris is divided into four types, each shown in its own colour:
\begin{enumerate}
\item{Fragmentation Debris}
\item{Spacecraft}
\item{Mission-related Debris}
\item{Rocket Bodies}
\end{enumerate}
The numbers of spacecraft and rocket bodies grow roughly linearly, and mission-related debris is fairly stable. The most dramatically changing category is fragmentation debris, which is directly related to satellite explosions and collisions between artificial bodies.

Another statistical result shows this trend even more clearly. In almost 60 years of space activities, more than 5250 launches have resulted in some 42 000 tracked objects in orbit, of which about 23 000 remain in space and are regularly tracked by the US Space Surveillance Network and maintained in their catalogue, which covers objects larger than about 5--10 cm in low-Earth orbit (LEO) and 30 cm to 1 m at geostationary (GEO) altitudes.
Only a small fraction, about 1200, are intact, operational satellites today. This large amount of space hardware has a total mass of more than 7500 tonnes.

\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{fig/motivation/all_evo_type_count}
\caption{Count evolution by object type~\cite{ESAspacedebris}}
\label{moti-all_evo_type_count}
\end{figure}

\begin{enumerate}
\item{Spent upper stages}\\
About 24\% of the catalogued objects are satellites (less than a third of which are operational), and about 18\% are spent upper stages and mission-related objects such as launch adapters and lens covers. More than 290 in-orbit fragmentation events have been recorded since 1961. Only a few were collisions; the majority of the events were explosions of spacecraft and upper stages.
\item{Explosions of satellites and rocket bodies}\\
These fragmentation events are assumed to have generated a population of objects larger than 1 cm numbering on the order of 750000. The sporadic flux from naturally occurring meteoroids may only prevail over that from human-made debris objects near sizes of 0.1--1 mm. The main cause of in-orbit explosions is residual fuel that remains in tanks or fuel lines, or other energy sources that remain on board, once a rocket stage or satellite has been discarded in Earth orbit. Over time, the harsh space environment can reduce the mechanical integrity of external and internal parts, leading to leaks and/or mixing of fuel components, which could trigger self-ignition. The resulting explosion can destroy the object and spread its mass across numerous fragments with a wide spectrum of masses and imparted velocities.
\item{Anti-satellite tests}\\
Besides such accidental break-ups, satellite interceptions by surface-launched missiles have been a major contributor in the recent past. The Chinese FengYun-1C engagement in January 2007 alone increased the trackable space object population by 25\%.
\item{Other sources}\\
Solid rocket-motor firings, the ejection of reactor cores from Buk reactors after the end of operation of Russian radar ocean reconnaissance satellites in the 1980s, and the release of thin copper wires as part of a radio communication experiment during the Midas missions in the 1960s are the three most important sources.
\end{enumerate}

The nightmare scenario that space debris experts contemplate is called the Kessler syndrome, after American astrophysicist Donald Kessler. In 1978, while working for NASA, he published an analysis~\cite{kessler1978collision} which showed that frequent collisions exponentially increase the amount of space debris, leading to many more collisions and much more debris, until we lose the use of certain orbits because anything we put there would certainly be hit.
\begin{quote}
\em{``Since 2002, the growth has entered the more feared exponential trend.''}
\end{quote}
There can be absolutely no doubt that the time to do something about space debris has arrived.

\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{fig/motivation/TrackedSpaceDebris}
\caption{Tracked space debris in 1963 and fifty years later~\cite{Garbage}}
\label{moti-TrackedSpaceDebris}
\end{figure}

The Earth is now surrounded by tens of thousands of sizeable chunks of debris orbiting at speeds upwards of 17,000 miles per hour. Add to these larger pieces an estimated hundreds of thousands of sub-centimeter-sized artificial particles, and you have a recipe for potential disaster, for while these small particles may not seem imposing, they can present a serious threat to functional spacecraft.

Active Debris Removal (ADR) is necessary to stabilize the growth of space debris, but even more important is that any newly launched objects comply with post-mission disposal guidelines, especially orbital decay in less than 25 years.
If this were not the case, most of the required ADR effort would go to compensating for the non-compliance of new objects.

Several ideas and concepts for actively removing space debris have been proposed by researchers and institutions around the world. The Japan Aerospace Exploration Agency (JAXA), the European Space Agency and Texas A\&M University have each presented their own plans.

\newpage
\section{Structure \& Roadmap} \label{sec:03-structure-roadmap}
\chapter{Creole} \label{ch:2} Although similarities among creoles have been known to exist at least since the pioneering work of Schuchardt and others in the latter half of the 19th century, it was not until the middle of the present century that articles by Taylor (\citeyear{Taylor1960}; \citeyear{Taylor1963}; etc.), \citet{Thompson1961}, \citet{Whinnom1956, Whinnom1965}, and others began to spell out these similarities in any detail. Curiously, their pioneering efforts were not systematically developed; in general, creolists continued to describe individual creoles, or (much more rarely) groups of creoles with a common superstratum (e.g., \citealt{Goodman1964}, \citealt{Hancock1970}, \citealt{Alleyne1980}), or else simply used already existing data in long-drawn-out and essentially fruitless debates on issues such as monogenesis versus poly\-genesis, or substrate versus superstrate influence (\citealt{Bickerton1976} provides a brief summary of these). While the profession badly needs a volume that would systematically compare all the well-known creoles, such a task lies beyond the scope of the present volume. Instead, I shall look at general creole patterns in the five areas covered in the last chapter, plus some other areas, to give a rough general picture which should enable us to deter\-mine how far they, and HCE\il{Hawaiian Creole English}, resemble one another; and I shall then %\originalpage{4} explore in greater depth two areas -- verb-phrase complementation and the syntax and semantics of TMA systems -- which have already been treated by various writers more extensively than other areas. We should then be in a position to answer the questions posed at the end of the previous chapter. Before embarking on this task, however, it is necessary to say a few words about some of the peculiar problems it involves. 
One set of problems arises from the limitations of many existing descriptions of creoles.\is{creole!description of} No creole language has yet been provided the kind of com\-prehensive and detailed reference grammar that is taken for granted in most areal fields. With too few exceptions, creole grammars tend to stop where the syntax gets interesting, e.g., ``complex sentences'' are often dismissed with a page or two of unanalyzed examples. For many creoles, only outline sketches are available. Moreover, some descriptions may be based on incorrect data or contain incorrect analyses. As for the two creoles that I know best -- \ili{Guyanese Creole} and Hawaiian Creole English -- I must regretfully state that I find all previous descriptions deficient\enlargethispage{1\baselineskip} or misleading in a number of respects.\is{creole!description of} It might be argued here that it is premature to begin any general or theoretical work, especially one of a novel or controversial nature, until these lacunae have been filled and these errors amended. For instance, \citet[2]{Corne1977} states: ``Questions about the `genesis' of the creole languages, their genetic relations with each other and with their source language(s), the processes of creolisation (and pidginisation) cannot be approached seriously unless we know something about the object being talked about, and that we shall not know (in sufficient detail) until a lot more of the unglamourous drudgery of careful descriptive work\is{creole!description of} has been completed.'' This statement shows a profound misunderstanding of the ways in which science is developed and knowledge increases. 
Empirical knowledge is no guarantee of certitude, and its absence no barrier to insight; I would oppose, to Corne, the following statement by \citet[3]{Dingwall1979}: ``Relying on logical argument alone, Leucippus was able to develop the atomic theory, while Aristotle\ia{Aristotle}, able to rely on the results of numerous dis%
%\originalpage{45}
sections, failed to discover the correct function of the brain, imagining it to be the cooling system of the body.'' The view that theorists are mere grandstanding prima donnas, while the real work of the trade is done by the modest empirical plodder, is a widespread misconception in creole studies that merely underlines the immaturity of the field.\is{theory, role of} In the real world, unglamorous drudges never arrive at that moment of revelation which is always, like the rainbow, just beyond the next bend. For them, it's always ``a little too early to judge''; the data are ``not yet all in.'' They bequeath to their successors no more than mountains of fact, which may or may not contain the nuggets that would genuinely enrich us; more often, I suspect, the latter, since the facts one can gather about any language are infinite in number, and by no means all of equal value. What is needed is not dogged fact-gathering (with or without moral sermons) but the capacity to distinguish between the trivial and the nontrivial. The task of the theorist is to tell the field worker where to look and what to look for, and if the latter chooses to reject such aid, he has about as much brain as the man who throws away his metal detector and proceeds to dig by hand the three-acre field where he thinks treasure lies buried.\\\\
Another problem in creole studies is the question of how to interpret differences among creoles, where they genuinely exist. Are we to assume that any and every difference must be given equal weight?
Such an assumption would be naive, as I shall try to show.\is{creole!description of} \newpage Creoles are the nearest thing one can find to ab ovo creations of language, but they are not and cannot be purely ab ovo creations. At the very least, pidgins provide some input to them, and this, even if deficient, even if sometimes rejected, as we saw, is still input. Since pidgins show clear substratum differences\is{substratum influence}, and since the composition of substrata differs from place to place, that input must also be a variable, which must somehow be factored out if we are to determine the extent to which creoles are genuinely creative. %\originalpage{46} But there is another variable in pidgins that may have more far-reaching consequences than differences in the pidgin substrata: that is, the extent of superstrate influence\is{superstrate influence} on the pidgin. This in turn will depend on ratios of superstrate to nonsuperstrate speakers; if the former are numerous, there will be more superstrate features avail\-able to the first creole generation. We have already noted the case of Réunion\il{Réunion Creole}. But there are some areas in which \isi{population ratios} during the pidginization stage are unknown; hence, even if we exclude known cases like Réunion, this variable cannot be entirely eliminated. More\-over, factors other than demographic may influence it. Population ratios in \isi{Hawaii} differed little from those in the Caribbean\is{Caribbean}, but the (relative) freedom of an indentured as opposed to a slave society must have had some effects on the quantity and quality of linguistic inter\-action, and may well explain why we find more English features in both HPE\il{Hawaiian Pidgin English} and HCE\il{Hawaiian Creole English} than we do in a creole like Sranan, or even in the basilectal varieties of Guyanese\il{Guyanese Creole} or \ili{Jamaican Creole}. 
Other problems arise from the operation of linguistic change\is{linguistic change} processes, be those processes internal or contact produced. Internal changes affect languages regardless of their ancestry, and one would imagine that creoles, by the very recency of their emergence from rudimentary pre-pidgins, would be more, rather than less, subject to such changes than more developed languages. The oldest creoles have a time depth approaching five hundred years, which is certainly adequate for a number of significant changes to have taken place; but in most cases the absence of written records from earlier periods, and the unreliability of such records where they do exist -- not necessarily the fault of the witnesses, since these were Europeans, and code-switching presumably existed in the 17th century as it does today -- makes it difficult indeed to estimate the extent and nature of such changes.\footnote{Very few writers on creoles seem to have much background or experience in variation study, and on all the numerous occasions on which writers have used historical citations to make claims about earlier stages of creoles, I cannot recall a single one where the possi\-bility of codeswitching was even mentioned. It may well be that the average fieldhand was monolectal, but the slaves whose speech was most likely to be cited by Europeans were precisely the domestics and artisans who had the most access to superstrate models and who would therefore be the likeliest to be able and willing to adapt their speech in a superstrate direction when interacting with superstrate speakers. Historical citations should therefore be handled with great care, especially when they suggest earlier stages of a creole which would show a heavier superstrate influence than is found in the con\-temporary basilect of that creole.} However, in addition to internal change there is the contact-stimulated type of change known as \isi{decreolization}. 
This can affect any creole which has remained in contact with its superstrate, as most have. Decreolization is well documented for some English creoles %\originalpage{47} (\citealt{DeCamp1971}, \citealt{Bickerton1973a,Bickerton1975}), but has been largely ignored in studies of other creoles. \citet{Valdman1973} suggests that it is equally widespread among French varieties, and its presence in \ili{Cabo Verdiense} and some other \ili{Portuguese creoles} is quite apparent. The result of \isi{decreolization} is to create a continuum of intermediate varieties be\-tween creole and superstrate. If this process is sufficiently long and intense, the continuum may be progressively eroded at its creole end. The result may be a synchronic state in which the most conservative variety recoverable is already considerably different from (and con\-siderably closer to the superstrate than) the original creole; this is obvious in some cases, e.g., \isi{Trinidad}, but may be less so elsewhere. Again, in some cases where truly conservative varieties are recoverable, researchers may have failed to unearth them (the obser\-vations of \citet{Bailey1971} on the texts in \citet{LePageEtAl1960} are very relevant here). To compare a partially decreolized creole with a nondecreolized one can only produce an appearance of difference which might not have existed had it been possible to compare the two languages in their pristine condition. Yet, given present uncertainties as to which creoles have decreolized, and how much, this trap is one into which the most careful scholar might inadvertently fall. A field so fraught with possible sources of error might seem to provide the comparativist with an inexhaustible source of alibis. 
Faced with any apparent difference, he could say: ``Well, this must be due to one or another of these interfering factors, so let's just forget it!'' Any such procedure would turn the inquiry into a farce, and yet, in light of the foregoing paragraphs, it would be equally irresponsible to take every difference at its face value and accord equal weight to each. If, as indicated in \chapref{ch:1}, there is some unique, creative force at work in the formation of creoles, we must try to distinguish this from other forces that might interact with it and serve to mask it. But unless we can show precisely \textit{which} factor is involved, \textit{why} it should have taken effect, and \textit{how} it could have worked to provide the observed results, our efforts will be valueless. %\originalpage{48} A final problem concerns the weighing of evidence and the criteria for making judgments. This is of particular importance when we come\enlargethispage{1\baselineskip} to deal with apparent cases of substratum influence. Claims of substratum influence\is{substratum influence|(} still persist strongly in creole studies and are made in such recent works as \citet{JansenEtAl1978}, \citet{Alleyne1979, Alleyne1980}, etc. However, substrato\-maniacs, if I may give them their convenient and traditional name, seem to be satisfied with selecting particular structures in one or more creole languages and showing that superficially similar structures can be found in one or more West African languages\is{African languages} (at least one careful study, \citealt{Huttar1975}, has shown that such structures are not always as similar as they might appear at first glance). This may be just as well; if they pursued their inquiries any further, they would find that not only would they have to confront some rather serious diffi\-culties, but that even if they overcame these, they would, perforce, wind up in a position which is only a step away from that which is proposed here. 
Let us suppose that a very common structure in \ili{Caribbean creoles} is also attested for \ili{Yoruba} and perhaps one or two other rela\-tively minor languages (this case is not hypothetical; we shall meet with it in the very next section). To most substratomaniacs, the mere existence of such similarities constitutes self-evident proof of the connection. They seldom even consider the problem of transmission\is{transmission problem}. How does a rule get from \ili{Yoruba} into a creole? Theoretically, there are several possibilities. One at least -- some kind of monogenetic ancestor which would have taken structure from \ili{Yoruba} and other languages and passed it on to a wide range of descen\-dants -- has been proposed many times, but no body of evidence (save for just those creole similarities it purports to explain!) has ever been presented for such a language, and until one is, we can safely ignore it. Consequently, we must assume that our rule, and perhaps others, passed from \ili{Yoruba} into the antecedent pidgins of a number of creoles, and thence into those creoles. For this to have happened, a substantial number of \ili{Yoruba} speakers must have been present during the pidgin
%\originalpage{49}
phase in each area, or at least no later than the earliest phase of creoli\-zation. If not, if a substantial number did not arrive in a given area until after the creole had been formed, then previous speakers would hardly abandon the rules they themselves had arrived at and replace them with new rules, unless the number of \ili{Yoruba} was so great as to constitute an absolute majority -- and that, to the best of present knowledge, was never true at any time for any Caribbean\is{Caribbean} territory.\footnote{It is at least highly questionable whether even an absolute majority of speakers of a single substrate language can influence the formation of a creole.
Just after the turn of the century, when creolization must have been actively in progress, the Japanese constituted 50 percent of the population of Hawaii, yet there is virtually no trace of Japanese influence on HCE. It would be interesting to hear the substratomaniac explanation for this fact, but dealing with counter-evidence has never been the strong point of that particular approach.}%\\\\ Now, while it would be difficult, if not impossible, to prove that there were \textit{no} \ili{Yoruba}s in any given area at the time of creolization, there are a number of areas where their presence must have been heavily outweighed by members of other groups. If we take Sara\-maccan, for instance, generally regarded as the most African-like of creoles, we find very few lexical survivals even from the whole Kwa group\il{Kwa languages} (of which \ili{Yoruba} is a member) but very many from Bantu lan\-guages\il{Bantu languages}, in particular \ili{Kikongo} \citep{Daeleman1972}. One would think that the first task in constructing any substratum theory would be to show that the necessary groups were in the necessary places at the necessary times. But this has simply not been done. There are linguistic as well as historical problems to be faced by any serious substratum theory. As things stand, we are asked to believe that different African languages\is{African languages} contributed different rules and features to particular creoles. To accept that this is possible is to accept what \citet{Dillard1970}, in a slightly different context, aptly termed the ``Cafe\-teria Principle.''\is{cafeteria principle@``cafeteria principle''} Dillard was arguing against the once widespread belief that creoles were mixtures of rules and features from various regional dialects of the British Isles. 
But if it is absurd to suppose that a creole could mix fragments of Irish, Wessex, Norfolk, and Yorkshire dialects, it is at least as absurd to suppose that a creole could mix fragments of \ili{Yoruba}, Akan, Igbo, Mandinka, and Wolof -- to mention some of the African languages\is{African languages} which substratomaniacs most frequently invoke. Let us suppose, however, that such miracles were possible, and that \ili{Yoruba} speakers were indeed distributed in such a way that the
%\originalpage{50}
requisite input could be provided. Nobody can deny that, in every case, there were many other African languages involved in each area, and nobody who knows anything about African languages\is{African languages} can deny that, even within the Kwa\il{Kwa languages} group -- and a fortiori outside it -- there are wide differences in rules and rule systems\is{rules!types of|(}. What could be so special about a particular \ili{Yoruba} rule (such as the one for verb-focusing which we will shortly discuss) that would cause it to be accepted over all compe\-titors in a number of different and quite separate groups? This question has been raised with respect to one feature which is by no means limited to \ili{Yoruba}: verb-serialization. In their analysis of this phenomenon, \citet{JansenEtAl1978}, while accepting the standard substratum explanation, wonder why it is that creoles, with their clear preference for features that are unmarked in the Jakobsonian sense, should have selected one which is quite rare among the world's languages, and highly unstable (subject to rapid change) even in those that have it. They are unable to provide an answer, although I shall suggest one in the latter part of this chapter. In a general sense, we can claim that the only possible factor that could lead a group to accept a particular rule out of a set of alternatives must have to do with the emerging system of the language which that group is engaged in developing.
It could only be that, at any given stage in that development, the language could only incorpor\-ate rules of a certain type, and would have to reject others. Although we still know far too little about dynamic processes in language to be able to say what such constraints on development might be like, we can be reasonably certain that they exist. Languages, even creoles, are systems, systems have structure, and things incompatible with that structure cannot be borrowed; SVO languages cannot borrow a set of postpositions, to take an extreme and obvious case. If a marked struc\-ture is incorporated (and if verb-serialization is highly marked, then verb-focusing is super-highly marked), it can only be because the language, at that particular stage of its development, has to have some such rule. That a creole language has to have certain types of rules is
%\originalpage{51}
exactly what the present study is designed to prove. If such rules happen to be present in the input in certain cases, that is in no way counter to the theory expressed here; the creole will acquire such rules, not because they are in the input, for many conflicting rules must be there also, but because such a rule is required by the struc\-ture of the emerging language. Indeed, presence in the input may not even be a necessary, let alone a sufficient, condition since the first creole generation could well have devised such a rule for itself; we saw in the first chapter that that generation can and does invent rules without benefit of experience. But even if we accept the entire sub\-stratum case, the situation is not substantively changed; the first creole generation has merely acquired the kind of rule that it was programmed to acquire, and saved itself the trouble, so to speak, of having to invent something equivalent.
Thus, when taken to their logical conclusions, substratum arguments\is{substratum influence|)} only bring us back to the question this book will try to solve: why do creole speakers acquire some types of rule, but not others?\\\\ With these points clarified, we can now survey some key areas of grammar and see something of the range and extent of the similarities which any creole theory must somehow account for.\is{rules!types of|)} \section{Movement rules}\is{Guyanese Creole!movement rules|(}\is{movement rules!in GC|(} HCE, as we have seen, moved focused constituents to sentence-initial position. The same procedure, with some modifications which I shall discuss in a moment, is followed by all other creoles. It is sometimes suggested that there is nothing very remarkable about this fact, since many languages have similar processes. But it is also true that many languages have also, or instead, other methods of marking focus, such as changes in stress or tone patterns, or the use of focusing particles. The fact that creoles have adopted none of these alternative strategies cannot be without significance. %\originalpage{52} However, there are certain differences between HCE and the \ili{Caribbean creoles} in the ways in which this general strategy is imple\-mented. I shall illustrate the Caribbean strategy from \ili{Guyanese Creole} (GC), since this language seems to be typical in all respects. 
Let us start with a simple declarative sentence such as \REF{ex:2:1}: \ea\label{ex:2:1} Jan bin sii wan uman \\ \glt `John had seen a woman' \z %remove the blank line between \z and the text to prevent indents The subject can be focused by adjoining the equative \isi{copula} \textit{a} to the first NP: \ea\label{ex:2:2} a Jan bin sii wan uman\\ \glt `It was John who had seen a woman' \z The object can be focused by moving the NP to sentence-initial position and again adjoining \textit{a}:\is{object-fronting} \ea\label{ex:2:3} a wan uman Jan bin sii \\ \glt `It was a woman that John had seen' \z Other VP constituents such as oblique-case NPs and adverbials can be focused in an identical manner. However, the verb\is{verb phrase|(} can also be focused by a rather different procedure; it is again preposed and \textit{a} is adjoined to it, but a copy is obligatorily left at the extraction site: \ea\label{ex:2:4} a sii Jan bin sii wan uman \z There is no exact equivalent to \REF{ex:2:4} in English; it is roughly equivalent to `John had \textit{seen} a woman' or `John had \textit{really} seen a woman' or `Seen a woman, that's what John had done'. In English, it is impossible to apply a movement rule to V alone; English movement rules\is{English!movement rules} apply to major categories, and major categories in English are NP and VP. This fact must immediately raise doubts about the status of %\originalpage{53} VP in GC, for while NP and V can be moved freely, VP cannot: \ea[*]{a sii wan uman Jan bin}\label{ex:2:5}\z \ea[*]{a bin sii wan uman Jan}\label{ex:2:6}\z \ea[*]{a sii wan uman Jan bin sii}\label{ex:2:7}\z \ea[*]{a bin sii wan uman Jan bin sii}\label{ex:2:8}\z Note that \REF{ex:2:6} without \textit{a} and with appropriate lexical and phonological changes would be grammatical in HCE: \ea\label{ex:2:9} (i) bin si wan wahini, Jan. 
\z One difference between GC and HCE\is{movement rules!in HCE|(} could then be due to the fact that the latter has the category VP while the former does not. VP has always been a problem for generative grammar; many scholars have been unwilling to accept it as a universal category since (among other things) it is hard to posit for VSO languages where it would be a discontinuous constituent in deep structure. I know of no rule in GC for which VP has to be specified in the structural description (GC does not have the equivalent of English VP deletion, for example). However, this seems like a pretty massive difference to begin our list of similarities with. If creoles are constrained by a genetic program, how could things like this possibly come about? If, as will be claimed in \chapref{ch:4}, the original building blocks of language are just NPs and Vs, then VP is not a primitive constituent, but V is; thus, in the earliest stages of a creole, I would predict that V, but not VP, would be a category. However, either as a result of \isi{decreolization}, involving contact with a language which already has VP as a category, or of internal change, a creole can develop VP.\\\\ Previously, we established that superstrate influence was one of the factors which would disrupt natural creole development, whether that influence came during pidginization or, much later, through %\originalpage{54} \isi{decreolization}. 
That the results of influence at these two points can be virtually identical and impossible to disentangle is testified by the eloquent bafflement of Corne's comments on the status of \ili{Réunion Creole} \citep[223--224]{Corne1977}.\footnote{``It is clear that R(éunion) C(reole) is, to quite a large degree, a different animal from M(auritian) C(reole), Ro(drigues) C(reole), and S(eychelles) C(reole) \ldots~There can be no doubt that RC shares many features in common with MC, RoC and SC \ldots~The usual explanation \ldots~is that RC is a `decreolized' version of proto-I(ndian) O(cean) C(reole) \ldots~Another, and perhaps more plausible explanation, is that RC is, on the contrary, a modified version of a variety of \textit{French} (original emphasis) \ldots~The modification of this \textit{lete} \textit{ki} French may be seen in terms of convergence~\ldots'' Corne is led to conclude that Bourbonnais (the conventional term for proto-IOC) did not originate on the Ile de Bourbon (the old name for Réunion), but he is unable to say where it did originate, or to commit himself as to whether there was or was not a true proto-IOC. In fact, only an analysis along the lines of \citet{Bickerton1975} can hope to make sense of RC history; but so far, no such analysis has been attempted.} For instance, as mesolectal varieties of GC come under English influence, they develop VP. Now, it seems plausible to suppose that HCE, which, as we have seen, was influenced by English more strongly than most other English creoles, acquired VP at birth, rather than two or three hundred years later (though I would agree that for the moment there is no obvious way to prove this). 
If this is so, then HCE would not be typical of the most natural creole development; but the overall theory would be unaffected, since Washabaugh's (\citeyear{Washabaugh1979}) claim that any genetically-programmed feature should appear universally in creoles, irrespective of other conflicting factors, is a blatant straw man. However, we still have to show why GC copies the verb\is{verb-copying}. Here, the hypothetical case of the \ili{Yoruba} rule discussed in the preceding section becomes real, for \ili{Yoruba} does indeed have a rule that yields sentences very similar to, although not identical with \REF{ex:2:4}. At first sight this rule looks so weird that one thinks (I myself thought for several years) that direct borrowing must be the only possible source. However, consider for a moment what would happen if GC had a rule which said, ``Move all major categories'' (probably true of any human lan\-guage), plus a condition which specified that major categories were NP and V (which is highly probable based on the evidence), but this movement did \textsc{not} leave a copy of V at the extraction site. Such a rule would separate verbs from their auxiliaries, and this would immediately cause severe processing problems for speakers of creoles. It is a condition on transformations generally that meaning be recoverable, but since a number of auxiliaries (e.g., GC \textit{go}) are homoph\-onous with full verbs or can modify zero copulas\is{copula}, and since many full verbs are homophonous with the nouns derived from them, sentences in which only V is fronted could wind up with meanings completely different from those they originally had. 
Take the following examples: %\originalpage{55} \ea[\hspaceThis{*}]{\label{ex:2:10} Jan bin go wok a haspital\\ \glt `John would have worked at the hospital'} \z \ea[*]{a wok \textit{Jan bin go a haspital}}\label{ex:2:11}\z The italicized main clause in \REF{ex:2:11} constitutes a complete sentence with the meaning `John had gone to the hospital'. Since \textit{wok} can be noun or verb, and since nouns are fronted without copying, as in \REF{ex:2:2} and \REF{ex:2:3}, \REF{ex:2:11} could be, and almost certainly would be, interpreted as `It was work that John had gone to the hospital for'. Again, if V-fronting minus copying were applied to \REF{ex:2:1} above, it would yield: \ea[*]{a sii Jan bin wan uman}\label{ex:2:12}\z This could only be interpreted as a (slightly ungrammatical) version of `He (or I) saw that John was a woman!' Thus, if a copy is not left, meaning is irrecoverable. It would seem, therefore, that any language with movement rules that involve V only, rather than VP, \textsc{must} de\-velop a copying rule (or if, as has often been suggested in the literature, movement rules normally consist of two parts, one which Chomsky-adjoins a copy of the constituent to S and one which deletes the original constituent, it must then merely suppress the second half of the process). No borrowing from any other language would be required. Moreover, a claim that GC borrowed the rule from \ili{Yoruba} sets up an infinite regress: where did \ili{Yoruba} borrow it from? It is much more plausible to suppose that languages independently invent rules when these are demanded by the structure of the language plus func\-tional requirements.\is{Guyanese Creole!movement rules|)} The other difference between GC and HCE rules involves the use of an equative \isi{copula}. HCE could not use such a \isi{copula} because it never developed one. 
Absence of an equative copula seems to be characteristic of those languages (e.g., HCE, \ili{Crioulo}, the \ili{Indian Ocean creoles} (IOC)) which show heavier superstrate influence\is{superstrate influence}, but as there is no plausible mechanism here to show \textsc{why} that influence should have this effect (positive influence is one thing, negative influence quite
%\originalpage{56}
another), we must note this as a potentially significant difference. In the absence of such a focus-marking device, some other morpheme must be recruited, and creoles seem to have no specific program for either source or position: \ili{Crioulo} \textit{ki} is drawn from a superstrate rela\-tivizer (Pg. \textit{que}) and postposed to the extracted constituent; Seychelles Creole \textit{sa} is drawn from its own definite article, in turn derived from Fr. \textit{ça}, and optionally preposed; while HCE uses (for subject NP) the quasi-obligatory verb-predicate\is{verb phrase|)} marker fortuitously present in Filipino\is{Filipino pidgin speakers} versions of HPE\il{Hawaiian Pidgin English}, postposing it and adding number and gender. This diversity is in sharp contrast to the generality of left movement, and suggests (we will later provide abundant evidence, not just from creoles but from child language acquisition) that the genetic program which produces language in the species highly specifies some areas of language and leaves others undetermined; this is only to be expected, as a genetic blueprint which leaves no room for variation and development would freeze a species at a single developmental level.\is{movement rules!in GC|)}\is{movement rules!in HCE|)}
\section{Articles}
There seems, in contrast, to be hardly any variation at all in the way that creoles handle articles\is{articles}.
Virtually all creoles have a system identical to that of HCE\is{Hawaiian Creole English!articles}: a definite article for presupposed-specific NP; an indefinite article for asserted-specific NP; and zero for nonspecific NP. GC provides the following examples:\is{Guyanese Creole!articles}
\ea\label{ex:2:13} Jan bai di buk\\
\glt `John bought the book (that you already know about)'
\z
\ea\label{ex:2:14} Jan bai wan buk\\
\glt `John bought a (particular) book'
\z
\ea\label{ex:2:15} Jan bai buk\\
\glt `John bought a book or books'
\z
\ea\label{ex:2:16} buk dia fi tru\\
\glt `Books are really expensive!'
\z
% CREOLE 57
\ili{Papiamentu} (PP) provides the following examples\is{articles}:
\ea\label{ex:2:17} mi tin e buki\\
\glt `I have the book'
\z
\ea\label{ex:2:18} mi tin e bukinan \\
\glt `I have the books'
\z
\ea\label{ex:2:19} mi tin un buki\\
\glt `I have a book'
\z
\ea\label{ex:2:20} mi tin buki\\
\glt `I have books'
\z
\ea\label{ex:2:21} buki ta caru \\
\glt `Books are expensive'
\z
\ili{Seychelles Creole} (SC) provides the following examples:
\ea\label{ex:2:22} m\^o pe aste sa banan \\
\glt `I am buying the banana'
\z
\ea\label{ex:2:23} m\^o pe aste ban banan\\
\glt `I am buying the bananas'
\z
\ea\label{ex:2:24} m\^o pe aste \^e banan\\
\glt `I am buying a banana'
\z
\citet[13]{Corne1977} follows the same Anglocentric route as \citet{Perlman1973} did for HCE when dealing with nonspecifics; he cites the following two examples, \REF{ex:2:25} and \REF{ex:2:26}, as ``zero form \ldots~in NP of the VP'' and ``Ind + plural,'' respectively:
\ea\label{ex:2:25}
\gll fakter i n amen let isi?\\
postman {\PM} {\COMP} bring letter here\\
\glt `Did the postman bring a letter (here)?'\footnote{Note that \textit{fakter} `postman' also lacks an article, although the definite article is required in English. But in fact, the NP here is as nonspecific as \textit{let}. `\textsc{the} postman', `\textsc{the} doctor', `\textsc{the} cashier', etc., are really role titles.
Postmen often change routes and schedules, and there is no indication in the sentence that one particular postman might have brought the letter, that either the speaker or the listener could have answered the question ``\textsc{which} postman?'' or that the identity of the postman had the slightest relevance to the topic.} \z \ea\label{ex:2:26} \gll nu pu al pret zuti\\ we {\IRR} go borrow tool \\ \glt `We shall go and borrow some tools' \z %\originalpage{58} Corne does not mention subject generics, but we can assume that these too are treated as nonspecifics. Similar illustrations could be produced for almost any creole. This area of grammar seems to be highly specified in creoles; the dis\-tinction between specific and nonspecific\is{specific-nonspecific distinction (SNSD)} is particularly clear and consistent, and when we look at language acquisition in \chapref{ch:3}, we will find confirmatory evidence that it is probably innate. \section{Tense-modality-aspect (TMA) systems} \is{tense-modality-aspect (TMA) systems|(}A majority of creoles, like HCE, express tense, modality, and aspect by means of three preverbal free morphemes, placed (if they co-occur) in that order. I have already discussed the typical creole system elsewhere (\citealt{Bickerton1974}, \citeyear[Chapter 2]{Bickerton1975}), so here I shall give only a brief outline, returning later in the chapter to go much more deeply into some apparent counterexamples which have been men\-tioned in the literature. 
In the typical system -- which HCE\is{Hawaiian Creole English!TMA system|(} shares with GC\is{Guyanese Creole!TMA system}, Sranan\il{Sranan|(} (SR), \ili{Saramaccan} (SA), \ili{Haitian Creole} (HC), and a number of other creoles -- ranges of meaning of the particles are identical: the tense particle ex\-presses [+Anterior]\is{anterior tense} (very roughly, past-before-past for action verbs and past for stative verbs);\footnote{The anterior-nonanterior distinction is not an easy one for the naive speaker (i.e., anyone who does not speak a creole) to under\-stand, as I have found in trying to teach it to several classes of graduate students. The reader who wishes to understand this is strongly recom\-mended to read the account in \citet[Chapter 2]{Bickerton1975}.} the modality particle expresses [+Irrealis] (which includes futures and conditionals); while the aspect particle expresses [+Nonpunctual] (progressive-durative plus habitual-iterative). The stem form in isolation expresses the unmarked term in these three oppositions, i.e., present statives and past nonstatives. In addition, there exist combined forms, some of which in some languages have been eroded (in GC\is{Guyanese Creole!TMA system} by phonological rules, in HCE by \isi{decreolization}), but of which the full set is attested for HC\il{Haitian Creole} \citep{Hall1953} and SR \citep{Voorhoeve1957}. 
Again, wherever combined forms are present, their meaning is the same: anterior plus irrealis, counterfactual conditions; anterior plus nonpunctual, past-before-past durative or habitual actions; irrealis\is{irrealis modality} plus nonpunctual, habitual or durative unrealized actions; anterior plus irrealis plus nonpunctual, counterfactuals which express duration or habituality.\is{nonpunctual aspect}
% CREOLE 59
Surface forms, of course, take a number of different shapes:\is{Guyanese Creole!TMA system} anterior,\is{anterior tense} GC \textit{bin}, SA \textit{bi}, SR \textit{ben}, HC and \ili{Lesser Antillean Creole} (LAC) \textit{te}; irrealis,\is{irrealis modality} GC \textit{sa/go}, SA \textit{o}, SR \textit{sa}, HC \textit{ava}, LAC \textit{ke}; nonpunctual, GC \textit{a}, SA \textit{ta}, SR \textit{e}, HC \textit{ape}, LAC \textit{ka}. HCE, with \textit{bin}, \textit{go}, and \textit{stei}, shares even two of the GC surface forms\is{Guyanese Creole!TMA system}, although the two languages are several thousand miles apart and their speakers have never been in contact. Combined forms have almost disappeared through \isi{decreolization}, but are retained by a few speakers \citep{Bickerton1974}, and/or are attested for earlier periods \citep{Reinecke1969,Tsuzaki1971}, although nowadays those who remember them are so unsure of what they once meant that one investigator \citep{Perlman1973} accused his consultants of making them up! (See discussion in \citealt[183ff]{Bickerton1980}.)
Thus, we can again claim a highly programmed area, and many details of the ways in which children acquire quite different\enlargethispage{1\baselineskip} kinds of TMA systems (see \chapref{ch:3}, below) will serve to confirm this claim.\is{Hawaiian Creole English!TMA system|)}\is{tense-modality-aspect (TMA) systems|)} \section{Realized and unrealized complements} What work has so far been done on creole complementation has focused largely on \isi{verb serialization}, so data on this topic are extremely scarce. However, all the languages for which I have been able to find good data attest an identical structure to that of HCE, i.e., complementizers which are selected by the semantics of the embedded S. \citet{Roberts1975} reports the following contrast from \ili{Jamaican Creole} (JC): \ea[\hspaceThis{*}]{\label{ex:2:27} im gaan fi bied, bot im duon bied\\ \glt `He went to wash, but he didn't wash'} \z \ea[*]{im gaan go bied, bot im duon bied}\label{ex:2:28}\z Here, \textit{go} as complementizer cannot co-occur with a negative conjunct because its meaning expresses a realized action. However, \textit{fi} is fully %\originalpage{60} compatible with negative conjuncts since the actions it introduces are not (or, perhaps, are not necessarily) realized. \citet{JansenEtAl1978} report an identical contrast in Sranan\il{Sranan|)}: \ea[\hspaceThis{*}]{\label{ex:2:29} a teki a nefi foe koti a brede, ma no koti en\\ \glt `He took the knife to cut the bread, but did not cut it'} \z \ea[*]{a teki a nefi koti a brede, ma no koti en}\label{ex:2:30}\z Here, the contrast is between \textit{foe} and {\O} as complementizers, but the semantic distinction is identical.\footnote{Jansen et al. have a different (and much more complex) explanation involving logical form, propositional islands, truth values, etc. 
Although they cite \citet{Roberts1975} in another context, they appear to be unaware of the JC examples in that paper, cited above as \REF{ex:2:27} and \REF{ex:2:28}, as well as of the other parallels cited here.} The examples so far have all been from \ili{English} creoles, although it is obvious that the distinction cannot have been derived from a lan\-guage which does not make it:
\ea\label{ex:2:31} {I} {managed} {to} {stop} {\rm (entails ``I stopped'')}. \z
\ea\label{ex:2:32} {I} {failed} {to} {stop} {\rm (entails ``I did not stop'')}. \z
\ea\label{ex:2:33} I {went} {to} {see} {Mary} {and} {we} {talked} {about} {old} {times.} \z
\ea\label{ex:2:34} {I} {went} {to} {see} {Mary} {but} {she} {wasn't} {home.} \z
Fortunately for those who might still hypothesize some occult \ili{English} influence, the same contrast is found in \ili{Mauritian Creole} (MC). In one of the texts in \citet{Baker1972}, we find \REF{ex:2:35}:
\ea\label{ex:2:35}
\gll li desid al met posoh ladah \\
she decide go put fish in-it\\
\glt `She decided to put a fish in (the pool)'
\z
A line or so later, \REF{ex:2:36} follows:
\ea\label{ex:2:36}
\gll li don posoh-la en ti noh-gate\\
she give fish-the one small nickname\\
\glt `She gave the fish a little nickname'
\z
In other words, she had indeed done what she decided to do, i.e., put a fish in the pool. The \textit{al}-complement, therefore, indicates a realized action\is{realized--unrealized distinction}.
However, in another text we find \REF{ex:2:37}:
\ea\label{ex:2:37}
\gll li ti pe ale aswar pu al bril lakaz sa garsoh-la me lor sime ban dayin fin atake li\\
he {\TNS} {\MOD} go evening for go burn house that boy-the but on path {\PL} witch {\COMP} attack him\\
\glt `He would have gone that evening to burn the boy's house, but on the way he was attacked by witches'
\z
Here, the subject of the sentence was prevented from carrying out his intention by the witches; accordingly, the complement is marked with \textit{pu al}. Since Baker does not discuss this construction, we have no way of knowing if, as I suspect, \REF{ex:2:38} would be ungrammatical in that particular context:
\ea\label{ex:2:38} li ti pe ale aswar al bril lakaz \ldots \z
However, all realized complements in Baker's texts are marked with \textit{al} or {\O}, and all unrealized complements are marked with \textit{pu} or \textit{pu al}. These similarities, not previously pointed out in any published work, are particularly striking in that the structure looks like a highly marked one, being attested in few if any noncreole languages; and yet the identity is not merely semantic and syntactic; it extends even to the choice of lexical items -- for \textit{pu} derives from Fr. \textit{pour} `for', and Eng. \textit{for} is the source for HCE \textit{fo}, JC \textit{fi}, SR \textit{foe},\footnote{There is the possibility that an African source may also be involved. \ili{Yoruba}, for instance, has both \textit{fi} and \textit{f{\'u}n} (final nasals in Yoruba orthography mean that the preceding vowel is nasalized, and do not indicate the presence of a nasal consonant). Both verbs have a number of functions, but perhaps the most relevant for creoles are those found in sentences like \textit{{\'o} fi ow{\'o} n\'a\`a f\'un mi}, lit. `He take money the give me', or `He gave me the money'.
The similarity to creole instrumentals is obvious, but if Yoruba \textit{fi} is the source for JC \textit{fi}, the shift in meaning is baffling. \textit{F\'un} is puzzling in a slightly different way. \citet{Rowlands1969} notes that ``Bilingual Yorubas tend to use \textit{f\'un} rather indiscriminately to translate `for'{}'', making a joint source for GC \textit{fu}, SR \textit{foe} (phonetically /fu/) sound very plausible. Also many creoles use
%\originalpage{307}
verbs meaning `give' to introduce dative and/or benefactive cases (e.g., HC \textit{bay}, ST \textit{da}, etc.). But if SR \textit{foe} is derived from Yoruba \textit{f\'un}, why did SR select \textit{gi} (from Eng. \textit{give}) to mark oblique cases and use \textit{foe} as a complementizer? Moreover, HCE uses \textit{fo} as a complementizer\is{Hawaiian Creole English!complementation} without the benefit of any Yoruba model, and \ili{French} and Portuguese creoles turn Fr. \textit{pour} `for' and Pg. \textit{para} `for' into complementizers even though no one, to my knowledge, has suggested any verb with the form \textit{pu} or \textit{pa} in Yoruba or any West African language that could have served as a model. The question is by no means closed, however; it merely underlines the fact that we need to know a lot more both about different West African grammars and about what African lan\-guages were spoken in which creole areas.} while MC \textit{al} `go' and MC {\O} parallel HCE \textit{go}, JC \textit{go}, and SR {\O}. While it is conceivable that JC and SR might have some kind of genetic connection (although no historical or systematic linguistic evidence has been advanced), there is no possi\-bility that either could have any connection with HCE, and it is, if that is possible, even less likely that there was ever any connection between these three and MC, which has a different superstrate \textit{and} different substratum language.
It is impossible to imagine any other %\originalpage{62} explanation than one based on the possession, by speakers in all four areas, of some quite specific program for language-building. \section{Relativization and subject-copying}\is{subject-copying|(} In these areas there exist certain differences. Most creoles have relative pronouns, at least when the head-noun is also subject of the relative clause, but HCE does not. However, the time that most creoles have had to gain relative pronouns is little less than the time it took English to gain them in this position \citep{BeverEtAl1971}. If creoles were indeed born without surface relativizers, then the same processing problems that \citeauthor{BeverEtAl1971} discuss would have applied to them, and there would have been a similar pressure to borrow or adapt some feature that would serve to avoid such problems.\is{relativization|(} However, any such speculation would be pure conjecture, if it were not for the fact that in a number of creoles there still exist conservative dialects or restricted sentence types in which relative pronouns are deletable in subject position -- or rather, more probably, were never inserted. 
In GC, for instance, this can happen when the head-noun of the relative clause is the object of the higher sentence and when the main verb of that sentence is an equivalent of \textit{have} or \textit{be} (the regular GC relative pronoun is \textit{we}):\is{Guyanese Creole!relativization}
\ea\label{ex:2:39}
wan a dem a di man bin get di bam
\glt `One of them was the man who had the bomb'
\z
\ea\label{ex:2:40}
shi get wan grandaata bina main
\glt `She had a grand-daughter who was being looked after (by her)'
\z
\citet[38]{Corne1977} gives some examples of relative clauses in SC\il{Seychelles Creole} where, also, the head-noun is object of the main clause, and where no relativizer is present on the surface:
\ea\label{ex:2:41}
\gll i ana Bom Lulu i d{\^a}se deor\\
{\PM} there-is good-guy wolf {\PM} dance outside \\
\glt `There is Old Wolf who is dancing outside'
\z
%\originalpage{63}
\ea\label{ex:2:42}
\gll zot truv sa pov drayver i {\^a}kor pe at{\^a} mem\\
they see the poor driver {\PM} still {\ASP} wait same\\
\glt `They see the poor driver who is \textit{still} waiting'
\z
Again, although in general the \ili{Portuguese creoles} of the Bight of Benin have relative pronouns, \citet[97]{Valkoff1966} reports a conserva\-tive dialect of Annobonese which lacks them:
\ea\label{ex:2:43}
\gll me mu gogo na-mina sa gavi \\
mother my like {\PL}-child be good\\
\glt `My mother likes the children who are good'
\z
Thus, although there is no proof that creoles started without relative pronouns, the possibility cannot be ruled out. Moreover, as we shall see later, a rather indirect argument based on grammatical sim\-plicity points in the same direction. From what we have already seen of \isi{movement rules}, we would not expect to find much similarity in the area of subject-copying. This, at least initially, is some kind of focusing device, and we saw that other creoles have means for focusing which employ other features.
However, there are at least two creoles, \ili{Crioulo} (CR) and SC\il{Seychelles Creole}, which have a form \textit{i} that characteristically occurs between subject and predicate, but is also the third person singular subject pronoun in both cases. Clearly, what\-ever function this form may originally have had, it has now become obligatory in certain contexts; for instance, it serves to mark present tense nonverbal predicates, i.e., it functions as a kind of \isi{copula}: \ea\label{ex:2:44} \ili{\langCR}\\ \gll elis i amiigu\\ they {\PM} friend\\ \glt `They are friends' \z \ea\label{ex:2:45} \ili{\langCR}\\ \gll i amiigu\\ he friend\\ \glt `He is a friend' \z %\originalpage{64} \ea\label{ex:2:46} \ili{\langSC}\\ \gll lerua i bet\\ king {\PM} stupid\\ \glt `The king is stupid' \z \ea\label{ex:2:47} \ili{\langSC}\\ \gll i bet\\ he stupid\\ \glt `He is stupid' \z I shall not explore the meanings and functions of this particle since \citet{Wilson1962}, practically the only source for CR, does not provide adequate data, while at the other extreme, Corne (\citeyear{Corne1974-75} and \citeyear{Corne1977}) presents masses of data on the SC\il{Seychelles Creole} form and shows that its complexities will not yield easily to analysis. In any case, the principal point that was made with regard to HCE was not specifically to do with either relativization or subject-copying.\is{Hawaiian Creole English!subject-copying} We saw in \chapref{ch:1} that when the subject of the higher clause is also subject of the relative clause, the subject-copy pronoun follows the relative clause rather than the subject noun, although elsewhere it directly follows the subject noun -- as \textit{i} does in \REF{ex:2:46}, for example. 
Now, in both CR and SC\il{Seychelles Creole}, in other words in the only two creoles in which we could look for an analogue of the HCE structure (since they are the only ones with anything like a subject copy), we find that in subject-subject relative-clause sentences, \textit{i} follows the relative clause and not the noun subject, i.e., it also obeys the A-over-A principle\is{A-over-A principle} or its equivalent (examples from \citealt[30]{Wilson1962}; \citealt[53]{Corne1977}): \ea\label{ex:2:48} \ili{\langCR}\\ \gll ɔɔmi kə bay awɔnti i riba\\ man who go yesterday {\PM} return\\ \glt `The man who went yesterday has returned' \z \ea\label{ex:2:49} \ili{\langSC}\\ \gll sel abit{\^a} ki m{\^o} kapab al tr{\^o}p li i zis sa vie t{\^o}t{\^o}\\ only farmer that I can go fool him {\PM} just that old man \\ \glt `The only farmer that I can go and fool is just that old man' \z Moreover, these are not the only cases where the A-over-A principle\is{A-over-A principle} applies to creoles: the placement of HC articles in relative\- % CREOLE 65 clause sentences is also affected. Normally, the HC\il{Haitian Creole} definite article \textit{la} immediately follows the noun: \textit{chwal-la} `the horse', \textit{kapt{\^e}n-na} `the captain'. 
However, if the noun is head-noun of a relative clause, the definite article follows that clause, i.e., it is adjoined to the higher rather than the lower NP: \ea\label{ex:2:50} \gll kapt{\^e}n ki té-arété-l-la t-ap-mété-l n{\^a}-bétiz\\ captain who \TNS-arrest-him-the \TNS-\ASP-put-him in-ridicule\\ \glt `The captain who had arrested him was making fun of him' \z \ea\label{ex:2:51} \gll li voyé chwal yo pou-r{\^a}plasé sila ki mouri-a\\ he send horse {\PL} for-replace that which die-the\\ \glt `He was sending the horses to replace the one which had died' \z We have now surveyed the five areas which were discussed in \chapref{ch:1} and found that in three of them (articles, TMA markers, and realized/unrealized complements) the ``innovations'' made by the original speakers of HCE were identical with the equivalent forms and meanings in all or most creoles, while in the remaining two there were broad, general similarities along with some differences in detail. It is worth noting that the similarities are most striking where a combi\-nation of semantic and syntactic factors interact; where purely syntactic rules are involved, as with movement rules and relativization, there is a lesser degree of identity. Why this should be so will be explained in \chapref{ch:4}, fn. \ref{Ch4:Fn15}.\is{relativization|)} I shall now, much more briefly, indicate some other areas in which strong creole resemblances can be found, before proceeding to a more thorough analysis of TMA systems and VP complementation.\is{subject-copying|)} \section{Negation}\is{negation|(} In creoles generally, nondefinite subjects as well as nondefinite VP constituents must be negated, as well as the verb, in negative sentences. 
Examples are from GC and \ili{Papia Kristang} (PK):\is{Guyanese Creole!negation} %\originalpage{66} \ea\label{ex:2:52} non dag na bait non kyat\\ \glt `No dog bit any cat' \z \ea\label{ex:2:53} \gll ngka ng'koza nte mersimentu\\ not no-thing not-have value \\ \glt `Nothing has any value' \z Sentences of this kind do occur occasionally in HCE, e.g.:\is{Hawaiian Creole English!negation} \ea\label{ex:2:54} nowan no kaen bit diz gaiz \\ \glt `No one can beat these guys' \z However, while negated VP constituents are common, negated subjects with negative verb are rare, perhaps because of persecution in the schools.\is{negation|)} \section{Existential and possessive}\is{Guyanese Creole!existential-possessive}\is{possession} Over a wide range of creoles, the same lexical item is used to express \is{existential}existentials (``there is") and possessives (``have''), even though this is not true of any of the superstrates (it may be true of some substandard \ili{Portuguese} dialects of Brazil\is{Brazil}, but these may well be decreolized remains of an earlier creole). 
Examples are from GC, HC\il{Haitian Creole}, PP\il{Papiamentu}, and \ili{S\~ao Tomense} (ST), respectively: \ea\label{ex:2:55} dem \textit{get} wan uman we \textit{get} gyal-pikni \\ \glt `There is a woman who has a daughter' \z \ea\label{ex:2:56} \gll \emph{gê} you f{\^a}m ki gê you pitit-fi\\ have one woman who have one child-daughter\\ \glt `There is a woman who has a daughter' \z \ea\label{ex:2:57} \gll \emph{tin} un muhe cu \emph{tin} un yiu-muhe\\ have a woman who have a child-woman \\ \glt `There is a woman who has a daughter' \z \ea\label{ex:2:58} \gll \emph{te} ua mwala ku \emph{te} ua mina-mosa \\ have a woman who have a child-girl\\ \glt `There is a woman who has a daughter' \z %\originalpage{67} HCE follows an identical pattern: \ea\label{ex:2:59} \textit{get} wan wahini \textit{shi} \textit{get} wan data\\ \glt `There is a woman who has a daughter' \z We will refer to this area again in \chapref{ch:4}. \section{Copula}\is{copula|(} Practically all creoles show some similarities in this area. Adjec\-tives are surface verbs in creoles (see next section) and therefore require no copula. \is{locative verbs}Locatives are introduced by verbs which normally are limited to that role, i.e., do not extend to existential\is{existential} or prenominal environments. A split occurs over treatment of nominal complements: the more heavily superstrate-influenced creoles (HCE\is{Hawaiian Creole English!copula}, the \ili{Indian Ocean creoles}, some Asian \ili{Portuguese creoles}) tend to have zero copulas here also, although in some (SC\il{Seychelles Creole}, CR) the \textit{i} form appears here as a predicate marker; the less heavily superstrate-influenced creoles of the Caribbean generally have a distinct verb in these environments. There are minor exceptions to these generalizations. 
GC locative \textit{de} can express existentials\is{existential}, which HCE \textit{stei}, for example, cannot:\is{Guyanese Creole!copula} \ea[\hspaceThis{*}]{\label{ex:2:60} wok na de \glt `There isn't any work'} \z \ea[*]{\label{ex:2:61}wok no stei}\z \ea[\hspaceThis{*}]{\label{ex:2:62} nomo wok \glt `There isn't any work'} \z HCE and SC\il{Seychelles Creole} have negative existentials\is{existential} -- \textit{nomo} and \textit{napa} -- a feature found in few if any other creoles which, in the HCE case at least, represents an inheritance from the antecedent pidgin. The locative\is{locative verbs} \textit{ye} in HC appears only sentence-finally, i.e., where it is stressed, which, together with its phonetic shape, suggests that it disappeared from medial position through phonological reduction processes. %\originalpage{68} Although some of these differences may arise from pidgin retentions or post-creolization changes, it would seem that the copula area is only moderately specified. There is a general tendency toward semantic transparency, i.e., having separate forms for each semantically distinct copula function (attribution, with adjectives; equation or class membership, with predicate nominals; locative\is{locative verbs}, with adverbials of place). However, since these semantic distinctions are unambiguously marked by predicate type, to mark them a second time with distinc\-tive copulas may seem redundant, and perhaps this accounts for copula variability within individual creoles as well as across the class.\is{copula|)} \section{Adjectives as verbs} In a number of creoles (e.g., JC\il{Jamaican Creole}, \citealt{Bailey1966}; GC, \citealt{Bickerton1973b}) the adjective has been analyzed as forming a subcategory of stative verbs. 
Evidence from GC\is{Guyanese creole!adjectives|(} is the identical behavior of verbs and adjectives under a number of rules:
\ea\label{ex:2:63}
i wok\\
\glt `He worked'
\z
\ea\label{ex:2:64}
i wiiri\\
\glt `He is tired'
\z
\ea\label{ex:2:65}
i a wok\\
\glt `He is working'
\z
\ea\label{ex:2:66}
i a wiiri\\
\glt `He is getting tired'
\z
\ea\label{ex:2:67}
au i wok!\\
\glt `How he works!'
\z
\ea\label{ex:2:68}
au i wiiri!\\
\glt `How tired he is!'
\z
\ea\label{ex:2:69}
a wok i wok\\
\glt `Work, that's what he did'
\z
\ea\label{ex:2:70}
a wiiri i wiiri\\
\glt `Tired, that's what he is'
\z
% CREOLE 69
Note that though syntactic rules apply identically, semantic interpre\-tation is often different in the two cases.\footnote{Both \citet{Christie1976} for LAC and \citet{Corne1981} for SC\il{Seychelles Creole} propose a tripartite division of verbs into Action, State, and Process. As far as I can tell (neither treatment is particularly rigorous), this proposal arises from a confusion of syntactic rules with semantic interpretation. For instance, it is not syntactic rules that (normally) bar co-occurrence between stative verbs and nonpunctual markers, as is shown in the discussion of the sentence \textit{i bina waan fu no} in \citet[38]{Bickerton1975}, which shows that pragmatic factors can also be involved.} Originally, all writers on the \ili{Indian Ocean creoles} who dealt with this area (\citealt{Baker1972}; \citealt{Corne1973,Corne1977}; \citealt{Papen1975,Papen1978}; \citealt{Bollee1977}; etc.) treated verbs and adjectives as distinct classes and posited an underlying copula before predicate adjectives, which was subsequently deleted.
However, in an insightful article, \citet{Corne1981} renounces his former analysis and sets up a class of ``Verbals,'' which would contain predicate adjectives as well as verbs and which would not require a copula in underlying structure; these ``verbals'' would then undergo at least some (Corne seems hesitant to push his argument too hard) of the processes which verbs undergo. It is worth noting that some of the evidence Corne surveys bears a striking resemblance to that found in GC, in particular the ``inchoative'' meaning of the nonpunctual marker \textit{pe} when applied to adjectives, as compared to the meaning of the GC nonpunctual marker \textit{a} when similarly acquired; compare the following with \REF{ex:2:66}:\is{Guyanese creole!adjectives|)} \ea\label{ex:2:71} \gll li pe malad\\ he {\ASP} sick\\ \glt `He is getting sick' \z \ea\label{ex:2:72} li pe {\^a}-koler \glt `He is getting angry' \z \ea\label{ex:2:73} m{\^o} pe laf{\^e}\\ \glt `I am getting hungry' \z This resemblance between creoles so widely separated in location and origin is quite striking. Moreover, I know of no creole where an alterna\-tive analysis of adjectives\is{adjectives} would be required. HCE\is{Hawaiian Creole English!TMA system}, not surprisingly, has a similar ``inchoative'' sense when nonpunctual and adjective are conjoined: \ea\label{ex:2:74} ho, ai stei wail wid da meksikan gai\\ \glt `Wow, I was getting mad at the Mexican guy' \z %\originalpage{70} \section{Questions} No creole shows any difference in syntactic structure between questions and statements. Question-particles, where they occur, are sentence-final and optional:\is{questions!yes--no} \ea\label{ex:2:75} \ili{\langGC}\\ i bai di eg-dem\\ \glt `He bought the eggs' \z \ea\label{ex:2:76} \ili{\langGC}\\ i bai di eg-dem? \glt `Did he buy the eggs?' 
\z \ea\label{ex:2:77} \ili{\langHC}{}{}\\ \gll yo pa-t-a-vlé m{\^e}n{\^e}-m lakay-li\\ they not-\TNS-\MOD-want take-me house-his\\ \glt `They wouldn't have wanted to take me to his house' \z \ea\label{ex:2:78} \ili{\langHC}{}{}\\ yo pa-t-a-vlé m{\^e}n{\^e}-m lakay-li? \glt `Wouldn't they have wanted to take me to his house?' \z \section{Question words} \is{Guyanese Creole!questions|(}In WH-questions\is{English!questions}\is{questions!WH-}, the question-word is directly preposed to the declarative form of the sentence. The question-words themselves, if not clearly adapted from their superstrate equivalents, are always composed in the following manner: they are bimorphemic; the first morpheme is derived from a superstrate question-word -- English creole \textit{we}, \textit{wi}, or \textit{wa} from Eng. \textit{which} or \textit{what}, \ili{French} creole \textit{ki} from Fr. \textit{qui} `who' or \textit{que} `what', Portuguese creole \textit{ke} from \il{Portuguese}Pg. \textit{que} `what': \ea\label{ex:2:79} \ili{\langGC}{}{}\\ \gll wisaid yu bin de?\\ {which side} you {\TNS} be-\LOC \\ \glt `Where have you been?' \z \newpage \ea\label{ex:2:80} \ili{\langHC}{}{}\\ \gll ki koté ou wè pwas{\^o}-a?\\ what side you see fish-the\\ \glt `Where did you see the fish?' \z \ea\label{ex:2:81} \ili{\langST}{}{}\\ \gll ke situ e pe mi n-e-e?\\ what place he put maize in-it-\QP\\ \glt `Where did he put the maize?' \z %\originalpage{71} Other forms in English creoles include Cameroons Creole \textit{wetin}, lit., `what thing', `what'; GC \textit{wa mek}, lit., `what makes', `why'.\is{Guyanese Creole!questions|)}\is{questions!WH-} Very often a creole has doublets, a superstrate adaptation and a bimorphemic creole form. \citet[509]{Papen1978} gives the following sets for SC\il{Seychelles Creole} and RC: \ea\label{ex:2:82} \begin{tabbing} Lorem \= ~{\rm =}~ \= {\rm (iii)} \= Lorem Ipsum dolor \kill where \> {\rm =} \> {\rm (i)} \> (a)kot(e) {\rm (Fr. 
{\it à côté de} `at')}\\ \> \> {\rm (ii)} \> ki ladrua {\rm (Fr. *{\it qui l'endroit}), `Which place?'}\\ \> \> \> ki bor {\rm (Fr. *{\it qui bord}), `Which edge?'} \end{tabbing}\z \ea\label{ex:2:83} \begin{tabbing} Lorem \= ~{\rm =}~ \= {\rm (iii)} \= Lorem Ipsum dolor \kill how \> {\rm =} \> {\rm (i)} \> koma {\rm (Fr. {\it comment} `how')}\\ \> \> {\rm (ii)} \> ki maner {\rm (Fr. *{\it qui manière}), `What way?'}\end{tabbing}\z \ea\label{ex:2:84} \begin{tabbing} Lorem \= ~{\rm =}~ \= {\rm (iii)} \= Lorem Ipsum dolor \kill why \> {\rm =} \> {\rm (i)} \> (l)akoz ki {\rm (Fr. {\it la cause que} `the reason that')}\\ \> \> {\rm (ii)} \> ki fer {\rm (Fr. *{\it qui faire}), `What makes?'}\end{tabbing}\z \ea\label{ex:2:85} \begin{tabbing} Lorem \= ~{\rm =}~ \= {\rm (iii)} \= Lorem Ipsum dolor \kill when \> {\rm =} \> {\rm (i)} \> ka {\rm (Fr. {\it quand} `when')}\\ \> \> {\rm (ii)} \> ki ler {\rm (Fr. *{\it qui l'heure}), `What hour?'}\end{tabbing}\z Papen does not state whether, in his estimation, one set is older or more creole than the other (failure to make any serious attempt to sort variants is a grave weakness in the otherwise thorough work done recently on \ili{Indian Ocean creoles}), but we can be reasonably certain that the periphrastic forms represent the original creole; if the quasi-\ili{French} forms existed already, why should others have been invented?\is{questions!WH-} Since HPE\il{Hawaiian Pidgin English} speakers acquired the full set of English question-words\is{English!questions} except for \textit{why} (HPE \textit{wasamata,} lit., `What's the matter?', which seems not to have been passed on to HCE)\is{Hawaiian Creole English!questions}, HCE was never required to develop a bimorphemic set. However, the similarities above are so close that we can predict that any creole which did not borrow directly from its superstrate would develop a set of forms along these lines. 
\section{Passive equivalents}
Passive constructions in creoles are extremely rare, and those that exist (the \textit{wordu} and \textit{ser} passives in PP, cf. \citealt{MarkeyEtAl1980}; the \textit{gay} passive in MC\il{Mauritian Creole} and SC\il{Seychelles Creole}, cf. \citealt{Corne1977}; and the \textit{get} \isi{passive} in
%\originalpage{72}
GC) are either marginal to the language or relatively recent super\-strate borrowings, or both. The general pattern of creoles is described by \citet{MarkeyEtAl1980} as ``rampant lexical diathesis''; for any V-transitive, N V N will be interpreted as ``actor-action-patient,'' while any N V will be interpreted as ``patient-action'':\is{Guyanese Creole!passive@Guyanese Creole!``passive''}
\ea\label{ex:2:86} \ili{\langGC}{}{}\\
dem a ponish abi\\
\glt `They are making us suffer'
\z
\ea\label{ex:2:87} \ili{\langGC}{}{}\\
abi a ponish\\
\glt `We are suffering/being made to suffer'
\z
\ea\label{ex:2:88} \ili{\langJC}{}{}\\
dem plaan di tri
\glt `They planted the tree'
\z
\ea\label{ex:2:89} \ili{\langJC}{}{}\\
di tri plaan
\glt `The tree was planted'
\z
\ea\label{ex:2:90} \ili{\langHCE}{}{}\\
dei wen teik foa boad
\glt `They took four boards'
\z
\ea\label{ex:2:91} \ili{\langHCE}{}{}\\
foa boad wen teik
\glt `Four boards were taken'
\z
We shall return to structures of this type in \chapref{ch:3}.\\\\
We have now surveyed seven areas of the grammar in addition to the five already examined in greater depth. Of those seven, HCE\is{Hawaiian Creole English!passive@Hawaiian Creole English!``passive''} shows substantial identity with all other creoles in four (existential/possessive, adjective as verb, questions, and passive equivalents); substantial iden\-tity with a number of other creoles in one (copula); and little simi\-larity in two (negation, question-words).
Thus, out of the twelve areas, HCE is identical with all or with a large percentage of creoles in eight, shows a fair degree of similarity in two, and differs sharply in two, one of which (negation) may well have followed the regular creole pattern before decreolization set in. This degree of identity is quite remarkable when we consider that HCE shares none of the substratum languages of the other creoles -- % %\originalpage{73} except that a superstrate language for some creoles was a substrate language in HCE\il{Hawaiian Creole English}, i.e., \ili{Portuguese}! However, there is nothing in the grammar of HCE except perhaps \textit{stei} as locative that one can point to as having stemmed from Portuguese influence. The only thing HCE seems to have in common with other creoles (apart from the simi\-lar social conditions that gave birth to them) is that all have European superstrates, a fact which has been used to caution creolists against premature universalist claims \citep{Reinecke1977}.% \footnote{A problem not faced by those who call for the examination of non-European creoles is that it is far from clear that there are any. The only languages without a European superstrate which might qualify under the conditions specified in \chapref{ch:1}, above, are \ili{Ki-Nubi} and \ili{Juba Arabic}. Although the data that have emerged on these lan\-guages so far are scanty and unclear (and for this reason I have refrained from citing them in the present volume), most of what is available suggests that they follow the creole pattern described here. But even these languages do not have a third condition which may be necessary to qualify for true creolehood: their populations were not, in general, displaced from their native homelands. 
It is a historical fact that it was only Europeans who uprooted people from their cultures and carried them across thousands of miles of ocean in order to exploit them; therefore, it is only in European colonies that one would expect to find the massive disruption of normal language continuity which would permit the emergence of innate faculties.} However, since practi\-cally all the common features of creoles are not only not shared by, but run dead counter to the structural tendencies of, Western Euro\-pean languages (the latter have well-established single copulas, well-established passives\is{passive}, use subject-verb or subject-auxiliary inversion in questions, etc.), no one could invoke this shared ancestry to explain creole similarities unless he were to propose that creoles, like naughty children, do everything the opposite of what their parents tell them to do! However, an earlier work of mine \citep{Bickerton1974} that was limited to a discussion of TMA systems\is{tense-modality-aspect (TMA) systems|(} has been the subject of a number of criticisms, several to the effect that there were a number of exceptions to the generalizations made therein. I shall therefore deal with the issues raised in the most cogent and extensive of these criti\-cisms, namely, \citet{Muysken1981a}, before going on to show that all the genuine divergences from the classic TMA pattern can be accounted for by the impingement on that pattern of three factors. Two of these factors are quite extraneous and have already been discussed: influence of the antecedent pidgin and language change. A third will have to wait until \chapref{ch:4} for a full explanation; for the time being, let us call it ``indeterminacy in \isi{semantic space}.'' Muysken challenges my analysis of creole TMA systems by evidence drawn from six languages: \ili{Papiamentu}, \ili{Negerhollands}, \ili{Senegal Kriol}, Seychellois, \ili{Tok Pisin}, and \ili{S\~ao Tomense}. 
Data from two of these are quite irrelevant to the issues involved. Tok Pisin has already been ruled as having arisen under circumstances so vastly different from those of the classic creoles that the fact that it is now some people's % %\originalpage{74} native language -- hence, nominally a creole -- has no bearing on the present discussion. \ili{Senegal Kriol} is described by Muysken himself as an ``inter-tribal lingua franca which may have had native speakers in the past and which has some recent ones now in urban areas''; since he himself is forced to admit that this checkered history may have ``given it a very marked, deviant character,'' one wonders why he should have bothered to present data from it. \ili{Negerhollands} is, or rather was, a genuine creole in the terms of this study, but there are at least two reasons why evidence drawn from it cannot stand up against evidence from languages which are still vital. First, the language is dead; one has to rely entirely on printed sources. This may not present a genuine obstacle to the writing of grammars of classical languages, but the case of creoles is quite different. If one takes the text\is{creole!texts|(} of a Hittite law or a Sanskrit prayer, one can be reasonably certain that it was written by a native speaker; if one takes any text of \ili{Negerhollands}, one can be certain that it was not written by a native speaker. As with virtually all other creoles, texts -- whether they take the form of fact or fiction, catechism or simulated dialogue -- were written by Europeans, with all the biases of their time and without any special linguistic skills or training. Many of the texts were written by missionaries, who are notorious for producing Europeanized varieties of pidgins and creoles wherever they go \citep{Voorhoeve1971}. This is not to say that a European, even a European missionary, could not on occasion accurately represent a creole. 
The problem is knowing when a creole \textit{is} being accurately represented. For example, there is one excellent literary source for GC: \citet{Quow1877}. It is too excellent, if anything, because it gives several stylistic levels without the facts that might enable one to sort them out. There are also a number of other sources, of widely varying quality. If I had had to write a GC grammar from written sources only, there is no way that I could have learned to prefer Quow whenever he is in conflict with other evidence; that knowledge came from having four years of unrestricted access to native speakers.% \footnote{However, anyone wishing to use Quow as a historical source should be warned that the above remarks apply only to his rendering of basilectal speakers. Like many whites, he did not feel threatened by illiterate blacks, and could therefore treat them objectively; but he did feel threatened by literate blacks, and in consequence, his ren\-derings of \textit{their} speech are spoiled by facetiousness and condescension.} Consequently, my work would have seriously misrepresented the language. %\originalpage{75} The second reason against using Negerhollands as evidence for any general creole tendency is that although languages, like people, die, they do not, like some people, drop dead. On the contrary, like Charles {\sc ii}, they are an unconscionable time a-dying, and since we know that in language death languages become severely distorted, but do not know at what time the process started, there is no way in which we can be certain what any text\is{creole!texts|)} represents -- whether the full flush of the language, the early onset of decrepitude, or the final phases of decay, in which key forms are lost or, worse, replaced by forms from competing languages and dialects. For these reasons, we can dismiss the third of Muysken's six languages. This leaves PP\il{Papiamentu}, SC\il{Seychelles Creole}, and ST. 
Muysken does not state where he acquired the data from \ili{S\~ao Tomense}. To the best of my knowledge, there are only two recent descriptions of the ST TMA system -- \citealt{Valkoff1966} and \citealt{Ferraz1979} -- although perhaps one should say that there are three, since Valkoff gives two different ones in the same chapter. His account is a somewhat tortuous one, and the exact status of these two descriptions is far from clear; he seems to suggest that the first is in some sense hypothetical, though whether intended as a reconstruc\-tion of some earlier phase of ST, or of proto-Bight-of-Benin, is far from clear. Be that as it may, one of the two forms he specifically stars as hypothetical turns up as real in Muysken's account, while four forms that appear in his second description do not appear in the first. Ferraz mentions Valkoff's work but does not discuss it; nor does he explain why, or even note that, his own account differs substantively from either of Valkoff's. Finally, Muysken's account bears scant resemblance to any of the previous three. In \tabref{tab:2.1}%on the following page
, the various auxiliaries and combinations of auxiliaries claimed to occur in ST are arranged along the horizontal axis, and the four accounts (V\textsubscript{1} and V\textsubscript{2}, \citealt{Valkoff1966}; F, \citealt{Ferraz1979}; M, \citealt{Muysken1981a}) along the vertical. Pluses and minuses have the same values as in distinctive feature tables.
%\originalpage{76} %%please move \begin{table} just above \begin{tabular \begin{table} \begin{tabularx}{.8\linewidth}{l*{9}{>{\centering\arraybackslash}p{.048\linewidth}}} \lsptoprule & ka & tava & ta & ska & kia & te & sa & bi & za\\ \midrule V\textsubscript{1} & + & -- & + & + & -- & -- & + & -- & -- \\ V\textsubscript{2} & + & + & -- & + & + & -- & -- & -- & + \\ F & + & + & -- & + & + & -- & -- & -- & -- \\ M & + & + & -- & -- & -- & + & + & + & -- \\\midrule \end{tabularx} \bigskip \begin{tabularx}{.8\linewidth}{l*{6}{>{\centering\arraybackslash}p{.09\linewidth}}} & tava ka & ta ka & sa ka & te di & ka bi & ka te \\ \midrule V\textsubscript{1} & -- & + & -- & + & + & -- \\ V\textsubscript{2} & + & -- & -- & + & + & -- \\ F & + & -- & -- & -- & -- & -- \\ M & + & -- & + & -- & + & + \\\lspbottomrule \end{tabularx} \caption{Four accounts of the ST TMA system} \label{tab:2.1} \end{table} In addition, Muysken's account suggests four more forms (\textit{tava ka te}, \textit{tava ka bi}, \textit{sa ka te}, and \textit{sa ka bi}) which are not attested anywhere else, although to do him justice this impression may merely result from a faulty formalism. Even making allowances for this, he attests four forms that the other sources do not attest,\enlargethispage{1\baselineskip} and he fails to attest two that both the other writers attest, as \tabref{tab:2.1} shows. If this picture seems confused, the reader had better not even attempt to follow the names which the various tenses, modes, and aspects are given by these three authors. I shall give a single example. The names of the \textit{ka + V} form are given, respectively, as: incompletive aorist, Valkoff\textsubscript{1}; habitual, Valkoff\textsubscript{2}; aorist, Ferraz; incompletive, Muysken. This pattern is followed throughout. If a tense, mode, or aspect is mentioned in two accounts, it has two names; if in three, three names; if in four, four names. 
Sometimes the differences in name merely disguise the semantic similarities of the accounts; sometimes they mark real conflicts; sometimes it is impossible to tell. In one case where there is a clear similarity between Valkoff's and Ferraz's accounts, Muysken is clearly wrong. Valkoff calls \textit{tava + V} ``completive %\originalpage{77} in the past'', Ferraz calls it ``pluperfect'', but it is obvious from their example sentences that whatever it is (and it looks like the anterior\is{anterior tense} of the present analysis), it is not a simple past -- which is what Muysken says it is. Here, of course, the evidence of Ferraz and Valkoff supports the position that Muysken is attacking. Muysken's analysis is supported by two example sentences. The original form of the analysis he is attempting to undermine, in \citet[Chapter~2]{Bickerton1975}, is supported by ninety-eight example sentences. Further comment should be superfluous. Until someone is prepared to devote to the analysis of the ST system at least a fraction of the amount of careful work that went into the analysis of the GC\is{Guyanese Creole!TMA system} system, we can dismiss the fourth of Muysken's six counterexemplary systems. The two remaining systems, those of PP\il{Papiamentu} and SC\il{Seychelles Creole}, have TMA sys\-tems too widely known to undergo much distortion, although even here Muysken's account is unsatisfactory in several respects. However, since the features of these and other systems which differ from my predictions have been mentioned by other writers (see \citealt{Hill1979}), I shall not comment further on Muysken's particular analysis, although I shall return to some broader aspects of his paper in \chapref{ch:4}.
The major and, if we were to eliminate sloppy scholarship, perhaps the only deviations from the regular creole TMA system are the following: %\setcounter{itemize}{0} \begin{itemize}\label{majordeviations} \item[(A)] The presence in \ili{Crioulo} of an anterior marker, \textit{ba}, that follows rather than precedes the main verb. \item[(B)] The presence in \ili{Papiamentu} of an irrealis marker, \textit{lo}, that may occur before rather than after the subject. \item[(C)] The presence in certain creoles (e.g., \ili{Papiamentu}, \ili{Palenquero}, \ili{Papia Kristang}, and \ili{Negerhollands}) of tense markers that look more like +past than +anterior. \item[(D)] The presence in \ili{Indian Ocean creoles} of two markers, \textit{ti} and \textit{(fi)n}, which compete for some kind of pastness, and two markers, \textit{pu} and \textit{a(va)}, which compete for some kind of irrealis.\is{irrealis modality} %\originalpage{78} \item[(E)] The merging of iteratives/habituals with either punctuals or irrealis, claimed to occur in a number of creoles (cf. \citealt{Taylor1971}), thus reducing the nonpunctual\is{nonpunctual aspect} category to no more than a progressive/durative. \end{itemize} The first two deviations involve only syntactic aspects of TMA systems, while the other three involve semantic aspects. It will be convenient if we take (A) and (B) together since both arise from the nature of antecedent pidgins. \citet{Alleyne1979}, in arguing against the existence of a \isi{pidgin-creole cycle}, claims that no vestiges of pidgins can be found in creoles. This, if true, would be unsurprising -- as unsurprising as the fact that we find no trace of the caterpillar in the butterfly, and for similar reasons. In fact, the data now to be surveyed show some exceptions to the general irrecoverability of pre-creole pidgins. 
As is widely known (but see \citet{Labov1971} for explicit discussion), pidgins express temporal relations by means of sentence adverbs, in clause-external position, which indicate the temporal sequence of events. HPE\il{Hawaiian Pidgin English} has two, \textit{baimbai} `then, later, afterward', and \textit{pau} `done; already, finished': \ea\label{ex:2:92} {bambai} {mi} {waif} {hapai,} {bambai} {wan} {lil} {boi} {kam} \glt `Then my wife got pregnant, and later a little boy was born' \z \ea\label{ex:2:93} pau wrk fraidei, go daun kauai \glt `After work on Friday, we went down to Kauai' \z Both \textit{baimbai} and \textit{pau} can occur clause-finally, although this is much more frequently the case with \textit{pau}; another speaker might well have begun \REF{ex:2:93} with \textit{fraidei, hanahana pau} \ldots\xspace `On Friday, when work was over \textellipsis'. %\originalpage{79} \il{English creoles|(}If creoles were, as they are popularly supposed to be, no more than ``expansions'' of pidgins, one would expect them to take markers of this kind, transmute them into obligatory markers of tense, modal\-ity, or aspect (the ``later'' sequence-marker into a future or irrealis\is{irrealis modality}, the ``earlier'' sequence-marker into a past, anterior\is{anterior tense}, or completive), and incorporate them into an Aux category. But this development is the creole exception rather than the creole rule. When HCE developed out of HPE\il{Hawaiian Pidgin English}, neither \textit{pau} nor \textit{baimbai} underwent any change of meaning, nor were they incorporated into Aux. Two quite different forms, \textit{bin} and \textit{go}, were selected to express anterior and irrealis\is{irrealis modality}, respectively.
\textit{Pau} and \textit{baimbai} are retained as optional, clause-external adverbs, but their frequency in HCE drops dramatically compared with their frequency in HPE (in the set of recordings which most accurately reflect basilectal HCE, \textit{bin} and \textit{go} occur a total of 433 times, while \textit{pau} and \textit{baimbai} occur a total of 38 times; \citealt[Tables 3.1, 3.6, 3.9]{Bickerton1977}).\is{Hawaiian Creole English!TMA system} Good data on pidgins are even harder to come by than good data on creoles, and data of any kind on the antecedent pidgins of any creole but HCE are simply nonexistent; however, I still think that reconstruction is possible if we make the simple and reasonable assump\-tion that other pidgins resembled HPE in taking their ``later'' marker from some temporal adverb and their ``earlier'' marker from some verb with the general meaning of \textit{finish} (the meaning of \textit{pau} in \ili{Hawaiian}). We can then go on to show that while a majority of creoles decisively rejected ``later'' markers, most, if not all, accepted ``earlier'' markers with a marginal status, while some, at a later stage, allowed them, more or less grudgingly, to occupy positions within Aux. This is under\-standable since, if we are right, ``earlier'' markers have a verbal source, while ``later'' ones have a nonverbal source. Most creole irrealis markers are derived from verbs or auxiliaries. English creole \textit{go} is an obvious case; SR \textit{sa} is usually (I am not sure if correctly) attributed to Eng. \textit{shall}, JC\il{Jamaican Creole} \textit{wi} to Eng. \textit{will}. The form that underlies most French creole\il{French creoles} irrealis\is{irrealis modality} markers is Fr.\il{French} \textit{va} `(3rd pers. sing.) go', yielding \textit{ava}, reduced to \textit{a}. 
LAC\il{Lesser Antillean Creole} \textit{ke} and ST\il{S\~ao Tomense} \textit{ka} remain mysterious; for the latter, \citet{Ferraz1979} suggests two possible sources -- \ili{Bini} \textit{ya}, an irrealis\is{irrealis modality} marker, and \ili{Twi} \textit{ka} `to be usual' -- while another possible source is Pg. \textit{ficar} `remain'. Only a few \ili{Portuguese creoles} show a different tendency, e.g., PK \textit{logo}, derived from the adverbial \il{Portuguese}Pg. \textit{logo} `next, %\originalpage{80} soon' and reducible to \textit{lo}. And \textit{lo}, as we have seen, is the Papiamentu form which deviates from the regular model. We will return to \textit{lo} in a moment. First, let us look at the prove\-nance of \textit{ba}. Most creoles have an ``earlier'' form which is derived from a verb with the meaning `finish'; in addition to \textit{pau}, we find IOC\il{Indian Ocean creoles} \textit{(fi)n} from Fr. \textit{fini} `finished (p. part.)', English creole \textit{don} from another past participle, Eng. \textit{done}, and Portuguese creole \textit{(ka)ba} from Pg. \textit{acabar} `finish' (\textit{kaba} is found in SR also). Looking over the range of creoles, it would seem that such markers can have three distinct distributions. First, they may remain as marginal particles, occurring option\-ally in clause-final position. This state is exemplified by SR, in which \textit{kaba} can only occur clause-finally and is never incorporated into Aux. The same is true of PP\il{Papiamentu} \textit{caba}. In basilectal GC, \textit{don} often occurs clause-finally (cf. \citealt[Examples 2.65--67]{Bickerton1975}).\is{Guyanese Creole!completive} Second, they may be incorporated into Aux but without its being possible to combine them with other Aux constituents. This state is exemplified by mesolectal GC \textit{don} and possibly also JC\il{Jamaican Creole} and other Caribbean \textit{don} and by HC\il{Haitian Creole} \textit{fin}. 
Third, they may be incorporated into Aux where they may combine with other Aux constituents quite freely. This state is exem\-plified by \ili{Krio} (KR) \textit{don}, and IOC \textit{(fi)n}, among others. If we were working with a static-synchronic model, we would have to stop with this statement. However, since we have to work with a dynamic model in order to account for creole development, we can next propose that these three ``states'' in fact constitute stages in a diachronic development and exemplify a gradual process of incorpora\-tion which is well advanced in some creoles and has not begun in others. In order to prove that states in different languages show differ\-ent stages of the same process, it is desirable to be able to point to languages in which two stages co-exist synchronically. Basilectal GC has both postclausal and preverbal \textit{don}, the latter becoming obligatory in the mesolectal varieties; thus GC represents the transition between states one and two. Evidence for IOC is conflicting, but by at least\il{English creoles|)} %\originalpage{81} some accounts, stages intermediate between noncombinability and free combinability (states two and three) are to be found there. \ili{Crioulo} \textit{ba} clearly derives from \textit{kaba}, which in accordance with its Portuguese etymon is stressed on the final syllable. Papiamentu \textit{lo} equally clearly derives from Pg. \textit{logo}. We can assume that in Portuguese pre-creole pidgins\il{Portuguese pidgins} generally \textit{logo} and \textit{kaba} were, respectively, the ``later'' and ``earlier'' forms that corresponded to HPE\il{Hawaiian Pidgin English} \textit{baimbai} and \textit{pau}. 
Papiamentu retained both; \ili{Crioulo} (as far as we can tell with present, inadequate data) retained only the second; and both PP\il{Papiamentu} \textit{lo} and CR \textit{ba} were incorporated \textit{semantically} into the TMA system (i.e., were allotted the expected meanings of irrealis\is{irrealis modality} and anterior\is{anterior tense}) while remaining \textit{syntactically} outside it (i.e., retained clause-external position, obligatory in the case of \textit{ba}, co-varying with subject type in the case of \textit{lo}). It is one thing to show that deviations (A) and (B) (Page~\pageref{majordeviations}) could have arisen from pidgin features; it is quite another to explain why in these two cases, but not in others, pidgin characteristics should have been able to override creole ones. However, we can make what is at least a very plausible conjecture. The only other member of the pidgin-creole\is{pidgin-creole cycle} family in which pidgin sequence-markers have graduated to creole auxiliaries is \ili{Tok Pisin} (TP). Here, ``later''-marker \textit{baimbai} reduced to \textit{bai}, acquired irrealis meaning, and is in the process of being incorporated into the auxiliary by native speakers \citep{SankoffEtAl1974}; ``earlier''-marker \textit{pinis} (from Eng. \textit{finish}) has followed a similar course except that it continues to occur only postverbally. I have consistently claimed that differences between TP and classic creoles would result from differences in their histories, in particular the period of several genera\-tions which TP passed as a pidgin prior to creolization. Such a period would allow time for the original sequence-markers to become firmly established in the language and to take on more tense-like and modal-like meanings through the operations of natural change. 
By the time TP creolized, therefore, it had already developed a complex auxiliary system, without any of the catastrophic suddenness which, as we saw %\originalpage{82} in \chapref{ch:1}, characterizes true creoles. The first creole generation in TP was therefore presented with a fait accompli; all it could do was accept the markers bequeathed to it and carry out some minor cosmetic operations on \textit{baimbai}, phonologically reducing it and shifting it to a more ``appropriate'' position. The gap between TP and true creoles is not, of course, an abso\-lute one. Some of the features that distinguish TP (prolonged growth period, sustained bilingualism, etc.) could be shared to a lesser extent with some of the true creoles. In the case of \ili{Crioulo}, evidence is flatly contradictory: ``\ili{Crioulo} \ldots~has no native speakers'' \citep{Alleyne1979}; ``\ili{Crioulo} \ldots~[is] the first language of many who are born and bred in the main towns'' \citep[vii]{Wilson1962}. A plausible compromise would seem to be late creolization followed by the persistence of a small native-speaker core within a wide lingua-franca penumbra. Under such circumstances, a more gradual transition from pidgin to creole, with concomitant retention of more pidgin features, is certainly a possibility. \isi{Curaçao}, home of \ili{Papiamentu}, might at first sight look very different from the Guinea of \ili{Crioulo} -- a Caribbean island where sustained bilingualism would have been impossible. However, Curaçao differs from most Caribbean islands\is{Caribbean} in that it is extremely dry and infertile. For over a century, before the Dutch seized it, and indeed to some extent thereafter, it served as a staging post in the slave trade, a place where slaves were held and seasoned while awaiting trans\-portation to other points in the Caribbean or Latin America. 
With a constant turnover in the population, and transients always heavily outnumbering the minority who remained, it may well be that a pidgin stage persisted here much longer than it did elsewhere in the Caribbean\is{Caribbean}, or at least long enough for more pidgin features to establish themselves. Clearly, in both cases, more historical study is needed, but the hypothe\-sis of somewhat delayed creolization would both explain the phe\-nomena involved and accord with our present knowledge of social history. %\originalpage{83} Let us turn now to Deviation C (Page~\pageref{majordeviations}), the past versus anterior\is{anterior tense} issue. In the first place, it must be made clear that GC, SR, and HC\il{Haitian Creole} were generally claimed to have past-tense markers, prior to my re\-analysis of their TMA systems\is{Guyanese Creole!TMA system} (see, e.g., \citet{Hall1953} for HC, \citet{Voorhoeve1957} for SR, etc.). However, that reanalysis has not been seriously challenged.\footnote{There have been some nonserious nonchallenges, of course. \citet{Christie1976} produced an analysis of LAC which showed it to be not far short of identity with GC but insisted on preserving traditional terms, obvious though it was that these did not fit (getting the distri\-bution of anterior correct and then calling it past is, to me at least, a quite incomprehensible maneuver). \citet{Seuren1980} endorsed the analysis of \citet{Voorhoeve1957}, shown in \citet{Bickerton1975} to be intern\-ally incoherent, and neatly avoided having to consider the latter analy\-sis by calling it ``sociolinguistic'' [\textit{sic!}]. But no one has systematically attempted to criticize my analyses of GC, SR, HC\il{Haitian Creole}, and HCE, for the obvious reasons.} It seems reasonable, therefore, to suppose that in a number of other creoles which I did not specifically examine, markers are still being described as ``past'' which in reality are +anterior\is{anterior tense}. 
Let us examine a language, \ili{Seychelles Creole}, of which this claim is frequently made. According to \citet[102]{Corne1977}, ``the marker \textit{ti} defines the past, both simple and habitual''; according to \citet[55]{Bollee1977}, ``\textit{ti} expresses the past, definite or indefinite; it is comparable to the past tense\is{past tense|(} in \ili{English}''\is{English!tenses}. However, these confident and sweeping statements are immediately modified by both parties. Corne observes that ``once past time has been established in a given situation, \textit{ti} is frequently omitted'', especially in narratives ``where, after an initial use (or uses) of \textit{ti}, much of the remainder of the story may be told with verb forms unmarked for Past (i.e., as a sort of ``historical present'')''. This ``historical present'' also crops up in Bollée's second thoughts. While, according to her, the zero or stem form of the verb ``has the value of the \ili{French} present tense'' -- the reader will note the Euro\-centrism that unites these accounts -- it also expresses the ``historical present'' which is ``above all others the narrative tense \ldots~.
After a brief introduction in the past, the rest of the story is told in the pres\-ent.'' However, she immediately adds a second thought: ``The above is not quite correct; the past often reappears at the opening of a new paragraph.'' Accounts of this nature inevitably arouse one's suspicions, especially as the \isi{First Law of Creole Studies} states: ``Every creolist's analysis can be directly contradicted by that creolist's own texts and citations.'' To demonstrate this law, I will analyze the middle portion of the second paragraph of the story \textit{Sabotaz at de ser} \citep[166]{Bollee1977}: %\originalpage{84} \ea\label{ex:2:94} \gll {Biro} {leformasjo} {i} \emph{resevwar} {e} {let} {sorta} {lafras,} {avoje} {par} {e} {garso} {nome} {M}{sje} {Lezen} {ki} \emph{ti} \emph{ana} {trat-a,} {ki} \emph{ti} \emph{ule} {gaj} {portre} {e} {fij} {seselwas} {\ldots } {Me} {kom} {sa} {zofisje} \emph{ti} \emph{kon} {b}{je} {Msje} {ek} {M}{adam} {Lamur} {\ldots} {alor} i \emph{gaj} {e} {konsiderasjo} \\ Bureau information {\PM} receive a letter leaving France sent by a fellow name M. Lezen who {\TNS} have {thirty years} who {\TNS} want get portrait a girl Seychellois { } but as the agent {\TNS} know well M. and Mme. Lamur { } then he take into consideration\\ \glt `The Bureau of Information \textbf{received} a letter from France, sent by someone called Mr. Lezen, who \textbf{was} thirty years old, and who \textbf{wanted} to obtain a portrait of a Seychellois girl \ldots~. But as the agent \textbf{knew} Mr. and Mrs. Lamur well \ldots~he \textbf{took} into consideration \ldots ' \z Here, as in many other places in Bollée's texts, the narrative switches from ``historical present'' into ``past'' and back again, right in mid-paragraph. Even Bollée's final disclaimer, therefore, will not work here. What is the explanation? 
The alert reader will perhaps have noticed that the ``historical present'' verbs that immediately precede and follow the switch into ``past'' are both nonstatives -- \textit{resevwar} `receive', \textit{gaj(e)} `obtain, take' -- while the three verbs marked with \textit{ti} are all statives -- \textit{ana} `have', \textit{ule} `want', and \textit{kone} `know'. If we refer back to the story openings that both Corne and Bollée mention, we find that the verbs marked there by \textit{ti} are also statives.\is{stative} Folktales\is{folktales} almost invariably begin with one or several of these: ``Once upon a time there \textit{was} a girl \ldots~ she \textit{was called} such and such~\ldots~ she \textit{had} two sisters \ldots '' It is this simple coincidence that has given rise to the hard-dying creole myth about ``narrative tenses'' and ``historical presents''. In fact, in systems which have the feature anterior, past-reference nonstatives are unmarked, while past-reference statives receive anterior %\originalpage{85} marking (see \citet[Chapter 2]{Bickerton1975} for an explanation of why this is so). Thus, the distribution of \textit{ti} and zero in SC\il{Seychelles Creole} texts follows exactly the same rule of anterior\is{anterior tense} marking that affects stative and nonstative pasts in GC, HC\il{Haitian Creole}, SR, etc.\is{Guyanese Creole!TMA system} Bollée and Corne\ia{Bollée, Annegret}\ia{Corne, Chris} cannot be blamed too heavily for this faulty analysis since SC is not a pure anterior system but one which under\-went certain changes when the completive \textit{(fi)n} was incorporated into Aux and permitted to combine with other markers. We shall see the consequences of this when we return to SC\il{Seychelles Creole} in the discussion of Deviation D. However, there are creoles in which the presence of the category past cannot be attributed to faulty analyses.
Papiamentu is perhaps the best attested of these, so, pending adequate data on the few creoles that seem to resemble it, we may take the PP\il{Papiamentu} model as typical. I pro\-pose to claim that wherever this deviance is attested, it is the result of either heavy superstrate influence on the pidgin stage, or (more prob\-able in the majority of cases) subsequent \isi{decreolization}. In the first place, in both HCE and GC\is{Hawaiian Creole English!TMA system}, where \isi{decreolization} phenomena are clear and well understood, the shift from anterior mark\-ing to past marking represents one of the earliest superstrate-influenced changes \citep{Bickerton1975,Bickerton1977}. Even in \ili{Sranan}, no longer in contact with its superstrate, a similar change is taking place, at least among literate Sranan-Dutch bilinguals, as can be seen if we compare the most recent texts with earlier ones in, e.g., \citet{Voorhoeve1976}. However, a problem arises in the case of languages for which, unlike GC or SR\is{Guyanese Creole!TMA system}, no prior anterior stage is attested. Can we reconstruct such a stage from synchronic evidence? There is a good likelihood that we can. In \citet{Bickerton1980} I showed that we could differentiate between \isi{decreolization} stages and natural changes: the former changed forms first and functions later, while the latter preserved old forms and gave them new functions.
If the changes in PP\il{Papiamentu|(} are due to decreolization, and if there was an original anterior marker, then it follows that whatever is the past %\originalpage{86} marker now could not have been the anterior\is{anterior tense} marker then; in decreoli\-zation\is{decreolization|(}, instead of the original marker changing its function, a new marker is first adopted alongside of it, originally with an identical meaning (\textit{did} alongside GC \textit{bin}, \textit{wen} alongside HCE \textit{bin}), and then gradually changes its meaning to +past while the original anterior marker disappears or, if we are lucky, remains fossilized in some social or grammatical corner of the language. So, if we are both correct and lucky, we should be able to find in PP both a synchronic past marker and some vestige of the original anterior marker it displaced. The PP past marker is \textit{a}, presumably cognate with PQ\il{Palenquero} \textit{a}, PK\il{Papia Kristang} \textit{ya} -- all of which most probably derive from Pg. \textit{ja} `already'. Adverbs, as we have seen, are not a good source for creole TMA markers. Anterior markers are most often recruited from a past copula form: Fr. \textit{été} yielding \ili{French} creole \textit{ti}, \textit{te}, etc.; Eng. \textit{been} yielding \ili{English} creole \textit{bin}, \textit{ben}, etc.; and \il{Portuguese}Pg. \textit{estava} yielding ST (and other Bight-of-Benin creole) \textit{tava}. \textit{Tava/taba} is therefore what one would predict as an anterior marker in the original stage of any Portuguese creole\il{Portuguese creoles}. \textit{Taba} is of course well attested for PP, and its distribution is most interesting.
Unlike other auxiliaries, it cannot occur alone before a verb, but only in conjunction with nonpunctual \textit{ta}: \ea[*]{mi taba lesa}\label{ex:2:95}\z \ea[ ]{ mi a lesa\\ \glt `I read (past)'} \label{ex:2:96}\z \ea[ ]{ mi ta lesa\\ \glt `I am reading'} \label{ex:2:97}\z \ea[ ]{ mi tabata lesa\\ \glt `I was reading'} \label{ex:2:98}\z The fusion of \textit{taba} and \textit{ta} clearly recalls GC \textit{bina}, of similar meaning and origin (i.e., the conjunction of anterior and nonpunctual attested, I believe, for every creole without exception). If \textit{a} had formed part of the original set of auxiliaries, whether with past or anterior\is{anterior tense} meaning, we would have expected to find the form *\textit{ata} for past-progressive sentences such as \REF{ex:2:98}. %\originalpage{87} We may therefore propose the following scenario for Papiamentu. Originally, it had \textit{taba} anterior and \textit{ta} nonpunctual, permitting the formation of \textit{tabata} (it is hard to think of any other way in which this form could have been derived). Decreolization then began, via contact between PP and the \ili{Spanish} of the Venezuelan mainland only a few miles away. Spanish \textit{ya} could have been the model as easily as Pg. \textit{ja}; indeed, for PP-Spanish bilinguals, \textit{ya} and \textit{ha}, the Spanish 3rd person perfective marker, could have easily reinforced one another to merge in \textit{a}. The result of borrowing \textit{a} as a past marker\is{past tense|)} would have been to bring Papiamentu, phonologically as well as semantically, more in line with its prestigious neighbor. But by the time \textit{a} entered the language, \textit{tabata} would already have come to be perceived as a single unit (as its modern orthography suggests) and would thus have survived the subsequent disappearance of \textit{taba}. 
Similarly, \textit{ti pe}, an SC\il{Seychelles Creole} form of comparable meaning, was retained even when some anterior functions of \textit{ti} were taken over by \textit{(fi)n}.\is{decreolization|)} A further argument for an original anterior-nonanterior distinc\-tion in Papiamentu comes from the synchronic distribution of zero forms. For example, \citet[107]{Goilo1953} observes that stem forms express the present indicative for verbs such as \textit{gusta} `like', \textit{quier} `love', \textit{jama} `be called', etc. -- i.e., statives -- while all other verbs form the same tense with \textit{ta}. It should be made quite clear that this is not a parallel to the English habitual-progressive distinction. In English\is{English!tenses}, we can have \textit{I write} as well as \textit{I want}, and there exists the opposition \textit{I write/I am writing}. Papiamentu presents quite a different picture: \ea[ ]{\label{ex:2:99} mi quier esaquinan \\ \glt `I want these'} \z \exewidth{(234)} \ea[*]{\label{ex:2:100} mi ta quier esaquinan} \z \ea[*]{\label{ex:2:101} mi skirbi buki} \z \ea[ ]{mi ta skirbi buki \\ \glt `I write books/am writing a book'}\label{ex:2:102}\z If Papiamentu started life with a simple past-present opposition, %\originalpage{88} expressed by \textit{a} versus \textit{ta}, then the fact that statives in the present cannot take \textit{ta}, while nonstatives must, becomes merely a mysterious anomaly. However, if it began with an anterior-nonanterior\is{anterior tense} opposition, the fact is well motivated. In all such systems, present-reference statives are unmarked (since by definition they cannot take nonpunctual mark\-ing), and present-reference nonstatives are obligatorily marked with the nonpunctual morpheme (since any event or action that is ongoing in the present must be either durative or part of a series, whereas, con\-versely, any punctual event or action must be over, i.e., in the past, by the time it can be referred to!). 
The distribution of zeros and nonpunctuals in PP is identical to that of synchronic SR or basilectal GC\is{Guyanese Creole!TMA system}, except for one thing: in the latter, past punctuals, as well as present statives, are zero-marked (again, \citet[Chapter 2]{Bickerton1975} explains why this should be \textit{so}). In Papiamentu, \textit{a} has moved in to fill the ``vacuum'' created by zero-marked past-reference nonstatives\is{stative}, thereby bringing the PP\il{Papiamentu|)} TMA system closer to European models, but, again, as with \textit{tabata}, leaving clear traces of the more creole system that must have existed at an earlier stage.\footnote{It is perhaps worth observing that no account of Papiamentu that I know of translates \textit{I had worked}, so that the PP TMA system may not, in fact, differ as much from the classic system as those ac\-counts might suggest. In general, not only are most analyses of TMA systems incorrect, nine out of ten of them are simply incomplete, lacking the critical information which would make it possible to deter\-mine how they work. Yet, since these defective analyses buttress Euro-centric prejudices, they are hardly ever questioned, let alone criticized.} We can now turn to Deviation D: the fact that MC\il{Mauritian Creole} and SC\il{Seychelles Creole} contain both \textit{ti} and \textit{(fi)n}, \textit{a} and \textit{pu}. First, I shall have to comment on the state-of-the-art in Indian Ocean creoles\il{Indian Ocean creoles}. In MC and SC\il{Seychelles Creole}, and a fortiori in \ili{Réunion Creole}, that state is perhaps best exemplified by the following pessimistic remarks of \citet[94--95]{Corne1977}: \begin{quote} A close study of SC preverbal markers has been made by Bollée, by Papen and by myself, and the results of our efforts do not always coincide \ldots~. The sociolinguistic background of our informants is to some extent different, and this alone is quite possibly the source of conflicting data. 
It seems likely that some speakers categorise given markers differently from other speakers. \end{quote} %\originalpage{89} Methods which make it possible, without invoking extralinguistic data, to systematically order and account for all variations in such systems were publicly presented by DeCamp in 1968 \citep{DeCamp1971}, refined and extended by C.-J. \citeauthor{Bailey1973} (\citeyear{Bailey1973}, etc.), and applied to the analysis of a seemingly far worse preverbal chaos in \citet{Bickerton1975}, after they had been proven effective in a number of other areas where creoles showed similar variation\is{variation, linguistic} (\citealt{Bickerton1971,Bickerton1973a,Bickerton1973b}, etc.). The bibliographies of Baker, Bollée, Corne, and other IOC\il{Indian Ocean creoles} scholars (\citealt{CarayolEtAl1977} is a distinguished exception) betray no awareness of any of this work, which I suppose merely reflects the parochialism that afflicts the field. Given this methodological time lag, anything one says about IOC TMA systems must be treated as provisional. Combinations of markers form the liveliest areas of dispute. According to \citet{Baker1972} and \citet{Valdman1980}, \textit{(fi)n} will combine only with \textit{ti} in MC\il{Mauritian Creole}; according to \citet{Moorghen1975}, it will also combine with \textit{a} and \textit{pu}. According to \citet{Bollee1977}, \textit{(fi)n} will combine with \textit{a} and \textit{pu} in SC\il{Seychelles Creole}, but not with \textit{pe}; according to \citet{Corne1977}, \textit{n pe} com\-binations are found, in addition to the others. A dynamic analysis can easily reconcile all these apparent contradictions.
It was suggested earlier that between the second stage of \textit{(fi)n} incorporation -- movement into Aux -- and the third stage of \textit{(fi)n} incorporation -- free combinability with other preverbal markers -- we would expect to find intermediate stages, and the data in the previous paragraph suggest just such stages, preserved synchronically in the Indian Ocean population either by different groups, or at different stylistic levels, or both. If those data are correct, it would appear that the combinability of \textit{(fi)n} began with \textit{ti} in SC, spread to MC, while in SC it extended also to \textit{pu} and \textit{a}, and (for at least some speakers) spread to \textit{pu} and \textit{a} in MC\il{Mauritian Creole} while it was extending to \textit{pe} in SC\il{Seychelles Creole} -- a classic demonstration of Baileyan \isi{wave theory}.\footnote{When I wrote this paragraph, I was quite unaware that Baker\ia{Baker, Philip} had produced an extremely interesting account of the historical de\-velopment of MC, based in part on an analysis of all currently known historical citations \citep{Baker1976}, which provides a striking piece of independent support for this analysis. While \textit{fini} is recorded as a pre\-verbal marker in 1780, \textit{ti} is not recorded until 1818; but the \textit{ti va} combination is recorded in 1828, while the \textit{ti fin} combination is not recorded until 1867! Granted that these dates are probably all late -- nonstandard speech phenomena tend to have a long and lively life before they tickle the bourgeoisie, cf. \textit{olelo pa'i'ai} (see \chapref{ch:1}) which blushed unseen in Hawaii for nearly a century -- there is no need to doubt that their order and spacing are substantially correct. 
Baker seems not to realize, however, that the 1780 source derives, on both internal and external evidence, from a pidgin and not a creole speaker.} This proposal is made all the more plausible by the fact that combinability proceeds from tense, the leftmost Aux constituent, to modal, the second leftmost, to aspect, the rightmost (which, perhaps coincidentally, perhaps %\originalpage{90} not, is a conjunction of two members of the same class, \textit{(fi)n} being a \isi{completive}, therefore aspectual, and therefore, initially perhaps, being in the same slot as \textit{pe} and thus barred from co-occurring with it).\\\\ We are now almost ready to consider what would have been the repercussions of an invasion by \textit{(fi)n} of a classic creole system. First, however, we must note a particular characteristic of TMA systems which, though seemingly obvious, has been ignored by virtually all work up to and including \citegenp{Comrie1976} influential study of aspect. Meillet's famous observation that ``language is a system in which every\-thing keeps its place'' has the corollary that if a new element intrudes, everything must shift its place somewhat; while the latter statement may not be true of languages considered as wholes, it is certainly true for tight little grammatical subsystems like those of TMA. A TMA system may be compared to a cake, a cake that is always the same size, for TMA systems, whether simple or complex, all have to cover the same semantic area: every verb has to have some tense, mood, aspect, or combination of these applied to it, for there are (pace some creolists) no such things as ``TMA-neutral'' sentences. But a cake may be split up into five, or eight, or ten slices, just as a TMA system can divide its semantic area among five, or eight, or ten TMA markers and/or combinations of markers. 
If a cake is divided into five slices, while another identical to it is divided into eight slices, there is no way in which each of the slices in Cake A can contain exactly the same amount of material as each of the slices in Cake B. In other words, how much, and exactly what, is contained in each slice will be largely determined by the number of slices. This is exactly the state of affairs in TMA systems throughout language; what each marker of modality, tense, or aspect means will be largely determined by how many markers of these things there are in the system and by what each of the others means. Facts such as these are, however, ignored by most scholars in the field, who strive to fit all phenomena into the same conceptual straitjacket, and who, when
%\originalpage{91}
this fails, as fail it must, then seek, like Comrie, some kind of ideal type of the ``Progressive'' or the ``Perfective''. The main point to be grasped here is that if you mark out a cake to be cut into \textit{n} slices, then change your mind and decide to cut \textit{n + 1}, you can only get your extra slice at the expense of one or more of the originals. Thus, if \textit{(fi)n} were introduced as a ninth term into the classic eight-term creole system, it could only be accommodated by robbing the semantic domain of one or more existing markers. Since \textit{(fi)n} conjoined first with \textit{ti}, it is not surprising that \textit{ti} was its main victim. Admittedly, \textit{ti} held its ground with statives, as we saw in \REF{ex:2:94},\footnote{\citet{Corne1981} observes that ``with state `Verbals' \textit{fin} does not occur, since a state has by definition already been attained.'' Thus, the failure of \textit{fin} to take over anterior\is{anterior tense} marking in statives is a principled one, and not some inexplicable accident.} and in the nonpunctual \textit{ti pe} structure; the picture with punctual nonstatives is more complex.
To clarify it, we need to refine the concept of anterior, which we can provisionally define as ``prior to the current focus of discourse''. But current focus may be explicit (where the times of an earlier and later event are directly contrasted), or implicit (where the relationship between the earlier and later events is simply assumed), or there may be nothing prior to current focus. The situation will become clearer with the following examples:\is{Guyanese Creole!TMA system|(} %\todo{These examples are set in rm in the original} %\ea\label{ex:2:103} %{Current focus, nothing prior:}{}{}\\ %\begin{tabular}{lll} %& Eng.: & Bill has come/came to see you\\ %& GC: & Bil (don) kom fi sii yu %\end{tabular} %\z \ea\label{ex:2:103} {Current focus, nothing prior:}\\ \ea \ili{\langEng}{}{}\\ Bill has come/came to see you\\ \ex \ili{\langGC}{}{}\\ Bil (don) kom fi sii yu\\ \z \z \ea\label{ex:2:104} {Prior event, current focus implicit:}{}\\ \ea \ili{\langEng}{}{}\\ Bill came/*has/*had come to see you yesterday, too\\ \ex \ili{\langGC}{}{}\\ Bil bin kom/*don kom/*kom fi sii yu yestide an aal\\ \z \z \ea\label{ex:2:105} {Prior event, current focus explicit:}{}\\ \ea \ili{\langEng}{}{}\\ When I got here, Bill had come/*has come/*came already\\ \ex \ili{\langGC}{}{}\\ wen mi riich, bil bin kom/*don kom/*kom aredi\\ \z \z In \REF{ex:2:104} current focus is on the present, second visit of Bill implied by \textit{too}; this, English\is{English!tenses} can handle by one of the means available for \REF{ex:2:103}, but the anterior system of GC cannot. Example \REF{ex:2:104} has to be treated exactly like \REF{ex:2:105} in GC; \REF{ex:2:105} must be treated differ% %\originalpage{92} ently from \REF{ex:2:104} in English. This illustrates just one of the many differences between past-nonpast\is{past tense} and anterior-nonanterior\is{anterior tense} systems. 
\citet[107]{Corne1977} has an illuminating minimal pair which shows that SC\il{Seychelles Creole} behaves much more like GC than like English\is{English!tenses} on ``current focus, nothing prior'' cases like \REF{ex:2:106} and ``prior event, current focus implicit'' cases like \REF{ex:2:108}; GC translations follow each example: \ea\label{ex:2:106} \gll mô \emph{n} \emph{vin} isi pur eksplik \ldots\\ I {\COMP} come here for explain\\ \glt `I have come here to explain (and I'm still here)' \z \ea\label{ex:2:107} \ili{\langGC}{}{}\\ mi \textit{(don)} \textit{kom} ya fi ekspleen \ldots \z \ea\label{ex:2:108} mô ti vin isi pur eksplik \ldots \\ \glt `I came (on a previous occasion) to explain (and then went away again)' \z \ea\label{ex:2:109} mi \textit{bin} \textit{kom} ya fi ekspleen \ldots \z \noindent The implicit current focus is of course the speaker's most recent arrival, since he could not say \textit{I came} unless he were here again. However, when current focus is explicit, SC\il{Seychelles Creole} and GC part company (SC example from \citealt[108]{Corne1977}): \ea\label{ex:2:110} \gll letâ mô ti âtre dâ lasam, i \emph{ti} \emph{n} \emph{fini} mâz sô banan\footnotemark \\ time I {\TNS} enter in room, he {\TNS} {\COMP} finish eat his banana\\ \glt `When I entered the room, he had finished eating his banana' \z \footnotetext{Here Corne falls victim to the \isi{First Law of Creole Studies}, since he himself stated five pages earlier (\citeyear[103]{Corne1977}) that \textit{ti} is omitted from subordinate clauses. But I suspect that he was mostly right on this occasion and that he had not made allowances for the nonhomogeneity of SC.
I would be prepared to bet that \REF{ex:2:110} came from a higher-class, more decreolized consultant.} \ea\label{ex:2:111} wen mi kom iin di ruum, i \textit{bin} \textit{finish} (*bin don finish) nyam i banana\\ \z In other words, it is only where there is prior reference with explicit current focus -- i.e., when two past events have to be explicitly ordered with respect to one another -- that \textit{ti n} encroaches on the domain of anterior \textit{ti}. However, reference of this kind is probably the most perceptually obvious of anterior functions (certainly it is the easiest to teach in creole courses), and its loss to a \isi{completive} cannot but serve to erode an anterior-based system and tilt it in the direction of a past-based system.
%\originalpage{93}
Anterior\is{anterior tense} is further eroded once one begins to get \textit{(ti) a n} and \textit{(ti) pu n} constructions. In the classic system, irrealis handles condi\-tionals and there is no distinction between probable and improbable conditions, so long as they are not counterfactual conditions: \ea\label{ex:2:112} \ili{\langGC}{}{}\\ mi go tel am if mi sii am \glt `I'll tell him if I see him' or `I would tell him if I saw him' \z Counterfactuals\is{counterfactuals} are expressed by a combination of anterior\is{anterior tense} and irrealis: anterior because current focus in such cases is always on the conse\-quences of not having done whatever one didn't do; and irrealis\is{irrealis modality} because the action or event in question is an imaginary one: \ea\label{ex:2:113} \ili{\langGC}{}{}\\ if mi bin sii am mi bin go tel am \glt `If I had seen him I would have told him' \z In SC\il{Seychelles Creole}, \textit{ti a n} (and less commonly, \textit{ti pu n}) naturally takes over from a prior \textit{ti a} the ``pastest'' among conditions, counterfactuals such as \REF{ex:2:113}, yielding sentences such as \REF{ex:2:114} (example from \citealt[109]{Corne1977}): \ea\label{ex:2:114} \gll mô \emph{ti} \emph{a}
\emph{n} marie, si mô pa ti mizer\\ I {\TNS} {\MOD} {\COMP} marry, if I not {\TNS} poor\\ \glt `I would have gotten married if I weren't poor' \z \ea\label{ex:2:115} \ili{\langGC}{}{}\\ mi \textit{bin} \textit{go} mari if mi na bin puur \z The result of this development is another change in the system. The coming into existence of \textit{ti a n} does not automatically remove the former expression of counterfactuals, \textit{ti a}; \textit{ti a} remains in the lan\-guage, and what remains in the language has to mean something. The consequence of the \textit{ti a n} invasion is therefore the shifting of \textit{ti a} one step down the hierarchy of conditions, from impossible to merely improbable (\textit{if X, I would Y}) -- again, the example is from \citet[106]{Corne1977}: %\originalpage{94} \ea\label{ex:2:116} \gll si u ti aste lavian, i \emph{ti} \emph{a} mâze\\ if you {\TNS} buy meat, he {\TNS} {\MOD} eat\\ \glt `If you bought some meat, he would eat it' \z \ea\label{ex:2:117} \ili{\langGC}{}{}\\ if yu bai miit, i \textit{go} nyam am \z A further erosion of anterior terrain is indicated by the structure of the subordinate clauses in these examples. Note that the subordinate clause in counterfactual \REF{ex:2:115} requires anterior marking but that the subordinate clause in the merely improbable \REF{ex:2:117} does not; this is classic anterior marking. However, the subordinate clauses in both counterfactual \REF{ex:2:114} and merely improbable \REF{ex:2:116} are marked with \textit{ti}, presumably because subordinate clause marking is dragged down, so to speak, by the shift of \textit{ti a} main-clause marking from counterfactuals to improbables. In other words, once you turn a \isi{completive} loose in a classic creole TMA system, the only consequence must be a drastic remodeling of that system. 
Some creoles (SR\il{Sranan}, basilectal GC, HC\il{Haitian Creole}) have kept their completives\is{completive} under control either by keeping them out of Aux al\-together or by allowing them in but not letting them combine with other auxiliaries. It is not coincidence that these creoles are ones which have kept the classic TMA system virtually intact. On the other hand, creoles that have let the \isi{completive} have the run of the house -- such as SC\il{Seychelles Creole}, MC\il{Mauritian Creole}, and \ili{Krio} -- have, in consequence, had to change their TMA systems to a point at which reconstruction of the original system becomes quite difficult, although not -- thanks to the careful work of Corne and others -- impossible. The curious reader may well ask, ``Why is it that some systems have let loose their completives\is{completive}, while others have not?'' I have no answer to that, at present. Suffice it to say that doing so must remain an option within any theory of \isi{linguistic change}; if the option is taken, certain results must follow, as night, day; if it is not, they will not. It would be interesting to know why, but the fact that we do not can in no way affect the validity of the foregoing analysis. % CREOLE 95 The other main divergence of IOC\il{Indian Ocean creoles} from the classic model is the presence of two ``future'' forms, \textit{a} and \textit{pu}. Several things are at issue here. One of them is why either form should be limited to future, rather than being a true irrealis\is{irrealis modality}. 
In fact, on the evidence of \citet[103]{Corne1977}, \textit{pu} retains conditional meaning, if only in subordinate clauses, and thus still covers a large part of the semantic area of ir\-realis: \ea\label{ex:2:118} \gll i pa ti kone ki i \emph{pu} fer\\ he not {\TNS} know what he {\MOD} do \\ \glt `He didn't know what he would do' \z \ea\label{ex:2:119} \ili{\langGC}{}{}\\ i na (bin) no wa i go du/wa fi du \z The GC translation is instructive in several respects. First, the optionality of \textit{bin} serves to underscore another characteristic of ante\-rior as opposed to past marking. Expressions like \textit{he didn't know} are ambiguous between `he didn't know, but he knows now' and `he didn't know then and he still doesn't know'. In the first reading, \textit{bin} is obli\-gatorily present since this reading represents another instance of prior event (or rather, prior state, in this case) with implicit current focus, i.e., upon the change in state of the person referred to. In the second reading, \textit{bin} is obligatorily absent since the state of not knowing is a continuing one, and there is therefore nothing prior to refer to. Second, we see again that the range of a true irrealis\is{irrealis modality} (GC \textit{go}) parallels at least part of the range of an SC\il{Seychelles Creole} marker. But the third point is perhaps the most interesting. \textit{He did not know what he would do} is semantically close to, if not quite synonymous with, \textit{he did not know what to do}. The GC sentence \textit{i na no wa fi du} more accurately translates the second of these sentences. \textit{Fi} (which often takes the form \textit{fu}) derives from \il{English}Eng. \textit{for}, while \textit{pu} derives from Fr. \textit{pour} `for'. \textit{Fi} can also be a complementizer, as can \textit{pu}.
\textit{Fi} can occur as an auxiliary in its own right: \ea\label{ex:2:120} mi fi go\\ \glt `I should/ought to go' \z %\originalpage{96} Note that \textit{fi} is narrowly restricted to a meaning of \isi{obligation}, while it is a verbal auxiliary. When it is a complementizer, however, it and its cognates in other creoles express irrealis\is{irrealis modality} meaning just as \textit{pu} does, see examples \REF{ex:2:27}--\REF{ex:2:30} and \REF{ex:2:35}--\REF{ex:2:37} above. \textit{Fi} in general does not combine with other auxiliaries, but in GC it does occasionally occur with \textit{bin}, and it is tempting to claim that it is starting to do just what \textit{(fi)n} did in IOC, i.e., combine first of all with the anterior marker. Thus one can have GC \REF{ex:2:121}: \ea\label{ex:2:121} mi bin fu nak am \glt `I should have hit him' or `I was about to hit him' \z The construction is not common in GC, and native speakers are more or less evenly divided as to which gloss is the more appropriate. Thus, in basilectal GC, with a classic TMA system, only a slight increase in the semantic range of \textit{fi} and in the syntactic privileges of occurrence is needed for the situation to begin to approximate that of SC\il{Seychelles Creole} and MC\il{Mauritian Creole}. All we have to assume is that a process which is beginning in GC (and which must be latent in any creole since all creoles, presumably -- the most drudgingly comprehensive grammars are all too often silent on this score -- have an auxiliary of obligation which is ipso facto +irrealis) has been taken a stage or two further in IOC, languages which we already know have a predilection for expanding and complicating Aux. The rest of the story is simple. 
As \textit{pu} was graduating as a full-fledged competitor to the original \is{irrealis modality}irrealis \textit{a}, \textit{(fi)n} was distorting the classic system in such a way that the irrealis scope of both markers, \textit{a} and \textit{pu}, was losing some of its conditional functions and thus was getting closer to a simple future. In any case, when any two mor\-phemes divide the semantic terrain of ``future'', it is a highly natural development that they should mark out their boundaries in some way, and that those boundaries, while often vague, should generally dis\-tinguish relatively likely from relatively unlikely events (cf. the discus\-sion of \textit{go} versus \textit{gon}, HCE's first mesolectal replacement, \citealt[23ff., 181ff.]{Bickerton1977}). %\originalpage{97} In fact, the IOC\il{Indian Ocean creoles} position is far from clear-cut, and MC and SC\il{Seychelles Creole} seem to have developed rather differently. According to \citet{Corne1977}, \textit{pu} in MC marks a more definite future. In SC\il{Seychelles Creole}, on the other hand, the precise roles of \textit{pu} and \textit{a} are more vague, although the fact that only \textit{pu} can occur in the scope of negation suggests that here, \textit{pu} may be becoming the less definite of the two.\\\\ We have now seen how two very natural developments could have turned a classic creole TMA system into the kind of system we see in IOC today. IOC scholars will doubtless object that the account I have given is a purely conjectural one. That may be; but if it is con\-jectural, that is only because those scholars have not done the job of tracing the diachronic development of IOC through synchronic resi\-dues, as was done in \citet{Bickerton1975} for GC.\is{Guyanese Creole!TMA system|)} The most anyone can do who does not have direct and unlimited access to the IOC commun\-ity is to show that similar developments have occurred or are occurring elsewhere in other creoles, which I have done. 
There is thus a prima facie case for the scenario outlined above; conclusive evidence can only come through the patient sifting of the highly variable data about which all IOC scholars have complained, but which none of them have yet exploited.\\\\ We can now turn to our final deviation -- the split between habi\-tuals\is{habitual} and progressives which, according to \citet{Taylor1971}, conflates the former with the ``completive'' (in the present terminology, zero-marked past punctuals) in JC\il{Jamaican Creole} and HC\il{Haitian Creole} and with the ``future'' (in the present terminology, irrealis\is{irrealis modality}) in CR and ST\ili{S\~ao Tomense}. Again, as with anterior versus past marking, we must first ask ourselves if the data on which such claims are made are valid. Again, we must answer that at least sometimes they are not. For example, \citet[31]{Hall1953} describes the HC aspect marker \textit{ap(e)} as ``indicating action which is continuing, not yet complete, or future'' -- in other words, \textit{ap(e)} does not include habituals. This, if true, would indicate that the HC\il{Haitian Creole} system is not a classic TMA system since in that system the nonpunctual category embraces both con% %\originalpage{98} tinuing and \isi{habitual} actions. 
However, the \isi{First Law of Creole Studies} enables us to find, in Hall's own texts, numerous sentences in which \textit{ap(e)} marks habituals, past as well as nonpast: \ea\label{ex:2:122} \gll sa \emph{k-ap-fè} mâjé, \emph{ap-fè} mâjé pou-apézé lwa yo\\ that which-\ASP-make eat, \ASP-make eat {for appease} \textit{loa} {\PL}\\ \glt `Whatever they \textit{used} \textit{to} \textit{give} it to eat, they \textit{used} \textit{to} \textit{give} it to eat to appease the African gods' \z \ea\label{ex:2:123} \gll tout gasô chak mèkrédi \emph{ap-prâ} \emph{bou-t-makak} yo\\ all fellow each Wednesday \ASP-take stick {\PL}\\ \glt `Each Wednesday, all the fellows \textit{take} their sticks' \z \ea\label{ex:2:124} \gll tout moun sou-latè \emph{ap-chaché} pou-yo viv avèk êtélijâs\\ all person on-earth \ASP-seek for-they live with intelligence\\ \glt `Everyone on earth \textit{tries} to live with a head well filled (with knowledge)' \z Although some claims may be disposed of in this way, there remains, as with anteriors, a residue of cases where genuine problems need to be resolved. In this particular case, a new factor enters the system: areas of relative indeterminacy in \isi{semantic space}. In \chapref{ch:4}, where we will try to extract the very roots of semantics, it will become apparent that Deviation E represents an inherent point of weakness in the semantic infrastructure of the TMA system\is{tense-modality-aspect (TMA) systems|)}; we will see also that such points \textsc{must} exist if language is to change and develop. However, since a full account of Deviation E depends on a prior analysis of the nature of \isi{semantic space}, it will have to be postponed until that chapter. 
For the present, then, we can conclude that the bulk of so-called ``counterexamples'' to our analysis of creole TMA systems arises from one of the three following sources: %\setcounter{itemize}{0} \begin{enumerate} \item Inadequate data-gathering and/or acceptance of inaccurate data and/or faulty analysis of data. \item A slightly longer than normal antecedent period of \isi{pidginization}, allowing pidgin features to become fixed. % CREOLE 99 \item Linguistic change, internal or contact-stimulated, subsequent to creolization. \end{enumerate} In one or two cases, 2. would distort the normal process of creolization, although we must note that only syntactic, and not semantic, aspects of that process are affected: PP \textit{lo} and CR \textit{ba} retain their predicted meanings, even though they do not assume their predicted place in sentence structure. Being a subsequent development, 3. cannot have any relevance to the process of creolization itself. As for 1., one can only hope that this will disappear as the field continues to develop.\\\\ Finally, I shall examine three types of complementation in creoles: complements of perception verbs\is{complementation!of perception verbs|(}; factive\is{complementation!factive}, nonfactive, and related complement structures; and ``serial verb'' structures. My aim in doing so will be twofold. First, as in the previous sections of this chapter, I shall seek to show that substantial identities of structure exist throughout creoles, even where these may be masked by ongoing change processes or other factors. But I also want to establish certain facts about the nature of creole syntax -- facts which will assume a greater significance when we meet them again in Chapters \ref{ch:3} and \ref{ch:4}. In English, the complements\is{English!complements} of perception verbs consist of nonfinite sentences from which aspectual markers are excluded, and the subjects of which have undergone raising:\footnote{If you believe in raising. 
If you don't, substitute ``whatever rule marks the second NP as object of the first V.''} \ea[ ]{\label{ex:2:125}I saw him leaving the building.}\z \ea[ ]{\label{ex:2:126}We can hear them play trombones.}\z \ea[*]{\label{ex:2:127}I saw he leaving the building.}\z \ea[*]{\label{ex:2:128}I saw him was leaving the building.}\z \ea[*]{\label{ex:2:129}We can hear them have played trombones.}\z While a superficially similar sentence, \REF{ex:2:130}, is grammatical, it does not contain a perception-verb complement, but a factive complement\is{complementation!factive} that has undergone complementizer deletion: %\originalpage{100} \ea\label{ex:2:130}I saw he was leaving the building.\z \ea\label{ex:2:131}I saw \textit{that} he was leaving the building.\z In creoles, perception-verb complements\is{Guyanese Creole!complementation|(} are finite, can contain aspect markers, and have subjects which do not undergo raising: \ea\label{ex:2:132} \ili{\langGC}{}{}\\ \gll mi hia drom a nak\\ I hear drum {\ASP} beat\\ \glt `I heard drums beating' \z \ea\label{ex:2:133} \ili{\langGC}{}{}\\ dem sii i kom \glt `They saw him come' \z Although one might be tempted to gloss \REF{ex:2:132} as `I heard \textit{that} drums were beating' -- along the lines of \REF{ex:2:130} -- such a gloss would be incorrect; factive complements\is{complementation!factive} are introduced by the obligatory particle \textit{se}, which we shall return to later: \ea\label{ex:2:134} \ili{\langGC}{}{}\\ mi hia \textit{se} drom a nak \glt `I heard {\scshape that} drums were beating' \z In \REF{ex:2:133}, the nominative case of the 3rd person singular pronoun is obligatory; the accusative case is ungrammatical: \ea\label{ex:2:135} \ili{\langGC}{}{}\\ \textnormal{*} dem sii am kom \z Free occurrence of nonpunctual-aspect\is{nonpunctual aspect} \textit{a} and the ungrammaticality of an accusative form such as would result from raising indicate that the embedded sentences in \REF{ex:2:132} and \REF{ex:2:133} are finite. 
At least two further arguments point in a similar direction. In English perception-verb complements\is{English!complements}, it is possible not merely to raise the subject of the embedded S, but also to delete it. Thus, alongside \REF{ex:2:136} we can have \REF{ex:2:137}: \ea\label{ex:2:136}I heard Bill singing.\z %\originalpage{101} \ea\label{ex:2:137}I heard singing.\z GC will allow the equivalent of \REF{ex:2:136}, but not of \REF{ex:2:137}: \ea[ ]{\label{ex:2:138}mi hia bil a sing}\z \ea[*]{\label{ex:2:139}mi hia a sing}\z It is characteristic of languages in general that while they may allow zero subjects\is{zero subject} in nonfinites, they cannot freely delete subjects in finite clauses, except of course under identity, which does not apply here. A second argument involves the \isi{Propositional Island Constraint (PIC)} as proposed by \citet{Chomsky1977}. The PIC affects structure of the form: \ea\label{ex:2:140}\ldots~X \ldots~$_{\alpha}$[\ldots~Y \ldots] \ldots~X \ldots \z and prevents any rule from moving a constituent from position Y to either position X just in case α marks a finite clause. Let us assume that, ignoring some nonrelevant details, the structure underlying both \REF{ex:2:132} and its English equivalent is something like \REF{ex:2:141}.\footnote{As mentioned earlier in this chapter, it seems likely that in reality GC does not have VP as a constituent at the basilectal level. The contrary is assumed here merely in order to simplify the comparison between the English and GC \textit{processes}, and is not meant to imply any substantive claim about GC structure.} In the case of the English version of \REF{ex:2:132}, this would yield a derived structure something like \REF{ex:2:142}. However, in the case of the GC version of \REF{ex:2:132}, \REF{ex:2:141} would represent the superficial as well as the underlying structure. 
\begin{exe} \ex\label{ex:2:141} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] % Levels 1 & 2 \node at (0,0) (1S) {S}; \node [below left=\baselineskip and 18mm of 1S] (2NP) {NP}; \node [below=\baselineskip of 1S] (2AUX) {AUX}; \node [below right=\baselineskip and 30mm of 1S] (2VP) {VP}; % Level 3 \node [rectangle split, rectangle split parts=2, below=\baselineskip of 2NP] (3I) {I\nodepart{two}mi}; \node [below=\baselineskip of 2AUX] (3Past) {Past}; \node [below left=\baselineskip and 12mm of 2VP] (3V) {V}; \node [below right=\baselineskip and 12mm of 2VP] (3S) {S}; % Level 4 % \node [below=.1\baselineskip of 3I] (4mi) {mi}; \node [rectangle split, rectangle split parts=2, below=\baselineskip of 3V] (4hear) {hear\nodepart{two}hia}; \node [circle, draw, below left=\baselineskip and 12mm of 3S] (4NP) {NP}; \node [below=\baselineskip of 3S] (4AUX) {AUX}; \node [below right=\baselineskip and 12mm of 3S] (4VP) {VP}; % Levels 5 and 6 \node [rectangle split, rectangle split parts=2, below=\baselineskip of 4NP] (5drums) {drums\nodepart{two}drom}; \node [below=2\baselineskip of 4AUX] (5ASP) {ASP}; \node [below=2\baselineskip of 4VP] (5V) {V}; \node [rectangle split, rectangle split parts=2, below=\baselineskip of 5V] (6beat) {beat\nodepart{two}nak}; % Connections \draw (node cs:name=1S, anchor=south) -- (node cs:name=2NP); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2AUX); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2VP); \draw (node cs:name=2NP, anchor=south) -- (node cs:name=3I); \draw (node cs:name=2AUX, anchor=south) -- (node cs:name=3Past); \draw (node cs:name=2VP, anchor=south) -- (node cs:name=3V); \draw (node cs:name=2VP, anchor=south) -- (node cs:name=3S); \draw (node cs:name=3V, anchor=south) -- (node cs:name=4hear); \draw (node cs:name=3S, anchor=south) -- (node cs:name=4NP); \draw (node cs:name=3S, anchor=south) -- (node cs:name=4AUX); \draw (node cs:name=3S, anchor=south) -- (node cs:name=4VP); \draw (node 
cs:name=4NP, anchor=south) -- (node cs:name=5drums); \draw (node cs:name=4AUX, anchor=south) -- (node cs:name=5ASP); \draw (node cs:name=4VP, anchor=south) -- (node cs:name=5V); \draw (node cs:name=5V, anchor=south) -- (node cs:name=6beat); \end{tikzpicture} } %} \end{exe} %\originalpage{102} \begin{exe} \ex \label{ex:2:142} %\resizebox{.8\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] % Levels 1 & 2 \node at (0,0) (1S) {S}; \node [below left=\baselineskip and 12mm of 1S] (2NP) {NP}; \node [below=\baselineskip of 1S] (2AUX) {AUX}; \node [below right=\baselineskip and 24mm of 1S] (2VP) {VP}; % Level 3 \node [below=\baselineskip of 2NP] (3I) {I}; \node [below=\baselineskip of 2AUX] (3Past) {Past}; \node [below left=\baselineskip and 6mm of 2VP] (3V) {V}; \node [circle, draw, inner sep=2pt, yshift=.25em, below=\baselineskip of 2VP] (3NP) {NP}; \node [below right=\baselineskip and 6mm of 2VP] (3VP) {VP}; % Level4 \node [below=\baselineskip of 3V] (4hear) {\strut hear}; \node [below=\baselineskip of 3NP, yshift=.3em] (4drums) {\strut drums}; % NOT PERFECT!! 
\node [below=\baselineskip of 3VP] (4beating) {\strut beating};
% Connections
\draw (node cs:name=1S, anchor=south) -- (node cs:name=2NP); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2AUX); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2VP); \draw (node cs:name=2NP, anchor=south) -- (node cs:name=3I); \draw (node cs:name=2AUX, anchor=south) -- (node cs:name=3Past); \draw (node cs:name=2VP, anchor=south) -- (node cs:name=3V); \draw (node cs:name=2VP, anchor=south) -- (node cs:name=3NP); \draw (node cs:name=2VP, anchor=south) -- (node cs:name=3VP); \draw (node cs:name=3V, anchor=south) -- (node cs:name=4hear); \draw (node cs:name=3NP, anchor=south) -- (node cs:name=4drums); \draw (node cs:name=3VP, anchor=south) -- (node cs:name=4beating);
\end{tikzpicture}
}
%}
\end{exe}
If we have analyzed these sentences correctly, then it should be possible to extract from the circled node in \REF{ex:2:142}, since such a move does not violate the PIC\is{Propositional Island Constraint (PIC)}, but impossible to extract from the circled node in \REF{ex:2:141}, which would constitute such a violation since the lower S dominates a finite clause. Extraction from \REF{ex:2:142} is fine: \ea\label{ex:2:143} It was drums that I heard beating. \z \ea\label{ex:2:144} What did I hear beating? \z \ea\label{ex:2:145} The drums that I heard beating never stopped. \z Extraction from the circled node of \REF{ex:2:141}, however, yields only sentences which are ungrammatical in GC: \ea[*]{\label{ex:2:146}a drom mi hia a nak}\z \ea[*]{\label{ex:2:147}a wa mi hia a nak?}\z \ea[*]{\label{ex:2:148}di drom-dem we mi hia a nak neva stap}\z Since there is no other possible reason for the ungrammaticality of these sentences, we can only conclude that they are ungrammatical because they violate the PIC\is{Propositional Island Constraint (PIC)}, and that therefore the embedded sentence in \REF{ex:2:132} is a finite one.
%\originalpage{103} Very few writers on creoles have discussed perception-verb com\-plements specifically, and fewer still have even attempted to analyze them. Thus, the most that can be done at present is to point to a wide range of superficially similar structures and hope that scholars in the various regions will determine whether they show the same constraints on subject deletion and extraction as did the GC examples. Similar structures are found in other English creoles, such as \ili{Belize Creole} (BC); in \ili{French creoles}, such as HC\il{Haitian Creole} and \ili{Guyanais} (GU); and in Portuguese creoles like ST\il{S\~ao Tomense}: \ea\label{ex:2:149} \ili{\langBC}{}{}\\ i onli si di tar a flo:t ina di bailing wata \glt `He only saw the tar floating (lit., tar was floating) in the boiling water' \z \ea\label{ex:2:150} \ili{\langGU}{}{}\\ \gll mo we li ka briga\\ I see he {\ASP} fight\\ \glt `I saw him fighting (lit., he was fighting)' \z \ea\label{ex:2:151} \ili{\langHC}{}{}\\ \gll li wè tèt Boukinèt ap-gadé li\\ he see head Bouquinette \ASP-watch him \\ \glt `He saw Bouquinette's head watching him (lit., head was watching)' \z \ea\label{ex:2:152} \ili{\langST}{}{}\\ \gll e be i-ska landa\\ he see {I-\ASP} swim\\ \glt `He saw me swimming (lit., I was swimming)' \z Before leaving perception-verb complements, we should note that there are also some similar constructions which are nonfinite in English but clearly finite in at least one English creole, GC; for example, causative\is{causative} imperatives: \ea[ ]{\label{ex:2:153} mek i gowe\\ \glt `Make him leave'} \z \ea[*]{\label{ex:2:154}mek am gowe}\z \ea[ ]{\label{ex:2:155} na mek i na wok\\ \glt `Don't prevent him from working'} \z %\originalpage{104} \ea[*]{\label{ex:2:156} na mek am na wok\\} \z \noindent Note the impossibility of clefting\is{clefting} such sentences in GC:\is{movement rules!in GC} \ea[ ]{\label{ex:2:157}I prevented him from working.}\z \ea[ ]{\label{ex:2:158}It was him that I prevented 
from working.}\z \ea[ ]{\label{ex:2:159}mi mek i na wok}\z \ea[*]{\label{ex:2:160}a i mi mek na wok}\z Example \REF{ex:2:160} is so bad that it is almost unpronounceable. However, the restriction does not apply to clefting per se since the subject NP may undergo the process: \ea a mi mek i na wok\\ \glt `It was I who prevented him from working' \label{ex:2:161} \z The fact that complements of perception\is{complementation!of perception verbs|)} and causation verbs appear to constitute finite sentences in creoles suggests the possibility that there might be no such thing as a nonfinite structure in these languages. In fact, I doubt whether there is any creole extant for which such an extreme statement would be true. However, there is a good deal of evidence which suggests that at their earliest stage of develop\-ment creoles may not have had any nonfinite structures.\\\\ It should have become apparent by now that we are not going to get very far with the study of creoles -- or of child language acquisition, or of language origins -- if we allow ourselves to remain trapped within the static, antiprocessual framework which has dominated linguistics since de Saussure. The emergence of creole languages is a process; language acquisition is a process; the original growth and development of human language was assuredly a process. To apply to processes those methods expressly designed to handle static-synchronic systems is simply absurd; in order to do this, you have to pretend that a process %\originalpage{105} is a state, and ignore exactly those characteristics that render it distinc\-tive. Such a procedure is sometimes defended as an ``\isi{idealization}'', cf. \citet[Chapter 8]{ChomskyEtAl1968}, but the difference between ``idealization'' and ``convenient fiction'' seems not to be grasped by these authors. 
In fact, static generativism, the only kind we have had so far (although there is no a priori reason why there should not be a dynamic generativism), has ignored creoles, ignored language origins, and in the case of language acquisition\is{language!acquisition of} -- something it could hardly ignore since the mystery of language acquisition was what it was originally set up to explain -- it has intervened with the sole result of turning off 90 percent of the workers in the field, as we shall see in the next chapter.\\\\ \is{complementation!factive|(}If we are going to call a spade a spade and a process a process, we need to make some basic assumptions. One is that previous changes in any language inevitably leave their footprints behind them \citep{Givón1971}. Another is that diachronic changes\is{linguistic change} must be directly reflected in synchronic variation \citep{WeinreichEtAl1968,Bickerton1975,Bailey1973}. Equipped with these, we shall examine other types of complementation in creoles to determine whether the current state of affairs in perception-verb\is{complementation!of perception verbs} and causative\is{causative} constructions may at one time have been that of all complement types. First let us look at a set of sentences which might appear to contain complementizers. In GC, there are three forms that might be taken for complementizers: \textit{se}, \textit{go}, and \textit{fu/fi.} We have already glanced at the second two in connection with the realized/nonrealized com\-plement distinction. 
The first, \textit{se}, introduces complements of verbs of reporting and ``psychological'' verbs:
\ea\label{ex:2:162} i taak se i na si am \\
\glt `He said that he didn't see it'
\z
\ea\label{ex:2:163} i tel mi se i na si am\\
\glt `He told me that he didn't see it'
\z
\ea\label{ex:2:164} mi no se i na si am\\
\glt `I know that he didn't see it'
\z
%\originalpage{106}
\noindent Clearly, the complements that \textit{se} introduces are finite Ss, just as are those introduced by Eng. \textit{that}. However, it does not follow from this that \textit{se} is a complementizer. Doubt arises in the first place because \textit{se}, unlike complementizers in general, is nondeletable. A sentence like \textit{mi hia se i a kom} means `I heard (that) he was coming'; \textit{mi hia i a kom}, however, cannot be synonymous with this, but can only mean `I heard him coming'. In other cases, such as \textit{mi taak se i a kom} `I said that he was coming', deletion yields only ungrammatical sentences: *~\textit{mi taak i a kom.} Further, there is the fact that {\itshape se}-clauses cannot be generated in subject position. In English\is{English!complements|(}, \textit{that}-clauses can be generated in subject position and then undergo optional rightward movement\is{movement rules!in GC} by a rule of \isi{extraposition}; thus, \REF{ex:2:165} would be assumed to be closer to its underlying structure than \REF{ex:2:166}, derived from the same underlying structure via extraposition:
\ea\label{ex:2:165} That John has left isn't true.
\z
\ea\label{ex:2:166} It isn't true that John has left.
\z
However, a similar generalization could not be true for GC since while there is a grammatical equivalent for \REF{ex:2:166}, there is no grammatical equivalent for \REF{ex:2:165}:
\ea[*]{\label{ex:2:167} se jan gaan na tru}
\z
\ea[\hspaceThis{*}]{\label{ex:2:168} na tru se jan gaan}
\z
Not only can \textit{se}-clauses not be generated in subject position, they cannot be moved to sentence-initial position by any rule. There is no creole passive that would turn \textit{Everybody knows that he won} into \textit{That he won is known by everyone}. Clefting\is{clefting} and pseudoclefting will front simple NP objects of verbs like \textit{no} `know' but not \textit{se}-clause objects:
\ea[ ]{\label{ex:2:169} mi no dis\\
\glt `I know this'}
\z
\ea[ ]{\label{ex:2:170} a dis mi no \\
\glt `It's this that I know'}
\z
\ea[ ]{\label{ex:2:171} dis a wa mi no\\
\glt `This is what I know'}
\z
\ea[ ]{\label{ex:2:172} mi no se dem gaan\\
%\originalpage{107}
\glt `I know (that) they've left'}
\z
\ea[*]{\label{ex:2:173} a se dem gaan mi no}\z
\ea[*]{\label{ex:2:174}se dem gaan a wa mi no}\z
\noindent True, neither clefting\is{clefting} nor pseudoclefting works in English either, unless there is a head noun:
\ea[*]{\label{ex:2:175}It's that they've left that worries Bill.}\z
\ea[ ]{\label{ex:2:176} It's the fact that they've left that worries Bill.}
\z
But English can front via \isi{topicalization}:
\ea\label{ex:2:177} I knew already that they'd left.
\z
\ea\label{ex:2:178} That they'd left I knew already.
\z
\noindent GC cannot:
\ea[ ]{\label{ex:2:179} mi no aredi se dem gaan}
\z
\ea[*]{\label{ex:2:180} se dem gaan mi no aredi}
\z
Now, it is true that this datum, taken in isolation, says nothing directly about the status of \textit{se}.
It merely suggests that \textit{se}-clauses cannot be dominated by an NP node since if they were, they would presumably be eligible for movement rules\is{English!movement rules} that affect NPs and would also constitute possible expansions of subject NPs. If we assumed that \textit{se}-clauses were generated under an \=S node which in turn was immediately dominated by VP (or S$_0$, if VP is not a constituent in GC grammar), all the above data would follow.\is{complementation!factive|)} However, there are some facts that suggest the possibility of
%\originalpage{108}
an alternative analysis. In English, there are pairs of sentences such as:\is{English!complements|)}
\ea\label{ex:2:181} I'm glad \textit{that} \textit{they've} \textit{left}.
\z
\ea\label{ex:2:182} \textit{That} \textit{they've} \textit{left} makes me glad.
\z
These sentences are perhaps slightly less than synonymous, and they certainly would not be regarded as transformationally related; since we have already established that \textit{se}-clauses cannot be base-generated in subject position, it will come as no surprise that the GC equivalent of \REF{ex:2:181} is grammatical, while the GC equivalent of \REF{ex:2:182} is ungrammatical:
\ea[ ]{\label{ex:2:183}mi glad se dem gaan}\z
\ea[*]{\label{ex:2:184}se dem gaan mek mi glad}\z
\noindent Yet \REF{ex:2:185} is grammatical:
\ea\label{ex:2:185} dem gaan mek mi glad
\glt lit., `They've left \textsc{cause} I glad'
\z
Example \REF{ex:2:185} cannot be derived from \REF{ex:2:184} by \textit{se}-deletion since, as we saw, \textit{se} does not delete. It could only be derived by embedding S under \textit{the} subject NP node. Again, these facts, taken in isolation, might not seem to con\-stitute evidence against the status of \textit{se} as a complementizer.
Since we have already suggested that \textit{se}-clauses could be introduced under \=S not dominated by NP, all we need in order to accommodate \REF{ex:2:185} is a rule that will expand NP as S, but not as \=S. However, the picture would change somewhat if we could show one or both of two things: %\setcounter{itemize}{0} \begin{enumerate} \item That GC required a rule NP → \=S. \label{GCrequirementcondition} \item That \textit{se}-clauses could not be generated under \=S. \label{GCrequirementcondition2} \end{enumerate} In order to examine these possibilities, let us look at another quasi-% %\originalpage{109} complementizer, \textit{fi/fu} (henceforth referred to as \textit{fi}, for the sake of convenience, since \textit{fi} is the more basilectal, if nowadays rarer, form). In the GC lexicon, \textit{fi} must be entered both as a preposition and as a modal auxiliary of \isi{obligation}: \ea\label{ex:2:186} mi du am fi meri, na fi ayu\\ \glt `I did it for Mary, not for you (pl.)' \z \ea\label{ex:2:187} mi fi go tumara\\ \glt `I ought to go tomorrow' \z However, there are also sentences such as: \ea\label{ex:2:188} mi waan fi go\\ \glt `I want to go' \z \ea\label{ex:2:189} mi waan i fi go\\ \glt `I want him to go' \z In \REF{ex:2:188}, \textit{fi} looks like a complementizer, more or less the equivalent of Eng. \textit{to}. The likelihood that, unlike \textit{se}, \textit{fi} is a genuine complementizer is increased by the fact that \textit{fi} in \REF{ex:2:188} will delete without change of meaning: \ea\label{ex:2:190} mi waan go \\ \glt `I want to go' \z Unfortunately, \REF{ex:2:189} seems at first sight to suggest a quite different analysis. 
Complementizers normally precede the sentences they introduce, but \REF{ex:2:191} is ungrammatical:
\ea[*]{\label{ex:2:191} mi waan fi i go}
\z
Complementizers may follow subjects of embedded sentences if raising (or whatever you believe in if you don't believe in raising) has taken place, as in \textit{I want him to go}; however, as with perception-verb complements\is{complementation!of perception verbs}, a morpheme-for-morpheme translation of such sentences is ungrammatical:
%\originalpage{110}
\ea[*]{\label{ex:2:192} mi waan am fi go}
\z
However, \textit{fi} in its \REF{ex:2:189} location is nondeletable:
\ea[*]{\label{ex:2:193}mi waan i go}
\z
This contrasts with the status of \textit{fi} in \REF{ex:2:188}, and suggests that while \textit{fi} in \REF{ex:2:188} is a complementizer, \textit{fi} in \REF{ex:2:189} is a modal auxiliary. There would seem to be two possible analyses of \REF{ex:2:188} and \REF{ex:2:189}. In the first, \textit{fi} is really a modal auxiliary in both cases -- in \REF{ex:2:189} for the reasons already given, and in \REF{ex:2:188} because \REF{ex:2:188} is simply derived from \REF{ex:2:194} by equi-deletion (obligatory since \REF{ex:2:194} is ungrammatical without it):
\ea[*]{\label{ex:2:194}mi waan mi fi go}\z
This first solution would be tempting but for \REF{ex:2:190}: modal auxiliaries do not normally delete without loss of meaning. We might then wish to choose the second analysis, which would derive \REF{ex:2:189} from \REF{ex:2:195} via obligatory complementizer deletion since without such deletion \REF{ex:2:195} is simply ungrammatical:
\ea[*]{\label{ex:2:195}mi waan fi i fi go}\z
However, there still lurks in the background the possibility that, despite \REF{ex:2:190} and \REF{ex:2:195}, the prepositional role of \textit{fi} might somehow be involved (cf. the claim by \citet{KoopmanEtAl1981} that HC\il{Haitian Creole} \textit{pu}, a close relative of \textit{fi}, ``can introduce final complements, either infinitival \ldots~or tensed'').
To choose out of these three possibilities -- complementizer, modal verb, preposition -- we need to examine sentences in which %\originalpage{111} constituents are extracted from \textit{fi}-clauses. Let us look first of all at a sentence such as: \ea\label{ex:2:196} Where did he want to go? \z This has the GC equivalent: \ea\label{ex:2:197} wisaid i waan fi go? \z We should note also that sentences like \REF{ex:2:198} have exact English equivalents: \ea\label{ex:2:198} wisaid i waan mi fi go? \glt `Where did he want me to go?' \z In all three sentences \REF{ex:2:196}--\REF{ex:2:198}, a constituent, WH-place, is moved out\is{WH-movement|(} of an embedded S -- in the case of \REF{ex:2:198} presumably a tensed S.\is{movement rules!in GC|(} If \textit{fi} in \REF{ex:2:198} is a modal verb, and no complementizer or preposition has been deleted, \REF{ex:2:198} must have an underlying structure (ignoring irrelevant detail) something like \REF{ex:2:199}: \ea\label{ex:2:199} \resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] % Levels 1 to 3 \node at (0,0) (1S) {\=S}; \node [below left=\baselineskip and 6mm of 1S] (2COMP) {COMP}; \node [below right=\baselineskip and 24mm of 1S] (2S) {S}; \node [below left=\baselineskip and 12mm of 2S] (3NP) {NP}; \node [below=\baselineskip of 2S] (3V) {V}; \node [below right=\baselineskip and 24mm of 2S] (3S) {S}; % Levels 4 and 5 \node [below=\baselineskip of 3NP] (4i) {\strut i}; \node [below=\baselineskip of 3V] (4waan) {\strut waan}; \node [below left=\baselineskip and 6mm of 3S] (4NP) {\strut NP}; \node [below=\baselineskip of 3S] (4Aux) {\strut Aux}; \node [below right=\baselineskip and 6mm of 3S] (4V) {\strut V}; \node [below right=\baselineskip and 24mm of 3S] (4NP2) {\strut NP}; \node [below=\baselineskip of 4NP] (5mi) {\strut mi}; \node [below=\baselineskip of 4Aux] (5fi) {\strut fi}; \node [below=\baselineskip of 4V] (5go) {\strut go}; \node [isosceles triangle, isosceles triangle apex angle=100, shape border 
rotate=90, minimum height=1.5em, draw, below=3mm of 4NP2.center] (5polygon) {}; % \path [on grid] let \p1 = (4NP2), \p2 = (5go.base) in node at (\x1,\y2) (6WH) {WH-place}; \node [below=\baselineskip of 4NP2] (6WH) {\strut WH-place}; % Connections \draw (node cs:name=1S) -- (node cs:name=2COMP); \draw (node cs:name=1S) -- (node cs:name=2S); \draw (node cs:name=2S) -- (node cs:name=3NP); \draw (node cs:name=2S) -- (node cs:name=3V); \draw (node cs:name=2S) -- (node cs:name=3S); \draw (node cs:name=3NP) -- (node cs:name=4i); \draw (node cs:name=3V) -- (node cs:name=4waan); \draw (node cs:name=3S) -- (node cs:name=4NP); \draw (node cs:name=3S) -- (node cs:name=4Aux); \draw (node cs:name=3S) -- (node cs:name=4V); \draw (node cs:name=3S) -- (node cs:name=4NP2); \draw (node cs:name=4NP) -- (node cs:name=5mi); \draw (node cs:name=4Aux) -- (node cs:name=5fi); \draw (node cs:name=4V) -- (node cs:name=5go); \end{tikzpicture} } } \z %\originalpage{112} \isi{WH-movement} would then move WH-place under the COMP node. However, such movement would violate the PIC\is{Propositional Island Constraint (PIC)}, since it moves WH out of a tensed S. Similarly, if a deleted prepositional \textit{fi} introduced \textit{mi fi go} in \REF{ex:2:198}, the latter sentence would have an underlying structure something like \REF{ex:2:200}. Example \REF{ex:2:200} would involve a rule which would expand PP as either P NP or P S; just such a rule is proposed for HC\il{Haitian Creole} by \citet{KoopmanEtAl1981}, on rather similar evidence, involving prepositional \textit{pu} and its tensed complements. However, movement of the WH-constituent from the right-hand NP node to COMP would again involve violation of the PIC.\is{Propositional Island Constraint (PIC)} If, on the other hand, \textit{fi} is a complementizer, no violation need ensue. In this case, the underlying structure of \REF{ex:2:198} would be something like \REF{ex:2:201}. 
\ea\label{ex:2:200} \resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] % Levels 1 to 3 \node at (0,0) (1S) {\=S}; \node [below left=\baselineskip and 12mm of 1S.south] (2COMP) {COMP}; \node [below right=\baselineskip and 6mm of 1S.south] (2S) {S}; \node [below left=\baselineskip and 12mm of 2S] (3NP) {NP}; \node [below=\baselineskip of 2S] (3V) {V}; \node [below right=\baselineskip and 12mm of 2S] (3PP) {PP}; % Levels 4 and 5 \node [below=\baselineskip of 3NP] (4i) {\strut i}; \node [below=\baselineskip of 3V] (4waan) {\strut waan}; \node [below left=4\baselineskip and 3mm of 3PP] (4P) {\strut P}; \node [below right=4\baselineskip and 24mm of 3PP] (4S) {\strut S}; \node [below=\baselineskip of 4P] (5fi) {\strut fi}; \node [below left=4\baselineskip and 18mm of 4S] (5NP) {\strut NP}; \node [below left=4\baselineskip and 2mm of 4S] (5Aux) {\strut Aux}; \node [below right=4\baselineskip and 2mm of 4S] (5V) {\strut V}; \node [below right=4\baselineskip and 18mm of 4S] (5NP2) {\strut NP}; \node [below=\baselineskip of 5NP] (6mi) {\strut mi}; \node [below=\baselineskip of 5Aux] (6fi) {\strut fi}; \node [below=\baselineskip of 5V] (6go) {\strut go}; \node [isosceles triangle, isosceles triangle apex angle=100, shape border rotate=90, minimum height=1.5em, draw, below=3mm of 5NP2.center] (6polygon) {}; % \path [on grid] let \p1 = (5NP2), \p2 = (6go.base) in node at (\x1,\y2) (7WH) {WH-place}; \node [below=\baselineskip of 5NP2] (7WH) {\strut WH-place}; % Connections \draw (node cs:name=1S, anchor=south) -- (node cs:name=2COMP, anchor=north); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2S, anchor=north); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3NP); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3V); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3PP); \draw (node cs:name=3NP) -- (node cs:name=4i); \draw (node cs:name=3V) -- (node cs:name=4waan); \draw (3PP.south) -- (4P.north); \draw (3PP.south) -- (4S.north); 
\draw (4P.south) -- (5fi.north); \draw (4S.south) -- (node cs:name=5NP); \draw (4S.south) -- (node cs:name=5Aux); \draw (4S.south) -- (node cs:name=5V); \draw (4S.south) -- (node cs:name=5NP2); \draw (node cs:name=5NP) -- (node cs:name=6mi); \draw (node cs:name=5Aux) -- (node cs:name=6fi); \draw (node cs:name=5V) -- (node cs:name=6go); % \draw [red] (6mi.base) -- (7WH.base); % This line checks whether basline alignment is correct. \end{tikzpicture} }} \z % CREOLE 113 \ea\label{ex:2:201} \resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] % Levels 1 to 3 \node at (0,0) (1S) {\=S}; \node [below left=\baselineskip and 6mm of 1S] (2COMP) {\strut COMP}; \node [below right=\baselineskip and 24mm of 1S] (2S) {\strut S}; \node [below left=\baselineskip and 12mm of 2S] (3NP) {\strut NP}; \node [below=\baselineskip of 2S] (3V) {\strut V}; \node [below right=\baselineskip and 24mm of 2S] (3S) {\strut \=S}; \node [below=\baselineskip of 2COMP] (3space) {( )}; % Levels 4 and 5 \node [below=\baselineskip of 3NP] (4i) {\strut i}; \node [below=\baselineskip of 3V] (4waan) {\strut waan}; \node [below left=\baselineskip and 3mm of 3S] (4comp) {\strut COMP}; \node [below right=\baselineskip and 12mm of 3S] (4S) {\strut S}; \node [below=\baselineskip of 4comp] (5fi) {(fi)}; \node [below left=\baselineskip and 6mm of 4S] (5NP) {\strut NP}; \node [below=\baselineskip of 4S] (5Aux) {\strut Aux}; \node [below right=\baselineskip and 6mm of 4S] (5V) {\strut V}; \node [below right=\baselineskip and 24mm of 4S] (5NP2) {\strut NP}; \node [below=\baselineskip of 5NP] (6mi) {\strut mi}; \node [below=\baselineskip of 5Aux] (6fi) {\strut fi}; \node [below=\baselineskip of 5V] (6go) {\strut go}; \node [isosceles triangle, isosceles triangle apex angle=100, shape border rotate=90, minimum height=1.5em, draw, below=3mm of 5NP2.center] (6polygon) {}; % \path [on grid] let \p1 = (5NP2), \p2 = (6go.base) in node at (\x1,\y2) (6WH) {WH-place}; \node [below=\baselineskip of 5NP2] 
(6WH) {\strut WH-place}; % Connections \draw (node cs:name=1S, anchor=south) -- (node cs:name=2COMP); \draw (node cs:name=1S, anchor=south) -- (node cs:name=2S); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3NP); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3V); \draw (node cs:name=2S, anchor=south) -- (node cs:name=3S); \draw (node cs:name=3NP) -- (node cs:name=4i); \draw (node cs:name=3V) -- (node cs:name=4waan); \draw (node cs:name=3S, anchor=south) -- (node cs:name=4comp); \draw (node cs:name=3S, anchor=south) -- (node cs:name=4S); \draw (node cs:name=4S, anchor=south) -- (node cs:name=5NP); \draw (node cs:name=4S, anchor=south) -- (node cs:name=5Aux); \draw (node cs:name=4S, anchor=south) -- (node cs:name=5V); \draw (node cs:name=4S, anchor=south) -- (node cs:name=5NP2); \draw (node cs:name=5NP) -- (node cs:name=6mi); \draw (node cs:name=5Aux) -- (node cs:name=6fi); \draw (node cs:name=5V) -- (node cs:name=6go); \draw (node cs:name=2COMP) -- (node cs:name=3space); \draw (node cs:name=4comp) -- (node cs:name=5fi); \draw (6WH.south) edge [ dashed, -{Stealth[]}, bend left=60] (5fi.south); \draw (5fi.south west) edge [ dashed, -{Stealth[]}, bend left=50] (3space.south); \end{tikzpicture}}} \z Complementizer deletion, optional in \REF{ex:2:188}, is, as we have seen, obliga\-tory in \REF{ex:2:189} and \REF{ex:2:198}. However, once \textit{fi} is deleted, we have an empty COMP node (the one dominating the \textit{fi} in parentheses in \REF{ex:2:201}). \citet{Chomsky1977} has argued that WH-movement, being a cyclic rule, can move constituents from COMP to COMP, thus forming a ``bridge'' over the barrier of the PIC. In \REF{ex:2:201} -- but not in \REF{ex:2:199} or \REF{ex:2:200} -- there is a lower COMP node to which WH can be moved on the first cycle, allowing the second cycle to move it to the higher COMP node, as indicated by the dotted line in \REF{ex:2:201}. 
Thus, in contrast with \REF{ex:2:199} and \REF{ex:2:200}, WH-movement in \REF{ex:2:201} does not violate the PIC.\is{Propositional Island Constraint (PIC)} If the foregoing analysis is correct, GC does contain an \=S structure in some complements. However, there was no motivation in \REF{ex:2:201} for assuming \=S to be dominated by NP, so we have yet to prove condi\-tion (\ref{GCrequirementcondition}) (Page~\pageref{GCrequirementcondition}). In order to prove condition (\ref{GCrequirementcondition}), we need another set of \textit{fi}-sentences. Unlike perception-verb complements\is{complementation!of perception verbs}, which cannot have zero subjects (\textit{mi hia dem a sing} versus *~\textit{mi hia a sing}), \textit{fi}-clause complements can:
\ea\label{ex:2:202} yu gafi kraas di riba fi miit tong
\glt `You have to cross the river in order to get to town'
\z
%\originalpage{114}
\ea\label{ex:2:203} na bin iizi fi kech taiga
\glt `It wasn't easy to catch a jaguar'
\z
In both cases, the \textit{fi}-clause can be moved:
\ea\label{ex:2:204} fi miit tong yu gafi kraas di riba
\glt `To get to town you have to cross the river'
\z
\ea\label{ex:2:205} fi kech taiga na bin iizi
\glt `To catch a jaguar wasn't easy'
\z
However, since both \citet{Woolford1979} and \citet{KoopmanEtAl1981} give arguments that \ili{Tok Pisin} and Haitian Creole, respectively, have homophonous pairs of complementizers and prepositions (TP \textit{long}, HC\il{Haitian Creole} \textit{pu}), we cannot automatically assume that \textit{fi} is a complementizer in \REF{ex:2:202}--\REF{ex:2:205} just because it was in \REF{ex:2:198}. To show this, we have to question the NP in \REF{ex:2:203}:
\ea\label{ex:2:206} a wa na bin iizi fi kech?
\glt `What was it that wasn't easy to catch?'
\z If \textit{fi} were a preposition in \REF{ex:2:203}, \REF{ex:2:203} and \REF{ex:2:206} would have the underlying structure of \REF{ex:2:200}; but we saw in our analysis of \REF{ex:2:200} that if WH-movement were applied to that structure, a violation of the PIC\is{Propositional Island Constraint (PIC)} would result. To avoid such violation, \textit{fi} would have to be a complementizer, and \REF{ex:2:203} and \REF{ex:2:206} would have to have the underlying structure of \REF{ex:2:201}. Since \REF{ex:2:206} is grammatical, \textit{fi} must be a complementizer. If this is the case, \REF{ex:2:205} must contain an \=S directly dominated by NP, as in \REF{ex:2:207}: %\originalpage{115} \ea\label{ex:2:207} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,0) (1S) {S}; \node [below right=\baselineskip and 24mm of 1S] (2V) {V\textsubscript{adj}}; \node [below=\baselineskip of 1S] (2Aux) {Aux}; \node [below left=\baselineskip and 24mm of 1S] (2NP) {NP}; \node [below=\baselineskip of 2NP] (3S) {\strut\=S}; \node [below left=\baselineskip and .25mm of 2Aux] (3na) {\strut na}; \node [below right=\baselineskip and .25mm of 2Aux] (3bin) {\strut bin}; \node [below=\baselineskip of 2V] (3iizi) {iizi}; \node [below left=\baselineskip and 12mm of 3S] (4comp) {COMP}; \node [below right=\baselineskip and 6mm of 3S] (4S) {S}; \node [below=\baselineskip of 4comp] (5fi) {\strut fi}; \node [below left=\baselineskip and 6mm of 4S] (5NP) {\strut NP}; \node [below=\baselineskip of 4S] (5V) {\strut V}; \node [below right=\baselineskip and 6mm of 4S] (5NP2) {\strut NP}; \node [below=\baselineskip of 5NP] (6emptyset) {\strut $\varnothing$}; %from amssymb \node [below=\baselineskip of 5V] (6kech) {\strut kech}; \node [below=\baselineskip of 5NP2] (6taiga) {\strut taiga}; \draw (2NP.north) -- (1S.south) -- (2Aux.north); \draw (1S.south) -- (2V.north); \draw (3na.north) -- (2Aux.south) -- (3bin.north); \draw (2NP.south) -- (3S); \draw (2V.south) -- (3iizi); 
\draw (4comp.north) -- (3S.south) -- (4S.north); \draw (4comp.south) -- (5fi); \draw (5NP.north) -- (4S.south) -- (5NP2.north); \draw (4S.south) -- (5V.north); \draw (5NP.south) -- (6emptyset.north); \draw (5NP2.south) -- (6taiga.north); \draw (5V.south) -- (6kech.north); \end{tikzpicture} } % } \z \noindent Thus, the exclusion of \textit{se}-clauses from subject position, which we noted in discussing \REF{ex:2:184} above, cannot be due to the absence of a rule rewriting NP as \=S (COMP S). The ungrammaticality of \REF{ex:2:184} must, therefore, result from the fact that \textit{se} is not a complementizer, and consequently cannot be inserted in structures such as \REF{ex:2:207}. We can now turn to condition (\ref{GCrequirementcondition2}) (Page~\pageref{GCrequirementcondition2}). The fact that \textit{se} cannot be a complementizer, suggested by the foregoing analysis, would of course also make it impossible for \textit{se}-clauses to be generated in \=S complements. But let us assume for the moment that \textit{se} is a complementizer. If this were so, a sentence such as \REF{ex:2:208} below would have a structure similar to that of \REF{ex:2:201}: \ea\label{ex:2:208} dem taak se i de a tong \glt `They said that he was in town' \z In other words, the complement \=S would contain a COMP node which would permit COMP-to-COMP WH-movement\is{WH-movement|)} and hence permit questioning of the rightmost NP. We would then have to predict that \REF{ex:2:209} would be grammatical: %\originalpage{116} \ea[*]{\label{ex:2:209} wisaid dem taak se i de? \\ \glt `Where did they say he was?'} \z Unfortunately, it is not. We can therefore only conclude that \textit{se} is something other than a complementizer. The third quasi-complementizer, \textit{go}, is even more restricted than \textit{se}. 
Like \textit{se}, but unlike \textit{fi}, it cannot be preposed: \ea[ ]{\label{ex:2:210} i tek i gon fi shuut taiga \glt `He took his gun to shoot a jaguar (but did not necessarily do so)'} \z \ea[ ]{\label{ex:2:211} i tek i gon go shuut taiga \glt `He took his gun to shoot a jaguar (and did shoot one)'} \z \ea[ ]{\label{ex:2:212} fi shuut taiga i a tek i gon \glt `For shooting jaguars he used to take his gun'\footnotemark} \z \footnotetext{It is interesting to note that while \textit{fi}-clauses in complement position can refer to one-time actions (as in \REF{ex:2:210}), and in consequence the higher verb can take punctual marking, preposed \textit{fi}-clauses can refer only to habitual actions, and in consequence the higher verb must take nonpunctual marking. At the moment I have no idea why this is so.} \ea[*]{\label{ex:2:213} go shuut taiga i a tek i gon} \z Unlike both \textit{fi} and \textit{se}, \textit{go} cannot occur with adjectival verbs: \ea[ ]{\label{ex:2:214} mi glad fi sii yu\\ \glt `I'm glad to see you'} \z \ea[ ]{\label{ex:2:215} mi glad se yu kom \\ \glt `I'm glad you came'} \z \ea[*]{\label{ex:2:216} mi glad go sii yu} \z As with \textit{se} (but not \textit{fi}), complement constituents cannot be extracted: \ea[ ]{\label{ex:2:217} i gaan a tong go sii dakta\\ \glt `He's gone to town to see the doctor'} \z \ea[*]{\label{ex:2:218} a hu i gaan a tong go sii?\\ \glt `Who did he go to town to see?'} \z \ea[*]{\label{ex:2:219} di dakta we i gaan a tong go sii de bad an aal \\ \glt `The doctor he went to town to see is sick too'} \z %\originalpage{117} We can assume, as with \textit{se}, that extraction is blocked because COMP-to-COMP movement is impossible; therefore, \textit{go} is not a complementizer either. 
The claim that \textit{se} is a serial verb\is{verb serialization|(} in Krio has been argued strongly by \citet{Larimore1976}, but since there are some minor differences between the grammars of Krio and GC, not all her arguments apply to the latter language. I shall assume without further argument that \textit{se} and \textit{go} are both serial verbs. If this assumption is correct, then \textit{se} and \textit{go}, if not \textit{fi}, really belong with the verbs that we will discuss in the next section on serialization. But because \textit{fi} may be a complementizer synchronically, it by no means necessarily follows that \textit{fi} always was a complementizer. \citet[Example 9]{Washabaugh1979} cites the following sentence:\footnote{Washabaugh's analysis of \textit{fi} differs radically from that made in the present chapter, although there is no reason to suppose that the facts of PIC differ significantly from those of GC. However, since I have dealt with that analysis in \citet{Bickerton1980}, I will not repeat my criticisms of it here.}\is{movement rules!in GC|)}
\ea\label{ex:2:220} ah waan di rien kom fi ah don go huoam
\glt `I want the rain to come so that I won't have to go home'
\z
This sentence, from \ili{Providence Island Creole} (PIC), a variety similar in many ways to GC, is of a type claimed by Washabaugh to be ``rare in most contemporary varieties of [Caribbean English creoles], but \ldots~frequent enough in older texts.'' Washabaugh does not analyze this sentence, so we do not know whether the following sentence would be rejected by Providence Islanders:\is{Guyanese Creole!complementation}
\ea[?]{\label{ex:2:221} wisaid ah waan di rien kom fi ah don go?}
\z
It would almost certainly be rejected by speakers of other Caribbean English creoles.
The most likely structure of \REF{ex:2:220} would be one similar to that of \REF{ex:2:185}, reproduced here for convenience: \begin{exe} \exr{ex:2:185} \textit{dem gaan mek mi glad} \end{exe} \noindent That structure is illustrated in \REF{ex:2:222}. Here, \textit{mek} functions rather like the abstract verb \textsc{cause} once posited by generative semanticists. In \REF{ex:2:220}, \textit{fi} would have a meaning something like \textsc{should cause}, with \textit{di rien kom} as its subject and \textit{ah don go huoam} as its object. On present evidence we cannot determine for sure whether \textit{fi} was once exclusively a serial verb. However, it seems reasonable to suppose that in GC and other creoles, serial verbs may be turning into complementizers. Such a process certainly exists in some West African languages \citep{Lord1976}\is{African languages}, and we shall shortly examine evidence from \ili{Sranan} which indicates that serial verbs there may be undergoing a rather similar kind of reanalysis. 
%\originalpage{118} \ea\label{ex:2:222} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,0) (1S) {S}; \node [below right=\baselineskip and 24mm of 1S] (2NP2) {NP}; \node [below left=\baselineskip and 24mm of 1S] (2NP) {NP}; \node [below=\baselineskip of 1S] (2V) {V}; \node [below=\baselineskip of 2NP] (3S) {S}; \node [below=\baselineskip of 2NP2] (3S2) {S}; \node [below=\baselineskip of 2V] (3mek) {mek}; \node [below left=\baselineskip and 6mm of 3S] (4NP) {NP}; \node [below right=\baselineskip and 6mm of 3S] (4V) {V}; \node [below left=\baselineskip and 6mm of 3S2] (4NP2) {NP}; \node [below right=\baselineskip and 6mm of 3S2] (4V2) {V}; \node [below=\baselineskip of 4NP] (5dem) {dem}; \node [below=\baselineskip of 4V] (5gaan) {gaan}; \node [below=\baselineskip of 4NP2] (5mi) {mi}; \node [below=\baselineskip of 4V2] (5glad) {glad}; \draw (2NP.north) -- (1S.south) -- (2V.north); \draw (1S.south) -- (2NP2.north); \draw (3S) -- (2NP); \draw (3S2) -- (2NP2); \draw (2V) -- (3mek); \draw (4NP) -- (3S.south) -- (4V); \draw (4NP2) -- (3S2.south) -- (4V2); \draw (5dem) -- (4NP); \draw (5gaan) -- (4V); \draw (5mi) -- (4NP2); \draw (5glad) -- (4V2); \end{tikzpicture} %} } \z The boundaries of serial verb constructions are not easy to define, nor is it easy (or perhaps even desirable) to distinguish them from other superficially similar constructions such as ``verb chains'' \citep{Forman1972}. Here, I shall simply concern myself with those serial constructions which are equivalent to multi-case sentences, i.e., which mark oblique cases (dative, instrumental, etc.) with verbs rather than with prepositions or with other types of formal devices. 
Examples of such structures would include: \ea\label{ex:2:223} Directionals:\is{directionals}\\ \ili{\langSR}{}{}\\ \gll a \emph{waka} \emph{go} a wosu\\ he walk go to house \\ \glt `He walked home' \z %\originalpage{119} \ea\label{ex:2:224} Benefactives:\\ \ili{\langGU}{}{}\\ \gll li \emph{pote} sa \emph{bay} mo\\ he bring that give me\\ \glt `He brought that for me' \z \ea\label{ex:2:225} Datives:\\ \ili{\langST}{}{}\\ \gll \emph{e} \emph{fa} \emph{da} ine \\ he talk give them \\ \glt `He talked to them' \z \ea\label{ex:2:226} Instrumentals:\\ \ili{Djuka}{}{}\\ \gll a \emph{teke} nefi \emph{koti} a meti \\ he take knife cut the meat\\ \glt `He cut the meat with a knife' \z Sentences such as \REF{ex:2:223}--\REF{ex:2:226} are by no means always the only ways in which those creoles that have them can express case relations. Alongside \REF{ex:2:226}, \ili{Djuka} has \REF{ex:2:227}: \ea\label{ex:2:227} \gll a koti a meti anga nefi\\ he cut the meat with knife\\ \glt `He cut the meat with a knife' \z According to \citet{Huttar1975}, sentences like \REF{ex:2:227} occur more fre\-quently in speech than sentences like \REF{ex:2:226}. Which of such pairs represents the most conservative creole level? Serial verbs form a more marked means of expressing case rela\-tions than do prepositions. It is, therefore, relatively unlikely that a language which already had prepositions to mark case would develop serial verbs (except in certain circumstances which could hardly apply to creoles and which will be discussed later). On the other hand, it is relatively likely that a language which originally had only serial verbs as a case-marking device would subsequently develop prepositions, either by a type of reanalysis already attested for West African languages \citep{Lord1976}\is{African languages}, or by direct borrowing from a high-prestige language with which it was in contact (probably the case in any creole that has undergone even a relatively small amount of decreolization). 
We are therefore justified in assuming that serial verb constructions represent extremely conservative varieties of those creoles in which they are found.
%\originalpage{120}
Serial verbs are usually interpreted as the result of African substratum influence\is{substratum influence|(} on creoles, but creolists seldom if ever ask how those West African languages which have serial verbs (by no means all of them) happen to have come by them. Despite lip service to linguistic equality, a dual standard is still applied to creoles: if a creole has a feature, it must have borrowed it; but if a noncreole language has the same feature, it is assumed to be an independent innovation -- at least in the absence of clear evidence to the contrary. In fact, I would claim that creoles and West African languages invented verb serialization independently, but for slightly different reasons. Wherever serial verbs are found outside creoles, a change in \isi{word order} is always involved (see \citet{LiEtAl1974} for Chinese; \citet{Givón1974} and \citet{Hyman1974} for West African languages\is{African languages}; \citet{Bradshaw1979} for Austronesian languages in New Guinea). Sometimes the change may be contact-influenced, as in New Guinea; sometimes it may come from purely language-internal developments, as with the \ili{Kwa languages} of West Africa. Precise explanations of why SOV-to-SVO (West Africa) or SVO-to-SOV (New Guinea) changes involve serialization are still controversial. \citet{Givón1974} suggests that serialization results from the decay of postpositional case marking, an explanation challenged by \citet{Hyman1974}; \citet{Bradshaw1979} suggests that serialization eases parsing problems in a period of transition by generating sentences that can be parsed as either SVO or SOV without any semantic confusion (we shall return to this point at the end of \chapref{ch:4}).
However, there seems to be no serious ground for doubting that serialization and word-order change are involved with one another in some kind of way. Word-order change cannot have been a factor in creolization since most of the languages in contact, as well as the resultant creoles, have been SVO. However, the problem that word-order change creates -- %\originalpage{121} that of unambiguously identifying case roles while the change is under way -- must have been a problem in creolization too, if we assume what must almost certainly have been the case in at least some pidgins, i.e., that the latter did not contain (or at least did not contain a full range of) prepositions. Without prepositions and without inflectional morphology, how else could oblique cases be distinguished if not by serial verbs? More specific doubts about the viability of substratal accounts, as well as the seeds of an explanation as to why creoles differ so much in the extent to which they exhibit serialization, are suggested by the following data on the Surinam creoles (\ili{Djuka}, \ili{Sranan}, \ili{Saramaccan}). In these languages, instrumental constructions (as expressed via equiva\-lents of `He cut the meat with a knife') have the following range: \ea\label{ex:2:228} \ili{\langDJ}{}{}\\ a koti a meti anga nefi \z \ea\label{ex:2:229} \ili{\langSR}{}{}\\ a koti a meti nanga nefi \z \ea\label{ex:2:230} \ili{\langSA}{}{}\\ a koti di gbamba ku faka \z \ea\label{ex:2:231} \ili{\langDJ}{}{}\\ a teke nefi koti a meti \z \ea\label{ex:2:232} \ili{\langSR}{}{}\\ a teki nefi koti a meti \z \ea\label{ex:2:233} \ili{\langSA}{}{}\\ \textnormal{?} a tei faka koti di gbamba \z \noindent A sentence similar to \REF{ex:2:233} -- \textit{a tei di pau naki en}, lit., `He took the stick hit it', i.e., `He hit it with the stick' -- is cited in \citet{GrimesEtAl1970} but footnoted to the effect that the authors have since become highly doubtful as to its status in SA. 
In \citet{Glock1972}, which deals explicitly with case phenomena, there is no mention of sentences like \REF{ex:2:233}, although sentences like \REF{ex:2:230}, as well as serialization of other cases, are cited and discussed; nor is SA credited with \textit{tei}-serialization in \citet{JansenEtAl1978}, although, again, there is no explicit discussion. It is thus impossible to tell whether Saramaccan has this kind of serialization, although the present balance of evidence seems to be against it. Saramaccan is well known as being, among the three Surinam creoles (or, for that matter, among all the \ili{Caribbean creoles}), the one which best preserves African lexical and phonological characteristics %\originalpage{122} (note, in the preceding examples, \textit{gbamba} `meat', a word of presumably African origin which preserves the coarticulated and prenasalized stops characteristic of many West African\is{African languages} languages but of no other creoles, compared with DJ and SR\il{Sranan|(} \textit{meti} from Eng. \textit{meat}). This being so, and if serial constructions also reflect African influence, one would expect to find that SA had more of such constructions than DJ and SR, rather than the reverse. But while there is no explanation for the pattern in terms of substrate influence\is{substratum influence|)}, an explanation can be provided in terms of interaction between the antecedent pidgin and its superstrate. It seems reasonable to assume that if a creole can acquire prepositions from its antecedent pidgin\is{pidgin-creole cycle|(} (as HCE did), it will not need to develop serial verbs for case marking. The only question is, why should some antecedent pidgins acquire prepositions while others do not? 
Clearly, one factor is population balance, while another factor is the type of social structure; between them, these will determine the accessibility of the superstrate language and hence help to deter\-mine how many superstrate items the pidgin will absorb\is{superstrate influence}. However, these are by no means the only factors involved. Other things, including social conditions, being equal, structural differences between super\-strate features may determine whether a pidgin will or will not absorb these features. \is{English!phonology|(}For a superstrate feature to be accessible to a pidgin, that feature must be more or less unambiguous with respect to meaning, more or less free from mutation with respect to phonological structure, and as close as possible to the canonical form of CV(CV). The superstrate prepositions of instrumentality available to the three languages were: for SR and DJ, Eng. \textit{with}; for SA, Eng. \textit{with} and Pg. \textit{com} -- phonetically, [k\~o] or [k\~u] in many contemporary dialects. The former, with its initial semivowel (a marked segment) and final labiodental fricative (an extremely marked segment), is remote from the canonical pattern; the latter, in the form in which it is perhaps most frequently realized, fits it exactly. The relative difficulty of acquiring \textit{com} and \textit{with} may best be pictured if the reader imagines that his linguistic competence %\originalpage{123} is limited to African languages\is{African languages} and then attempts to segment the following synonymous utterances: \ea\label{ex:2:234} vai com aquele homem \z \ea\label{ex:2:235} go with that man \z Word boundaries in the Portuguese utterance are fairly un\-ambiguously marked; in the English one\is{English!phonology|)}, it would be hard to determine where the verb ended and the preposition began, or where the preposi\-tion ended and the demonstrative began -- or even to be sure that there was a preposition there at all. 
It is hardly surprising, therefore, that neither Sranan nor Djuka could absorb \textit{with}, but had to adopt a serializing device in order to express instrumentality, and that only later did they develop their own preposition, \textit{(n)anga}, of uncertain origin; whereas, on the other hand, Saramaccan, like all \ili{Portuguese creoles}, easily acquired \textit{ku} and thus did not need a serial construction for instrumentality. The underlying structure of serial-verb constructions has been a subject of some controversy (see \citet{Williams1971,Williams1975}, \citet{Roberts1975}, \citet{Voorhoeve1975}, \citet{JansenEtAl1978} for some differing views on this subject). I suspect that varying analyses are due at least in part to inherent conflicts in the data, and that these conflicts, in turn, are due to ongoing developments in creoles which have the result of complicating the original creole syntax and intro\-ducing categories which formed no part of the original grammar. If we are to understand what creoles are, and make comparisons between particular creoles without allowing ourselves to be misled by sub\-sequent and irrelevant accretions, we must -- to cite the words of \citeauthor{KoopmanEtAl1981}, on which I could not hope to improve -- ``restrict the notion of syntactic expansion to changes leading to the acquisition of features that are part of core grammar up to the time of creolization and to consider the emergence of other features as regular cases of syntactic change''\is{linguistic change} \citep[218]{KoopmanEtAl1981}. \citeauthor{KoopmanEtAl1981} assume, as I do, that pidgins begin with %\originalpage{124} nouns, verbs, and very little else. They assume that VP is a pidgin category, but do not defend it as such. 
For reasons given in the discussion of GC movement\is{movement rules!in Sranan|(} rules above, I do not think VP is a category in most early creoles, although of course creoles may acquire it later, either through decreolization or regular syntactic change (reanalysis). The only rule creoles would then require to generate any of the comple\-ment types, serial or other, discussed so far (with the exception of \textit{fi}-clauses) would be \REF{ex:2:236}:\is{rules!of early creoles} \ea\label{ex:2:236} S → NP Aux V (NP) (S) \z If in fact VPs were developed initially, two rules would be required: \ea\label{ex:2:237} S → NP Aux VP \z \ea\label{ex:2:238} VP → V (NP) (S) \z Not a great deal depends on which analysis is correct; therefore, since crucial evidence will be taken from Sranan, and since Sranan scholars generally assume a VP (although, once again, without explicit discussion), I shall accept \REF{ex:2:237} and \REF{ex:2:238} as specifying the earliest and most basic level of creole syntax, while continuing to suspect that \REF{ex:2:236} may be a more accurate representation of it.\is{pidgin-creole cycle|)} The evidence consists of judgments by native speakers of Sranan cited in \citet{JansenEtAl1978}. The parenthesized asterisks before certain sentences indicate that while some speakers found them grammatical, others did not. \ea{\label{ex:2:239} Kofi teki a nefi koti a brede \glt `Kofi cut the bread with the knife'} \z \ea{\label{ex:2:240} san Kofi teki koti a brede? \glt `What did Kofi cut the bread with?'} \z \ea{\label{ex:2:241}% (*)san Kofi teki a nefi koti? \glt `What did Kofi cut with the knife?'} \z \ea{\label{ex:2:242} na a nefi Kofi teki koti a brede. 
\glt `It was the knife that Kofi cut the bread with'} \z \ea{\label{ex:2:243}% (*)na a brede Kofi teki a nefi koti \glt `It was the bread that Kofi cut with the knife'} \z % CREOLE 125 \ea{\label{ex:2:244} na teki Kofi teki a nefi koti a brede\\ \glt `\textit{With} the knife, that's how Kofi cut the bread'} \z \ea{% (*)na koti Kofi teki a nefi koti a brede \glt `\textit{Cut} it, that's what Kofi did to the bread with the knife'} \label{ex:2:245}\z In \REF{ex:2:241} and \REF{ex:2:243}, some speakers can extract the most deeply embed\-ded NP, while others cannot. In \REF{ex:2:244}, a rule similar to the GC verb-focusing rule copies the higher of the two verbs, for all speakers; but in \REF{ex:2:245}, while some speakers can copy the more deeply embedded verb, others cannot.\is{Guyanese Creole!movement rules} These facts can be accounted for if we assume that the two sets of speakers have different types of underlying structures for these sentences, as represented in \REF{ex:2:246} and \REF{ex:2:247}, respectively: \ea\label{ex:2:246} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,0) (1S) {S}; \node [below left=\baselineskip and 12mm of 1S] (2NP) {NP}; \node [below right=\baselineskip and 12mm of 1S] (2VP) {VP}; \node [below=\baselineskip of 2NP] (3Kofi) {Kofi}; \node [below left=\baselineskip and 12mm of 2VP] (3V) {V}; \node [below=\baselineskip of 2VP] (3NP) {NP}; \node [below right=\baselineskip and 24mm of 2VP] (3S) {S}; \node [below=\baselineskip of 3V] (4teki) {teki}; \node [below=\baselineskip of 3NP] (4nefi) {a nefi}; \node [below left=\baselineskip and 6mm of 3S] (4NP) {NP}; \node [below right=\baselineskip and 6mm of 3S] (4VP) {VP}; \node [below=\baselineskip of 4NP] (5Kofi) {(Kofi)}; \node [below left=\baselineskip and 3mm of 4VP] (5V) {V}; \node [below right=\baselineskip and 3mm of 4VP] (5NP) {NP}; \node [below=\baselineskip of 5V] (6koti) {koti}; \node [below=\baselineskip of 5NP] (6brede) {a brede}; \draw (2NP) -- 
(1S.south) -- (2VP);
\draw (3Kofi) -- (2NP);
\draw (3V) -- (2VP.south) -- (3NP);
\draw (2VP.south) -- (3S);
\draw (4teki)--(3V);
\draw (4nefi)--(3NP);
\draw (4NP) -- (3S.south) -- (4VP);
\draw (5Kofi)--(4NP);
\draw (5V) -- (4VP.south) -- (5NP);
\draw (6koti) -- (5V);
\draw (6brede) -- (5NP);
\end{tikzpicture}
}
%}
\z
%\originalpage{126}
\ea\label{ex:2:247}
%\resizebox{.9\textwidth}{!}{
{\normalfont
\begin{tikzpicture}[baseline]
\node at (0,0) (1S) {S};
\node [below left=\baselineskip and 12mm of 1S] (2NP) {NP};
\node [below right=\baselineskip and 12mm of 1S] (2VP) {VP};
\node [below=\baselineskip of 2NP] (3Kofi) {Kofi};
\node [below left=\baselineskip and 12mm of 2VP] (3V) {V};
\node [below=\baselineskip of 2VP] (3NP) {NP};
\node [below right=\baselineskip and 24mm of 2VP] (3VP) {VP};
\node [below=\baselineskip of 3V] (4teki) {teki};
\node [below=\baselineskip of 3NP] (4nefi) {a nefi};
\node [below left=\baselineskip and 3mm of 3VP] (5V) {V};
\node [below right=\baselineskip and 3mm of 3VP] (5NP) {NP};
\node [below=\baselineskip of 5V] (6koti) {koti};
\node [below=\baselineskip of 5NP] (6brede) {a brede};
\draw (2NP) -- (1S.south) -- (2VP);
\draw (3Kofi) -- (2NP);
\draw (3V) -- (2VP.south) -- (3NP);
\draw (2VP.south) -- (3VP);
\draw (4teki)--(3V);
\draw (4nefi)--(3NP);
\draw (5V) -- (3VP.south) -- (5NP);
\draw (6koti)--(5V);
\draw (6brede)--(5NP);
\end{tikzpicture}
%}
}
\z
Speakers who rejected \REF{ex:2:241}, \REF{ex:2:243}, and \REF{ex:2:245} would have \REF{ex:2:246} as an underlying structure. Here, both the \is{Specified Subject Condition (SSC)}Specified Subject Condition (SSC: \citealt{Chomsky1973}) and the PIC would block movement out of sites dominated by the lower S. Speakers who accepted these three sentences would have \REF{ex:2:247} as an underlying structure. Since \REF{ex:2:247} contains no tensed S, specified subject, or bounding nodes that would block extraction, all constituents could be moved without violating the SSC or the PIC.
The first set of speakers would have rules \REF{ex:2:237} and \REF{ex:2:238}; the second set would replace \REF{ex:2:238} with \REF{ex:2:248}: \ea\label{ex:2:248} VP → V (NP) (VP) \z Greg Lee (p.c.) has pointed out that there is an alternative solution which does not involve positing two different underlying structures: the two sets of speakers could have different rule orderings for extraction and Equi\is{Equi-NP deletion}. If the first set ordered extraction before Equi, the lower occurrence of \textit{Kofi} would still be undeleted when extraction applied, the S-node would remain unpruned, and the SSC and PIC would still apply. If the second set ordered Equi before ex\-traction, the offending S-node would be pruned to give \REF{ex:2:247} as a derived structure, and no constraints would then inhibit movement. This is true, but it means that both sets of speakers would then have the more primitive phrase-structure rules\is{rules!phrase-structure} -- \REF{ex:2:237} and \REF{ex:2:238} -- that %\originalpage{127} would yield \REF{ex:2:246} as the underlying structure for \REF{ex:2:239}. I shall show in a moment that even if speakers did not have underlying structures like \REF{ex:2:247} for serial \textit{teki} sentences, they would require them for other types of serial-verb constructions. In any case, I know of no evidence that would point to differences in rule ordering among Sranan speakers; indeed, it is very hard, in creole grammars, to find any clear cases in which rule ordering is crucial. 
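The contrast between the two underlying structures can be made concrete with a minimal sketch, entirely my own illustration (the tree encoding and helper functions are assumptions, not the author's formalism): both analyses yield the same surface string for \REF{ex:2:239}, but only the structure of \REF{ex:2:246} places an S node, with an Equi-deletable subject, over the lower verb -- the configuration in which the SSC and PIC block movement.

```python
# Trees are (label, children) tuples; children is either a word (str)
# or a list of subtrees. "(Kofi)" marks the Equi-deleted lower subject,
# and "a nefi" / "a brede" are treated as single NP tokens for simplicity.

def leaves(tree):
    """Read the surface string off a tree, skipping Equi-deleted NPs."""
    label, children = tree
    if isinstance(children, str):
        return [] if children.startswith("(") else [children]
    return [w for c in children for w in leaves(c)]

def labels(tree):
    """Collect every node label in the tree."""
    label, children = tree
    if isinstance(children, str):
        return [label]
    return [label] + [l for c in children for l in labels(c)]

# Analysis (2:246): S -> NP VP; VP -> V (NP) (S)   [first set of speakers]
tree_246 = ("S", [("NP", "Kofi"),
                  ("VP", [("V", "teki"), ("NP", "a nefi"),
                          ("S", [("NP", "(Kofi)"),
                                 ("VP", [("V", "koti"),
                                         ("NP", "a brede")])])])])

# Analysis (2:247): S -> NP VP; VP -> V (NP) (VP)  [second set of speakers]
tree_247 = ("S", [("NP", "Kofi"),
                  ("VP", [("V", "teki"), ("NP", "a nefi"),
                          ("VP", [("V", "koti"), ("NP", "a brede")])])])
```

Both trees surface as \textit{Kofi teki a nefi koti a brede}, but the first contains two S nodes and the second only one; the native-speaker disagreements over extraction then follow from whichever structure a given speaker has.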
A second set of native-speaker judgments concerns Sranan directional constructions, for example \REF{ex:2:223}, repeated here for conveni\-ence as \REF{ex:2:249}: \ea\label{ex:2:249} \gll a waka go a wosu\\ he walk go to house \\ \glt `He walked home' \z Here, in contrast with the previous examples, there are no disagreements; either verb can be fronted by the same verb-focusing rule that led to disputes over the status of \REF{ex:2:245}; \ea\label{ex:2:250} na waka a waka go a wosu \glt `He \textit{walked} to the house (rather than ran to it)' \z \ea\label{ex:2:251} na go a waka go a wosu \glt `He walked \textit{to} the house (rather than away from it)' \z Thus there is no possibility that speakers could assign to \REF{ex:2:249} a struc\-tural description like that of \REF{ex:2:246}, where the existence of a tensed S would prohibit extraction of the lower verb; rather, \REF{ex:2:249} must have a structure similar to \REF{ex:2:247}, as illustrated in \REF{ex:2:252}:% on the following page: %\originalpage{128} \is{dative-benefactive case|(} \ea\label{ex:2:252} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,0) (1S) {S}; \node [below left=\baselineskip and 12mm of 1S] (2NP) {NP}; \node [below right=\baselineskip and 12mm of 1S] (2VP) {VP}; \node [below=\baselineskip of 2NP] (3a) {\strut a}; \node [below left=\baselineskip and 6mm of 2VP] (3V) {\strut V}; \node [below right=\baselineskip and 6mm of 2VP] (3VP) {\strut VP}; \node [below=\baselineskip of 3V] (4waka) {\strut waka}; \node [below left=\baselineskip and 3mm of 3VP] (4V) {\strut V}; \node [below right=\baselineskip and 3mm of 3VP] (4PP) {\strut PP}; \node [below=\baselineskip of 4V] (5go) {go}; \node [below=\baselineskip of 4PP] (5wosu) {a wosu}; \draw (2NP) -- (1S.south) -- (2VP); \draw (3a)--(2NP); \draw (3V)--(2VP.south)--(3VP); \draw (3V)--(4waka); \draw (4V)--(3VP.south)--(4PP); \draw (4V)--(5go); \node [isosceles triangle, isosceles triangle apex angle=100, shape 
border rotate=90, minimum height=1.25em, draw, below=3mm of 4PP.center] (5polygon) {};
%\draw (5wosu.north east) -- (4PP.south) -- (5wosu.north west) -- cycle; % I don't like this, maybe some optimization is possible?
\end{tikzpicture}
}
%}
\z
Finally, \ili{Sranan} speakers disagree again over sentences involving datives and benefactives expressed with \textit{gi}, which has an independent existence as a main verb with the meaning `give'. Differences may be illustrated by the following sentences:
\ea{\label{ex:2:253}
\gll Meri tek watra gi den plantjes\\
Mary take water give the plants\\
\glt `Mary brought water for the plants'}
\z
\ea{\label{ex:2:254}
gi san Meri tek watra?\\
\glt `What did Mary bring water for?'}
\z
\ea{\label{ex:2:255}
(*)san Meri teki watra gi?
\glt `What did Mary bring water for?'}
\z
\ea{\label{ex:2:256}
Meri teki a buku gi mi\\
\glt `Mary gave me the book'}
\z
\ea{\label{ex:2:257}
(*)na mi Meri teki a buku gi\\
\glt `It was me Mary gave the book to'}
\z
\ea{\label{ex:2:258}
(*)na gi Meri teki a buku gi mi
\glt `Mary \textit{gave} the book to me'}
\z
Sranan does not strand prepositions. For those who find \REF{ex:2:255} and \REF{ex:2:257} ungrammatical, \textit{gi} must be a preposition, and they must have, instead of \REF{ex:2:248}, the rule \REF{ex:2:259}:\is{movement rules!in Sranan|)}
%\originalpage{129}
\ea\label{ex:2:259}
VP → V (NP) (PP)
\z
For those who find the two sentences grammatical, \textit{gi} must be a verb, and such speakers must have rule \REF{ex:2:248}. The verb-focusing rule that applies in \REF{ex:2:258} cannot front prepositions, so, again, those who find \REF{ex:2:258} ungrammatical must consider \textit{gi} a preposition, while those who find it grammatical must consider \textit{gi} a verb.
Differences of this kind are only comprehensible if the set of speakers who find all three sentences grammatical assigns to \REF{ex:2:253} and \REF{ex:2:256} a structure similar to \REF{ex:2:252}, with VP expanded as in \REF{ex:2:260} below; while the set of speakers who find all three sentences ungrammatical (and therefore regard \textit{gi} as a preposition) analyzes the same VP as in \REF{ex:2:261} below: \ea\label{ex:2:260} %\resizebox{.9\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,-.3) (1VP) {VP}; \node [above left=\baselineskip and 3mm of 1VP] (0inv) {}; \draw [dashed] (1VP.north)--(0inv); \node [below left=\baselineskip and 6mm of 1VP] (2V) {V}; \node [below=\baselineskip of 1VP] (2NP) {NP}; \node [below right=\baselineskip and 18mm of 1VP] (2VP) {VP}; \draw (2V)--(1VP.south)--(2NP); \draw (2VP)--(1VP.south); \node [below=\baselineskip of 2V] (3teki) {\strut teki}; \draw (2V)--(3teki); \node [rectangle split, rectangle split parts=2, below=\baselineskip of 2NP] (3watra) {watra\nodepart{two}buku}; \draw (2NP)--(3watra); \node [below left=\baselineskip and 6mm of 2VP] (3V) {\strut V}; \node [below right=\baselineskip and 6mm of 2VP] (3NP) {NP}; \draw (3V)--(2VP.south)--(3NP); \node [rectangle split, rectangle split parts=2, below=\baselineskip of 3V] (4gi) {gi}; \node [rectangle split, rectangle split parts=2, below=\baselineskip of 3NP] (4plantjes) {\strut plantjes\nodepart{two}mi}; \draw (3V) -- (4gi); \draw (3NP) -- (4plantjes); \end{tikzpicture} } %} \z \ea\label{ex:2:261} %\resizebox{\textwidth}{!}{ {\normalfont \begin{tikzpicture}[baseline] \node at (0,-.3) (1VP) {VP}; \node [above left=\baselineskip and 3mm of 1VP] (0inv) {}; \draw [dashed] (1VP.north)--(0inv); \node [below left=\baselineskip and 6mm of 1VP] (2V) {V}; \node [below=\baselineskip of 1VP] (2NP) {NP}; \node [below right=\baselineskip and 18mm of 1VP] (2PP) {PP}; \draw (2V)--(1VP.south)--(2NP); \draw (2PP)--(1VP.south); \node [below=\baselineskip of 2V] (3teki) 
{\strut teki};
\draw (2V)--(3teki);
\node [rectangle split, rectangle split parts=2, below=\baselineskip of 2NP] (3watra) {watra\nodepart{two}buku};
\draw (2NP)--(3watra);
\node [below left=\baselineskip and 6mm of 2PP] (3P) {\strut P};
\node [below right=\baselineskip and 6mm of 2PP] (3NP) {NP};
\draw (3P)--(2PP.south)--(3NP);
\node [rectangle split, rectangle split parts=2, below=\baselineskip of 3P] (4gi) {gi};
\node [rectangle split, rectangle split parts=2, below=\baselineskip of 3NP] (4plantjes) {\strut plantjes\nodepart{two}mi};
\draw (3P) -- (4gi);
\draw (3NP) -- (4plantjes);
\end{tikzpicture}
}
%}
\z
\is{dative-benefactive case|)}
%\originalpage{130}
In other words, there are three distinct ways in which Sranan speakers analyze serial-verb constructions:
\ea\label{ex:2:262}
\begin{tabular}[t]{@{}lp{8.5cm}}
VP → \ldots~(S) & (some speakers for \textit{teki} instrumentals)\\
VP → \ldots~(VP) & (some speakers for \textit{teki} instrumentals; all speakers for \textit{go} \isi{directionals}; some speakers for \textit{gi} dative/benefactives)\\
VP → \ldots~(PP) & (some speakers for \textit{gi} dative/benefactives)
\end{tabular}
\z
\noindent If the first of these stages represents the most primitive level of creole development, as we have given reason to believe, then the data shown here, drawn from a \textit{synchronic} analysis of Sranan verb serialization, represent at the same time the \textit{diachronic development} of Sranan, from an original state in which presumably all serial verbs were full verbs in tensed sentences to a stage in which these verbs are beginning to be reduced to mere prepositions. Note that this process serves to bring Sranan\il{Sranan|)} structurally closer to the high-prestige language, Dutch, with which it has been in continuous contact for over three centuries.
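The predictions that separate the three analyses can be restated schematically. The following sketch is my own paraphrase of the reasoning above (the labels and the function are hypothetical, not the author's formalism): each complement category determines the expected judgments for the two diagnostics used in the Sranan data, extraction of the innermost NP and fronting of the lower serial item by the verb-focusing rule.

```python
# My own schematic restatement of (2:262); category names and judgment
# labels are illustrative assumptions, not the author's notation.

def predictions(complement_category):
    """Map a complement category to the judgments it predicts."""
    if complement_category == "S":
        # Tensed S: the SSC and PIC block both movements
        # (hence the rejections of 2:241, 2:243, 2:245).
        return {"extract_inner_NP": False, "front_lower_item": False}
    if complement_category == "VP":
        # Bare VP: no tensed S or specified subject, so both are allowed.
        return {"extract_inner_NP": True, "front_lower_item": True}
    if complement_category == "PP":
        # gi as preposition: Sranan does not strand prepositions
        # (2:255, 2:257), and the focusing rule fronts only verbs (2:258).
        return {"extract_inner_NP": False, "front_lower_item": False}
    raise ValueError(f"unknown category: {complement_category}")

# The diachronic drift the text infers from the synchronic variation:
stages = ["S", "VP", "PP"]
```

Note that the S and PP analyses happen to predict the same judgments for these two diagnostics; what distinguishes them is the lexical status assigned to \textit{gi} (verb versus preposition) and the rule set the speaker must therefore have.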
Thus, there is good reason for claiming, across creole languages generally, that the vast majority of embedded sentences are finite and tensed, and that where exceptions to this generalization can be found, they constitute developments that have taken place subsequent to creolization. The second half of this claim would be hard to prove convincingly because of the inaccessibility of evidence; but I know of neither facts nor arguments that would point in an opposite direction. With regard to the types of complementation featuring serial verbs, it would seem that the strongest constraint on such developments was the availability of superstrate prepositions for case-marking purposes. Where prepositions were available, even if African influence was strong (as with \ili{Saramaccan}), they would be chosen over serial models. In the absence of superstrate prepositions, serialization would always be chosen. I suspect that it was reinvented, rather than selected, %\originalpage{131} in most if not all cases; but if not, if it was indeed selected out of a range of substrate alternatives, the present theory would remain unaffected. This theory claims that verb serialization is the only answer to the problem of marking cases in languages which have only N and V as major categories. Thus, if such structures were selected from a substratum\is{substratum influence}, they were selected because they offered the only answer, not merely because they happened to be present in the substratum; and, in those cases where they were not present in the substratum (as may have been the case in creoles that drew heavily on Guinean or Bantu rather than \ili{Kwa languages}\il{Bantu languages}\il{Guinean languages}), readers of \chapref{ch:1} should have little doubt as to the power of creole children to invent such structures, should the language they were developing require these. However, before leaving serial verbs, a word is in order on HCE. 
Substratomaniacs will naturally wish to attribute the absence of serial-verb constructions in HCE to the absence of an African substratum rather than to the presence of prepositions. In fact, though no true serial constructions have developed as a consistent part of the synchronic grammar, sporadic residues of such constructions are to be found both in synchronic speech and in the literature. For instance, instrumental and directional uses\is{Guyanese Creole!directionals} of verbs would occur occasionally in the speech of the very oldest HCE speakers:\is{Hawaiian Creole English!directionals} \ea\label{ex:2:263} dei wan get naif pok yu\\ \glt `They want to stab you with a knife' \z \ea\label{ex:2:264} \gll dei wawk fit go skul\\ they walk feet go school\\ \glt `They went to school on foot' \z Moreover, \isi{decreolization} in Hawaii began so early and progressed so rapidly that there is good reason to believe that other similar forms were already lost by the early seventies. For instance, in basilectal GC, \textit{take it} and \textit{bring it} are regularly rendered as \textit{ker am go} (lit., `carry it go') and \textit{bing am kom}, a fact usually explained by pointing to a similar structure in \ili{Yoruba}. \citet{Smith1939}, listing the commonest ``mistakes'' %\originalpage{132} made by children in Hawaii -- most early sources of HCE have this deplorable pedagogical bias -- mentions \textit{take om go} and \textit{bring om come}, only this time the origin is given as Chinese! (In fact, the majority of children listed as using it are non-Chinese.) However, similar forms did not occur in any of our recordings (although the children of Smith's study would only have been in their forties when those recordings were made), and they would seem today to have disappeared entirely. 
Yet their existence, for however brief a period, can leave little doubt that HCE could have and would have invented regular serial-verb\is{verb serialization|)} constructions if no other means of marking case had been available.% \footnote{It would seem highly likely, indeed, that the inadequacies of existing creole descriptions, often referred to in this volume, have served to diminish, rather than exaggerate, the degree of creole similarity. To give just one very recent instance, it was long held that the verb-focusing rule discussed earlier in this chapter was not found in the grammars of any of the Indian Ocean creoles. Substratomaniacs could point to the nature of the substratum -- Eastern Bantu, Malagasy, and Indian languages -- as an explanation of this. Now \citet{Corne1977} reports the finding of verb-focusing structures with a copied verb\is{verb-copying} identical to those discussed in this chapter. Substratomaniacs will now doubtless seize on the claim by \citet{Baker1976} that in 1735, 60 percent of the nonwhite population of \isi{Mauritius} was from West Africa. However, this finding is strongly challenged by \citet{Chaudenson1979} on the basis of historical documents which he claims Baker did not examine; according to Chaudenson, the percentage of West Africans never rose much above 33. In fact, the outcome of the disagreement is rather irrelevant to the real issue. Baker's ``60 percent'' contained 66 percent of speakers from Guinea, and \ili{Guinean languages} differ markedly in structure from the Kwa languages which are usually claimed as the source of creole structures. On Baker's own figures, the Kwa speakers in Mauritius in 1735 must have amounted to about 130! Within a few years, the population of Mauritius topped the 10,000 mark, swelled by recruits from India and Madagascar (Baker admits that hardly any Kwa speakers arrived after 1735). 
The question that substratomaniacs have to answer is: how did 130 people manage to impose their grammar (assuming they had a common one, which is a big assumption) upon a population in which they were outnumbered 100 to 1?} We have now surveyed a wide range of creole structures across a number of unrelated creole languages. We have seen that even taking into account the, in some cases, several centuries of time that have elapsed since creolization, and the heavy pressures undergone by those creoles (a large majority) that are still in contact with their superstrates, these languages show similarities which go far beyond the possibility of coincidental resemblance, and which are not explicable in terms of conventional transmission processes such as diffusion or substratum influence (the ad hoc nature of the latter should be adequately demonstrated by the opportunism of those who attribute a structure to \ili{Yoruba} when it appears in the Caribbean and to Chinese when it appears in Hawaii). Moreover, we find that the more we strip creoles of their more recent developments, the more we factor out superficial and accidental features, the greater are the similarities that reveal themselves. Indeed, it would seem reasonable to suppose that the only differences among creoles at creolization were those due to differences in the nature of the antecedent pidgin, in particular to the extent to which superstrate features had been absorbed by that pidgin and were therefore directly accessible to the first creole generation in the outputs of their pidgin-speaking parents. 
Finally, the overall pattern of similarity which emerges from this chapter is entirely consonant with the process of building a language from the simplest constituents -- in many cases, no more than S, N, and V, the minimal constituents necessary for a pidgin.\is{pidgin-creole cycle} %\originalpage{133} In theory, given these basic constituents, there are perhaps not infinitely many but certainly a very large number of ways in which, one might suppose, a viable human language could be built -- at least as many ways as there are different kinds of human language. This would certainly be the conclusion to which any existing school of linguistic theory would lead one. It would, however, be an incorrect conclusion. The fact that there appears to be only one way of building up a language (with some, but relatively few and minor variations, of course) strongly suggests that when this problem was originally faced, whether thirty thousand years or thirty thousand centuries ago, it might have had to be\enlargethispage{1\baselineskip} solved in a very similar way, and we shall further explore this possibility in \chapref{ch:4}. Our original aim in this chapter was to show that the ``inventions'' of HCE\il{Hawaiian Creole English} speakers illustrated in \chapref{ch:1} were not peculiar to them, but followed a regular pattern of ``invention'' which emerged wherever human beings had to manufacture an adequate language\is{language!origins of} in short order from inadequate materials. Now, if all children can indeed do this -- and it would be bizarre indeed if the capacity developed only when it was needed -- they can only do so as the result of the factor which is responsible for all species-specific behavior: genetic transmission of the bioprogram\is{language bioprogram|(} for the species. The idea that there is a bioprogram for human (and other species) \textit{physical} development is wholly uncontroversial. 
No one supposes that human beings have to learn to breathe, eat, yell when they are hurt, stand upright, or flex the muscles of finger and thumb into what is the purely human, species-specific ``precision grip''. We speak of children ``learning to walk'', and we characteristically help them in their first stumbling efforts, but no one seriously imagines that if we neglected to do this, the child would go crawling into maturity. The term ``learning'' is used here in a purely metaphorical sense. Yet the idea that there is a bioprogram for human \textit{mental} development still meets with massive resistance, despite the fact that Piaget and his disciples have shown how human cognitive development unrolls in a series of predetermined and invariant stages,\footnote{I am only too well aware that Piaget draws conclusions from his studies quite contrary to those drawn here. That he does so, however, has always seemed to me baffling in light of the fact that the developmental stages he posits bear a nativistic explanation much more easily than they do an experiential one. But there is not space here to attempt a reinterpretation of Piagetian findings, desirable though such an activity might seem. We will see in the next chapter, however, that some linguistic findings of Piaget's disciples can very easily (and very fruitfully) be reinterpreted in a nativistic manner (see especially the discussion of \citealt{BrockartEtAl1973}).} and despite %\originalpage{134}
the fact that, at an ever-increasing rate over the last few decades, experiences long believed to be due to some unanalyzable entity called ``mind'' -- if they were indeed more than subjective illusions -- have been shown to be conditioned and in some cases entirely determined by electrochemical events in the brain.
The mind-body \isi{dualism} that has so long dominated Western thought is beginning to seem more and more like an artifact of armchair philosophers operating in blissful ignorance of the laws of reality; and yet the idea \textit{that there is an innate bioprogram that determines the form of human language} is still vigorously if often quite illogically resisted, threatening, as it seems to, \isi{free will}, mental improvement, and the whole galaxy of human dreams and desires.\\\\ I shall return to these fears in the final chapter. In the next chapter I want to pursue what would appear to be an inevitable corollary of the language bioprogram theory. If it is the case that the creole child's capacity to create language is due to such a bioprogram, then, as noted above, it would be absurd to suppose that this bioprogram functions only in the rare and unnatural circumstances in which the normal cultural transmission of language breaks down. Forces that are under genetic control simply cannot be turned on and off in this way. Therefore, if our theory is correct, it should be the case that the acquisition of language\is{language!acquisition of|(} under \textit{normal} circumstances should differ considerably from what has hitherto been supposed. Briefly, the theory predicts that instead of merely processing linguistic input, the child will seek to actualize the blueprint for language with which his bioprogram provides him. We should note from the outset that there are numerous differences between the present theory and earlier Chomskyan theories of linguistic innateness\is{innateness (of language)|(}, although the latter are often so vague that such differences are not always clear. One point that should be made is that in the present theory, the child is not supposed to ``know'' the bioprogram language from birth -- whatever that might mean -- any more than we would suppose that a child at birth, or even at six months, ``knows'' how to walk. 
%\originalpage{135} Rather, the bioprogram language would unfold, just as a physical bioprogram unfolds; the language would grow just as the body grows, presenting the appropriate structures at the appropriate times and in the appropriate, preprogrammed sequences (I shall have more to say about the mechanisms by which this might be accomplished when we come to \chapref{ch:4}). \is{hypothesis formation|(}However, the vast mass of human children are not growing up in even a partial linguistic vacuum. There will be a ready-made language which their elders will be determined that they should learn. Thus, almost (but not quite) from the earliest stages, the evolving bioprogram will interact with the target language. Sometimes features in the bioprogram will be very similar to features in the target language, in which case we will find extremely rapid, early, and apparently effortless learning. Sometimes the target language will have evolved away from the bioprogram, to a greater or lesser extent, and in these cases we will expect to find common or even systematic ``errors'' which, in orthodox learning theory, will be attributed to ``incorrect hypotheses'' formed by the child, but which, I shall claim, are simply the result of the child's ignoring (because he is not ready for it) the data presented by speakers of the target language and following out instead the instructions of his bioprogram. Clearly, then, it should be possible to examine existing studies of child language acquisition and reinterpret them in light of the theory outlined above. If that theory is correct, we expect to find a wide variety of evidence that would arise directly from the interaction of bioprogram and target language, and hopefully, be able to account for phenomena of acquisition which have remained mysterious in all previous theories. 
Accordingly, the next chapter will present just such a survey of the existing literature on language acquisition.\is{language!acquisition of|)}\is{language bioprogram|)}
\chapter{\textit{Martha}}
\lettrine{W}{hen} she opened her eyes in the morning it was because a young house-maid had come into her room to light the fire and was kneeling\footnote{\textbf{kneel} v. (\textbf{pt./pp.} knelt) to move your body so that one or both of your knees are on the floor} on the hearth-rug\footnote{\textbf{hearth} n. the floor in front of or inside a fireplace} raking\footnote{\textbf{rake} n. a tool that has a series of metal, wooden, or plastic pieces at the end of a long handle and that is used to gather leaves, break apart soil, make ground smooth, etc. \textbf{rake} v. to use a rake to gather leaves, break apart soil, make ground smooth, etc.} out the cinders\footnote{\textbf{cinder} n. a very small piece of burned material (such as wood or coal)} noisily. Mary lay and watched her for a few moments and then began to look about the room. She had never seen a room at all like it and thought it curious and gloomy. The walls were covered with tapestry\footnote{\textbf{tapestry} n. a heavy cloth that has designs or pictures woven into it and that is used for wall hangings, curtains, etc.} with a forest scene embroidered\footnote{\textbf{embroider} v. to sew a design on a piece of cloth} on it. There were fantastically\footnote{\textbf{fantastic} adj. (\textbf{adv.} fantastically) strange and fanciful; also: extremely good} dressed people under the trees and in the distance there was a glimpse of the turrets\footnote{\textbf{turret} n. a small tower on a building} of a castle. There were hunters and horses and dogs and ladies. Mary felt as if she were in the forest with them. Out of a deep window she could see a great climbing stretch of land which seemed to have no trees on it, and to look rather like an endless, dull, purplish sea. ``What is that?'' she said, pointing out of the window. Martha, the young house-maid, who had just risen to her feet, looked and pointed also. ``That there?'' she said.
``Yes.'' ``That's th' moor,'' with a good-natured grin\footnote{\textbf{grin} v. (\textbf{n.} grin) to smile widely}. ``Does tha' like it?'' ``No,'' answered Mary. ``I hate it.'' ``That's because tha'rt not used to it,'' Martha said, going back to her hearth. ``Tha' thinks it's too big an' bare\footnote{\textbf{bare} adj. not having a covering} now. But tha' will like it.'' ``Do you?'' inquired\footnote{\textbf{inquire} v. to ask for information} Mary. ``Aye, that I do,'' answered Martha, cheerfully polishing\footnote{\textbf{polish} v. to make (something) smooth and shiny by rubbing it} away at the grate\footnote{\textbf{grate} n. a metal frame with bars across it that is used in a fireplace or to cover an opening}. ``I just love it. It's none bare. It's covered wi' growin' things as smells sweet. It's fair lovely in spring an' summer when th' gorse an' broom an' heather's in flower. It smells o' honey an' there's such a lot o' fresh air--an' th' sky looks so high an' th' bees an' skylarks\footnote{\textbf{skylark} n. a small bird of Europe, Asia, and northern Africa that sings while it flies} makes such a nice noise hummin'\footnote{\textbf{hum} v. to make a low continuous sound} an' singin'. Eh! I wouldn't live away from th' moor for anythin'.'' Mary listened to her with a grave\footnote{\textbf{grave} adj. very serious: requiring or causing serious thought or concern}, puzzled expression. The native servants she had been used to in India were not in the least like this. They were obsequious\footnote{\textbf{obsequious} adj. too eager to help or obey someone important} and servile\footnote{\textbf{servile} adj. very obedient and trying too hard to please someone} and did not presume to talk to their masters as if they were their equals. They made salaams\footnote{\textbf{salaam} n. a Muslim greeting} and called them ``protector of the poor'' and names of that sort. Indian servants were commanded to do things, not asked.
It was not the custom to say ``please'' and ``thank you'' and Mary had always slapped her Ayah in the face when she was angry. She wondered a little what this girl would do if one slapped her in the face. She was a round, rosy, good-natured-looking creature, but she had a sturdy\footnote{\textbf{sturdy} adj. having or showing mental or emotional strength} way which made Mistress Mary wonder if she might not even slap back--if the person who slapped her was only a little girl. ``You are a strange servant,'' she said from her pillows, rather haughtily\footnote{\textbf{haughty} adj. (\textbf{adv.} haughtily) having or showing the insulting attitude of people who think that they are better, smarter, or more important than other people}. Martha sat up on her heels\footnote{\textbf{heel} n. the back part of your foot that is below the ankle}, with her blackingbrush in her hand, and laughed, without seeming the least out of temper. ``Eh! I know that,'' she said. ``If there was a grand Missus at Misselthwaite I should never have been even one of th' under house-maids. I might have been let to be scullery-maid\footnote{\textbf{scullery} n. a room that is near the kitchen in a large and usually old house and that is used for washing dishes, doing messy kitchen tasks, etc.} but I'd never have been let upstairs. I'm too common an' I talk too much Yorkshire. But this is a funny house for all it's so grand. Seems like there's neither Master nor Mistress except Mr. Pitcher an' Mrs. Medlock. Mr. Craven, he won't be troubled about anythin' when he's here, an' he's nearly always away. Mrs. Medlock gave me th' place out o' kindness. She told me she could never have done it if Misselthwaite had been like other big houses.'' ``Are you going to be my servant?'' Mary asked, still in her imperious\footnote{\textbf{imperious} adj. having or showing the proud and unpleasant attitude of someone who gives orders and expects other people to obey them} little Indian way. 
Martha began to rub her grate again. ``I'm Mrs. Medlock's servant,'' she said stoutly. ``An' she's Mr. Craven's--but I'm to do the house-maid's work up here an' wait on you a bit. But you won't need much waitin' on.'' ``Who is going to dress me?'' demanded Mary. Martha sat up on her heels again and stared. She spoke in broad Yorkshire in her amazement. ``Canna' tha' dress thysel'!'' she said. ``What do you mean? I don't understand your language,'' said Mary. ``Eh! I forgot,'' Martha said. ``Mrs. Medlock told me I'd have to be careful or you wouldn't know what I was sayin'. I mean can't you put on your own clothes?'' ``No,'' answered Mary, quite indignantly\footnote{\textbf{indignant} adj. (\textbf{adv.} indignantly) feeling or showing anger because of something that is unfair or wrong}. ``I never did in my life. My Ayah dressed me, of course.'' ``Well,'' said Martha, evidently not in the least aware that she was impudent, ``it's time tha' should learn. Tha' cannot begin younger. It'll do thee good to wait on thysel' a bit. My mother always said she couldn't see why grand people's children didn't turn out fair fools--what with nurses an' bein' washed an' dressed an' took out to walk as if they was puppies!'' ``It is different in India,'' said Mistress Mary disdainfully. She could scarcely\footnote{\textbf{scarcely} adv. almost not at all} stand this. But Martha was not at all crushed. ``Eh! I can see it's different,'' she answered almost sympathetically\footnote{\textbf{sympathetic} adj. (\textbf{adv.} sympathetically) feeling or showing concern about someone who is in a bad situation}. ``I dare say it's because there's such a lot o' blacks there instead o' respectable white people. When I heard you was comin' from India I thought you was a black too.'' Mary sat up in bed furious. ``What!'' she said. ``What! You thought I was a native. You--you daughter of a pig!'' Martha stared and looked hot. ``Who are you callin' names?'' she said. 
``You needn't be so vexed\footnote{\textbf{vex} v. to annoy or worry (someone)}. That's not th' way for a young lady to talk. I've nothin' against th' blacks. When you read about 'em in tracts they're always very religious\footnote{\textbf{religious} adj. believing in a god or a group of gods and following the rules of a religion}. You always read as a black's a man an' a brother. I've never seen a black an' I was fair pleased to think I was goin' to see one close. When I come in to light your fire this mornin' I crep' up to your bed an' pulled th' cover back careful to look at you. An' there you was,'' disappointedly, ``no more black than me--for all you're so yeller\footnote{\textbf{yeller} dialect spelling of \textbf{yellow} adj. having a yellowish or sallow color}.'' Mary did not even try to control her rage\footnote{\textbf{rage} n. a strong feeling of anger that is difficult to control} and humiliation\footnote{\textbf{humiliate} v. (\textbf{n.} humiliation) to make (someone) feel very ashamed or foolish}. ``You thought I was a native! You dared! You don't know anything about natives! They are not people--they're servants who must salaam to you. You know nothing about India. You know nothing about anything!'' She was in such a rage and felt so helpless before the girl's simple stare, and somehow she suddenly felt so horribly lonely and far away from everything she understood and which understood her, that she threw herself face downward on the pillows and burst\footnote{\textbf{burst} v. to break open or into pieces in a sudden and violent way} into passionate\footnote{\textbf{passionate} adj. having, showing, or expressing strong emotions or beliefs} sobbing. She sobbed so unrestrainedly\footnote{\textbf{unrestrained} adj. (\textbf{adv.} unrestrainedly) not controlled or held back} that good-natured Yorkshire Martha was a little frightened and quite sorry for her.
She went to the bed and bent over her. ``Eh! You mustn't cry like that there!'' she begged. ``You mustn't for sure. I didn't know you'd be vexed. I don't know anythin' about anythin'--just like you said. I beg your pardon, Miss. Do stop cryin'.'' There was something comforting\footnote{\textbf{comfort} v. (\textbf{adj.} comforting) to cause (someone) to feel less worried, upset, frightened, etc.} and really friendly in her queer Yorkshire speech and sturdy way which had a good effect on Mary. She gradually\footnote{\textbf{gradual} adj. (\textbf{adv.} gradually) moving or changing in small amounts} ceased crying and became quiet. Martha looked relieved\footnote{\textbf{relieved} adj. feeling relaxed and happy because something difficult or unpleasant has been stopped, avoided, or made easier}. ``It's time for thee to get up now,'' she said. ``Mrs. Medlock said I was to carry tha' breakfast an' tea an' dinner into th' room next to this. It's been made into a nursery for thee. I'll help thee on with thy clothes if tha'll get out o' bed. If th' buttons are at th' back tha' cannot button them up tha' self.'' When Mary at last decided to get up, the clothes Martha took from the wardrobe\footnote{\textbf{wardrobe} n. a collection of clothes that a person owns or wears} were not the ones she had worn when she arrived the night before with Mrs. Medlock. ``Those are not mine,'' she said. ``Mine are black.'' She looked the thick white wool\footnote{\textbf{wool} n. the soft, thick hair of sheep and some other animals} coat and dress over, and added with cool approval: ``Those are nicer than mine.'' ``These are th' ones tha' must put on,'' Martha answered. ``Mr. Craven ordered Mrs. Medlock to get 'em in London. He said `I won't have a child dressed in black wanderin' about like a lost soul,' he said. `It'd make the place sadder than it is. Put color on her,' Mother she said she knew what he meant. Mother always knows what a body means.
She doesn't hold with black hersel'.'' ``I hate black things,'' said Mary. The dressing process was one which taught them both something. Martha had ``buttoned up'' her little sisters and brothers but she had never seen a child who stood still and waited for another person to do things for her as if she had neither hands nor feet of her own. ``Why doesn't tha' put on tha' own shoes?'' she said when Mary quietly held out her foot. ``My Ayah did it,'' answered Mary, staring. ``It was the custom.'' She said that very often--``It was the custom.'' The native servants were always saying it. If one told them to do a thing their ancestors had not done for a thousand years they gazed at one mildly\footnote{\textbf{mild} adj. (\textbf{adv.} mildly) gentle in nature or behavior} and said, ``It is not the custom,'' and one knew that was the end of the matter. It had not been the custom that Mistress Mary should do anything but stand and allow herself to be dressed like a doll, but before she was ready for breakfast she began to suspect\footnote{\textbf{suspect} v. to think that (something, especially something bad) possibly exists, is true, will happen, etc.} that her life at Misselthwaite Manor would end by teaching her a number of things quite new to her--things such as putting on her own shoes and stockings, and picking up things she let fall. If Martha had been a well-trained fine young lady's maid she would have been more subservient\footnote{\textbf{subservient} adj. very willing or too willing to obey someone else} and respectful and would have known that it was her business to brush hair, and button boots, and pick things up and lay them away. She was, however, only an untrained Yorkshire rustic\footnote{\textbf{rustic} adj. of, relating to, or suitable for the country or people who live in the country} who had been brought up in a moorland cottage with a swarm\footnote{\textbf{swarm} n.
a very large group of insects moving together} of little brothers and sisters who had never dreamed of doing anything but waiting on themselves and on the younger ones who were either babies in arms or just learning to totter\footnote{\textbf{totter} v. to move or walk in a slow and unsteady way} about and tumble\footnote{\textbf{tumble} v. to fall down suddenly and quickly} over things. If Mary Lennox had been a child who was ready to be amused she would perhaps have laughed at Martha's readiness to talk, but Mary only listened to her coldly and wondered at her freedom of manner. At first she was not at all interested, but gradually, as the girl rattled\footnote{\textbf{rattle} v. to make quick, short, loud sounds while moving} on in her good-tempered, homely way, Mary began to notice what she was saying. ``Eh! you should see 'em all,'' she said. ``There's twelve of us an' my father only gets sixteen shilling a week. I can tell you my mother's put to it to get porridge for 'em all. They tumble about on th' moor an' play there all day an' mother says th' air of th' moor fattens 'em. She says she believes they eat th' grass same as th' wild ponies do. Our Dickon, he's twelve years old and he's got a young pony he calls his own.'' ``Where did he get it?'' asked Mary. ``He found it on th' moor with its mother when it was a little one an' he began to make friends with it an' give it bits o' bread an' pluck\footnote{\textbf{pluck} v. to pull (something) quickly to remove it} young grass for it. And it got to like him so it follows him about an' it lets him get on its back. Dickon's a kind lad an' animals likes him.'' Mary had never possessed\footnote{\textbf{possess} v. to have or own (something)} an animal pet of her own and had always thought she should like one. So she began to feel a slight interest in Dickon, and as she had never before been interested in any one but herself, it was the dawning of a healthy sentiment\footnote{\textbf{sentiment} n.
an attitude or opinion}. When she went into the room which had been made into a nursery for her, she found that it was rather like the one she had slept in. It was not a child's room, but a grown-up person's room, with gloomy old pictures on the walls and heavy old oak chairs. A table in the center was set with a good substantial\footnote{\textbf{substantial} adj. enough to satisfy hunger} breakfast. But she had always had a very small appetite\footnote{\textbf{appetite} n. a physical desire for food}, and she looked with something more than indifference at the first plate Martha set before her. ``I don't want it,'' she said.
\chapter{Miscellaneous problems}
%=====================================================================
% \section{Fun problems}
% \label{sec:funproblems}
\sectionWithAnswers{Fun problems}{sec:funproblems}

In this section we present a number of problems for the interested reader to pursue.

\begin{Exercise}\label{ex:misc:riffle}
A riffle shuffle of a deck containing an even number of cards splits the deck into two equal parts, and then alternates the cards. That is, if the cards were originally numbered $1$ \ldots $10$, they will end up $1, 6, 2, 7, 3, 8, 4, 9, 5, 10$. Write a function {\tt riffle} that takes an even number of elements and returns the elements in the new order. The signature of your function should be:
\begin{code}
riffle : {a, b} (fin a) => [2*a]b -> [2*a]b
\end{code}
You might find the following Cryptol primitive functions useful:
\begin{Verbatim}
  length : {n, a, b} (fin n, Literal n b) => [n]a -> b
  take   : {front, back, a} (fin front) => [front + back]a -> [front]a
  drop   : {front, back, a} (fin front) => [front + back]a -> [back]a
  join   : {parts, each, a} (fin each) => [parts][each]a -> [parts * each]a
\end{Verbatim}
Use the interpreter to pass arguments to these functions to familiarize yourself with their operation first. What happens when you call {\tt riffle} with a sequence that has an odd number of elements?
\end{Exercise}
\begin{Answer}\ansref{ex:misc:riffle}
\begin{code}
riffle xs = join [ [x, y] | x <- fh | y <- sh ]
  where [fh, sh] = split xs
\end{code}
The Cryptol type-checker will {\em not} allow passing an odd number of elements to {\tt riffle}. To see the error message, issue: {\tt riffle [1..5]}.
\end{Answer}
\begin{Exercise}\label{ex:misc:riffle2}
Given a deck of 52 cards, 8 riffles return the deck to its original position. Write a theorem stating this fact and prove it.
\end{Exercise}
\begin{Answer}\ansref{ex:misc:riffle2}
Note that the actual contents of the input sequence are immaterial; we will just use 8-bit numbers.
\begin{code}
riffle8 : [52][8] -> Bit
property riffle8 deck = decks @ 8 == deck
  where decks = [deck] # [ riffle d | d <- decks ]
\end{code}
\end{Answer}
\begin{Exercise}\label{ex:misc:sort}
Define the merge-sort function in Cryptol that works over arbitrary finite sequences of words. Prove its correctness. You might want to split the proof into two parts. First prove that the output is a permutation of the input, and then prove that the output is always in increasing order.
\end{Exercise}
\begin{Answer}\ansref{ex:misc:sort}
A solution to the merge-sort problem can be found in the {\tt Examples} directory of the Cryptol distribution.
\end{Answer}
\begin{Exercise}\label{ex:misc:legato}
Legato's challenge refers to the verification of an 8-bit multiplication algorithm encoded in Mostek assembly code. More information on Legato's challenge can be found at:
\begin{center}
\url{http://www.cs.utexas.edu/~moore/acl2/workshop-2004/contrib/legato/Weakest-Preconditions-Report.pdf}
\end{center}
Write a Cryptol program to model Legato's multiplication algorithm following the machine model. Prove that the multiplier is correct.
\end{Exercise}
\begin{Answer}\ansref{ex:misc:legato}
A solution can be found at:
\begin{center}
\url{http://www.galois.com/blog/2009/07/08/legatosmultiplierincryptol}
\end{center}
\end{Answer}
\begin{Exercise}\label{ex:misc:skein}
The Cryptol distribution comes with a variety of crypto algorithm implementations, including AES and the SHA-3 candidate Skein. Study these definitions and experiment with them using the Cryptol interpreter.
\end{Exercise}
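As a closing aside, the riffle exercises above can be sanity-checked even without a Cryptol installation at hand. The following short Python sketch (purely illustrative; not part of the Cryptol distribution) models the same permutation and confirms both the reordering of a 10-card deck and the claim that eight riffles restore a 52-card deck:

```python
def riffle(xs):
    # Split the deck into two equal halves and interleave them,
    # first half leading -- the same permutation as the Cryptol riffle.
    assert len(xs) % 2 == 0, "riffle needs an even number of cards"
    half = len(xs) // 2
    fh, sh = xs[:half], xs[half:]
    return [card for pair in zip(fh, sh) for card in pair]

print(riffle(list(range(1, 11))))   # [1, 6, 2, 7, 3, 8, 4, 9, 5, 10]

# Eight riffles of a 52-card deck return it to its original order.
deck = list(range(52))
shuffled = deck
for _ in range(8):
    shuffled = riffle(shuffled)
print(shuffled == deck)             # True
```

Unlike the Cryptol version, the even-length requirement here is only a runtime assertion; ruling out odd-length inputs statically is exactly what the `[2*a]b` type in the exercise's signature provides.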
\chapter*{Vita}
\label{ch:vita}

Jay Jay Billings is a Research Scientist in the Neutron Scattering
Division (NSD) and Computer Science and Mathematics Division (CSMD) at
Oak Ridge National Laboratory. He leads the Scientific Computing and
Software Engineering group in NSD and the Research Software
Engineering group in CSMD. He holds a Bachelor’s degree in Physics
from Virginia Tech (class of 2005) and a Master of Science in
Theoretical Astrophysics from the University of Tennessee (class of
2008).

Mr. Billings’ research focuses on the design and implementation of
modeling and simulation tools for energy science, a large part of
which has been related to the study of scientific workflows in an HPC
context. At Oak Ridge National Laboratory, Mr. Billings leads the
Scientific Software Initiative within CSMD and is a member of the
Software Council, both of which are taking a new look at the way ORNL
develops software for the Department of Energy. He is a founding
member and current chair of the Science Working Group at the Eclipse
Foundation, where he also leads the Eclipse Integrated Computational
Environment and the Eclipse Advanced Visualization Project.
Mr. Billings was appointed to the Eclipse Architecture Council in
2016, and is a mentor for several additional Eclipse projects. He is a
member of the Association for Computing Machinery. Mr. Billings has
been funded by the Department of Energy Offices of Nuclear Energy,
Energy Efficiency and Renewable Energy, Advanced Scientific Computing
Research, Basic Energy Sciences, and Advanced Manufacturing.

In addition to his day job, Mr. Billings is a candidate for the PhD in
Energy Science from the Bredesen Center for Interdisciplinary Research
and Education at the University of Tennessee. He spends his spare time
with his family and singing.
% Created 2021-08-30 Mon 12:46
% Intended LaTeX compiler: pdflatex
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{grffile}
\usepackage{longtable}
\usepackage{wrapfig}
\usepackage{rotating}
\usepackage[normalem]{ulem}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{capt-of}
\usepackage{hyperref}
\input{./settings}
\titlegraphic{\hfill \includegraphics[width=.18\textwidth]{sslab-logo.pdf}}
\author{Dinh Duy Kha}
\date{\today}
\title{ARM GPU Usage}
\hypersetup{
 pdfauthor={Dinh Duy Kha},
 pdftitle={ARM GPU Usage},
 pdfkeywords={},
 pdfsubject={},
 pdfcreator={Emacs 28.0.50 (Org mode 9.5)},
 pdflang={English}}
\begin{document}

\maketitle
\tableofcontents

\section{Test}
\label{sec:org89845b3}
\begin{quote}
OpenCL + ARM Mali + Linux DRM GPU stack analysis
\begin{enumerate}
\item How is a GPU context defined in terms of memory and function calls?
\item How is the OpenCL <-> GPU kernel JIT done? Where is it done?
\item etc.
\end{enumerate}
\end{quote}

\section{OK\hfill{}\textsc{ATTACH}}
\label{sec:org6aa4260}
\url{\_20210830\_122312screenshot.png}
\end{document}
\chapter{Second appendix} \label{ap:ap2} This is the second appendix.
\hypertarget{Conclusion}{} \chapter{Conclusion} \label{Chapter:Conclusion} \lipsum[13-15]
\documentclass{article}
\usepackage[margin=1.0in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage[colorlinks = true, urlcolor = blue]{hyperref}
\usepackage{csquotes}
\usepackage{tikzsymbols}
\setlength{\parindent}{0pt} %--don't indent paragraphs

\title{CS 6540 --- Human-Computer Interaction\\\textbf{Project 5: Conducting Interviews}}
\author{ }
\date{\textbf{Due:} TBD}

\usepackage{natbib}
\usepackage{graphicx}

\begin{document}

\maketitle

\section{Summary}

\textbf{Note:} As previously mentioned, this is a class that is partially focused on giving you \textit{hands-on experience} with \textit{skills that are useful} for conducting HCI research. As a result, this handout sometimes uses the term `research' as an adjective for, e.g., your focus topic for your group projects. We wish to emphasize that despite the occasional usage of that term, the purpose of these projects is entirely educational; we are not actually conducting research (i.e., a systematic investigation designed to develop or contribute to generalizable knowledge), and any findings from these group projects are not publishable. Review from a human subjects board is necessary in order to perform human subjects research (as opposed to classroom assignments, which are meant to increase your personal expertise and knowledge).\\

This is the second of four group projects.\\

You should use the same group as in the previous project. Additionally, you should use (roughly) the same research topic/research question as in the previous project.

\textit{Note:} This does not mean that your research topic/research questions cannot evolve or become more specific over the course of the projects; however, you should not change topics entirely between projects.\\

The purpose of this assignment is to get hands-on experience with writing interview questions, conducting interviews, and transcribing interviews.\\

\textbf{Learning Objectives:}
\begin{enumerate}
    \item Be able to write neutral, useful interview questions based on a research topic and research questions.
    \item Be able to conduct semi-structured interviews.
    \item Be able to transcribe interviews.
    \item Be able to reflect on the process of drafting, conducting, and transcribing an interview.
\end{enumerate}

\section{Your Assignment Tasks}

\begin{enumerate}
    \item Draft interview questions.
    \item Establish recruitment criteria and strategies and recruit participants.
    \item Conduct interviews.
    \item Partially transcribe interviews.
    \item Complete and turn in the writeup. Also turn in the interview recordings and transcriptions.
\end{enumerate}

\section{Other Details and Requirements}

\begin{itemize}
    \item \textbf{Each member of your group} must conduct \textbf{at least 2 interviews}; 3--4 interviews per group member is preferable. Additionally, we require that each group member be present at \textbf{at least 2 other interviews} to observe and take notes. Please have a maximum of 2 members present at interviews.
    \item Overall, your group must have \textbf{at least 1 cumulative hour of interview time per group member}.
    \item As with Project 2 (Think-Aloud Protocol), you will be recording participant sessions. For this project we are asking for audio recordings only (not video or screen capture). As with Project 2, think about any privacy or data retention issues and prepare any scripts, if necessary, for explaining the situation to the participants and obtaining consent.
\item \textbf{Each group member} must transcribe \textbf{at least 15 minutes} of interview audio. Everyone should transcribe a different 15-minute time window. \item You should aim to have interviews take between 30 and 60 minutes. There is no set number of questions you need to ask---papers that we have read include their interview questions and how long the interviews took, which can help you calibrate. \item While you shouldn't wildly change your interview questions between interviews, you should feel free to change, add, or drop questions as you proceed. As much as possible, this should only be done in collaboration with your group members. \item Participants can remain anonymous---we don't need to know who they are. \item Participants must be 18+. \item In general, you can ask any questions that: \begin{itemize} \item Are ethical. \item Have a point (What will you learn from them? Is it interesting? Would other researchers care?). \begin{itemize} \item More particularly, \textbf{are interview questions the best (or a reasonable) way to get at the information that you want to know?} If the answer is, ``actually, the research questions would best be answered by analyzing user logs and asking people is a poor substitute,'' then you should \textit{come up with (related) research questions that are more suited to being answered by interviews}. \item Imagine that you're writing a paper (like those we've read) that details the research questions, interview questions, and (eventually) results. \textbf{Would you find the paper convincing?} \end{itemize} \item Have something to do with technology. \item Have something to do with each other and your literature search. \end{itemize} Keep in mind that the results from your literature survey, this project, and the upcoming questionnaire project will be combined into one coherent written report for the final group project. 
\end{itemize} \section{To Turn In} You will be turning in: \begin{enumerate} \item \textbf{[60 points]} All of your recorded sessions. Each audio file should have an accompanying file with the exact interview materials (script, questions) that were used to conduct that specific interview. \textbf{Do not use Canvas to turn in these files.} We suggest that you use \href{https://gcloud.utah.edu/}{https://gcloud.utah.edu/}, which should offer you unlimited storage. Keep the recorded sessions private (not publicly viewable), but please share them with both \href{mailto:[email protected]}{[email protected]} and \href{mailto:[email protected]}{[email protected]}. \item \textbf{[15 points]} The partial interview transcripts. \textbf{Each member} should turn in their own transcript. They should indicate which file is being transcribed and what time frame in the file the transcription covers. They should also indicate which interviews they conducted and which interviews they attended as an observer/note-taker. \item The writeup of this assignment, as a PDF, to Canvas. Include all group member names on the writeup. Turn in only one writeup per group. As indicated below, all the writeup questions total 85 points. \end{enumerate} \section{The Writeup} The writeup is a reflection on drafting, conducting, and transcribing semi-structured interviews. In general we aren't as much looking for the ``right'' answer as for thoughtful reflection. There may be some redundancy in your answers, but in general attempt to use the question answers to convey any interesting insights that you encountered during the assignment. \begin{enumerate} \item \textbf{[25 points]} Give the final version of any interview questions and scripts that you used to conduct the interviews. Also include recruitment materials (e.g., email text). If the materials changed substantially over the course of the project, then give the original drafts as well. 
\item \textbf{[8 points]} Give the research questions that you were trying to address via your interview. Did the answers from the interview questions help answer the research questions? Why or why not?
\item \textbf{[6 points]} How did the interview materials change (or not change) over time? Why?
\item \textbf{[6 points]} If you had to categorize your interview questions into groups that represent interview sub-topics, what would they be?
\item \textbf{[8 points]} How did you go about trying to write ``good'' interview questions? In addition to indicating any general properties you tried to achieve, give some examples of exact question wordings and what motivated them.
\item \textbf{[4 points]} What were the inclusion criteria for participation in your study? What were the exclusion criteria?
\item \textbf{[4 points]} How did you recruit for your interviews? What worked well? What didn't work well?
\item \textbf{[2 points]} If you were conducting these interviews for research (instead of coursework), would you recruit differently? Why or why not?
\item \textbf{[2 points]} If you were conducting these interviews for research, are there any other things that you would have done differently? Why or why not?
\item \textbf{[2 points]} Did group members feel that they learned anything from observing each other? Why or why not?
\item \textbf{[2 points]} Did group members feel that they learned anything from transcribing interview audio? Why or why not?
\item \textbf{[0 points]} Did you use other resources to learn about drafting, conducting, or transcribing semi-structured interviews? If so, what were they, and were they helpful?
\item \textbf{[4 points]} Were there other research topics or research questions that you (as a group) would rather have pursued if you weren't required to conduct interviews?
\item \textbf{[2 points]} Were there any ways that you felt the format and requirements of this project limited questions that you would want to ask, if you were pursuing this project for research purposes? \item \textbf{[5 points]} Why are your research questions/interview subtopics/interview questions \textbf{well-suited to an interview method}, specifically? Why are they not better addressed by an eyetracking study/questionnaire (N$=$500)/log analysis/etc.? \item \textbf{[5 points]} Why are your research questions/interview subtopics important? Who cares, and why? \end{enumerate} \section*{Notes} \begin{itemize} \item Distance collaborations are alright! Specifically: \begin{itemize} \item You can run interviews remotely (phone or video chat) or recruit remote participants. \item If it is difficult to coordinate attending group members' interviews, you can instead listen to the audio recordings of the interviews. (It is, however, better to attend in person---you'll both be able to see body language and observe interactions that take place before recording starts.) \end{itemize} \item There are many blog posts on transcribing, e.g.: \href{https://qualpage.com/2017/01/19/tips-on-transcribing-qualitative-interviews-and-naturally-occurring-interaction/}{https://qualpage.com/2017/01/19/tips-on-transcribing-qualitative-interviews-and-naturally-occurring-interaction/} \item For \textbf{every interview question}, you should ask yourself and be able to answer: \begin{itemize} \item Why are we asking this question? \item What are we hoping to learn from any answers? \item (If your answers to the above two points don't closely match the interview questions you're asking, can you reasonably ask the question that you actually \textit{mean}?) \item If we wrote up something about the question answers in a paper, what would it look like? What would be able to claim? What would we \textit{not} be able to claim? 
\item Can you think of ways to misunderstand or be confused about the question (definitions of terms, what ``this'' refers to, taking out of context, etc.)? (Interview pilots, of course, are very useful for this reason!) \end{itemize} \end{itemize} \end{document}
\begin{appendices}

\chapter{Appendix to Chapter \ref{ch:1}}\label{cha:append-chapt-refch:1}

\section{Auxiliary Lemmata}

Fundamental identity
\begin{equation}
  \label{eq:A}
  e^{i\pi}=-1.
\end{equation}

Equivalence relation
\begin{equation}
  \label{eq:B}
  A=B.
\end{equation}

\section{Proofs}

\chapter{Appendix to Chapter \ref{ch:3}}

\section{Proofs}

\section{Supplementary Tables and Figures}

\begin{longtable}{cc}
% Caption to appear on the first page and in the list of tables
\caption[Optional Short caption (used in list of tables)]{A long table}
\label{grid_mlmmh} \\
Heading that appears & on first page only\\
\hline
\endfirsthead
\caption[]{(continued)}\\
Heading that appears & on all pages\\
\hline
\endhead
% optional footer
\hline
\multicolumn{2}{r}{{Continued on next page}} \\
\endfoot
% footer to appear on the last page (empty)
\endlastfoot
Contrary to popular & belief, Lorem Ipsum \\
is & not \\
simply & random \\
text &. It \\
has & roots \\
in & a \\
piece & of \\
classical & Latin \\
literature & from \\
45 & BC \\
, making & it \\
over & 2000 \\
years old. Richard & Mc \\
Clintock &, a \\
Latin & professor \\
at & Hampden \\
-Sydney & College \\
in & Virginia \\
, looked & up \\
one & of \\
the & more \\
obscure & Latin \\
words &, consectetur \\
, from & a \\
Lorem & Ipsum \\
passage &, and \\
going & through \\
the & cites \\
of & the word in \\
classical & literature , discovered the \\
undoubtable & source. Lorem Ipsum \\
comes & from \\
sections & 1 \\
.10 &.32 \\
and & 1 \\
.10 &.33 \\
of &"de \\
Finibus & Bonorum \\
et & Malorum \\
" (The & Extremes \\
of & Good \\
and & Evil \\
) by & Cicero \\
, written & in \\
45 & BC \\
. This & book \\
is & a \\
treatise & on \\
the & theory \\
of & ethics \\
, very & popular \\
during &the \\
Renaissance &.
The \\ first & line \\ of & Lorem \\ Ipsum, "Lorem ipsum & dolor \\ sit & amet \\..", comes from a & line \\ in & section 1.10.32.\\ \end{longtable} \begin{sidewaysfigure} \centering Supplementary figures and tables should be placed in the appendix, not at the end of a chapter. To rotate big tables and figures $90^{\circ}$, use the rotating package and the sidewaystable and sidewaysfigure environments. This ensures that the figure and caption get rotated, but the page number stays at the bottom of the page. \caption{Supplementary Figure} \label{fig:figuresup1} \end{sidewaysfigure} \begin{figure}[ht] \centering This is another supplementary figure. \caption{Another Figure} \label{fig:figuresup3} \end{figure} \end{appendices}
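The note above explains when to use the rotating package's sideways environments. As a concrete reference, here is a minimal sidewaystable skeleton (the column contents are placeholders; it assumes \verb|\usepackage{rotating}| in the preamble, as this template already loads for sidewaysfigure):

```latex
% Minimal sidewaystable sketch; requires \usepackage{rotating}.
% The caption rotates with the table, while the page number
% remains at the bottom of the page.
\begin{sidewaystable}
  \centering
  \begin{tabular}{lcc}
    \hline
    Dataset & Accuracy & Time (s) \\
    \hline
    Example & 0.91     & 12.3     \\
    \hline
  \end{tabular}
  \caption{A rotated supplementary table.}
  \label{tab:sideways-example}
\end{sidewaystable}
```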
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=3cm]{geometry}
\setlength{\parindent}{0pt} %--don't indent paragraphs
\begin{document}

\centerline{\Large\bfseries Capital Gains Tax Calculations for \VAR{ tax_year }-\VAR{ ("{:02d}".format((tax_year + 1) % 100)) }}

\BLOCK{ set total_proceeds = namespace(value=Decimal(0)) }
\BLOCK{ set total_gain = namespace(value=Decimal(0)) }
\BLOCK{ set total_loss = namespace(value=Decimal(0)) }
\BLOCK{ set acquisition_count = namespace(value=0) }
\BLOCK{ set disposal_count = namespace(value=0) }
\BLOCK{ for date_index, symbol_dict in calculation_log.items() }
\section*{\VAR{ date_from_index(date_index).strftime("%d %B %Y") }}
\BLOCK{ for symbol, entries in symbol_dict.items() }
\BLOCK{ if symbol.startswith("sell$") }
\BLOCK{ set symbol = symbol[5:] }
\BLOCK{ set is_disposal = True }
\BLOCK{ else }
\BLOCK{ set symbol = symbol[4:] }
\BLOCK{ set is_disposal = False }
\BLOCK{ endif }
\BLOCK{ set overall_quantity = entries|sum(attribute="quantity") }
\BLOCK{ if is_disposal }
\BLOCK{ set overall_disposed = round_decimal(entries|sum(attribute="amount"), 2) }
\BLOCK{ set total_proceeds.value = total_proceeds.value + overall_disposed }
\BLOCK{ set overall_gain = round_decimal(entries|sum(attribute="gain"), 2) }
\BLOCK{ set disposal_count.value = disposal_count.value + 1 }
\subsection*{Disposal \VAR{ disposal_count.value }: \VAR{ overall_quantity } shares of \VAR{ symbol } for £\VAR{ "{:,}".format(overall_disposed) }}
\BLOCK{ if overall_gain > 0 -}
Chargeable \textbf{gain} is £\VAR{ "{:,}".format(overall_gain) }
\BLOCK{ set total_gain.value = total_gain.value + overall_gain }
Capital gain to date is £\VAR{ "{:,}".format(total_gain.value) }
\BLOCK{ else }
Chargeable \textbf{loss} is £\VAR{ "{:,}".format(-overall_gain) }
\BLOCK{ set total_loss.value = total_loss.value - overall_gain }
Capital loss to date is £\VAR{ "{:,}".format(total_loss.value) }
\BLOCK{ endif }
\BLOCK{ else }
\BLOCK{ set acquisition_cost = round_decimal(entries[0].allowable_cost, 2) }
\BLOCK{ if overall_quantity > 0 }
\BLOCK{ set acquisition_count.value = acquisition_count.value + 1 }
\subsection*{Acquisition \VAR{ acquisition_count.value }: \VAR{ overall_quantity } shares of \VAR{ symbol } for £\VAR{ "{:,}".format(acquisition_cost) }}
\BLOCK{ else }
\subsection*{Management fee for \VAR{ symbol } of £\VAR{ "{:,}".format(acquisition_cost) }}
\BLOCK{ endif}
\BLOCK{ endif }
\begin{enumerate}
\BLOCK{ for entry in entries }
\item \textbf{\VAR{ entry.rule_type.name.replace("_", " ") }}.
Quantity: \VAR{ entry.quantity },
\BLOCK{ if is_disposal }
\BLOCK{ if entry.quantity < overall_quantity }
disposal proceeds: £\VAR{ "{:,}".format(round_decimal(entry.amount, 2)) },
\BLOCK{ endif }
allowable
\BLOCK{ if entry.rule_type.name == "BED_AND_BREAKFAST" }
cost on \VAR{ date_from_index(entry.bed_and_breakfast_date_index).strftime("%d %b %Y") }:
\BLOCK{ else }
cost:
\BLOCK{ endif }
£\VAR{ "{:,}".format(round_decimal(entry.allowable_cost, 2)) },
\BLOCK{ if entry.gain > 0}
gain: £\VAR{ "{:,}".format(round_decimal(entry.gain, 2)) }.
\BLOCK{ else }
loss: £\VAR{ "{:,}".format(round_decimal(-entry.gain, 2)) }.
\BLOCK{ endif }
\BLOCK{ else }
actual cost: £\VAR{ "{:,}".format(round_decimal(-entry.amount, 2)) }.
\BLOCK{ endif}
\BLOCK{ if is_disposal }
Number of shares left in the pool:
\BLOCK{ else }
Number of shares in the pool:
\BLOCK{ endif }
\VAR{ entry.new_quantity }\BLOCK{ if entry.new_quantity > 0 }, new pool cost: £\VAR{ "{:,}".format(round_decimal(entry.new_pool_cost, 2)) }\BLOCK{ endif }.
\BLOCK{ endfor}
\end{enumerate}
\BLOCK{ endfor }
\BLOCK{ endfor }

\section*{Overall}
\subsection*{Number of acquisitions: \VAR{ acquisition_count.value }}
\subsection*{Number of disposals: \VAR{ disposal_count.value }}
\subsection*{Total disposal proceeds: £\VAR{ "{:,}".format(total_proceeds.value) }}
\subsection*{Total capital gain before loss: £\VAR{ "{:,}".format(total_gain.value) }}
\subsection*{Total capital loss: £\VAR{ "{:,}".format(total_loss.value) }}
\subsection*{Total capital gain after loss: £\VAR{ "{:,}".format(total_gain.value - total_loss.value) }}

\end{document}
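The template above only typesets numbers computed upstream (the `allowable_cost`, `gain`, and pool fields on each entry). As an illustration of the kind of pooled-cost bookkeeping it reports, here is a minimal Python sketch of a single disposal from a share pool; the function names and figures are hypothetical and not taken from the real calculator, which also handles same-day and bed-and-breakfast rules:

```python
# Hypothetical sketch of share-pool disposal arithmetic: the allowable
# cost of a partial sale is apportioned pro rata from the pooled cost,
# and the gain is proceeds minus allowable cost.
from decimal import Decimal, ROUND_HALF_UP


def round_decimal(value, places=2):
    """Round a Decimal to a fixed number of places, money-style."""
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal("0.01")
    return value.quantize(quantum, rounding=ROUND_HALF_UP)


def dispose_from_pool(pool_quantity, pool_cost, sell_quantity, proceeds):
    """Sell sell_quantity shares from a pooled holding."""
    allowable_cost = pool_cost * sell_quantity / pool_quantity
    gain = proceeds - allowable_cost
    new_quantity = pool_quantity - sell_quantity
    new_pool_cost = pool_cost - allowable_cost
    return (round_decimal(allowable_cost), round_decimal(gain),
            new_quantity, round_decimal(new_pool_cost))


# 100 shares pooled at a total cost of £1,000; sell 40 of them for £600:
print(dispose_from_pool(Decimal(100), Decimal(1000), Decimal(40), Decimal(600)))

# The "{:,}".format(...) calls in the template add thousands separators:
print("{:,}".format(Decimal("12345.67")))  # prints 12,345.67
```

Using `Decimal` throughout (as the template's `Decimal(0)` accumulators suggest the calculator does) avoids the binary-float rounding drift that would otherwise creep into monetary totals.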
% Appendix Template
\newcommand{\major}{1}
\newcommand{\minor}{1}
\newcommand{\undPrefix}{\major_\minor}
\newcommand{\dotPrefix}{\major.\minor}
\newcommand{\scoPrefix}{\major-\minor}
\newcommand{\filePrefix}{\undPrefix}

% \chapter{Results of experiment 1.1} % Main appendix title
\chapter{Test Title} % Main appendix title
% \label{sec:appendices}
% \label{Appendix1-1}
% For referencing this appendix elsewhere, use \ref{Appendix\scoPrefix}
\label{Appendix\scoPrefix}

These experiments are discussed \hyperref[disc:h1]{here}.

\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\linewidth}
% \centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/1_1/covertype}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/covertype}
\caption{Exp. 1.1 with Covertype. SVM is outperformed}
\label{fig:\undPrefix_covertype}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/digits}
\caption{Exp. 1.1 with Digits. SVM is outperformed}
% \label{fig:1_1_digits}
\label{fig:\undPrefix_digits}
\end{subfigure}
\end{figure}

\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/fall_detection}
\caption{Exp. 1.1 with Fall Detection. Neither RFF nor \Nys\ outperforms SVM}
\label{fig:\undPrefix_fall_detection}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/mnist}
\caption{Exp. 1.1 with MNIST. SVM is outperformed}
\label{fig:\undPrefix_mnist}
\end{subfigure}
\end{figure}

\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/pen_digits}
\caption{Exp. 1.1 with Pen Digits. SVM is outperformed}
\label{fig:\undPrefix_pen_digits}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/satellite}
\caption{Exp. 1.1 with Satellite. SVM is outperformed}
\label{fig:\undPrefix_satellite}
\end{subfigure}
\end{figure}

\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}
\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/segment}
\caption{Exp. 1.1 with Segment. SVM is outperformed}
\label{fig:\undPrefix_segment}
\end{subfigure}%
\begin{subfigure}[t]{0.5\linewidth}
\centering\captionsetup{width=.8\linewidth}
\includegraphics[width=\imgscale\linewidth]{Figures/\filePrefix/vowel}
\caption{Exp. 1.1 with Vowel. SVM is outperformed}
\label{fig:\undPrefix_vowel}
\end{subfigure}
\end{figure}

\let\major\undefined
\let\minor\undefined
\let\undPrefix\undefined
\let\dotPrefix\undefined
\let\scoPrefix\undefined
\let\filePrefix\undefined
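Because every label and figure path above is derived from the \major/\minor macros, the appendix can be cloned for another experiment by changing only those two definitions. A sketch, assuming a hypothetical experiment 1.2 that follows the same layout:

```latex
% Hypothetical appendix for experiment 1.2: only the version macros change.
\newcommand{\major}{1}
\newcommand{\minor}{2}
\newcommand{\undPrefix}{\major_\minor}
\newcommand{\dotPrefix}{\major.\minor}
\newcommand{\scoPrefix}{\major-\minor}
\newcommand{\filePrefix}{\undPrefix}

\chapter{Results of experiment \dotPrefix}
\label{Appendix\scoPrefix}   % referenced elsewhere as \ref{Appendix1-2}

% Figures are then loaded from Figures/\filePrefix/... exactly as above.
```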
% !TeX root = ../../python-snippets.tex
\section{Non-Categorized}

All recipes that do not fall into one of the other categories are listed here.

\subsection{Auto Login On Website}

Using \lstinline{selenium} allows you to automatically open a new browser window and log in to a certain website, e.g. GitHub.

\lstinputlisting[caption=auto\_login\_website.py]{../third_party/auto_login_website.py}

\subsection{Count Python Bytes In A Directory}

\lstinline{PyFilesystem2} is a file system abstraction layer for Python. You can use a file system object to analyse your files. The following recipe shows you how you can get the number of Python source code bytes in your directory.

\lstinputlisting[caption=count\_python\_bytes.py]{../third_party/count_python_bytes.py}

\subsection{Create World Maps}

\lstinline{folium} builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. Manipulate your data in Python, then visualize it in a Leaflet map via \lstinline{folium}. This recipe creates a world map with the USA in the centre of the map.

\lstinputlisting[caption=folium\_snippet.py]{../third_party/folium_snippet.py}

\subsection{Print Formatted JSON}

Printing a formatted JSON object is as easy as:

\lstinputlisting[caption=formatted\_json.py]{../third_party/formatted_json.py}

\begin{lstlisting}[caption=Output of formatted\_json.py]
$ python formatted_json.py
{
    "userId": 1,
    "id": 1,
    "title": "delectus aut autem",
    "completed": false
}
\end{lstlisting}

\textbf{Note:} The snippet is part of the third-party chapter as it makes use of the \lstinline{requests} library. The \lstinline{json} module is part of Python's standard library.

\subsection{Inspect Docker}

You can inspect running Docker containers and existing images using the \lstinline{docker} module.
\lstinputlisting[caption=inspect\_docker.py]{../third_party/inspect_docker.py}

\subsection{Is Holiday}

The \lstinline{holidays} module provides an elegant and easy way to check whether a given date is a holiday in the specified region.

\lstinputlisting[caption=is\_holiday.py]{../third_party/is_holiday.py}

\subsection{Web Scraping}

The recipe in the \lstinline{mathematicians.py} file shows you how you can scrape information from the internet. It makes use of the following third-party packages:

\begin{itemize}
    \item BeautifulSoup
    \item requests
\end{itemize}

The recipe returns the most popular mathematicians. As it's too long, please have a look at it in the repository.

\subsection{Interacting With The Medium API}

Another recipe, which is too long but worth mentioning, is the one contained in the \lstinline{medium.py} file. Running the file gives you the ability to interact with the Medium API via the command-line.

\subsection{Mocking Requests}

When writing tests for your application, you may come across situations where you have to mock requests. The following Listing shows you how you can do this by using the standard library's \lstinline{unittest.mock} module.

\lstinputlisting[caption=mocking\_requests.py]{../third_party/mocking_requests.py}

\subsection{Mypy Example}

The following Listing reveals the usage of \lstinline{mypy} as a static type checker. Running the snippet will throw a \lstinline{TypeError} as expected.

\lstinputlisting[caption=mypy\_example.py]{../third_party/mypy_example.py}

\subsection{NumPy Array Operations}

Running the following code snippet shows you some of the available \lstinline{numpy} array operations.
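The full listing lives in the repository; as a minimal stand-alone sketch of such element-wise operations (the array contents are an assumption, chosen to match the output shown for numpy\_array\_operations.py):

```python
import numpy as np

# A small integer array, assumed to mirror numpy_array_operations.py.
a = np.array([1, 2, 3, 4, 5])

print(a * 2)   # element-wise doubling of every entry
print(a ** 2)  # element-wise squaring of every entry
```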
\lstinputlisting[caption=numpy\_array\_operations.py]{../third_party/numpy_array_operations.py}

\begin{lstlisting}[caption=Output of numpy\_array\_operations.py]
$ python numpy_array_operations.py
[ 2  4  6  8 10]
[ 1  4  9 16 25]
\end{lstlisting}

\subsection{Parse Complex Excel Sheets}

You can parse more complex Excel sheets by using \lstinline{pandas} and installing a \lstinline{pandas} extension called \lstinline{xlrd}.

\lstinputlisting[caption=parse\_complex\_excel\_sheets.py]{../third_party/parse_complex_excel_sheets.py}

\subsection{Nmap For Python}

Creating your own port scanner becomes easy when using \lstinline{nmap}. Luckily, a Python package exists providing access to nmap: \lstinline{python-nmap}.

\textbf{Hint:} Make sure you have Nmap installed on your operating system as \lstinline{python-nmap} only provides access to the Nmap API.

\lstinputlisting[caption=port\_scanner\_nmap.py]{../third_party/port_scanner_nmap.py}

\subsection{Test Renamed Class}

Renaming a class can happen over time. If you want to test whether your code is backwards compatible, you can make use of the following snippet.

\lstinputlisting[caption=pytest\_rename\_class\_backwards\_compatibility.py]{../third_party/pytest_rename_class_backwards_compatibility.py}

\subsection{Reduce Pandas Dataframe Memory}

When dealing with large data sets, it can be an advantage to change column types from \lstinline{object} to \lstinline{category} as shown in the next Listing.

\lstinputlisting[caption=reduce\_pandas\_df\_memory.py]{../third_party/reduce_pandas_df_memory.py}

\subsection{Async Libraries}

Before \lstinline{async}/\lstinline{await} became so popular and part of the standard library, you had several options to use asynchronous techniques in your project. The three main ones were \lstinline{asyncio}, \lstinline{gevent} and \lstinline{tornado}. You can find an example for each in the \textit{third\_party} directory of the repository.
The file names are as follows:

\begin{itemize}
    \item \lstinline{test_asyncio.py}
    \item \lstinline{test_gevent.py}
    \item \lstinline{test_tornado.py}
\end{itemize}

\subsection{Unzip World Bank Data}

The following recipe shows you how you can download a \lstinline{.zip} file from an online source, extract and work with the data. Apart from the \lstinline{requests} library, only standard library modules are used.

\lstinputlisting[caption=world\_bank\_data.py]{../third_party/world_bank_data.py}

\subsection{Simple Debugger}

When writing Python code you may come across situations where you want to find out what's going on. Sometimes you may make use of an actual debugger, but most of the time inserting some \lstinline{print}-statements seems just fine. The \lstinline{PySnooper} package is probably the best reason why you should never use \lstinline{print}-statements to debug your code again. The following Listing shows you the simple usage of its \lstinline{snoop} decorator.

\lstinputlisting[caption=simple\_debugger.py]{../third_party/simple_debugger.py}

The output is too long to be covered here. Feel free to run the snippet from your terminal and get a sense of how amazing and helpful this little package can be.

\subsection{Text Analysis}

If you want a simple way to perform text analysis, you can use \lstinline{TextBlob}. The following snippet shows you a sample usage. Make sure to read the docs of the package as it provides functionality for translations and further analysis, too.

\lstinputlisting[caption=text\_analysis.py]{../third_party/text_analysis.py}

\textbf{Note:} Make sure to download the needed data beforehand, otherwise an exception is thrown telling you to download it.

\begin{lstlisting}[caption=Download data using NLTK]
>>> import nltk
>>> nltk.download('averaged_perceptron_tagger')
\end{lstlisting}

\subsection{Complex List Ops}

In your day-to-day work you will very likely come across performance issues in Python.
Sometimes it's a good idea to choose a different data structure than the one you have been using to speed up certain operations. However, you may come across situations where even this is not enough. Instead of switching to another language like C, you may want to check out the \lstinline{blist} package first.

\glqq The \lstinline{blist} is a drop-in replacement for the Python list that provides better performance when modifying large lists.\grqq

Here is a small snippet illustrating the huge impact of using a \lstinline{blist} instead of a \lstinline{list}:

\lstinputlisting[caption=complex\_list\_ops.py]{../third_party/complex_list_ops.py}

\begin{lstlisting}[caption=Output of complex\_list\_ops.py]
$ python complex_list_ops.py
"Builtin" spend time: 24.61005
"Blist" spend time: 4.675813
\end{lstlisting}

\glqq The blist package also provides \lstinline{sortedlist}, \lstinline{sortedset}, [...] \lstinline{btuple} types\grqq, and much more.
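The \lstinline{blist} example relies on a third-party package, but the same lesson — match the data structure to the operation — can be shown with the standard library alone: inserting at the front of a \lstinline{list} shifts every element, while \lstinline{collections.deque} appends at the left in constant time. A small sketch (absolute timings vary by machine):

```python
import timeit
from collections import deque

# Repeated front-insertion: O(n) per insert on a list,
# O(1) per appendleft on a deque.
list_time = timeit.timeit(
    "lst.insert(0, 0)", setup="lst = []", number=10_000)
deque_time = timeit.timeit(
    "dq.appendleft(0)",
    setup="from collections import deque; dq = deque()",
    number=10_000)

print(f"list.insert(0, ...): {list_time:.4f}s")
print(f"deque.appendleft:    {deque_time:.4f}s")
```

On typical machines the deque variant is orders of magnitude faster, for the same reason \lstinline{blist} beats \lstinline{list} on large modifications.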
\documentclass[bigger]{beamer}
\input{header-beam} % change to header-handout for handouts

% ====================
\title[Lecture 11]{Logic I F13 Lecture 11}

\date{October 15, 2013}
% ====================

\include{header}

\setlength{\fitchprfwidth}{5em}

\section{Quantifiers}

\subsec{Determiners in English}{
\bit
\item Articles: a, the
\item Cardinal numbers: zero, one, two, \dots
\item Quantifiers: all, few, many, several, some, every, each, any, no
\item either, neither, both
\item \dots
\eit
}

\subsec{Quantifiers in FOL}{
\bit
\item $\forall$: \emph{universal} quantifier, ``for all''
\[ \sf \forall x\, Cube(x) \]
(Everything is a cube)
\item $\exists$: \emph{existential} quantifier, ``exists''
\[ \sf \exists y\, Large(y) \]
(Something is large)
\eit
}

\subsec{Variables and Terms}{
\bit
\item Variables: $x$, $y$, $z$, $x_1$, $x_2$, \dots
\item Terms:
\bit
\item Constant symbols and variables are terms
\item If $f$ is an $n$-place function symbol, and $t_1$, \dots, $t_n$ are terms, then $f(t_1, \dots, t_n)$ is a term
\eit
\item E.g., in the language of arithmetic:
\[ 1 \qquad x \qquad (1 + (x + y)) \]
\eit
}

\subsec{Well-Formed Formulas (wff's)}{
\bit
\item If $P$ is an $n$-place predicate symbol, and $t_1$, \dots, $t_n$ are terms, then $P(t_1, \dots, t_n)$ is an atomic wff
\item Every atomic wff is a wff
\item If $P$ is a wff, so is $\lnot P$
\item If $P_1$, \dots, $P_n$ are wffs, so are
\[ (P_1 \land {} \dots \land P_n) \text{ and } (P_1 \lor {} \dots \lor P_n) \]
\item If $P$ and $Q$ are wffs, then so are
\[ (P \to Q) \text{ and } (P \iff Q) \]
\item If $P$ is a wff and $x$ is a variable, then
\[ \forall x\, P \text{ and } \exists x\, P \]
are wffs
\eit
}

\subsec{Examples}{
\bit
\item Atomic wffs: $\sf Cube(x)$, $\sf Larger(x, x)$, $\sf Larger(x, y)$, $\sf Larger(z, a)$, $\sf a = y$
\item $\sf Large(x) \land Cube(x)$
\item $\sf Large(x) \to (Cube(x) \land Larger(x, b))$
\item $\sf \forall x\, Cube(x)$
\item $\sf \exists x\, (Large(x) \land Cube(x))$
\item $\sf
\exists y\, Cube(x)$
\item $\sf \forall x\, Cube(x) \land \forall x\, Large(x)$
\item $\sf \forall x\, (Cube(x) \land \forall x\, Large(x))$
\item $\sf \forall x\, Cube(x) \land Large(x)$
\eit
}

\subsec{Scope and Binding}{
\[ \underline{\forall {\color{red}x}(Cube({\color{red}x}) \to (\underline{\exists {\color{blue}x}\, Smaller({\color{blue}x}, y)} \land \underline{\exists {\color{green}y}(Tet({\color{green}y}) \land Larger({\color{green}y}, {\color{red}x}))}))} \]
\bits
\item The \emph{scope} of a quantifier extends till the end of the wff it precedes
\item Variables are \emph{bound} by the nearest matching quantifier to the left in whose scope they are
\item Variables not bound are \emph{free}
\item A \emph{sentence} is a wff with no free variables
\eit
}

\section{Semantics of Quantifiers and Variables}

\subsec{Possible Worlds and First-order Structure}{
\bit
\item Collection of objects (\emph{domain}, not empty)
\item Referents for each individual constant (which object it names)
\item Properties of each object (shape, size, position on board)
\bit\item Extension of each 1-place predicate symbol: collection of objects it applies to\eit
\item Relations of each pair of objects (larger, same row, etc)
\bit\item Extension of each $n$-place predicate symbol: collection of tuples of objects standing in the relation\eit
\eit
}

% Redo this -- motivate by ``\forall x Cube(x)'' should say ``everything is a cube'' -- but what if it's more complicated than just ``Cube(x)''? Less abstract at the beginning!
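The TODO comment above asks for a less abstract opening. One possible extra slide — an assumption, not part of the original deck, written with the deck's own \subsec/\bit macros and using the truth conditions stated on the following slide:

```latex
\subsec{A Concrete Case First}{
\bit
\item Take a world $W$ with just two objects: $\alpha$, a small cube,
  and $\beta$, a large tetrahedron
\item Is $\sf \forall x\, Cube(x)$ (``everything is a cube'') true in $W$?
\bit
\item Let $\sf n$ be a new name and check $\sf Cube(n)$ as $\sf n$
  names each object in turn
\item $\sf n$ names $\alpha$: true; $\sf n$ names $\beta$: false
\item So $\sf \forall x\, Cube(x)$ is false in $W$
\eit
\item $\sf \exists x\, Cube(x)$ (``something is a cube'') is true
  in $W$: $\alpha$ is a witness
\eit
}
```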
\subsec{Truth of Sentences}{
\bit
\item Suppose $P(x)$ contains only $x$ free
\item Suppose $n$ is a new constant symbol not occurring in $P(x)$
\item Let $P(n)$ be $P(x)$ with all free occurrences of $x$ replaced by $n$
\item $\forall x\, P(x)$ is true in a world $W$ iff for every $\alpha$ among the objects in $W$, $P(n)$ is true in the world $W_\alpha$ which is just like $W$ except $n$ names $\alpha$
\item $\exists x\, P(x)$ is true in a world $W$ iff for at least one $\alpha$ among the objects in $W$, $P(n)$ is true in the world $W_\alpha$ which is just like $W$ except $n$ names $\alpha$
\eit
}

\subsec{Satisfaction}{
\bit
\item Define relation of \emph{satisfaction} between an object $\alpha$, a wff with one free variable $P(x)$, and a world~$W$\\
``$\alpha$ satisfies $P(x)$''
\item \emph{$\alpha$ satisfies $P(x)$ in~$W$} iff $P(n)$ is true in the world~$W'$ which is just like $W$ except $n$ names $\alpha$
\item $P(x)$ defines a property in the world~$W$: the collection of all the objects $\alpha$ that satisfy $P(x)$
\item{} [More general: \emph{$\langle\alpha_1, \dots, \alpha_k\rangle$ satisfies $P(x_1, \dots, x_k)$ in $W$} iff $P(n_1, \dots, n_k)$ is true in $W'$ which is like $W$ except $n_i$ names $\alpha_i$ ($i = 1, \dots, k$)]
\item $P(x_1, \dots, x_k)$ defines a $k$-place relation
\eit
}

\subsec{Satisfaction and Expressing Properties and Relations}{
\bit
\item $\sf Large(x) \land Cube(x)$ \dots ``$x$ is a large cube''
\item $\sf LeftOf(x, a) \lor RightOf(x, a)$ \dots ``$x$ is left of or right of a''; ``$x$ is to the side of a''
\item More than one way to express a property:
\bit
\item $\sf LeftOf(a, x) \lor RightOf(a, x)$
\item $\sf LeftOf(x, a) \lor LeftOf(a, x)$
\item $\sf \lnot SameCol(x, a)$
\eit
\eit
}

\section{Combining Quantifiers with WFFs}

\subsec{Quantifiers and Properties: $\forall$}{
\bit
\item The sentence
\[\sf \forall x\, P(x) \]
says (is true iff)
\bit
\item $P(n)$ is true for \emph{whatever} object is named by~$n$
\item every object in
the domain (= world) satisfies $P(x)$
\item every object in the domain has the property expressed by $P(x)$
\eit
\eit
}

\subsec{Quantifiers and Properties: $\exists$}{
\bit
\item The sentence
\[\sf \exists x\, P(x) \]
says (is true iff)
\bit
\item $P(n)$ is true for \emph{at least one} object named by~$n$
\item at least one object in the domain satisfies $P(x)$
\item at least one object in the domain has the property expressed by $P(x)$
\eit
\eit
}

\subsec{Expressing ``Everything''}{
\bits
\item ``Everything is a cube''\pause
\bits
\item $\sf \forall x\, Cube(x)$
\item True iff every object in the domain is a cube.
\eit
\item ``Everything is a large cube''
\bits
\item $\sf \forall x(Large(x) \land Cube(x))$
\item True iff every object in the domain is large and a cube
\eit
\item ``Everything is either left of or right of b''
\bits
\item $\sf \forall x(LeftOf(x, b) \lor RightOf(x, b))$
\item True iff every object in the domain is: either left of b or right of b
\eit
\eit
}

\subsec{Expressing ``Something''}{
\bits
\item ``Something is a cube''\pause
\bits
\item $\sf \exists x\, Cube(x)$
\item True iff at least one object in the domain is a cube.
\item ``There is a cube'', ``A cube exists''
\item ``There are cubes'', ``Cubes exist''
\eit
\item ``Something is a large cube''
\bits
\item $\sf \exists x(Large(x) \land Cube(x))$
\item True iff at least one object in the domain is large and a cube
\eit
\item ``Something is either left of or right of b''
\bits
\item $\sf \exists x(LeftOf(x, b) \lor RightOf(x, b))$
\item True iff at least one object in the domain is: either left of b or right of b
\item ``There are things left of or right of b''
\eit
\eit
}

\subsec{Expressing ``Nothing''}{
\bit
\item ``Nothing is a cube''
\bits
\item $\sf \forall x\, \lnot Cube(x)$
\item True iff every object in the domain is \emph{something other than} a cube
\item $\sf \lnot \exists x\, Cube(x)$
\item True iff it \emph{isn't} the case that at least one object is a cube
\eit
\item ``Nothing is a large cube''
\bits
\item $\sf \forall x\,\lnot(Large(x) \land Cube(x))$
\item $\sf \lnot\exists x(Large(x) \land Cube(x))$
\eit
\item ``Nothing is either left of or right of b''
\bits
\item $\sf \forall x\lnot(LeftOf(x, b) \lor RightOf(x, b))$
\item $\sf \lnot\exists x (LeftOf(x, b) \lor RightOf(x, b))$
\eit
\eit
}

\subsec{Expressing ``Something Isn't''}{
\bit
\item ``Something isn't a cube''
\bit
\item $\sf \exists x\, \lnot Cube(x)$
\eit
\item ``Something isn't a large cube''
\bit
\item $\sf \exists x\, \lnot(Large(x) \land Cube(x))$
\eit
\item ``Something isn't either left of or right of b''
\bit
\item $\sf \exists x\,\lnot(LeftOf(x, b) \lor RightOf(x, b))$
\eit
\eit
}

\subsec{Determiner Phrases}{
\bit
\item Determiners combine with noun phrases to make determiner phrases (DP):
\bit
\item ``\emph{A} large tetrahedron''
\item ``\emph{Three} cubes which are to the left of b''
\item ``\emph{Every} even number''
\item ``\emph{Some} cube(s) between d and e''
\item ``\emph{Most} philosophy majors''
\item ``\emph{Both} small cubes''
\eit
\eit
}

\subsec{Determiner Phrases in Sentences}{
\bit
\item DPs make subjects of sentences, just like
names/constants and coordination constructions of them do
\bit
\item ``\emph{Claire and Alex} study logic''---``\emph{Most philosophy majors} study logic''
\item ``\emph{2} is prime''---``\emph{Some even number} is prime''
\item ``\emph{a} is between b and c''---``\emph{Every large cube} is between b and c''
\eit
\item We know how to translate the former---how do we deal with the latter?
\eit
}

\end{document}
%report.tex
% the glue for everything else

%\includeonly{simulation}

%\documentstyle[a4]{report}
\documentstyle{report}
\renewcommand{\baselinestretch}{1.5}

\begin{document}
\parindent 0pt
\setlength{\parskip}{3ex}

\include{title}
\include{abstract}
\tableofcontents

\chapter{Introduction}
\input{intro}
\input{risc}
\input{urisc}
\newpage
\input{formal}

\include{architecture}
\include{construction}
\include{host}
\include{execution}
\include{control}
\include{alu}
\include{memory}
\include{specification}
\include{performance}
\include{conclusions}

\bibliographystyle{alpha}
\bibliography{report}

\appendix
\include{credits}
\include{components}
\include{building}
\include{mon}
\include{epld}
\include{simulation}
\chapter{Circuit Diagrams}

\end{document}
% Template for PLoS % Version 3.1 February 2015 % % To compile to pdf, run: % latex plos.template % bibtex plos.template % latex plos.template % latex plos.template % dvipdf plos.template % % % % % % % % % % % % % % % % % % % % % % % % % -- IMPORTANT NOTE % % This template contains comments intended % to minimize problems and delays during our production % process. Please follow the template instructions % whenever possible. % % % % % % % % % % % % % % % % % % % % % % % % % % Once your paper is accepted for publication, % PLEASE REMOVE ALL TRACKED CHANGES in this file and leave only % the final text of your manuscript. % % There are no restrictions on package use within the LaTeX files except that % no packages listed in the template may be deleted. % % Please do not include colors or graphics in the text. % % Please do not create a heading level below \subsection. For 3rd level headings, use \paragraph{}. % % % % % % % % % % % % % % % % % % % % % % % % % % -- FIGURES AND TABLES % % Please include tables/figure captions directly after the paragraph where they are first cited in the text. % % DO NOT INCLUDE GRAPHICS IN YOUR MANUSCRIPT % - Figures should be uploaded separately from your manuscript file. % - Figures generated using LaTeX should be extracted and removed from the PDF before submission. % - Figures containing multiple panels/subfigures must be combined into one image file before submission. % For figure citations, please use "Fig." instead of "Figure". % See http://www.plosone.org/static/figureGuidelines for PLOS figure guidelines. % % Tables should be cell-based and may not contain: % - tabs/spacing/line breaks within cells to alter layout or alignment % - vertically-merged cells (no tabular environments within tabular environments, do not use \multirow) % - colors, shading, or graphic objects % See http://www.plosone.org/static/figureGuidelines#tables for table guidelines. 
% % For tables that exceed the width of the text column, use the adjustwidth environment as illustrated in the example table in text below. % % % % % % % % % % % % % % % % % % % % % % % % % % % -- EQUATIONS, MATH SYMBOLS, SUBSCRIPTS, AND SUPERSCRIPTS % % IMPORTANT % Below are a few tips to help format your equations and other special characters according to our specifications. For more tips to help reduce the possibility of formatting errors during conversion, please see our LaTeX guidelines at http://www.plosone.org/static/latexGuidelines % % Please be sure to include all portions of an equation in the math environment. % % Do not include text that is not math in the math environment. For example, CO2 will be CO\textsubscript{2}. % % Please add line breaks to long display equations when possible in order to fit size of the column. % % For inline equations, please do not include punctuation (commas, etc) within the math environment unless this is part of the equation. % % % % % % % % % % % % % % % % % % % % % % % % % % % Please contact [email protected] with any questions. % % % % % % % % % % % % % % % % % % % % % % % % % \documentclass[10pt,letterpaper]{article} \usepackage[top=0.85in,left=2.75in,footskip=0.75in]{geometry} % Use adjustwidth environment to exceed column width (see example table in text) \usepackage{changepage} % Use Unicode characters when possible \usepackage[utf8]{inputenc} % textcomp package and marvosym package for additional characters \usepackage{textcomp,marvosym} % fixltx2e package for \textsubscript \usepackage{fixltx2e} % amsmath and amssymb packages, useful for mathematical formulas and symbols \usepackage{amsmath,amssymb} % cite package, to clean up citations in the main text. Do not remove. 
\usepackage{cite} % Use nameref to cite supporting information files (see Supporting Information section for more info) \usepackage{nameref,hyperref} % line numbers \usepackage[right]{lineno} % ligatures disabled \usepackage{microtype} \DisableLigatures[f]{encoding = *, family = * } % rotating package for sideways tables \usepackage{rotating} % Remove comment for double spacing %\usepackage{setspace} %\doublespacing % Text layout \raggedright \setlength{\parindent}{0.5cm} \textwidth 5.25in \textheight 8.75in % Bold the 'Figure #' in the caption and separate it from the title/caption with a period % Captions will be left justified \usepackage[aboveskip=1pt,labelfont=bf,labelsep=period,justification=raggedright,singlelinecheck=off]{caption} % Use the PLoS provided BiBTeX style \bibliographystyle{resources/plos2015} % Remove brackets from numbering in List of References \makeatletter \renewcommand{\@biblabel}[1]{\quad#1.} \makeatother % Leave date blank \date{} % Header and Footer with logo \usepackage{lastpage,fancyhdr,graphicx} \usepackage{epstopdf} \pagestyle{myheadings} \pagestyle{fancy} \fancyhf{} \lhead{\includegraphics[width=2.0in]{resources/PLOS-Submission.eps}} \rfoot{\thepage/\pageref{LastPage}} \renewcommand{\footrule}{\hrule height 2pt \vspace{2mm}} \fancyheadoffset[L]{2.25in} \fancyfootoffset[L]{2.25in} \lfoot{\sf PLOS} %% Include all macros below \newcommand{\lorem}{{\bf LOREM}} \newcommand{\ipsum}{{\bf IPSUM}} % The below needed to build markdown documents containing lists with pandoc \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt} } %% END MACROS SECTION \begin{document} \vspace*{0.35in} % Title must be 250 characters or less. % Please capitalize all terms in the title except conjunctions, prepositions, and articles. \begin{flushleft} {\Large \textbf{$title$} } \newline % Insert author names, affiliations and corresponding author email (do not include titles, positions, or degrees). 
\\ $for(author)$ $author.given$ $author.family$ $$^{$author.affiliation$ $if(author.email)$,\ast$endif$}$$$sep$, $endfor$ \\ \bigskip $for(organization)$ \bf{$organization.id$} $organization.name$, $organization.address$ \\ $endfor$ \bigskip $for(author)$ $if(author.email)$ $$\ast$$ E-mail: $author.email$ $endif$ $endfor$ \end{flushleft} $body$ % Please keep the abstract below 300 words % Please keep the Author Summary between 150 and 200 words % Use first person. PLOS ONE authors please skip this step. % Author Summary not valid for PLOS ONE submissions. % For figure citations, please use "Fig." instead of "Figure". %\section*{References} % Either type in your references using % \begin{thebibliography}{} % \bibitem{} % Text % \end{thebibliography} % % OR % % Compile your BiBTeX database using our plos2015.bst % style file and paste the contents of your .bbl file % here. % \end{document}
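The $title$, $author$ and $organization$ loops in the template above are pandoc template variables. A hypothetical metadata block that would satisfy them — the field names are taken from the loops, all values are invented for illustration:

```yaml
title: "Example Title"
author:
  - given: Ada
    family: Lovelace
    affiliation: 1
    email: [email protected]
  - given: Charles
    family: Babbage
    affiliation: 2
organization:
  - id: 1
    name: Analytical Engine Institute
    address: London, United Kingdom
  - id: 2
    name: Difference Engine Society
    address: Cambridge, United Kingdom
```

Only the first author with an `email` field is rendered as the corresponding author, matching the `$if(author.email)$` guards in the template.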
\section{\module{fl} --- FORMS library interface for GUI applications} \declaremodule{builtin}{fl} \platform{IRIX} \modulesynopsis{FORMS library interface for GUI applications.} This module provides an interface to the FORMS Library\index{FORMS Library} by Mark Overmars\index{Overmars, Mark}. The source for the library can be retrieved by anonymous ftp from host \samp{ftp.cs.ruu.nl}, directory \file{SGI/FORMS}. It was last tested with version 2.0b. Most functions are literal translations of their C equivalents, dropping the initial \samp{fl_} from their name. Constants used by the library are defined in module \refmodule[fl-constants]{FL} described below. The creation of objects is a little different in Python than in C: instead of the `current form' maintained by the library to which new FORMS objects are added, all functions that add a FORMS object to a form are methods of the Python object representing the form. Consequently, there are no Python equivalents for the C functions \cfunction{fl_addto_form()} and \cfunction{fl_end_form()}, and the equivalent of \cfunction{fl_bgn_form()} is called \function{fl.make_form()}. Watch out for the somewhat confusing terminology: FORMS uses the word \dfn{object} for the buttons, sliders etc. that you can place in a form. In Python, `object' means any value. The Python interface to FORMS introduces two new Python object types: form objects (representing an entire form) and FORMS objects (representing one button, slider etc.). Hopefully this isn't too confusing. There are no `free objects' in the Python interface to FORMS, nor is there an easy way to add object classes written in Python. The FORMS interface to GL event handling is available, though, so you can mix FORMS with pure GL windows. \strong{Please note:} importing \module{fl} implies a call to the GL function \cfunction{foreground()} and to the FORMS routine \cfunction{fl_init()}. 
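As a quick orientation, here is a minimal sketch of how a complete
\module{fl} program is typically structured.  It is illustrative only:
it assumes an IRIX system with the FORMS and GL libraries installed,
and the particular constants used (\code{FL.NO_BOX},
\code{FL.NORMAL_BUTTON}, \code{FL.PLACE_MOUSE}) are the usual FORMS
names with the \samp{FL_} prefix dropped; check module \module{FL} for
the names actually defined on your installation.

\begin{verbatim}
import fl
import FL

def quit_callback(obj, arg):
    # invoked by the FORMS main loop when the button needs interaction
    raise SystemExit

# equivalent of fl_bgn_form(); objects are added via form methods
form = fl.make_form(FL.NO_BOX, 200, 100)
button = form.add_button(FL.NORMAL_BUTTON, 50, 30, 100, 40, 'Quit')
button.set_call_back(quit_callback, None)
form.show_form(FL.PLACE_MOUSE, 1, '')

while 1:
    # objects without a callback function would be returned here
    obj = fl.do_forms()
\end{verbatim}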
\subsection{Functions Defined in Module \module{fl}} \nodename{FL Functions} Module \module{fl} defines the following functions. For more information about what they do, see the description of the equivalent C function in the FORMS documentation: \begin{funcdesc}{make_form}{type, width, height} Create a form with given type, width and height. This returns a \dfn{form} object, whose methods are described below. \end{funcdesc} \begin{funcdesc}{do_forms}{} The standard FORMS main loop. Returns a Python object representing the FORMS object needing interaction, or the special value \constant{FL.EVENT}. \end{funcdesc} \begin{funcdesc}{check_forms}{} Check for FORMS events. Returns what \function{do_forms()} above returns, or \code{None} if there is no event that immediately needs interaction. \end{funcdesc} \begin{funcdesc}{set_event_call_back}{function} Set the event callback function. \end{funcdesc} \begin{funcdesc}{set_graphics_mode}{rgbmode, doublebuffering} Set the graphics modes. \end{funcdesc} \begin{funcdesc}{get_rgbmode}{} Return the current rgb mode. This is the value of the C global variable \cdata{fl_rgbmode}. \end{funcdesc} \begin{funcdesc}{show_message}{str1, str2, str3} Show a dialog box with a three-line message and an OK button. \end{funcdesc} \begin{funcdesc}{show_question}{str1, str2, str3} Show a dialog box with a three-line message and YES and NO buttons. It returns \code{1} if the user pressed YES, \code{0} if NO. \end{funcdesc} \begin{funcdesc}{show_choice}{str1, str2, str3, but1\optional{, but2\optional{, but3}}} Show a dialog box with a three-line message and up to three buttons. It returns the number of the button clicked by the user (\code{1}, \code{2} or \code{3}). \end{funcdesc} \begin{funcdesc}{show_input}{prompt, default} Show a dialog box with a one-line prompt message and text field in which the user can enter a string. The second argument is the default input string. It returns the string value as edited by the user. 
\end{funcdesc}

\begin{funcdesc}{show_file_selector}{message, directory, pattern, default}
Show a dialog box in which the user can select a file.  It returns the
absolute filename selected by the user, or \code{None} if the user
presses Cancel.
\end{funcdesc}

\begin{funcdesc}{get_directory}{}
\funcline{get_pattern}{}
\funcline{get_filename}{}
These functions return the directory, pattern and filename (the tail
part only) selected by the user in the last
\function{show_file_selector()} call.
\end{funcdesc}

\begin{funcdesc}{qdevice}{dev}
\funcline{unqdevice}{dev}
\funcline{isqueued}{dev}
\funcline{qtest}{}
\funcline{qread}{}
%\funcline{blkqread}{?}
\funcline{qreset}{}
\funcline{qenter}{dev, val}
\funcline{get_mouse}{}
\funcline{tie}{button, valuator1, valuator2}
These functions are the FORMS interfaces to the corresponding GL
functions.  Use these if you want to handle some GL events yourself
when using \function{fl.do_forms()}.  When a GL event is detected that
FORMS cannot handle, \function{fl.do_forms()} returns the special
value \constant{FL.EVENT} and you should call \function{fl.qread()} to
read the event from the queue.  Don't use the equivalent GL functions!
\end{funcdesc}

\begin{funcdesc}{color}{}
\funcline{mapcolor}{}
\funcline{getmcolor}{}
See the description in the FORMS documentation of
\cfunction{fl_color()}, \cfunction{fl_mapcolor()} and
\cfunction{fl_getmcolor()}.
\end{funcdesc}

\subsection{Form Objects}
\label{form-objects}

Form objects (returned by \function{make_form()} above) have the
following methods.  Each method corresponds to a C function whose name
is prefixed with \samp{fl_} and whose first argument is a form
pointer; please refer to the official FORMS documentation for
descriptions.

All the \method{add_*()} methods return a Python object representing
the FORMS object.  Methods of FORMS objects are described below.  Most
kinds of FORMS object also have some methods specific to that kind;
these methods are listed here.
\begin{flushleft} \begin{methoddesc}[form]{show_form}{placement, bordertype, name} Show the form. \end{methoddesc} \begin{methoddesc}[form]{hide_form}{} Hide the form. \end{methoddesc} \begin{methoddesc}[form]{redraw_form}{} Redraw the form. \end{methoddesc} \begin{methoddesc}[form]{set_form_position}{x, y} Set the form's position. \end{methoddesc} \begin{methoddesc}[form]{freeze_form}{} Freeze the form. \end{methoddesc} \begin{methoddesc}[form]{unfreeze_form}{} Unfreeze the form. \end{methoddesc} \begin{methoddesc}[form]{activate_form}{} Activate the form. \end{methoddesc} \begin{methoddesc}[form]{deactivate_form}{} Deactivate the form. \end{methoddesc} \begin{methoddesc}[form]{bgn_group}{} Begin a new group of objects; return a group object. \end{methoddesc} \begin{methoddesc}[form]{end_group}{} End the current group of objects. \end{methoddesc} \begin{methoddesc}[form]{find_first}{} Find the first object in the form. \end{methoddesc} \begin{methoddesc}[form]{find_last}{} Find the last object in the form. \end{methoddesc} %--- \begin{methoddesc}[form]{add_box}{type, x, y, w, h, name} Add a box object to the form. No extra methods. \end{methoddesc} \begin{methoddesc}[form]{add_text}{type, x, y, w, h, name} Add a text object to the form. No extra methods. \end{methoddesc} %\begin{methoddesc}[form]{add_bitmap}{type, x, y, w, h, name} %Add a bitmap object to the form. %\end{methoddesc} \begin{methoddesc}[form]{add_clock}{type, x, y, w, h, name} Add a clock object to the form. \\ Method: \method{get_clock()}. \end{methoddesc} %--- \begin{methoddesc}[form]{add_button}{type, x, y, w, h, name} Add a button object to the form. \\ Methods: \method{get_button()}, \method{set_button()}. \end{methoddesc} \begin{methoddesc}[form]{add_lightbutton}{type, x, y, w, h, name} Add a lightbutton object to the form. \\ Methods: \method{get_button()}, \method{set_button()}. 
\end{methoddesc}

\begin{methoddesc}[form]{add_roundbutton}{type, x, y, w, h, name}
Add a roundbutton object to the form. \\
Methods:
\method{get_button()},
\method{set_button()}.
\end{methoddesc}

%---

\begin{methoddesc}[form]{add_slider}{type, x, y, w, h, name}
Add a slider object to the form. \\
Methods:
\method{set_slider_value()},
\method{get_slider_value()},
\method{set_slider_bounds()},
\method{get_slider_bounds()},
\method{set_slider_return()},
\method{set_slider_size()},
\method{set_slider_precision()},
\method{set_slider_step()}.
\end{methoddesc}

\begin{methoddesc}[form]{add_valslider}{type, x, y, w, h, name}
Add a valslider object to the form. \\
Methods:
\method{set_slider_value()},
\method{get_slider_value()},
\method{set_slider_bounds()},
\method{get_slider_bounds()},
\method{set_slider_return()},
\method{set_slider_size()},
\method{set_slider_precision()},
\method{set_slider_step()}.
\end{methoddesc}

\begin{methoddesc}[form]{add_dial}{type, x, y, w, h, name}
Add a dial object to the form. \\
Methods:
\method{set_dial_value()},
\method{get_dial_value()},
\method{set_dial_bounds()},
\method{get_dial_bounds()}.
\end{methoddesc}

\begin{methoddesc}[form]{add_positioner}{type, x, y, w, h, name}
Add a positioner object to the form. \\
Methods:
\method{set_positioner_xvalue()},
\method{set_positioner_yvalue()},
\method{set_positioner_xbounds()},
\method{set_positioner_ybounds()},
\method{get_positioner_xvalue()},
\method{get_positioner_yvalue()},
\method{get_positioner_xbounds()},
\method{get_positioner_ybounds()}.
\end{methoddesc}

\begin{methoddesc}[form]{add_counter}{type, x, y, w, h, name}
Add a counter object to the form. \\
Methods:
\method{set_counter_value()},
\method{get_counter_value()},
\method{set_counter_bounds()},
\method{set_counter_step()},
\method{set_counter_precision()},
\method{set_counter_return()}.
\end{methoddesc}

%---

\begin{methoddesc}[form]{add_input}{type, x, y, w, h, name}
Add an input object to the form.
\\ Methods: \method{set_input()}, \method{get_input()}, \method{set_input_color()}, \method{set_input_return()}. \end{methoddesc} %--- \begin{methoddesc}[form]{add_menu}{type, x, y, w, h, name} Add a menu object to the form. \\ Methods: \method{set_menu()}, \method{get_menu()}, \method{addto_menu()}. \end{methoddesc} \begin{methoddesc}[form]{add_choice}{type, x, y, w, h, name} Add a choice object to the form. \\ Methods: \method{set_choice()}, \method{get_choice()}, \method{clear_choice()}, \method{addto_choice()}, \method{replace_choice()}, \method{delete_choice()}, \method{get_choice_text()}, \method{set_choice_fontsize()}, \method{set_choice_fontstyle()}. \end{methoddesc} \begin{methoddesc}[form]{add_browser}{type, x, y, w, h, name} Add a browser object to the form. \\ Methods: \method{set_browser_topline()}, \method{clear_browser()}, \method{add_browser_line()}, \method{addto_browser()}, \method{insert_browser_line()}, \method{delete_browser_line()}, \method{replace_browser_line()}, \method{get_browser_line()}, \method{load_browser()}, \method{get_browser_maxline()}, \method{select_browser_line()}, \method{deselect_browser_line()}, \method{deselect_browser()}, \method{isselected_browser_line()}, \method{get_browser()}, \method{set_browser_fontsize()}, \method{set_browser_fontstyle()}, \method{set_browser_specialkey()}. \end{methoddesc} %--- \begin{methoddesc}[form]{add_timer}{type, x, y, w, h, name} Add a timer object to the form. \\ Methods: \method{set_timer()}, \method{get_timer()}. 
\end{methoddesc} \end{flushleft} Form objects have the following data attributes; see the FORMS documentation: \begin{tableiii}{l|l|l}{member}{Name}{C Type}{Meaning} \lineiii{window}{int (read-only)}{GL window id} \lineiii{w}{float}{form width} \lineiii{h}{float}{form height} \lineiii{x}{float}{form x origin} \lineiii{y}{float}{form y origin} \lineiii{deactivated}{int}{nonzero if form is deactivated} \lineiii{visible}{int}{nonzero if form is visible} \lineiii{frozen}{int}{nonzero if form is frozen} \lineiii{doublebuf}{int}{nonzero if double buffering on} \end{tableiii} \subsection{FORMS Objects} \label{forms-objects} Besides methods specific to particular kinds of FORMS objects, all FORMS objects also have the following methods: \begin{methoddesc}[FORMS object]{set_call_back}{function, argument} Set the object's callback function and argument. When the object needs interaction, the callback function will be called with two arguments: the object, and the callback argument. (FORMS objects without a callback function are returned by \function{fl.do_forms()} or \function{fl.check_forms()} when they need interaction.) Call this method without arguments to remove the callback function. \end{methoddesc} \begin{methoddesc}[FORMS object]{delete_object}{} Delete the object. \end{methoddesc} \begin{methoddesc}[FORMS object]{show_object}{} Show the object. \end{methoddesc} \begin{methoddesc}[FORMS object]{hide_object}{} Hide the object. \end{methoddesc} \begin{methoddesc}[FORMS object]{redraw_object}{} Redraw the object. \end{methoddesc} \begin{methoddesc}[FORMS object]{freeze_object}{} Freeze the object. \end{methoddesc} \begin{methoddesc}[FORMS object]{unfreeze_object}{} Unfreeze the object. 
\end{methoddesc} %\begin{methoddesc}[FORMS object]{handle_object}{} XXX %\end{methoddesc} %\begin{methoddesc}[FORMS object]{handle_object_direct}{} XXX %\end{methoddesc} FORMS objects have these data attributes; see the FORMS documentation: \begin{tableiii}{l|l|l}{member}{Name}{C Type}{Meaning} \lineiii{objclass}{int (read-only)}{object class} \lineiii{type}{int (read-only)}{object type} \lineiii{boxtype}{int}{box type} \lineiii{x}{float}{x origin} \lineiii{y}{float}{y origin} \lineiii{w}{float}{width} \lineiii{h}{float}{height} \lineiii{col1}{int}{primary color} \lineiii{col2}{int}{secondary color} \lineiii{align}{int}{alignment} \lineiii{lcol}{int}{label color} \lineiii{lsize}{float}{label font size} \lineiii{label}{string}{label string} \lineiii{lstyle}{int}{label style} \lineiii{pushed}{int (read-only)}{(see FORMS docs)} \lineiii{focus}{int (read-only)}{(see FORMS docs)} \lineiii{belowmouse}{int (read-only)}{(see FORMS docs)} \lineiii{frozen}{int (read-only)}{(see FORMS docs)} \lineiii{active}{int (read-only)}{(see FORMS docs)} \lineiii{input}{int (read-only)}{(see FORMS docs)} \lineiii{visible}{int (read-only)}{(see FORMS docs)} \lineiii{radio}{int (read-only)}{(see FORMS docs)} \lineiii{automatic}{int (read-only)}{(see FORMS docs)} \end{tableiii} \section{\module{FL} --- Constants used with the \module{fl} module} \declaremodule[fl-constants]{standard}{FL} \platform{IRIX} \modulesynopsis{Constants used with the \module{fl} module.} This module defines symbolic constants needed to use the built-in module \refmodule{fl} (see above); they are equivalent to those defined in the C header file \code{<forms.h>} except that the name prefix \samp{FL_} is omitted. Read the module source for a complete list of the defined names. 
Suggested use: \begin{verbatim} import fl from FL import * \end{verbatim} \section{\module{flp} --- Functions for loading stored FORMS designs} \declaremodule{standard}{flp} \platform{IRIX} \modulesynopsis{Functions for loading stored FORMS designs.} This module defines functions that can read form definitions created by the `form designer' (\program{fdesign}) program that comes with the FORMS library (see module \refmodule{fl} above). For now, see the file \file{flp.doc} in the Python library source directory for a description. XXX A complete description should be inserted here!
%%% lecture 06 %%% \documentclass{beamer} \usepackage[utf8]{inputenc} \usepackage{algorithm2e, amsmath, amssymb, amsfonts, graphicx} % allow section.equation numbering \numberwithin{equation}{section} % use boadilla theme \usetheme{Boadilla} % remove navigation symbols \usenavigationsymbolstemplate{} % get numbered figure captions \setbeamertemplate{caption}[numbered] % changes itemize to circle + other things \useoutertheme{split} \useinnertheme{circles} % command for the title string. change for each lecture \newcommand{\lecturetitle}{Intro to Optimization, Part 1} % allow automatic alert-highlighted references and hyperlinks \newcommand{\aref}[1]{\alert{\ref{#1}}} \newcommand{\ahref}[2]{\href{#1}{\alert{#2}}} % title page stuff. brackets content displayed in footer bar \title[\lecturetitle]{\lecturetitle} % metadata. content in brackets is displayed in footer bar \author[Derek Huang (BAC Advanced Team)]{Derek Huang} \institute{BAC Advanced Team} \date{December 30, 2021} % change "ball" bullet to numbered bullet and section title for section \setbeamertemplate{section in toc}{\inserttocsectionnumber.~\inserttocsection} % change ball to gray square (copied from stackoverflow; \par needed for break) \setbeamertemplate{subsection in toc}{ \hspace{1.2em}{\color{gray}\rule[0.3ex]{3pt}{3pt}}~\inserttocsubsection\par} % use default enumeration scheme \setbeamertemplate{enumerate items}[default] % required line that fixes the problem of \mathbf, \bf not working in beamer % for later (post-2019) TeX Live installations. 
% See the issue on GitHub:
% https://github.com/josephwright/beamer/issues/630
\DeclareFontShape{OT1}{cmss}{b}{n}{<->ssub * cmss/bx/n}{}

\begin{document}

% title slide
\begin{frame}
    \titlepage
    \centering
    % relative path may need to be updated depending on .tex file location
    \includegraphics[scale = 0.1]{../bac_logo1.png}
\end{frame}

% table of contents slide
\begin{frame}{Overview}
    \tableofcontents
\end{frame}

\section{Convex optimization}

\begin{frame}{Motivation}
    \begin{itemize}
        \item
        Let $ \mathbf{X} \in \mathbb{R}^{N \times d} $ be the input matrix,
        $ \mathbf{y} \in \mathbb{R}^N $ the response vector. Consider
        fitting a linear regression model s.t. we minimize the absolute
        value of the residuals. Optimal
        $ \hat{\mathbf{w}} \in \mathbb{R}^d $, $ \hat{b} \in \mathbb{R} $
        solve
        \begin{equation*}
            \begin{array}{ll}
                \displaystyle\min_{\mathbf{w}, b} &
                \Vert\mathbf{y} - \mathbf{Xw} - b\mathbf{1}\Vert_1
            \end{array}
        \end{equation*}
        \item
        Model parameter estimation is an optimization problem.
        \item
        Quite hard to solve directly. May be recast as the linear
        program (LP) \cite{bv_convex_opt}
        \begin{equation*}
            \begin{array}{ll}
                \displaystyle\min_{\mathbf{w}, b, \mathbf{t}} &
                \mathbf{1}^\top\mathbf{t} \\
                \text{s.t.} &
                -\mathbf{t} \preceq \mathbf{y} - \mathbf{Xw} - b\mathbf{1}
                \preceq \mathbf{t} \\
                & \mathbf{t} \succeq \mathbf{0}
            \end{array}
        \end{equation*}
        Here $ \mathbf{t} \in \mathbb{R}^N $. Algorithms for solving LPs
        are quite reliable \cite{bv_convex_opt}.
    \end{itemize}
\end{frame}

\subsection{Convex sets and functions}

\begin{frame}{Convex sets and functions}
    \begin{itemize}
        \item
        Optimization problems broadly categorized as convex vs. nonconvex
        $ \Rightarrow $ we need to know what convexity means.
        \item
        \textit{Definition.} Let
        $ \mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^n $,
        $ \mathbf{x}_1 \ne \mathbf{x}_2 $. The \textit{line} passing
        through points $ \mathbf{x}_1, \mathbf{x}_2 $ is
        $ \{\theta\mathbf{x}_1 + (1 - \theta)\mathbf{x}_2 :
        \theta \in \mathbb{R}\} $.
The \textit{line segment} connecting points $ \mathbf{x}_1, \mathbf{x}_2 $ is $ \{\theta\mathbf{x}_1 + (1 - \theta) \mathbf{x}_2 : \theta \in [0, 1]\} $ \cite{bv_convex_opt}. \end{itemize} \begin{figure} \centering \includegraphics[scale = 0.3]{bv_fig_2.1.png} % remove excess space \vspace{-10 pt} \caption{ The $ \mathbf{x}_1, \mathbf{x}_2 $ line and line segment parametrized by $ \theta $\footnote{ Figure 2.1 from Boyd and Vandenberghe's \textit{Convex Optimization}. }. } \end{figure} \end{frame} \begin{frame}{Convex sets and functions} \begin{itemize} \item \textit{Definition.} A set $ C \subseteq \mathbb{R}^n $ is \textit{affine} if $ \forall \mathbf{x}, \mathbf{y} \in C $, $ \alpha \in \mathbb{R} $, $ \alpha\mathbf{x} + (1 - \alpha)\mathbf{y} \in C $, i.e. $ C $ contains the $ \mathbf{x}, \mathbf{y} $ line. \item \textit{Definition.} Let $ C \subseteq \mathbb{R}^n $. The \textit{affine hull} of $ C $, denoted $ \operatorname{aff} C $, is s.t. $ \operatorname{aff} C \triangleq \{ \sum_{i = 1}^k\theta_i\mathbf{x}_i : \mathbf{x}_1, \ldots \mathbf{x}_k \in C, \theta_1, \ldots \theta_k \in \mathbb{R}, \sum_{i = 1}^k\theta_i = 1 \} $ \cite{bv_convex_opt}. \item \textit{Definition.} A set $ C \subseteq \mathbb{R}^n $ is \textit{convex} if $ \forall \mathbf{x}, \mathbf{y} \in C $, $ \alpha \in [0, 1] $, $ \alpha\mathbf{x} + (1 - \alpha)\mathbf{y} \in C $, i.e. $ C $ contains the $ \mathbf{x}, \mathbf{y} $ line segment. \end{itemize} \begin{figure} \centering \includegraphics[scale = 0.2]{bv_fig_2.2.png} % remove excess space \vspace{-10 pt} \caption{ Convex and nonconvex sets. Only the leftmost set is convex\footnote{ Figure 2.2 from Boyd and Vandenberghe's \textit{Convex Optimization}. }. } % remove excess space \vspace{-10 pt} \end{figure} \begin{itemize} \item \textit{Definition.} Let $ C \subseteq \mathbb{R}^n $. The \textit{convex hull} of $ C $, denoted $ \operatorname{conv} C $, is s.t. 
$ \operatorname{conv} C \triangleq \{ \sum_{i = 1}^k\theta_i\mathbf{x}_i : \mathbf{x}_1, \ldots \mathbf{x}_k \in C, \theta_1, \ldots \theta_k \in [0, 1], \sum_{i = 1}^k\theta_i = 1 \} $. \end{itemize} % adjust spacing since there is a footnote \bigskip \end{frame} \begin{frame}{Convex sets and functions} \begin{itemize} \item If $ C $ affine, then $ \operatorname{aff} C = C $, while if $ C $ convex, then $ \operatorname{conv} C = C $. \end{itemize} \begin{figure} \centering % remove excess space \vspace{-3 pt} \includegraphics[scale = 0.24]{bv_fig_2.3.png} % remove excess space \vspace{-10 pt} \caption{ Examples of $ \mathbb{R}^2 $ convex hulls\footnote{ Figure 2.3 from Boyd and Vandenberghe's \textit{Convex Optimization}. }. } % remove excess space \vspace{-10 pt} \end{figure} \begin{itemize} \item \textit{Definition.} Let $ C \subseteq \mathbb{R}^n $. The \textit{relative interior} of $ C $, denoted $ \operatorname{relint} C $, is s.t. $ \operatorname{relint} C \triangleq \{\mathbf{x} \in C : B(\mathbf{x}, r) \cap \operatorname{aff} C \subseteq C, r \in (0, \infty)\} $ \cite{bv_convex_opt}. Here $ B(\mathbf{x}, r) \triangleq \{\mathbf{x}' \in \mathbb{R}^n : \Vert\mathbf{x}' - \mathbf{x}\Vert \le r\} $ for some norm $ \Vert\cdot\Vert $\footnote{ All norms define the same relative interior \cite{bv_convex_opt}. }. \item \textit{Definition.} $ f : \mathcal{M} \rightarrow \mathbb{R} $, $ \mathcal{M} \subseteq \mathbb{R}^n $, is \textit{convex} if $ \mathcal{M} $ convex and if $ \forall \mathbf{x}, \mathbf{y} \in \mathcal{M} $, $ \forall \alpha \in [0, 1] $, $ f(\alpha\mathbf{x} + (1 - \alpha)\mathbf{y}) \le \alpha f(\mathbf{x}) + (1 - \alpha)f(\mathbf{y}) $. \item \textit{Definition.} $ f : \mathcal{M} \rightarrow \mathbb{R} $ is \textit{concave} if $ -f $ is convex. \end{itemize} % spacing for footnote \bigskip \end{frame} \begin{frame}{Convex sets and functions} \begin{itemize} \item Affine and linear functions are both convex and concave. 
\end{itemize}
    \begin{figure}
        \centering
        % remove extra space
        \vspace{-3 pt}
        \includegraphics[scale = 0.24]{bv_fig_3.1.png}
        % remove extra space
        \vspace{-5 pt}
        \caption{Graph of a convex function\footnote{
            Figure 3.1 from Boyd and Vandenberghe's
            \textit{Convex Optimization}.
        }.
        }
        % remove excess space
        \vspace{-10 pt}
    \end{figure}
    \begin{itemize}
        \item
        \textit{Examples.}
        \begin{itemize}
            \item
            \textit{Exponential.} $ \forall a \in \mathbb{R} $, $ e^{ax} $
            convex on $ \mathbb{R} $ \cite{bv_convex_opt}.
            \item
            \textit{Powers.} $ x^a $ convex on $ (0, \infty) $ if
            $ a \in (-\infty, 0] \cup [1, \infty) $, concave if
            $ a \in [0, 1] $. $ \forall a \in [1, \infty) $, $ |x|^a $
            convex on $ \mathbb{R} $ \cite{bv_convex_opt}.
            \item
            \textit{Logarithms.} $ \log x $ concave on $ (0, \infty) $.
            \item
            \textit{Norms.} Any norm on $ \mathbb{R}^n $ is convex
            \cite{bv_convex_opt}, e.g. $ \ell^p $-norm.
        \end{itemize}
    \end{itemize}
    % spacing for footnote
    \bigskip
\end{frame}

\begin{frame}{Convex sets and functions}
    \begin{itemize}
        \item
        \textit{Theorem.} Let $ f : \mathcal{M} \rightarrow \mathbb{R} $
        be differentiable $ \forall \mathbf{x} \in \mathcal{M} $. $ f $
        convex $ \Leftrightarrow \mathcal{M} $ convex,
        $ \forall \mathbf{x}, \mathbf{y} \in \mathcal{M} $,
        $ f(\mathbf{y}) \ge f(\mathbf{x}) +
        \nabla f(\mathbf{x})^\top(\mathbf{y} - \mathbf{x}) $
        \cite{bv_convex_opt}.
    \end{itemize}
    \begin{figure}
        \centering
        % remove extra space
        \vspace{-5 pt}
        \includegraphics[scale = 0.3]{bv_fig_3.2.png}
        % remove extra space
        \vspace{-10 pt}
        \caption{Convex function bounded below by tangent line\footnote{
            Figure 3.2 from Boyd and Vandenberghe's
            \textit{Convex Optimization}.
        }.
        }
        % remove excess space
        \vspace{-15 pt}
    \end{figure}
    \begin{itemize}
        \item
        \textit{Theorem.} Let $ f : \mathcal{M} \rightarrow \mathbb{R} $
        be twice differentiable $ \forall \mathbf{x} \in \mathcal{M} $.
$ f $ convex $ \Leftrightarrow \mathcal{M} $ convex, $ \forall \mathbf{x} \in \mathcal{M} $, $ \nabla^2f(\mathbf{x}) \succeq \mathbf{0} $ \cite{bv_convex_opt}. \item \textit{Remark.} If $ \mathcal{M} \subseteq \mathbb{R} $ convex, reduces to $ \forall x \in \mathcal{M}, f''(x) \ge 0 $. \end{itemize} % spacing for footnote \medskip \end{frame} \subsection{Optimization problems} \begin{frame}{Optimization problems} \begin{itemize} \item \textit{Definition.} Let $ \mathcal{M}_f, \mathcal{M}_\mathbf{u}, \mathcal{M}_\mathbf{v} \subseteq \mathbb{R}^n $. For $ f: \mathcal{M}_f \rightarrow \mathbb{R} $, $ \mathbf{u} : \mathcal{M}_\mathbf{u} \rightarrow \mathbb{R}^p $, $ \mathbf{v} : \mathcal{M}_\mathbf{v} \rightarrow \mathbb{R}^q $, a \textit{standard form} optimization problem is\footnote{ Assume the problem is well-defined, i.e. \textit{feasible} (solvable) and bounded below. } \cite{bv_convex_opt} \begin{equation} \label{opt_prob_std} \begin{array}{ll} \displaystyle\min_\mathbf{x} & f(\mathbf{x}) \\ \text{s.t.} & \mathbf{u}(\mathbf{x}) \preceq \mathbf{0} \\ & \mathbf{v}(\mathbf{x}) = \mathbf{0} \end{array} \end{equation} $ \mathbf{x} \in \mathbb{R}^n $ is the \textit{optimization variable} \cite{bv_convex_opt}. $ \mathbf{u} \triangleq [ \ u_1 \ \ldots \ u_p \ ]^\top $ gives $ p $ \textit{inequality constraints}, $ \mathbf{v} \triangleq [ \ v_1 \ \ldots \ v_q \ ]^\top $ gives $ q $ \textit{equality constraints}. \item \textit{Definition.} The \textit{domain} of (\aref{opt_prob_std}) is $ \tilde{\mathcal{M}} \triangleq \mathcal{M}_f \cap \mathcal{M}_\mathbf{u} \cap \mathcal{M}_\mathbf{v} \ne \emptyset $ \cite{bv_convex_opt}. \item Problem is \textit{unconstrained} if no constraints, \textit{constrained} otherwise. Note maximization of $ f $ equivalent to minimization of $ -f $. 
\item \textit{Definition.} (\aref{opt_prob_std}) is a \textit{convex optimization problem} if functions $ f, u_1, \ldots u_p $ are convex and functions $ v_1, \ldots v_q $ are affine \cite{bv_convex_opt}. \end{itemize} % spacing for footnote \medskip \end{frame} \begin{frame}{Optimization problems} \begin{itemize} \item \textit{Examples.} \begin{itemize} \item \textit{Weighted linear least squares.} Let $ \mathbf{\Gamma} \triangleq \operatorname{diag}(\gamma_1, \ldots \gamma_N) \succ \mathbf{0} \in \mathbb{R}^{N \times N} $ be the data weighting matrix. The unconstrained problem to solve is \begin{equation*} \begin{array}{ll} \displaystyle\min_{\mathbf{w}, b} & \Vert \mathbf{\Gamma}^{1 / 2}(\mathbf{y} - \mathbf{Xw} - b\mathbf{1}) \Vert_2^2 \end{array} \end{equation*} \item \textit{SVM dual problem.} Note $ \mathbf{X} \triangleq [ \ \mathbf{x}_1 \ \ldots \mathbf{x}_N \ ]^\top $. The problem is \begin{equation*} \begin{array}{ll} \displaystyle\max_\alpha & \mathbf{1}^\top\alpha - \frac{1}{2}\alpha^\top\mathbf{H}\alpha \\ \text{s.t.} & \alpha^\top\mathbf{y} = 0 \\ & \mathbf{0} \preceq \alpha \preceq C\mathbf{1} \end{array} \end{equation*} Here $ \alpha \in \mathbb{R}^N $, $ \mathbf{H} \in \mathbb{R}^{N \times N} $ is such that $ h_{ij} = y_iy_j\mathbf{x}_i^\top\mathbf{x}_j $, $ C > 0 $. \item Both problems are \textit{quadratic programs}. Quadratic programs have convex, quadratic objectives and affine constraints (if any) \cite{bv_convex_opt}. \end{itemize} \end{itemize} \end{frame} \subsection{Feasibility and optimality} \begin{frame}{Feasibility and optimality} \begin{itemize} \item \textit{Definition.} Let $ \mathbf{u}, \mathbf{v}, \tilde{\mathcal{M}} $ be defined as in (\aref{opt_prob_std}). $ \mathbf{x}' \in \tilde{\mathcal{M}} $ is \textit{feasible} if $ \mathbf{x}' \in \mathcal{X}^* \triangleq \{\mathbf{x} \in \tilde{\mathcal{M}} : \mathbf{u}(\mathbf{x}) \preceq \mathbf{0}, \mathbf{v}(\mathbf{x}) = \mathbf{0}\} $. 
$ \mathcal{X}^* $ is the \textit{feasible set} \cite{bv_convex_opt}. \item \textit{Definition.} The \textit{optimal value} $ p^* $ of an optimization problem, as defined in (\aref{opt_prob_std}), is such that $ p^* = \inf\{f(\mathbf{x}) : \mathbf{x} \in \mathcal{X}^*\} $ \cite{bv_convex_opt}. \item \textit{Definition.} $ \mathbf{x}^* $ is \textit{[globally] optimal} if $ \mathbf{x}^* \in \mathcal{X}^* $ and $ f(\mathbf{x}^*) = p^* $ \cite{bv_convex_opt}. \item \textit{Definition.} $ \mathbf{x}' $ is \textit{locally optimal} if $ \mathbf{x}' \in \mathcal{X}^* $ and $ \exists r \in (0, \infty) $ such that $ f(\mathbf{x}') = \inf\{ f(\mathbf{x}) : \mathbf{x} \in \mathcal{X}^*, \Vert\mathbf{x}' - \mathbf{x}\Vert_2 \le r \}$ \cite{bv_convex_opt}. \item \textit{Theorem.} Suppose (\aref{opt_prob_std}) is a \alert{convex} optimization problem. If $ \mathbf{x}' $ is locally optimal, then $ \mathbf{x}' $ is [globally] optimal. \item \textit{Theorem.} Suppose (\aref{opt_prob_std}) is a \alert{convex} optimization problem and objective $ f $ differentiable. $ \mathbf{x}^* \in \tilde{\mathcal{M}} $ optimal $ \Leftrightarrow \mathbf{x}^* \in \mathcal{X}^* $, $ \forall \mathbf{x} \in \mathcal{X}^* $, $ \nabla f(\mathbf{x}^*)^\top(\mathbf{x} - \mathbf{x}^*) \ge 0 $. If no constraints, reduces to $ \nabla f(\mathbf{x}^*) = \mathbf{0} $ \cite{bv_convex_opt}. \end{itemize} \end{frame} \section{Duality} \subsection{The Lagrangian dual} \begin{frame}{The Lagrangian dual} \begin{itemize} \item \textit{Definition.} The \textit{Lagrangian} $ \mathcal{L}_f : \tilde{\mathcal{M}} \times \mathbb{R}^p \times \mathbb{R}^q \rightarrow \mathbb{R} $ of (\aref{opt_prob_std}) is s.t. \begin{equation} \label{std_lagrangian} \mathcal{L}_f(\mathbf{x}, \lambda, \nu) \triangleq f(\mathbf{x}) + \lambda^\top\mathbf{u}(\mathbf{x}) + \nu^\top\mathbf{v}(\mathbf{x}) \end{equation} $ \lambda \in \mathbb{R}^p $, $ \nu \in \mathbb{R}^q $ are the \textit{Lagrange multipliers} or \textit{dual variables}. 
\item \textit{Definition.} The \textit{[Lagrangian] dual} $ f_d : \tilde{\mathcal{M}}_d \rightarrow \mathbb{R} $ of (\aref{opt_prob_std}), is s.t. \begin{equation} \label{std_dual} f_d(\lambda, \nu) \triangleq \inf_{\mathbf{x} \in \tilde{\mathcal{M}}} \{\mathcal{L}_f(\mathbf{x}, \lambda, \nu)\} \end{equation} Here $ \tilde{\mathcal{M}}_d \subseteq \mathbb{R}^p \times \mathbb{R}^q $. $ f_d $ is \alert{always} concave, even if the problem (\aref{opt_prob_std}) is nonconvex \cite{bv_convex_opt}. $ (\lambda, \nu) \in \tilde{\mathcal{M}}_d $ with $ \lambda \succeq \mathbf{0} $ is called \textit{dual feasible} \cite{bv_convex_opt}. \item \textit{Theorem.} $ \forall (\lambda, \nu) \in \tilde{\mathcal{M}}_d $, $ \lambda \succeq \mathbf{0} $, $ f_d(\lambda, \nu) \le p^* $. \item For any dual feasible $ (\lambda, \nu) $, $ f_d(\lambda, \nu) $ gives a lower bound to the optimal value of the original optimization problem. \end{itemize} % more footnote spacing \medskip \end{frame} \begin{frame}{The Lagrangian dual} \begin{itemize} \item If we allow infinite values, (\aref{opt_prob_std}) can be written as the unconstrained \begin{equation*} \begin{array}{ll} \displaystyle\min_\mathbf{x} & \displaystyle f(\mathbf{x}) + \tilde{\mathbb{I}}^\infty_{\{ \mathbf{x}' \in \mathbb{R}^p : \mathbf{x}' \preceq \mathbf{0} \}}\circ\mathbf{u}(\mathbf{x}) + \tilde{\mathbb{I}}^\infty_{\{\mathbf{0}\}} \circ \mathbf{v}(\mathbf{x}) \end{array} \end{equation*} % \displaystyle used inline to raise the superscript more $ \displaystyle\tilde{\mathbb{I}}_A^\infty $ is such that for set $ A $, $ \displaystyle\tilde{\mathbb{I}}_A^\infty(x) = 0 $ if $ x \in A $, else $ \displaystyle\tilde{\mathbb{I}}_A^\infty(x) = \infty $. 
\item $ \lambda^\top\mathbf{u} $, $ \nu^\top\mathbf{v} $ are linear underestimators of $ \displaystyle\tilde{\mathbb{I}}^\infty_{\{ \mathbf{x}' \in \mathbb{R}^p : \mathbf{x}' \preceq \mathbf{0} \}}\circ\mathbf{u} $, $ \displaystyle \tilde{\mathbb{I}}^\infty_{\{\mathbf{0}\}}\circ\mathbf{v} $ when $ \lambda \succeq \mathbf{0} $, intuitively justifying why $ f_d $ yields a lower bound for $ p^* $ \cite{bv_convex_opt}.
\item A natural question is to find the closest underestimator to the original problem, i.e. $ \lambda^*, \nu^* $ s.t. $ p^* - f_d(\lambda^*, \nu^*) $ is minimized.
\item \textit{Definition.} Let $ f_d $ be the dual for (\aref{opt_prob_std}). The \textit{dual problem} for (\aref{opt_prob_std}) is
\begin{equation} \label{opt_prob_std_dual}
\begin{array}{ll}
\displaystyle\max_{\lambda, \nu}& f_d(\lambda, \nu) \\
\text{s.t.} & \lambda\succeq \mathbf{0}
\end{array}
\end{equation}
$ \lambda^*, \nu^* $ is \textit{dual optimal} if optimal for (\aref{opt_prob_std_dual}). (\aref{opt_prob_std}) is the \textit{primal problem}.
\end{itemize}
% spacing for the footnote
\bigskip
\end{frame}

\begin{frame}{The Lagrangian dual}
\begin{itemize}
\item \textit{Example.} Consider solving an underdetermined linear system with a minimum $ \ell^2 $-norm solution\footnote{
Squared $ \ell^2 $-norm is differentiable and does not change the solution.
}. I.e. for $ q < n $, we want to solve
\begin{equation*}
\begin{array}{ll}
\displaystyle\min_\mathbf{x} & \Vert\mathbf{x}\Vert_2^2 \\
\text{s.t.} & \mathbf{Ax} = \mathbf{b}
\end{array}
\end{equation*}
Here $ \mathbf{A} \in \mathbb{R}^{q \times n} $, $ \mathbf{b} \in \mathbb{R}^q $. The Lagrangian $ \mathcal{L} : \mathbb{R}^n \times \mathbb{R}^q \rightarrow \mathbb{R} $ is s.t. $ \mathcal{L}(\mathbf{x}, \nu) \triangleq \mathbf{x}^\top\mathbf{x} + \nu^\top(\mathbf{Ax} - \mathbf{b}) $.
Fixing $ \nu $, $ \mathcal{L} $ is convex in $ \mathbf{x} $, so at its minimizer $ \mathbf{x}_\nu $, $ \nabla_\mathbf{x}\mathcal{L}(\mathbf{x}_\nu, \nu) = 2\mathbf{x}_\nu + \mathbf{A}^\top\nu = \mathbf{0} \Rightarrow \mathbf{x}_\nu = -\frac{1}{2}\mathbf{A}^\top\nu $. Then, \begin{equation*} f_d(\nu) \triangleq \inf_{\mathbf{x} \in \mathbb{R}^n} \mathcal{L}(\mathbf{x}, \nu) = \mathcal{L}(\mathbf{x}_\nu, \nu) = -\frac{1}{4}\nu^\top\mathbf{AA}^\top\nu - \mathbf{b}^\top\nu \end{equation*} $ \forall \nu \in \mathbb{R}^q $, $ f_d(\nu) \in \mathbb{R} $, so $ \operatorname{dom}f_d = \mathbb{R}^q $. $ -f_d $ is convex, $ \mathbf{AA}^\top \succeq \mathbf{0} $\footnote{ $ \forall \mathbf{x} \in \mathbb{R}^n, \mathbf{x}^\top\mathbf{AA}^\top\mathbf{x} = \big(\mathbf{A}^\top\mathbf{x}\big)^\top\mathbf{A}^\top\mathbf{x} = \Vert\mathbf{A}^\top\mathbf{x}\Vert_2^2 \ge 0 $. }. \end{itemize} % spacing for footnote \medskip \end{frame} \subsection{Strong duality} \begin{frame}{Strong duality} \begin{itemize} \item \textit{Definition.} Let $ d^* $ denote the optimal value of the dual problem (\aref{opt_prob_std_dual}). The property $ d^* \le p^* $ is \textit{weak duality}, which always holds \cite{bv_convex_opt}. \item \textit{Definition.} If $ d^* = p^* $, we say that \textit{strong duality} holds. \item \textit{Theorem.} If (\aref{opt_prob_std}) is a convex problem and Slater's condition holds, i.e. $ \exists \mathbf{x} \in \operatorname{relint}\tilde{\mathcal{M}} $ s.t. $ \mathbf{u}(\mathbf{x}) \prec \mathbf{0} $, $ \mathbf{v}(\mathbf{x}) = \mathbf{0} $, strong duality holds \cite{bv_convex_opt}. \item If (\aref{opt_prob_std}) convex and $ u_1, \ldots u_k $ affine, $ k \le p $, Slater's condition can be refined s.t. if $ \exists \mathbf{x} \in \operatorname{relint}\tilde{\mathcal{M}} $ s.t. 
$ u_1(\mathbf{x}) \le 0, \ldots u_k(\mathbf{x}) \le 0 $, $ u_{k + 1}(\mathbf{x}) < 0, \ldots u_p(\mathbf{x}) < 0 $, $ \mathbf{v}(\mathbf{x}) = \mathbf{0} $, strong duality holds \cite{bv_convex_opt}.
\item If (\aref{opt_prob_std}) convex, $ \mathbf{u} $ affine, $ \tilde{\mathcal{M}} = \mathbb{R}^n $, strong duality holds whenever the problem is feasible, i.e. $ \mathcal{X}^* \neq \emptyset $.
\item \textit{Theorem.} Suppose strong duality holds for (\aref{opt_prob_std}). Let $ \mathbf{x}^* $ be primal optimal and $ \lambda^*, \nu^* $ be dual optimal. Then, $ \forall i \in \{1, \ldots p\} $, $ \lambda_i^* > 0 \Rightarrow u_i(\mathbf{x}^*) = 0 $, i.e. \textit{complementary slackness} holds \cite{bv_convex_opt}.
\item Equivalently, one can write $ u_i(\mathbf{x}^*) < 0 \Rightarrow \lambda_i^* = 0 $.
\end{itemize}
\end{frame}

\subsection{Karush-Kuhn-Tucker conditions}

\begin{frame}{Karush-Kuhn-Tucker conditions}
\begin{itemize}
\item Many problems have differentiable objectives and constraints.
\item \textit{Theorem.} Suppose strong duality holds for (\aref{opt_prob_std}) and $ f $, $ \mathbf{u} $, $ \mathbf{v} $ are differentiable. Then, the following conditions must hold.
\begin{enumerate}
\item \textit{Stationarity.} $ \nabla f(\mathbf{x}^*) + \nabla\mathbf{u}(\mathbf{x}^*)^\top\lambda^* + \nabla\mathbf{v}(\mathbf{x}^*)^\top\nu^* = \mathbf{0} $.
\item \textit{Primal feasibility.} $ \mathbf{u}(\mathbf{x}^*) \preceq \mathbf{0} $, $ \mathbf{v}(\mathbf{x}^*) = \mathbf{0} $.
\item \textit{Dual feasibility.} $ \lambda^* \succeq \mathbf{0} $.
\item \textit{Complementary slackness.} $ \mathbf{u}(\mathbf{x}^*)^\top \lambda^* = 0 $.
\end{enumerate}
These are the \textit{Karush-Kuhn-Tucker conditions} \cite{bv_convex_opt}.
\item Note $ \mathbf{x}^* = \arg\min_{\mathbf{x} \in \tilde{\mathcal{M}}} \mathcal{L}_f(\mathbf{x}, \lambda^*, \nu^*) \Rightarrow \nabla_\mathbf{x}\mathcal{L}_f(\mathbf{x}^*, \lambda^*, \nu^*) = \mathbf{0} $, where
\begin{equation*}
\nabla_\mathbf{x}\mathcal{L}_f(\mathbf{x}^*, \lambda^*, \nu^*) \triangleq \nabla f(\mathbf{x}^*) + \nabla\mathbf{u}(\mathbf{x}^*)^\top\lambda^* + \nabla\mathbf{v}(\mathbf{x}^*)^\top\nu^*
\end{equation*}
By strong duality, $ f(\mathbf{x}^*) = f_d(\lambda^*, \nu^*) \triangleq \inf_{\mathbf{x} \in \tilde{\mathcal{M}}}\{ \mathcal{L}_f(\mathbf{x}, \lambda^*, \nu^*) \} $.
\item Complementary slackness is equivalently written as $ \lambda_i^*u_i(\mathbf{x}^*) = 0 $, $ \forall i \in \{1, \ldots p\} $: since $ \lambda^* \succeq \mathbf{0} $ and $ \mathbf{u}(\mathbf{x}^*) \preceq \mathbf{0} $, the sum $ \mathbf{u}(\mathbf{x}^*)^\top\lambda^* $ has only nonpositive terms and vanishes iff each term is zero.
\end{itemize}
\end{frame}

\begin{frame}{Karush-Kuhn-Tucker conditions}
\begin{itemize}
\item KKT conditions are \alert{necessary} optimality conditions for differentiable optimization problems where strong duality holds.
\item But if (\aref{opt_prob_std}) is convex and $ \mathbf{x}^*, \lambda^*, \nu^* $ satisfy the KKT conditions, then strong duality holds, $ \mathbf{x}^* $ is primal optimal, $ (\lambda^*, \nu^*) $ is dual optimal.
\item KKT conditions are \alert{sufficient} for optimality if the problem is convex.
\item Furthermore, if the problem also satisfies Slater's condition, the KKT conditions become \alert{necessary and sufficient} for optimality.
\item In summary:
\begin{enumerate}
\item Strong duality $ \Rightarrow $ KKT conditions satisfied.
\item Convex problem, KKT conditions $ \Rightarrow $ strong duality.
\item Convex problem satisfying Slater's condition: KKT conditions $ \Leftrightarrow $ optimality.
\end{enumerate}
\end{itemize}
\end{frame}

% BibTeX slide for references.
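% Worked example added for illustration (not part of the original deck): the
% KKT conditions applied to a one-dimensional convex problem.
\begin{frame}{Karush-Kuhn-Tucker conditions}
\begin{itemize}
\item \textit{Example.} Minimize $ x^2 $ s.t. $ 1 - x \le 0 $. The problem is convex and differentiable, and Slater's condition holds (e.g. $ x = 2 $), so KKT is necessary and sufficient.
\item Lagrangian $ \mathcal{L}(x, \lambda) = x^2 + \lambda(1 - x) $. The KKT conditions read
\begin{equation*}
2x^* - \lambda^* = 0, \quad 1 - x^* \le 0, \quad \lambda^* \ge 0, \quad \lambda^*(1 - x^*) = 0
\end{equation*}
\item $ \lambda^* = 0 $ forces $ x^* = 0 $, which is infeasible; hence $ x^* = 1 $, $ \lambda^* = 2 $, $ p^* = 1 $.
\item Check via the dual: $ f_d(\lambda) = \inf_x \{x^2 + \lambda(1 - x)\} = \lambda - \lambda^2/4 $, maximized at $ \lambda^* = 2 $ with $ d^* = 1 = p^* $, confirming strong duality.
\end{itemize}
\end{frame}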
% should use either acm or ieeetr style
\begin{frame}{References}
\bibliographystyle{acm}
% relative path may need to be updated depending on .tex file location
\bibliography{../master_bib}
\end{frame}

\end{document}
\section{Task 2}
For this task we are given the following code:
\begin{figure}
\begin{lstlisting}
for i from 0 to N-1 // outer loop
    accum = A[i,0] * A[i,0];
    B[i,0] = accum;
    for j from 1 to 63 // inner loop
        tmpA = A[i, j];
        accum = sqrt(accum) + tmpA*tmpA;
        B[i,j] = accum;
\end{lstlisting}
\caption{Pseudocode given for the task.}
\label{fig:t2code1}
\end{figure}

\subsection{Task 2.a}
\begin{itemize}
    \item Because both \texttt{accum} and \texttt{tmpA} are declared outside the loop, the outer loop cannot be parallelised as written: different threads would attempt to write to the same memory locations.
\begin{figure}
\begin{lstlisting}
for i from 0 to N-1 // outer loop
    float accum = A[i,0] * A[i,0];
    B[i,0] = accum;
    for j from 1 to 63 // inner loop
        float tmpA = A[i, j];
        accum = sqrt(accum) + tmpA*tmpA;
        B[i,j] = accum;
\end{lstlisting}
\caption{Code rewritten to allow for parallelisation of the outer loop.}
\label{fig:t2code2}
\end{figure}
    \item The inner loop is not parallelisable: \texttt{accum} is declared outside it, and each iteration reads the value of \texttt{accum} written by the previous iteration, so the loop carries a dependency and its iterations cannot run independently.
\end{itemize}

\subsection{Task 2.b OpenMP Code}
The code for this can be found in \textit{src/task2openMP.cpp} and \textit{src/cpuFunc.cu.h}. It can be built from the makefile using the \texttt{make bonus} command/target. It can then be run by executing the \textit{task2openMP} file. Execution times for different matrix sizes can be seen in Table \ref{tab:task2times} along with execution times for the following tasks.

\subsection{Task 2.c Naive GPU Code}
The code for this task can be found in \textit{src/task2.cu}, \textit{src/cpuFunc.cu.h}, and \textit{src/gpuFunc.cu.h}. The kernel that is used for this task is \texttt{flatNaiveTask2Kernel}. Execution times for different matrix sizes can be seen in Table \ref{tab:task2times} along with execution times for the following tasks.
\subsection{Task 2.d Transposed GPU Code}
The code for this task can be found in \textit{src/task2.cu}, \textit{src/cpuFunc.cu.h}, and \textit{src/gpuFunc.cu.h}. The kernel that is used for this task is \texttt{flatTransposedTask2Kernel}. It also uses the \texttt{flatSharedTransposeKernel} from Task 1. Execution times for different matrix sizes can be seen in Table \ref{tab:task2times} along with execution times for the other tasks.

\subsection{Execution Time}
All running times were taken over 100 repetitions of each matrix size for each different method. All the times were taken running on one of the servers we were given access to as part of the course.

\begin{table}[H]
    \centering
    \begin{tabular}{|r|r|r|r|r|}
    \hline
    \textbf{Dimensions} & \textbf{CPU} & \textbf{CPU OpenMP} & \textbf{GPU Naive} & \textbf{GPU Transposed} \\ \hline
    $10 \times 64$    & $12\mu s$    & $185\mu s$ & $51\mu s$  & $69\mu s$  \\
    $20 \times 64$    & $23\mu s$    & $11\mu s$  & $67\mu s$  & $68\mu s$  \\
    $30 \times 64$    & $34\mu s$    & $11\mu s$  & $85\mu s$  & $70\mu s$  \\
    $40 \times 64$    & $45\mu s$    & $12\mu s$  & $88\mu s$  & $68\mu s$  \\
    $50 \times 64$    & $57\mu s$    & $12\mu s$  & $87\mu s$  & $70\mu s$  \\
    $60 \times 64$    & $68\mu s$    & $12\mu s$  & $89\mu s$  & $70\mu s$  \\
    $70 \times 64$    & $80\mu s$    & $13\mu s$  & $88\mu s$  & $74\mu s$  \\
    $80 \times 64$    & $91\mu s$    & $13\mu s$  & $89\mu s$  & $69\mu s$  \\
    $90 \times 64$    & $102\mu s$   & $13\mu s$  & $88\mu s$  & $71\mu s$  \\
    $100 \times 64$   & $114\mu s$   & $14\mu s$  & $88\mu s$  & $74\mu s$  \\
    $2000 \times 64$  & $2277\mu s$  & $106\mu s$ & $166\mu s$ & $129\mu s$ \\
    $3000 \times 64$  & $3402\mu s$  & $135\mu s$ & $188\mu s$ & $166\mu s$ \\
    $4000 \times 64$  & $4555\mu s$  & $179\mu s$ & $210\mu s$ & $200\mu s$ \\
    $5000 \times 64$  & $5687\mu s$  & $222\mu s$ & $143\mu s$ & $145\mu s$ \\
    $10000 \times 64$ & $11373\mu s$ & $436\mu s$ & $382\mu s$ & $216\mu s$ \\
    \hline
    \end{tabular}
    \caption{This table shows the execution times for different implementations of the algorithm in this
task. Each time is the average over 100 runs.}
    \label{tab:task2times}
\end{table}

The first reading for the OpenMP solution is off. I have tried to redo the reading several times and also increased the number of repetitions in order to eliminate delays caused by a cold cache, but the reading is persistent. This leads me to conclude that the overhead of creating the OpenMP threads is much higher than the time saved by parallelising for a matrix with only 10 rows.
\chapter{Mathematical Model of Vehicle Dynamics}
This chapter will detail the mathematical model that was developed to describe the motion of the uDrone.

\section{Reference Frames and State Space}
Conventions for creating this model will follow those laid out by Thor Fossen in the 2011 Handbook of Marine Craft Hydrodynamics and Motion Control. The fixed world frame is defined using the North-East-Down (NED) coordinate system, represented as $\{n\} = (x_n, y_n, z_n)$. In this system, $x_n$ points North, $y_n$ points East, and $z_n$ points down. The body-fixed frame $\{b\} = (x_b, y_b, z_b)$ has its origin, $o_b$, fixed to the uDrone. For purposes of this model, $o_b$ is set at the point on the plane of the motors at the back of the uDrone that is along its center-line. $x_b$ runs along the center-line of the vehicle, pointing from the aft (back) of the uDrone to the fore (front). $z_b$ runs from top to bottom and, following the right-hand rule, $y_b$ runs towards the starboard (right). Further, following convention, roll is defined as rotation about $x_b$, pitch as rotation about $y_b$, and yaw as rotation about $z_b$, with counter-clockwise (CCW) being the positive direction.

This choice of body frame origin reduces the complexity of modeling the forces produced by the motors. The drawback is some added complexity caused by the moments of the center of buoyancy and center of gravity relative to the chosen origin, but this is more straightforward than the complexity of calculating motor forces about a different point.

The position of the vehicle, or the body-fixed frame $\{b\}$, with respect to the world frame $\{n\}$ is expressed as $\bm{p} = [N, E, D]^T$. The attitude of the vehicle is expressed as $\bm{\Theta} = [\phi, \theta, \psi]^T$ using Euler angles. Together these create the 6-dimensional position/orientation vector $\bm{\eta} = [\bm{p}, \bm{\Theta}]^T$. The linear and angular velocities are expressed with respect to the vehicle's body-fixed frame $\{b\}$.
The linear velocity is $\bm{v} = [u, v, w]^T$ and the angular velocity is $\bm{\omega} = [p, q, r]^T$. When combined, these form the 6-dimensional velocity vector $\bm{\nu} = [\bm{v}, \bm{\omega}]^T$. Altogether, $\boldsymbol{\eta}$ and $\boldsymbol{\nu}$ represent the 12-dimensional state space describing the state of the uDrone \parencite{thor_kin}.

% \begin{align*}
% \boldsymbol{p}^{n}_{b}=\left[\begin{array}{c}
% N \\ E \\ D
% \end{array}\right]
% \bm{\Theta}^{n}_{b}=\left[\begin{array}{c}
% \phi \\ \theta \\ \psi
% \end{array}\right]
% \bm{\eta}=\left[\begin{array}{c}
% \bm{p} \\ \bm{\Theta}
% \end{array}\right]
% \end{align*}

% \begin{align*}
% \boldsymbol{v}^{b}_{b}=\left[\begin{array}{c}
% u \\ v \\ w
% \end{array}\right]
% \bm{\omega}^{b}_{b}=\left[\begin{array}{c}
% p \\ q \\ r
% \end{array}\right]
% \bm{\nu}=\left[\begin{array}{c}
% \bm{v} \\ \bm{\omega}
% \end{array}\right]
% \end{align*}

\section{Equations of Motion}
The general equation of motion for the uDrone can be derived as a function of the combined linear and angular velocity vector $\boldsymbol{\nu}$ \parencite{thor_mod}. This is shown in equation \ref{eqm}.
\begin{gather}
\underbrace{\boldsymbol{M}_{R B} \dot{\boldsymbol{\nu}}+\boldsymbol{C}_{R B}(\boldsymbol{\nu}) \boldsymbol{\nu}}_{\text {rigid-body forces}}+\underbrace{\boldsymbol{M}_{A} \dot{\boldsymbol{\nu}}+\boldsymbol{C}_{A}\left(\boldsymbol{\nu}\right) \boldsymbol{\nu}+\boldsymbol{D}\left(\boldsymbol{\nu}\right) \boldsymbol{\nu}}_{\text {hydrodynamic forces}}+\underbrace{\boldsymbol{g}(\boldsymbol{\eta})}_{\text{hydrostatic forces}}=\boldsymbol{\tau}
\label{eqm}
\end{gather}
With the terms defined as:
\begin{align*}
\boldsymbol{\eta}&=\text{combined position and orientation}\\
\boldsymbol{\nu}&=\text{combined linear and angular velocity}\\
\dot{\boldsymbol{\nu}}&=\text{combined linear and angular acceleration}\\
\boldsymbol{M}_{R B}&=\text{rigid-body system inertia matrix}\\
\boldsymbol{C}_{R B}(\boldsymbol{\nu})&=\text{rigid-body Coriolis matrix}\\
\boldsymbol{M}_{A}&=\text{added mass system inertia matrix}\\
\boldsymbol{C}_{A}(\boldsymbol{\nu})&=\text{added mass Coriolis matrix}\\
\boldsymbol{D}(\boldsymbol{\nu})&=\text{damping matrix}\\
\boldsymbol{g}(\boldsymbol{\eta})&=\text{gravitational and buoyant forces}\\
\boldsymbol{\tau}&=\text{control inputs}
\end{align*}

% \begin{align*}
% \boldsymbol{\eta}&=\text{combined position and orientation}\\
% \boldsymbol{\nu}&=\text{combined linear and angular velocity}\\
% \dot{\boldsymbol{\nu}}&=\text{combined linear and angular acceleration}\\
% \boldsymbol{\tau}&=\text{control inputs}\\
% \boldsymbol{M}&=\text{system inertia matrix}\\
% \boldsymbol{C}(\boldsymbol{\nu})&=\text{Coriolis matrix}\\
% \boldsymbol{D}(\boldsymbol{\nu})&=\text{damping matrix}\\
% \boldsymbol{g}(\boldsymbol{\eta})&=\text{gravitational and buoyant forces}\\
% \boldsymbol{\tau}&=\text{control inputs}
% \end{align*}

% \begin{align*}
% \boldsymbol{M}&=\boldsymbol{M}_{R B}+\boldsymbol{M}_{A}\\
% \boldsymbol{C}&=\boldsymbol{C}_{R B}+\boldsymbol{C}_{A}
% \end{align*}

\section{Rigid-Body Forces}
The exact values of the system inertia ($\boldsymbol{M}$),
Coriolis ($\boldsymbol{C}$), and damping ($\boldsymbol{D}$) matrices can be determined either through intensive hydrodynamic modeling or experimentation. Due to the complexity of hydrodynamic modeling, experimentation is typically used to determine these values in underwater vehicles similar to the uDrone \parencite{hipp_pen}. It is possible, however, to estimate the rigid-body matrix values using information about the vehicle generated from the 3D model. This section details these estimations and explains how they were calculated.

The rigid-body system inertia matrix about the center of origin can be calculated using equation \ref{mrb}, which is found in \cite{thor_rb}.
\begin{gather}
\boldsymbol{M}_{R B}^{C O}=\left[\begin{array}{cc}
m \boldsymbol{I}_{3 \times 3} & -m \boldsymbol{S}\left(\boldsymbol{r}_{g}^{b}\right) \\
m \boldsymbol{S}\left(\boldsymbol{r}_{g}^{b}\right) & \boldsymbol{I}_{g}-m \boldsymbol{S}^{2}\left(\boldsymbol{r}_{g}^{b}\right)
\end{array}\right]
\label{mrb}
\end{gather}

% \begin{align*}
% \boldsymbol{M}_{R B}^{C O}&=\text{rigid body system inertia matrix about the body fixed-frame origin}\\
% \boldsymbol{I}_{3 \times 3}&=\text{3 by 3 identity matrix}\\
% m&=\text{mass}\\
% \boldsymbol{S}(\cdot)&=\text{cross product operator}\\
% \boldsymbol{r}_{g}^{b}&=\text{ vector from the body fixed-frame origin to the center of gravity}\\
% \boldsymbol{I}_{g}&=\text{inertia matrix}
% \end{align*}

In this equation, $m$ is the mass, $\boldsymbol{I}_{3 \times 3}$ is the three-by-three identity matrix, $\boldsymbol{I}_{g}$ is the inertia matrix, and $\boldsymbol{r}_{g}^{b}$ is the vector from the body fixed-frame origin to the center of gravity. $\boldsymbol{S}$ is the cross product operator matrix. Before the final vehicle was constructed, $m$, $\boldsymbol{I}_{g}$, and $\boldsymbol{r}_{g}^{b}$ were taken from the 3D model. Those estimates are shown in equation \ref{m_matrix}.
\begin{equation}
\boldsymbol{M}_{R B}^{C O}=\left[\begin{array}{cccccc}
9.9000 & 0 & 0 & 0 & 0 & 0 \\
0 & 9.9000 & 0 & 0 & 0 & 1.4850 \\
0 & 0 & 9.9000 & 0 & -1.4850 & 0 \\
0 & 0 & 0 & 0.3420 & 0.0010 & -0.0001 \\
0 & 0 & -1.4850 & 0.0010 & 0.5577 & -0.0015 \\
0 & 1.4850 & 0 & -0.0001 & -0.0015 & 0.3167
\end{array}\right]
\label{m_matrix}
\end{equation}
With the completed vehicle, these numbers can be verified and adjusted if needed. For example, with no ballast, the uDrone is positively buoyant. Therefore, to maintain its depth without using power to stay underwater, ballast will need to be added. By placing the ballast closer to the center of the vehicle, the moments of inertia will decrease. Conversely, by placing it further from the center, the moments can be increased. Additionally, the ballast can be used to change the center of gravity relative to the center of buoyancy. This will create a moment in the vehicle that can help it stay oriented or level in specific ways. This relationship is discussed further in section \ref{hydrostatics}.

The Coriolis matrix is a function of the angular velocity of the vehicle and can be derived as shown in equation \ref{crb} from \cite{thor_rb}.
\begin{gather}
\boldsymbol{C}_{RB}^{CO}(\boldsymbol{\nu})=\left[\begin{array}{cc}
m \boldsymbol{S}\left(\boldsymbol{\omega}^{b}\right) & -m \boldsymbol{S}\left(\boldsymbol{\omega}^{b}\right) \boldsymbol{S}\left(\boldsymbol{r}_{g}^{b}\right) \\
m \boldsymbol{S}\left(\boldsymbol{r}_{g}^{b}\right) \boldsymbol{S}\left(\boldsymbol{\omega}^{b}\right) & -\boldsymbol{S}\left(\left(\boldsymbol{I}_{g}-m \boldsymbol{S}^{2}\left(\boldsymbol{r}_{g}^{b}\right)\right) \boldsymbol{\omega}^{b}\right)
\end{array}\right]
\label{crb}
\end{gather}

% \begin{align*}
% \boldsymbol{C}_{R B}^{C O}&=\text{rigid body Coriolis matrix about the body fixed-frame origin}\\
% m&=\text{mass}\\
% \boldsymbol{S}(\cdot)&=\text{cross product operator}\\
% \boldsymbol{\omega}^b&=\text{body-fixed frame angular velocity}\\
% \boldsymbol{r}_{g}^{b}&=\text{ vector from the body fixed-frame origin to the center of gravity}\\
% \boldsymbol{I}_{g}&=\text{inertia matrix}
% \end{align*}

Here $\boldsymbol{\omega}^{b}$ is the body-fixed frame angular velocity vector. Since this matrix is a function of the vehicle state, it will change with motion. The actual calculations of this matrix are handled by a MATLAB program written as a companion to Thor Fossen's book \parencite{mss}. The values based on the system inertia matrix in equation \ref{m_matrix}, along with a velocity state vector $\bm{\nu} = [1,0,0,0,0,0]^T$ representing only forward motion, are shown in equation \ref{c_matrix}.
\begin{equation}
\boldsymbol{C}_{RB}^{CO}(\boldsymbol{\nu})=\left[\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 9.9000 \\
0 & 0 & 0 & 0 & -9.9000 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 9.9000 & 0 & 0 & 0 \\
0 & -9.9000 & 0 & 0 & 0 & 0
\end{array}\right]
\label{c_matrix}
\end{equation}

\section{Hydrostatic Forces} \label{hydrostatics}
The hydrostatic term $\boldsymbol{g}(\boldsymbol{\eta})$ accounts for the forces on the uDrone caused by gravity and buoyancy.
The force of gravity, or weight ($W$), is equal to the mass of the uDrone times the acceleration due to gravity. The buoyant force ($B$) is the weight of the water displaced by the uDrone. This is calculated by multiplying the volume of the uDrone, the density of water, and the acceleration due to gravity. If the center of gravity and the center of buoyancy (or center of volume) of the uDrone are not in the same spot, then there will be a moment on the whole vehicle. Similarly, if the two forces are not equal, then there will also be a net linear force on it. This term is a function of the orientation of the vehicle and can be seen in equation \ref{bigG} \parencite{thor_rb}.
\begin{equation}\label{bigG}
\boldsymbol{g}(\boldsymbol{\eta})=\left[\begin{array}{llll}
& (W-B) \sin (\theta) \\
- & (W-B) \cos (\theta) \sin (\phi) \\
- & (W-B) \cos (\theta) \cos (\phi) \\
- & \left(y_{g} W-y_{b} B\right) \cos (\theta) \cos (\phi) & + & \left(z_{g} W-z_{b} B\right) \cos (\theta) \sin (\phi) \\
& \left(z_{g} W-z_{b} B\right) \sin (\theta) & + & \left(x_{g} W-x_{b} B\right) \cos (\theta) \cos (\phi) \\
- & \left(x_{g} W-x_{b} B\right) \cos (\theta) \sin (\phi) & - & \left(y_{g} W-y_{b} B\right) \sin (\theta)
\end{array}\right]
\end{equation}

% \begin{equation}
% \boldsymbol{g}(\boldsymbol{\eta})=\left[\begin{array}{llll}
% 0\\0\\0\\0\\0\\0
% \end{array}\right]
% \end{equation}

Ballasting can be used to adjust the weight and center of mass of the uDrone. To simplify this equation, the ballast is set so that the weight and buoyancy are equal and the centers of mass and buoyancy coincide. This means the $\boldsymbol{g}$ vector will be zero for all $\boldsymbol{\eta}$. These forces can be seen interacting on the uDrone in figure \ref{cord_frame}. Maintaining overlapped centers of buoyancy and gravity also makes the vehicle more nimble, as there are fewer moments to overcome when rotating, which makes the uDrone more maneuverable.
Equal weight and buoyancy reduce the energy used, as the uDrone does not need to overcome a force making it sink or float as it cruises through the water column.

There might be reasons to adjust the ballasting of the uDrone to impart other properties on it. For example, if the vehicle is slightly positively buoyant, then it will float in the event of a power failure, allowing it to be recovered more easily. Also, if the center of gravity is placed below the center of buoyancy, but still in line with it in the z body direction, then the vehicle will be self-righting in roll. This could make control easier, as roll, which should be at zero most of the time, can be ignored.

\section{Hydrodynamic Forces} \label{hydrodynamics}
The added mass system inertia matrix and added mass Coriolis matrix can only be determined through experimentation or complex fluid dynamic simulation and are not calculated in the scope of this thesis. The damping matrix, however, can be estimated using drag calculations. The vehicle is modeled as a flat-faced cylinder moving through water in a direction along its axis. This estimation neglects two factors: 1) drag from the motors, and 2) rotational damping. Both of these forces are small in comparison to the drag of the flat-faced cylinder, so for the purposes of an approximate model, they can be ignored. The force of drag in a single direction can be calculated using the drag equation:
\begin{gather}
F_{D}=C_{D} A \frac{\rho V^{2}}{2}
\end{gather}
\begin{align*}
F_D&=\text{drag force}\\
C_D&=\text{coefficient of drag}\\
A&=\text{cross-sectional area normal to velocity}\\
\rho&=\text{fluid density}\\
V&=\text{velocity}\\
\end{align*}
Based on the ratio of the diameter to the length of the main body of the uDrone, the coefficient of drag in the direction of motion is estimated at 0.8. The density of salt water is 1024kg/m$^3$, and the cross-sectional area, calculated from the diameter of the eight inch cylinder, is 0.037m$^2$.
Putting this all together, the damping force can be calculated as a function of velocity. This is:
\begin{gather}
\boldsymbol{D}(\boldsymbol{\nu})\boldsymbol{\nu}=\left[\begin{array}{c}
-15.0u ^2 \\
\sim 0 \\
\sim 0 \\
D_{roll} \\
D_{pitch} \\
D_{yaw}
\end{array}\right]
\end{gather}
$u$ is the velocity in the x body direction. The last three values refer to rotational damping; for the purposes of the calculations in the next section, they are assumed to be zero. The second and third terms, however, will always be zero, or nearly zero, as the vehicle cannot independently move in the Y or Z direction.

\section{Control Inputs}\label{control_inputs}
The control inputs are the forces put on the uDrone by the thrusters. The four motors of the uDrone are arranged on a plane, all facing the same direction. The center point between all the thrusters along that plane is the center of origin $C_O$ for the vehicle. Thrusters one and three spin in a counterclockwise direction while thrusters two and four spin clockwise for forward thrust. This ensures that the vehicle has no roll moment when moving forward. Figure \ref{cord_frame} shows the uDrone with the $C_O$ and rotation direction of the thrusters.
\begin{figure}[h]
\includegraphics[width=\maxwidth{\textwidth}]{img/cord_frame.png}
\caption{uDrone Coordinate Frame and Motor Directions}
\label{cord_frame}
\end{figure}
Using this convention it is possible to determine the actuation vector, $\boldsymbol{\tau}$, as a function of thruster inputs.
\begin{gather}
\boldsymbol{\tau}=\left[\begin{array}{c}
F_1+F_2+F_3+F_4 \\
0 \\
0 \\
M_1-M_2+M_3-M_4 \\
(F_2+F_3-F_1-F_4) \frac{L}{\sqrt{2}} \\
(F_1+F_2-F_3-F_4) \frac{L}{\sqrt{2}}
\end{array}\right]
\label{tau}
\end{gather}
In equation \ref{tau}, $F_x$ indicates the linear force produced by the $x^{th}$ motor and $M_x$ the moment produced by the $x^{th}$ motor. $L$ is 0.115m, the length from $C_O$ to the center of each motor, which is the same for all four motors.
Unfortunately, there is no data from Blue Robotics about the moment of the thrusters, so the relationship between control input and the roll moment will need to be determined experimentally. Data is available, however, correlating motor power and thrust force.
\begin{figure}[h]
\includegraphics[width=\maxwidth{\textwidth}]{img/force_amps.png}
\caption{Force Output vs. Current Draw for T200 Thruster at 14 Volts}
\label{force_amp}
\legend{\emph{Source}: \iftoggle{usebiblatex}{\textcite{t200}}{\citet{t200}}}
\end{figure}
Using figure \ref{force_amp}, along with data about the uDrone battery, it is possible to calculate the maximum velocity, the running time at this velocity, and the running time at the operational velocity of 1m/s. While the uDrone can carry two 14V, 18Ah batteries, the intent is to use one for the thrusters and the other to power all electronics. Therefore, all longevity calculations are done assuming 18Ah are available for thruster actuation.

In order to calculate maximum velocity, the drag equation is set equal to the maximum thruster force. The T200 thrusters produce a maximum thrust of 41.68N each. This is 166.7N of total thrust. Setting this equal to $15u^2$ and solving yields a maximum speed of 3.33m/s. Maintaining this velocity takes 20.29A per motor, for a total of 81.16A. The uDrone could only maintain this speed for approximately 13.3 minutes with an 18Ah battery.

Typically, however, the uDrone will be cruising at a speed of 1m/s. At this speed, the drag force is approximately 15N. To maintain this thrust each motor must produce about 3.75N of force. To maintain this force each motor will use approximately 0.5A, for a total of 2A.
In order to stay conservative, a 50\% safety factor is added to this value to account for uncalculated drag forces and for actuation that affects orientation rather than forward motion. With this addition, the vehicle will use 3A for actuation during normal operation. With a single 18Ah battery dedicated to the thrusters, this yields a dive time of approximately 6 hours.
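The chain of arithmetic above can be reproduced with a short script. The constants are the figures quoted in the text (T200 thrust and current at 14V, the $15u^2$ drag model, an 18Ah pack); the variable names are illustrative:

```python
import numpy as np

# Figures quoted in the text (taken as exact here for illustration)
DRAG_COEFF = 15.0        # N/(m/s)^2, forward drag model D = 15 u^2
F_MAX = 41.68            # N, maximum thrust per T200 at 14 V
I_MAX = 20.29            # A, current draw per motor at maximum thrust
BATTERY_AH = 18.0        # one 14 V, 18 Ah pack reserved for the thrusters

# Maximum speed: total thrust balances drag, 15 u^2 = 4 * 41.68 N
u_max = np.sqrt(4 * F_MAX / DRAG_COEFF)             # ~3.33 m/s
t_max = BATTERY_AH / (4 * I_MAX) * 60.0             # ~13.3 minutes

# Cruise at 1 m/s: ~15 N drag, ~0.5 A per motor from the thrust curve,
# plus the 50% safety factor applied in the text
I_cruise = 4 * 0.5 * 1.5                            # 3 A total
t_cruise = BATTERY_AH / I_cruise                    # ~6 hours
```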
\subsection{test 2.1} Some included text here.
\subsection{Blackjack}
\screenshot{plugins/images/ss-blackjack}{Blackjack}{fig:blackjack}
Blackjack, a game played in casinos around the world, is now available in the palm of your hand! The rules are simple: try to get as close to 21 as possible without going over, or simply beat out the dealer for the best hand. Although this may not seem difficult, blackjack is a game renowned for the strategy involved. This version includes the ability to split, buy insurance, and double down.
For the full set of rules to the game, and other fascinating information, visit\\
\url{http://www.blackjackinfo.com/blackjack-rules.php}

\begin{btnmap}
\opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD%
,GIGABEAT_PAD,GIGABEAT_S_PAD,MROBE100_PAD,SANSA_C200_PAD,PBELL_VIBE500_PAD%
,SANSA_FUZEPLUS_PAD,SANSA_CLIP_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}
{\ButtonLeft{} / \ButtonRight{} / \ButtonUp{} / \ButtonDown}
\opt{IPOD_4G_PAD,IPOD_3G_PAD,SANSA_E200_PAD,SANSA_FUZE_PAD}
{\ButtonLeft{} / \ButtonRight{} / \ButtonScrollFwd{} / \ButtonScrollBack}
\opt{IRIVER_H10_PAD}
{\ButtonLeft{} / \ButtonRight{} / \ButtonScrollUp{} / \ButtonScrollDown}
\opt{MPIO_HD300_PAD}
{\ButtonRew{} / \ButtonFF{} / \ButtonScrollUp{} / \ButtonScrollDown}
\opt{COWON_D2_PAD}{\TouchMidRight{} / \TouchMidLeft}
\opt{HAVEREMOTEKEYMAP}{& }
& Enter betting amount\\
\opt{RECORDER_PAD,IRIVER_H10_PAD,GIGABEAT_S_PAD,SAMSUNG_YH92X_PAD%
,SAMSUNG_YH820_PAD}{\ButtonPlay}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOn}
\opt{IPOD_4G_PAD,IPOD_3G_PAD,IAUDIO_X5_PAD,GIGABEAT_PAD%
,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,MROBE100_PAD,SANSA_FUZE_PAD,SANSA_FUZEPLUS_PAD%
}{\ButtonSelect}
\opt{ONDIO_PAD}{\ButtonMenu}
\opt{COWON_D2_PAD}{\TouchTopRight}
\opt{PBELL_VIBE500_PAD}{\ButtonOK}
\opt{MPIO_HD300_PAD}{\ButtonEnter}
\opt{HAVEREMOTEKEYMAP}{& }
& Hit (Draw new card) / Select\\
\opt{RECORDER_PAD}{\ButtonFOne}
\opt{IRIVER_H100_PAD,IRIVER_H300_PAD,IAUDIO_X5_PAD}{\ButtonRec}
\opt{IRIVER_H10_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonFF} \opt{ONDIO_PAD,IPOD_4G_PAD,IPOD_3G_PAD,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_FUZE_PAD}{\ButtonRight} \opt{GIGABEAT_PAD,GIGABEAT_S_PAD}{\ButtonVolDown} \opt{MROBE100_PAD}{\ButtonDisplay} \opt{COWON_D2_PAD}{\TouchBottomLeft} \opt{SANSA_FUZEPLUS_PAD}{\ButtonBack} \opt{PBELL_VIBE500_PAD}{\ButtonCancel} \opt{MPIO_HD300_PAD}{\ButtonPlay} \opt{HAVEREMOTEKEYMAP}{& } & Stay (End hand)\\ \opt{RECORDER_PAD}{\ButtonFTwo} \opt{IRIVER_H100_PAD,IRIVER_H300_PAD,GIGABEAT_S_PAD}{\ButtonSelect} \opt{IAUDIO_X5_PAD,SANSA_FUZEPLUS_PAD}{\ButtonPlay} \opt{IRIVER_H10_PAD,SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{\ButtonRew} \opt{GIGABEAT_PAD}{\ButtonA} \opt{ONDIO_PAD}{\ButtonUp} \opt{MROBE100_PAD}{\ButtonDown} \opt{IPOD_4G_PAD,IPOD_3G_PAD,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD,SANSA_FUZE_PAD}{\ButtonLeft} \opt{COWON_D2_PAD}{\ButtonMinus} \opt{PBELL_VIBE500_PAD}{\ButtonMenu} \opt{MPIO_HD300_PAD}{\ButtonRec} \opt{HAVEREMOTEKEYMAP}{& } & Double down\\ \opt{RECORDER_PAD,ONDIO_PAD,IRIVER_H100_PAD,IRIVER_H300_PAD}{\ButtonOff} \opt{IPOD_4G_PAD,IPOD_3G_PAD}{\ButtonMenu} \opt{IAUDIO_X5_PAD,IRIVER_H10_PAD,SANSA_E200_PAD,SANSA_C200_PAD,SANSA_CLIP_PAD% ,GIGABEAT_PAD,MROBE100_PAD,COWON_D2_PAD,SANSA_FUZEPLUS_PAD}{\ButtonPower} \opt{GIGABEAT_S_PAD}{\ButtonBack} \opt{SANSA_FUZE_PAD}{Long \ButtonHome} \opt{PBELL_VIBE500_PAD}{\ButtonRec} \opt{MPIO_HD300_PAD}{\ButtonMenu} \opt{SAMSUNG_YH92X_PAD,SAMSUNG_YH820_PAD}{Long \ButtonRew} \opt{HAVEREMOTEKEYMAP}{& } & Pause game and go to menu / Cancel\\ \end{btnmap}
\documentclass[a4paper,fleqn,usenatbib]{mnras} %========================================================================= \usepackage{amsmath} \usepackage{amssymb} \usepackage{multirow} \usepackage{graphicx} \usepackage{grffile} \usepackage[dvips]{epsfig} \usepackage{epsfig} \usepackage{color} \usepackage{caption} \usepackage{hyperref} \usepackage{bm} %Non reposionated tables %========================================================================= % INTERNAL MACROS %========================================================================= \def\be{\begin{equation}} \def\ee{\end{equation}} \def\ba{\begin{eqnarray}} \def\ea{\end{eqnarray}} % To highlight comments \definecolor{red}{rgb}{1,0.0,0.0} \newcommand{\red}{\color{red}} \definecolor{darkgreen}{rgb}{0.0,0.5,0.0} \newcommand{\SRK}[1]{\textcolor{darkgreen}{\bf SRK: \textit{#1}}} \newcommand{\SRKED}[1]{\textcolor{darkgreen}{\bf #1}} \newcommand{\before}[1]{\textcolor{red}{ #1}} \newcommand{\after}[1]{\textcolor{darkgreen}{ #1}} \newcommand{\hs}{{\hspace{1mm}}} \newcommand{\tol}{Tololo 1214-277} \newcommand{\rank}{\texttt{Ranked}} \newcommand{\boot}{\texttt{Bootstrapped}} \newcommand{\rand}{\texttt{Random}} \newcommand{\HI}{{\text{H\MakeUppercase{\romannumeral 1}}} } \newcommand{\HII}{{\text{H\MakeUppercase{\romannumeral 2}}} } \newcommand{\lya}{\ifmmode{{\rm Ly}\alpha}\else Ly$\alpha$\ \fi} \newcommand{\cm}{\ifmmode{{\rm cm}}\else cm\fi} \newcommand{\ccm}{\,\mathrm{cm}^{-3}} \newcommand{\ergps}{\,{\rm erg}\,{\rm s}\ifmmode{}^{-1}\else ${}^{-1}$\fi} \newcommand{\Mpch}{\,{\rm Mpc}\,\ifmmode h^{-1}\else $h^{-1}$\fi} \newcommand{\dd}{\mathrm{d}} \newcommand{\vek}[1]{\bm{#1}} \newcommand{\hb}{H$\beta$} \newcommand{\ha}{H$\alpha$} \newcommand{\oiii}{[OIII]} \newcommand{\oii}{[OII]} \newcommand{\nii}{[NII]} \newcommand{\esca}{erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$} \newcommand{\esc}{erg cm$^{-2}$ s$^{-1}$} \newcommand{\es}{erg s$^{-1}$} \newcommand{\esa}{erg s$^{-1}$} \newcommand{\kms}{\ifmmode\mathrm{km\ s}^{-1}\else km 
s$^{-1}$\fi}
\newcommand{\hMsun}{{\ifmmode{h^{-1}{\rm{M_{\odot}}}}\else{$h^{-1}{\rm{M_{\odot}}}$}\fi}}
\newcommand{\Msun}{{\ifmmode{{\rm{M_{\odot}}}}\else{${\rm{M_{\odot}}}$}\fi}}
\newcommand{\jefr}[1]{\textcolor{darkgreen}{\bf JEFR: \textit{#1}}}

\begin{document}
%=========================================================================
% FRONT MATTER
%=========================================================================
\title[LG satellites distribution asphericity]{We are not the 99 percent: quantifying asphericity in the distribution of Local Group satellites}

\author[J.E. Forero-Romero \& V. Arias]
{Jaime E. Forero-Romero $^{1}$ \thanks{[email protected]}, Ver\'onica Arias$^1$\\ %%
$^1$ Departamento de F\'isica, Universidad de los Andes, Cra. 1 No. 18A-10 Edificio Ip, CP 111711, Bogot\'a, Colombia \\ }

\maketitle

\begin{abstract}
We use simulations to build an explicit probability distribution for the asphericity in the satellite distribution around galaxies similar to the Local Group (LG) in the Lambda Cold Dark Matter (LCDM) paradigm. We use this distribution to estimate the atypicality of the satellite distributions in the LG even when the underlying simulations do not have enough systems fully resembling the LG. We demonstrate the method using three different simulations: Illustris-1, Illustris-1-Dark and ELVIS. Detailed results differ among the simulations, suggesting a strong influence of the typical DM halo mass in the LG samples and of simulated baryonic effects. However, there are three common trends. First, at most $2\%$ of the pairs are expected to have satellite distributions with the same asphericity as the LG; second, between $27\%$ and $56\%$ of the pairs have a halo with a satellite distribution as aspherical as in M31; and third, at most $3\%$ of the pairs have a satellite distribution as planar as in the MW. These quantitative results place the LG at the level of a $3\sigma$ outlier in the LCDM paradigm.
We suggest that understanding the reasons for this atypicality will require quantifying the asphericity probability distribution as a function of halo mass and large scale environment. The approach presented here can facilitate that kind of study and other comparisons between different numerical setups and choices to study satellites around LG pairs in simulations.
\end{abstract}

\begin{keywords}
Cosmology: Dark Matter --- Galaxies: Local Group --- Methods: numerical
\end{keywords}

\section{Introduction}

The spatial distribution of satellite galaxies around our Milky Way (MW) and the M31 galaxy is becoming a stringent test for structure formation theories in an explicit cosmological context.

This started with the suggested existence of a Magellanic Plane, a flattened structure of satellite galaxies and globular clusters around the MW, by \cite{1976RGOB..182..241K} and \cite{1976MNRAS.174..695L}. Forty years later \cite{2005A&A...431..517K} quantified that the highly planar distribution of the 11 classical MW satellites has less than a $0.5\%$ probability of arising by chance if the parent distribution is spherically symmetric, interpreting this as a challenge to the Lambda Cold Dark Matter (LCDM) paradigm.

The same year it was recognized, using numerical simulations, that this comparison was unfair given that Dark Matter halos in LCDM are expected to be triaxial and not spherical. Nevertheless, some estimates of the chances to find simulated satellites as planar as in the MW were as low as $2\%$ \citep{2005ApJ...629..219Z} while others expected planar satellite configurations in every DM halo \citep{2005MNRAS.363..146L}. Later, \cite{2007MNRAS.374.1125M} used simulations to confirm the low chances ($<0.5\%$) found in \cite{2005A&A...431..517K}.
This result was challenged by \cite{2009MNRAS.399..550L}, who continued to report high chances of finding a planar configuration; two more recent numerical experiments with high resolution simulations (\cite{2013MNRAS.429..725S} and \cite{2016MNRAS.457.1931S}) reported contradictory results (low and high chances of finding the observational result, respectively), but precise figures were not quoted in these three reports. Meanwhile, \cite{2013MNRAS.429.1502W} and \cite{2014ApJ...789L..24P} published the results of new tests using high resolution simulations and cosmological volumes, arriving at chances of $6\%$ and $0.77\%$, respectively, to have a satellite distribution as triaxial as the observations.

In the case of M31, observational studies have found a planar satellite distribution in a special subset of $15$ satellites out of the total population of $27$ satellites \citep{2013ApJ...766..120C, 2013Natur.493...62I}, although considering the satellites ranked by luminosity does not show any special feature. All published studies agree on the point that the spatial distributions of the 15 to 27 brightest satellites are consistent with a spherically symmetric distribution and easy to reproduce in LCDM simulations \citep{2006AJ....131.1405K,2007MNRAS.374.1125M, 2013ApJ...766..120C}.
% Comment: I am not sure the Conn paper is correctly cited here. He says: "It is clear that while the
% satellites of M31 when taken as a whole are no more planar than one can expect from a random
% distribution, a subset consisting of roughly half the sample is remarkably planar". And here the
% phrase "15 to 27 brightest satellites" may sound contradictory with what Conn says.

Table \ref{table:M31} and Table \ref{table:MW} summarize some more details on the results we have just mentioned\footnote{Those tables also include the results from this paper using the methodology we describe in the upcoming sections.}.
\begin{table*} \centering \begin{tabular}{|p{4.0cm}|p{4.5cm}| p{5.5cm}| c|}\hline Reference & Target Measurement & Parent Simulation & Probability ($\%$)\\\hline \text{\cite{2006AJ....131.1405K}} & RMS width in 15 brightest satellites & Monte Carlo satellite distributions with a power law radial distribution & $87-99$\\ \text{\cite{2007MNRAS.374.1125M}} & $c/a$ ratio and RMS width in 16 brightest satellites & Monte Carlo from a spherical power law radial distribution & $17$\\ \text{\cite{2013ApJ...766..120C}}& RMS width in the 27 brightest satellites & Monte Carlo randomized satellite distribution & High\\ This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 24 halos from selected pairs in a cosmological N-body Dark Matter only simulation ($\sim 10^{6}$\Msun particle mass resolution)& 56 \\ This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 20 halos from selected pairs in a cosmological N-body hydro simulation ($\sim 10^{6}$\Msun particle mass resolution)& 27 \\ This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 12 high resolution DM only N-body simulations ($\sim 10^{5}$\Msun particle mass resolution) of halo pairs & 37 \\ \hline \end{tabular} \caption{Probability to find the triaxiality and/or root mean squared (RMS) height of M31 satellites in LCDM simulations. \label{table:M31}} \end{table*} \begin{table*} \centering \begin{tabular}{|p{4.0cm}|p{4.5cm}| p{5.5cm}| c|}\hline Reference & Target Measurement & Parent Simulation & Probability ($\%$)\\\hline \text{\cite{2005A&A...431..517K}} & RMS width in 11 classical satellites & Monte Carlo from a spherical power law radial distribution. & $<0.5$ \\ \text{\cite{2005MNRAS.363..146L}} & $c/a$ ratio of 11 classical satellites & 6 high resolution DM only N-body simulations ($\sim10^5$ \Msun\ particle mass resolution). 
& High\\
\text{\cite{2005ApJ...629..219Z}} & $c/a$ ratio and disk height in 11 classical satellites & 3 high resolution DM only N-body simulations ($\sim10^6$ \Msun\ particle mass resolution) & 2 \\
\text{\cite{2007MNRAS.374.1125M}} & $c/a$ ratio and RMS width in 11-13 brightest satellites & Monte Carlo from a halo triaxiality distribution from LCDM simulations & $<0.5$\\
\text{\cite{2009MNRAS.399..550L}}& $c/a$ ratio in 11 classical satellites & 436 halos from a cosmological N-body simulation ($1.3\times 10^{8}$\Msun\ particle mass) & High \\
\text{\cite{2013MNRAS.429..725S}}& $c/a$ ratio in 12 brightest satellites & 6 high resolution DM only N-body simulations ($\sim 10^3$\Msun particle mass resolution) of individual halos & Low \\
\text{\cite{2013MNRAS.429.1502W}}& $c/a$ ratio or RMS width in 11 brightest satellites & 1686 halos from a cosmological DM only N-body simulation ($\sim 10^6$\Msun particle mass resolution) & 6 or 13 \\
\text{\cite{2014ApJ...789L..24P}}& $c/a$ and $b/a$ ratio in 11 classical satellites & 48 high resolution DM only N-body simulations ($\sim 10^{5}$\Msun particle mass resolution) of both halo pairs and isolated halos & 0.77\\
\text{\cite{2016MNRAS.457.1931S}}& $c/a$ ratio in 11 classical satellites & 12 high resolution Hydro simulations of halo pairs ($\sim 10^{4}$\Msun particle mass resolution) & High\\
This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 24 halos from selected pairs in a cosmological N-body Dark Matter only simulation ($\sim 10^{6}$\Msun particle mass resolution)& 3\\
This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 20 halos from selected pairs in a cosmological N-body hydro simulation ($\sim 10^{6}$\Msun particle mass resolution)& 0.6 \\
This Work & $c/a$ ratio, $b/a$ ratio and RMS width in 11-15 brightest satellites & 12 high resolution DM only N-body simulations ($\sim 10^{5}$\Msun particle mass resolution) of halo pairs & 0.2 \\
\hline
\end{tabular}
\caption{Same as Table \ref{table:M31} for the MW satellites. \label{table:MW}}
\end{table*}

Some of the difficulty in trying to reconcile and understand the seemingly conflicting or inconclusive results on the MW has its origin in the frequentist fashion generally used to compute probabilities. Usually, this process starts by building a high level parent sample in the simulation and then counting how many of its elements meet some criteria. This has two drawbacks. The first is that the probability estimate is made against whatever turns out to be typical in each simulation; a fair comparison across simulations would require first characterizing all the simulations at the level of the parent samples, something that is difficult to do in practice. The second drawback is that systems that fully resemble the LG (i.e. in stellar mass content, morphology or kinematics) have a low cosmological number density; this means that for the current cosmological volumes in simulations the high level parent sample has a small size, making it hard to derive robust probabilities by counting.

In this paper we present and demonstrate a method to overcome these two limitations. We control the first effect by setting as a direct point of reference a spherical satellite distribution and not the simulations themselves. The spherical satellite distribution is built from the data itself (observational or simulated) by randomizing the angular position of each satellite around the central galaxy and keeping its radial distance fixed \citep{2017AN....338..854P}. We characterize the satellites in terms of the scalars describing their deviation from the spherical distribution. We then build an explicit analytic probability distribution for the asphericity; this solves the problem of having a common reference point to compare simulations.
We use a simulation to estimate the parameters in this distribution and later use it as a parent sample to generate any desired number of samples that are by construction statistically compatible with the simulation, thus overcoming the problem of having a small number of systems in the parent simulations.

To summarize: we use asphericity to characterize simulations and observations on an equal footing. Then, we build an explicit probability distribution for the asphericity and use simulations to estimate its free parameters. Finally, we use these distributions to generate large numbers of samples and directly estimate the number of systems meeting a desired set of criteria.

The rest of the paper describes our implementation and results in detail. It is structured as follows. In Section \ref{sec:DataSamples} we list the sources of the observational and simulated data to be used throughout the paper. In Section \ref{sec:SpatialMeasurements} we describe the methods we use to build halo pairs and characterize their satellite distributions. In Section \ref{sec:results} we present our results to finally conclude in Section \ref{sec:conclusions}.

\section{Data samples}\label{sec:DataSamples}

\subsection{Observational Data}
\label{sec:obs}

The base for our analysis is the catalog compiled by \cite{2014yCat..74351928P}, which reports information on all galaxies within 3 Mpc around the Sun to that date. A detailed description of the compiled catalog can be found in \cite{2013MNRAS.435.1928P}; here we summarize the features relevant for the current study. The information in the catalogue is based on the catalogue compiled by \cite{2012AJ....144....4M}. The distance estimates are based on resolved stellar populations. We use three dimensional positions in a cartesian coordinate system as computed by \cite{2013MNRAS.435.1928P}.
In this coordinate system the $z$-axis points towards the Galactic north pole, the $x$-axis points in the direction from the Sun to the Galactic center, and the $y$-axis points in the direction of the Galactic rotation.

For both M31 and the MW we only use the 11 to 15 brightest satellites (using $M_V$ magnitudes) within a distance of $300$ kpc from their central galaxy. The satellites included for the MW analysis are: LMC, SMC, Canis Major, Sagittarius dSph, Fornax, Leo I, Sculptor, Leo II, Sextans I, Carina, Ursa Minor, Draco, Canes Venatici (I), Hercules and Bootes II. The satellites included for the M31 analysis are: Triangulum, NGC205, M32, IC10, NGC185, NGC147, Andromeda VII, Andromeda II, Andromeda XXXII, Andromeda XXXI, Andromeda I, Andromeda VI, Andromeda XXIII, LGS 3, and Andromeda III.

\subsection{Data from the Illustris project}
\label{sec:illustris}

We use publicly available data from the Illustris Project \citep{2014MNRAS.444.1518V}. This suite of cosmological simulations, performed using the quasi-Lagrangian code AREPO \citep{2010MNRAS.401..791S}, followed the coupled evolution of dark matter and gas and includes parametrizations to account for the effects of gas cooling, photoionization, star formation, stellar feedback, and black hole and supermassive black hole feedback. The simulation volume is a cubic box with a $75$ \Mpch\ side. The cosmological parameters correspond to a $\Lambda$CDM cosmology consistent with WMAP-9 measurements \citep{2013ApJS..208...19H}. We extract halo and galaxy information from the Illustris-1 and Illustris-1-Dark simulations; the former includes hydrodynamics and star formation prescriptions while the latter only includes dark matter physics. These simulations have the highest resolution in the current release of the Illustris Project. Illustris-1 has $1820^3$ dark matter particles and $1820^3$ initial gas volume elements, while Illustris-1-Dark has $1820^3$ dark matter particles.
This corresponds to a dark matter particle mass of $6.3\times 10^6$\Msun\ and a minimum mass for the baryonic volume element of $8.0\times 10^7$\Msun\ for Illustris-1, and a dark matter particle mass of $7.6\times 10^6$\Msun\ for Illustris-1-Dark. In both simulations the dark matter gravitational softening is $1.4$ kpc.

We build a sample of pairs that resemble the conditions in the LG. To construct this sample we select from Illustris-1 all the galaxies with a stellar mass in the range $1\times10^{10}\Msun <M_{\star}<1.5 \times 10^{11} \Msun$. We then select the pairs with the following conditions.
\begin{itemize}
\item For each galaxy $A$ we find its closest galaxy $B$; if galaxy $A$ is also the closest galaxy to $B$, the two are considered a pair.
\item With $d_{AB}$ the distance between the two galaxies and $M_{\star,min}$ the lowest stellar mass of the two galaxies, we discard pairs that have any other galaxy $C$ with stellar mass $M_{\star}>M_{\star, min}$ closer than $3\times d_{AB}$ to either of the pair's members.
\item The distance $d_{AB}$ is greater than $700$ kpc.
\item The relative radial velocity between the two galaxies, including the Hubble flow, is $-120\ \kms <v_{AB,r}<0\ \kms$.
\end{itemize}
We find 27 pairs meeting these conditions. We then keep the pairs where both halos have at least 15 detected subhalos, thus discarding pairs containing the lowest mass halos. We end up with a total of 20 pairs in Illustris-1. In Illustris-1-Dark we use the center of mass positions of the 27 pairs in Illustris-1 to find the matching halo pairs. After discarding the pairs with fewer than 15 detected subhalos in one of the halos, we end up with a total of 24 pairs in Illustris-1-Dark. This corresponds to a pair number density of $\sim 2 \times10^{-5}$ pairs Mpc$^{-3}$. Appendix A shows the physical properties (stellar masses, maximum circular velocities, radial velocities and separations) of those pairs.
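The pair-selection criteria above can be sketched as a function over a candidate catalog. This is an illustrative simplification, not the pipeline used for the paper: the Hubble-flow term in the radial velocity and the stellar-mass condition in the isolation cut are omitted, and all names are hypothetical:

```python
import numpy as np

def select_pairs(pos, vel, d_min=0.7, v_min=-120.0, v_max=0.0, iso_factor=3.0):
    """LG-like pair selection applied to a pre-selected stellar-mass range.
    pos: (N,3) positions in Mpc; vel: (N,3) velocities in km/s.
    Isolation is checked against all candidates here, not only more
    massive ones, and the Hubble flow is left out for brevity."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)
    pairs = []
    for a in range(len(pos)):
        b = nearest[a]
        if nearest[b] != a or b < a:
            continue                      # mutual nearest neighbours, counted once
        d_ab = d[a, b]
        if d_ab <= d_min:
            continue                      # separation cut (> 700 kpc)
        others = [c for c in range(len(pos)) if c != a and c != b]
        if any(min(d[a, c], d[b, c]) < iso_factor * d_ab for c in others):
            continue                      # isolation from comparable neighbours
        r_hat = (pos[b] - pos[a]) / d_ab
        v_r = np.dot(vel[b] - vel[a], r_hat)
        if v_min < v_r < v_max:
            pairs.append((a, b))          # approaching pair
    return pairs
```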
Although Illustris-1 has stellar particles, we do not use their properties to select the satellite population. The smallest galaxies are barely resolved in stellar mass at magnitudes of $M_V=-9$, close to the limit of the 11 ``classical'' MW satellites; we use the dark matter information instead. In both Illustris-1 and Illustris-1-Dark we choose the satellite samples by ranking the subhalos in decreasing order of their current maximum circular velocity and selecting the first $N_p$ halos in the list. Under these conditions the smallest subhalos are sampled with at least $40$ particles. The results presented here correspond to $11\leq N_p\leq 15$.

\subsection{Data from the ELVIS project}
\label{sim:ELVIS}

We use data from the public release of the Exploring the Local Universe In Simulations (ELVIS) project. For a detailed description of that project and its data we refer the reader to \cite{2014MNRAS.438.2578G}. Here we summarize the elements relevant to our discussion. ELVIS data comes from resimulations of dark matter halo pairs selected in dark matter only cosmological simulations. The parent cosmological boxes have a cosmology consistent with the Wilkinson Microwave Anisotropy Probe 7 results. The ELVIS project used the results from $50$ simulation boxes of side length $70.4$ Mpc to select pairs with kinematic characteristics similar to the LG. The selection criteria included the following:
\begin{itemize}
\item The virial mass of each host must be in the range $1\times 10^{12} M_{\odot}< M_{vir}<3\times 10^{12}M_{\odot}$.
\item The total pair mass must be in the range $2\times 10^{12} M_{\odot}< M_{vir}<5\times 10^{12}M_{\odot}$.
\item The center of mass separation is in the range $0.6\leq d\leq1$ Mpc.
\item The relative radial velocity is negative.
\item No halos more massive than the least massive halo within $2.8$ Mpc, and no halos with $M_{vir}>7\times 10^{13} M_{\odot}$ within $7$ Mpc of the pairs' center of mass.
\end{itemize}
This corresponds to a pair number density of $\sim 8 \times10^{-6}$ pairs Mpc$^{-3}$, a factor of $\sim 2.5$ lower than the pair number density we find in the Illustris-1 data. There were a total of 146 pairs that met those criteria, but only $12$ were chosen for resimulation. Additionally, the pairs selected for resimulation have a relative tangential velocity of less than $75$ \kms. The dark matter particle mass in these resimulations is $1.9\times 10^5$ \Msun, roughly thirty times better than Illustris-1. In this paper we only use the results from these $12$ resimulated pairs. Appendix A compares the physical properties (stellar masses, maximum circular velocities, radial velocities and separations) of those pairs against the results from the Illustris-1 simulations.

\section{Building, Characterizing and Comparing Satellite Spatial Distributions}
\label{sec:SpatialMeasurements}

\subsection{Building Satellite Samples}

We compare the joint satellite distributions in the MW and M31 at fixed satellite number, $N_p$. This means that the magnitude cut corresponding to the faintest satellite included in the sample is different in each case. We make this choice to rule out the influence of satellite numbers on the statistics. We compute the satellite statistics for 11 up to 15 satellites. The lower limit corresponds to the number of classical MW satellites, while the upper limit corresponds to the minimum number of M31 satellites usually included in M31 studies. In simulations we rank the subhalos by their maximum circular velocity; in observations we rank the satellites by their $M_V$ magnitude.
\begin{table*}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|p{2.5cm}|c|c|c|c|c|c|}
\hline
\multirow{2}{4.0cm}{} & \multicolumn{2}{c|}{\textbf{Observations}} & \multicolumn{2}{c|}{\textbf{Randomized Obs.}} & \multicolumn{2}{c|}{\textbf{Normalized Units}}\\
\cline{2-7}
& \textbf{M31} & \textbf{MW} & \textbf{M31} & \textbf{MW} & \textbf{M31} & \textbf{MW} \\
\hline
Plane width (kpc) & $59\pm 3$ & $21\pm 2$ & $65\pm 12$ & $45\pm 8$ & $-0.48\pm 0.24$ & $-2.48\pm 0.26$\\\hline
$c/a$ ratio & $0.45\pm 0.04$ & $0.25\pm 0.05$ & $0.55\pm0.10$ & $0.53\pm 0.10$ & $-1.03\pm 0.37$ & $-2.18\pm 0.42$\\ \hline
$b/a$ ratio & $0.82\pm 0.06$ & $0.80\pm 0.04$ & $0.82\pm0.07$ & $0.81\pm 0.08$ & $-0.02\pm 0.82$ & $-0.13\pm 0.47$\\ \hline
\end{tabular}
\caption{Results from observations. Mean values and standard deviations of the quantities describing the satellite distributions: plane width, $c/a$ ratio and $b/a$ ratio. The first pair of columns refers to the observational data, the second pair to the spherically randomized version of the observational data, and the third pair to the observational data recentered and normalized by the randomized values.
\label{table:observations}}
\end{table*}

\begin{table*}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|p{2.5cm}|c|c|c|c|c|c|}
\hline
\multirow{2}{4.0cm}{} & \multicolumn{2}{c|}{\textbf{Illustris-1-Dark}} & \multicolumn{2}{c|}{\textbf{Illustris-1}} & \multicolumn{2}{c|}{\textbf{ELVIS}}\\
\cline{2-7}
& \textbf{M31} & \textbf{MW} & \textbf{M31} & \textbf{MW}& \textbf{M31} & \textbf{MW}\\
\hline
Plane width (kpc) & $62\pm 3$ & $61\pm 3$ & $70\pm 4$ & $67\pm 2$ & $70\pm 2$& $68\pm 4$ \\\hline
$c/a$ ratio & $0.50\pm0.01$ & $0.50\pm 0.02$ & $0.52\pm 0.01$ & $0.53\pm 0.01$ & $0.54\pm 0.01$& $0.49\pm 0.02$ \\ \hline
$b/a$ ratio & $0.79\pm0.01$ & $0.81\pm 0.02$ & $0.80\pm 0.01$ & $0.80\pm 0.02$ & $0.80\pm0.01$& $0.81\pm 0.01$\\ \hline
\end{tabular}
\caption{Results from simulations. Mean values and standard deviations of the quantities describing the satellite distributions: plane width, $c/a$ ratio and $b/a$ ratio. The first pair of columns summarizes the results from the 24 pairs in Illustris-1-Dark, the second from the 20 pairs in Illustris-1 and the third from the 12 pairs in the ELVIS project.\label{table:simulations}}
\end{table*}

\subsection{Describing Samples with the Inertia Tensor}

We describe the satellites with the inertia tensor defined by the satellites' positions,
\begin{equation}
{\bf{\bar{I}}} = \sum_{k=1}^{N_p}[(\bf{r}_k - \bf{r}_0)^2\cdot \bf{1} - (\bf{r}_k-\bf{r}_0)\cdot (\bf{r}_k - \bf{r}_0)^{T}],
\label{eq:intensor}
\end{equation}
%
where $k$ indexes the set of satellites of interest, $\bf{r}_k$ are the satellites' positions, $\bf{r}_{0}$ is the location of the central galaxy, $\bf{1}$ is the unit matrix, and ${\bf r}^T$ is the transpose of the vector $\bf{r}$.
We use ${\bf r}_0$, the position of the central galaxy, rather than the satellites' geometrical center, to allow for a fair comparison once the angular positions of the satellites are randomized around this point. From this tensor we compute the eigenvalues, $\lambda_1>\lambda_2>\lambda_3$, and the corresponding eigenvectors, $\hat{I}_1$, $\hat{I}_2$, $\hat{I}_3$. We define the sizes of the three ellipsoidal axes as $a=\lambda_1$, $b=\lambda_2$ and $c=\lambda_3$, and $\hat{n}\equiv \hat{I}_1$ as the vector perpendicular to the planar satellite distribution. We also define the Root Mean Squared (RMS) plane width $w$ as the standard deviation of the satellite distances to the plane defined by the vector $\hat{n}$. To summarize, we characterize the satellite distribution with the following quantities obtained from the inertia tensor: \begin{itemize} \item RMS plane width, $w$. \item $c/a$ axis ratio. \item $b/a$ axis ratio. \end{itemize} \subsection{Characterizing Asphericity} We compare each satellite distribution against its own spherically randomized distribution. We keep the radial position of every satellite with respect to the central galaxy fixed and randomize its angular position. We repeat this procedure 1000 times for each satellite distribution and measure the quantities mentioned in the previous section: $w$, $c/a$ and $b/a$. For each quantity we compute the average and standard deviation over the 1000 random samples. This allows us to build a normalized version of all quantities of interest by subtracting the mean and dividing by the standard deviation of the randomized samples. These normalized quantities are used to build the explicit probability distributions for the scalars describing asphericity. 
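The two steps above (the inertia-tensor shape measurement and the spherical randomization) can be sketched as follows. This is a minimal illustration assuming numpy; the function and variable names are ours, not the authors' actual pipeline:

```python
import numpy as np

def shape_stats(pos, center):
    """RMS plane width and axis ratios from the inertia tensor defined above.

    pos    : (N, 3) satellite positions
    center : (3,) position of the central galaxy (used instead of the
             geometrical center, as described in the text)
    """
    d = pos - center
    # I = sum_k [ |d_k|^2 * 1  -  d_k d_k^T ]
    inertia = np.sum(np.sum(d * d, axis=1)) * np.eye(3) - d.T @ d
    vals, vecs = np.linalg.eigh(inertia)      # eigenvalues in ascending order
    c, b, a = vals                            # a = lambda_1 > b = lambda_2 > c = lambda_3
    n = vecs[:, -1]                           # eigenvector of lambda_1: the plane normal
    width = np.std(d @ n)                     # RMS distance to the plane normal to n
    return width, c / a, b / a

def randomize_angles(pos, center, rng):
    """Spherical randomization: keep each satellite's radius, draw an isotropic direction."""
    r = np.linalg.norm(pos - center, axis=1)
    u = rng.normal(size=pos.shape)            # isotropic directions via normalized Gaussians
    u /= np.linalg.norm(u, axis=1)[:, None]
    return center + r[:, None] * u
```

Normalizing an observed statistic then amounts to subtracting the mean and dividing by the standard deviation of the same statistic over, e.g., 1000 calls to `randomize_angles`.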
\subsection{Building an Explicit Asphericity Probability Distribution} After building the normalized variables from the simulated data we perform a Kolmogorov--Smirnov test with the null hypothesis that the data follow a normal distribution, with mean and standard deviation estimated from the data. Although the physical quantities of interest are bounded, unlike random variables drawn from a normal distribution, we find that the distributions for the normalized $w$, $c/a$ and $b/a$ are indeed consistent with Gaussian distributions. Based on this result we build a multivariate normal distribution for the joint distribution of the normalized $w$, $c/a$ and $b/a$: \begin{equation} p(X; \mu, \Sigma) = \frac{1}{(2\pi)^{3/2}|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(X-\mu)^{T}\Sigma^{-1}(X-\mu)\right), \label{eq:multivariate} \end{equation} % where $X=[w, c/a, b/a]^{T}$ is the vector of normalized quantities, $\mu$ is the mean vector and $\Sigma$ is the covariance matrix. We compute the preferred covariance matrix and mean values with a jackknife technique. That is, out of the $n$ pairs in each simulation, we perform $n$ different covariance and mean value measurements, each using only $n-1$ pairs. The reported covariance and mean values correspond to the average of all measurements; the corresponding standard deviation also lets us estimate the uncertainty on every reported coefficient. This compact description allows us to generate samples of size $N$ that are consistent by construction with their parent simulation. Finally, we use the generated samples to estimate how common the deviations from sphericity that we measure in the observational data are. We use a double-tailed test in this comparison, meaning that we always measure the fraction of points with absolute values larger than the absolute observed value used as a threshold. 
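The jackknife estimate described above can be sketched in a few lines (numpy assumed; the function name is ours): each of the $n$ leave-one-out subsamples yields one mean vector and one covariance matrix, and the reported values are their averages, with the scatter across subsamples serving as the uncertainty.

```python
import numpy as np

def jackknife_gaussian(X):
    """Jackknife mean vector and covariance matrix for the multivariate normal fit.

    X : (n, 3) array of normalized [w, c/a, b/a], one row per simulated pair.
    Returns (mu, mu_err, cov, cov_err): averages over the n leave-one-out
    measurements and their standard deviations (the quoted uncertainties).
    """
    n = len(X)
    mus = np.array([np.delete(X, i, axis=0).mean(axis=0) for i in range(n)])
    covs = np.array([np.cov(np.delete(X, i, axis=0), rowvar=False)
                     for i in range(n)])
    return mus.mean(axis=0), mus.std(axis=0), covs.mean(axis=0), covs.std(axis=0)
```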
% satellite positions reference % http://adsabs.harvard.edu/abs/2013MNRAS.435.1928P \begin{figure*} \centering \includegraphics[width=0.30\textwidth]{scatter_random_ranked_width.pdf} \includegraphics[width=0.30\textwidth]{scatter_ranked_illudm_width.pdf} \includegraphics[width=0.30\textwidth]{scatter_norm_ranked_illudm_width.pdf} \caption{Plane width characterization in observations and simulations (Illustris-1-Dark in this case). In all panels the horizontal/vertical axis corresponds to M31/MW, i.e. the most/least massive halo in the pair. Left: plane width in physical units, comparing the results from observations (stars) against the result of spherically randomizing the satellite positions (circles). Middle: average from the observations (star) and the average from each pair in the simulation (circles). Right: same as the middle panel, except that each point has been normalized (median subtracted and divided by the standard deviation) relative to the results of its randomization. The main message of these panels is that the MW has a significantly thinner plane, both compared to the result of its own satellite spherical randomization (left panel) and to the expectation from simulations (middle and right panels). This low value is $2\sigma$ away from what is expected for a spherical distribution. For M31 the satellite distribution agrees with the expectations both from a spherical distribution and from the simulations. \label{fig:scatter_width}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.30\textwidth]{scatter_random_ranked_ca_ratio.pdf} \includegraphics[width=0.30\textwidth]{scatter_ranked_illudm_ca_ratio.pdf} \includegraphics[width=0.30\textwidth]{scatter_norm_ranked_illudm_ca_ratio.pdf} \caption{Same layout as in Figure \ref{fig:scatter_width}, this time for the $c/a$ axis ratio. 
The same message holds in this case: the MW has a significantly lower $c/a$ value than expected from the spherical distribution and from the simulations. This low value is also $2\sigma$ away from the expectations for a spherical distribution. On the other hand, M31 is consistent both with a spherical distribution and with the results from simulations. \label{fig:scatter_ca_ratio}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.30\textwidth]{scatter_random_ranked_ba_ratio.pdf} \includegraphics[width=0.30\textwidth]{scatter_ranked_illudm_ba_ratio.pdf} \includegraphics[width=0.30\textwidth]{scatter_norm_ranked_illudm_ba_ratio.pdf} \caption{Same layout as in Figure \ref{fig:scatter_width}, this time for the $b/a$ axis ratio. In this case both the MW and M31 are consistent with the results of a spherical distribution and with the simulations. \label{fig:scatter_ba_ratio}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.45\textwidth]{gaussian_model_illustrisdm_M31.pdf} \includegraphics[width=0.45\textwidth]{gaussian_model_illustrisdm_MW.pdf} \caption{Results from the multivariate Gaussian model fitted to the normalized values of the plane width $w$, $c/a$ ratio and $b/a$ ratio from the Illustris-1-Dark results. The left/right panels correspond to M31/MW, respectively. The cross indicates the observed values. The contour levels in the 2D histograms correspond to the $1\sigma$, $2\sigma$ and $3\sigma$ contours in two dimensions. The dashed vertical lines in the histograms along the diagonal correspond to the $1\sigma$ boundaries in one dimension. The results for the Gaussian model are built from $10^6$ point realizations in the three-dimensional space spanned by the variables of interest for each halo. This plot clearly shows that the M31 results are well within the expectations from simulations, while the MW has unusually low values for the plane width and the $c/a$ axis ratio. 
\label{fig:correlations_illustrisdm}} \end{figure*} \section{Results} \label{sec:results} Table \ref{table:observations} and Table \ref{table:simulations} summarize the mean values and uncertainties for the plane width $w$, $c/a$ ratio and $b/a$ ratio in the observations and simulations, respectively. The uncertainty in the observations is computed from the results with different numbers of satellites; in the simulations it corresponds to the standard deviation over different halos. The observed widths $w$ are always smaller than their randomized versions. For M31 the ratio between the observed and randomized width is $0.92$, while for the MW this factor goes down to less than half, $0.48$. The observed $c/a$ ratio follows the same extreme trend for the MW, compared to an M31 distribution closer to spherical. The $b/a$ ratio is statistically the same between observations and the spherical randomization. This confirms the extreme planar distribution of the MW and the more spherical distribution of M31. In the following subsections we describe in detail the results on the distributions of $w$, $c/a$ and $b/a$. The plots in the main body of the paper correspond to the Illustris-1-Dark simulation; the results from the other simulations are included in Appendix \ref{appendix:plots}. \subsection{Plane Width} Figure \ref{fig:scatter_width} summarizes the results of the width measurements. The left panel compares the results for the MW and M31 observations (stars) against their spherically randomized satellites (circles). The most interesting outcome is that the MW plane width is smaller than $\approx 98\%$ of the planes computed from the randomized distribution, while the M31 plane width is only slightly smaller than the average of the distribution. The middle panel in Figure \ref{fig:scatter_width} compares the observational result (star) against the measurements for all pairs from the Illustris-1-Dark simulation (circles). In this case we obtain a similar result as before. 
The observed MW width is smaller than all the results in the simulation; there is not a single halo with a similar value. On the other hand, the results for M31 are entirely consistent with observations: most of the halos in the simulation show a width value similar to M31's. The right panel in Figure \ref{fig:scatter_width} shows the results for the normalized width. This panel tells the same story as the middle panel: the M31 values are typical while the MW is an outlier. The added value of the data in this panel is that it is the normalized data that are consistent with normal distributions. These are the data used to build the mean vector and covariance matrix described in Equation \ref{eq:multivariate}. \begin{figure*} \centering \includegraphics[width=0.47\textwidth]{expected_numbers_n_M31.pdf} \includegraphics[width=0.47\textwidth]{expected_numbers_n_MW.pdf} \caption{Probability distribution for the expected number of M31 and MW halos showing the same degree of atypicality as the Local Group if drawn from a sample of $10^4$ isolated pairs. The distributions correspond to results derived from Illustris-1-Dark, Illustris-1, and ELVIS data. In the most optimistic case (Illustris-1-Dark), $1\%$ of the pairs have two halos with properties similar to the LG. In the case of Illustris-1 this percentage drops to $0.1\%$, and for ELVIS to $0.01\%$. \label{fig:expected_number}} \end{figure*} \subsection{$c/a$ axis ratio} Figure \ref{fig:scatter_ca_ratio} shows the results for the minor to major axis ratio. The layout is the same as in Figure \ref{fig:scatter_width}. The results for the $c/a$ ratio follow the same trends as for the width $w$. The left panel in Figure \ref{fig:scatter_ca_ratio} shows that the MW $c/a$ ratio is significantly lower than the values measured for spherical distributions, smaller than $\approx 98\%$ of the randomized distributions. 
On the other hand, the ratio for M31 is lower than the mean of the spherical values but still well within its variance. The middle panel in the same Figure shows the LG compared against the results in the simulations. In this case we find a similar trend as before: the MW is atypical and M31 is within the variance of the simulation data. This time, however, there are two MW-like halos out of the total of 24 that show a $c/a$ as small as that of the MW. The right panel shows the normalized results. The MW shows a low $c/a$ ratio, between two and three standard deviations away from the mean value of the spherical distribution; this contrasts with the results for M31, which are close to one standard deviation away. \subsection{$b/a$ axis ratio} Figure \ref{fig:scatter_ba_ratio} shows the results for the intermediate to major axis ratio with the same layout as Figure \ref{fig:scatter_ca_ratio}. In all comparisons (against the randomized distribution and against the simulations) the results for both the MW and M31 are typical. \subsection{Fit to a Multivariate Gaussian Distribution} Figure \ref{fig:correlations_illustrisdm} illustrates the results of computing the covariance matrix and mean vector in Eq.~\ref{eq:multivariate} from the normalized quantities obtained from the Illustris-1-Dark simulation. The distributions in this Figure are computed from $10^6$ points generated with the multivariate Gaussian. Similar plots for Illustris-1 and ELVIS are in Appendix \ref{appendix:plots}. The values of the covariance matrices and mean vectors for all the simulations are listed in Appendix \ref{appendix:covariance}. This figure nicely summarizes the results of the previous sections. The left-hand triangular plot shows that M31 falls in the middle of all 2D distributions and is always close to the peak and within the $1\sigma$ range. 
The right-hand plot clearly places the MW observations outside the $3\sigma$ range in the joint distributions that involve the width $w$. In both cases the strongest positive correlation is present between the width and the $c/a$ axis ratio. A weaker correlation is present between the width and the $b/a$ axis ratio. \subsection{Number of Expected LG Systems} We use the fits to the multivariate Gaussian distributions to compute the expected number of pairs with characteristics similar to those of the LG. To do this we generate $10^3$ samples, each containing $10^4$ pairs, where each pair member is drawn from the corresponding multivariate Gaussian distribution. We consider a sampled system to be similar to the M31/MW galaxy if the distance of each of its normalized characteristics ($w$, $c/a$, $b/a$) to the sample mean is equal to or larger than the distance of the observational value to the sample mean. That is, we perform a double-tailed test using the observational values as a threshold. Figure \ref{fig:expected_number} summarizes the results of this experiment. The left panel shows the probability density for the number of M31 systems in a parent sample of $10^4$ pairs; the right panel shows the results for the MW. For M31, between $27\%$ and $56\%$ of the pairs have a satellite distribution as aspherical as that observed in M31. This fraction drops dramatically for the MW, where only $0.02\%$ to $3\%$ of the systems are expected to have an aspherical distribution as extreme as the MW's. Considering the joint distribution of M31 and MW we find that at most $2\%$ of the pairs are expected to be similar to the LG. In a three-dimensional Gaussian distribution, the $1\sigma$, $2\sigma$ and $3\sigma$ intervals contain respectively $19\%$, $73\%$ and $97\%$ of the points in the distribution. With this result in mind, the LG has the same degree of atypicality as a $3\sigma$ outlier. 
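The counting experiment and the quoted Gaussian fractions can be checked with a short script. This is our own sketch, not the authors' code: numpy for the sampling, and the closed-form $\chi^2$ CDF for three degrees of freedom for the $1\sigma$--$3\sigma$ fractions.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def chi2_cdf_3dof(x):
    """P(chi^2_3 <= x): fraction of a 3D Gaussian within sqrt(x) sigma of the mean."""
    return erf(sqrt(x / 2.0)) - sqrt(2.0 * x / pi) * exp(-x / 2.0)

def count_atypical(mu, cov, obs, n_pairs=10_000, seed=0):
    """Double-tailed test: number of sampled systems whose normalized
    (w, c/a, b/a) all lie at least as far from the mean as the observed values."""
    rng = np.random.default_rng(seed)
    sample = rng.multivariate_normal(mu, cov, size=n_pairs)
    extreme = np.all(np.abs(sample - mu) >= np.abs(np.asarray(obs) - mu), axis=1)
    return int(extreme.sum())

# 1-, 2- and 3-sigma ellipsoids of a 3D Gaussian enclose about 0.199, 0.739 and 0.971:
fractions = [chi2_cdf_3dof(k ** 2) for k in (1, 2, 3)]
```

Applied to the jackknifed mean vector and covariance matrix of each simulation, `count_atypical` reproduces the expected-number experiment for one parent sample of $10^4$ pairs.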
Among the three simulations, the results inferred from the ELVIS data show the lowest fraction of M31 and MW systems, while Illustris-1-Dark gives the highest fraction of M31/MW systems. The results from Illustris-1 lie in between these two, but closer to ELVIS. The most probable reason for these trends is the different median mass of the MW/M31 halos in the pairs from these simulations. For instance, for the MW halo the median maximum circular velocity is $\sim 160$ \kms, $\sim 150$ \kms and $\sim 120$ \kms in the ELVIS, Illustris-1 and Illustris-1-Dark simulations, respectively. For the M31 halo this median velocity is $\sim 200$ \kms both for the ELVIS and Illustris-1 simulations, while for Illustris-1-Dark it is $\sim 160$ \kms. Another element that influences this trend is the simulated physics. This is evident in the comparison between Illustris-1 and Illustris-1-Dark. These two simulations share the same characteristics except for the inclusion of baryonic physics in Illustris-1. We find that extreme MW systems are easier to find in the DM-only simulation. The results listed in Appendix \ref{appendix:covariance} show that halos are closer to spherical once baryonic effects are included. For instance, the mean value of the normalized width moves closer to zero, from $-0.45\pm 0.04$ to $-0.17\pm0.04$, and the same is true for the $c/a$ ratio, which changes from $-0.61\pm0.04$ in the DM-only simulation to $-0.40\pm 0.04$ in Illustris-1. We postpone a detailed quantification of this effect to a future study. \section{Conclusions}\label{sec:conclusions} In this paper we developed and demonstrated a method to quantify the asphericity of the satellite distribution around the MW and M31. 
In the interest of keeping the method straightforward and robust, we focus on the asphericity estimates for a fixed number of the brightest satellites around each galaxy. The method uses as a reference the spherically randomized data of the system under study \citep{2017AN....338..854P}. To this end, we first measure the width and axis ratios for the satellite distributions of interest. Then, we measure the same quantities for the same set of points after the spherical randomization process. Finally, we renormalize the initial results with the mean value and standard deviation computed from the randomized data. We found that these normalized quantities are well described by a multivariate Gaussian distribution in spite of the original quantities being bounded. We estimated the mean and covariance of these distributions using the results for LG pairs coming from three different numerical simulations (Illustris-1, Illustris-1-Dark and ELVIS). Finally, we compared the observational results against the distributions derived from the simulations. We found that in the best case (Illustris-1-Dark) the degree of asphericity in the observed LG is only expected in $3\pm2$ pairs out of a sample of $10^4$ isolated pairs. This places the LG as a $3\sigma$ outlier. The weight of this atypical result is not distributed equally between the MW and M31. While M31 presents a fully typical asphericity within the expectations of LCDM, the MW shows aspherical deviations in plane width and minor-to-major axis ratio that are highly atypical in the framework of LCDM, confirming the original hint by \cite{2005A&A...431..517K} and more recent results by \cite{2015ApJ...815...19P}. We estimated that between $37\%$ and $58\%$ of the pairs show aspherical characteristics larger than those of M31, while this fraction drops to less than $1\%$ for the MW. 
These fractions are robust to changes in the numerical simulations, in the criteria used to define the pairs, and in the methods used to estimate the parameters of the multivariate normal distribution. The focus of our approach was building explicit probability distributions for the observables of interest, instead of trying to find simulated objects that fulfill different observational criteria. This approach is particularly useful in the case of atypical observables, as is the case here, since it allows quantifying the atypicality without needing to build explicit samples of objects that are already scarce and difficult to find in simulations. An extension of this framework to outliers in higher order deviations (i.e. coherent \emph{velocity} structures) should also be possible, provided that an explicit probability distribution for the scalars of interest can be built. The approach presented here is also useful to gauge the influence of the different physical elements included in the simulation. For instance, in our case the data hint towards rounder satellite distributions in simulations that include baryonic effects. The highly aspherical satellite distribution in the MW is another piece of information that points at an atypical configuration in LCDM. We also have the number of satellites as bright as the Magellanic Clouds, only expected in $5\%$ of galaxies \citep{2011ApJ...743..117B}, and the satellite velocities around the MW, with a radial/tangential anisotropy only expected in $3\%$ of systems in LCDM \citep{2017MNRAS.468L..41C}. One could also add the atypical kinematics of M31, with a very low tangential velocity that is expected in less than $1\%$ of the pairs with similar environmental characteristics \citep{ForeroRomero2013}. This atypicality should be seen as an opportunity to constrain in great detail the environment that allowed such a pattern to emerge. 
Although broad correlations between LG assembly, pair kinematics, halo shapes and satellite distributions are expected in LCDM \citep{2011MNRAS.417.1434F,2014MNRAS.443.1090F,2015ApJ...799...45F,2015MNRAS.452.1052L}, we argue that detailed studies of the satellite asphericity as a function of halo mass and cosmic web environment are still needed to understand what features in the initial conditions of our LG are responsible for the extreme features observed in its satellite distribution. \section*{Acknowledgements} We acknowledge financial support from: Universidad de Los Andes (Colombia); COLCIENCIAS (c\'odigo 120471250459, contrato FP44842-287-2016); the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sk\l{}odowska-Curie grant agreement No 73437 (LACEGAL). \bibliographystyle{mnras} \bibliography{Dwarfs} \appendix \section{Physical Characteristics of the Isolated Pairs Samples} \label{appendix:physical} \begin{figure} \centering \includegraphics[width=0.30\textwidth]{int_distro_LG_v_rad.pdf} \includegraphics[width=0.30\textwidth]{int_distro_LG_d.pdf} \includegraphics[width=0.30\textwidth]{int_distro_M31_vmax.pdf} \includegraphics[width=0.30\textwidth]{int_distro_MW_vmax.pdf} \caption{Physical characteristics of the LG pairs selected in the simulations. All plots show the integrated distributions. The physical properties are the radial comoving velocity between the MW and M31, the radial separation between the MW and M31, and the maximum circular velocity of the M31 and MW dark matter halos. 
\label{fig:physical}} \end{figure} \section{Covariance Matrices and Mean value vectors} \label{appendix:covariance} \subsection{Illustris-1} \subsubsection{M31} \[ \Sigma= \begin{bmatrix} 0.83\pm 0.03 & 0.78\pm 0.04 & 0.41\pm 0.04\\ 0.78\pm 0.04 & 1.16\pm 0.05 & -0.15\pm 0.05\\ 0.41\pm 0.04 & -0.15\pm 0.05 & 0.96\pm 0.04\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.18\pm 0.04 & -0.41\pm 0.05 & -0.20\pm 0.05\\ \end{bmatrix} \] \subsubsection{MW} \[ \Sigma= \begin{bmatrix} 0.78\pm 0.06 & 0.63 \pm 0.07 & 0.40 \pm 0.03\\ 0.63\pm 0.07 & 0.69 \pm 0.08 & 0.06 \pm 0.02\\ 0.40\pm 0.03 & 0.06 \pm 0.02 & 0.61\pm 0.04\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.17\pm 0.04 & -0.40\pm 0.04 & -0.23\pm 0.04\\ \end{bmatrix} \] \subsection{Illustris-1-Dark} \subsubsection{M31} \[ \Sigma= \begin{bmatrix} 1.50\pm 0.08 & 1.27\pm 0.08 & 0.64\pm 0.04\\ 1.27\pm 0.08 & 1.31\pm 0.08& 0.28\pm 0.04\\ 0.64\pm 0.04 & 0.28\pm 0.04 & 0.70\pm 0.04\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.37\pm 0.05& -0.55\pm 0.05 & -0.13\pm 0.03\\ \end{bmatrix} \] \subsubsection{MW} \[ \Sigma= \begin{bmatrix} 1.03\pm 0.05 & 0.94\pm 0.05 & 0.36\pm 0.03\\ 0.94\pm 0.05 & 1.24\pm 0.07 & -0.18\pm 0.04\\ 0.36\pm 0.03 & -0.18\pm 0.04 & 0.86\pm 0.03\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.45\pm 0.04 & -0.61\pm 0.05 & -0.31\pm 0.04\\ \end{bmatrix} \] \subsection{ELVIS} \subsubsection{M31} \[ \Sigma= \begin{bmatrix} 1.60\pm 0.19 & 1.22\pm 0.14 & 0.70 \pm 0.09\\ 1.22\pm 0.14 & 1.08\pm 0.10 & 0.35 \pm 0.05\\ 0.70\pm 0.09 & 0.35\pm 0.05 & 0.63 \pm 0.10\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.18\pm 0.08 & -0.28\pm 0.07 &-0.22\pm 0.05\\ \end{bmatrix} \] \subsubsection{MW} \[ \Sigma= \begin{bmatrix} 0.45\pm 0.04 & 0.61\pm0.09 & -0.11\pm0.09 \\ 0.61\pm0.09 & 1.21\pm0.17 & -0.63\pm0.09\\ -0.11\pm0.05 & -0.63\pm0.09 & 0.64\pm0.06\\ \end{bmatrix} \] \[ \mu= \begin{bmatrix} -0.32\pm 0.04 & -0.74\pm 0.07 &-0.08\pm 0.05\\ \end{bmatrix} \] \section{Results from ELVIS and Illustris1} \label{appendix:plots} 
\begin{figure*} \centering \includegraphics[width=0.32\textwidth]{scatter_ranked_elvis_width.pdf} \includegraphics[width=0.32\textwidth]{scatter_ranked_elvis_ca_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_ranked_elvis_ba_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_elvis_width.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_elvis_ca_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_elvis_ba_ratio.pdf} \caption{ELVIS results for the quantities presented for the Illustris-1-Dark simulation in Figures \ref{fig:scatter_width}, \ref{fig:scatter_ca_ratio} and \ref{fig:scatter_ba_ratio}. The upper row corresponds to the raw values from observations and simulated pairs, while the second row normalizes the same values to the mean and standard deviation of their spherically randomized counterparts. \label{fig:scatter_elvis}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{gaussian_model_elvis_M31.pdf} \includegraphics[width=0.48\textwidth]{gaussian_model_elvis_MW.pdf} \caption{Same layout as Figure \ref{fig:correlations_illustrisdm}, this time computed from the ELVIS data. \label{fig:correlations_elvis}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{scatter_ranked_width.pdf} \includegraphics[width=0.32\textwidth]{scatter_ranked_ca_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_ranked_ba_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_width.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_ca_ratio.pdf} \includegraphics[width=0.32\textwidth]{scatter_norm_ranked_ba_ratio.pdf} \caption{Same layout as Figure \ref{fig:scatter_elvis}, this time computed from the Illustris-1 data. 
\label{fig:scatter_illustris}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{gaussian_model_illustris_M31.pdf} \includegraphics[width=0.48\textwidth]{gaussian_model_illustris_MW.pdf} \caption{ Same layout as Figure \ref{fig:correlations_illustrisdm}, this time computed from the Illustris-1 data. \label{fig:correlations_illustris}} \end{figure*} \end{document}
{ "alphanum_fraction": 0.7582295416, "avg_line_length": 43.3586601307, "ext": "tex", "hexsha": "6ee182490246d4846205108188eea76ec88013fe", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8715e8779ce89fe720b979b148183a99ae06ec23", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "VeronicaArias/AlineacionesIllustris", "max_forks_repo_path": "paper/paper_old.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "8715e8779ce89fe720b979b148183a99ae06ec23", "max_issues_repo_issues_event_max_datetime": "2016-12-15T19:26:33.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-29T21:06:24.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "VeronicaArias/AlineacionesIllustris", "max_issues_repo_path": "paper/paper_old.tex", "max_line_length": 403, "max_stars_count": 1, "max_stars_repo_head_hexsha": "8715e8779ce89fe720b979b148183a99ae06ec23", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "astroandes/SatelliteShapeLG", "max_stars_repo_path": "paper/paper_old.tex", "max_stars_repo_stars_event_max_datetime": "2015-05-01T04:41:51.000Z", "max_stars_repo_stars_event_min_datetime": "2015-05-01T04:41:51.000Z", "num_tokens": 14993, "size": 53071 }
\documentclass{article} \usepackage{todonotes,txfonts,balance,graphics,color} \usepackage{booktabs,textcomp,microtype,ccicons} \usepackage[margin=1in]{geometry} % \usepackage{babel} % \usepackage{csquotes} % \usepackage{biblatex-chicago} % \usepackage[ % backend=biber, % style=, % ]{biblatex} \usepackage[authordate,backend=biber,natbib]{biblatex-chicago} \usepackage[T1]{fontenc} \usepackage[super]{nth} % \usepackage[pdftex,pdfpagelabels=false]{hyperref} % \usepackage[all]{hypcap} % Fixes bug in hyperref caption linking \usepackage[utf8]{inputenc} % for a UTF8 editor only \def\plaintitle{Quantified Self: Ethnography of a Digital Culture} \def\plainauthor{Ali Alkhatib} \def\emptyauthor{} % llt: Define a global style for URLs, rather that the default one \makeatletter \def\url@leostyle{% \@ifundefined{selectfont}{ \def\UrlFont{\sf} }{ \def\UrlFont{\small\bf\ttfamily} }} \makeatother \urlstyle{leo} % To make various LaTeX processors do the right thing with page size. \def\pprw{8.5in} \def\pprh{11in} \special{papersize=\pprw,\pprh} \setlength{\paperwidth}{\pprw} \setlength{\paperheight}{\pprh} \setlength{\pdfpagewidth}{\pprw} \setlength{\pdfpageheight}{\pprh} % Make sure hyperref comes last of your loaded packages, to give it a % fighting chance of not being over-written, since its job is to % redefine many LaTeX commands. \definecolor{linkColor}{RGB}{6,125,233} % \hypersetup{% % pdftitle={\plaintitle}, % % Use \plainauthor for final version. % % pdfauthor={\plainauthor}, % pdfauthor={\emptyauthor}, % pdfkeywords={\plainkeywords}, % bookmarksnumbered, % pdfstartview={FitH}, % colorlinks, % citecolor=black, % filecolor=black, % linkcolor=black, % urlcolor=linkColor, % breaklinks=true, % hypertexnames=false % } % create a shortcut to typeset table headings % \newcommand\tabhead[1]{\small\textbf{#1}} % \bibliography{../../content/references} \bibliography{../../content/references} % End of preamble. Here it comes the document. 
\begin{document} \section*{Introduction} Benjamin Franklin, Nikola Tesla, and Leonardo da Vinci all shared a quality aside from their eminent genius and achievements in science and technology: their obsessions with the quantitative, particularly regarding the relationship between numbers and human beings, were unmatched by their contemporaries. Da Vinci's Vitruvian Man represents a tribute to the perfect, if complex, geometry of nature and human anatomy; Franklin's quiet obsession with counting his steps, breathing, and exertion throughout the day did not escape biographical accounts of his life; and Tesla insisted on arrangements of multiples of three in everything from table linens to hotel room assignments. The appeal of mathematics in personal life, though well documented in Western history, is not exclusive to Western cultures. Mathematicians and scientists, similarly drawn to numbers and positivism, dot the timeline of world history. Srinivasa Ramanujan, famous for the sheer number of discoveries he made virtually independently of his Western contemporaries, claimed to have dreamt of equations and formulas communicated to him by the goddess Namagiri. Indeed, he ``\dots peopled his world with those anthropomorphic formulae and numbers\dots'' \citep{nandy1998return}. These people, outliers in their respective cultures for their eccentric behavior and fascination with numbers, foreshadowed what is now a trend in self--tracking, measurement, and analysis: the ``Quantified Self'' movement (hereafter referred to as ``QS''). In comparison to the famous idiosyncratic and even non--normative historical examples discussed above, this movement represents a more formalized, if equally passionate, culture of people similarly fascinated by numbers representing various aspects of their lives, especially as they relate to the self and well--being. 
Today, technology facilitates self--quantification in the form of ubiquitous computing devices which track steps, measure sleep, count calories, and aggregate activities and other events into databases describing every movement, and perhaps every moment, of one's day. What prompted the growth of this sub--group into a mainstream culture, and what fostered that growth? Perhaps more importantly, why does a quantitative analysis of one's life appeal to the members of this group? What, ultimately, is the worldview being expressed? The beginnings of answers to these questions would not only better illuminate the community of self--quantifiers, but would provide insight into where the QS movement is heading, and perhaps who will join it along the way. Even if mainstream culture as we know it does not adopt self--quantification wholeheartedly, its response to this culture may carry repercussions which can be traced from this, arguably the focal point of the modern QS movement. This, perhaps, is the primary driver behind studying such a dedicated group as those among the Quantified Self movement. While members of QS culture adopt technologies and practices many in mainstream culture would never incorporate into their own lives, those who practice self--quantification distill many of the ideas broadly held in popular culture, outlining patterns of behavior in much the same way that Coleman's study of hackers illuminated, through the narrow lens of a niche group, an aspect of the larger culture \citep{coleman2013coding}. In this context, study of the culture of the QS community might yield insight into mainstream culture through the lens of self--analysis, quantification, and an increasing reliance upon quantitative, data--driven decisions about our lives. 
As an anthropologist with a background in computer science, I found myself fascinated by digital cultures, but to describe self--quantification --- or even a community defined by its members' practices of self--quantification --- as a digital culture may seem like somewhat of a stretch. In casual discourse, when we describe a culture as ``digital'', we mean that it exists on the Internet, bound only by wires and abstracted from conventional notions and limitations such as embodiment and geography. In fact, ``digital culture'' can refer to any of several distinct arrangements. There is the intuitive, purely online culture, which Vincent Miller describes predominantly in \textit{Understanding Digital Cultures} and which is the subject of significant academic interest. These cultures typically disassociate themselves from any offline relationships. But digital culture can also refer to cultures that exist both online and offline. These communities may relate their online activity to the offline in the form of ``real life'' meet--ups, or by coordinating activities offline (across a spectrum ranging from dating to protest). Typically, members of these communities distinguish between the offline and online in a code--switching resembling a stream of binary data --- a sequence of ones and zeros metaphorically representing offline versus online events and experiences. Quantified Self represents that stream of data compacted into an almost indistinguishable, continuous flow of ones and zeros: offline, analog activities and online, digital measurements and analyses. Quantified Self in this context, characterized by diligent recording and intense self--scrutiny, does not represent mainstream culture \textit{per se}; not even most users of self--measurement and tracking technologies, such as the Jawbone UP and Withings scales, would consider themselves members of the QS movement.
QS culture represents not only the most active users of these devices and services, but potentially the forerunners of mainstream self--quantification as well. In exploring new processes and technologies, and in attempting to reason about the self through these tools, the QS community field--tests new approaches to scrutinizing the self, vetting these methods for mainstream culture. Just as many of us have grown accustomed to carefully managing our finances through tools like Mint, or to tracking our fitness with any of a variety of devices designed to passively record movement, the trajectory of QS culture today and in the near future will reveal trends in society at large. There is, perhaps, another aspect of the Quantified Self that makes it so compelling to study. Just as luminaries throughout history found themselves alone in their self--tracking habits, self--quantifiers have been a niche group. That is, until recently. The recent explosion of self--tracking technologies into the spotlight of mainstream culture is now forcing QS culture to reconcile internal conflicts as it is consumed by non--QS, mainstream culture, the way so many other niche groups have been assimilated and incorporated into the mainstream. At this crucial time in the life cycle of the QS movement, we can see some aspects of dedicated Quantified Self culture assimilating while others are shed. Still other facets of QS culture seem to provoke backlash in mainstream culture. Like an astronomical event, these periods of cultural assimilation are at once spectacularly revealing about both cultures, incredibly rare, and eminently unpredictable. \section*{Past Literature} \subsection*{Historical} Since the formal conception of its name, Quantified Self has drawn deeply from existing practices and cultures of self--tracking and measurement. The very basis of quantitative tracking at a large scale is steeped in historical precedent.
Various Mesopotamian, Chinese, Indian, Incan, and other cultures all implemented censuses to facilitate their administration of sprawling, expansive empires. In their own ways, censuses quantified the presence and qualities of the people, households, and communities making up these civilizations. While the notion of quantifying the loosely defined self and informing policy through data--driven quantitative analysis is not new, the source of ``Quantified Self'', the formal name of the culture of interest, is not so storied. Indeed, the term ``Quantified Self'' is new, with records indicating its coinage only as far back as 2007, when a blog post --- attributed to Kevin Kelly and crediting Gary Wolf --- pondered the question ``what is the Quantified Self'' \citep{whatisQS}. In its nascent form, QS referenced a range of topics: ``Personal Genome Sequencing, Lifelogging, Self--Experimentation, \dots Location tracking, Non--invasive probes, Digitizing Body Info\dots'' and other aspects of digital tracking and measurement \citep{whatisQS}. Since this evident reification, QS has been the subject of intense interest in both academia and popular culture. Professional scientists as well as laymen have sought out QS practitioners as informants regarding everyday factors that might influence myriad aspects of our lives, including the public practice of science, health, ``Big Data'', and others. \subsection*{Public Science} One component of QS culture has been that of ``public science'', characterized by laymen engaging with scientific research and in experiments of their own (Cohen 2014).
\todo{citation?} Cohen's writing explores this subject but stops short of the extreme outcomes of public science, wherein the public engage in self--experimentation often with little or no knowledge of the dangers involved, perhaps because the subject of the writing was in fact a credentialed scientist. As Halavais illustrates, ``\dots with lifelogging and the growth of the quantified self movement, there are greater opportunities for comparison and aggregation,'' but at the cost of problematizing standardized methodologies, complicating and perhaps foregoing scientific results in the conventional senses of reproducibility and repeatability (Halavais 2013). \todo{citation?} Nevertheless, Kido and Swan demonstrate the potential usefulness of ``citizen science'', concluding in part that ``personal [genomics] can be applied in the future to prophylactic medical care'' and suggesting that public science in this context must be centrally managed, in this case by medical care practitioners (Kido and Swan 2014:3). \todo{citation?} Researchers have expressed skepticism about the follow--through of this idyllic description of the application of QS culture. Fajans's critique, that self--quantifiers are more interested in the ``allure of becoming a personal scientist empowered \dots [by] personal technology,'' is based primarily on fieldwork at offline ``meet--ups'', where his finding that participants were not particularly invested in quantitative analysis might be explained by circumstantial affordances, or rather the lack thereof (Fajans 2007:3). \todo{citation?} Fajans neglects the possibility that quantitative analysis was indeed a significant aspect of his participants' lives and worldviews, but that their offline interactions did not afford intuitive or natural engagement with that experience of Quantified Self, so their focus shifted away from it during offline interactions.
All considered, the appeal, if not the practice, of learning about the self through experimentation and science --- and certainly the ability to communicate one's personal findings with authority --- seems compelling to members of QS culture and beyond. Halavais discusses this effect in his study of Reddit --- a community--driven forum where users vote on the quality of others' posts, thus affecting their visibility and ultimately their impact --- especially subgroups with interests in dieting, fitness, and performance--enhancing drug use (Halavais 2013). \todo{citation?} In these cases, the potential to improve one's life through measured and even rigorous experimentation, combined with the opportunity to become an authority, qualified by the now--ubiquitous acronym YMMV (``Your Mileage May Vary'', meaning that the advisor's experiences may be unique to him or her), seems to drive participation. The former catalyst, the opportunity to improve one's life through rigorous critical self--evaluation, in some ways parallels Taylorism and scientific management. Namely, when various life goals (e.g. better health, happiness, etc.) are reduced to metrics (increasing the number of steps one takes per day, or increasing net worth, respectively), self--quantifying ubiquitous technologies lower the barrier to regular iterative experimentation and eventually to perceived experiential authority in a given domain. It is worth noting, however, that scientific management under Taylorism has been characterized by an imposition of order by a superior upon a worker for the purpose of improving efficiency, reducing factory work to a series of refined, measured actions to eke greater output from minimal human labor. In the culture of QS, by contrast, internal motivation to analyze and ultimately understand the self describes the prevalent ethos.
\subsection*{Health} The field of medicine has been quick to adopt the measuring, tracking, and analytic features for which self--quantification has become famous, especially with regard to the democratized access to data afforded by self--tracking technologies. In this area, Quantified Self runs the gamut of medical practice and research, including chronic disease management, research, and medical administration such as electronic medical records. Diabetic patients, for instance, have been measuring and tracking glucose levels for over 40 years through the use of home glucose monitors. More recently, the emergence of electronic medical records (EMRs) as a viable mechanism for tracking and administering patient care in hospitals and clinics promises improvements in medical care across the board. \todo{citation?} Research on patient motivation to self--track and quantify has emerged from a variety of fields, each bringing its own methodological biases and implications through which findings must be parsed. Gimpel, Nißen, and Görlitz approach patient--driven healthcare information systems from a perspective of information systems management and, in doing so, treat Quantified Self merely as a methodology rather than as a rich culture with qualitative meaning (Gimpel et al. 2013:3). While their findings aptly differentiated Quantified Self data collection from sharing with other self--quantifiers, their strict focus on quantitative methods prevented their research from further exploring the culture of Quantified Self as such (Gimpel et al. 2013:2). Genetics, especially democratized access to genome sequencing and analysis, has also played a significant role in the development of Quantified Self.
With the recent growth of the personalized genome sequencing industry thanks to services such as 23andMe, numerous companies offer participants the ability not only to learn more about their genetic heritage, but also ostensibly (and in fact controversially) to learn their probabilistic likelihood of various genetic disorders and diseases. \todo{citation?} In this space, biomedical ethics researchers Lee and Crawley (2009) discuss the ethical implications of 23andWe, \todo{citation?} a crowd--sourcing research arm of the corporation's genome sequencing business, in which participants complete surveys allowing 23andMe to associate phenotypic characteristics with genotype data already collected by the company (Lee and Crawley 2009:38). \todo{citation?} Lee and Crawley point out that 23andMe has sought healthy interaction with the scientific community at large, but ultimately the wealth of genetic data 23andMe offers to share is managed by a private, for--profit corporation, operating with vested interest and therefore bias in the use of said data. \subsection*{Gamification} Numerous studies have explored approaches to spark user engagement in otherwise tedious tasks such as counting steps or tracking sleep habits. Perhaps the most noteworthy of these approaches is ``gamification'', in which an activity is made competitive. Competing for higher or lower numbers of points, especially among participating friends, is one such mechanism. Oftentimes points are accumulated through activities such as steps taken or, in the case of home energy monitoring, watts of electricity used, each gesturing toward holistic goals such as fitness and environmental awareness. Whitson considers the role of gamification in the context of self--quantification, pointing out that ``\dots we gamify without technology all the time.
Many of these games are tied with measuring our own performance,'' though she nevertheless focuses her research on technological, automated self--tracking and gamification (Whitson 2013:169). \todo{citation?} Whitson's findings suggest an element of Taylorism in Quantified Self, the notion that rigorously monitored and controlled processes might yield improvements upon the self (Whitson 2013:170). \todo{citation?} Whitson's ``stream of numbers'' provides concrete, tangible references to otherwise vaguely described experiences in our day--to--day lives; a daily run becomes more memorable with the association of quantitative measures and perhaps a shared, and therefore competitive, aspect to the activity itself (Whitson 2013:175). \todo{citation?} With gaming and competition, addiction seems a natural subsequent area of exploration. To this end, Margaret Willis explores gamification and in particular addiction, finding, based on Natasha Schull's research, that ``\dots games [such as competitive running apps] are intentionally engineered to absorb the player'' (Willis 2013:9). \todo{citation?} Schull further explores this subject in her study of digital gambling, discovering that game designers enhance player engrossment ``\dots by muting the very features that define the cutting edge of digital game design'' (Schull 2005:77). \todo{citation?} The digitization of measures such as steps, sleep, and other metrics may allow designers of self--quantification tools and services to turn otherwise complicated and tedious activities into simple and addictive ones. \subsection*{Algorithmic Living} Whitson's description of gamification and QS culture shares many features with another component of QS --- that of living according to data--driven indicators, sometimes described as ``living algorithmically''.
Sheth, Anantharam, and Henson (2013) describe, in a case study and further investigation, research participants who quantify their health and modify their behavior according to the resulting data (Sheth et al. 2013:78). \todo{citation?} Sheth et al. describe a relationship between Data, Information, Knowledge, and Wisdom that implicitly permeates QS culture and Big Data more broadly (discussed later), though their study focuses on the management of medical conditions (Sheth et al. 2013:81). \todo{citation?} While medical conditions demanding constant management and scrutiny (e.g. diabetes) are on the rise in developed nations, these cases are driven by necessity rather than by desire and present a skewed perspective on the motivations of Quantified Self. More directly, the distinction between quantification of the self by choice versus by necessity influences the cultural norms and mores of the process and of the group engaging in it. Nevertheless, the notion of living according to data--driven metrics permeates QS culture across the board. Treatment of diabetes and other chronic conditions increasingly considers longitudinal tracking data on a patient's health and wellbeing, informing larger trends in the context of the individual's life: by identifying repeated interactions between diet, exercise, mood, and life events versus clinical metrics such as glucose levels, patients with diabetes might hope to manage their condition by casually integrating the analytic insights of self--quantifying technologies. This aspect of the Quantified Self was only explored superficially in this research, but it has attracted significant attention from the medical and informatics fields. The focus of this research, instead, was to highlight the culture of self--quantification by choice --- those for whom the activities of self--tracking and analysis are opt--in, or discretionary.
\subsection*{Quantified Other} The appeal of Quantified Self is not exclusively personal. Transnational corporations, approaching the scale of small nations in workforce size, have found that the costs associated with caring for their employees --- including health care, sick leave, and other externalities --- can be minimized through policies encouraging healthy living. To this end, BP and others have adopted programs offering employees devices such as Fitbit wristbands in the hope that tracking their steps per day, minutes of sleep per night, and other ostensible indicators of wellness and fitness will give the users and the corporations more insight into the workforce's health (Olson 2014b). \todo{citation?} Olson reports that participants in this aspect of the Quantified Self are not particularly moved by the typical motivations of self--quantifiers, but that ``\dots most people [at BP] don't really care about how many steps they've taken each day, but they do care about their insurance and energy bills'' (Olson 2014a). \todo{citation?} This motivation, it seems, is shared; corporations benefit from having a healthy workforce not primarily because happy workers are the ultimate goal of companies like BP, but because healthy workers tend to cost companies less in insurance claims, vacation and sick days, and other draws on the bottom line. BP's impact would be significant on its own, but even more grandiose attempts to quantify individuals at large scales make endeavors such as BP's seem timid. Thermostat manufacturing startup Nest, recently acquired by Google, has ``struck deals with close to 20 utility companies\dots to manage the energy usage of Nest customers who had opted into their utility's demand--response programs'', providing Nest, and by extension the utility companies, with better insight into energy and resource demands more rapidly and in greater detail (Olson 2014a).
\todo{citation?} It is worth contrasting this use of self--quantification with the dichotomous characterization established earlier in the context of algorithmic living. Indeed, an able--bodied employee at BP with no medical complications is arguably not self--quantifying by necessity, but neither is he or she self--quantifying entirely by choice. Instead, various pressures (both corporate--political and financial) exert themselves on employees to cooperate with this initiative. \todo{citation?} Incentives such as saving money on company--provided insurance policies obscure this relationship, but Foucault's insights on the exertion of governmental power offer critiques applicable here, suggesting a similar dynamic. Namely, Foucault names three states or phases in politics: the state of justice, the administrative state, and the governmental state. In the state of justice he describes, authority is used simplistically to enforce reciprocal obligations, but in the fifteenth and sixteenth centuries Foucault identifies a transition of powers to societies of ``regulation and discipline'' (Foucault 1991:104). \todo{citation?} To this end, BP might be described as enforcing regulation and discipline in an almost literal or parental way, encouraging and even coercing its ``subjects'' to behave in ways it deems normative. Taylorism, differentiated earlier under the heading of public science, returns here as a critical lens on BP's initiative to improve the wellbeing of its workforce. Whereas the primary factor distancing Taylorism from QS participants was the internal motivation of self--quantifiers, as compared with the external pressure of factory managers and owners striving to improve worker efficiency, BP's initiative, as previously described, does not fall in line with the internal motivations of members of QS.
Instead, it more closely resembles Taylorism: BP reduces health and wellbeing to steps, minutes of sleep, and other simplified metrics, all to further its efforts to maximize employee wellbeing and ultimately to minimize claims for medical care and interruptions from work due to avoidable sickness. \subsection*{Big Data, Abstracted} If the spark of value in Quantified Self is in knowledge of the self through numbers, then its fuel seems to come from the magnitude of data generated surrounding the individual. In this sense, and borrowing a term from Hegel, the ``Geist'' of QS is intrinsically related to ``Big Data'', an all--encompassing term for wells of information so large that their management and analysis themselves become non--trivial (Hegel 1869). \todo{citation?} Given that practitioners of the QS movement pride themselves on the sheer scale of data that can be collected, it follows that they would eventually arrive at what some have described as the Quantified Other --- large--scale quantitative analysis at the city, state, nation, and global scale (Olson 2014b). \todo{citation?} This aspect of the Quantified Self brings together many of the previously discussed facets of the study of the self. With comprehensive, longitudinal personal data on even relatively small cross sections of individuals, the thinking goes, a myriad of trends might emerge for those who have access to the bigger picture. At these scales, the outbreak and spread of infectious diseases may even become predictable; Google has certainly made an effort of it, success notwithstanding (Lazer et al. 2014:1205).
\todo{citation?} In a study commissioned by the University of California Transportation Center, researchers pointed out that research through members of the QS movement provides social science researchers with the ability, in theory, to study individuals' activities deeply throughout the day and for extended periods of time without invasive research methods (Jariyasunant et al. 2011:3). \subsection*{Visualization} Quantified Self in its most modern context relies heavily on the meaningful automated representation of data, particularly through visual means. While research in the general area of data visualization informs much of this work, several studies consider factors specific to Quantified Self, especially given characteristics unique to Big Data such as the velocity of changing data, the scale or magnitude of the data generated, and the myriad sources and formats of data. Yang, Lee, and Gurrin (2013) considered the visualization of lifelogging data and found that the affordances of differing platforms (e.g. smartphones, laptops, etc\dots) provide users with unique opportunities to interpret data in specialized and compelling ways. \todo{citation?} These findings are crucial to the adoption of Quantified Self technologies, as Yang and Gurrin (2013) describe the challenges involved: \todo{citation?} ``[Quantified Self] might be overwhelming and impractical to manually scan the full contents\dots we write once but never read again'' (Yang and Gurrin 2013:1). \todo{citation?} Further, as Bowker points out, ``raw data'' is far from objective, or in any sense ``raw''; instead, categorizations in how data is collected and stored bias the data, and inbuilt biases on the part of the observer or analyst further skew it (Bowker 1999:170).
\todo{citation?} Rather than despair over the inherent biases and limited affordances of data structures, methodologies, and schemata, data science has attempted to work within those limitations; Sharif and Symanzik (2013), for example, actively pursue data visualization tools for the R programming language, attempting to democratize visualization tools for users of actigraphy data such as step tracking (Sharif and Symanzik 2013). \todo{citation?} \subsection*{Methods} Studying the culture of Quantified Self proved challenging and necessitated measured approaches, both to remain in line with ethical guidelines and to fully acclimate to the qualitative culture of a group which strongly prefers quantitative approaches. In my interactions with participants in the QS community, I found that indicators from an individual's dataset, unique to their personal behavior and habits, could identify or otherwise expose them to others. In accordance with Institutional Review Board (IRB) guidelines, the analytical findings of self--quantifiers could therefore generally neither be saved nor interpreted during the course of the research. Interestingly, IRB guidelines provide caveats to considerations of privacy, confidentiality, and research participant wellbeing under the header of self--experimentation. Given that the primary investigator's knowledge of the dangers and benefits, and vested interest in personal wellbeing, is unparalleled, it follows that IRB approval for self--experimentation of this nature is not required. The implications of this precedent partially guided the methodology of this research, as discussed later. During the course of my research I studied the public communication and prescriptions of Quantified Self leaders who offered advice and suggestions to those seeking input on how to begin quantifying and tracking themselves.
From this information I learned more about devices, approaches, and guiding principles that would later inform my own journey into self--quantification, enabling an auto--ethnographic experience similar in nature to Linder's ethnography of body--building (Linder 2007). \todo{citation?} With a WiFi--connected scale, a Bluetooth--enabled movement tracker, and a careful eye toward the stream of analytics from these and other sources, I began quantitatively tracking my steps, my sleep, and ultimately my self. Maintaining distance between members of the Quantified Self community and myself by avoiding exposure to their personal data proved beneficial for a number of reasons. My initial expectation was that limiting exposure to QS data other than my own would obviate some concerns regarding participant privacy and confidentiality, and this expectation was borne out. More interestingly, however, direct access to another individual's data --- be it steps taken per day or financial details --- proved somewhat taboo in QS culture. This and other subtle nuances in the etiquette of interacting with other self--quantifiers guided my research methods and ultimately shaped my own approaches to interacting with potential participants in my research. For more than two months, I tracked my weight, steps, sleep length and quality, and other details related to my fitness. More than that, and extending much of the past research previously discussed, I followed other aspects of my life as well: finances, music consumption, online reading habits, and more. As I discovered quantifiable aspects of my life, I included them in my tracking regimen in the spirit of much of the Quantified Self literature I had discovered: automating what I could, simplifying what I needed to, but ultimately collecting as much data as possible for the purpose of later analysis --- whatever form that might take.
Over time, my collection of data grew to a sizable magnitude, and I began attempting to visualize and analyze the data I had collected about myself in novel combinations and permutations: music habits overlaid on steps taken, and sleep compared against steps or running times for corresponding days. I recorded my thought process during this time, making special note of what I was trying to accomplish with various combinations and of what appealed to me. After some time on my own, I began to approach other members of the Quantified Self community more directly, including friends from backgrounds unrelated to this culture of self--quantification. My semi--structured interviews, inquiries, and searches focused on the thoughts and impressions of participants rather than on the results of their tracking. I found my own experiences informing and prompting questions I might not have considered without the benefit of personal experience. This combination of participant--observation and semi--structured interviewing allowed me to make more of my time with self--quantifiers, who might otherwise have found my basic questions tiresome to endure. \subsection*{Ethnography} One warm February day I received my Jawbone UP --- a device designed to facilitate passive activity and sleep tracking --- and began tracking my day--to--day activity. As I had learned during my literature review, I had already been tracking myself, even quantifying myself, for months and years leading up to this ethnographic fieldwork. What changed when I received an activity--tracking wristband? I cannot precisely describe the transition, but the immediate feeling of wearing an accessory that I knew was constantly listening to my movements left me uneasy. I was vaguely familiar with concerns about data privacy, but this turned out not to be what was bothering me.
Eventually I realized that it was the sense of constantly measuring myself and comparing myself to previous days, trends, and success streaks that pressured me to be more active. In time, I grew accustomed to the feeling of omnipresent self--tracking and eventually came to appreciate the stream of data I was tapping. I found myself engaging with the data in two ways, which largely shape the structure of this ethnographic account. The first aspect of my engagement was visualization of, and wonder at, the aggregated collection of information I was accruing. The second was the way I gained actionable insights, and changed my behavior, based on the growing data I collected. \subsection*{A Personal Cabinet of Wonder} As with many tracking devices, my wristband was a two--part deal: along with the device came the use of a mobile application for my smartphone. This application served both as a focal point for the device to synchronize with and as the primary means of interpreting and visualizing the information aggregated from synchronization. In addition to these basic functions, the app was designed to connect with third--party services --- other tracking applications, such as running apps and weight--tracking apps --- to integrate their data with the rest. Within a few days of beginning my fieldwork I was able to overlook the subtle catch of the wristband on my sleeves and the occasional light and vibration of its indicators. I began to spend more and more time checking the mobile application accompanying the wristband for indicators of my daily progress: How many steps have I taken? How many hours (or minutes) have I slept? How do these relate to my self--defined goals? I explored various ways that my app could represent my data, at first at fine grain but eventually --- as the magnitude of my data supported it --- at daily, then weekly, then monthly levels.
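The roll--up from raw synchronized records to coarser views can be sketched in a few lines of Python. The dates and step counts below are invented for illustration, and the grouping logic is only an assumption about how a tracking application of this kind might aggregate its data, not a description of the Jawbone UP software itself:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical sample: (day, steps) records as a tracker might sync them.
# A single day can appear more than once if the device syncs repeatedly.
records = [
    (date(2014, 2, 3), 4200),   # a Monday
    (date(2014, 2, 4), 6100),
    (date(2014, 2, 4), 1800),   # a second sync on the same day
    (date(2014, 2, 10), 7500),  # the following Monday
]

def daily_totals(records):
    """Sum step counts per calendar day."""
    totals = defaultdict(int)
    for day, steps in records:
        totals[day] += steps
    return dict(totals)

def weekly_totals(records):
    """Sum step counts per week, keyed by that week's Monday."""
    totals = defaultdict(int)
    for day, steps in records:
        monday = day - timedelta(days=day.weekday())
        totals[monday] += steps
    return dict(totals)

daily = daily_totals(records)    # e.g. Feb 4 collapses to 7900 steps
weekly = weekly_totals(records)  # the week of Feb 3 totals 12100 steps
```

Monthly views follow the same pattern with a coarser key (year and month), which is why such apps can offer progressively wider summaries as the collected data grows.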
I was fascinated by the wealth of data I had access to, and virtually everything I did contributed to the pile. Over time, I more formally quantified other aspects of my life. My finances, through Web 2.0 services such as Mint, became visual eye candy and allowed me to spot trends and outliers in my spending history and habits. None of this information was necessarily out of my reach before --- certainly my bank provided a means of accessing and studying my transactions --- but I had never been able to interact with this plethora of data so fluidly. Perhaps more importantly, the data had never been so visually interesting until such attention had been paid to its presentation. One member of Quantified Self, during a meet--up event for other self--quantifiers, explained to another new participant that she was ``quantifying [her] life without even realizing it.'' He explained that the word--count requirements news reporters adhere to when writing their stories, the calories in the food we track when we diet, the miles per gallon we eke out of our cars during rush hour, and a host of other aspects of our lives all demonstrate quantification of some aspect of ourselves. This member proceeded to argue that virtually every digital artifact we produce --- from high--resolution photos to social networking check--ins --- constitutes Quantified Self in some tangential but deeply entrenched way. In his words, ``when it comes down to it, even pixels of a photo are just bytes''. At this point other members of the group took issue, arguing that photos, such as those of meals, belonged in the related but ultimately different community of ``lifelogging''. I asked the group if anyone could explain the difference between lifelogging and Quantified Self, and the group quieted briefly. At first, the participant who described Quantified Self and lifelogging as related claimed, ``everything [including lifelogging] is Quantified Self''.
To that, another replied ``no, they're cousins,'' suggesting that they're conceptually similar --- and even born of similar origins --- but ultimately different and arguably driving toward different ends. Some began to posit subtle distinctions but stopped midway, realizing that their boundaries excluded an aspect of Quantified Self or included an aspect of lifelogging that they had not intended. Eventually, another member suggested the following: Where the goal of lifelogging is to capture one's life, the goal of Quantified Self is to analyze one's life. This analysis, then, is a critical aspect of Quantified Self. During the meeting's round-table discussion, one member revealed several sheets of paper with what appeared, at first glance, to be gibberish. As it turned out, he had recently obtained his own genome sequence through 23andMe and was now curious what he could do with his newfound payload of data. The room buzzed with interest in his findings, and he answered several of the questions his peers asked about the insights he had gleaned before one of the more veteran members began to describe several of the potential scientific uses of one's personal genome data. The organizer suggested considering donating genome data to an open genetic analysis project. He cautioned that doing so would make the individual's entire genetic corpus available for the world to see. Not only that, but it would provide strong hints about the genetic makeup of living and future family members. Nevertheless, the benefits to the scientific community, he argued, would be significant. In my research leading up to fieldwork, I encountered a practice that emerged centuries ago but carries parallels with some of the findings I describe here. This practice manifested itself as a collection of fascinating, often exotic paraphernalia illustrating the exciting and far-reaching power of the collection's owner.
These were called ``cabinets of wonder'', and the parallel I was observing was in the menagerie of data sources I was collecting and the visually fascinating results I was generating. Exactly what self-quantifiers would ultimately do with this visually wondrous data is, as it must have been when cabinets of wonder were at the height of their popularity, unclear. For now, as another parallel, the data I collected was interesting and perhaps even impressive to those who take an interest in this sort of thing; nothing more. \subsection*{Living Algorithmically} During the same discussion at the previously mentioned meeting, one attendee stood and described his background as a Silicon Valley entrepreneur and his experience undertaking a rare and extreme diet that, from his report, seemed to improve his cardiovascular, respiratory, and psychological health with no ill effects. His story was personal and charismatic, relating his spiritual reconnection with the practices of meditation and self-awareness, but the audience was unmoved. When he paused for a moment, the moderator interjected, asking that we ``not talk about theories, [but] hard data''. ``Hard data'' was an interesting concept to encounter in the field. As previously discussed, data has a fabled authority rooted in the arguably misplaced trust that data is perfect, objective, and abstract (Bowker). \todo{citation?} Nevertheless, this attendee's unwillingness (or inability) to discuss quantitatively backed insights earned him little respect among his peers. \todo{citation?} Eventually he returned to his seat, clearly frustrated that he had not visibly won anyone's interest in the novel diet he had adopted and seemingly benefited from. At some point in exploring Quantified Self, I found myself studying my data not only to generate visualizations but also to inform my own actions.
I was transitioning from gathering data about my life to analyzing my life, or from lifelogging to true Quantified Self, as the participants who had attended the earlier meetup had differentiated the two. As I delved into the deeper analysis of my own data, I found the true appeal of the Quantified Self was not merely in the visualizations spoon-fed to me by applications supporting my devices, but in the analyses and visualizations that I could make for myself with my own data. I understood then what the entrepreneur was striving to achieve, and while he was using different means than the rest of the community, he seemed to be pursuing the same goal as the other members. In the parlance of Silicon Valley entrepreneurship, Quantified Self is striving to hack wellbeing through data-driven insights and disruptive technologies. This aspect of Quantified Self was the first direct reference to Silicon Valley's tech culture, in which Gary Wolf coined the term while writing for Wired in 2007. But underpinning the entire Quantified Self movement is the premise that --- given enough data, scrutiny, and insight --- we can improve our lives. A member of Quantified Self described the appeal of collecting infinitely more data by pointing out that one ``\dots can aggregate later, not disentangle,'' referring to the ability of self-quantifiers to take fine-grained data and parse it into bigger pieces, hoping to find patterns and ultimately ``personal meaning out of data''. In the same way that imprecise census data cannot be made more precise, poor self-tracking technologies cannot magically yield more specific datasets. Some members of the QS culture identify with the inclusive perspective that everything in our culture incorporates some aspect of self-quantification and tracking, and for good reason.
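As an aside, the ``aggregate later, not disentangle'' principle lends itself to a concrete illustration. The sketch below is my own (not any participant's tooling), with invented sample data: minute-level step samples can always be rolled up into daily totals, whereas the reverse --- recovering minutes from a daily total --- is impossible.

```python
from collections import defaultdict
from datetime import datetime

def aggregate_daily(samples):
    """Roll fine-grained (timestamp, steps) samples up into daily totals."""
    daily = defaultdict(int)
    for ts, steps in samples:
        # Bucket each sample by the calendar day of its timestamp.
        day = datetime.fromisoformat(ts).date().isoformat()
        daily[day] += steps
    return dict(daily)

# Invented minute-level readings from a hypothetical wristband.
samples = [
    ("2014-03-01T08:15:00", 412),
    ("2014-03-01T12:40:00", 1630),
    ("2014-03-02T09:05:00", 978),
]
print(aggregate_daily(samples))
# {'2014-03-01': 2042, '2014-03-02': 978}
```

The daily totals preserve nothing of the minute-level structure, which is exactly why self-quantifiers prefer to record at the finest grain their devices allow.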
Before I began my fieldwork, I used Mint to track my finances, Last.fm and Spotify to analyze my music interests, Netflix to find good new movies and television shows, Pocket to organize my reading list, and Facebook, Twitter, and other social networking sites to manage my relationships with friends and family. All of these services --- and innumerable others --- make a business of analyzing their users and deriving personal meaning from the mountains of data collected on those users. Question-and-answer sites Quora and Stack Overflow might represent a contrasting example to sites such as Facebook, which encourage and cajole users into revealing more about their personal selves by design. Instead, Quora and Stack Overflow have made a point of establishing themselves as epicenters of authoritative answers to challenging questions without focusing on holistic knowledge about their users. On both sites, virtually any user may post questions seeking substantive answers, and other users are encouraged to participate in the question either by answering it or by moderating when necessary. Both sites, interestingly, reward users for asking good questions and writing good answers by awarding ``points''. Points cannot be redeemed for any tangible value on either site --- or, for that matter, elsewhere --- but on both sites the number of points an individual acquires relates fairly directly to that individual's overall prestige in their respective subgroup. Even sites that tend to eschew all-consuming knowledge of their users --- such as Quora and Stack Overflow --- quantify their users, ``gamifying'' the experience and motivating continued engagement. Why do intangible, arbitrary credits, points, and awards motivate users? With a meager 335 reputation, 1 silver badge, and 7 bronze badges on Stack Overflow, and some 17,000 credits on Quora, I still can't answer these questions.
Nevertheless, I find myself motivated by the votes people give to each answer I provide, each one another endorsement of my rightness by the community and a few more credits in the metaphorical bank. Gamification plays a significant role in online communities across the Internet, and the relationship between the Quantified Self and gamified services has been explored to some extent by researchers in various fields. What I can say at this point, entrenched in this self-quantifying culture and glancing occasionally at my UP app's step tracker with the compulsion of an addict, is that something about these discrete measures of my accomplishments and contributions seems pleasantly straightforward. Never mind that the mechanisms behind eventual ``success'' in these gamified services remain black boxes for users such as myself; I have an intuitive understanding of what earns points (on Quora and Stack Overflow, my ultimate goal is to write good answers), and that drives my interaction. \subsection*{Data ownership} One challenge self-quantifiers face in deriving personal meaning from data is the question of data ownership. With Facebook, Twitter, and other social networking sites making varied claims to the content their users upload, participants in QS carry a heightened awareness of terms of use, data ownership, and their freedom to export and potentially delete data if the need or whim arises. In my interviews with members of QS, each had strong opinions about which services they used, predicated largely on the service provider's policy regarding user data. Some sites made it clear that the company retained all data generated by the user and synchronized with the site, usually offering to release that data only to ``premium'' users who pay for special access. Other services made no such clear explanation of their stance, but essentially communicated their position by providing users with no reasonable interface to export or delete data.
Self-quantifiers unanimously expressed disdain for both of these paradigms, advocating that self-quantifiers support only sites that made it clear that a user's generated data belonged to the user and was consequently the user's to purge if desired. Some members of the Quantified Self community, not having realized the policies of the services for which they registered and now too entrenched to abandon their data, often find clever workarounds to these problems. In one case, a site provided app makers with free access to user data provided that the user granted the app permission to access it. One clever self-quantifier wrote a generic application and provided instructions to register it with the service provider, thus enabling members of the QS community to export their data under the guise of a third-party app. With no effective way to prevent this use of its Application Programming Interface (API), \todo{citation?} the site that had once attempted to lock its users into its curated ecosystem through data ownership instead alienated the very users who might have otherwise been the service's most emphatic advocates. \subsection*{Public Science} As I explored the functions of my quantification technologies, I discovered that the company that made my wristband and activity-tracking application also produced a caffeine-tracking application. Within this app, I could indicate my caffeine consumption through an intuitive interface and visualize the amount of caffeine in my system using a simplified abstraction. Knowing my personal habits intuitively, I worried about the implications of committing my caffeine habits to record, especially to a record which communicated with my exercise and sleep schedules. In particular, I was concerned with the scrutiny that would follow patently unhealthy habits and behaviors; could future employers, with some access to this data, decide that I lived too erratic and unhealthy a lifestyle to be trusted with meaningful work?
Regardless of my track record, could every nuanced detail of my data be used against me? Ultimately I was too persuaded by the promise of ``UP Coffee Experiments'' and their proposed results to abstain. Within days, my digital log populated itself with the typical range of caffeine vessels: coffee, soda, and occasionally caffeine pills. Correlating my caffeine data with sleep duration, frequency, and quality came automatically, and I found that caffeine interrupts my sleep in every measurable way. What's more, increased caffeine consumption correlated negatively with sleep duration and quality. I was less than impressed. Anyone could have identified this relationship. In fact, this kind of conclusion could have been drawn a priori. Nevertheless, the confirmation of these intuitive findings --- and of course the magnitude with which caffeine affected my sleep --- was enough to fascinate me and influence my behavior. I began to make efforts to consume less caffeine, and to cut myself off within 6 hours of my intended bedtime. Through science --- flawed and lacking in scientific rigor though it may have been --- I drew personally compelling conclusions that persuaded me to modify my behavior toward what were arguably healthier choices. \subsection*{Health} 23andMe gained some fame and notoriety as the focus of intense scrutiny by the FDA for allegedly making claims about its service --- to sequence customers' genetic information --- without adequate backing from the proper governmental bodies. Fortunately for me, 23andMe's genome sequencing service was available at the time of my research, and I readily took part in it by sending the company a sample of my saliva in a test tube. Within a few weeks, 23andMe emailed me notification that my genetic analysis results were available.
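Before moving on, it is worth noting that the caffeine experiment described above ultimately reduces to a correlation between two daily series. The sketch below is my own code with invented numbers --- not the app's actual method --- showing the kind of computation such an ``experiment'' rests on:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented daily logs: caffeine intake (mg) and hours slept that night.
caffeine = [95, 190, 285, 380, 95, 475]
sleep_hours = [7.8, 7.1, 6.4, 5.9, 7.6, 5.2]
r = pearson(caffeine, sleep_hours)
print(round(r, 2))  # strongly negative: more caffeine, less sleep
```

A coefficient near $-1$ is the quantitative face of the intuition that caffeine and sleep are at odds --- though, as noted above, correlation alone says nothing about causation or magnitude of effect.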
While I discovered several interesting details about my predispositions to various genetic disorders, what was more compelling to my personal experience was the lack of indicators for certain others. Namely, indicators for Bipolar Disorder, which runs in my family, did not seem to show up in my genetic report. It was a sensation that may defy logic; I knew, from courses in quantitative analysis and statistical research, that I might manifest symptoms of Bipolar Disorder later in life and that probabilistic findings were not definitive. I also knew at a higher level that not only were some indicators of Bipolar Disorder not tested by 23andMe because of their inconsistency, but also that these findings might simply be wrong. Nevertheless, some indication to the contrary had, for me, a profoundly relieving effect. In the time since I ran my results through 23andMe, the FDA enforced a block on its service for potentially misrepresenting what the product offers. Nevertheless, I was able to download my genetic data, in the form of a raw text file, for which I still hope to find some novel use, whether in visualization or analytics. I found more pronounced changes in my behavior through the use of gamified self-quantifying services. I began my fieldwork running regularly, but without a formalized regimen or goals for my running pace or performance I was essentially running aimlessly. As I eased into quantifying various aspects of my life, I made a point of keeping my diet, exercise, and other habits steady enough to enable longitudinal measures and analyses. After some time I realized that I was consistently exceeding my steps goal by as much as twofold. Upon closer inspection, I found that this was primarily due to my daily runs, which were getting longer in duration and distance.
By the time I started collecting my thoughts, I was running 10 kilometers as part of my daily exercise regimen, more than three times what I seemed to be running before I began tracking and studying myself. My resting heart rate dropped, I lost several pounds, and my body fat percentage measurement steadily sank. All this, of course, monitored from the comfort of my smartphone, which allowed me to tap, paw, and scroll through my past measurements and recordings seeking trends and relationships I might not have seen otherwise. This longitudinal analysis provided me with context I had never experienced before; I was reviewing myself at a larger scale, as though reviewing the history of a third party on the whole and grasping the broader brushstrokes of my own life. I knew intuitively that these measurements were not telling me the whole story --- I can't find the shin splints or the occasional feelings of acute dehydration I experienced anywhere in the data --- but nevertheless I felt more informed from this data at a glance. \subsection*{Gamification} Reflecting in particular on my experience running and dieting, I can't ignore that something motivated my activity and general health habits that either was not present before or was not compelling enough to, for instance, drive me to run as much as I did at the peak of my running regimen. I can only point to the step-tracking app and its insistence on vibrantly displaying for me the exact number of steps I took, how that related to my daily goal, and how each aggregation of steps broke down. Various gaps --- such as my sleep patterns and idle times throughout the day --- became frustrating reminders of my sedentary self. I made a point of checking my app frequently, even when I hadn't been particularly active, to see how far along I was. I went to the gym some days because I knew that if I didn't, I would miss out on a 5-day streak of reaching my target number of steps.
From the outside, one might have seen it as an obsession. I found myself aiming higher and higher in number of steps, and scrutinizing the amount of sleep I was getting each night. For me, inexplicably, sleep became a game not unlike golf; I wanted to minimize the amount of sleep I needed and got, in an effort to maximize my productivity both in terms of fitness (i.e., number of steps) and in making the most of my time awake. In hindsight, I may have lost sight of the broader scope of my goals --- better fitness, as measured by those numbers --- but in the moment I found myself engrossed in these simplified metrics and aspiring to what may have been unsustainable behavior. \section*{Results} One of my first questions for knowledgeable self-quantifiers was about the relationship between lifelogging and Quantified Self, or the extent to which there is a distinction between the two. The answers to this and similar questions elicited robust debate among members of the QS community, with one participant responding at first that ``\dots everything is Quantified Self'' before walking back that claim with ``no, they [lifelogging and Quantified Self] are cousins.'' This point differentiating lifelogging from QS recurred throughout my conversations with self-quantifiers, seemingly splitting into two camps: those who believe that lifelogging is fundamentally different from QS, and those who believe that lifelogging, if not a superset of the QS movement, is at least closely related. ``Data tells stories'', one participant told me when I inquired about the appeal of self-quantification and the data it yielded. He went on to explain that data can provide insight into his (and my) life that can't necessarily be acquired through other methods.\todo{citation?}
This sentiment held true when another new member asked what the ``big vision, or mission statement of QS'' is, to which one moderator replied, ``to help people get personal meaning out of data''. Finding meaning in that data requires, from the perspectives of those I asked, several characteristics. First, the devices used --- and invariably devices are involved --- must be reasonably precise, if not necessarily accurate. One member joked that, after complaining to a fitness watchband company that his unit was only 65\% accurate, technical support replied with ``Yeah, great, right?'' indicating that his (and ostensibly others') demands for accuracy were higher than the company's. \todo{citation?} His advice to others was to establish a baseline, internalizing that one's own activity tracker is inaccurate but will at least reliably underreport or overreport activity. One attendee had brought with him several sheets of paper seemingly fresh from a printer. When his turn came to speak, he asked what others recommended he do with his newly analyzed genetic information from 23andMe. This prompted vigorous, sometimes tangential debate surrounding the propriety and intended privacy of genetic information, the importance of getting the most specific data possible, and the role of genetics in Quantified Self. Interestingly, no one raised any concern that genetic information --- coded alphabetically and largely impossible to measure or analyze quantitatively --- might not be appropriate within the strict definition of ``Quantified Self''; namely, that data recorded and discussed in this forum be quantitative in some sense or another. Indeed, the question of lifelogging returned at this point, and the rest of the group engaged with it. The notion that lifelogging and the Quantified Self were interchangeable had become unanimously objectionable after some debate and discussion, but the border delineating lifelogging and Quantified Self remained nonetheless blurry.
The group seemed to agree, reluctantly, that Quantified Self differentiates itself from lifelogging in that lifelogging is all about ``capturing'' information, whereas Quantified Self is about ``analyzing'' information, often quantitative, though evidently not necessarily. Here in particular, some niche groups spoke up to emphasize that everything can be perceived quantitatively: photos and videos, composed of countless pixels and bytes, can be meaningfully analyzed quantitatively (if not now, then in the not-too-distant future). Throughout my fieldwork, I found myself repeatedly asking what people's perspectives on privacy and data ownership were, struggling to reconcile the apparent openness with which self-quantifiers shared their experiences, published their data, and talked about their findings. When I asked self-quantifiers whether they were bothered by the fact that companies such as Jawbone and others retained their data, one responded nonchalantly ``nah\dots I mean it's part of the trade-off: we get to use these devices, and they get to use the data as well.'' The way self-quantifiers seemed to see it, the price they pay to track themselves using devices manufactured by companies such as Nike, Jawbone, Fitbit, et al. was that these companies would have co-ownership of the data representing their activities. This notion of co-ownership of data was strikingly mature; in a closed ecosystem, one cannot prevent the manufacturer of a technology from amassing the collection of data generated by its users. This did not bother them, and in fact they pointed out that these arrangements enabled QS participants to compare themselves against the population as a whole, giving cross-sectional context to the longitudinal insights they were distilling from their own measurements.
Another member chimed in with an example: ``When I got my genes sequenced by 23andMe, they used my data to find trends and relationships in diseases, right? So why would I have a problem with that?'' he posited back to me. Ignoring the apparent optimism of this perspective (and the subtly implied possessive claim that the data was his, but nevertheless 23andMe was using it), I was admittedly stumped. This and other perspectives held by members of the QS movement seemed to conflict with the positions I imagine mainstream culture would hold, and highlighted the difference between this chosen culture group and the population from which I ostensibly came. \section*{Discussion} The QS movement remains a niche community of enthusiasts whose world-views and interests in quantitative analysis and self-reflection represent non-normative behaviors, much like the aforementioned forerunners of QS: Franklin, Tesla, and undoubtedly countless others throughout the centuries. Nevertheless, clinical research and mainstream culture demonstrate that the movement known as Quantified Self and ``mainstream culture'' --- those who do not quantify their lives deliberately and intentionally --- are coming together. But coming together does not mean that these cultures will syncretize in the way many anthropologists tend to think about syncretism; while it may be true that the future holds wearable activity-tracking devices for everyone in mainstream culture, several aspects of Quantified Self culture may pose challenges which will have to be navigated both by contemporary self-quantifiers and by laymen incorporating quantification into their lives. Earlier, in referencing Foucault's states of governmentality, I neglected to describe the third and final state he discussed: that of power through security.
Dystopian though this outcome seems, revelations about massive-scale government wiretapping and Internet traffic monitoring suggest that this future may be nearer than we previously thought. Members of Quantified Self seemed unperturbed by questions of privacy and data ownership, suggesting a difference of perspectives between the QS community and mainstream culture. To what extent will this influence the adoption of wearable tracking technologies popularized by the QS movement? Have the revelations of these government monitoring programs set off a now-inescapable aversion towards self-quantification, for fear that aggregated data on the servers of various QS service providers will be vulnerable to government surveillance? Will these programs drive newfound popular interest in personally managed data, stored locally and securely on personal devices less susceptible to coordinated intrusion? Participants in the QS community seemed adamant that data ownership is non-negotiable, and the mobility of that data would certainly enable concerned self-quantifiers to secure their data if the need arose; is this ultimately the practical motivation of that perspective on data ownership? Less gloomily, the culture of QS may follow the cabinets of wonder and curiosity previously discussed, which served as forerunners to modern museums through systems of formalization and systematized collection across institutions and bodies (Wilson 1996:60). \todo{citation?} Given the poor interoperability between various services --- even similar ones which track steps or sleep --- it might be that compatibility between these services, if it ever comes, will trigger a sea change in users' willingness to engage in self-quantification.
Here too, QS participants have foreshadowed that concern; services that neither allow the export of data nor interact meaningfully with other services --- becoming ``silos'' of personal information, devoid of useful applications --- tend to arouse suspicion and ire among the community of self-quantifiers. This perspective might take hold among mainstream culture. Further complicating predictions is the knowledge that responses to cultural movements might take the form of backlash against the originating culture group, in much the same way that the 1960s were characterized by the counterculture response to the previous decade's emphasis on conformity. While these reactions can be traced a posteriori, they cannot be predicted a priori. Any of these outcomes is possible, as are innumerable others, but none can yet be predicted with any degree of certainty. Instead, one must keenly watch these initial interactions between Quantified Self and the mainstream of Western culture, and the resolutions to these questions of security, privacy, data ownership, and other potential disconnects between the QS movement and the mainstream. The community of Quantified Self is at a pivotal moment in its contemporary timeline. On the one hand, its members stand to be subsumed into the mainstream culture, potentially alienating their most dedicated and intensive self-quantifiers but acquiring countless casual practitioners of Quantified Self. On the other hand, the ideals of those in the QS community represent to many a Foucauldian third state of governmentality, a worst-case scenario of the Quantified Other in which we are all relentlessly monitored, measured, and analyzed.
These outcomes will depend on how members of the Quantified Self, particularly its advocates and evangelists, communicate the world-views of the QS community to mainstream culture, as well as on how well they understand the world-views --- and especially the concerns --- of those in the mainstream. Regardless of the outcomes of these debates, Quantified Self does not represent the last of our modern era of mobile technology and ubiquitous computing; on the contrary, QS technologies and the practices and patterns of behavior in the QS culture hint instead at the next era: technology interwoven into the lives of those willing to let it fill the gaps in our otherwise analog experiences, informing and guiding us. To what extent, and precisely how, we interweave these threads into our lives remains unclear. \printbibliography{} \end{document}
\chapter{Literature Review} \label{chapter:litReview} \section{Tensegrity Structures} It is possible to design free-standing structures with axially loaded compression elements held in a well-crafted network of tensional elements. Such an arrangement is called a tensegrity structure (tensile integrity). Each element of the structure experiences either pure axial compression or pure tension \cite{BuckminsterFuller1975,Snelson1965}. The absence of bending or shear forces allows for highly efficient use of materials, resulting in lightweight, yet robust systems. Because the struts are not directly connected, tensegrities have the unique property that externally applied forces distribute through the structure via multiple load paths. This creates a soft structure, for a soft robot, out of inherently rigid materials. Since there are no rigid connections within the structure, there are also no lever arms to magnify forces. The result is a global level of robustness and tolerance to forces applied from any direction.
Combined with the ability to diffuse forces globally, this makes tensegrity robots inherently compliant and extremely well suited for physical interactions with complex and poorly modeled natural environments. Active motion in tensegrity robots can be performed by changing cable lengths in parallel, enabling the use of many small actuators that work together, rather than individual heavy actuators which work in series. There are also many indications that tensegrity properties are prevalent throughout biological systems, and the morphology of the SUPERball that we are studying, especially when carrying a payload, bears a striking resemblance to the nucleated tensegrity model of cell structure~\cite{Wang2009,Wang2001}.

\section{Prior Work in Tensegrity Robotics Design}
An important advantage of tensegrity structures with respect to general pin-jointed structures is their increased mass-efficiency due to a high fraction of tensile members. Tensile members are generally more mass-efficient as they do not need to resist buckling. A further advantage from a robotics perspective is that forces diffuse in a tensegrity. There are no lever arms, and torques do not accumulate at the joints as in a classic serial manipulator. Forces distribute through multiple load paths, thus increasing robustness and tolerance to mechanical failure.

The static properties of tensegrities have been thoroughly studied, and some basic analysis is discussed in chapter~\ref{modeling}. On the other hand, few examples are known of truly dynamic motion of these structures. Early examples of kinematic motion include the work at EPFL's IMAC laboratory~\cite{Fest2004}. Skelton and Sultan introduced algorithms for the positioning of tensegrity based telescopes and the dynamic control of a tensegrity flight simulator platform~\cite{sultan2000tensegrity}.
Although there were some early efforts at MIT's CSAIL lab, it wasn't until the work of Paul and Lipson at Cornell University that the concept of tensegrity robotics became widespread~\cite{Paul2006a}. Paul and Lipson were the first to study the properties of dynamic tensegrity structures in hardware and simulation. A few years later, Fivat and Lipson designed the IcoTens, a small actuated tensegrity icosahedron robot, but did not publish results. In recent years, the BIER lab at the University of Virginia has been studying Central Pattern Generator based control for tensegrity based fish tails, which is closely related to the control architectures proposed for SUPERball~\cite{Caluwaerts2013rsif,Bliss2012}. Mirats-Tur has presented design and controls work on various other tensegrity morphologies that have been tethered or fixed to the ground~\cite{GraellsRovira2009,miratstur2011athree-dof}. At Union College, Rieffel and colleagues are following an interesting line of work by considering vibration based actuation for small tensegrities~\cite{khazanov2014developing}. Related work was presented by B\"ohm and Zimmermann, who demonstrated controlled locomotion of vibration driven tensegrity robots with a single actuator~\cite{bohm2013vibration}. Finally, Shibata, Hirai and colleagues have developed pneumatically actuated rolling tensegrity structures~\cite{Shibata2009}.

Building upon these works, the \SB{} project seeks to push forward the tensegrity robotics field and develop truly untethered, highly dynamic and compliant robots exploiting the aforementioned advantages.

\section{Tensegrity Robotics for Space Exploration}
The high strength-to-weight ratio of tensegrity structures is very attractive due to the impact of mass on mission launch costs.
Large tensegrity structures have been shown to be deployable from small, compact configurations, which enables them to fit into space-constrained launch fairings. While the above qualities have inspired studies of deployable antennae and other large space structures~\cite{Tibert2002}, it is in the realm of planetary exploration that we see the most significant role for many of the unique force distribution qualities of tensegrity robots. The project was formerly funded by the NASA Innovative Advanced Concepts (NIAC) program~\cite{NIACfinalreport}, and NASA's Game Changing Development (GCD) program currently funds this research to specifically study landing and surface mobility of tensegrities, exploiting the controllable compliance and force distribution properties which make for reliable and robust environmental interactions.

The main goal is to develop tensegrity probes with an actively controllable tensile network to enable compact stowage for launch, followed by deployment in preparation for landing. Due to their natural compliance and structural force distribution properties, tensegrity probes can safely absorb significant impact forces, enabling high speed Entry, Descent, and Landing (EDL) scenarios where the probe itself acts much like an airbag. However, unlike an airbag, which must be discarded after a single use, the tensegrity probe can actively control its shape to provide compliant rolling mobility while still maintaining the ability to safely absorb impact shocks that might occur during exploration. This combination of functions from a single structure enables compact and lightweight planetary exploration missions with the capabilities of traditional wheeled rovers, but with a mass and cost similar to or less than that of a stationary probe. Therefore, a large fraction of the overall weight (as measured at atmospheric entry) of a tensegrity mission can be used for the scientific payload due to the dual use of the structure as a lander and a rover.
This allows for cheaper missions and enables new forms of surface exploration that utilize the natural tolerance to impacts of tensegrities~\cite{Vytas_IPPW_2013}.

\section{Tensegrities as Soft Robots with Morphological Computation Capabilities}
\label{sec:robots}
\input{tex/reviewPaper/02_robots}

\section{Low-level Control for Tensegrity Robots}
\label{sec:control}
\input{tex/reviewPaper/03_control}

\section{High Level Control for Tensegrity Robots}
\label{sec:NN_and_Planning_overview}

\subsection{Artificial Neural Networks}
\label{sec:ann}
Artificial Neural Networks (ANNs) are a family of mathematical models, loosely based on the neural networks found in biology, that can estimate usually unknown functions mapping multiple inputs to a set of outputs. An ANN is comprised of a large set of simple functions grouped into different layers that define a mapping function \(f:X \to Y\), where \(X\) is the input set and \(Y\) is the output set. In practice, an ANN is organized into three main layers, defined as 1) the Input layer, 2) the Hidden layer, and 3) the Output layer. The Input and Output layers transfer data to and from the Hidden layer, respectively. The Hidden layer can be comprised of one or more sub-layers and is where the input data is converted to the output data. Each sub-layer of the Hidden layer is comprised of a defined number of simple functions \(H_{i}\) for \(i \in \{1, \dots, n\}\), where \(n\) is the total number of functions in a sub-layer. Changing how each function \(h \in H\) is connected to other functions, by the use of weights, allows the ANN to map its inputs to a desired output~\cite{lippmann1987introduction}. For our application, machine learning is used to estimate the connection weights through the use of a cost function. The cost function is an equation which measures the success of the learning process against a user-defined parameter or set of parameters.
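The layered mapping described above can be sketched in a few lines of Python. This is a minimal illustration only; the layer sizes, random weights, and sigmoid activation are assumptions made for the example, not details of any system discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # A common choice of simple function for the Hidden layer units.
    return 1.0 / (1.0 + np.exp(-z))

class TinyANN:
    """One-hidden-layer feedforward network implementing f: X -> Y."""

    def __init__(self, n_in, n_hidden, n_out):
        # Connection weights between layers; a learning algorithm would
        # adjust these to minimize a user-defined cost function.
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

    def forward(self, x):
        h = sigmoid(x @ self.W1)  # Hidden layer: n simple functions H_i
        return h @ self.W2        # Output layer: weighted readout

net = TinyANN(n_in=4, n_hidden=8, n_out=2)
y = net.forward(np.ones(4))
print(y.shape)  # (2,)
```

Training such a network amounts to iteratively updating \texttt{W1} and \texttt{W2} so that the cost function decreases, which is the role of the machine learning procedure mentioned above.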
%The cost function is an equation which measures how close to a user defined goal an iteration during the learning process was able to achieve.
\subsubsection{ANN for Locomotion}
Robotic learning methods have previously produced successful policies for tasks such as locomotion for walking robots and quadrupeds~\cite{tzs-spgrl-04,kp-pgrlf-04,cspd-bolg-15,kn-ldquad-11,kbps-temp-09}. These methods, however, typically require hand-engineered policy classes, such as a linear function approximator using a set of hand-designed features as input~\cite{tzs-spgrl-04}. For many tensegrity systems, it is difficult to design suitable policy classes, since the structure of a successful locomotion strategy might be highly complex.
% We illustrate this in
% Section~\ref{sec:results}, by demonstrating that it is desirable to have a
% representation that is closed-loop, since open-loop control, though simpler to
% design and implement, does not generalize as well to changes in terrain,
% gravity, and other environmental and robot parameters.
Some more recent methods learn deep neural network policies that are successful for tasks such as grasping with robotic arms and bipedal locomotion~\cite{slmja-trpo-15,lhphe-ccdrl-16}, and such policies are more expressive and require less hand-engineering than the policy classes used in previous methods. One such method, which is used in this work in section~\ref{sec:mdgps}, is mirror descent guided policy search (MDGPS), an algorithm that frames the guided policy search (GPS) alternating optimization framework as approximate mirror descent~\cite{ml-gpsam-16}. MDGPS was chosen for this work because it allows deep neural network policies to be learned while maintaining sample efficiency, and it presents a natural extension to periodic locomotion tasks.
A key problem for locomotion tasks is the difficulty of establishing stable periodic gaits, and this is exacerbated for tensegrity robots due to their complex dynamics and unusual control mechanisms. Near-stable behavior with even small inaccuracies can lead to compounding errors over time, and will not be successful in producing a continuous periodic gait. Previous algorithms have dealt with this problem by establishing periodicity directly through the choice of policy class~\cite{kp-pgrlf-04,gpw-fbwrc-06,emmnc-cpg-08}, utilizing a large number of samples~\cite{slmja-trpo-15}, or initializing from demonstrations~\cite{lk-gps-13,la-lnnpg-14}. Instead, this challenge is handled by sequentially training several simple policies that demonstrate good behavior from a wide range of states, and then learning a policy that reproduces the gait of all of the sequential policies for a successful periodic behavior. The resulting algorithm learns policies from scratch for robotic tasks that exhibit periodicity over long time horizons. Section~\ref{sec:mdgps} demonstrates through experimentation that a single learned policy for a tensegrity robot is capable of efficient, continuous locomotion in a range of different conditions by learning appropriate feedback from the robot's onboard sensors.

\subsection{Path Planning Through a Sampling-Based Motion Planner}
\label{sec:belief}
Recently, Littlefield \etal~\cite{littlefieldintegrating} implemented a high level sampling-based motion planner on a simulated \SB{} robot in the NASA Tensegrity Robotics Toolkit (NTRT), explained in chapter~\ref{modeling}. This method, based on a kinodynamic planner published by Li \etal~\cite{li2016asymptotically}, is able to converge to an asymptotically optimal solution on dynamic systems using only forward propagation, and uses a novel parallel implementation to make the forward propagation step more computationally feasible.
The whole process is done within the NTRT simulation environment, where each parallel process is an NTRT simulation of \SB{}. Figure~\ref{fig:particles} shows a visual representation of two forward propagation steps with their respective uncertainty shown as overlaid transparent robots.
\begin{figure}[thpb]
\centering
%\setlength{\unitlength}{0.5\columnwidth}
\includegraphics[width=\linewidth]{tex/img/particles}
\caption{
\label{fig:particles}
An example of a trajectory with two forward propagations and their uncertainty. Possible future configurations are shown as transparent versions of \SB{}. (figure courtesy of Zakary Littlefield).
}
\end{figure}
The main control method for this process has been a random sample of individual low level motor commands to produce the desired motion. It has been shown with full actuation on the simulated \SB{} that path following over various terrains and obstacles is achievable with this method. However, this process only learns a set of open loop actions based on the simulation environment and requires \(20\)+ processing cores to have reasonable computation times.

%% Do I still need this section????
% \subsection{Central Pattern Generators}
% \label{sec:CPG_overview}
% Central Pattern Generators, or CPGs, are a set of two or more rhythmic processes connected such that each output interacts with the other process(es) to sequentially decrease and increase each process' output.
% The benefit of such a network for rhythmic generators is that, once started, the network will have a series of rhythmic outputs that are phase offset from each other with no need for any external sensory input.
% Discovered over a hundred years ago~\cite{brown1911intrinsic} and formally observed and named in the early 1960's~\cite{wilson1961central}, CPGs have been linked to many rhythmic biological systems, e.g. vertebrate walking~\cite{brown1911intrinsic,grillner2006biological} and involuntary movements~\cite{janczewski2006distinct,robertson1981oscillatory}.
\documentclass[journal]{IEEEtran}
%\usepackage[left=2.2cm,right=2.2cm,top=2.5cm,bottom=2.5cm]{geometry}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{minted}
\usepackage{booktabs}
\usepackage{amsmath}
\usepackage{float}
\usepackage{mathtools}
\usepackage{color}
\usepackage{amsthm}
\usepackage{parskip}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{amssymb}
\theoremstyle{definition}
\newtheorem{defn}{Definition}[section]
\usepackage[binary-units=true]{siunitx}
\newcommand{\py}[1]{\mintinline{python}{#1}}
\newcommand{\java}[1]{\mintinline{java}{#1}}
\newcommand{\sql}[1]{\mintinline{sql}{#1}}
\renewcommand{\epsilon}{\varepsilon}
\renewcommand{\theta}{\vartheta}
\renewcommand{\kappa}{\varkappa}
\renewcommand{\rho}{\varrho}
\renewcommand{\phi}{\varphi}
\usepackage{hyperref}

\title{Architecture and Performance of Computer Systems \\ Project on Client-Server Application Performance}
\author{Gilles Peiffer (24321600), Liliya Semerikova (64811600)}
\date{January 6, 2020}

\begin{document}
\maketitle

\begin{abstract}
This paper studies the performance of a client-server application (more specifically, a MariaDB server and a remote computer sending randomized SQL queries to that server) in order to first measure the effect of various parameters (such as the query type or the rate at which queries are sent) on the response time of the application, and second, to compare these results to the expected values using the various models of queueing theory.
\end{abstract}

\section*{Introduction}
The answers to the various tasks are given in Sections~\ref{sec:task1} and \ref{sec:task2}. Section~\ref{sec:char} describes how to reproduce the experiments we did.

\section{Measurements}
\label{sec:task1}
For the first task, we first study the average query response time (and the influence of various parameters on it), then we try to find out what factors influence the response time, and where potential bottlenecks lie.
\subsection{Characteristics for Reproducibility}
\label{sec:char}
All experiments were done on two different physical computers:
\begin{itemize}
\item The server was MariaDB 10.4.11 running on an Early 2015 MacBook Pro with Ubuntu 18.04 LTS ``Bionic Beaver'', with \SI{8}{\giga\byte} of \SI{1867}{\mega\hertz} DDR3 RAM, a \SI{2.9}{\giga\hertz} Intel Core i5 CPU, an Intel Iris Graphics 6100 \SI{1536}{\mega\byte} GPU and \SI{500}{\giga\byte} of Flash storage.
\item The remote client was running on a 2016 MacBook Pro with macOS Catalina 10.15.2, with \SI{8}{\giga\byte} of \SI{2133}{\mega\hertz} LPDDR3 RAM, a \SI{2.9}{\giga\hertz} Dual-Core Intel Core i5 CPU, an Intel Iris Graphics 550 \SI{1536}{\mega\byte} GPU and \SI{251}{\giga\byte} of Flash storage.
\end{itemize}
The tests were done on two different networks:
\begin{itemize}
\item a WiFi network in one of the team members' student residence, on which the final tests were done (the ones appearing on plots), and
\item the home network of the other team member.
\end{itemize}
The former network has a download speed of \SI{62}{\mega\bit\per\second} and an upload speed on the order of \SI{6}{\mega\bit\per\second} according to the \href{https://fast.com/en/gb}{\url{FAST.com}} speed test.

For the measurements, no significant warmup was needed, as no amount of queries showed any signs of instability beyond what one would expect on a real-world physical network. All tests were repeated multiple times, and under different network loads, while final results were obtained when there was no other activity on the network.

Every query is made using a new connection, on a different thread. For this purpose, we defined a new Java class, \java{SQLThread}, which extends the \java{Thread} class and queries the database when its \java{run()} method is called.
The server is configured to allow up to three concurrent threads, by using the \mintinline[breaklines]{bash}{sudo mysqld --user mysql --innodb-thread-concurrency=3} command, unless specified otherwise. The measurements were written into CSV files by the Java code, which were then read by short Python scripts that generate the plots using \py{pandas}, \py{seaborn}, and \py{matplotlib}.

\subsection{Average Query Response Time}
\begin{defn}[Response time]
The \emph{response time} is typically treated as the elapsed time from the moment that a user enters a command or activates a function until the time that the application indicates that the command or function has completed.
\end{defn}
The following section studies the influence of the query type and the query rate on the response time of the application.

\subsubsection{Influence of the Query Type}
For this experiment, various types of queries were considered, based on the ones provided in the client template:
\begin{itemize}
\item ``\java{GetAverage}'' queries compute the average salary over a certain subset of rows of the table, where the number of rows and the starting row are randomly determined\footnote{In order to simplify the modeling task of Section~\ref{sec:task2}, this random value is generated in such a way that the number of rows over which the query operates is exponentially distributed.} outside of the timed section and are passed to the server using the \sql{LIMIT} and \sql{OFFSET} keywords.
\item ``\java{Select}'' queries select a subset of rows (randomly, according to the same rules as for the previous query) and return the result.
\item ``\java{Write}'' queries insert a row into the database.
\end{itemize}
For every type, tests were run multiple times, in order to get an idea of the average response time without inter-arrival pauses. For the actual tests, every thread waits a certain amount of time before starting its execution.
This amount is an exponentially distributed random variable, in order to mimic a Poisson process, where the parameter \(\lambda\) is different for every type of query and depends on the average response time as well as the number of concurrent threads allowed on the MariaDB server. In order to single out the influence of the query type, this value was chosen so as to make it very improbable that queries would have to wait before being answered (that is, the server had a very low utilization; approximately \(0.03\)). The result is shown on Figure~\ref{fig:query_influence}. This figure also contains theoretical results, which are explained in further detail in Section~\ref{sec:task2}. \begin{figure}[!hbtp] \centering \includegraphics[width=\columnwidth]{../plotting/query_influence} \caption{Influence of query type on response time} \label{fig:query_influence} \end{figure} The \(y\)-axis is shown with logarithmic units in order to clearly show the difference in response time for the various queries. The \java{Write} query is on average the fastest, which is logical since it only needs to insert a single row into the database; the \java{Select} and \java{GetAverage} queries are separated because the \sql{LIMIT} values are quite different for both requests, with the former having a higher limit than the latter. \subsubsection{Influence of the Number of Queries Per Second} Next, in order to control the influence of the rate at which queries are sent, we modify the sleep delay before each thread sends its query. We still use the same kind of delay as in the previous section, that is, one with exponentially distributed inter-arrival times. By changing the parameter of the distribution in order to shorten or lengthen the expected waiting time between queries, we were able to observe the influence of the arrival rate on the average response time. The results of this experiment are shown in Figures~\ref{fig:rate_influence_average} through~\ref{fig:rate_influence_write}. 
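The sleep mechanism just described can be sketched as follows, in a Python analogue of the Java client: one thread per query, with exponentially distributed gaps between arrival instants so that arrivals approximate a Poisson process of rate \(\lambda\). Here \texttt{send\_query} is a hypothetical stand-in for the actual connection and SQL call.

```python
import random
import threading
import time

LAMBDA = 5.0      # arrival rate, in queries per second
N_QUERIES = 10

def send_query(arrival_time):
    # Each thread sleeps until its scheduled arrival instant, then would
    # open a new connection and send its randomized SQL query (omitted).
    time.sleep(arrival_time)

# Cumulative arrival instants: consecutive gaps are Exp(LAMBDA) distributed,
# which makes the sequence of arrivals a (truncated) Poisson process.
arrivals, t = [], 0.0
for _ in range(N_QUERIES):
    t += random.expovariate(LAMBDA)
    arrivals.append(t)

threads = [threading.Thread(target=send_query, args=(a,)) for a in arrivals]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

Lengthening or shortening the mean inter-arrival time \(1/\lambda\) is then just a matter of changing \texttt{LAMBDA}, which is how the arrival-rate experiments below were parameterized.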
Again, theoretical results are described in Section~\ref{sec:task2}.
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\columnwidth]{../plotting/rate_influence_average}
\caption{Influence of request rate on response time for the \java{GetAverage} query}
\label{fig:rate_influence_average}
\end{figure}
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\columnwidth]{../plotting/rate_influence_select}
\caption{Influence of request rate on response time for the \java{Select} query}
\label{fig:rate_influence_select}
\end{figure}
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\columnwidth]{../plotting/rate_influence_write}
\caption{Influence of request rate on response time for the \java{Write} query}
\label{fig:rate_influence_write}
\end{figure}
As seen on the figures, in general, the larger the mean inter-arrival time, the shorter the response time. This can be explained by the fact that if queries are sent in rapid succession, the server is temporarily saturated, leading to longer waiting times for the queries that arrive later. On the other hand, if queries are sent with sufficiently large intervals of time between them, then the server is almost guaranteed to be able to accept a new query as it arrives, meaning that the waiting time is negligible. This observation is used in Section~\ref{sec:task2} to estimate the service time of the various requests.

\subsection{Factors Influencing the Query Response Time and Possible Bottlenecks}
We have already noted in the previous sections that the query type and the rate at which queries are sent have an influence on the response time. In this section, we give some other factors influencing the response time, and identify where bottlenecks lie.
Using the command line tools \mintinline{bash}{top}, \mintinline{bash}{pidstat}, and \mintinline{bash}{nethogs}, we were able to note that the \mintinline{bash}{mysql} process only took up about \SI{2}{\percent} of the total available RAM, whereas CPU usage was usually around \SI{10}{\percent}, though it was slightly higher for the \sql{SELECT}-based queries than for the \sql{INSERT}-based ones (up to \SI{40}{\percent}). There were on average around \(40\) minor page faults per second and barely any major page faults. This leads us to believe that the most important factor is the CPU time, since only \SI{10}{\percent} of it was used by the process most of the time, as well as the minor page faults in the RAM, which can each take several hundreds of microseconds according to \cite{pagefault} (which is critical in time-sensitive applications like this one, especially when these faults happen as regularly as they do).

Considering that the \java{Write} queries have a negligible computation time, one can consider them to be a decent indicator of network latency; taking this into account, the network is another factor that can heavily influence the response time of the (short) queries.

One would also expect the number of threads on the server to have a rather large influence on the response time of the server, but as shown on Figure~\ref{fig:thread_influence}, this is not the case for small values of \(n\). A reasonable expectation is that the optimal number of threads on a real computer is close to the number of available CPU cores. The theoretical prediction, however, explodes for a small number of threads; the reason for this is explained in Section~\ref{sec:task2}.
\begin{figure}[!hbtp]
\centering
\includegraphics[width=\columnwidth]{../plotting/thread_influence}
\caption{Influence of the number of server threads on the response time for \java{GetAverage} queries, with an arrival rate \(\lambda = \SI{5}{queries\per\second}\)}
\label{fig:thread_influence}
\end{figure}

\section{Modeling}
\label{sec:task2}
\subsection{Queueing Station Model}
Using the theoretical results described in the lecture notes, we now attempt to model the client-server application using one of the models from queueing theory covered in class. Models are described in Kendall notation \cite{kendall}. As mentioned in the project statement, a choice was made to use Markovian arrivals, i.e., modulated by a Poisson process.

In general, one would expect the \(M \mid G \mid m\) model to be the optimal choice from the point of view of its predictive power, given that a database cannot reliably be said to have exponentially distributed service times. However, the \(M \mid G \mid m\) model is difficult to use due to its complexity (\cite{mgm}), and was not covered as extensively during the lectures\footnote{If one were to really insist, this would require computing the second moment of the service time (a relatively easy task using Python), and then applying the Pollaczek--Khinchine formula: \(\mathbb{E}R = \mathbb{E}S + \frac{\lambda \mathbb{E}S^2}{2(1-\rho)}\), where \(\rho = \lambda \mathbb{E}S\) is the utilization of the server.}; therefore, we adapted the client code to send requests (for \java{GetAverage} and \java{Select} queries) which have a random parameter (the number of rows on which to operate) that we generate as an exponential random variable. We make the reasonable assumption that the service time is then a linear function of this number of rows, and hence conclude that it is also exponentially distributed.
This then allows us to satisfy the assumptions of the easier, less general \(M \mid M \mid m\) model \cite{kendall}, for which many theoretical results are readily available. The final parameter of the model is simply the number of threads; as mentioned in Section~\ref{sec:task1}, \(m=3\) unless specified otherwise.
\subsubsection{Determining the parameters}
In order to model a queueing station with the \(M \mid M \mid 3\) model, one must set an arrival rate \(\lambda\) and a service rate \(\mu\). As mentioned in the project statement, the service time is equivalent to the response time under the assumption that the queueing station is completely empty upon arrival of the query. One could think of using the very first query's response time as an indicator of this, but to avoid having to deal with increased response times due to cache warmup, we opted for another estimate: the average response time for a very small arrival rate (or equivalently, a very high mean inter-arrival time). The arrival rate itself is easy to obtain, since the client program decides its value. We then define the \emph{utilization} of the server as
\[
\chi = \frac{\lambda}{m\mu}.
\]
If \(\chi \geq 1\), the queue grows without bound, but if \(\chi < 1\), the queue is stable and the system has a stationary distribution; the stationary probability of an empty system is
\[
\pi_{0} = \left(\sum_{i=0}^{m-1} \frac{a^i}{i!} + \frac{a^m}{m! (1 - \chi)}\right)^{-1},
\]
where \(a = \lambda/\mu\). One can then compute the expectation of the response time \(R\) as
\begin{equation}
\mathbb{E}R = \frac{1}{\lambda} \left(a + \frac{\chi a^m}{(1 - \chi)^2 m!}\pi_{0}\right).
\end{equation}
However, when analyzing real-life systems, one should keep in mind that these stationary results are only reliable in practice when the utilization stays well below one.
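The stationary formulas above translate directly into code. The following sketch (our own, with hypothetical names) evaluates \(\pi_0\) and \(\mathbb{E}R\) for an \(M \mid M \mid m\) station:

```python
from math import factorial

def mmm_mean_response(lam, mu, m):
    """Mean response time E[R] of a stable FCFS M/M/m queue.

    lam: arrival rate, mu: per-server service rate, m: number of servers.
    """
    a = lam / mu    # offered load a = lambda / mu
    chi = a / m     # utilization chi = lambda / (m * mu)
    if chi >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    # Stationary probability of an empty system.
    pi0 = 1.0 / (sum(a ** i / factorial(i) for i in range(m))
                 + a ** m / (factorial(m) * (1 - chi)))
    # E[R] = (a + chi * a^m * pi0 / ((1 - chi)^2 * m!)) / lambda
    return (a + chi * a ** m * pi0 / ((1 - chi) ** 2 * factorial(m))) / lam
```

As a sanity check, for \(m = 1\) the expression collapses to the classical \(M \mid M \mid 1\) result \(\mathbb{E}R = 1/(\mu - \lambda)\).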
\subsection{Comparison to the Real Data}
It is interesting to compare the model of the previous section to the real-life data obtained in Section~\ref{sec:task1}. The green curves on the plots show what one would expect from a purely theoretical point of view. What follows is a discussion of these results.
\begin{itemize}
\item On Figures~\ref{fig:rate_influence_average} and \ref{fig:rate_influence_select}, the theoretical predictions line up almost perfectly with the data for larger inter-arrival times, which indicates that the model and its parameters are most likely correct. For smaller inter-arrival times, the model is off by quite a bit, and for small enough values it becomes completely unusable. This should not come as a surprise: it follows from the assumption that \(\chi \ll 1\). The smaller the utilization, the better the prediction (hence, for a very small arrival rate, the theoretical model lines up very well with the real data). On the other hand, once the utilization gets close to one, the results deteriorate, and if it becomes larger than one, the model loses all its value. Formulas for the busy period of the system exist as well (see for example \cite{mmm}), but were not covered in class, and are hence not reproduced in this paper.
\item On Figure~\ref{fig:rate_influence_write}, the results are slightly worse than on the previous two; this can be explained by the fact that the assumption of exponentially distributed service times might not always be valid for the \java{Write} query type.
\item Finally, Figure~\ref{fig:thread_influence} shows even more clearly that a high utilization is the bane of stationary analysis, whereas for larger values of \(n\), the prediction is almost eerily accurate. This is the only instance where a number of threads different from three was used.
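One way to see where the stationary analysis starts to fail is to compare it against a direct discrete-event simulation of the queueing station. The sketch below is our own illustration (function name, seed, and sample size are hypothetical choices, not part of the project deliverables); each arrival is handed to the earliest-available server, which reproduces a single FCFS queue in front of \(m\) identical servers:

```python
import heapq
import random

def simulate_mmm(lam, mu, m, n=100_000, seed=42):
    """Estimate the mean response time of an FCFS M/M/m queue by simulation.

    Serving each arrival on the earliest-available server is equivalent
    to a single first-come-first-served queue feeding m identical servers.
    """
    rng = random.Random(seed)
    free_at = [0.0] * m          # min-heap of times at which servers free up
    t = 0.0                      # current arrival time
    total_response = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)               # Poisson arrivals
        start = max(t, heapq.heappop(free_at))  # wait for the earliest server
        done = start + rng.expovariate(mu)      # exponential service time
        heapq.heappush(free_at, done)
        total_response += done - t              # response time of this query
    return total_response / n
```

With \(\lambda = 5\), \(\mu = 10\) and \(m = 1\), the estimate lands near the analytical value \(1/(\mu - \lambda) = 0.2\), while pushing the utilization \(\chi\) towards one makes the estimate (and the real system) blow up, just as observed on the plots.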
\end{itemize}
\section*{Conclusion}
Servers are a ubiquitous part of the modern world, and it is important for the computer scientists of tomorrow to understand how they work, both from a practical and a theoretical point of view. On the one hand, there are many useful tools that can help with the former; on the other, there is a whole field of mathematics devoted to modeling and predicting the behaviour of these servers.
\begin{thebibliography}{9}
\bibitem{pagefault} William Cohen. ``\href{https://developers.redhat.com/blog/2014/03/10/examining-huge-pages-or-transparent-huge-pages-performance/}{Examining Huge Pages or Transparent Huge Pages Performance}''. Red Hat Developer, March 10, 2014.
\bibitem{kendall} Kendall, D. G. (1953). ``\href{http://projecteuclid.org/euclid.aoms/1177728975}{Stochastic Processes Occurring in the Theory of Queues and their Analysis by the Method of the Imbedded Markov Chain}''. The Annals of Mathematical Statistics. 24 (3): 338. doi:\href{https://projecteuclid.org/euclid.aoms/1177728975}{10.1214/aoms/1177728975}. JSTOR \href{https://www.jstor.org/stable/2236285}{2236285}.
\bibitem{mgm} Kingman, J. F. C. (2009). ``The first Erlang century—and the next''. Queueing Systems. 63: 3--4. doi:\href{https://link.springer.com/article/10.1007\%2Fs11134-009-9147-4}{10.1007/s11134-009-9147-4}.
\bibitem{mmm} Omahen, K.; Marathe, V. (1978). ``\href{https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1110\&context=cstech}{Analysis and Applications of the Delay Cycle for the M/M/c Queueing System}''. Journal of the ACM. 25 (2): 283. doi:\href{https://dl.acm.org/doi/10.1145/322063.322072}{10.1145/322063.322072}.
\end{thebibliography}
\end{document}
\section*{Acknowledgment} I would like to thank my tutor Vladimir Despotovic for his constructive feedback and mentorship. His introduction to and explanation of neural networks were outstanding. Additionally, I thank him for supervising my paper. I would recommend that fellow BiCS students interested in this field work with Vladimir Despotovic.
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{book} \title{Data Analysis in Software Engineering using R} \author{Daniel Rodriguez and Javier Dolado} \date{2021-10-10} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{iftex} \ifPDFTeX \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Data Analysis in Software Engineering using R}, pdfauthor={Daniel Rodriguez and Javier Dolado}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} 
\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} 
\usepackage{longtable,booktabs,array}
\usepackage{calc} % for calculating minipage widths
% Correct order of tables after \paragraph or \subparagraph
\usepackage{etoolbox}
\makeatletter
\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{}
\makeatother
% Allow footnotes in longtable head/foot
\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}}
\makesavenoteenv{longtable}
\usepackage{graphicx}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\usepackage{booktabs}
\ifLuaTeX
  \usepackage{selnolig} % disable illegal ligatures
\fi
\usepackage[]{natbib}
\bibliographystyle{apalike}
\begin{document}
\maketitle
{
\setcounter{tocdepth}{1}
\tableofcontents
}
\hypertarget{welcome}{%
\chapter*{Welcome}\label{welcome}}
\addcontentsline{toc}{chapter}{Welcome}

This \textbf{Data Analysis in Software Engineering (DASE)} book/notes will try to teach you how to do data science with R in software engineering. It is a work in progress.
\textbf{Acknowledgments}

Projects:

\begin{itemize}
\item PRESI: TIN2013-46928-C3
  \begin{itemize}
  \tightlist
  \item amuSE TIN2013-46928-C3-2-R
  \item PERTEST TIN2013-46928-C3-1-R
  \end{itemize}
\item QARE: TIN2016-76956-C3
  \begin{itemize}
  \tightlist
  \item BadgePeople: TIN2016-76956-C3-3-R
  \item TESTEAMOS: TIN2016-76956-C3-1-R
  \end{itemize}
\item Network SBSE (\href{https://www.uco.es/investigacion/proyectos/SEBASENet/index.php?title=P\%C3\%A1gina_principal}{SEBASENet}): TIN2015-71841-REDT
\item TestBUS PID2019-105455GB-C32
\end{itemize}

This work is licensed under the \href{http://creativecommons.org/licenses/by-nc-nd/3.0/us/}{Creative Commons Attribution-NonCommercial-NoDerivs 3.0} United States License.

\hypertarget{part-introduction-to-the-r-language}{%
\part{Introduction to the R Language}\label{part-introduction-to-the-r-language}}

\hypertarget{r-intro}{%
\chapter{Introduction to R}\label{r-intro}}

The goal of the first part of this book is to get you up to speed with the basics of \textbf{R} as quickly as possible.

\hypertarget{installation}{%
\section{Installation}\label{installation}}

Install the latest preview version to get all the features. Follow the installation procedure for your operating system.

\begin{itemize}
\tightlist
\item Linux: you need to have \texttt{blas} and \texttt{gfortran} installed on your system in order to install the \texttt{coin} package.
\item \emph{Rgraphviz} requires installation from \texttt{source("http://bioconductor.org/biocLite.R")}, then \texttt{biocLite("Rgraphviz")}.
\item Uncomment the following lines for installing all missing packages (this will take some time): \end{itemize} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# listofpackages \textless{}{-} c("arules","arulesViz", "bookdown", "ggplot2", "vioplot", "UsingR", "fpc", "reshape", "party", "C50", "utils", "rpart", "rpart.plot", "class", "klaR", "e1071", "popbio", "boot", "dplyr", "doParallel", "gbm", "DMwR", "pROC", "neuralnet", "igraph", "RMySQL", "caret", "randomForest", "tm", "wordcloud", "xts", "lubridate", "forecast", "urca", "glmnet", "FSelector", "pls", "emoa", "foreign" )} \CommentTok{\# newpackages \textless{}{-} listofpackages[!(listofpackages \%in\% installed.packages()[,"Package"])]} \CommentTok{\# if(length(newpackages)\textgreater{}0) install.packages(newpackages,dependencies = TRUE)} \CommentTok{\# } \CommentTok{\# \# install from archive} \CommentTok{\# if (!is.element("rgp", installed.packages()[,1]))} \CommentTok{\# \{ install.packages("https://cran.r{-}project.org/src/contrib/Archive/rgp/rgp\_0.4{-}1.tar.gz", } \CommentTok{\# repos = NULL)} \CommentTok{\# \}} \DocumentationTok{\#\# end of installing packages} \CommentTok{\# in Linux you may need to run several commands (in the terminal) and install additional libraries, e.g. } \CommentTok{\# sudo R CMD javareconf} \CommentTok{\# sudo apt{-}get install build{-}essential} \CommentTok{\# sudo apt{-}get install libxml2{-}dev} \CommentTok{\# sudo apt{-}get install libpq} \CommentTok{\# sudo apt{-}get install libpq{-}dev} \CommentTok{\# sudo apt{-}get install {-}y libmariadb{-}client{-}lgpl{-}dev} \CommentTok{\# sudo apt{-}get install texlive{-}xetex} \CommentTok{\# sudo apt{-}get install r{-}cran{-}rmysql} \end{Highlighting} \end{Shaded} \hypertarget{r-and-rstudio}{% \section{R and RStudio}\label{r-and-rstudio}} \begin{itemize} \item R is a programming language for statistical computing and data analysis that supports a variety of programming styles. 
See \href{https://en.wikipedia.org/wiki/R_(programming_language)}{R in Wikipedia}
\item R has multiple online resources and books.
\item \href{https://google.github.io/styleguide/Rguide.xml}{R coding style}
\item \href{https://www.r-bloggers.com/}{R-Bloggers}
\item Getting help in R
  \begin{itemize}
  \tightlist
  \item \href{https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf}{RStudio cheat sheet}
  \item \href{http://github.com/rstudio/cheatsheets/raw/master/base-r.pdf}{Base R cheat sheet}
  \item \href{https://www.rstudio.com/wp-content/uploads/2016/02/advancedR.pdf}{Advanced R cheat sheet}
  \item \href{https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf}{Data Visualization cheat sheet}
  \item \href{https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf}{R Markdown cheat sheet}
  \item \href{http://rmarkdown.rstudio.com/authoring_basics.html}{R Markdown Basics}
  \item \href{https://github.com/rstudio/cheatsheets/raw/master/reticulate.pdf}{Python with R and Reticulate cheat sheet}
  \item \href{https://github.com/rstudio/cheatsheets/raw/master/caret.pdf}{caret}
  \item \href{https://rstudio.com/resources/cheatsheets/}{All cheat sheets and translations}
  \item \texttt{help("\ \ \ \ ")} command
  \end{itemize}
\item R can be used as a calculator from the console, which provides a command-line interface.
\item This document is an RMarkdown document.
See \url{bookdown.org} \end{itemize} Examples: \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{,}\DecValTok{5}\NormalTok{,}\DecValTok{6}\NormalTok{) }\CommentTok{\# Create ordered collection (vector)} \NormalTok{y }\OtherTok{\textless{}{-}}\NormalTok{ x}\SpecialCharTok{\^{}}\DecValTok{2} \CommentTok{\# Square the elements of x} \FunctionTok{print}\NormalTok{(y) }\CommentTok{\# print (vector) y} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1 4 9 16 25 36 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{mean}\NormalTok{(y) }\CommentTok{\# Calculate average (arithmetic mean) of (vector) y; result is scalar} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15.16667 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{var}\NormalTok{(y) }\CommentTok{\# Calculate sample variance} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 178.9667 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{lm\_1 }\OtherTok{\textless{}{-}} \FunctionTok{lm}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ x) }\CommentTok{\# Fit a linear regression model "y = f(x)" or "y = B0 + (B1 * x)"} \CommentTok{\# store the results as lm\_1} \FunctionTok{print}\NormalTok{(lm\_1) }\CommentTok{\# Print the model from the (linear model object) lm\_1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## lm(formula = y ~ x) ## ## Coefficients: ## (Intercept) x ## -9.333 7.000 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{summary}\NormalTok{(lm\_1) }\CommentTok{\# Compute and print statistics for the fit} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## lm(formula = y ~ x) ## ## Residuals: ## 1 2 3 4 5 6 ## 3.3333 -0.6667 -2.6667 -2.6667 -0.6667 3.3333 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -9.3333 2.8441 -3.282 0.030453 * ## x 7.0000 0.7303 9.585 0.000662 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 3.055 on 4 degrees of freedom ## Multiple R-squared: 0.9583, Adjusted R-squared: 0.9478 ## F-statistic: 91.88 on 1 and 4 DF, p-value: 0.000662 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# of the (linear model object) lm\_1} \FunctionTok{par}\NormalTok{(}\AttributeTok{mfrow=}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{)) }\CommentTok{\# Request 2x2 plot layout} \FunctionTok{plot}\NormalTok{(lm\_1) }\CommentTok{\# Diagnostic plot of regression model} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-2-1.pdf} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{help}\NormalTok{(lm)} \NormalTok{?lm} \FunctionTok{apropos}\NormalTok{(}\StringTok{"lm"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] ".colMeans" ".lm.fit" "colMeans" "confint.lm" ## [5] "contr.helmert" "dummy.coef.lm" "glm" "glm.control" ## [9] "glm.fit" "KalmanForecast" "KalmanLike" "KalmanRun" ## [13] "KalmanSmooth" "kappa.lm" "lm" "lm_1" ## [17] "lm.fit" "lm.influence" "lm.wfit" "model.matrix.lm" ## [21] "nlm" "nlminb" "predict.glm" "predict.lm" ## [25] "residuals.glm" "residuals.lm" "summary.glm" "summary.lm" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{example}\NormalTok{(lm)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## lm> require(graphics) ## ## lm> ## Annette Dobson (1990) "An Introduction to Generalized Linear Models". ## lm> ## Page 9: Plant Weight Data. 
## lm> ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) ## ## lm> trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69) ## ## lm> group <- gl(2, 10, 20, labels = c("Ctl","Trt")) ## ## lm> weight <- c(ctl, trt) ## ## lm> lm.D9 <- lm(weight ~ group) ## ## lm> lm.D90 <- lm(weight ~ group - 1) # omitting intercept ## ## lm> ## No test: ## lm> ##D anova(lm.D9) ## lm> ##D summary(lm.D90) ## lm> ## End(No test) ## lm> opar <- par(mfrow = c(2,2), oma = c(0, 0, 1.1, 0)) ## ## lm> plot(lm.D9, las = 1) # Residuals, Fitted, ... \end{verbatim} \includegraphics{DASE_files/figure-latex/unnamed-chunk-2-2.pdf} \begin{verbatim} ## ## lm> par(opar) ## ## lm> ## Don't show: ## lm> ## model frame : ## lm> stopifnot(identical(lm(weight ~ group, method = "model.frame"), ## lm+ model.frame(lm.D9))) ## ## lm> ## End(Don't show) ## lm> ### less simple examples in "See Also" above ## lm> ## lm> ## lm> \end{verbatim} \begin{itemize} \item R script. \# A file with R commands \texttt{\#\ comments} \texttt{source("filewithcommands.R")} \texttt{sink("recordmycommands.lis")} \texttt{savehistory()} \item From command line: \begin{itemize} \tightlist \item Rscript\\ \item Rscript file with \texttt{-e} (e.g.~\texttt{Rscript\ -e\ 2+2})\\ \item To exit R: \texttt{quit()} \end{itemize} \item Variables. 
R is case sensitive.
\end{itemize}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{var1 }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{10}
\NormalTok{vAr1 }\OtherTok{\textless{}{-}} \DecValTok{11}\SpecialCharTok{:}\DecValTok{20}
\NormalTok{var1}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##  [1]  1  2  3  4  5  6  7  8  9 10
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vAr1 }
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##  [1] 11 12 13 14 15 16 17 18 19 20
\end{verbatim}

\begin{itemize}
\item Operators
  \begin{itemize}
  \tightlist
  \item assign operator \texttt{\textless{}-}\\
  \item sequence operator, for example: \texttt{mynums\ \textless{}-\ 0:20}
  \item arithmetic operators: + - * / \^{} \%/\% (integer division) \%\% (modulus operator)
  \end{itemize}
\item The workspace. Objects.
  \begin{itemize}
  \tightlist
  \item \texttt{ls()} \texttt{objects()} \texttt{ls.str()} list and describe the objects\\
  \item \texttt{rm(x)} deletes a variable, e.g., \texttt{rm(totalCost)}
  \item \texttt{str()} the structure function, provides information about the variable
  \end{itemize}
\item RStudio, RCommander and RKWard are well-known IDEs for R (more later).
\end{itemize}

\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}

\begin{itemize}
\item Four \# (`\#\#\#\#') at the end of a comment line create a foldable code \emph{section} in RStudio. Separately, an \emph{environment} binds a set of names to a set of values. You can think of an environment as a bag of names.
\begin{itemize} \tightlist \item \href{http://adv-r.had.co.nz/Environments.html\#env-basics}{Environment basics} \end{itemize} \end{itemize} \includegraphics[width=0.75\linewidth]{figures/environments} Working directories: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# set your working directory} \CommentTok{\# setwd("\textasciitilde{}/workingDir/")} \FunctionTok{getwd}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "/home/drg/Projects/DASE" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# record R commands:} \CommentTok{\# sink("recordmycommands.txt", append = TRUE)} \end{Highlighting} \end{Shaded} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{basic-data-types}{% \section{Basic Data Types}\label{basic-data-types}} \begin{itemize} \item \texttt{class(\ )} \item logical: \texttt{TRUE}, \texttt{FALSE} \item numeric, integer: \begin{itemize} \tightlist \item \texttt{is.numeric(\ \ )} \item \texttt{is.integer(\ \ )}\strut \\ \end{itemize} \item \texttt{character} \end{itemize} Examples: \begin{Shaded} \begin{Highlighting}[] \ConstantTok{TRUE} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{class}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "logical" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \ConstantTok{FALSE} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \ConstantTok{NA} \CommentTok{\# missing} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] NA \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{class}\NormalTok{(}\ConstantTok{NA}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "logical" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{T} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} 
\begin{Highlighting}[] \NormalTok{F} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \ConstantTok{NaN} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] NaN \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{class}\NormalTok{(}\ConstantTok{NaN}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "numeric" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# numeric data type} \DecValTok{2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 2 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{class}\NormalTok{(}\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "numeric" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FloatTok{2.5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 2.5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{2L }\CommentTok{\# integer} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 2 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{class}\NormalTok{(2L)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "integer" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.numeric}\NormalTok{(}\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.numeric}\NormalTok{(2L)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.integer}\NormalTok{(}\DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.integer}\NormalTok{(2L)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.numeric}\NormalTok{(}\ConstantTok{NaN}\NormalTok{)} \end{Highlighting} 
\end{Shaded}

\begin{verbatim}
## [1] TRUE
\end{verbatim}

\begin{itemize}
\item data type coercion:
  \begin{itemize}
  \tightlist
  \item \texttt{as.numeric(\ \ )}
  \item \texttt{as.character(\ \ )}\strut \\
  \item \texttt{as.integer(\ \ )}
  \end{itemize}
\end{itemize}

Examples:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{truenum }\OtherTok{\textless{}{-}} \FunctionTok{as.numeric}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{)}
\NormalTok{truenum}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 1
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{class}\NormalTok{(truenum)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "numeric"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{falsenum }\OtherTok{\textless{}{-}} \FunctionTok{as.numeric}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{)}
\NormalTok{falsenum}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 0
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{num2char }\OtherTok{\textless{}{-}} \FunctionTok{as.character}\NormalTok{(}\DecValTok{55}\NormalTok{)}
\NormalTok{num2char}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "55"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{char2num }\OtherTok{\textless{}{-}} \FunctionTok{as.numeric}\NormalTok{(}\StringTok{"55.3"}\NormalTok{)}
\NormalTok{char2int }\OtherTok{\textless{}{-}} \FunctionTok{as.integer}\NormalTok{(}\StringTok{"55.3"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{mising-values}{%
\subsection{Missing values}\label{mising-values}}

\begin{itemize}
\tightlist
\item \texttt{NA} stands for Not Available and is used for missing values.
\item \texttt{NaN} means `Not a Number' \end{itemize} Examples: \begin{Shaded} \begin{Highlighting}[] \ConstantTok{NA} \SpecialCharTok{+} \DecValTok{1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] NA \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{mean}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{,}\ConstantTok{NA}\NormalTok{,}\DecValTok{7}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] NA \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{mean}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{,}\ConstantTok{NA}\NormalTok{,}\DecValTok{7}\NormalTok{), }\AttributeTok{na.rm=}\ConstantTok{TRUE}\NormalTok{) }\CommentTok{\# some functions allow to remove NAs} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 6 \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{vectors}{% \section{Vectors}\label{vectors}} Examples: \begin{Shaded} \begin{Highlighting}[] \NormalTok{phases }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"reqs"}\NormalTok{, }\StringTok{"dev"}\NormalTok{, }\StringTok{"test1"}\NormalTok{, }\StringTok{"test2"}\NormalTok{, }\StringTok{"maint"}\NormalTok{)} \FunctionTok{str}\NormalTok{(phases)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## chr [1:5] "reqs" "dev" "test1" "test2" "maint" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{is.vector}\NormalTok{(phases)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{15}\NormalTok{, }\DecValTok{60}\NormalTok{, }\DecValTok{30}\NormalTok{, }\DecValTok{35}\NormalTok{, }\DecValTok{22}\NormalTok{)} \FunctionTok{names}\NormalTok{(thevalues) }\OtherTok{\textless{}{-}}\NormalTok{ phases} \FunctionTok{str}\NormalTok{(thevalues)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Named num [1:5] 15 60 
30 35 22 ## - attr(*, "names")= chr [1:5] "reqs" "dev" "test1" "test2" ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues} \end{Highlighting} \end{Shaded} \begin{verbatim} ## reqs dev test1 test2 maint ## 15 60 30 35 22 \end{verbatim} A single value is a vector! Example: \begin{Shaded} \begin{Highlighting}[] \NormalTok{aphase }\OtherTok{\textless{}{-}} \DecValTok{44} \FunctionTok{is.vector}\NormalTok{(aphase)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{length}\NormalTok{(aphase)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{length}\NormalTok{(thevalues)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 5 \end{verbatim} \hypertarget{coercion-for-vectors}{% \subsection{Coercion for vectors}\label{coercion-for-vectors}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues1 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{15}\NormalTok{, }\DecValTok{60}\NormalTok{, }\StringTok{"30"}\NormalTok{, }\DecValTok{35}\NormalTok{, }\DecValTok{22}\NormalTok{)} \FunctionTok{class}\NormalTok{(thevalues1)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "character" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "15" "60" "30" "35" "22" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# \textless{}{-} is equivalent to assign()} \FunctionTok{assign}\NormalTok{(}\StringTok{"costs"}\NormalTok{, }\FunctionTok{c}\NormalTok{(}\DecValTok{50}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DecValTok{30}\NormalTok{))} \end{Highlighting} \end{Shaded} \hypertarget{vector-arithmetic}{% \subsection{Vector arithmetic}\label{vector-arithmetic}} The operation is carried out on each element of the vector.
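In other words, an arithmetic operator applied to a vector works element-wise, with no explicit loop. A minimal sketch (the variable name is illustrative only):

\begin{verbatim}
# element-wise arithmetic: the operator is applied to every element
costs <- c(50, 100, 30)
costs * 2    # 100 200 60
\end{verbatim}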
For example: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{assign}\NormalTok{(}\StringTok{"costs"}\NormalTok{, }\FunctionTok{c}\NormalTok{(}\DecValTok{50}\NormalTok{, }\DecValTok{100}\NormalTok{, }\DecValTok{30}\NormalTok{))} \NormalTok{costs}\SpecialCharTok{/}\DecValTok{3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 16.66667 33.33333 10.00000 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{costs }\SpecialCharTok{{-}} \DecValTok{5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 45 95 25 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{costs }\OtherTok{\textless{}{-}}\NormalTok{ costs }\SpecialCharTok{{-}} \DecValTok{5} \NormalTok{incomes }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{200}\NormalTok{, }\DecValTok{800}\NormalTok{, }\DecValTok{10}\NormalTok{)} \NormalTok{earnings }\OtherTok{\textless{}{-}}\NormalTok{ incomes }\SpecialCharTok{{-}}\NormalTok{ costs} \FunctionTok{sum}\NormalTok{(earnings)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 845 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# R recycles values in vectors!} \NormalTok{vector1 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{)} \NormalTok{vector2 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{,}\DecValTok{11}\NormalTok{,}\DecValTok{12}\NormalTok{,}\DecValTok{13}\NormalTok{,}\DecValTok{14}\NormalTok{,}\DecValTok{15}\NormalTok{,}\DecValTok{16}\NormalTok{)} \NormalTok{vector1 }\SpecialCharTok{+}\NormalTok{ vector2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning in vector1 + vector2: longer object length is not a multiple of shorter ## object length \end{verbatim} \begin{verbatim} ## [1] 11 13 15 14 16 18 17 \end{verbatim} Subsetting vectors \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Subsetting vectors []} \NormalTok{phase1 
}\OtherTok{\textless{}{-}}\NormalTok{ phases[}\DecValTok{1}\NormalTok{]} \NormalTok{phase1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "reqs" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{phase3 }\OtherTok{\textless{}{-}}\NormalTok{ phases[}\DecValTok{3}\NormalTok{]} \NormalTok{phase3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "test1" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues[phase1]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## reqs ## 15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{thevalues[}\StringTok{"reqs"}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## reqs ## 15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{testphases }\OtherTok{\textless{}{-}}\NormalTok{ phases[}\FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{)]} \NormalTok{thevalues[testphases]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## test1 test2 ## 30 35 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Negative indexes} \NormalTok{phases1 }\OtherTok{\textless{}{-}}\NormalTok{ phases[}\SpecialCharTok{{-}}\DecValTok{5}\NormalTok{]} \NormalTok{phases} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "reqs" "dev" "test1" "test2" "maint" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{phases1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "reqs" "dev" "test1" "test2" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#phases2 \textless{}{-} phases[{-}testphases] \#\# error in argument} \NormalTok{phases2 }\OtherTok{\textless{}{-}}\NormalTok{ phases[}\SpecialCharTok{{-}}\FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{)]} \NormalTok{phases2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "reqs" "dev" "maint" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# subset using logical vector} 
\NormalTok{phases3 }\OtherTok{\textless{}{-}}\NormalTok{ phases[}\FunctionTok{c}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)] }\CommentTok{\#recycled first value} \NormalTok{phases3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "dev" "test1" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{selectionv }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)} \NormalTok{phases3 }\OtherTok{\textless{}{-}}\NormalTok{ phases[selectionv]} \NormalTok{phases3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "dev" "test1" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{selectionvec2 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)} \NormalTok{thevalues2 }\OtherTok{\textless{}{-}}\NormalTok{ thevalues[selectionvec2]} \NormalTok{thevalues2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## reqs test1 maint ## 15 30 22 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Generating regular sequences with \textasciigrave{}:\textasciigrave{} and \textasciigrave{}seq\textasciigrave{}} \NormalTok{aseqofvalues }\OtherTok{\textless{}{-}} \DecValTok{1}\SpecialCharTok{:}\DecValTok{20} \NormalTok{aseqofvalues2 }\OtherTok{\textless{}{-}} \FunctionTok{seq}\NormalTok{(}\AttributeTok{from=}\SpecialCharTok{{-}}\DecValTok{3}\NormalTok{, }\AttributeTok{to=}\DecValTok{3}\NormalTok{, }\AttributeTok{by=}\FloatTok{0.5}\NormalTok{ )} \NormalTok{aseqofvalues2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues3 }\OtherTok{\textless{}{-}} \FunctionTok{seq}\NormalTok{(}\DecValTok{0}\NormalTok{, 
}\DecValTok{100}\NormalTok{, }\AttributeTok{by=}\DecValTok{10}\NormalTok{)} \NormalTok{aseqofvalues4 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{6}\NormalTok{, }\DecValTok{8}\NormalTok{)]} \NormalTok{aseqofvalues4} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10 30 50 70 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues4 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[}\SpecialCharTok{{-}}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{6}\NormalTok{, }\DecValTok{8}\NormalTok{)]} \NormalTok{aseqofvalues4} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0 20 40 60 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues3[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{)] }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{666}\NormalTok{,}\DecValTok{888}\NormalTok{)} \NormalTok{aseqofvalues3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 666 888 20 30 40 50 60 70 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Logical values in vectors TRUE/FALSE} \NormalTok{aseqofvalues3 }\SpecialCharTok{\textgreater{}} \DecValTok{50} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE TRUE TRUE TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues5 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[aseqofvalues3 }\SpecialCharTok{\textgreater{}} \DecValTok{50}\NormalTok{]} \NormalTok{aseqofvalues5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 666 888 60 70 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues6 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[}\SpecialCharTok{!}\NormalTok{(aseqofvalues3 }\SpecialCharTok{\textgreater{}} 
\DecValTok{50}\NormalTok{)]} \NormalTok{aseqofvalues6} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 20 30 40 50 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Comparison functions} \NormalTok{aseqofvalues7 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[aseqofvalues3 }\SpecialCharTok{==} \DecValTok{50}\NormalTok{]} \NormalTok{aseqofvalues7} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 50 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues8 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[aseqofvalues3 }\SpecialCharTok{==} \DecValTok{22}\NormalTok{]} \NormalTok{aseqofvalues8} \end{Highlighting} \end{Shaded} \begin{verbatim} ## numeric(0) \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues9 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[aseqofvalues3 }\SpecialCharTok{!=} \DecValTok{50}\NormalTok{]} \NormalTok{aseqofvalues9} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 666 888 20 30 40 60 70 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{logicalcond }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3 }\SpecialCharTok{\textgreater{}=} \DecValTok{50} \NormalTok{aseqofvalues10 }\OtherTok{\textless{}{-}}\NormalTok{ aseqofvalues3[logicalcond]} \NormalTok{aseqofvalues10} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 666 888 50 60 70 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Remove Missing Values (NAs)} \NormalTok{aseqofvalues3[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{)] }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\ConstantTok{NA}\NormalTok{,}\ConstantTok{NA}\NormalTok{)} \NormalTok{aseqofvalues3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] NA NA 20 30 40 50 60 70 80 90 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{aseqofvalues3 }\OtherTok{\textless{}{-}}\NormalTok{ 
aseqofvalues3[}\SpecialCharTok{!}\FunctionTok{is.na}\NormalTok{(aseqofvalues3)]} \NormalTok{aseqofvalues3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 20 30 40 50 60 70 80 90 100 \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{arrays-and-matrices}{% \section{Arrays and Matrices}\label{arrays-and-matrices}} Internally, a matrix is simply a long vector with a dimension attribute, filled column by column. \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{nrow =}\DecValTok{2}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] [,6] ## [1,] 1 3 5 7 9 11 ## [2,] 2 4 6 8 10 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{ncol =}\DecValTok{3}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] ## [1,] 1 5 9 ## [2,] 2 6 10 ## [3,] 3 7 11 ## [4,] 4 8 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{byrow =} \ConstantTok{TRUE}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] [,6] ## [1,] 1 2 3 4 5 6 ## [2,] 7 8 9 10 11 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 4 7 10 ## [2,] 2 5 8 11 ## [3,] 3 6 9 12 
\end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{, }\AttributeTok{byrow=}\ConstantTok{TRUE}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 2 3 4 ## [2,] 5 6 7 8 ## [3,] 9 10 11 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# recycling} \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{5}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{, }\AttributeTok{byrow=}\ConstantTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning in matrix(1:5, nrow = 3, ncol = 4, byrow = TRUE): data length [5] is not ## a sub-multiple or multiple of the number of rows [3] \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 2 3 4 ## [2,] 5 1 2 3 ## [3,] 4 5 1 2 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# rbind cbind} \FunctionTok{cbind}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] ## [1,] 1 1 ## [2,] 2 2 ## [3,] 3 3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rbind}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] ## [1,] 1 2 3 ## [2,] 1 2 3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} 
\FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{8}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{, }\AttributeTok{ncol=}\DecValTok{4}\NormalTok{, }\AttributeTok{byrow=}\ConstantTok{TRUE}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 2 3 4 ## [2,] 5 6 7 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rbind}\NormalTok{(mymat, }\DecValTok{9}\SpecialCharTok{:}\DecValTok{12}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 2 3 4 ## [2,] 5 6 7 8 ## [3,] 9 10 11 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(mymat, }\FunctionTok{c}\NormalTok{(}\DecValTok{5}\NormalTok{,}\DecValTok{9}\NormalTok{))} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 2 3 4 5 ## [2,] 5 6 7 8 9 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{8}\NormalTok{, }\AttributeTok{byrow =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{2}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 1 2 3 4 ## [2,] 5 6 7 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rownames}\NormalTok{(mymat) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"row1"}\NormalTok{, }\StringTok{"row2"}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## row1 1 2 3 4 ## row2 5 6 7 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{colnames}\NormalTok{(mymat) }\OtherTok{\textless{}{-}} 
\FunctionTok{c}\NormalTok{(}\StringTok{"col1"}\NormalTok{, }\StringTok{"col2"}\NormalTok{, }\StringTok{"col3"}\NormalTok{, }\StringTok{"col4"}\NormalTok{)} \NormalTok{mymat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col1 col2 col3 col4 ## row1 1 2 3 4 ## row2 5 6 7 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat2 }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{12}\NormalTok{, }\AttributeTok{byrow=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{, }\AttributeTok{dimnames=}\FunctionTok{list}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\StringTok{"row1"}\NormalTok{, }\StringTok{"row2"}\NormalTok{, }\StringTok{"row3"}\NormalTok{),} \FunctionTok{c}\NormalTok{(}\StringTok{"col1"}\NormalTok{, }\StringTok{"col2"}\NormalTok{, }\StringTok{"col3"}\NormalTok{, }\StringTok{"col4"}\NormalTok{)))} \NormalTok{mymat2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col1 col2 col3 col4 ## row1 1 2 3 4 ## row2 5 6 7 8 ## row3 9 10 11 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Coercion in Arrays} \NormalTok{matnum }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{8}\NormalTok{, }\AttributeTok{ncol =} \DecValTok{2}\NormalTok{)} \NormalTok{matnum} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] ## [1,] 1 5 ## [2,] 2 6 ## [3,] 3 7 ## [4,] 4 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{matchar }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(LETTERS[}\DecValTok{1}\SpecialCharTok{:}\DecValTok{6}\NormalTok{], }\AttributeTok{nrow =} \DecValTok{4}\NormalTok{, }\AttributeTok{ncol =} \DecValTok{3}\NormalTok{)} \NormalTok{matchar} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] ## [1,] "A" "E" "C" ## [2,] "B" "F" "D" ## [3,] "C" "A" "E" ## [4,] "D" "B" "F" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{matchars 
}\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(matnum, matchar)} \NormalTok{matchars} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] "1" "5" "A" "E" "C" ## [2,] "2" "6" "B" "F" "D" ## [3,] "3" "7" "C" "A" "E" ## [4,] "4" "8" "D" "B" "F" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Subsetting } \NormalTok{mymat3 }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\FunctionTok{sample}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{8}\SpecialCharTok{:}\DecValTok{15}\NormalTok{, }\DecValTok{12}\NormalTok{), }\AttributeTok{nrow=}\DecValTok{3}\NormalTok{) }\CommentTok{\#sample 12 numbers between {-}8 and 15} \NormalTok{mymat3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] ## [1,] 5 13 -5 15 ## [2,] 3 -8 12 0 ## [3,] -3 8 4 -1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\DecValTok{1}\NormalTok{,}\DecValTok{4}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\DecValTok{3}\NormalTok{,]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -3 8 4 -1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[,}\DecValTok{4}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15 0 -1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\DecValTok{5}\NormalTok{] }\CommentTok{\# counts elements by column} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] -8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\DecValTok{9}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 4 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\# Subsetting multiple 
elements} \NormalTok{mymat3[}\DecValTok{2}\NormalTok{, }\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{3}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 3 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] ## [1,] 3 12 0 ## [2,] -3 4 -1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{rownames}\NormalTok{(mymat3) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"r1"}\NormalTok{, }\StringTok{"r2"}\NormalTok{, }\StringTok{"r3"}\NormalTok{)} \FunctionTok{colnames}\NormalTok{(mymat3) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"c1"}\NormalTok{, }\StringTok{"c2"}\NormalTok{, }\StringTok{"c3"}\NormalTok{, }\StringTok{"c4"}\NormalTok{)} \NormalTok{mymat3[}\StringTok{"r2"}\NormalTok{, }\FunctionTok{c}\NormalTok{(}\StringTok{"c1"}\NormalTok{, }\StringTok{"c3"}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c1 c3 ## 3 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Subset by logical vector} \NormalTok{mymat3[}\FunctionTok{c}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{),} \FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c1 c3 ## 3 12 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat3[}\FunctionTok{c}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{),} \FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{, 
}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## c1 c3 c4 ## r2 3 12 0 ## r3 -3 4 -1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# matrix arithmetic} \NormalTok{row1 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{220}\NormalTok{, }\DecValTok{137}\NormalTok{)} \NormalTok{row2 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{345}\NormalTok{, }\DecValTok{987}\NormalTok{)} \NormalTok{row3 }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{111}\NormalTok{, }\DecValTok{777}\NormalTok{)} \NormalTok{mymat4 }\OtherTok{\textless{}{-}} \FunctionTok{rbind}\NormalTok{(row1, row2, row3)} \FunctionTok{rownames}\NormalTok{(mymat4) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"row\_1"}\NormalTok{, }\StringTok{"row\_2"}\NormalTok{, }\StringTok{"row\_3"}\NormalTok{)} \FunctionTok{colnames}\NormalTok{(mymat4) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"col\_1"}\NormalTok{, }\StringTok{"col\_2"}\NormalTok{)} \NormalTok{mymat4} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col_1 col_2 ## row_1 220 137 ## row_2 345 987 ## row_3 111 777 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat4}\SpecialCharTok{/}\DecValTok{10} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col_1 col_2 ## row_1 22.0 13.7 ## row_2 34.5 98.7 ## row_3 11.1 77.7 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat4 }\SpecialCharTok{{-}}\DecValTok{100} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col_1 col_2 ## row_1 120 37 ## row_2 245 887 ## row_3 11 677 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat5 }\OtherTok{\textless{}{-}} \FunctionTok{rbind}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{50}\NormalTok{,}\DecValTok{50}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{,}\DecValTok{10}\NormalTok{), 
}\FunctionTok{c}\NormalTok{(}\DecValTok{100}\NormalTok{,}\DecValTok{100}\NormalTok{))} \NormalTok{mymat5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] ## [1,] 50 50 ## [2,] 10 10 ## [3,] 100 100 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat4 }\SpecialCharTok{{-}}\NormalTok{ mymat5} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col_1 col_2 ## row_1 170 87 ## row_2 335 977 ## row_3 11 677 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mymat4 }\SpecialCharTok{*}\NormalTok{ (mymat5}\SpecialCharTok{/}\DecValTok{100}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## col_1 col_2 ## row_1 110.0 68.5 ## row_2 34.5 98.7 ## row_3 111.0 777.0 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# index matrices} \NormalTok{m1 }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{20}\NormalTok{, }\AttributeTok{dim=}\FunctionTok{c}\NormalTok{(}\DecValTok{4}\NormalTok{,}\DecValTok{5}\NormalTok{))} \NormalTok{m1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 5 9 13 17 ## [2,] 2 6 10 14 18 ## [3,] 3 7 11 15 19 ## [4,] 4 8 12 16 20 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{index }\OtherTok{\textless{}{-}} \FunctionTok{array}\NormalTok{(}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\SpecialCharTok{:}\DecValTok{3}\NormalTok{, }\DecValTok{3}\SpecialCharTok{:}\DecValTok{1}\NormalTok{), }\AttributeTok{dim=}\FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{,}\DecValTok{2}\NormalTok{))} \NormalTok{index} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] ## [1,] 1 3 ## [2,] 2 2 ## [3,] 3 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#use the "index matrix" as the index for the other matrix} \NormalTok{m1[index] }\OtherTok{\textless{}{-}}\DecValTok{0} \NormalTok{m1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [,1] [,2] [,3] [,4] [,5] ## [1,] 1 
5 0 13 17 ## [2,] 2 0 10 14 18 ## [3,] 0 7 11 15 19 ## [4,] 4 8 12 16 20 \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{factors}{% \section{Factors}\label{factors}} \begin{itemize} \tightlist \item Factors are variables in R that take on a limited number of different values; such variables are often referred to as `categorical variables' or `enumerated types'. \item Factors in R are stored as a vector of integer values with a corresponding set of character values to use when the factor is displayed. \item The function \texttt{factor} is used to encode a vector as a factor. \end{itemize} \begin{Shaded} \begin{Highlighting}[] \NormalTok{personnel }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Analyst1"}\NormalTok{, }\StringTok{"ManagerL2"}\NormalTok{, }\StringTok{"Analyst1"}\NormalTok{, }\StringTok{"Analyst2"}\NormalTok{,} \StringTok{"Boss"}\NormalTok{, }\StringTok{"ManagerL1"}\NormalTok{, }\StringTok{"ManagerL2"}\NormalTok{, }\StringTok{"Programmer1"}\NormalTok{,} \StringTok{"Programmer2"}\NormalTok{, }\StringTok{"Programmer3"}\NormalTok{, }\StringTok{"Designer1"}\NormalTok{,}\StringTok{"Designer2"}\NormalTok{,} \StringTok{"OtherStaff"}\NormalTok{) }\CommentTok{\# staff in a company} \NormalTok{personnel\_factors }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(personnel)} \NormalTok{personnel\_factors }\CommentTok{\#sorted alphabetically} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] Analyst1 ManagerL2 Analyst1 Analyst2 Boss ManagerL1 ## [7] ManagerL2 Programmer1 Programmer2 Programmer3 Designer1 Designer2 ## [13] OtherStaff ## 11 Levels: Analyst1 Analyst2 Boss Designer1 Designer2 ManagerL1 ... Programmer3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(personnel\_factors)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Factor w/ 11 levels "Analyst1","Analyst2",..: 1 7 1 2 3 6 7 9 10 11 ... 
\end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{personnel2 }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(personnel, } \AttributeTok{levels =} \FunctionTok{c}\NormalTok{(}\StringTok{"Boss"}\NormalTok{, }\StringTok{"ManagerL1"}\NormalTok{, }\StringTok{"ManagerL2"}\NormalTok{, } \StringTok{"Analyst1"}\NormalTok{, }\StringTok{"Analyst2"}\NormalTok{, }\StringTok{"Designer1"}\NormalTok{,} \StringTok{"Designer2"}\NormalTok{, }\StringTok{"Programmer1"}\NormalTok{, }\StringTok{"Programmer2"}\NormalTok{, } \StringTok{"Programmer3"}\NormalTok{, }\StringTok{"OtherStaff"}\NormalTok{)) } \CommentTok{\#do not duplicate the same factors} \NormalTok{personnel2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] Analyst1 ManagerL2 Analyst1 Analyst2 Boss ManagerL1 ## [7] ManagerL2 Programmer1 Programmer2 Programmer3 Designer1 Designer2 ## [13] OtherStaff ## 11 Levels: Boss ManagerL1 ManagerL2 Analyst1 Analyst2 Designer1 ... OtherStaff \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(personnel2)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Factor w/ 11 levels "Boss","ManagerL1",..: 4 3 4 5 1 2 3 8 9 10 ... 
\end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# a factor\textquotesingle{}s levels will always be character values.} \FunctionTok{levels}\NormalTok{(personnel2) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"B"}\NormalTok{, }\StringTok{"M1"}\NormalTok{, }\StringTok{"M2"}\NormalTok{, }\StringTok{"A1"}\NormalTok{, }\StringTok{"A2"}\NormalTok{,} \StringTok{"D1"}\NormalTok{, }\StringTok{"D2"}\NormalTok{, }\StringTok{"P1"}\NormalTok{, }\StringTok{"P2"}\NormalTok{, }\StringTok{"P3"}\NormalTok{, }\StringTok{"OS"}\NormalTok{)} \NormalTok{personnel2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] A1 M2 A1 A2 B M1 M2 P1 P2 P3 D1 D2 OS ## Levels: B M1 M2 A1 A2 D1 D2 P1 P2 P3 OS \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{personnel3 }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(personnel,} \AttributeTok{levels =} \FunctionTok{c}\NormalTok{(}\StringTok{"Boss"}\NormalTok{, }\StringTok{"ManagerL1"}\NormalTok{, }\StringTok{"ManagerL2"}\NormalTok{,} \StringTok{"Analyst1"}\NormalTok{, }\StringTok{"Analyst2"}\NormalTok{, }\StringTok{"Designer1"}\NormalTok{,} \StringTok{"Designer2"}\NormalTok{, }\StringTok{"Programmer1"}\NormalTok{, }\StringTok{"Programmer2"}\NormalTok{,} \StringTok{"Programmer3"}\NormalTok{, }\StringTok{"OtherStaff"}\NormalTok{),} \FunctionTok{c}\NormalTok{(}\StringTok{"B"}\NormalTok{, }\StringTok{"M1"}\NormalTok{, }\StringTok{"M2"}\NormalTok{, }\StringTok{"A1"}\NormalTok{, }\StringTok{"A2"}\NormalTok{, }\StringTok{"D1"}\NormalTok{, }\StringTok{"D2"}\NormalTok{,} \StringTok{"P1"}\NormalTok{, }\StringTok{"P2"}\NormalTok{, }\StringTok{"P3"}\NormalTok{, }\StringTok{"OS"}\NormalTok{))} \NormalTok{personnel3} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] A1 M2 A1 A2 B M1 M2 P1 P2 P3 D1 D2 OS ## Levels: B M1 M2 A1 A2 D1 D2 P1 P2 P3 OS \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\# Nominal versus ordinal, ordered factors} 
\NormalTok{personnel3[}\DecValTok{1}\NormalTok{] }\SpecialCharTok{\textless{}}\NormalTok{ personnel3[}\DecValTok{2}\NormalTok{] }\CommentTok{\# warning, not an error: unordered factors cannot be compared} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning in Ops.factor(personnel3[1], personnel3[2]): '<' not meaningful for ## factors \end{verbatim} \begin{verbatim} ## [1] NA \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{tshirts }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"M"}\NormalTok{, }\StringTok{"L"}\NormalTok{, }\StringTok{"S"}\NormalTok{, }\StringTok{"S"}\NormalTok{, }\StringTok{"L"}\NormalTok{, }\StringTok{"M"}\NormalTok{, }\StringTok{"L"}\NormalTok{, }\StringTok{"M"}\NormalTok{)} \NormalTok{tshirt\_factor }\OtherTok{\textless{}{-}} \FunctionTok{factor}\NormalTok{(tshirts, }\AttributeTok{ordered =} \ConstantTok{TRUE}\NormalTok{,} \AttributeTok{levels =} \FunctionTok{c}\NormalTok{(}\StringTok{"S"}\NormalTok{, }\StringTok{"M"}\NormalTok{, }\StringTok{"L"}\NormalTok{))} \NormalTok{tshirt\_factor} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] M L S S L M L M ## Levels: S < M < L \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{tshirt\_factor[}\DecValTok{1}\NormalTok{] }\SpecialCharTok{\textless{}}\NormalTok{ tshirt\_factor[}\DecValTok{2}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{lists}{% \section{Lists}\label{lists}} Lists are R objects that can contain elements of different types: numbers, strings, vectors and other lists. A list can also contain a matrix or a function as one of its elements. A list is created with the \texttt{list()} function.
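To illustrate the point above, here is a minimal sketch of a list holding a number, a matrix and a function in the same object (the element names \texttt{n}, \texttt{m} and \texttt{f} are arbitrary):

```r
# a list mixing a number, a matrix and a function
mixed <- list(n = 42,
              m = matrix(1:4, nrow = 2),   # a 2x2 matrix
              f = sum)                     # a function object
str(mixed)
mixed$f(1:10)   # the stored function can be called: returns 55
```

Because a function is just another R object, storing it in a list and calling it through \texttt{\$} works exactly like accessing any other element.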
Operators for subsetting lists include:

\begin{itemize}
\tightlist
\item
  \texttt{{[}} returns a list
\item
  \texttt{{[}{[}} returns the list element
\item
  \texttt{\$} returns the content of that element in the list
\end{itemize}

\begin{Shaded} \begin{Highlighting}[] \FunctionTok{c}\NormalTok{(}\StringTok{"R good times"}\NormalTok{, }\DecValTok{190}\NormalTok{, }\DecValTok{5}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" "190" "5" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song }\OtherTok{\textless{}{-}} \FunctionTok{list}\NormalTok{(}\StringTok{"R good times"}\NormalTok{, }\DecValTok{190}\NormalTok{, }\DecValTok{5}\NormalTok{)} \FunctionTok{is.list}\NormalTok{(song)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] TRUE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(song)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 3 ## $ : chr "R good times" ## $ : num 190 ## $ : num 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{names}\NormalTok{(song) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"title"}\NormalTok{, }\StringTok{"duration"}\NormalTok{, }\StringTok{"track"}\NormalTok{)} \NormalTok{song} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" ## ## $duration ## [1] 190 ## ## $track ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song}\SpecialCharTok{$}\NormalTok{title} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song2 }\OtherTok{\textless{}{-}} \FunctionTok{list}\NormalTok{(}\AttributeTok{title=}\StringTok{"Good Friends"}\NormalTok{, } \AttributeTok{duration =} \DecValTok{125}\NormalTok{,} \AttributeTok{track =} \DecValTok{2}\NormalTok{,} \AttributeTok{rank =} \DecValTok{6}\NormalTok{)} \NormalTok{song3 }\OtherTok{\textless{}{-}} \FunctionTok{list}\NormalTok{(}\AttributeTok{title=}\StringTok{"Many Friends"}\NormalTok{, } \AttributeTok{duration
=} \DecValTok{125}\NormalTok{,} \AttributeTok{track=} \DecValTok{2}\NormalTok{,} \AttributeTok{rank =} \DecValTok{1}\NormalTok{,} \AttributeTok{similar2 =}\NormalTok{ song2)} \NormalTok{song[}\DecValTok{1}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song}\SpecialCharTok{$}\NormalTok{title} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(song[}\DecValTok{1}\NormalTok{])} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 1 ## $ title: chr "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song[[}\DecValTok{1}\NormalTok{]]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(song[[}\DecValTok{1}\NormalTok{]])} \end{Highlighting} \end{Shaded} \begin{verbatim} ## chr "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song2[}\DecValTok{3}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $track ## [1] 2 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song3[}\DecValTok{5}\NormalTok{] }\CommentTok{\# a list} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $similar2 ## $similar2$title ## [1] "Good Friends" ## ## $similar2$duration ## [1] 125 ## ## $similar2$track ## [1] 2 ## ## $similar2$rank ## [1] 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(song3[}\DecValTok{5}\NormalTok{])} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 1 ## $ similar2:List of 4 ## ..$ title : chr "Good Friends" ## ..$ duration: num 125 ## ..$ track : num 2 ## ..$ rank : num 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song3[[}\DecValTok{5}\NormalTok{]]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] 
"Good Friends" ## ## $duration ## [1] 125 ## ## $track ## [1] 2 ## ## $rank ## [1] 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song3}\SpecialCharTok{$}\NormalTok{similar2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "Good Friends" ## ## $duration ## [1] 125 ## ## $track ## [1] 2 ## ## $rank ## [1] 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{3}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" ## ## $track ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(song[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{3}\NormalTok{)])} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 2 ## $ title: chr "R good times" ## $ track: num 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{result }\OtherTok{\textless{}{-}}\NormalTok{ song[}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{3}\NormalTok{)]} \NormalTok{result[}\DecValTok{1}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{result[[}\DecValTok{1}\NormalTok{]]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{str}\NormalTok{(result)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 2 ## $ title: chr "R good times" ## $ track: num 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{result}\SpecialCharTok{$}\NormalTok{title} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "R good times" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{result}\SpecialCharTok{$}\NormalTok{track} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# access 
with [[ to content } \NormalTok{song3[[}\DecValTok{5}\NormalTok{]][[}\DecValTok{1}\NormalTok{]]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Good Friends" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song3}\SpecialCharTok{$}\NormalTok{similar2[[}\DecValTok{1}\NormalTok{]]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Good Friends" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Subsets} \DocumentationTok{\#\#\# subset by names} \NormalTok{song[}\FunctionTok{c}\NormalTok{(}\StringTok{"title"}\NormalTok{, }\StringTok{"track"}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" ## ## $track ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{song3[}\StringTok{"similar2"}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $similar2 ## $similar2$title ## [1] "Good Friends" ## ## $similar2$duration ## [1] 125 ## ## $similar2$track ## [1] 2 ## ## $similar2$rank ## [1] 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{resultsimilar }\OtherTok{\textless{}{-}}\NormalTok{ song3[}\StringTok{"similar2"}\NormalTok{]} \FunctionTok{str}\NormalTok{(resultsimilar)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 1 ## $ similar2:List of 4 ## ..$ title : chr "Good Friends" ## ..$ duration: num 125 ## ..$ track : num 2 ## ..$ rank : num 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{resultsimilar1 }\OtherTok{\textless{}{-}}\NormalTok{song3[[}\StringTok{"similar2"}\NormalTok{]]} \FunctionTok{str}\NormalTok{(resultsimilar1)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 4 ## $ title : chr "Good Friends" ## $ duration: num 125 ## $ track : num 2 ## $ rank : num 6 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{resultsimilar1}\SpecialCharTok{$}\NormalTok{title} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Good Friends" \end{verbatim} \begin{Shaded} 
\begin{Highlighting}[] \CommentTok{\# subset by logicals} \NormalTok{song[}\FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $title ## [1] "R good times" ## ## $track ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{result3 }\OtherTok{\textless{}{-}}\NormalTok{ song[}\FunctionTok{c}\NormalTok{(}\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)] }\CommentTok{\# is a list of two elements} \CommentTok{\# extending the list} \NormalTok{shared }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Hillary"}\NormalTok{, }\StringTok{"Mari"}\NormalTok{, }\StringTok{"Mikel"}\NormalTok{, }\StringTok{"Patty"}\NormalTok{)} \NormalTok{song3}\SpecialCharTok{$}\NormalTok{shared }\OtherTok{\textless{}{-}}\NormalTok{ shared} \FunctionTok{str}\NormalTok{(song3)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 6 ## $ title : chr "Many Friends" ## $ duration: num 125 ## $ track : num 2 ## $ rank : num 1 ## $ similar2:List of 4 ## ..$ title : chr "Good Friends" ## ..$ duration: num 125 ## ..$ track : num 2 ## ..$ rank : num 6 ## $ shared : chr [1:4] "Hillary" "Mari" "Mikel" "Patty" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{cities }\OtherTok{\textless{}{-}} \FunctionTok{list}\NormalTok{(}\StringTok{"Bilbao"}\NormalTok{, }\StringTok{"New York"}\NormalTok{, }\StringTok{"Tartu"}\NormalTok{)} \NormalTok{song3[[}\StringTok{"cities"}\NormalTok{]] }\OtherTok{\textless{}{-}}\NormalTok{ cities} \FunctionTok{str}\NormalTok{(song3)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 7 ## $ title : chr "Many Friends" ## $ duration: num 125 ## $ track : num 2 ## $ rank : num 1 ## $ similar2:List of 4 ## ..$ title : chr "Good Friends" ## ..$ duration: num 125 ## ..$ track : num 2 ## ..$ 
rank : num 6 ## $ shared : chr [1:4] "Hillary" "Mari" "Mikel" "Patty" ## $ cities :List of 3 ## ..$ : chr "Bilbao" ## ..$ : chr "New York" ## ..$ : chr "Tartu" \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{data-frames}{% \section{Data frames}\label{data-frames}} A data frame is the data structure most often used for data analyses. A data frame is a list of equal-length vectors. Each element of the list can be thought of as a column and the length of each element of the list is the number of rows. As a result, data frames can store different classes of objects in each column (e.g.~numeric, character, factor). The \texttt{tidyverse} package provides a version of the data frame called the \texttt{tibble}. \begin{Shaded} \begin{Highlighting}[] \NormalTok{thenames }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Ane"}\NormalTok{, }\StringTok{"Mike"}\NormalTok{, }\StringTok{"Laura"}\NormalTok{, }\StringTok{"Viktoria"}\NormalTok{, }\StringTok{"Martin"}\NormalTok{)} \NormalTok{ages }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{44}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{33}\NormalTok{, }\DecValTok{15}\NormalTok{, }\DecValTok{65}\NormalTok{)} \NormalTok{employee }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{TRUE}\NormalTok{, }\ConstantTok{FALSE}\NormalTok{)} \NormalTok{mydataframe }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(thenames, ages, employee)} \NormalTok{mydataframe} \end{Highlighting} \end{Shaded} \begin{verbatim} ## thenames ages employee ## 1 Ane 44 FALSE ## 2 Mike 20 FALSE ## 3 Laura 33 TRUE ## 4 Viktoria 15 TRUE ## 5 Martin 65 FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{names}\NormalTok{(mydataframe) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"FirstName"}\NormalTok{,
}\StringTok{"Age"}\NormalTok{, }\StringTok{"Employee"}\NormalTok{)} \FunctionTok{str}\NormalTok{(mydataframe)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 5 obs. of 3 variables: ## $ FirstName: chr "Ane" "Mike" "Laura" "Viktoria" ... ## $ Age : num 44 20 33 15 65 ## $ Employee : logi FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#strings are not factors!} \NormalTok{mydataframe }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(thenames, ages, employee,} \AttributeTok{stringsAsFactors=}\ConstantTok{FALSE}\NormalTok{)} \FunctionTok{names}\NormalTok{(mydataframe) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"FirstName"}\NormalTok{, }\StringTok{"Age"}\NormalTok{, }\StringTok{"Employee"}\NormalTok{)} \FunctionTok{str}\NormalTok{(mydataframe)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 5 obs. of 3 variables: ## $ FirstName: chr "Ane" "Mike" "Laura" "Viktoria" ... ## $ Age : num 44 20 33 15 65 ## $ Employee : logi FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# subset data frame} \NormalTok{mydataframe[}\DecValTok{4}\NormalTok{,}\DecValTok{2}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydataframe[}\DecValTok{4}\NormalTok{, }\StringTok{"Age"}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydataframe[, }\StringTok{"FirstName"}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Ane" "Mike" "Laura" "Viktoria" "Martin" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydataframe[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{5}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\StringTok{"Age"}\NormalTok{, }\StringTok{"Employee"}\NormalTok{)]} \end{Highlighting} \end{Shaded} 
\begin{verbatim} ## Age Employee ## 2 20 FALSE ## 5 65 FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{matfromframe }\OtherTok{\textless{}{-}} \FunctionTok{as.matrix}\NormalTok{(mydataframe[}\FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{,}\DecValTok{5}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\StringTok{"Age"}\NormalTok{, }\StringTok{"Employee"}\NormalTok{)])} \FunctionTok{str}\NormalTok{(matfromframe)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## num [1:2, 1:2] 20 65 0 0 ## - attr(*, "dimnames")=List of 2 ## ..$ : chr [1:2] "2" "5" ## ..$ : chr [1:2] "Age" "Employee" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydataframe[}\DecValTok{3}\NormalTok{]} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Employee ## 1 FALSE ## 2 FALSE ## 3 TRUE ## 4 TRUE ## 5 FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# convert to vector} \NormalTok{mydf0 }\OtherTok{\textless{}{-}}\NormalTok{ mydataframe[}\DecValTok{3}\NormalTok{] }\CommentTok{\#data.frame} \FunctionTok{str}\NormalTok{(mydf0)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 5 obs. of 1 variable: ## $ Employee: logi FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{myvec }\OtherTok{\textless{}{-}}\NormalTok{ mydataframe[[}\DecValTok{3}\NormalTok{]] }\CommentTok{\#vector} \FunctionTok{str}\NormalTok{(myvec)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## logi [1:5] FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydf0asvec }\OtherTok{\textless{}{-}} \FunctionTok{as.vector}\NormalTok{(mydataframe[}\DecValTok{3}\NormalTok{]) }\CommentTok{\# still a data frame, not a vector: use [[ ]] instead} \FunctionTok{str}\NormalTok{(mydf0asvec)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 5 obs.
of 1 variable: ## $ Employee: logi FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydf0asvec }\OtherTok{\textless{}{-}} \FunctionTok{as.vector}\NormalTok{(mydataframe[[}\DecValTok{3}\NormalTok{]])} \FunctionTok{str}\NormalTok{(mydf0asvec)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## logi [1:5] FALSE FALSE TRUE TRUE FALSE \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# add column} \NormalTok{height }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{166}\NormalTok{, }\DecValTok{165}\NormalTok{, }\DecValTok{158}\NormalTok{, }\DecValTok{176}\NormalTok{, }\DecValTok{199}\NormalTok{)} \NormalTok{weight }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{66}\NormalTok{, }\DecValTok{77}\NormalTok{, }\DecValTok{99}\NormalTok{, }\DecValTok{88}\NormalTok{, }\DecValTok{109}\NormalTok{)} \NormalTok{mydataframe}\SpecialCharTok{$}\NormalTok{height }\OtherTok{\textless{}{-}}\NormalTok{ height } \NormalTok{mydataframe[[}\StringTok{"weight"}\NormalTok{]] }\OtherTok{\textless{}{-}}\NormalTok{ weight} \NormalTok{mydataframe} \end{Highlighting} \end{Shaded} \begin{verbatim} ## FirstName Age Employee height weight ## 1 Ane 44 FALSE 166 66 ## 2 Mike 20 FALSE 165 77 ## 3 Laura 33 TRUE 158 99 ## 4 Viktoria 15 TRUE 176 88 ## 5 Martin 65 FALSE 199 109 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# add a column } \NormalTok{birthplace }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Tallinn"}\NormalTok{, }\StringTok{"London"}\NormalTok{, }\StringTok{"Donostia"}\NormalTok{, }\StringTok{"Paris"}\NormalTok{, }\StringTok{"New York"}\NormalTok{)} \NormalTok{mydataframe }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(mydataframe, birthplace)} \NormalTok{mydataframe} \end{Highlighting} \end{Shaded} \begin{verbatim} ## FirstName Age Employee height weight birthplace ## 1 Ane 44 FALSE 166 66 Tallinn ## 2 Mike 20 FALSE 165 77 London ## 3 Laura 33 TRUE 
158 99 Donostia ## 4 Viktoria 15 TRUE 176 88 Paris ## 5 Martin 65 FALSE 199 109 New York \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# add a row } \NormalTok{anton }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}\AttributeTok{FirstName =} \StringTok{"Anton"}\NormalTok{, }\AttributeTok{Age =} \DecValTok{77}\NormalTok{, }\AttributeTok{Employee=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{height=} \DecValTok{170}\NormalTok{, }\AttributeTok{weight =} \DecValTok{65}\NormalTok{, }\AttributeTok{birthplace =}\StringTok{"Amsterdam"}\NormalTok{, }\AttributeTok{stringsAsFactors=}\ConstantTok{FALSE}\NormalTok{)} \NormalTok{mydataframe }\OtherTok{\textless{}{-}} \FunctionTok{rbind}\NormalTok{ (mydataframe, anton)} \NormalTok{mydataframe} \end{Highlighting} \end{Shaded} \begin{verbatim} ## FirstName Age Employee height weight birthplace ## 1 Ane 44 FALSE 166 66 Tallinn ## 2 Mike 20 FALSE 165 77 London ## 3 Laura 33 TRUE 158 99 Donostia ## 4 Viktoria 15 TRUE 176 88 Paris ## 5 Martin 65 FALSE 199 109 New York ## 6 Anton 77 TRUE 170 65 Amsterdam \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# sorting } \NormalTok{mydataframeSorted }\OtherTok{\textless{}{-}}\NormalTok{ mydataframe[}\FunctionTok{order}\NormalTok{(mydataframe}\SpecialCharTok{$}\NormalTok{Age, }\AttributeTok{decreasing =} \ConstantTok{TRUE}\NormalTok{), ] }\CommentTok{\#all columns} \NormalTok{mydataframeSorted} \end{Highlighting} \end{Shaded} \begin{verbatim} ## FirstName Age Employee height weight birthplace ## 6 Anton 77 TRUE 170 65 Amsterdam ## 5 Martin 65 FALSE 199 109 New York ## 1 Ane 44 FALSE 166 66 Tallinn ## 3 Laura 33 TRUE 158 99 Donostia ## 2 Mike 20 FALSE 165 77 London ## 4 Viktoria 15 TRUE 176 88 Paris \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mydataframeSorted2 }\OtherTok{\textless{}{-}}\NormalTok{ mydataframe[}\FunctionTok{order}\NormalTok{(mydataframe}\SpecialCharTok{$}\NormalTok{Age, }\AttributeTok{decreasing =} 
\ConstantTok{TRUE}\NormalTok{), }\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{6}\NormalTok{) ]} \NormalTok{mydataframeSorted2} \end{Highlighting} \end{Shaded} \begin{verbatim} ## FirstName Age birthplace ## 6 Anton 77 Amsterdam ## 5 Martin 65 New York ## 1 Ane 44 Tallinn ## 3 Laura 33 Donostia ## 2 Mike 20 London ## 4 Viktoria 15 Paris \end{verbatim} \hypertarget{r-functional-functions}{% \section{R Functional Functions}\label{r-functional-functions}} R is a functional language and provides a family of special functions: \texttt{apply()}, \texttt{lapply()}, \texttt{sapply()}, \texttt{tapply()}, \texttt{mapply()} and \texttt{vapply()}. One of R's main strengths lies in applying \texttt{apply()} (and all its variations) to lists, matrices, data frames or other data structures. The \texttt{tidyverse} provides the \texttt{purrr} package for functional programming. The topic of functional programming lies beyond the scope of this introduction. Most of the commands that we use in our scripts are functions applied to data. \hypertarget{environments}{% \section{Environments}\label{environments}} An environment is a place where R stores variables; that is, it binds a set of names to a set of object values. An environment is something like a bag or a list of names, and every name in an environment is unique. The top-level environment, \texttt{R\_GlobalEnv}, is created when we start up R and is the global environment. Every environment has a parent environment, and when a function is called, a new environment is created for its execution. \hypertarget{global-variables-local-variables-and-programming-scope}{% \subsection{Global variables, local variables and programming scope}\label{global-variables-local-variables-and-programming-scope}} Global variables are those variables that exist throughout the execution of a program. Local variables are those that exist only within a certain part of a program, such as a function.
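A minimal sketch of this distinction (the variable and function names below are arbitrary): an assignment with \texttt{\textless{}-} inside a function creates a local variable and leaves the global one untouched.

```r
counter <- 1                 # global variable

touch_local <- function() {
  counter <- 100             # local copy; the global 'counter' is not modified
  counter
}

touch_local()                # returns 100
counter                      # still 1
```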
The super-assignment operator, \textless\textless-, is used to make assignments to global variables or to make assignments in the parent environment. \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# variables and functions in the current environment} \FunctionTok{ls}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "ages" "anton" "aphase" ## [4] "aseqofvalues" "aseqofvalues10" "aseqofvalues2" ## [7] "aseqofvalues3" "aseqofvalues4" "aseqofvalues5" ## [10] "aseqofvalues6" "aseqofvalues7" "aseqofvalues8" ## [13] "aseqofvalues9" "birthplace" "char2int" ## [16] "char2num" "cities" "costs" ## [19] "ctl" "earnings" "employee" ## [22] "falsenum" "group" "height" ## [25] "incomes" "index" "lm_1" ## [28] "lm.D9" "lm.D90" "logicalcond" ## [31] "m1" "matchar" "matchars" ## [34] "matfromframe" "matnum" "mydataframe" ## [37] "mydataframeSorted" "mydataframeSorted2" "mydf0" ## [40] "mydf0asvec" "mymat" "mymat2" ## [43] "mymat3" "mymat4" "mymat5" ## [46] "myvec" "num2char" "opar" ## [49] "personnel" "personnel_factors" "personnel2" ## [52] "personnel3" "phase1" "phase3" ## [55] "phases" "phases1" "phases2" ## [58] "phases3" "result" "result3" ## [61] "resultsimilar" "resultsimilar1" "row1" ## [64] "row2" "row3" "selectionv" ## [67] "selectionvec2" "shared" "song" ## [70] "song2" "song3" "testphases" ## [73] "thenames" "thevalues" "thevalues1" ## [76] "thevalues2" "trt" "truenum" ## [79] "tshirt_factor" "tshirts" "var1" ## [82] "vAr1" "vector1" "vector2" ## [85] "weight" "x" "y" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# to get the current environment} \FunctionTok{environment}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## <environment: R_GlobalEnv> \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# the base environment is the environment of the base package} \FunctionTok{baseenv}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## <environment: base> \end{verbatim} \begin{Shaded} 
\begin{Highlighting}[] \CommentTok{\# list of environments} \FunctionTok{search}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] ".GlobalEnv" "package:stats" "package:graphics" ## [4] "package:grDevices" "package:utils" "package:datasets" ## [7] "package:methods" "Autoloads" "package:base" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# functions and environments } \CommentTok{\# in this example we do not return any value } \NormalTok{myfunction }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{() \{} \NormalTok{ myvar\_a}\OtherTok{\textless{}{-}} \DecValTok{50} \NormalTok{ myfunctioninside }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{() \{} \NormalTok{ myvar\_a }\OtherTok{\textless{}{-}} \DecValTok{100} \CommentTok{\# myvar\_a \textless{}\textless{}{-} 100} \FunctionTok{print}\NormalTok{(myvar\_a)} \NormalTok{ \}} \FunctionTok{myfunctioninside}\NormalTok{()} \FunctionTok{print}\NormalTok{(myvar\_a)} \CommentTok{\# myvar\_a \textless{}\textless{}{-} 100} \NormalTok{\}} \NormalTok{myvar\_a }\OtherTok{\textless{}{-}} \DecValTok{10} \FunctionTok{myfunction}\NormalTok{()} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 100 ## [1] 50 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{print}\NormalTok{(myvar\_a)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# create environment} \NormalTok{my\_env }\OtherTok{\textless{}{-}} \FunctionTok{new.env}\NormalTok{()} \NormalTok{my\_env} \end{Highlighting} \end{Shaded} \begin{verbatim} ## <environment: 0x55d7d29f4c80> \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ls}\NormalTok{(my\_env)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## character(0) \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{character}\NormalTok{(}\DecValTok{0}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## character(0) 
\end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{assign}\NormalTok{(}\StringTok{"myvar\_a"}\NormalTok{, }\DecValTok{700}\NormalTok{, }\AttributeTok{envir=}\NormalTok{my\_env)} \NormalTok{my\_env}\SpecialCharTok{$}\NormalTok{mytext }\OtherTok{=} \StringTok{" a text"} \FunctionTok{ls}\NormalTok{(my\_env)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "mytext" "myvar_a" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{myvar\_a} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 10 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{my\_env}\SpecialCharTok{$}\NormalTok{myvar\_a} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 700 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{parent.env}\NormalTok{(my\_env)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## <environment: R_GlobalEnv> \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{get}\NormalTok{(}\StringTok{\textquotesingle{}myvar\_a\textquotesingle{}}\NormalTok{, }\AttributeTok{envir=}\NormalTok{my\_env)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 700 \end{verbatim} \hypertarget{reading-data}{% \section{Reading Data}\label{reading-data}} R can read most common formats, including CSV, MS Excel formats (xlsx, etc.), the formats of other statistical packages (e.g.~SAS, SPSS) and data mining tools such as ARFF (Weka's format). R also provides two native data formats, Rdata and Rds. These formats are typically used when leaving or starting an R session, so that R objects can be stored and later retrieved to continue in the same state. While Rdata is used to save multiple R objects, Rds is used to save a single R object.
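The difference can be sketched with the base functions \texttt{save()}/\texttt{load()} (several objects, restored under their original names) and \texttt{saveRDS()}/\texttt{readRDS()} (a single object, returned as a value); temporary file paths are used here so the sketch is self-contained:

```r
x <- 1:3
y <- "hello"

# Rdata: save several objects; load() restores them under their names
f_rdata <- tempfile(fileext = ".RData")
save(x, y, file = f_rdata)
rm(x, y)
load(f_rdata)          # x and y exist again

# Rds: save a single object; readRDS() returns it as a value
f_rds <- tempfile(fileext = ".rds")
saveRDS(x, f_rds)
x2 <- readRDS(f_rds)   # assign the result to any name
identical(x, x2)       # TRUE
```

Because \texttt{readRDS()} returns a value rather than restoring names, the Rds format is usually preferred for saving one object whose name the reader should choose.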
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{load}\NormalTok{(}\StringTok{"data.rdata"}\NormalTok{) }\CommentTok{\# the file must be in the working directory (see setwd())} \end{Highlighting} \end{Shaded} To read CSV (Comma-Separated Values) files in R: \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Import the data and look at the first six rows} \NormalTok{f }\OtherTok{\textless{}{-}} \FunctionTok{read.csv}\NormalTok{(}\StringTok{"data.csv"}\NormalTok{)} \FunctionTok{head}\NormalTok{(f)} \end{Highlighting} \end{Shaded} To read ARFF files, we can use the \texttt{foreign} library. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \NormalTok{isbsg }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"datasets/effortEstimation/isbsg10teaser.arff"}\NormalTok{)} \NormalTok{mydataISBSG }\OtherTok{\textless{}{-}}\NormalTok{ isbsg[, }\FunctionTok{c}\NormalTok{(}\StringTok{"FS"}\NormalTok{, }\StringTok{"N\_effort"}\NormalTok{)]} \FunctionTok{str}\NormalTok{(mydataISBSG)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 37 obs. of 2 variables: ## $ FS : num 225 599 333 748 158 427 461 257 115 116 ... ## $ N_effort: num 1856 10960 5661 1518 3670 ... \end{verbatim} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{plots}{% \section{Plots}\label{plots}} There are several recommended graphics packages, in particular \texttt{ggplot2}. However, base R also includes basic graphics support. The following Figure \ref{fig:plotExample} shows a simple plot.
\begin{Shaded} \begin{Highlighting}[] \FunctionTok{plot}\NormalTok{(mydataISBSG}\SpecialCharTok{$}\NormalTok{FS, mydataISBSG}\SpecialCharTok{$}\NormalTok{N\_effort)} \end{Highlighting} \end{Shaded} \begin{figure} \centering \includegraphics{DASE_files/figure-latex/plotExample-1.pdf} \caption{\label{fig:plotExample}Simple plot} \end{figure} \begin{center}\rule{0.5\linewidth}{0.5pt}\end{center} \hypertarget{control-flow-in-r}{% \section{Control flow in R}\label{control-flow-in-r}} R provides most common control flow structures found in most languages \texttt{if} \begin{Shaded} \begin{Highlighting}[] \NormalTok{x }\OtherTok{\textless{}{-}} \DecValTok{6} \ControlFlowTok{if}\NormalTok{ (x }\SpecialCharTok{\textgreater{}=} \DecValTok{5}\NormalTok{) \{} \StringTok{"x is greater than or equals 5"} \NormalTok{\} }\ControlFlowTok{else}\NormalTok{ \{} \StringTok{"x is smaller than 5"} \NormalTok{\}} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "x is greater than or equals 5" \end{verbatim} \texttt{ifelse} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \NormalTok{kc1 }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"datasets/defectPred/D1/KC1.arff"}\NormalTok{)} \NormalTok{kc1}\SpecialCharTok{$}\NormalTok{Defective }\OtherTok{\textless{}{-}} \FunctionTok{ifelse}\NormalTok{(kc1}\SpecialCharTok{$}\NormalTok{Defective }\SpecialCharTok{==} \StringTok{"Y"}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{)} \FunctionTok{head}\NormalTok{(kc1, }\DecValTok{1}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## LOC_BLANK BRANCH_COUNT LOC_CODE_AND_COMMENT LOC_COMMENTS ## 1 0 1 0 0 ## CYCLOMATIC_COMPLEXITY DESIGN_COMPLEXITY ESSENTIAL_COMPLEXITY LOC_EXECUTABLE ## 1 1 1 1 3 ## HALSTEAD_CONTENT HALSTEAD_DIFFICULTY HALSTEAD_EFFORT HALSTEAD_ERROR_EST ## 1 11.58 2.67 82.35 0.01 ## HALSTEAD_LENGTH HALSTEAD_LEVEL HALSTEAD_PROG_TIME HALSTEAD_VOLUME ## 1 11 0.38 4.57 30.88 ## NUM_OPERANDS 
NUM_OPERATORS NUM_UNIQUE_OPERANDS NUM_UNIQUE_OPERATORS LOC_TOTAL
## 1            4             7                   3                    4         5
##   Defective
## 1         0
\end{verbatim}

\texttt{for} loops

\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{for}\NormalTok{(x }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\DecValTok{5}\NormalTok{)\{}
  \FunctionTok{print}\NormalTok{(x)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

\hypertarget{built-in-datasets}{%
\section{Built-in Datasets}\label{built-in-datasets}}

R comes with some built-in datasets ready to use: \href{http://www.sthda.com/english/wiki/r-built-in-data-sets}{Description of datasets}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{data}\NormalTok{() }\CommentTok{\#list of datasets already available}
\end{Highlighting}
\end{Shaded}

A dataset can then be loaded as follows.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# load the mtcars Motor Trend Car Road Tests}
\FunctionTok{data}\NormalTok{(}\StringTok{"mtcars"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

Another example:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Monthly Airline Passenger Numbers 1949{-}1960}
\CommentTok{\# Time series object ts() converts a vector to a time series}
\FunctionTok{data}\NormalTok{(}\StringTok{"AirPassengers"}\NormalTok{)}
\FunctionTok{str}\NormalTok{(AirPassengers)}
\FunctionTok{plot}\NormalTok{(AirPassengers)}
\end{Highlighting}
\end{Shaded}

\hypertarget{other-tools-with-r}{%
\section{Other tools with R}\label{other-tools-with-r}}

\hypertarget{rattle}{%
\subsection{Rattle}\label{rattle}}

There is a graphical interface, Rattle, that allows us to perform some data mining tasks with R \citep{Williams11}.

\begin{figure}
\centering
\includegraphics{./figures/rattle.png}
\caption{Rattle: GUI for Data mining with R}
\end{figure}

\hypertarget{jamovi}{%
\subsection{Jamovi}\label{jamovi}}

Jamovi is a GUI for statistical analysis in R. It allows us to export the underlying R code.
Its Website is: \url{https://www.jamovi.org/}

\begin{figure}
\centering
\includegraphics{./figures/jamovi.png}
\caption{Jamovi}
\end{figure}

\hypertarget{jasp}{%
\subsection{JASP}\label{jasp}}

JASP is another GUI for statistics, but at the moment it is not so easy to export the R code from it. \url{https://jasp-stats.org/}

\hypertarget{part-introduction-to-data-mining}{%
\part{Introduction to Data Mining}\label{part-introduction-to-data-mining}}

We will deal with extracting information from data for estimation, defect prediction, planning, etc. We will provide an overview of data analysis using different techniques.

\hypertarget{what-is-data-mining-knowledge-discovery-in-databases-kdd}{%
\chapter{What is Data Mining / Knowledge Discovery in Databases (KDD)}\label{what-is-data-mining-knowledge-discovery-in-databases-kdd}}

The non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data \citep{FayyadPS1996}

\begin{figure}
\centering
\includegraphics{figures/Fayyad96kdd-process.png}
\caption{KDD Process}
\end{figure}

The Cross-Industry Standard Process for Data Mining (CRISP-DM) also provides a common and well-developed framework for delivering data mining projects, identifying six steps \citep{shearer00crisp}:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Problem Understanding
\item
  Data Understanding
\item
  Data Preparation
\item
  Modeling
\item
  Evaluation
\item
  Deployment
\end{enumerate}

\begin{figure}
\centering
\includegraphics{figures/CRISP-DM_Process_Diagram.png}
\caption{CRISP-DM (Wikipedia)}
\end{figure}

\hypertarget{the-aim-of-data-analysis-and-statistical-learning}{%
\section{The Aim of Data Analysis and Statistical Learning}\label{the-aim-of-data-analysis-and-statistical-learning}}

\begin{itemize}
\tightlist
\item
  The aim of any data analysis is to \textbf{understand the data},
\item
  to build models for making predictions and estimating future events based on past data,
\item
  and to
make statistical inferences from our data.
\item
  We may want to test different hypotheses on the data.
\item
  We want to generate conclusions about the population from which our sample data comes.
\item
  Most probably we are interested in building a model for quality, time, defects or effort prediction.
\end{itemize}

\includegraphics{figures/prediction.png}

\begin{itemize}
\tightlist
\item
  We want to find a function \(f\) that, given \(X_1, X_2, \ldots, X_n\), computes \(Y = f(X_1, X_2, \ldots, X_n)\).
\end{itemize}

\hypertarget{data-science}{%
\section{Data Science}\label{data-science}}

Data science (DS) is an inter-disciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from structured and unstructured data. Data science is related to data mining, machine learning and big data. We may say that DS embraces the data analysis activities that previously fell under different disciplines.

\begin{figure}
\centering
\includegraphics{figures/Data_science.png}
\caption{Wikipedia Data Science}
\end{figure}

\hypertarget{some-references}{%
\section{Some References}\label{some-references}}

\begin{itemize}
\tightlist
\item
  \href{https://cran.r-project.org/doc/manuals/r-release/R-intro.pdf}{W.N. Venables, D.M.
Smith and the R Core Team, An Introduction to R}
\end{itemize}

Generic books about statistics:

\begin{itemize}
\item
  \href{https://cran.r-project.org/doc/contrib/Verzani-SimpleR.pdf}{John Verzani, \emph{simpleR - Using R for Introductory Statistics}}
\item
  \href{https://www.springer.com/gp/book/9780387790534}{Peter Dalgaard, \emph{Introductory Statistics with R}, 2nd Edition, Springer, 2008}
\item
  \href{http://www.springer.com/it/book/9781461471370}{Gareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani, \emph{An Introduction to Statistical Learning with Applications in R}, Springer, 2013}
\item
  \href{https://www.routledge.com/products/9780415879682}{Geoff Cumming, \emph{Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis}, Routledge, New York, 2012}
\end{itemize}

\hypertarget{data-mining-and-data-science-with-r}{%
\section{Data Mining and Data Science with R}\label{data-mining-and-data-science-with-r}}

\begin{itemize}
\item
  \href{https://r4ds.had.co.nz/}{R for Data Science}
\item
  \href{https://www.manning.com/books/practical-data-science-with-r-second-edition}{Practical Data Science with R}
\item
  \href{https://www.jaredlander.com/r-for-everyone/}{R for Everyone: Advanced Analytics and Graphics}
\item
  \href{http://www.springer.com/gp/book/9781441998897}{Graham Williams, \emph{Data Mining with Rattle and R: The Art of Excavating Data for Knowledge Discovery}, Springer 2011}. The author also maintains a Web site: \url{http://rattle.togaware.com/}
\item
  \href{https://www.crcpress.com/Data-Mining-with-R-Learning-with-Case-Studies/Torgo/9781439810187}{Luis Torgo, \emph{Data Mining with R: Learning with Case Studies}, Chapman and Hall/CRC, 2010}
\item
  \url{http://www.rdatamining.com/}
\end{itemize}

\hypertarget{data-mining-with-weka}{%
\section{Data Mining with Weka}\label{data-mining-with-weka}}

Weka is another popular framework, written in Java, that can be used and extended with other languages and
frameworks. The authors of Weka also have a popular book:

\begin{itemize}
\tightlist
\item
  Ian Witten, Eibe Frank, Mark Hall, Christopher J. Pal, Data Mining: Practical Machine Learning Tools and Techniques (4th Edition), Morgan Kaufmann, 2016, ISBN: 978-0128042915
\end{itemize}

\hypertarget{r-markdown}{%
\section{R Markdown}\label{r-markdown}}

A cheatsheet for Markdown syntax: \url{https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet}

This is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see \url{http://rmarkdown.rstudio.com}.

When you click the \textbf{Knit} button a document will be generated that includes both content as well as the output of any embedded R code chunks within the document. You can embed an R code chunk like this:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{summary}\NormalTok{(cars)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##      speed           dist       
##  Min.   : 4.0   Min.   :  2.00  
##  1st Qu.:12.0   1st Qu.: 26.00  
##  Median :15.0   Median : 36.00  
##  Mean   :15.4   Mean   : 42.98  
##  3rd Qu.:19.0   3rd Qu.: 56.00  
##  Max.   :25.0   Max.   :120.00
\end{verbatim}

\hypertarget{including-plots}{%
\section{Including Plots}\label{including-plots}}

You can also embed plots, for example:

\includegraphics{DASE_files/figure-latex/pressure-1.pdf}

Note that the \texttt{echo\ =\ FALSE} parameter was added to the code chunk to prevent printing of the R code that generated the plot.
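Only as an illustration, such a chunk might look like this in the \texttt{.Rmd} source (the chunk label and options here are a sketch, not taken from the actual source; \texttt{echo=FALSE} hides the code but keeps its output):

\begin{verbatim}
```{r pressure, echo=FALSE}
plot(pressure)
```
\end{verbatim}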
\hypertarget{references}{%
\section{References}\label{references}}

\url{https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet}

\url{https://rmarkdown.rstudio.com/lesson-15.html}

\begin{longtable}[]{@{}
  >{\raggedleft\arraybackslash}p{(\columnwidth - 0\tabcolsep) * \real{0.06}}@{}}
\toprule
\begin{minipage}[b]{\linewidth}\raggedleft
\end{minipage} \\
\midrule
\endhead
\textbf{R and Python} \\
R and Python can interact together via the \emph{reticulate} package. \\
The documentation for the \texttt{reticulate} package can be found here: \url{https://rstudio.github.io/reticulate/} \\
\includegraphics{./figures/reticulated_python.png} \\
Instructions for configuring the system can be found at the RStudio site: \url{https://support.rstudio.com/hc/en-us/articles/360023654474-Installing-and-Configuring-Python-with-RStudio}. \\
Or we can create our environment following the webinar \href{https://www.youtube.com/watch?v=8WE-EU5k97Q\&t=27s}{R and Python -- a happy union with reticulate} \\
\begin{verbatim}
install.packages("reticulate")
\end{verbatim} \\
Note that the \texttt{reticulate} package requires Python \textgreater= 2.7 and, for array support, NumPy \textgreater= 1.6.
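\\
As a minimal sketch of checking the setup once \texttt{reticulate} is installed (the function names are from the \texttt{reticulate} package; the environment name is hypothetical):

\begin{verbatim}
library(reticulate)
# py_config() reports which Python interpreter reticulate will use
py_config()
# Optionally point reticulate to a specific environment
# use_virtualenv("myenv")
py_available()  # TRUE if a working Python was found
\end{verbatim}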
\\
\textbf{Using Python with RMarkdown and RStudio} \\
The R built-in dataset will be used later \\
\begin{verbatim}
library("reticulate")
# use_virtualenv("myenv")
data("mtcars")
\end{verbatim} \\
\begin{verbatim}
print("Hello Python!")
\end{verbatim} \\
\begin{verbatim}
## Hello Python!
\end{verbatim} \\
\begin{verbatim}
datactrs = {
    'CHN': {'COUNTRY': 'China', 'POP': 1_398.72, 'AREA': 9_596.96,
            'GDP': 12_234.78, 'CONT': 'Asia'},
    'IND': {'COUNTRY': 'India', 'POP': 1_351.16, 'AREA': 3_287.26,
            'GDP': 2_575.67, 'CONT': 'Asia', 'IND_DAY': '1947-08-15'},
    'USA': {'COUNTRY': 'US', 'POP': 329.74, 'AREA': 9_833.52,
            'GDP': 19_485.39, 'CONT': 'N.America', 'IND_DAY': '1776-07-04'},
    'IDN': {'COUNTRY': 'Indonesia', 'POP': 268.07, 'AREA': 1_910.93,
            'GDP': 1_015.54, 'CONT': 'Asia', 'IND_DAY': '1945-08-17'},
    'BRA': {'COUNTRY': 'Brazil', 'POP': 210.32, 'AREA': 8_515.77,
            'GDP': 2_055.51, 'CONT': 'S.America', 'IND_DAY': '1822-09-07'},
    'PAK': {'COUNTRY': 'Pakistan', 'POP': 205.71, 'AREA': 881.91,
            'GDP': 302.14, 'CONT': 'Asia', 'IND_DAY': '1947-08-14'},
    'NGA': {'COUNTRY': 'Nigeria', 'POP': 200.96, 'AREA': 923.77,
            'GDP': 375.77, 'CONT': 'Africa', 'IND_DAY': '1960-10-01'},
    'BGD': {'COUNTRY': 'Bangladesh', 'POP': 167.09, 'AREA': 147.57,
            'GDP': 245.63, 'CONT': 'Asia', 'IND_DAY': '1971-03-26'},
    'RUS': {'COUNTRY': 'Russia', 'POP': 146.79, 'AREA': 17_098.25,
            'GDP': 1_530.75, 'IND_DAY': '1992-06-12'},
    'MEX': {'COUNTRY': 'Mexico', 'POP': 126.58, 'AREA': 1_964.38,
            'GDP': 1_158.23, 'CONT': 'N.America', 'IND_DAY': '1810-09-16'},
    'JPN': {'COUNTRY': 'Japan', 'POP': 126.22, 'AREA': 377.97,
            'GDP': 4_872.42, 'CONT': 'Asia'},
    'DEU': {'COUNTRY': 'Germany', 'POP': 83.02, 'AREA': 357.11,
            'GDP': 3_693.20, 'CONT': 'Europe'},
    'FRA': {'COUNTRY': 'France', 'POP': 67.02, 'AREA': 640.68,
            'GDP': 2_582.49, 'CONT': 'Europe', 'IND_DAY': '1789-07-14'},
    'GBR': {'COUNTRY': 'UK', 'POP': 66.44, 'AREA': 242.50,
            'GDP': 2_631.23, 'CONT': 'Europe'},
    'ITA': {'COUNTRY': 'Italy', 'POP': 60.36, 'AREA': 301.34,
            'GDP': 1_943.84, 'CONT': 'Europe'},
    'ARG': {'COUNTRY': 'Argentina', 'POP': 44.94, 'AREA': 2_780.40,
            'GDP': 637.49, 'CONT': 'S.America', 'IND_DAY': '1816-07-09'},
    'DZA': {'COUNTRY': 'Algeria', 'POP': 43.38, 'AREA': 2_381.74,
            'GDP': 167.56, 'CONT': 'Africa', 'IND_DAY': '1962-07-05'},
    'CAN': {'COUNTRY': 'Canada', 'POP': 37.59, 'AREA': 9_984.67,
            'GDP': 1_647.12, 'CONT': 'N.America', 'IND_DAY': '1867-07-01'},
    'AUS': {'COUNTRY': 'Australia', 'POP': 25.47, 'AREA': 7_692.02,
            'GDP': 1_408.68, 'CONT': 'Oceania'},
    'KAZ': {'COUNTRY': 'Kazakhstan', 'POP': 18.53, 'AREA': 2_724.90,
            'GDP': 159.41, 'CONT': 'Asia', 'IND_DAY': '1991-12-16'}
}

columns = ('COUNTRY', 'POP', 'AREA', 'GDP', 'CONT', 'IND_DAY')
\end{verbatim} \\
\begin{verbatim}
import pandas as pd
import seaborn as sns
# ubuntu
# sudo apt-get install -y python3-seaborn
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
mylist = ["youtube", 'linkedin', '1littlecoder']
sns.scatterplot(x=tips['total_bill'], y = tips['tip'], hue=tips['day'])
plt.show()
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/impor_tips_fmri-1.pdf} \\
\begin{verbatim}
fmri = sns.load_dataset("fmri")
\end{verbatim} \\
\begin{verbatim}
f1 <- subset(py$fmri, region == "parietal")
\end{verbatim} \\
\begin{verbatim}
import matplotlib as mpl
sns.lmplot("timepoint", "signal", data=r.f1)
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/accesing_r-1.pdf} \\
\begin{verbatim}
mpl.pyplot.show()
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/accesing_r-2.pdf} \\
\begin{verbatim}
sns.lmplot("mpg", "cyl", data=r.mtcars)
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/accesing_r-3.pdf} \\
\begin{verbatim}
mpl.pyplot.show()
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/accesing_r-4.pdf} \\
\begin{verbatim}
import pandas as pd
df = pd.DataFrame(data=datactrs, index=columns).T
df
\end{verbatim} \\
\begin{verbatim}
##          COUNTRY      POP      AREA       GDP       CONT     IND_DAY
## CHN        China  1398.72   9596.96  12234.78       Asia         NaN
## IND        India  1351.16   3287.26   2575.67       Asia  1947-08-15
## USA           US   329.74   9833.52  19485.39  N.America  1776-07-04
## IDN    Indonesia   268.07   1910.93   1015.54       Asia  1945-08-17
## BRA       Brazil   210.32   8515.77   2055.51  S.America  1822-09-07
## PAK     Pakistan   205.71    881.91    302.14       Asia  1947-08-14
## NGA      Nigeria   200.96    923.77    375.77     Africa  1960-10-01
## BGD   Bangladesh   167.09    147.57    245.63       Asia  1971-03-26
## RUS       Russia   146.79  17098.25   1530.75        NaN  1992-06-12
## MEX       Mexico   126.58   1964.38   1158.23  N.America  1810-09-16
## JPN        Japan   126.22    377.97   4872.42       Asia         NaN
## DEU      Germany    83.02    357.11    3693.2     Europe         NaN
## FRA       France    67.02    640.68   2582.49     Europe  1789-07-14
## GBR           UK    66.44     242.5   2631.23     Europe         NaN
## ITA        Italy    60.36    301.34   1943.84     Europe         NaN
## ARG    Argentina    44.94    2780.4    637.49  S.America  1816-07-09
## DZA      Algeria    43.38   2381.74    167.56     Africa  1962-07-05
## CAN       Canada    37.59   9984.67   1647.12  N.America  1867-07-01
## AUS    Australia    25.47   7692.02   1408.68    Oceania         NaN
## KAZ   Kazakhstan    18.53    2724.9    159.41       Asia  1991-12-16
\end{verbatim} \\
\begin{verbatim}
df.to_csv('datasets/data_countries.csv')
\end{verbatim}
We can read the dataset from Python \\
\begin{verbatim}
df1 = pd.read_csv("datasets/other/data_countries.csv", index_col=0)
\end{verbatim} \\
Use R to read and write data from a package \\
\begin{verbatim}
library("nycflights13")
write.csv(flights, "datasets/other/flights.csv")
\end{verbatim} \\
Use Python to read the dataset and process the data \\
\begin{verbatim}
import pandas
flights = pandas.read_csv("datasets/other/flights.csv")
flights = flights[flights['dest'] == "ORD"]
flights = flights[['carrier', 'dep_delay', 'arr_delay']]
flights = flights.dropna()
print(flights.head())
\end{verbatim} \\
\begin{verbatim}
##    carrier  dep_delay  arr_delay
## 5       UA       -4.0       12.0
## 9       AA       -2.0        8.0
## 25      MQ        8.0       32.0
## 38      AA       -1.0       14.0
## 57      AA       -4.0        4.0
\end{verbatim} \\
Use Python for plotting \\
\begin{verbatim}
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2*np.pi*t)
plt.plot(t, s)
plt.xlabel('time (s)')
plt.ylabel('voltage (mV)')
plt.grid(True)
plt.savefig("test.png")
plt.show()
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/matplotlib-1.pdf} \\
Use R for plotting Python objects: \includegraphics{DASE_files/figure-latex/unnamed-chunk-28-3.pdf} \\
Note that the \texttt{echo\ =\ FALSE} parameter was added to the code chunk to prevent printing of the R code that generated the plot.
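\\
The \texttt{r.}\emph{object} / \texttt{py\$}\emph{object} bridge used above can be summarised in a minimal sketch (the object names here are hypothetical; \texttt{py\_run\_string()} from \texttt{reticulate} stands in for a Python chunk):

\begin{verbatim}
library(reticulate)

# R -> Python: an R data frame becomes a pandas DataFrame in Python chunks,
# where it is accessible as r.cars_df
cars_df <- mtcars

# Python -> R: objects created on the Python side come back through py$
py_run_string("squares = [x * x for x in range(5)]")
py$squares  # converted to a regular R vector: 0 1 4 9 16
\end{verbatim}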
\\
\textbf{References} \\
\begin{itemize}
\tightlist
\item
  \href{https://info.rstudio.com/WN07MCXqS1A2NYS0040LW00}{RStudio: A Single Home for R and Python (webinar)}
\item
  \href{https://rstudio.github.io/reticulate/}{R interface to Python}
\item
  \href{https://blog.rstudio.com/2020/07/28/practical-interoperability/}{3 Wild-Caught R and Python Applications}
\item
  \href{https://www.r-bloggers.com/2021/01/rstudio-python-visual-markdown-editor-rstudio-latest-update/}{RStudio + Python, Visual Markdown Editor -- RStudio Latest Update}
\item
  \href{https://rstudio.github.io/reticulate/articles/arrays.html}{Arrays in R and in Python}
\item
  \url{https://www.r-bloggers.com/2021/02/pythons-pandas-vs-rs-dplyr-which-is-the-best-data-analysis-library/?utm_source=feedburner\&utm_medium=email\&utm_campaign=Feed\%3A+RBloggers+\%28R+bloggers\%29}
\end{itemize} \\
\textbf{(PART) Data Sources and Metrics and Standards in Software Engineering Defect Prediction} \\
\textbf{Data Sources in Software Engineering} \\
We classify these data sources into the following categories: \\
\begin{itemize}
\tightlist
\item
  \emph{Source code} can be studied to measure its properties, such as size or complexity.
\item
  \emph{Source Code Management Systems} (SCM) make it possible to store all the changes that the different source code files undergo during the project. SCM systems also allow for work to be done in parallel by different developers over the same source code tree. Every change recorded in the system is accompanied by meta-information (author, date, reason for the change, etc.) that can be used for research purposes.
\item
  \emph{Issue or Bug Tracking Systems} (ITS). Bugs, defects and user requests are managed in ITSs, where users and developers can file tickets with a description of a defect found or a desired new functionality. All the changes to a ticket are recorded in the system, and most systems also record the comments and communications among all the users and developers involved in the task.
\item
  \emph{Messages} between developers and users. In the case of free/open source software, the projects are open to the world, and the messages are archived in the form of mailing lists and social networks, which can also be mined for research purposes. There are also some other open message systems, such as IRC or forums.
\item
  \emph{Meta-data about the projects}. As well as the low-level information of the software processes, we can also find meta-data about the software projects which can be useful for research. This meta-data may include intended audience, programming language, domain of application, license (in the case of open source), etc.
\end{itemize} \\
\includegraphics{figures/artifactsMeta.png} \\
\begin{itemize}
\tightlist
\item
  \emph{Usage data}. There are statistics about software downloads, logs from servers, software reviews, etc.
\end{itemize} \\
Types of information stored in the repositories: \\
\begin{itemize}
\tightlist
\item
  Meta-information about the project itself and the people that participated.
  \begin{itemize}
  \tightlist
  \item
    Low-level information:
    \begin{itemize}
    \tightlist
    \item
      Mailing Lists (ML)
    \item
      Bug Tracking Systems (BTS) or Project Tracker Systems (PTS)
    \item
      Software Configuration Management Systems (SCM)
    \end{itemize}
  \item
    Processed information, for example project management information about the effort estimation and cost of the project.
  \end{itemize}
\item
  Whether the repository is public or not.
\item
  Single project vs.~multiprojects: whether the repository contains information of a single project with multiple versions, or multiple projects and/or versions.
\item
  Type of content: open source or industrial projects.
\item
  Format in which the information is stored, and formats or technologies for accessing the information:
  \begin{itemize}
  \tightlist
  \item
    Text. It can be just plain text, CSV (Comma Separated Values) files, Attribute-Relation File Format (ARFF) or its variants.
  \item
    Through databases, downloading dumps of the database.
  \item
    Remote access, such as APIs of Web services or REST.
  \end{itemize}
\end{itemize} \\
\textbf{Repositories} \\
There are a number of open research repositories in Software Engineering. Among them: \\
\begin{itemize}
\tightlist
\item
  Zenodo. It is becoming a popular site for publishing datasets associated with papers. It provides DOIs for referencing data and code: \url{https://zenodo.org/}
\item
  Spinellis maintains a curated repository on Github: \url{https://github.com/dspinellis/awesome-msr}
\item
  PROMISE (PRedictOr Models In Software Engineering). There is a conference with this name (\href{https://promiseconf.github.io/}{Promise Conference}). Some popular datasets used for benchmarking in many papers can still be found at \url{http://promise.site.uottawa.ca/SERepository/datasets-page.html}. There are some well-known issues with the NASA datasets, and the source code is not available.
\item
  FLOSSMole \citep{HCC06}: \url{http://flossmole.org/}
\item
  FLOSSMetrics \citep{herraiz2009flossmetrics}: \url{http://flossmetrics.org/}
\item
  Qualitas Corpus (QC) \citep{QualitasCorpus2010}: \url{http://qualitascorpus.com/}
\item
  Sourcerer Project \citep{LBNRB09}: \url{http://sourcerer.ics.uci.edu/}
\item
  Ultimate Debian Database (UDD) \citep{NZ10}: \url{http://udd.debian.org/}
\item
  SourceForge Research Data Archive (SRDA) \citep{VanAntwerpM2008}: \url{http://zerlot.cse.nd.edu/}
\item
  Software-artifact Infrastructure Repository (SIR): \url{http://sir.unl.edu}
\item
  OpenHub: \url{https://www.openhub.net/}
\end{itemize} \\
Not openly available (and mainly for effort estimation): \\
\begin{itemize}
\tightlist
\item
  The International Software Benchmarking Standards Group (ISBSG): \url{http://www.isbsg.org/}
\end{itemize} \\
Some papers and publications/theses that have been used in the literature: \\
\begin{itemize}
\tightlist
\item
  Helix Data Set \citep{Vasa2010}: \url{http://www.ict.swin.edu.au/research/projects/helix/}
\item
  Bug Prediction Dataset (BPD) \citep[\citet{ALR11}]{DAmb2010a}: \url{http://bug.inf.usi.ch/}
\item
  Eclipse Bug Data (EBD) \citep[\citet{NZZH12}]{ZPZ07}: \url{http://www.st.cs.uni-saarland.de/softevo/bug-data/eclipse/}
\end{itemize} \\
\textbf{Open Tools/Dashboards to Extract Data} \\
Process to extract data: \\
\includegraphics{./figures/process.png} \\
Within the open source community, several toolkits allow us to extract data that can be used to explore projects: \\
Metrics Grimoire: \url{http://metricsgrimoire.github.io/} \\
\includegraphics{figures/Grimoire.png} \\
SonarQube: \url{http://www.sonarqube.org/} \\
\includegraphics{figures/sonarQube.png} \\
CKJM (OO metrics tool): \url{http://gromit.iiar.pwr.wroc.pl/p_inf/ckjm/}. It collects a large number of object-oriented metrics from code. \\
\textbf{Issues} \\
There are problems, such as different tools reporting different values for the same metric \citep{Lincke2008}. \\
It is well-known that the NASA datasets have some problems: \\
\begin{itemize}
\tightlist
\item
  \citep{Gray2011} The misuse of the NASA metrics data program data sets for automated software defect prediction
\item
  \citep{Shepperd2013} Data Quality: Some Comments on the NASA Software Defect Datasets
\end{itemize} \\
\textbf{Effort Estimation Data in Software Engineering} \\
It is worth highlighting the case of software effort estimation datasets, with their peculiarities. First, most effort estimation datasets used in the literature are scattered through research papers, with the exception of a few kept in the PROMISE repository. Mair et al.~\citeyearpar{MairSJ05} analysed the available datasets in the field of cost estimation, identifying 65 different datasets in 50 papers. \\
Second, their size is very small, with the exception of the ISBSG repository discussed previously (of which a small sample is available through PROMISE) and the China dataset with 499 instances. \\
Third, some datasets can be quite old, from a context and time that is not applicable to current development environments. The authors noted that the oldest datasets (COCOMO, Desharnais, Kemerer, and Albrecht and Gaffney) tend to be the most studied ones and a subset of the most relevant ones. Also, from the artificial intelligence or data mining point of view, effort estimation has mainly been tackled with different types of regression techniques and, more recently, with techniques which are also typically considered under the umbrella of data mining techniques.
However, as the number of examples per dataset increases, other machine learning techniques are also being studied (e.g., Dejaeger et al.~\citeyearpar{Dejaeger_TSE12_EffEst} report on a comparison of several machine learning techniques for effort estimation, with only 5 out of the 9 datasets used being publicly available). From the data mining point of view, the small number of instances hinders the application of machine learning techniques. \\
However, software effort and cost estimation still remains one of the main challenges in software engineering and has attracted a great deal of interest from many researchers \citep{Jorgensen07}. For example, there are continuous analyses of whether software development follows economies or diseconomies of scale (see \citep[\citet{Banker1994},\citet{Kitchenham2002}]{Dolado01_CostEst}). \\
Table \ref{tab:effEstimation} (following Mair et al.~\citeyearpar{MairSJ05}) shows the most open cost/effort datasets available in the literature with their main reference.
\\
Table: \label{tab:effEstimation} Effort estimation datasets from articles \\
\begin{tabular}{lrr}
\toprule
Reference & Instances & Attributes \\
\midrule
Abran and Robillard \citeyearpar{Abran_TSE96_FP} & 21 & 31 \\
Albrecht-Gaffney \citeyearpar{AlbrechtG83} & 24 & 7 \\
Bailey and Basili \citeyearpar{Bailey81} & 18 & 9 \\
Belady and Lehman \citeyearpar{Belady79} & 33 & \\
Boehm (aka COCOMO Dataset) \citeyearpar{Boehm81} & 63 & 43 \\
China dataset\textsuperscript{1} & 499 & 18 \\
Desharnais \citeyearpar{Desharnais88} & 61 & 10 \\
Dolado \citeyearpar{Dolado97} & 24 & 7 \\
Hastings and Sajeev \citeyearpar{Hastings01} & 8 & 14 \\
Heiat and Heiat \citeyearpar{Heiat97} & 35 & 4 \\
Jeffery and Stathis \citeyearpar{Jeffery_ESE96} & 17 & 7 \\
Jorgensen \citeyearpar{Jorgensen04} & 47 & 4 \\
Jorgensen et al.~\citeyearpar{Jorgensen2003} & 20 & 4 \\
Kemerer \citeyearpar{Kemerer87} & 15 & 5 \\
Kitchenham (Mermaid 2) \citeyearpar{Kitchenham2002} & 30 & 5 \\
Kitchenham et al.~(CSC) \citeyearpar{Kitchenham02_CSC} & 145 & 9 \\
Kitchenham and Taylor (ICL) \citeyearpar{Kitchenham85} & 10 & 6 \\
Kitchenham and Taylor (BT System X) \citeyearpar{Kitchenham85} & 10 & 3 \\
Kitchenham and Taylor (BT Software Houses) \citeyearpar{Kitchenham85} & 12 & 6 \\
Li et al.~(USP05) \citeyearpar{LiRAR07}\textsuperscript{2} & 202 & 16 \\
Mišić and Tešić \citeyearpar{Misic19981} & 6 & 16 \\
Maxwell (Dev Effort) \citeyearpar{Maxwell02} & 63 & 32 \\
Maxwell (Maintenance Eff) \citeyearpar{Maxwell02} & 67 & 28 \\
Miyazaki et al.~\citeyearpar{Miyazaki94} & 47 & 9 \\
Moser et al.~\citeyearpar{Moser1999} & 37 & 4 \\
Shepperd and Cartwright \citeyearpar{Shepperd_TSE01} & 39 & 3 \\
Shepperd and Schofield (Telecom 1) \citeyearpar{Shepperd97_Analogy} & 18 & 5 \\
Schofield (real-time 1) \citeyearpar[\citet{Shepperd97_Analogy}]{Schofield98PhD} & 21 & 4 \\
Schofield (Mermaid) \citeyearpar{Schofield98PhD} & 30 & 18 \\
Schofield (Finnish) \citeyearpar{Schofield98PhD} & 39 & 30 \\
Schofield (Hughes) \citeyearpar{Schofield98PhD} & 33 & 14 \\
Woodfield et al.~\citeyearpar{Woodfield81} & 63 & 8 \\
\bottomrule
\end{tabular} \\
\textsuperscript{1}: Donated through PROMISE.
\textsuperscript{2}: Only a subset of the data in the paper; the complete dataset is donated through PROMISE. \\
\textbf{(PART) Exploratory and Descriptive Data Analysis} \\
\textbf{Exploratory Data Analysis} \\
\textbf{Descriptive statistics} \\
The first task with any dataset is to characterize it in terms of summary statistics and graphics. \\
Displaying information graphically will help us to identify the main characteristics of the data. To describe a distribution we often want to know where it is centered and what its spread is (mean, median, quantiles). \\
\textbf{Basic Plots} \\
\begin{itemize}
\tightlist
\item
  \emph{Histogram}: defines a sequence of breaks and then counts the number of observations in the bins formed by the breaks.
\item
  \emph{Boxplot}: used to summarize data succinctly, quickly displaying whether the data is symmetric or has suspected outliers.
\end{itemize} \\
\includegraphics{./figures/boxplotexp.png} \\
\begin{itemize}
\tightlist
\item
  \emph{Q-Q plot}: used to determine whether the data is close to being normally distributed. The quantiles of the standard normal distribution are represented by a straight line, and the normality of the data can be evaluated by observing the extent to which the points lie on that line. When the data is normally distributed around the mean, the mean and the median should be equal. Quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way.
\end{itemize} \\
\includegraphics{./figures/Iqr_with_quantile.png} \\
\begin{itemize}
\tightlist
\item
  \emph{Scatterplot}: provides a graphical view of the relationship between two sets of numbers: one numerical variable plotted against another.
\item
  \emph{Kernel density plot}: visualizes the underlying distribution of a variable. Kernel density estimation is a non-parametric method of estimating the probability density function of a continuous random variable; it helps to identify the distribution of the variable.
\item
  \emph{Violin plot}: a combination of a boxplot and a kernel density plot.
\end{itemize}
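\\
As a quick sketch, the plots listed above can be produced with base R on a built-in dataset (here \texttt{mtcars}, chosen only for illustration; the violin plot needs the separate \texttt{vioplot} package and is omitted):

\begin{verbatim}
data(mtcars)
par(mfrow = c(2, 3))
hist(mtcars$mpg)                         # histogram
boxplot(mtcars$mpg)                      # boxplot
qqnorm(mtcars$mpg); qqline(mtcars$mpg)   # Q-Q plot against the normal
plot(mtcars$wt, mtcars$mpg)              # scatterplot of weight vs. mpg
plot(density(mtcars$mpg))                # kernel density estimate
par(mfrow = c(1, 1))
\end{verbatim}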
\\
\textbf{Normality} \\
\begin{itemize}
\tightlist
\item
  A normal distribution is an arrangement of a data set in which most values cluster in the middle of the range.
\item
  A graphical representation of a normal distribution is sometimes called a \emph{bell curve} because of its shape.
\item
  Many procedures in statistics are based on this property; \emph{parametric} procedures require the normality property.
\item
  In a normal distribution, about 95\% of the probability lies within 2 standard deviations of the mean.
\item
  Two examples: one population with mean 60 and a standard deviation of 1, and the other with mean 60 and \(sd = 4\) (means shifted to 0).
\end{itemize} \\
\begin{verbatim}
# Area within 2 SD of the mean
par(mfrow = c(1, 2))
plot(function(x) dnorm(x, mean = 0, sd = 1), xlim = c(-3, 3),
     main = "SD 1", xlab = "x", ylab = "", cex = 2)
segments(-2, 0, -2, 0.4)
segments(2, 0, 2, 0.4)
# Area within 2 SD of the mean for sd = 4
plot(function(x) dnorm(x, mean = 0, sd = 4), xlim = c(-12, 12),
     main = "SD 4", xlab = "x", ylab = "", cex = 2)
segments(-8, 0, -8, 0.1)
segments(8, 0, 8, 0.1)
\end{verbatim} \\
\includegraphics{DASE_files/figure-latex/SDPlotExample-1.pdf} \\
If we sample from this population we get ``another population'': \\
\begin{verbatim}
# tidy uses the package formatR to format the code
sample.means <- rep(NA, 1000)
for (i in 1:1000) {
  # rnorm generates random numbers from a normal distribution
  sample.40 <- rnorm(40, mean = 60, sd = 4)
  sample.means[i] <- mean(sample.40)
}
means40 <- mean(sample.means)
sd40 <- sd(sample.means)
means40
\end{verbatim} \\
\begin{verbatim}
## [1] 60.0144
\end{verbatim} \\
\begin{verbatim}
sd40
\end{verbatim} \\
\begin{verbatim}
## [1] 0.6592934
\end{verbatim} \\
These sample means are another ``population''. The sampling distribution of the sample mean is normally distributed, meaning that the ``mean of a representative sample provides an estimate of the unknown population mean''.
This is shown in Figure \ref{fig:sampleMeansExample} \\ \texttt{r\ hist(sample.means)} \\ \includegraphics{DASE_files/figure-latex/sampleMeansExample-1.pdf} \\ \#\# Using a running Example to visualise the different plots \\ As a running example we do the following: \\ 1. Set the path to the file \\ 2. Read the \emph{Telecom1} dataset and print out the summary statistics with the command \texttt{summary} \\ \texttt{r\ options(digits=3)\ telecom1\ \textless{}-\ read.table("./datasets/effortEstimation/Telecom1.csv",\ sep=",",header=TRUE,\ stringsAsFactors=FALSE,\ dec\ =\ ".")\ \#read\ data\ summary(telecom1)} \\ \texttt{\#\#\ \ \ \ \ \ \ size\ \ \ \ \ \ \ \ \ \ \ effort\ \ \ \ \ \ \ \ EstTotal\ \#\#\ \ Min.\ \ \ :\ \ 3.0\ \ \ Min.\ \ \ :\ \ 24\ \ \ Min.\ \ \ :\ 30\ \#\#\ \ 1st\ Qu.:\ 37.2\ \ \ 1st\ Qu.:\ 119\ \ \ 1st\ Qu.:142\ \#\#\ \ Median\ :\ 68.5\ \ \ Median\ :\ 222\ \ \ Median\ :289\ \#\#\ \ Mean\ \ \ :100.3\ \ \ Mean\ \ \ :\ 284\ \ \ Mean\ \ \ :320\ \#\#\ \ 3rd\ Qu.:164.0\ \ \ 3rd\ Qu.:\ 352\ \ \ 3rd\ Qu.:472\ \#\#\ \ Max.\ \ \ :284.0\ \ \ Max.\ \ \ :1116\ \ \ Max.\ \ \ :777} \\ * We see that this dataset has three variables (or parameters) and few data points (18) + \emph{size}: the independent variable + \emph{effort}: the dependent variable + \emph{EstTotal}: the estimates coming from an estimation method * Basic Plots \\ ```r par(mfrow=c(1,2)) \#n figures per row size\_telecom1 \textless- telecom1\$size effort\_telecom1 \textless- telecom1\$effort \\ hist(size\_telecom1, col="blue", xlab="size", ylab = "Probability", main = "Histogram of project Size") lines(density(size\_telecom1, na.rm = T, from = 0, to = max(size\_telecom1))) plot(density(size\_telecom1)) ``` \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-30-1.pdf} \\ \texttt{r\ hist(effort\_telecom1,\ col="blue")\ plot(density(effort\_telecom1))} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-30-2.pdf} \\ \texttt{r\ boxplot(size\_telecom1)\ boxplot(effort\_telecom1)} \\ 
\includegraphics{DASE_files/figure-latex/unnamed-chunk-30-3.pdf} \\ \texttt{r\ \#\ violin\ plots\ for\ those\ two\ variables\ library(vioplot)\ vioplot(size\_telecom1,\ names\ =\ \textquotesingle{}\textquotesingle{})\ title("Violin\ Plot\ of\ Project\ Size")\ vioplot(effort\_telecom1,\ names\ =\ \textquotesingle{}\textquotesingle{})\ title("Violin\ Plot\ of\ Project\ Effort")} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-30-4.pdf} \\ \texttt{r\ par(mfrow=c(1,1))\ qqnorm(size\_telecom1,\ main="Q-Q\ Plot\ of\ \textquotesingle{}size\textquotesingle{}")\ qqline(size\_telecom1,\ col=2,\ lwd=2,\ lty=2)\ \#draws\ a\ line\ through\ the\ first\ and\ third\ quartiles} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-30-5.pdf} \\ \texttt{r\ qqnorm(effort\_telecom1,\ \ main="Q-Q\ Plot\ of\ \textquotesingle{}effort\textquotesingle{}")\ qqline(effort\_telecom1)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-30-6.pdf} \\ * We can observe the non-normality of the data. 
\\ * We may look at the possible relationship between size and effort with a scatter plot \\ \texttt{r\ plot(size\_telecom1,\ effort\_telecom1)} \\ \includegraphics{DASE_files/figure-latex/scatterplotExample-1.pdf} \\ \#\#\# Example with the China dataset \\ \texttt{r\ library(foreign)\ china\ \textless{}-\ read.arff("./datasets/effortEstimation/china.arff")\ china\_size\ \textless{}-\ china\$AFP\ summary(china\_size)} \\ \texttt{\#\#\ \ \ \ Min.\ 1st\ Qu.\ \ Median\ \ \ \ Mean\ 3rd\ Qu.\ \ \ \ Max.\ \#\#\ \ \ \ \ \ \ 9\ \ \ \ \ 100\ \ \ \ \ 215\ \ \ \ \ 487\ \ \ \ \ 438\ \ \ 17518} \\ \texttt{r\ china\_effort\ \textless{}-\ china\$Effort\ summary(china\_effort)} \\ \texttt{\#\#\ \ \ \ Min.\ 1st\ Qu.\ \ Median\ \ \ \ Mean\ 3rd\ Qu.\ \ \ \ Max.\ \#\#\ \ \ \ \ \ 26\ \ \ \ \ 704\ \ \ \ 1829\ \ \ \ 3921\ \ \ \ 3826\ \ \ 54620} \\ \texttt{r\ par(mfrow=c(1,2))\ hist(china\_size,\ col="blue",\ xlab="Adjusted\ Function\ Points",\ main="Distribution\ of\ AFP")\ hist(china\_effort,\ col="blue",xlab="Effort",\ main="Distribution\ of\ Effort")} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-31-1.pdf} \\ \texttt{r\ boxplot(china\_size)\ boxplot(china\_effort)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-31-2.pdf} \\ \texttt{r\ qqnorm(china\_size)\ qqline(china\_size)\ qqnorm(china\_effort)\ qqline(china\_effort)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-31-3.pdf} * We observe the non-normality of the data. \\ \#\#\#\# Normality. Galton data \\ This dataset is based on Francis Galton's famous 1885 study of the relationship between the heights of adult children and the heights of their parents. \\ \includegraphics{DASE_files/figure-latex/galtonData-1.pdf} \\ \#\#\#\# Normalization \\ Take \(\log\)s of both variables. For example, with the \emph{China} dataset.
\\ \includegraphics{DASE_files/figure-latex/logExample-1.pdf} \includegraphics{DASE_files/figure-latex/logExample-2.pdf} \\ * If the \(\log\) transformation is used, then the estimation equation is: \(y= e^{b_0 + b_1 \log(x)} \) \\ \#\# Correlation \\ \emph{Correlation} is a statistical relationship between two sets of data. With the whole dataset we may check for the linear correlation of the variables we are interested in. \\ As an example, with the China dataset: \\ \texttt{r\ par(mfrow=c(1,1))\ plot(china\_size,china\_effort)} \\ \includegraphics{DASE_files/figure-latex/correlationChinaDataset-1.pdf} \\ \texttt{r\ cor(china\_size,china\_effort)} \\ \texttt{\#\#\ {[}1{]}\ 0.685} \\ \texttt{r\ cor.test(china\_size,china\_effort)} \\ \texttt{\#\#\ \#\#\ \ Pearson\textquotesingle{}s\ product-moment\ correlation\ \#\#\ \#\#\ data:\ \ china\_size\ and\ china\_effort\ \#\#\ t\ =\ 21,\ df\ =\ 497,\ p-value\ \textless{}2e-16\ \#\#\ alternative\ hypothesis:\ true\ correlation\ is\ not\ equal\ to\ 0\ \#\#\ 95\ percent\ confidence\ interval:\ \#\#\ \ 0.635\ 0.729\ \#\#\ sample\ estimates:\ \#\#\ \ \ cor\ \#\#\ 0.685} \\ \texttt{r\ cor(china\_size,china\_effort,\ method="spearman")} \\ \texttt{\#\#\ {[}1{]}\ 0.649} \\ \texttt{r\ cor(china\_size,china\_effort,\ method="kendall")} \\ \texttt{\#\#\ {[}1{]}\ 0.468} \\ \#\# Confidence Intervals. Bootstrap * Until now we have generated point estimates * A \emph{confidence interval} (CI) is an interval estimate of a population parameter. The parameter can be the mean, the median or other. The frequentist CI is an observed interval that differs from sample to sample; across repeated experiments, it contains the value of the unobservable parameter of interest with a given frequency. The \emph{confidence level} measures the frequency with which the constructed intervals contain the true value of the parameter.
* The construction of a confidence interval with an exact value of confidence level for a distribution requires some statistical properties. Usually, \emph{normality} is one of the properties required for computing confidence intervals. + Not all confidence intervals contain the true value of the parameter. + Simulation of confidence intervals \\ An example from Ugarte et al. \citep{ugarte2015probability} \\ \texttt{r\ set.seed(10)\ norsim(sims\ =\ 100,\ n\ =\ 36,\ mu\ =\ 100,\ sigma\ =\ 18,\ conf.level\ =\ 0.95)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-32-1.pdf} \\ * The range defined by the confidence interval will vary with each sample, because the sample mean and the sample standard deviation vary from sample to sample. * 95\% confidence interval: it is the probability that the hypothetical confidence intervals (that would be computed from the hypothetical repeated samples) will contain the population mean. * The particular interval that we compute on one sample does not mean that the population mean lies within that interval with a probability of 95\%. * Recommended reading: \citep{Hoekstra2014} \emph{Robust misinterpretation of confidence intervals} \\ \#\# Nonparametric Bootstrap * For computing CIs the important thing is to know the assumptions that are made to ``know'' the distribution of the statistic. * There is a way to compute confidence intervals without meeting the requirements of parametric methods. * \textbf{Resampling} or \textbf{bootstrapping} is a method to calculate estimates of a parameter taking samples from the original data and using those \emph{resamples} to calculate statistics. Using the resamples usually gives more accurate results than using the original single sample to calculate an estimate of a parameter.
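The resampling idea can be sketched in a few lines. This is a minimal percentile-bootstrap CI for the mean, written in Python purely for illustration (the chapter's examples are otherwise in R, where \texttt{boot::boot.ci} does this properly); the data values and function name are hypothetical:

```python
import random

def bootstrap_ci(data, stat, n_resamples=2000, alpha=0.05, seed=1):
    """Percentile bootstrap: resample with replacement, compute the statistic
    on each resample, take the alpha/2 and 1 - alpha/2 percentiles."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

sample = [52.7, 53.9, 41.7, 71.5, 47.6, 55.1, 62.2, 56.5, 33.4, 61.8]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(sample, mean)
print(lo, hi)  # interval estimate for the population mean
```

No normality assumption is needed: the interval comes entirely from the empirical distribution of the resampled statistic.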
\\ \includegraphics{figures/bootstrap.png} - An example of bootstrap CI can be found in Chapter \ref{evaluationSE}, ``Evaluation in Software Engineering'' \\ \\ \# Classical Hypothesis Testing \\ - By ``classical'' we mean the standard ``frequentist'' approach to hypothesis testing. The ``frequentist'' approach to probability sees it as the frequency of events in the long run. We repeat experiments over and over and we count the times that our object of interest appears in the sequence. \\ - The classical approach is usually called \textbf{null hypothesis significance testing} (NHST) because the process starts by setting a null hypothesis \(H_0\), which is the opposite of what we think is true. \\ - The rationale of the process is that the statistical hypothesis should be \emph{falsifiable}, that is, we can find evidence that the hypothesis is not true. We try to find evidence against the null hypothesis in order to support our alternative hypothesis \(H_A\) \\ - Usually, the null hypothesis is described as the situation of ``no effect'' and the alternative hypothesis describes the effect that we are looking for. \\ - After collecting data, taking an actual sample, we measure the distance of our parameter of interest from the hypothesized population parameter, and use the facts of the sampling distribution to determine the probability of obtaining such a sample \emph{assuming the hypothesis is true}. This amounts to a test of the hypothesis. \\ - If the probability of our sample given the null hypothesis is high, the sample is consistent with the null hypothesis. Conversely, if the probability of the sample is low (given the hypothesis), this is evidence against the null hypothesis. \\ - The goal of the test is to determine if the null hypothesis can be rejected. A statistical test can either reject or fail to reject a null hypothesis, but never prove it true.
\\ - We can make two types of errors: false positive (Type I) and false negative (Type II) \\ - Type I and Type II errors \\ \includegraphics{figures/typeIandIIwiki.png} \\ - Two-tailed NHST \\ \includegraphics{figures/stat_power_ggplot.png} \\ - One-tailed NHST \\ \includegraphics{figures/One-tailedNHST.png} \\ - An elementary example \\ \texttt{r\ data\ =\ c(52.7,\ 53.9,\ 41.7,\ 71.5,\ 47.6,\ 55.1,\ 62.2,\ 56.5,\ 33.4,\ 61.8,\ 54.3,\ 50.0,\ 45.3,\ 63.4,\ 53.9,\ 65.5,\ 66.6,\ 70.0,\ 52.4,\ 38.6,\ 46.1,\ 44.4,\ 60.7,\ 56.4);\ t.test(data,\ mu=50,\ alternative\ =\ \textquotesingle{}greater\textquotesingle{})} \\ \texttt{\#\#\ \#\#\ \ One\ Sample\ t-test\ \#\#\ \#\#\ data:\ \ data\ \#\#\ t\ =\ 2,\ df\ =\ 23,\ p-value\ =\ 0.02\ \#\#\ alternative\ hypothesis:\ true\ mean\ is\ greater\ than\ 50\ \#\#\ 95\ percent\ confidence\ interval:\ \#\#\ \ 50.9\ \ Inf\ \#\#\ sample\ estimates:\ \#\#\ mean\ of\ x\ \#\#\ \ \ \ \ \ 54.3} \\ - Keeping this simple, we could start hypothesis testing about one sample median with the Wilcoxon test for non-normal distributions. \\ - ``ae'' is the absolute error in the China Test data \\ \texttt{r\ median(ae)} \\ \texttt{\#\#\ {[}1{]}\ 867} \\ \texttt{r\ mean(ae)} \\ \texttt{\#\#\ {[}1{]}\ 1867} \\ \texttt{r\ wilcox.test(ae,\ mu=800,\ alternative\ =\ \textquotesingle{}greater\textquotesingle{})\ \#change\ the\ values\ of\ mu\ and\ see\ the\ results} \\ \texttt{\#\#\ \#\#\ \ Wilcoxon\ signed\ rank\ test\ with\ continuity\ correction\ \#\#\ \#\#\ data:\ \ ae\ \#\#\ V\ =\ 8990,\ p-value\ =\ 8e-04\ \#\#\ alternative\ hypothesis:\ true\ location\ is\ greater\ than\ 800} \\ - Quick introduction at \url{https://psychstatsworkshop.wordpress.com/2014/08/06/lesson-9-hypothesis-testing/} \\ \#\# p-values - p-value: the p-value of a statistical test is the probability, computed assuming that \(H_0\) is true, that the test statistic would take a value as extreme or more extreme than that actually observed.
- \url{http://www.nature.com/news/psychology-journal-bans-p-values-1.17001} - \url{https://www.sciencenews.org/blog/context/p-value-ban-small-step-journal-giant-leap-science} \\ \includegraphics{figures/pvalueBan.png} \\ \\ \# (PART) Preprocessing \{-\} \\ \# Preprocessing \\ Following the data mining process, we describe what is meant by preprocessing, classical supervised models, unsupervised models and evaluation in the context of software engineering, with examples. \\ This task is probably the hardest, and it is where most of the effort is spent in the data mining process. It is quite typical to transform the data, for example, finding inconsistencies, normalising, imputing missing values, transforming input data, merging variables, etc. \\ Typically, pre-processing consists of the following tasks (subprocesses): \\ + Data cleaning (consistency, noise detection, outliers) + Data integration + Data transformation (normalisation, discretisation) and derivation of new attributes from existing ones (e.g., population density from population and area) + Missing data imputation + Data reduction (feature selection and instance selection) \\ \#\# Data \\ \emph{Consistent} data are semantically correct based on real-world knowledge of the problem, i.e., no constraints are violated, and the data can be used for inducing models and for analysis. For example, the LoC or effort is constrained to non-negative values. Consistency can also be considered across multiple attributes, and even across datasets (e.g., the same metrics collected by different tools) \\ \#\# Missing values \\ \emph{Missing values} will have a negative effect when analysing the data or learning models. The results can be biased when compared with the models induced from the complete data, they can be harder to analyse, and depending on the algorithm it may be necessary to discard records with missing values, which is an important problem with small datasets such as the effort estimation ones.
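A minimal illustration of the problem (in Python; the values are hypothetical, and in R a missing value would be \texttt{NA}): with a small dataset, discarding records with missing values loses a large fraction of the data, whereas a simple mean imputation keeps every record.

```python
# Hypothetical effort values with missing entries (None plays the role of NA).
effort = [24, None, 119, 222, None, 352, 1116]

# Listwise deletion: keep only complete records -- 2 of 7 records are lost.
complete = [e for e in effort if e is not None]

# Simple imputation: replace each missing value with the attribute mean.
mean_effort = sum(complete) / len(complete)
imputed = [e if e is not None else mean_effort for e in effort]

print(len(complete), len(imputed))  # 5 vs 7 records retained
```

Mean imputation is only the crudest option; it preserves the sample size but shrinks the variance, which is why the model-based methods discussed below are usually preferred.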
\\ Missing data is typically classified into: * MCAR (Missing Completely at Random) or MAR (Missing At Random) where there is no reason for those missing values and we can assume that the distribution could follow the attribute's distribution. * MNAR (Missing Not At Random) where there is a pattern for those missing values and it may be advisable to check the data gathering process to try to understand why such information is missing. \\ \emph{Imputation} consists of replacing missing values with estimates of those missing values. Many algorithms cannot handle missing values and, therefore, imputation methods are needed. We can use simple approaches such as replacing the missing values with the mean or mode of the attribute. More elaborate approaches include: \\ * EM (Expectation-Maximisation) * Distance-based + \(k\)-NN (\(k\)-Nearest Neighbours) + Clustering \\ In R, a missing value is represented with \texttt{NA} and the analyst must decide what to do with missing data. The simplest approach is to leave out instances (ignore missing -IM-) with missing data. This functionality is supported by many base functions through the \texttt{na.rm} option. \\ The \texttt{mice} R package implements MICE (Multivariate Imputation by Chained Equations), which assumes that data are missing at random. Other packages include \texttt{Amelia}, \texttt{missForest}, \texttt{Hmisc} and \texttt{mi}. \\ \#\# Noise \\ \emph{Noise} refers to imperfections of real-world data that negatively influence the induced machine learning models. Approaches to deal with noisy data include: * Robust learners capable of handling noisy data (e.g., C4.5 through pruning strategies) * Data polishing methods which aim to correct noisy instances prior to training * Noise filters which are used to identify and eliminate noisy instances from the training data. \\ Types of noise data: * Class Noise (aka label noise). + There can be contradictory cases (all attributes have the same value except the class) + Misclassifications.
The class attribute is not labeled with the true label (ground truth) * Attribute Noise. Values of attributes that are noisy, missing or unknown. \\ \#\# Outliers \\ There is a large amount of literature related to outlier detection, and furthermore several definitions of outlier exist. \\ ```r library(DMwR2) library(foreign) \\ kc1 \textless- read.arff("./datasets/defectPred/D1/KC1.arff") ``` \\ The LOF algorithm (\texttt{lofactor}), given a data set, produces a vector of local outlier factors, one for each case. \\ \texttt{r\ kc1num\ \textless{}-\ kc1{[},1:21{]}\ outlier.scores\ \textless{}-\ lofactor(kc1num,\ k=5)\ plot(density(na.omit(outlier.scores)))} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-37-1.pdf} \\ \texttt{r\ outliers\ \textless{}-\ order(outlier.scores,\ decreasing=T){[}1:5{]}\ print(outliers)} \\ \texttt{\#\#\ {[}1{]}\ \ 1\ \ 6\ 14\ 31\ 33} \\ Another simple method, for positive observations, is that of Hidiroglou and Berthelot. \\ \#\# Feature selection \\ Feature Selection (FS) aims at identifying the most relevant attributes from a dataset. It is important in different ways: \\ * A reduced volume of data allows different data mining or searching techniques to be applied. \\ * Irrelevant and redundant attributes can generate less accurate and more complex models. Furthermore, data mining algorithms can be executed faster. \\ * It avoids the collection of data for those irrelevant and redundant attributes in the future. \\ The problem of FS has received a thorough treatment in pattern recognition and machine learning. Most of the FS algorithms tackle the task as a \emph{search} problem, where each state in the search specifies a distinct subset of the possible attributes \citep{BL97}. The search procedure is combined with a criterion to evaluate the merit of each candidate subset of attributes. There are multiple possible combinations of search procedures and attribute measures \citep{LY05}.
\\ There are two major approaches in FS from the method's output point of view: \\ * \emph{Feature subset selection} (FSS), in which a subset of the original attributes is returned. \\ * \emph{Feature ranking} in which attributes are ranked as a list of features which are ordered according to evaluation measures (a subset of features is often selected from the top of the ranking list). \\ FSS algorithms designed with different evaluation criteria broadly fall into two categories: \\ * The \emph{filter} model relies on general characteristics of the data to evaluate and select feature subsets without involving any data mining algorithm. \\ * The \emph{wrapper} model requires one predetermined mining algorithm and uses its performance as the evaluation criterion. It searches for features better suited to the mining algorithm aiming to improve mining performance, but it also tends to be more computationally expensive than the filter model \citep{Lan94, KJ97}. \\ Feature subset algorithms search through candidate feature subsets guided by a certain evaluation measure \citep{LM98} which captures the goodness of each subset. An optimal (or near optimal) subset is selected when the search stops. \\ Some existing evaluation measures that have been shown effective in removing both irrelevant and redundant features include the consistency measure \citep{DLM00}, the correlation measure \citep{Hal99} and the estimated accuracy of a learning algorithm \citep{KJ97}. \\ + \emph{Consistency} measure attempts to find a minimum number of features that separate classes as consistently as the full set of features can. An inconsistency is defined as two instances having the same feature values but different class labels. \\ + \emph{Correlation} measure evaluates the goodness of feature subsets based on the hypothesis that good feature subsets contain features highly correlated to the class, yet uncorrelated to each other.
\\ + \emph{Wrapper-based} attribute selection uses the target learning algorithm to estimate the worth of attribute subsets. The feature subset selection algorithm conducts a search for a good subset using the induction algorithm itself as part of the evaluation function. \\ Langley \citeyearpar{Lan94} notes that feature selection algorithms that search through the space of feature subsets must address four main issues: (i) the starting point of the search, (ii) the organization of the search, (iii) the evaluation of feature subsets and (iv) the criterion used to terminate the search. Different algorithms address these issues differently. \\ It is impractical to look at all possible feature subsets, even with a small number of attributes. Feature selection algorithms usually proceed greedily and can be classified into those that add features to an initially empty set (\emph{forward selection}) and those that remove features from an initially complete set (\emph{backwards elimination}). Hybrid methods both add and remove features as the algorithm progresses. Forward selection is much faster than backward elimination and therefore scales better to large data sets. A wide range of search strategies can be used: best-first, branch-and-bound, simulated annealing, genetic algorithms (see Kohavi and John \citeyearpar{KJ97} for a review).
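The greedy forward-selection strategy just described can be sketched as follows (in Python for illustration; \texttt{score} stands in for any subset-evaluation measure such as consistency, correlation or estimated accuracy of a learner, and the feature names are hypothetical):

```python
def forward_selection(features, score):
    """Greedy forward selection: start from the empty set and repeatedly
    add the single feature that most improves the score; stop when none helps."""
    selected, best = [], score([])
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        s, f = max((score(selected + [f]), f) for f in candidates)
        if s <= best:  # termination criterion: no candidate improves the score
            break
        selected.append(f)
        best = s
    return selected

# Toy evaluation measure: rewards the (hypothetically) relevant features
# "a" and "b" and penalises subset size, so irrelevant ones are never added.
relevance = {"a": 0.6, "b": 0.3}
toy_score = lambda subset: sum(relevance.get(f, 0.0) for f in subset) - 0.05 * len(subset)

print(forward_selection(["a", "b", "c", "d"], toy_score))  # ['a', 'b']
```

Backward elimination is the mirror image: start from the full set and greedily remove the feature whose removal most improves (or least degrades) the score.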
\\ \#\#\# FSelector package in R \\ The FSelector package in R implements many algorithms available in Weka. \\ ```r library(FSelector) library(foreign) \\ cm1 \textless- read.arff("./datasets/defectPred/D1/CM1.arff") \\ cm1RFWeights \textless- random.forest.importance(Defective \textasciitilde{} ., cm1) cutoff.biggest.diff(cm1RFWeights) ``` \\ \texttt{\#\#\ {[}1{]}\ "LOC\_COMMENTS"\ \ \ \ \ \ \ \ \ "NUM\_UNIQUE\_OPERATORS"} \\ Using the Gain Ratio measure for ranking: \\ \texttt{r\ cm1GRWeights\ \textless{}-\ gain.ratio(Defective\ \textasciitilde{}\ .,\ cm1)\ cm1GRWeights} \\ \texttt{\#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ attr\_importance\ \#\#\ LOC\_BLANK\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ BRANCH\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ CALL\_PAIRS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ LOC\_CODE\_AND\_COMMENT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ LOC\_COMMENTS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0754\ \#\#\ CONDITION\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ CYCLOMATIC\_COMPLEXITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ CYCLOMATIC\_DENSITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ DECISION\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ DECISION\_DENSITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ DESIGN\_COMPLEXITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ DESIGN\_DENSITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ EDGE\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ ESSENTIAL\_COMPLEXITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ ESSENTIAL\_DENSITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ LOC\_EXECUTABLE\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 
0.0888\ \#\#\ PARAMETER\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ HALSTEAD\_CONTENT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0701\ \#\#\ HALSTEAD\_DIFFICULTY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ HALSTEAD\_EFFORT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0375\ \#\#\ HALSTEAD\_ERROR\_EST\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0448\ \#\#\ HALSTEAD\_LENGTH\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0425\ \#\#\ HALSTEAD\_LEVEL\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ HALSTEAD\_PROG\_TIME\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0375\ \#\#\ HALSTEAD\_VOLUME\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0471\ \#\#\ MAINTENANCE\_SEVERITY\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ MODIFIED\_CONDITION\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ MULTIPLE\_CONDITION\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ NODE\_COUNT\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ NORMALIZED\_CYLOMATIC\_COMPLEXITY\ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ NUM\_OPERANDS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0000\ \#\#\ NUM\_OPERATORS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0471\ \#\#\ NUM\_UNIQUE\_OPERANDS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0589\ \#\#\ NUM\_UNIQUE\_OPERATORS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0616\ \#\#\ NUMBER\_OF\_LINES\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0573\ \#\#\ PERCENT\_COMMENTS\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0663\ \#\#\ LOC\_TOTAL\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0.0763} \\ \texttt{r\ cutoff.biggest.diff(cm1GRWeights)} \\ \texttt{\#\#\ \ {[}1{]}\ "LOC\_EXECUTABLE"\ \ \ \ \ \ \ "LOC\_TOTAL"\ \ \ \ \ \ \ \ \ \ \ \ "LOC\_COMMENTS"\ \#\#\ \ {[}4{]}\ "HALSTEAD\_CONTENT"\ \ \ \ \ "PERCENT\_COMMENTS"\ \ \ \ \ "NUM\_UNIQUE\_OPERATORS"\ \#\#\ \ {[}7{]}\ "NUM\_UNIQUE\_OPERANDS"\ 
\ "NUMBER\_OF\_LINES"\ \ \ \ \ \ "HALSTEAD\_VOLUME"\ \#\#\ {[}10{]}\ "NUM\_OPERATORS"\ \ \ \ \ \ \ \ "HALSTEAD\_ERROR\_EST"\ \ \ "HALSTEAD\_LENGTH"\ \#\#\ {[}13{]}\ "HALSTEAD\_EFFORT"\ \ \ \ \ \ "HALSTEAD\_PROG\_TIME"} \\ \texttt{r\ \#\ After\ assigning\ weights,\ we\ can\ select\ the\ statistically\ significant\ ones\ cm1X2Weights\ \textless{}-\ chi.squared(Defective\ \textasciitilde{}\ .,\ cm1)\ cutoff.biggest.diff(cm1X2Weights)} \\ \texttt{\#\#\ \ {[}1{]}\ "LOC\_EXECUTABLE"\ \ \ \ \ \ \ "LOC\_COMMENTS"\ \ \ \ \ \ \ \ \ "LOC\_TOTAL"\ \#\#\ \ {[}4{]}\ "NUM\_UNIQUE\_OPERATORS"\ "NUM\_UNIQUE\_OPERANDS"\ \ "NUMBER\_OF\_LINES"\ \#\#\ \ {[}7{]}\ "HALSTEAD\_VOLUME"\ \ \ \ \ \ "NUM\_OPERATORS"\ \ \ \ \ \ \ \ "HALSTEAD\_ERROR\_EST"\ \#\#\ {[}10{]}\ "HALSTEAD\_CONTENT"\ \ \ \ \ "HALSTEAD\_EFFORT"\ \ \ \ \ \ "HALSTEAD\_PROG\_TIME"\ \#\#\ {[}13{]}\ "HALSTEAD\_LENGTH"\ \ \ \ \ \ "PERCENT\_COMMENTS"} \\ Using CFS attribute selection \\ ```r library(FSelector) library(foreign) \\ cm1 \textless- read.arff("./datasets/defectPred/D1/CM1.arff") \\ result \textless- cfs(Defective \textasciitilde{} ., cm1) f \textless- as.simple.formula(result, "Defective") f ``` \\ \texttt{\#\#\ Defective\ \textasciitilde{}\ LOC\_COMMENTS\ +\ LOC\_EXECUTABLE\ +\ HALSTEAD\_CONTENT\ +\ \#\#\ \ \ \ \ NUM\_UNIQUE\_OPERATORS\ +\ PERCENT\_COMMENTS\ \#\#\ \textless{}environment:\ 0x55d7e72edbe0\textgreater{}} \\ Other packages for feature selection in R include \texttt{FSelectorRcpp}, which re-implements the FSelector algorithms without WEKA dependencies. \\ Another popular package is \texttt{Boruta}, which performs feature selection based on Random Forest. \\ \#\# Instance selection \\ Removal of samples (complementary to the removal of attributes) in order to scale down the dataset prior to learning a model so that there is (almost) no performance loss.
\\ There are two types of processes: \\ * \emph{Prototype Selection} (PS) \citep{GDCH12} when the subset is used with a distance-based method (kNN) \\ * \emph{Training Set Selection} (TSS) \citep{CanoHL07} in which an actual model is learned. \\ It is also a search problem, as with \emph{feature selection}. Garcia et al. \citeyearpar{GDCH12} provide a comprehensive overview of the topic. \\ \#\# Discretization \\ This process transforms continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. \\ \#\# Correlation Coefficient and Covariance for Numeric Data \\ Two random variables \(x\) and \(y\) are called independent if the probability distribution of one variable is not affected by the value of the other. The chi-squared statistic compares observed frequencies \(O_k\) with the frequencies \(E_k\) expected under independence: \\ \(\chi^2=\sum_{k=1}^{n} \frac{(O_k - E_k)^2}{E_k}\) \\ \texttt{r\ chisq.test(kc1\$LOC\_BLANK,kc1\$BRANCH\_TOTAL)} \\ \texttt{\#\#\ \#\#\ \ Chi-squared\ test\ for\ given\ probabilities\ \#\#\ \#\#\ data:\ \ kc1\$LOC\_BLANK\ \#\#\ X-squared\ =\ 17705,\ df\ =\ 2095,\ p-value\ \textless{}2e-16} \\ \texttt{r\ chisq.test(kc1\$DESIGN\_COMPLEXITY,kc1\$CYCLOMATIC\_COMPLEXITY)} \\ \texttt{\#\#\ \#\#\ \ Pearson\textquotesingle{}s\ Chi-squared\ test\ \#\#\ \#\#\ data:\ \ kc1\$DESIGN\_COMPLEXITY\ and\ kc1\$CYCLOMATIC\_COMPLEXITY\ \#\#\ X-squared\ =\ 25101,\ df\ =\ 696,\ p-value\ \textless{}2e-16} \\ \#\# Normalization \\ \#\#\# Min-Max Normalization \\ \(z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}\) \\ \texttt{r\ library(caret)\ \#\ method\ =\ "range"\ rescales\ each\ attribute\ to\ {[}0,\ 1{]}\ preObj\ \textless{}-\ preProcess(kc1{[},\ -22{]},\ method\ =\ "range")} \\ \#\#\# Z-score normalization TBD \\ \#\# Transformations \\ \#\#\# Linear Transformations and Quadratic Transformations TBD \\ \#\#\# Box-Cox transformation TBD \\ \#\#\# Nominal to Binary transformations TBD \\ \#\# Preprocessing in R \\ \#\#\# The \texttt{dplyr} package \\ The \emph{\href{https://cran.r-project.org/web/packages/dplyr/index.html}{dplyr}}
package, created by Hadley Wickham, provides a grammar of data manipulation; some functions are similar to SQL syntax. Key functions in dplyr include: \\ + select: select columns from a dataframe + filter: select rows from a dataframe + summarize: compute summary statistics, typically on grouped data + group\_by: group by a factor variable + arrange: order the dataset + joins: as in SQL, e.g., left join \\ Tutorial: \url{https://github.com/justmarkham/dplyr-tutorial} \\ Examples \\ \texttt{r\ library(dplyr)} \\ Describe the dataframe: \\ \texttt{r\ str(kc1)} \\ \texttt{\#\#\ \textquotesingle{}data.frame\textquotesingle{}:\ \ \ \ 2096\ obs.\ of\ \ 22\ variables:\ \#\#\ \ \$\ LOC\_BLANK\ \ \ \ \ \ \ \ \ \ \ \ :\ num\ \ 0\ 0\ 0\ 0\ 2\ 0\ 0\ 0\ 0\ 2\ ...\ \#\#\ \ \$\ BRANCH\_COUNT\ \ \ \ \ \ \ \ \ :\ num\ \ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ ...\ \#\#\ \ \$\ LOC\_CODE\_AND\_COMMENT\ :\ num\ \ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ ...\ \#\#\ \ \$\ LOC\_COMMENTS\ \ \ \ \ \ \ \ \ :\ num\ \ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ ...\ \#\#\ \ \$\ CYCLOMATIC\_COMPLEXITY:\ num\ \ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ ...\ \#\#\ \ \$\ DESIGN\_COMPLEXITY\ \ \ \ :\ num\ \ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ ...\ \#\#\ \ \$\ ESSENTIAL\_COMPLEXITY\ :\ num\ \ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ ...\ \#\#\ \ \$\ LOC\_EXECUTABLE\ \ \ \ \ \ \ :\ num\ \ 3\ 1\ 1\ 1\ 8\ 3\ 1\ 1\ 1\ 9\ ...\ \#\#\ \ \$\ HALSTEAD\_CONTENT\ \ \ \ \ :\ num\ \ 11.6\ 0\ 0\ 0\ 18\ ...\ \#\#\ \ \$\ HALSTEAD\_DIFFICULTY\ \ :\ num\ \ 2.67\ 0\ 0\ 0\ 3.5\ 2.67\ 0\ 0\ 0\ 3.75\ ...\ \#\#\ \ \$\ HALSTEAD\_EFFORT\ \ \ \ \ \ :\ num\ \ 82.3\ 0\ 0\ 0\ 220.9\ ...\ \#\#\ \ \$\ HALSTEAD\_ERROR\_EST\ \ \ :\ num\ \ 0.01\ 0\ 0\ 0\ 0.02\ 0.01\ 0\ 0\ 0\ 0.04\ ...\ \#\#\ \ \$\ HALSTEAD\_LENGTH\ \ \ \ \ \ :\ num\ \ 11\ 1\ 1\ 1\ 19\ 11\ 1\ 1\ 1\ 29\ ...\ \#\#\ \ \$\ HALSTEAD\_LEVEL\ \ \ \ \ \ \ :\ num\ \ 0.38\ 0\ 0\ 0\ 0.29\ 0.38\ 0\ 0\ 0\ 0.27\ ...\ \#\#\ \ \$\ HALSTEAD\_PROG\_TIME\ \ \ :\ num\ \ 4.57\ 0\ 0\ 0\ 12.27\ ...\ \#\#\ \ \$\ HALSTEAD\_VOLUME\ \ \ \ \ \ :\ num\ \ 30.9\ 0\ 0\ 0\ 63.1\ ...\ \#\#\ \ \$\ 
NUM\_OPERANDS\ \ \ \ \ \ \ \ \ :\ num\ \ 4\ 0\ 0\ 0\ 7\ 4\ 0\ 0\ 0\ 10\ ...\ \#\#\ \ \$\ NUM\_OPERATORS\ \ \ \ \ \ \ \ :\ num\ \ 7\ 1\ 1\ 1\ 12\ 7\ 1\ 1\ 1\ 19\ ...\ \#\#\ \ \$\ NUM\_UNIQUE\_OPERANDS\ \ :\ num\ \ 3\ 0\ 0\ 0\ 5\ 3\ 0\ 0\ 0\ 8\ ...\ \#\#\ \ \$\ NUM\_UNIQUE\_OPERATORS\ :\ num\ \ 4\ 1\ 1\ 1\ 5\ 4\ 1\ 1\ 1\ 6\ ...\ \#\#\ \ \$\ LOC\_TOTAL\ \ \ \ \ \ \ \ \ \ \ \ :\ num\ \ 5\ 3\ 3\ 3\ 12\ 5\ 3\ 3\ 3\ 13\ ...\ \#\#\ \ \$\ Defective\ \ \ \ \ \ \ \ \ \ \ \ :\ Factor\ w/\ 2\ levels\ "N","Y":\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ 1\ ...} \\ \texttt{tbl\_df} creates a ``local data frame'' as a wrapper for better printing \\ \texttt{r\ kc1\_tbl\ \textless{}-\ tbl\_df(kc1)\ \#deprecated} \\ \texttt{\#\#\ Warning:\ \textasciigrave{}tbl\_df()\textasciigrave{}\ was\ deprecated\ in\ dplyr\ 1.0.0.\ \#\#\ Please\ use\ \textasciigrave{}tibble::as\_tibble()\textasciigrave{}\ instead.\ \#\#\ This\ warning\ is\ displayed\ once\ every\ 8\ hours.\ \#\#\ Call\ \textasciigrave{}lifecycle::last\_lifecycle\_warnings()\textasciigrave{}\ to\ see\ where\ this\ warning\ was\ generated.} \\ \texttt{r\ kc1\_tbl\ \textless{}-\ tibble(kc1)} \\ Filter: \\ \texttt{r\ \#\ Filter\ rows:\ use\ comma\ or\ \&\ to\ represent\ AND\ condition\ filter(kc1\_tbl,\ Defective\ ==\ "Y"\ \&\ LOC\_BLANK\ !=\ 0)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 251\ x\ 22\ \#\#\ \ \ \ LOC\_BLANK\ BRANCH\_COUNT\ LOC\_CODE\_AND\_COMMENT\ LOC\_COMMENTS\ CYCLOMATIC\_COMPLEXI\textasciitilde{}\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 6\ \ \ \ \ \ \ \ \ \ \ 21\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ 10\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 11\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 5\ \ \ \ \ \ \ \ \ \ \ 15\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ 
\ \ \ \ \ \ \ \ \ \ \ 8\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ 5\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 4\ \ \ \ \ \ \ \ \ \ \ \ 5\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ 11\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 6\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ 23\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 12\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ 11\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 6\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ 13\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 7\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ 17\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 9\ \#\#\ 10\ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \#\ ...\ with\ 241\ more\ rows,\ and\ 17\ more\ variables:\ DESIGN\_COMPLEXITY\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ ESSENTIAL\_COMPLEXITY\ \textless{}dbl\textgreater{},\ LOC\_EXECUTABLE\ \textless{}dbl\textgreater{},\ HALSTEAD\_CONTENT\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_DIFFICULTY\ \textless{}dbl\textgreater{},\ HALSTEAD\_EFFORT\ \textless{}dbl\textgreater{},\ HALSTEAD\_ERROR\_EST\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_LENGTH\ \textless{}dbl\textgreater{},\ HALSTEAD\_LEVEL\ \textless{}dbl\textgreater{},\ HALSTEAD\_PROG\_TIME\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_VOLUME\ \textless{}dbl\textgreater{},\ 
NUM\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_OPERATORS\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ NUM\_UNIQUE\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_UNIQUE\_OPERATORS\ \textless{}dbl\textgreater{},\ LOC\_TOTAL\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ Defective\ \textless{}fct\textgreater{}} \\ Another operator is \texttt{\%in\%}. \\ Select: \\ \texttt{r\ select(kc1\_tbl,\ contains("LOC"),\ Defective)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2,096\ x\ 6\ \#\#\ \ \ \ LOC\_BLANK\ LOC\_CODE\_AND\_COMME\textasciitilde{}\ LOC\_COMMENTS\ LOC\_EXECUTABLE\ LOC\_TOTAL\ Defective\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 8\ \ \ \ \ \ \ \ 12\ N\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ 
\ \ \ \ \ \ \ 3\ N\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ 10\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 9\ \ \ \ \ \ \ \ 13\ N\ \#\#\ \#\ ...\ with\ 2,086\ more\ rows} \\ The result keeps only the columns whose names contain ``LOC'', plus \texttt{Defective}; note that \texttt{kc1\_tbl} itself is unchanged. \\ Filter and Select together: \\ \texttt{r\ \#\ nesting\ method\ filter(select(kc1\_tbl,\ contains("LOC"),\ Defective),\ Defective\ !=0)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2,096\ x\ 6\ \#\#\ \ \ \ LOC\_BLANK\ LOC\_CODE\_AND\_COMME\textasciitilde{}\ LOC\_COMMENTS\ LOC\_EXECUTABLE\ LOC\_TOTAL\ Defective\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 8\ \ \ \ \ \ \ \ 12\ N\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 8\ 
\ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ 10\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 9\ \ \ \ \ \ \ \ 13\ N\ \#\#\ \#\ ...\ with\ 2,086\ more\ rows} \\ Note that \texttt{Defective} is a factor with levels ``N'' and ``Y'', so the condition \texttt{Defective\ !=\ 0} is true for every row and all 2,096 rows are kept; use \texttt{Defective\ ==\ "Y"} to keep only the defective modules. The same operations are easier to read using the chaining method: \\ \texttt{r\ \#\ chaining\ method\ kc1\_tbl\ \%\textgreater{}\%\ select(contains("LOC"),\ Defective)\ \%\textgreater{}\%\ filter(Defective\ !=0)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2,096\ x\ 6\ \#\#\ \ \ \ LOC\_BLANK\ LOC\_CODE\_AND\_COMME\textasciitilde{}\ LOC\_COMMENTS\ LOC\_EXECUTABLE\ LOC\_TOTAL\ Defective\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 8\ \ \ \ \ \ \ \ 12\ N\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ 5\ N\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ 
\ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ 3\ N\ \#\#\ 10\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ 9\ \ \ \ \ \ \ \ 13\ N\ \#\#\ \#\ ...\ with\ 2,086\ more\ rows} \\ Arrange ascending: \\ \texttt{r\ kc1\_tbl\ \%\textgreater{}\%\ select(LOC\_TOTAL,\ Defective)\ \%\textgreater{}\%\ arrange(LOC\_TOTAL)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2,096\ x\ 2\ \#\#\ \ \ \ LOC\_TOTAL\ Defective\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ 10\ \ \ \ \ \ \ \ \ 1\ N\ \#\#\ \#\ ...\ with\ 2,086\ more\ rows} \\ Arrange descending: \\ \texttt{r\ kc1\_tbl\ \%\textgreater{}\%\ select(LOC\_TOTAL,\ Defective)\ \%\textgreater{}\%\ arrange(desc(LOC\_TOTAL))} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2,096\ x\ 2\ \#\#\ \ \ \ LOC\_TOTAL\ Defective\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ 288\ Y\ \#\#\ \ 2\ \ \ \ \ \ \ 286\ Y\ \#\#\ \ 3\ \ \ \ \ \ \ 283\ N\ \#\#\ \ 4\ \ \ \ \ \ \ 220\ Y\ \#\#\ \ 5\ \ \ \ \ \ \ 217\ Y\ \#\#\ \ 6\ \ \ \ \ \ \ 210\ N\ \#\#\ \ 7\ \ \ \ \ \ \ 205\ Y\ \#\#\ \ 8\ \ \ \ \ \ \ 184\ Y\ \#\#\ \ 9\ \ \ \ \ \ \ 179\ Y\ \#\#\ 10\ \ \ \ \ \ \ 176\ Y\ \#\#\ \#\ ...\ with\ 2,086\ more\ rows} \\ Mutate: \\ \texttt{r\ kc1\_tbl\ \%\textgreater{}\%\ filter(Defective\ ==\ "Y")\ 
\%\textgreater{}\%\ select(NUM\_OPERANDS,\ NUM\_OPERATORS,\ Defective)\ \%\textgreater{}\%\ mutate(HalsteadLength\ =\ NUM\_OPERANDS\ +\ NUM\_OPERATORS)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 325\ x\ 4\ \#\#\ \ \ \ NUM\_OPERANDS\ NUM\_OPERATORS\ Defective\ HalsteadLength\ \#\#\ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \textless{}fct\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ \ \ 64\ \ \ \ \ \ \ \ \ \ \ 107\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 171\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ \ \ 52\ \ \ \ \ \ \ \ \ \ \ \ 89\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 141\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ \ \ 17\ \ \ \ \ \ \ \ \ \ \ \ 41\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 58\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ \ \ 41\ \ \ \ \ \ \ \ \ \ \ \ 74\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 115\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ \ \ 54\ \ \ \ \ \ \ \ \ \ \ \ 95\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 149\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ \ \ 75\ \ \ \ \ \ \ \ \ \ \ 156\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 231\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ \ \ 54\ \ \ \ \ \ \ \ \ \ \ \ 95\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 149\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ \ \ 56\ \ \ \ \ \ \ \ \ \ \ \ 99\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 155\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ \ \ 69\ \ \ \ \ \ \ \ \ \ \ 124\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 193\ \#\#\ 10\ \ \ \ \ \ \ \ \ \ \ 44\ \ \ \ \ \ \ \ \ \ \ \ 60\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 104\ \#\#\ \#\ ...\ with\ 315\ more\ rows} \\ \texttt{summarise}: Reduce variables to values \\ \texttt{r\ \#\ Create\ a\ table\ grouped\ by\ Defective,\ and\ then\ summarise\ each\ group\ by\ taking\ the\ mean\ of\ loc\ kc1\_tbl\ \%\textgreater{}\%\ group\_by(Defective)\ \%\textgreater{}\%\ summarise(avg\_loc\ =\ mean(LOC\_TOTAL,\ na.rm=TRUE))} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2\ x\ 2\ \#\#\ \ \ Defective\ avg\_loc\ \#\#\ \ \ \textless{}fct\textgreater{}\ \ \ \ \ \ \ 
\textless{}dbl\textgreater{}\ \#\#\ 1\ N\ \ \ \ \ \ \ \ \ \ \ \ 15.9\ \#\#\ 2\ Y\ \ \ \ \ \ \ \ \ \ \ \ 44.7} \\ \texttt{r\ \#\ Group\ by\ Defective\ and\ summarise\ each\ group\ with\ the\ mean,\ min\ and\ max\ of\ two\ metrics\ kc1\_tbl\ \%\textgreater{}\%\ group\_by(Defective)\ \%\textgreater{}\%\ summarise\_each(funs(mean,\ min,\ max),\ BRANCH\_COUNT,\ LOC\_TOTAL)} \\ \texttt{\#\#\ Warning:\ \textasciigrave{}summarise\_each\_()\textasciigrave{}\ was\ deprecated\ in\ dplyr\ 0.7.0.\ \#\#\ Please\ use\ \textasciigrave{}across()\textasciigrave{}\ instead.\ \#\#\ This\ warning\ is\ displayed\ once\ every\ 8\ hours.\ \#\#\ Call\ \textasciigrave{}lifecycle::last\_lifecycle\_warnings()\textasciigrave{}\ to\ see\ where\ this\ warning\ was\ generated.} \\ \texttt{\#\#\ Warning:\ \textasciigrave{}funs()\textasciigrave{}\ was\ deprecated\ in\ dplyr\ 0.8.0.\ \#\#\ Please\ use\ a\ list\ of\ either\ functions\ or\ lambdas:\ \#\#\ \#\#\ \ \ \#\ Simple\ named\ list:\ \#\#\ \ \ list(mean\ =\ mean,\ median\ =\ median)\ \#\#\ \#\#\ \ \ \#\ Auto\ named\ with\ \textasciigrave{}tibble::lst()\textasciigrave{}:\ \#\#\ \ \ tibble::lst(mean,\ median)\ \#\#\ \#\#\ \ \ \#\ Using\ lambdas\ \#\#\ \ \ list(\textasciitilde{}\ mean(.,\ trim\ =\ .2),\ \textasciitilde{}\ median(.,\ na.rm\ =\ TRUE))\ \#\#\ This\ warning\ is\ displayed\ once\ every\ 8\ hours.\ \#\#\ Call\ \textasciigrave{}lifecycle::last\_lifecycle\_warnings()\textasciigrave{}\ to\ see\ where\ this\ warning\ was\ generated.} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2\ x\ 7\ \#\#\ \ \ Defective\ BRANCH\_COUNT\_mean\ LOC\_TOTAL\_mean\ BRANCH\_COUNT\_min\ LOC\_TOTAL\_min\ \#\#\ \ \ \textless{}fct\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \#\#\ 1\ N\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 3.68\ \ \ \ \ \ \ \ \ \ \ 15.9\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ 
\ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ 2\ Y\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 10.1\ \ \ \ \ \ \ \ \ \ \ \ 44.7\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ 2\ \#\#\ \#\ ...\ with\ 2\ more\ variables:\ BRANCH\_COUNT\_max\ \textless{}dbl\textgreater{},\ LOC\_TOTAL\_max\ \textless{}dbl\textgreater{}} \\ It seems that the average size (\texttt{LOC\_TOTAL}) of \emph{Defective} modules is larger than that of the \emph{Non-Defective} ones. We can count the modules in each class with: \\ \texttt{r\ \#\ n()\ or\ tally\ kc1\_tbl\ \%\textgreater{}\%\ group\_by(Defective)\ \%\textgreater{}\%\ tally()} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2\ x\ 2\ \#\#\ \ \ Defective\ \ \ \ \ n\ \#\#\ \ \ \textless{}fct\textgreater{}\ \ \ \ \ \textless{}int\textgreater{}\ \#\#\ 1\ N\ \ \ \ \ \ \ \ \ \ 1771\ \#\#\ 2\ Y\ \ \ \ \ \ \ \ \ \ \ 325} \\ With only 325 defective modules out of 2,096, it is an imbalanced dataset\ldots{} \\ \texttt{r\ \#\ randomly\ sample\ a\ fixed\ number\ of\ rows,\ without\ replacement\ kc1\_tbl\ \%\textgreater{}\%\ sample\_n(2)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 2\ x\ 22\ \#\#\ \ \ LOC\_BLANK\ BRANCH\_COUNT\ LOC\_CODE\_AND\_COMMENT\ LOC\_COMMENTS\ CYCLOMATIC\_COMPLEXITY\ \#\#\ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \#\#\ 1\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 2\ \#\#\ 2\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \#\ ...\ with\ 17\ more\ variables:\ DESIGN\_COMPLEXITY\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ ESSENTIAL\_COMPLEXITY\ \textless{}dbl\textgreater{},\ LOC\_EXECUTABLE\ \textless{}dbl\textgreater{},\ HALSTEAD\_CONTENT\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_DIFFICULTY\ 
\textless{}dbl\textgreater{},\ HALSTEAD\_EFFORT\ \textless{}dbl\textgreater{},\ HALSTEAD\_ERROR\_EST\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_LENGTH\ \textless{}dbl\textgreater{},\ HALSTEAD\_LEVEL\ \textless{}dbl\textgreater{},\ HALSTEAD\_PROG\_TIME\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_VOLUME\ \textless{}dbl\textgreater{},\ NUM\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_OPERATORS\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ NUM\_UNIQUE\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_UNIQUE\_OPERATORS\ \textless{}dbl\textgreater{},\ LOC\_TOTAL\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ Defective\ \textless{}fct\textgreater{}} \\ \texttt{r\ \#\ randomly\ sample\ a\ fraction\ of\ rows,\ with\ replacement\ kc1\_tbl\ \%\textgreater{}\%\ sample\_frac(0.05,\ replace=TRUE)} \\ \texttt{\#\#\ \#\ A\ tibble:\ 105\ x\ 22\ \#\#\ \ \ \ LOC\_BLANK\ BRANCH\_COUNT\ LOC\_CODE\_AND\_COMMENT\ LOC\_COMMENTS\ CYCLOMATIC\_COMPLEXI\textasciitilde{}\ \#\#\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ \#\#\ \ 1\ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ 3\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 2\ \#\#\ \ 2\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \ 3\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \ 4\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ 5\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 3\ \#\#\ \ 5\ \ \ \ \ \ \ \ \ 2\ \ \ \ \ \ \ \ \ \ \ \ 7\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ 
\ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 4\ \#\#\ \ 6\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \ 7\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \ 8\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \ 9\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ 10\ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0\ \ \ \ \ \ \ \ \ \ \ \ 1\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 1\ \#\#\ \#\ ...\ with\ 95\ more\ rows,\ and\ 17\ more\ variables:\ DESIGN\_COMPLEXITY\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ ESSENTIAL\_COMPLEXITY\ \textless{}dbl\textgreater{},\ LOC\_EXECUTABLE\ \textless{}dbl\textgreater{},\ HALSTEAD\_CONTENT\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_DIFFICULTY\ \textless{}dbl\textgreater{},\ HALSTEAD\_EFFORT\ \textless{}dbl\textgreater{},\ HALSTEAD\_ERROR\_EST\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_LENGTH\ \textless{}dbl\textgreater{},\ HALSTEAD\_LEVEL\ \textless{}dbl\textgreater{},\ HALSTEAD\_PROG\_TIME\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ HALSTEAD\_VOLUME\ \textless{}dbl\textgreater{},\ NUM\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_OPERATORS\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ NUM\_UNIQUE\_OPERANDS\ \textless{}dbl\textgreater{},\ NUM\_UNIQUE\_OPERATORS\ \textless{}dbl\textgreater{},\ LOC\_TOTAL\ \textless{}dbl\textgreater{},\ \#\#\ \#\ \ \ Defective\ \textless{}fct\textgreater{}} \\ \texttt{r\ \#\ Better\ formatting\ adapted\ to\ the\ screen\ width\ glimpse(kc1\_tbl)} \\ \texttt{\#\#\ Rows:\ 2,096\ \#\#\ Columns:\ 22\ \#\#\ 
\$\ LOC\_BLANK\ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 0,\ 0,\ 0,\ 0,\ 2,\ 0,\ 0,\ 0,\ 0,\ 2,\ 2,\ 0,\ 2,\ 1,\ 2,\ 2,\ \textasciitilde{}\ \#\#\ \$\ BRANCH\_COUNT\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 5,\ \textasciitilde{}\ \#\#\ \$\ LOC\_CODE\_AND\_COMMENT\ \ \textless{}dbl\textgreater{}\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ \textasciitilde{}\ \#\#\ \$\ LOC\_COMMENTS\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ 0,\ \textasciitilde{}\ \#\#\ \$\ CYCLOMATIC\_COMPLEXITY\ \textless{}dbl\textgreater{}\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 3,\ \textasciitilde{}\ \#\#\ \$\ DESIGN\_COMPLEXITY\ \ \ \ \ \textless{}dbl\textgreater{}\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 2,\ \textasciitilde{}\ \#\#\ \$\ ESSENTIAL\_COMPLEXITY\ \ \textless{}dbl\textgreater{}\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 1,\ 3,\ \textasciitilde{}\ \#\#\ \$\ LOC\_EXECUTABLE\ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 3,\ 1,\ 1,\ 1,\ 8,\ 3,\ 1,\ 1,\ 1,\ 9,\ 8,\ 1,\ 8,\ 1,\ 8,\ 12,\textasciitilde{}\ \#\#\ \$\ HALSTEAD\_CONTENT\ \ \ \ \ \ \textless{}dbl\textgreater{}\ 11.6,\ 0.0,\ 0.0,\ 0.0,\ 18.0,\ 11.6,\ 0.0,\ 0.0,\ 0.0,\ \textasciitilde{}\ \#\#\ \$\ HALSTEAD\_DIFFICULTY\ \ \ \textless{}dbl\textgreater{}\ 2.67,\ 0.00,\ 0.00,\ 0.00,\ 3.50,\ 2.67,\ 0.00,\ 0.00,\ \textasciitilde{}\ \#\#\ \$\ HALSTEAD\_EFFORT\ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 82.3,\ 0.0,\ 0.0,\ 0.0,\ 220.9,\ 82.3,\ 0.0,\ 0.0,\ 0.0,\textasciitilde{}\ \#\#\ \$\ HALSTEAD\_ERROR\_EST\ \ \ \ \textless{}dbl\textgreater{}\ 0.01,\ 0.00,\ 0.00,\ 0.00,\ 0.02,\ 0.01,\ 0.00,\ 0.00,\ \textasciitilde{}\ \#\#\ \$\ HALSTEAD\_LENGTH\ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 11,\ 1,\ 1,\ 1,\ 19,\ 11,\ 1,\ 1,\ 1,\ 29,\ 19,\ 1,\ 19,\ 1,\ \textasciitilde{}\ \#\#\ \$\ HALSTEAD\_LEVEL\ \ \ \ \ \ \ \ 
\textless{}dbl\textgreater{}\ 0.38,\ 0.00,\ 0.00,\ 0.00,\ 0.29,\ 0.38,\ 0.00,\ 0.00,\ \textasciitilde{}\ \#\#\ \$\ HALSTEAD\_PROG\_TIME\ \ \ \ \textless{}dbl\textgreater{}\ 4.57,\ 0.00,\ 0.00,\ 0.00,\ 12.27,\ 4.57,\ 0.00,\ 0.00,\textasciitilde{}\ \#\#\ \$\ HALSTEAD\_VOLUME\ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 30.9,\ 0.0,\ 0.0,\ 0.0,\ 63.1,\ 30.9,\ 0.0,\ 0.0,\ 0.0,\ \textasciitilde{}\ \#\#\ \$\ NUM\_OPERANDS\ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 4,\ 0,\ 0,\ 0,\ 7,\ 4,\ 0,\ 0,\ 0,\ 10,\ 7,\ 0,\ 7,\ 0,\ 11,\ 1\textasciitilde{}\ \#\#\ \$\ NUM\_OPERATORS\ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 7,\ 1,\ 1,\ 1,\ 12,\ 7,\ 1,\ 1,\ 1,\ 19,\ 12,\ 1,\ 12,\ 1,\ 16\textasciitilde{}\ \#\#\ \$\ NUM\_UNIQUE\_OPERANDS\ \ \ \textless{}dbl\textgreater{}\ 3,\ 0,\ 0,\ 0,\ 5,\ 3,\ 0,\ 0,\ 0,\ 8,\ 5,\ 0,\ 5,\ 0,\ 6,\ 9,\ \textasciitilde{}\ \#\#\ \$\ NUM\_UNIQUE\_OPERATORS\ \ \textless{}dbl\textgreater{}\ 4,\ 1,\ 1,\ 1,\ 5,\ 4,\ 1,\ 1,\ 1,\ 6,\ 5,\ 1,\ 5,\ 1,\ 8,\ 12,\textasciitilde{}\ \#\#\ \$\ LOC\_TOTAL\ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}dbl\textgreater{}\ 5,\ 3,\ 3,\ 3,\ 12,\ 5,\ 3,\ 3,\ 3,\ 13,\ 12,\ 3,\ 12,\ 4,\ 13\textasciitilde{}\ \#\#\ \$\ Defective\ \ \ \ \ \ \ \ \ \ \ \ \ \textless{}fct\textgreater{}\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ N,\ \textasciitilde{}} \\ \#\# Other libraries and tricks \\ The \texttt{lubridate} package contains a number of functions facilitating the conversion of text to POSIX dates. As an example, consider the following code. We may use this, for example, with time series. 
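For context, base R can already parse dates with \texttt{as.Date()} when the format is known in advance; what \texttt{lubridate} adds is tolerance to varying separators, month names and surrounding text. A minimal base-R sketch (the format string and inputs are just an illustration):

```r
# Base R needs an explicit format string and only parses strings that
# match it exactly; inputs in any other shape come back as NA.
d <- as.Date(c("15/02/2013", "15 Feb 13"), format = "%d/%m/%Y")
print(d)  # the first element parses to "2013-02-15", the second is NA
```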
\\ See, for example, \url{https://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf} \\ \texttt{r\ library(lubridate)\ dates\ \textless{}-\ c("15/02/2013",\ "15\ Feb\ 13",\ "It\ happened\ on\ 15\ 02\ \textquotesingle{}13")\ dmy(dates)} \\ \texttt{\#\#\ {[}1{]}\ "2013-02-15"\ "2013-02-15"\ "2013-02-15"} \\ \\ \# (PART) Supervised Models \{-\} \\ \# Supervised Classification \\ A classification problem can be defined as the induction, from a dataset \(\cal D\), of a classification function \(\psi\) that, given the attribute vector of an instance/example, returns a class \({c}\). A regression problem, on the other hand, returns a numeric value. \\ The dataset \(\cal D\) is typically composed of \(n\) attributes and a class attribute \(C\): \\ \begin{tabular}{cccc} \(Att_1\) & \ldots{} & \(Att_n\) & \(Class\) \\ \hline \(a_{11}\) & \ldots{} & \(a_{1n}\) & \(c_1\) \\ \(a_{21}\) & \ldots{} & \(a_{2n}\) & \(c_2\) \\ \ldots{} & \ldots{} & \ldots{} & \ldots{} \\ \(a_{m1}\) & \ldots{} & \(a_{mn}\) & \(c_m\) \\ \end{tabular} \\ Columns are usually called \emph{attributes} or \emph{features}. Typically, there is a \emph{class} attribute, which can be numeric or discrete. When the class is numeric, it is a regression problem. With discrete values, we talk about binary classification when the class takes two values, and multiclass (multinomial) classification when it takes more than two. There are also variants such as \emph{multi-label} classification (we will cover these in the advanced models section). \\ Once we learn a model, new instances can be classified, as shown in the next figure.
\includegraphics{figures/supervClass1.png} \\ We have multiple types of models, such as \emph{classification trees}, \emph{rules}, \emph{neural networks}, and \emph{probabilistic classifiers}, that can be used to classify instances. \\ Fernández et al.\ provide an extensive comparison of 176 classifiers using datasets from the UCI repository \citep{FernandezCBA14}. \\ We will show the use of different classification techniques using defect prediction as a running example. In this example, the datasets are composed of classical metrics (\emph{Halstead} or \emph{McCabe} metrics, based on counts of operators/operands and the like) or object-oriented metrics (e.g.~Chidamber and Kemerer), plus a class attribute indicating whether the module or class was defective. \\ \#\# Classification Trees \\ There are several packages for inducing classification trees, for example the \href{https://cran.r-project.org/web/packages/party/index.html}{party package} (recursive partitioning): \\ \begin{verbatim}
library(foreign)  # To load arff file
library(party)    # Build a decision tree
library(caret)

jm1 <- read.arff("./datasets/defectPred/D1/JM1.arff")
str(jm1)
\end{verbatim} \\ \texttt{\#\#\ \textquotesingle{}data.frame\textquotesingle{}:\ \ \ \ 9593\ obs.\ of\ \ 22\ variables:\ \#\#\ \ \$\ LOC\_BLANK\ \ \ \ \ \ \ \ \ \ \ \ :\ num\ \ 447\ 37\ 11\ 106\ 101\ 67\ 105\ 18\ 39\ 143\ ...\ \#\#\ \ \$\ BRANCH\_COUNT\ \ \ \ \ \ \ \ \ :\ num\ \ 826\ 29\ 405\ 240\ 464\ 187\ 344\ 47\ 163\ 67\ ...\ \#\#\ \ \$\ LOC\_CODE\_AND\_COMMENT\ :\ num\ \ 12\ 8\ 0\ 7\ 11\ 4\ 9\ 0\ 1\ 7\ ...\ \#\#\ \ \$\ LOC\_COMMENTS\ \ \ \ \ \ \ \ \ :\ num\ \ 157\ 42\ 17\ 344\ 75\ 1\ 40\ 10\ 6\ 49\ ...\ \#\#\ \ \$\ CYCLOMATIC\_COMPLEXITY:\ num\ \ 470\ 19\ 404\ 127\ 263\ 94\ 207\ 24\ 94\ 34\ ...\ \#\#\ \ \$\ DESIGN\_COMPLEXITY\ \ \ \ :\ num\ \ 385\ 19\ 2\ 105\ 256\ 63\ 171\ 13\ 67\ 25\ ...\ \#\#\ \ \$\ ESSENTIAL\_COMPLEXITY\ :\ num\ \ 113\ 6\ 1\ 33\ 140\ 27\ 58\ 1\ 3\ 1\ ...\ \#\#\ \ \$\ LOC\_EXECUTABLE\ \ \ \ \ \ \ :\ num\ \ 2824\ 
133\ 814\ 952\ 1339\ ...\ \#\#\ \ \$\ HALSTEAD\_CONTENT\ \ \ \ \ :\ num\ \ 210\ 108\ 101\ 218\ 106\ ...\ \#\#\ \ \$\ HALSTEAD\_DIFFICULTY\ \ :\ num\ \ 384.4\ 46.3\ 206\ 215.2\ 337.4\ ...\ \#\#\ \ \$\ HALSTEAD\_EFFORT\ \ \ \ \ \ :\ num\ \ 31079782\ 232044\ 4294926\ 10100867\ 12120796\ ...\ \#\#\ \ \$\ HALSTEAD\_ERROR\_EST\ \ \ :\ num\ \ 26.95\ 1.67\ 6.95\ 15.65\ 11.98\ ...\ \#\#\ \ \$\ HALSTEAD\_LENGTH\ \ \ \ \ \ :\ num\ \ 8441\ 685\ 2033\ 5669\ 4308\ ...\ \#\#\ \ \$\ HALSTEAD\_LEVEL\ \ \ \ \ \ \ :\ num\ \ 0\ 0.02\ 0\ 0\ 0\ 0.02\ 0\ 0.03\ 0.01\ 0.02\ ...\ \#\#\ \ \$\ HALSTEAD\_PROG\_TIME\ \ \ :\ num\ \ 1726655\ 12891\ 238607\ 561159\ 673378\ ...\ \#\#\ \ \$\ HALSTEAD\_VOLUME\ \ \ \ \ \ :\ num\ \ 80843\ 5009\ 20848\ 46944\ 35928\ ...\ \#\#\ \ \$\ NUM\_OPERANDS\ \ \ \ \ \ \ \ \ :\ num\ \ 3021\ 295\ 813\ 2301\ 1556\ ...\ \#\#\ \ \$\ NUM\_OPERATORS\ \ \ \ \ \ \ \ :\ num\ \ 5420\ 390\ 1220\ 3368\ 2752\ ...\ \#\#\ \ \$\ NUM\_UNIQUE\_OPERANDS\ \ :\ num\ \ 609\ 121\ 811\ 262\ 226\ 167\ 279\ 47\ 117\ 355\ ...\ \#\#\ \ \$\ NUM\_UNIQUE\_OPERATORS\ :\ num\ \ 155\ 38\ 411\ 49\ 98\ 27\ 105\ 18\ 52\ 23\ ...\ \#\#\ \ \$\ LOC\_TOTAL\ \ \ \ \ \ \ \ \ \ \ \ :\ num\ \ 3442\ 222\ 844\ 1411\ 1532\ ...\ \#\#\ \ \$\ Defective\ \ \ \ \ \ \ \ \ \ \ \ :\ Factor\ w/\ 2\ levels\ "N","Y":\ 2\ 2\ 2\ 2\ 2\ 2\ 2\ 2\ 1\ 2\ ...} \\ \begin{verbatim}
# Stratified partition (training and test sets)
set.seed(1234)
inTrain <- createDataPartition(y=jm1$Defective, p=.60, list=FALSE)
jm1.train <- jm1[inTrain,]
jm1.test <- jm1[-inTrain,]

jm1.formula <- jm1$Defective ~ .
# formula approach: defect as dependent variable and
# the rest as independent variables
jm1.ctree <- ctree(jm1.formula, data = jm1.train)

# predict on test data
pred <- predict(jm1.ctree, newdata = jm1.test)
# check prediction result
table(pred, jm1.test$Defective)
\end{verbatim} \\ \texttt{\#\#\ \#\#\ pred\ \ \ \ N\ \ \ \ Y\ \#\#\ \ \ \ N\ \ 168\ \ \ 11\ \#\#\ \ \ \ Y\ 2965\ \ 692} \\ \texttt{r\ plot(jm1.ctree)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-58-1.pdf} \\ Using the \texttt{C50} package, there are two interfaces: passing the predictors and the class separately, or using a formula: \\ \texttt{r\ library(C50)\ require(utils)\ \#\ c50t\ \textless{}-\ C5.0(jm1.train{[},-ncol(jm1.train){]},\ jm1.train{[},ncol(jm1.train){]})\ c50t\ \textless{}-\ C5.0(Defective\ \textasciitilde{}\ .,\ jm1.train)\ summary(c50t)\ plot(c50t)\ c50tPred\ \textless{}-\ predict(c50t,\ jm1.train)\ \#\ table(c50tPred,\ jm1.train\$Defective)} \\ Using the \href{https://cran.r-project.org/web/packages/rpart/index.html}{\texttt{rpart}} package: \\ \texttt{r\ \#\ Using\ the\ \textquotesingle{}rpart\textquotesingle{}\ package\ library(rpart)\ jm1.rpart\ \textless{}-\ rpart(Defective\ \textasciitilde{}\ .,\ data=jm1.train,\ parms\ =\ list(prior\ =\ c(.65,.35),\ split\ =\ "information"))\ \#\ par(mfrow\ =\ c(1,2),\ xpd\ =\ NA)\ plot(jm1.rpart)\ text(jm1.rpart,\ use.n\ =\ TRUE)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-60-1.pdf} \\ \texttt{r\ jm1.rpart} \\ \texttt{\#\#\ n=\ 5757\ \#\#\ \#\#\ node),\ split,\ n,\ loss,\ yval,\ (yprob)\ \#\#\ \ \ \ \ \ \ *\ denotes\ terminal\ node\ \#\#\ \#\#\ \ 1)\ root\ 5757\ 2010.0\ N\ (0.650\ 0.350)\ \#\#\ \ \ \ 2)\ LOC\_TOTAL\textless{}\ 38.5\ 4172\ \ 969.0\ N\ (0.751\ 0.249)\ *\ \#\#\ \ \ \ 3)\ LOC\_TOTAL\textgreater{}=38.5\ 1585\ \ 825.0\ Y\ (0.441\ 0.559)\ \#\#\ \ \ \ \ \ 6)\ LOC\_TOTAL\textless{}\ 87.5\ 1027\ \ 540.0\ N\ (0.523\ 0.477)\ \#\#\ \ \ \ \ \ \ 12)\ LOC\_BLANK\textless{}\ 7.5\ 580\ \ 263.0\ N\ (0.572\ 0.428)\ *\ \#\#\ \ \ \ \ \ \ 13)\ LOC\_BLANK\textgreater{}=7.5\ 
447\ \ 240.0\ Y\ (0.465\ 0.535)\ \#\#\ \ \ \ \ \ \ \ \ 26)\ HALSTEAD\_DIFFICULTY\textgreater{}=34.9\ 62\ \ \ 15.3\ N\ (0.738\ 0.262)\ *\ \#\#\ \ \ \ \ \ \ \ \ 27)\ HALSTEAD\_DIFFICULTY\textless{}\ 34.9\ 385\ \ 197.0\ Y\ (0.430\ 0.570)\ *\ \#\#\ \ \ \ \ \ 7)\ LOC\_TOTAL\textgreater{}=87.5\ 558\ \ 233.0\ Y\ (0.316\ 0.684)\ *} \\ \texttt{r\ library(rpart.plot)\ \#\ asRules(jm1.rpart)\ \#\ fancyRpartPlot(jm1.rpart)} \\ \#\# Rules \\ C5 Rules \\ \texttt{r\ library(C50)\ c50r\ \textless{}-\ C5.0(jm1.train{[},-ncol(jm1.train){]},\ jm1.train{[},ncol(jm1.train){]},\ rules\ =\ TRUE)\ summary(c50r)} \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ C5.0.default(x\ =\ jm1.train{[},\ -ncol(jm1.train){]},\ y\ =\ \#\#\ \ jm1.train{[},\ ncol(jm1.train){]},\ rules\ =\ TRUE)\ \#\#\ \#\#\ \#\#\ C5.0\ {[}Release\ 2.07\ GPL\ Edition{]}\ \ \ \ \ \ Sun\ Oct\ 10\ 13:35:54\ 2021\ \#\#\ -\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\ \#\#\ \#\#\ Class\ specified\ by\ attribute\ \textasciigrave{}outcome\textquotesingle{}\ \#\#\ \#\#\ Read\ 5757\ cases\ (22\ attributes)\ from\ undefined.data\ \#\#\ \#\#\ Rules:\ \#\#\ \#\#\ Rule\ 1:\ (5682/1005,\ lift\ 1.0)\ \#\#\ \ NUM\_OPERANDS\ \textless{}=\ 376\ \#\#\ \ -\textgreater{}\ \ class\ N\ \ {[}0.823{]}\ \#\#\ \#\#\ Rule\ 2:\ (75/24,\ lift\ 3.7)\ \#\#\ \ NUM\_OPERANDS\ \textgreater{}\ 376\ \#\#\ \ -\textgreater{}\ \ class\ Y\ \ {[}0.675{]}\ \#\#\ \#\#\ Default\ class:\ N\ \#\#\ \#\#\ \#\#\ Evaluation\ on\ training\ data\ (5757\ cases):\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ Rules\ \#\#\ \ \ \ -\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\/-\ \#\#\ \ \ \ \ \ No\ \ \ \ \ \ Errors\ \#\#\ \#\#\ \ \ \ \ \ \ 2\ 1029(17.9\%)\ \ \ \textless{}\textless{}\ \#\#\ \#\#\ \#\#\ \ \ \ \ (a)\ \ \ (b)\ \ \ \ \textless{}-classified\ as\ \#\#\ \ \ \ -\/-\/-\/-\ \ -\/-\/-\/-\ \#\#\ \ \ \ 4677\ \ \ \ 24\ \ \ \ (a):\ class\ N\ \#\#\ \ \ \ 1005\ \ \ \ 51\ \ \ \ (b):\ class\ Y\ \#\#\ \#\#\ \#\#\ \ Attribute\ usage:\ \#\#\ \#\#\ \ 100.00\%\ 
NUM\_OPERANDS\ \#\#\ \#\#\ \#\#\ Time:\ 0.2\ secs} \\ \texttt{r\ \#\ c50rPred\ \textless{}-\ predict(c50r,\ jm1.train)\ \#\ table(c50rPred,\ jm1.train\$Defective)} \\ \#\# Distance-based Methods \\ In this case, there is no model as such. Given a new instance to classify, this approach finds the \(k\) closest neighbours to that instance and assigns the majority class among them. \\ \includegraphics{./figures/279px-KnnClassification.svg.png} (Source: Wikipedia - \url{https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm}) \\ ```r library(class) m1 \textless- knn(train=jm1.train{[},-22{]}, test=jm1.test{[},-22{]}, cl=jm1.train{[},22{]}, k=3) \\ table(jm1.test{[},22{]},m1) ``` \\ \texttt{\#\#\ \ \ \ m1\ \#\#\ \ \ \ \ \ \ \ N\ \ \ \ Y\ \#\#\ \ \ N\ 2851\ \ 282\ \#\#\ \ \ Y\ \ 554\ \ 149} \\ \#\# Neural Networks \\ \includegraphics{./figures/neuralnet.png} \\ \includegraphics{./figures/neuralnet2.png} \\ \#\# Support Vector Machine \\ \includegraphics{./figures/Kernel_Machine.svg.png} (Source: wikipedia \url{https://en.wikipedia.org/wiki/Support_vector_machine}) \\ \#\# Probabilistic Methods \\ \#\#\# Naive Bayes \\ A probabilistic graphical model assigning a probability to each possible outcome \(p(C_k, x_1,\ldots,x_n)\) \\ \includegraphics{./figures/classifier_NB.png} \\ Using the \texttt{klaR} package with \texttt{caret}: \\ \texttt{r\ library(caret)\ library(klaR)} \\ \texttt{\#\#\ Loading\ required\ package:\ MASS} \\ \texttt{\#\#\ \#\#\ Attaching\ package:\ \textquotesingle{}MASS\textquotesingle{}} \\ \texttt{\#\#\ The\ following\ object\ is\ masked\ from\ \textquotesingle{}package:dplyr\textquotesingle{}:\ \#\#\ \#\#\ \ \ \ \ select} \\ \texttt{\#\#\ The\ following\ object\ is\ masked\ from\ \textquotesingle{}package:sm\textquotesingle{}:\ \#\#\ \#\#\ \ \ \ \ muscle} \\ \texttt{r\ model\ \textless{}-\ NaiveBayes(Defective\ \textasciitilde{}\ .,\ data\ =\ jm1.train)\ predictions\ \textless{}-\ predict(model,\ jm1.test{[},-22{]})\ confusionMatrix(predictions\$class,\ jm1.test\$Defective)} \\ 
\texttt{\#\#\ Confusion\ Matrix\ and\ Statistics\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ Reference\ \#\#\ Prediction\ \ \ \ N\ \ \ \ Y\ \#\#\ \ \ \ \ \ \ \ \ \ N\ 2963\ \ 554\ \#\#\ \ \ \ \ \ \ \ \ \ Y\ \ 170\ \ 149\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Accuracy\ :\ 0.811\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 95\%\ CI\ :\ (0.799,\ 0.824)\ \#\#\ \ \ \ \ No\ Information\ Rate\ :\ 0.817\ \#\#\ \ \ \ \ P-Value\ {[}Acc\ \textgreater{}\ NIR{]}\ :\ 0.815\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Kappa\ :\ 0.2\ \#\#\ \#\#\ \ Mcnemar\textquotesingle{}s\ Test\ P-Value\ :\ \textless{}2e-16\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Sensitivity\ :\ 0.946\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Specificity\ :\ 0.212\ \#\#\ \ \ \ \ \ \ \ \ \ Pos\ Pred\ Value\ :\ 0.842\ \#\#\ \ \ \ \ \ \ \ \ \ Neg\ Pred\ Value\ :\ 0.467\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ Prevalence\ :\ 0.817\ \#\#\ \ \ \ \ \ \ \ \ \ Detection\ Rate\ :\ 0.772\ \#\#\ \ \ \ Detection\ Prevalence\ :\ 0.917\ \#\#\ \ \ \ \ \ \ Balanced\ Accuracy\ :\ 0.579\ \#\#\ \#\#\ \ \ \ \ \ \ \ \textquotesingle{}Positive\textquotesingle{}\ Class\ :\ N\ \#\#} \\ Using the \texttt{e1071} package: \\ ```r library(e1071) n1 \textless- naiveBayes(jm1.train\$Defective \textasciitilde{} ., data=jm1.train) \\ \# Show first 3 results using `class' head(predict(n1,jm1.test, type = c(``class'')),3) \# class by default ``` \\ \texttt{\#\#\ {[}1{]}\ Y\ Y\ Y\ \#\#\ Levels:\ N\ Y} \\ \texttt{r\ \#\ Show\ first\ 3\ results\ using\ \textquotesingle{}raw\textquotesingle{}\ head(predict(n1,jm1.test,\ type\ =\ c("raw")),3)} \\ \texttt{\#\#\ \ \ \ \ \ \ \ \ \ \ \ N\ Y\ \#\#\ {[}1,{]}\ 8.6e-50\ 1\ \#\#\ {[}2,{]}\ 0.0e+00\ 1\ \#\#\ {[}3,{]}\ 0.0e+00\ 1} \\ There are other variants, such as TAN and KDB, that do not assume the independence condition, allowing more complex structures.
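To make the conditional-independence assumption behind naive Bayes concrete, the posterior can be computed by hand. A minimal sketch in base R, on a tiny synthetic dataset (all values are illustrative, not taken from jm1):

```r
# Naive Bayes by hand: p(C_k | x) is proportional to
# p(C_k) * prod_j p(x_j | C_k), thanks to the independence assumption.
train <- data.frame(
  x1 = c(1, 1, 0, 0, 1, 0),
  x2 = c(1, 0, 1, 0, 0, 0),
  y  = c("Y", "Y", "Y", "N", "N", "N")
)

# Class priors p(C_k)
priors <- table(train$y) / nrow(train)

# Conditional likelihoods p(x_j = 1 | C_k), with Laplace smoothing
lik <- function(feature, cls) {
  rows <- train$y == cls
  (sum(train[[feature]][rows]) + 1) / (sum(rows) + 2)
}

# Score a new instance (x1 = 1, x2 = 0) for each class
score <- sapply(c("N", "Y"), function(cls) {
  unname(priors[cls]) * lik("x1", cls) * (1 - lik("x2", cls))
})
posterior <- score / sum(score)  # normalised posterior probabilities
posterior
```

Packages such as \texttt{e1071} and \texttt{klaR} follow this same scheme, adding Gaussian likelihoods for continuous predictors; TAN and KDB replace the product of independent likelihoods with richer dependency structures.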
\\ \includegraphics{./figures/classifier_TAN.png} \\ \includegraphics{./figures/classifier_KDB.png} \\ A comprehensive comparison of these variants is beyond the scope of this chapter. \\ \#\# Linear Discriminant Analysis (LDA) \\ One classical approach to classification is Linear Discriminant Analysis (LDA), a generalization of Fisher's linear discriminant: a method that finds a linear combination of features separating two or more classes. \\ \texttt{r\ ldaModel\ \textless{}-\ train\ (Defective\ \textasciitilde{}\ .,\ data=jm1.train,\ method="lda",\ preProc=c("center","scale"))\ ldaModel} \\ \texttt{\#\#\ Linear\ Discriminant\ Analysis\ \#\#\ \#\#\ 5757\ samples\ \#\#\ \ \ 21\ predictor\ \#\#\ \ \ \ 2\ classes:\ \textquotesingle{}N\textquotesingle{},\ \textquotesingle{}Y\textquotesingle{}\ \#\#\ \#\#\ Pre-processing:\ centered\ (21),\ scaled\ (21)\ \#\#\ Resampling:\ Bootstrapped\ (25\ reps)\ \#\#\ Summary\ of\ sample\ sizes:\ 5757,\ 5757,\ 5757,\ 5757,\ 5757,\ 5757,\ ...\ \#\#\ Resampling\ results:\ \#\#\ \#\#\ \ \ Accuracy\ \ Kappa\ \#\#\ \ \ 0.82\ \ \ \ \ \ 0.164} \\ We can observe that we are training our model using \texttt{Defective\ \textasciitilde{}\ .} as a formula, where \texttt{Defective} is the class variable, separated by \texttt{\textasciitilde{}} from the \texttt{.}, which stands for the rest of the variables. Also, we are pre-processing (\texttt{preProc}) the training data by centering and scaling it.
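The effect of \texttt{preProc=c("center","scale")} can be sketched with base R's \texttt{scale()}: predictors are standardised with the training-set statistics, which are then reused for new data (synthetic data, for illustration only):

```r
# What centering and scaling does: each predictor ends up with mean 0 and
# standard deviation 1, computed from the *training* data only.
set.seed(1)
x_train <- matrix(rnorm(40, mean = 5, sd = 2), ncol = 2)

x_std <- scale(x_train)  # subtract column means, divide by column sds

# The training statistics are kept as attributes, so the same
# transformation can be applied to test data (as caret does internally):
ctr <- attr(x_std, "scaled:center")
scl <- attr(x_std, "scaled:scale")

x_test <- matrix(rnorm(10, mean = 5, sd = 2), ncol = 2)
x_test_std <- scale(x_test, center = ctr, scale = scl)

round(colMeans(x_std), 10)      # effectively zero
round(apply(x_std, 2, sd), 10)  # exactly one
```

Reusing the training-set centre and scale on the test set avoids leaking test information into the model.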
\\ Also, as stated in the documentation of the \texttt{train} method (\url{http://topepo.github.io/caret/training.html}), resampling can be configured with \texttt{trainControl}: \\ ```r ctrl \textless- trainControl(method = ``repeatedcv'',repeats=3) ldaModel \textless- train (Defective \textasciitilde{} ., data=jm1.train, method=``lda'', trControl=ctrl, preProc=c(``center'',``scale'')) \\ ldaModel ``` \\ \texttt{\#\#\ Linear\ Discriminant\ Analysis\ \#\#\ \#\#\ 5757\ samples\ \#\#\ \ \ 21\ predictor\ \#\#\ \ \ \ 2\ classes:\ \textquotesingle{}N\textquotesingle{},\ \textquotesingle{}Y\textquotesingle{}\ \#\#\ \#\#\ Pre-processing:\ centered\ (21),\ scaled\ (21)\ \#\#\ Resampling:\ Cross-Validated\ (10\ fold,\ repeated\ 3\ times)\ \#\#\ Summary\ of\ sample\ sizes:\ 5181,\ 5182,\ 5181,\ 5182,\ 5180,\ 5181,\ ...\ \#\#\ Resampling\ results:\ \#\#\ \#\#\ \ \ Accuracy\ \ Kappa\ \#\#\ \ \ 0.82\ \ \ \ \ \ 0.159} \\ Instead of accuracy, we can activate other metrics, such as \texttt{ROC}, \texttt{sensitivity} and \texttt{specificity}, using \texttt{summaryFunction=twoClassSummary}. To do so, we also need to specify \texttt{classProbs=TRUE}.
\\ ```r ctrl \textless- trainControl(method = ``repeatedcv'',repeats=3, classProbs=TRUE, summaryFunction=twoClassSummary) ldaModel3xcv10 \textless- train (Defective \textasciitilde{} ., data=jm1.train, method=``lda'', trControl=ctrl, preProc=c(``center'',``scale'')) \\ ldaModel3xcv10 ``` \\ \texttt{\#\#\ Linear\ Discriminant\ Analysis\ \#\#\ \#\#\ 5757\ samples\ \#\#\ \ \ 21\ predictor\ \#\#\ \ \ \ 2\ classes:\ \textquotesingle{}N\textquotesingle{},\ \textquotesingle{}Y\textquotesingle{}\ \#\#\ \#\#\ Pre-processing:\ centered\ (21),\ scaled\ (21)\ \#\#\ Resampling:\ Cross-Validated\ (10\ fold,\ repeated\ 3\ times)\ \#\#\ Summary\ of\ sample\ sizes:\ 5181,\ 5181,\ 5181,\ 5182,\ 5182,\ 5181,\ ...\ \#\#\ Resampling\ results:\ \#\#\ \#\#\ \ \ ROC\ \ \ \ Sens\ \ \ Spec\ \#\#\ \ \ 0.708\ \ 0.971\ \ 0.143} \\ Most methods have parameters that need to be optimised, and that is one of the main strengths of \texttt{caret}: it can tune them automatically by resampling. \\ ```r plsFit3x10cv \textless- train (Defective \textasciitilde{} ., data=jm1.train, method=``pls'', trControl=trainControl(classProbs=TRUE), metric=``ROC'', preProc=c(``center'',``scale'')) \\ plsFit3x10cv ``` \\ \texttt{\#\#\ Partial\ Least\ Squares\ \#\#\ \#\#\ 5757\ samples\ \#\#\ \ \ 21\ predictor\ \#\#\ \ \ \ 2\ classes:\ \textquotesingle{}N\textquotesingle{},\ \textquotesingle{}Y\textquotesingle{}\ \#\#\ \#\#\ Pre-processing:\ centered\ (21),\ scaled\ (21)\ \#\#\ Resampling:\ Bootstrapped\ (25\ reps)\ \#\#\ Summary\ of\ sample\ sizes:\ 5757,\ 5757,\ 5757,\ 5757,\ 5757,\ 5757,\ ...\ \#\#\ Resampling\ results\ across\ tuning\ parameters:\ \#\#\ \#\#\ \ \ ncomp\ \ Accuracy\ \ Kappa\ \#\#\ \ \ 1\ \ \ \ \ \ 0.821\ \ \ \ \ 0.0620\ \#\#\ \ \ 2\ \ \ \ \ \ 0.821\ \ \ \ \ 0.0978\ \#\#\ \ \ 3\ \ \ \ \ \ 0.821\ \ \ \ \ 0.0992\ \#\#\ \#\#\ Accuracy\ was\ used\ to\ select\ the\ optimal\ model\ using\ the\ largest\ value.\ \#\#\ The\ final\ value\ used\ for\ the\ model\ was\ ncomp\ =\ 3.} \\ \texttt{r\ plot(plsFit3x10cv)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-68-1.pdf} \\ The parameter \texttt{tuneLength} allows us to specify the number of values per parameter to consider. \\ ```r plsFit3x10cv \textless- train (Defective \textasciitilde{} ., data=jm1.train, method=``pls'', trControl=ctrl, metric=``ROC'', tuneLength=5, preProc=c(``center'',``scale'')) \\ plsFit3x10cv ``` \\ \texttt{\#\#\ Partial\ Least\ Squares\ \#\#\ \#\#\ 5757\ samples\ \#\#\ \ \ 21\ predictor\ \#\#\ \ \ \ 2\ classes:\ \textquotesingle{}N\textquotesingle{},\ \textquotesingle{}Y\textquotesingle{}\ \#\#\ \#\#\ Pre-processing:\ centered\ (21),\ scaled\ (21)\ \#\#\ Resampling:\ Cross-Validated\ (10\ fold,\ repeated\ 3\ times)\ \#\#\ Summary\ of\ sample\ sizes:\ 5181,\ 5182,\ 5181,\ 5182,\ 5181,\ 5182,\ ...\ \#\#\ Resampling\ results\ across\ tuning\ parameters:\ \#\#\ \#\#\ \ \ ncomp\ \ ROC\ \ \ \ Sens\ \ \ Spec\ \#\#\ \ \ 1\ \ \ \ \ \ 0.700\ \ 0.996\ \ 0.0429\ \#\#\ \ \ 2\ \ \ \ \ \ 0.703\ \ 0.989\ \ 0.0710\ \#\#\ \ \ 3\ \ \ \ \ \ 0.706\ \ 0.990\ \ 0.0720\ \#\#\ \ \ 4\ \ \ \ \ \ 0.708\ \ 0.990\ \ 0.0808\ \#\#\ \ \ 5\ \ \ \ \ \ 0.708\ \ 0.990\ \ 0.0808\ \#\#\ \#\#\ ROC\ was\ used\ to\ select\ the\ optimal\ model\ using\ the\ largest\ value.\ \#\#\ The\ final\ value\ used\ for\ the\ model\ was\ ncomp\ =\ 5.} \\ \texttt{r\ plot(plsFit3x10cv)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-69-1.pdf} \\ Finally, to predict new cases, \texttt{caret} will use the best classifier obtained for prediction.
\\ \texttt{r\ plsProbs\ \textless{}-\ predict(plsFit3x10cv,\ newdata\ =\ jm1.test,\ type\ =\ "prob")} \\ \texttt{r\ plsClasses\ \textless{}-\ predict(plsFit3x10cv,\ newdata\ =\ jm1.test,\ type\ =\ "raw")\ confusionMatrix(data=plsClasses,jm1.test\$Defective)} \\ \texttt{\#\#\ Confusion\ Matrix\ and\ Statistics\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ Reference\ \#\#\ Prediction\ \ \ \ N\ \ \ \ Y\ \#\#\ \ \ \ \ \ \ \ \ \ N\ 3094\ \ 652\ \#\#\ \ \ \ \ \ \ \ \ \ Y\ \ \ 39\ \ \ 51\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Accuracy\ :\ 0.82\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 95\%\ CI\ :\ (0.807,\ 0.832)\ \#\#\ \ \ \ \ No\ Information\ Rate\ :\ 0.817\ \#\#\ \ \ \ \ P-Value\ {[}Acc\ \textgreater{}\ NIR{]}\ :\ 0.317\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Kappa\ :\ 0.091\ \#\#\ \#\#\ \ Mcnemar\textquotesingle{}s\ Test\ P-Value\ :\ \textless{}2e-16\ \#\#\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Sensitivity\ :\ 0.9876\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Specificity\ :\ 0.0725\ \#\#\ \ \ \ \ \ \ \ \ \ Pos\ Pred\ Value\ :\ 0.8259\ \#\#\ \ \ \ \ \ \ \ \ \ Neg\ Pred\ Value\ :\ 0.5667\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ \ Prevalence\ :\ 0.8167\ \#\#\ \ \ \ \ \ \ \ \ \ Detection\ Rate\ :\ 0.8066\ \#\#\ \ \ \ Detection\ Prevalence\ :\ 0.9765\ \#\#\ \ \ \ \ \ \ Balanced\ Accuracy\ :\ 0.5300\ \#\#\ \#\#\ \ \ \ \ \ \ \ \textquotesingle{}Positive\textquotesingle{}\ Class\ :\ N\ \#\#} \\ \#\#\# Predicting the number of defects (numerical class) \\ From the Bug Prediction Repository (BPR) \url{http://bug.inf.usi.ch/download.php} \\ Some datasets contain CK and other 11 object-oriented metrics for the last version of the system plus categorized (with severity and priority) post-release defects. 
Using such a dataset: \\ ```r jdt \textless- read.csv(``./datasets/defectPred/BPD/single-version-ck-oo-EclipseJDTCore.csv'', sep=``;'') \\ \# We just use the number of bugs, so we remove the others jdt\$classname \textless- NULL jdt\$nonTrivialBugs \textless- NULL jdt\$majorBugs \textless- NULL jdt\$minorBugs \textless- NULL jdt\$criticalBugs \textless- NULL jdt\$highPriorityBugs \textless- NULL jdt\$X \textless- NULL \\ \# Caret library(caret) \\ \# Split data into training and test datasets set.seed(1) inTrain \textless- createDataPartition(y=jdt\$bugs,p=.8,list=FALSE) jdt.train \textless- jdt{[}inTrain,{]} jdt.test \textless- jdt{[}-inTrain,{]} ``` \\ \texttt{r\ ctrl\ \textless{}-\ trainControl(method\ =\ "repeatedcv",repeats=3)\ glmModel\ \textless{}-\ train\ (bugs\ \textasciitilde{}\ .,\ data=jdt.train,\ method="glm",\ trControl=ctrl,\ preProc=c("center","scale"))\ glmModel} \\ \texttt{\#\#\ Generalized\ Linear\ Model\ \#\#\ \#\#\ 798\ samples\ \#\#\ \ 17\ predictor\ \#\#\ \#\#\ Pre-processing:\ centered\ (17),\ scaled\ (17)\ \#\#\ Resampling:\ Cross-Validated\ (10\ fold,\ repeated\ 3\ times)\ \#\#\ Summary\ of\ sample\ sizes:\ 719,\ 718,\ 718,\ 718,\ 718,\ 718,\ ...\ \#\#\ Resampling\ results:\ \#\#\ \#\#\ \ \ RMSE\ \ \ Rsquared\ \ MAE\ \#\#\ \ \ 0.936\ \ 0.273\ \ \ \ \ 0.442} \\ Other methods, such as Elasticnet: \\ \texttt{r\ glmnetModel\ \textless{}-\ train\ (bugs\ \textasciitilde{}\ .,\ data=jdt.train,\ method="glmnet",\ trControl=ctrl,\ preProc=c("center","scale"))\ glmnetModel} \\ \texttt{\#\#\ glmnet\ \#\#\ \#\#\ 798\ samples\ \#\#\ \ 17\ predictor\ \#\#\ \#\#\ Pre-processing:\ centered\ (17),\ scaled\ (17)\ \#\#\ Resampling:\ Cross-Validated\ (10\ fold,\ repeated\ 3\ times)\ \#\#\ Summary\ of\ sample\ sizes:\ 718,\ 718,\ 718,\ 718,\ 718,\ 718,\ ...\ \#\#\ Resampling\ results\ across\ tuning\ parameters:\ \#\#\ \#\#\ \ \ alpha\ \ lambda\ \ \ RMSE\ \ \ Rsquared\ \ MAE\ \#\#\ \ \ 0.10\ \ \ 0.00112\ \ 1.004\ \ 0.249\ \ \ \ \ 0.458\ \#\#\ \ \ 0.10\ \ \ 0.01120\ \ 0.938\ \ 0.247\ \ \ \ \ 0.447\ \#\#\ \ \ 0.10\ \ \ 0.11195\ \ 0.840\ \ 0.272\ \ \ \ \ 0.430\ \#\#\ \ \ 0.55\ \ \ 0.00112\ \ 1.006\ \ 0.249\ \ \ \ \ 0.458\ \#\#\ \ \ 0.55\ \ \ 0.01120\ \ 0.918\ \ 0.249\ \ \ \ \ 0.444\ \#\#\ \ \ 0.55\ \ \ 0.11195\ \ 0.825\ \ 0.286\ \ \ \ \ 0.433\ \#\#\ \ \ 1.00\ \ \ 0.00112\ \ 1.008\ \ 0.248\ \ \ \ \ 0.458\ \#\#\ \ \ 1.00\ \ \ 0.01120\ \ 0.902\ \ 0.252\ \ \ \ \ 0.442\ \#\#\ \ \ 1.00\ \ \ 0.11195\ \ 0.831\ \ 0.282\ \ \ \ \ 0.446\ \#\#\ \#\#\ RMSE\ was\ used\ to\ select\ the\ optimal\ model\ using\ the\ smallest\ value.\ \#\#\ The\ final\ values\ used\ for\ the\ model\ were\ alpha\ =\ 0.55\ and\ lambda\ =\ 0.112.} \\ \#\# Binary Logistic Regression (BLR) \\ Binary Logistic Regression (BLR) models fault-proneness as follows: \\ \(fp(X) = \frac{e^{logit(X)}}{1 + e^{logit(X)}}\) \\ where the simplest form for logit is: \\ \(logit(X) = c_{0} + c_{1}X\) \\ ```r jdt \textless- read.csv(``./datasets/defectPred/BPD/single-version-ck-oo-EclipseJDTCore.csv'', sep=``;'') \\ \# Caret library(caret) \\ \# Convert the response variable into a boolean variable (0/1) jdt\$bugs{[}jdt\$bugs\textgreater=1{]} \textless- 1 \\ cbo \textless- jdt\$cbo bugs \textless- jdt\$bugs \\ \# Split data into training and test datasets jdt2 = data.frame(cbo, bugs) inTrain \textless- createDataPartition(y=jdt2\$bugs,p=.8,list=FALSE) jdtTrain \textless- jdt2{[}inTrain,{]} jdtTest \textless- jdt2{[}-inTrain,{]} ``` \\ ```r \# logit regression \# glmLogit \textless- train (bugs \textasciitilde{} ., data=jdt.train, method=``glm'', family=binomial(link = logit)) \\ glmLogit \textless- glm (bugs \textasciitilde{} ., data=jdtTrain, family=binomial(link = logit)) summary(glmLogit) ``` \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ glm(formula\ =\ bugs\ \textasciitilde{}\ .,\ family\ =\ binomial(link\ =\ logit),\ data\ =\ jdtTrain)\ \#\#\ \#\#\ Deviance
Residuals:\ \#\#\ \ \ \ Min\ \ \ \ \ \ 1Q\ \ Median\ \ \ \ \ \ 3Q\ \ \ \ \ Max\ \#\#\ -3.654\ \ -0.591\ \ -0.515\ \ -0.471\ \ \ 2.150\ \#\#\ \#\#\ Coefficients:\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Estimate\ Std.\ Error\ z\ value\ Pr(\textgreater{}\textbar{}z\textbar{})\ \#\#\ (Intercept)\ -2.20649\ \ \ \ 0.13900\ \ -15.87\ \ \ \textless{}2e-16\ ***\ \#\#\ cbo\ \ \ \ \ \ \ \ \ \ 0.06298\ \ \ \ 0.00765\ \ \ \ 8.23\ \ \ \textless{}2e-16\ ***\ \#\#\ -\/-\/-\ \#\#\ Signif.\ codes:\ \ 0\ \textquotesingle{}***\textquotesingle{}\ 0.001\ \textquotesingle{}**\textquotesingle{}\ 0.01\ \textquotesingle{}*\textquotesingle{}\ 0.05\ \textquotesingle{}.\textquotesingle{}\ 0.1\ \textquotesingle{}\ \textquotesingle{}\ 1\ \#\#\ \#\#\ (Dispersion\ parameter\ for\ binomial\ family\ taken\ to\ be\ 1)\ \#\#\ \#\#\ \ \ \ \ Null\ deviance:\ 807.98\ \ on\ 797\ \ degrees\ of\ freedom\ \#\#\ Residual\ deviance:\ 691.80\ \ on\ 796\ \ degrees\ of\ freedom\ \#\#\ AIC:\ 695.8\ \#\#\ \#\#\ Number\ of\ Fisher\ Scoring\ iterations:\ 5} \\ Predict a single point: \\ \texttt{r\ newData\ =\ data.frame(cbo\ =\ 3)\ predict(glmLogit,\ newData,\ type\ =\ "response")} \\ \texttt{\#\#\ \ \ \ \ 1\ \#\#\ 0.117} \\ Draw the results, modified from: \url{http://www.shizukalab.com/toolkits/plotting-logistic-regression-in-r} \\ ```r results \textless- predict(glmLogit, jdtTest, type = ``response'') \\ range(jdtTrain\$cbo) ``` \\ \texttt{\#\#\ {[}1{]}\ \ \ 0\ 156} \\ \texttt{r\ range(results)} \\ \texttt{\#\#\ {[}1{]}\ 0.0992\ 0.9993} \\ \texttt{r\ plot(jdt2\$cbo,jdt2\$bugs)\ curve(predict(glmLogit,\ data.frame(cbo=x),\ type\ =\ "response"),add=TRUE)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-78-1.pdf} \\ \texttt{r\ \#\ points(jdtTrain\$cbo,fitted(glmLogit))} \\ Another type of graph: \\ \texttt{r\ library(popbio)} \\ \texttt{\#\#\ \#\#\ Attaching\ package:\ \textquotesingle{}popbio\textquotesingle{}} \\ \texttt{\#\#\ The\ following\ object\ is\ masked\ from\ 
\textquotesingle{}package:caret\textquotesingle{}:\ \#\#\ \#\#\ \ \ \ \ sensitivity} \\ \texttt{r\ logi.hist.plot(jdt2\$cbo,jdt2\$bugs,boxp=FALSE,type="hist",col="gray")} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-79-1.pdf} \\ \#\# The caret package \\ There are hundreds of packages to perform classification tasks in R, but many of them can be used through the `caret' package, which helps with many of the data mining process tasks, as described next. \\ The caret package (\url{http://topepo.github.io/caret/}) provides a unified interface for modeling and prediction with around 150 different models, with tools for: \\ + data splitting \\ + pre-processing \\ + feature selection \\ + model tuning using resampling \\ + variable importance estimation, etc. \\ Website: \url{http://caret.r-forge.r-project.org} \\ JSS Paper: \url{www.jstatsoft.org/v28/i05/paper} \\ Book: \href{http://AppliedPredictiveModeling.com/}{Applied Predictive Modeling} \\ \\ \# Regression \{\#regression\} \\ \#\# Linear Regression modeling \\ - \emph{Linear Regression} is one of the oldest and best-known predictive methods. As its name says, the idea is to try to fit a linear equation between a dependent variable and an independent, or explanatory, variable. The independent variable \(x\) is something the experimenter controls and the dependent variable \(y\) is something that the experimenter measures. The line is used to predict the value of \(y\) for a known value of \(x\). The variable \(x\) is the predictor variable and \(y\) the response variable. \\ - \emph{Multiple linear regression} uses two or more independent variables for building a model. See \url{https://www.wikipedia.org/wiki/Linear_regression}.
\\ - First proposed many years ago but still very useful\ldots{} \\ \includegraphics{figures/galton.png} \\ - The equation takes the form \(\hat{y}=b_0+b_1 * x\) - The method used to choose the values \(b_0\) and \(b_1\) is to minimize the sum of the squares of the residual errors. \\ \#\#\# Regression: Galton Data \\ Not related to Software Engineering but \ldots{} \\ \texttt{r\ library(UsingR)\ data(galton)\ par(mfrow=c(1,2))\ hist(galton\$child,col="blue",breaks=100)\ hist(galton\$parent,col="blue",breaks=100)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-80-1.pdf} \\ \texttt{r\ plot(galton\$parent,galton\$child,pch=1,col="blue",\ cex=0.4)\ lm1\ \textless{}-\ lm(galton\$child\ \textasciitilde{}\ galton\$parent)\ lines(galton\$parent,lm1\$fitted,col="red",lwd=3)\ plot(galton\$parent,lm1\$residuals,col="blue",pch=1,\ cex=0.4)\ abline(c(0,0),col="red",lwd=3)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-80-2.pdf} \\ \texttt{r\ qqnorm(galton\$child)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-80-3.pdf} \\ \#\#\# Simple Linear Regression \\ - Given two variables \(Y\) (response) and \(X\) (predictor), the assumption is that there is an approximate (\(\approx\)) \emph{linear} relation between those variables. 
- The mathematical model of the observed data is described as (for the case of simple linear regression): \( Y \approx \beta_0 + \beta_1 X\) \\ - the parameter \(\beta_0\) is named the \emph{intercept} and \(\beta_1\) is the \emph{slope} - Each observation can be modeled as \\ \(y_i = \beta_0 + \beta_1 x_i + \epsilon_i; \epsilon_i \sim N(0,\sigma^2)\) - \(\epsilon_i\) is the \emph{error} - This means that the variable \(y\) is normally distributed: \( y_i \sim N( \beta_0 + \beta_1 x_i, \sigma^2) \) \\ - The \emph{predictions} or \emph{estimations} of this model are obtained by a linear equation of the form \(\hat{Y}=\hat{\beta_0} + \hat{\beta}_1X\), that is, each new prediction is computed with \(\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1x_i \). - The actual parameters \(\beta_0\) and \(\beta_1\) are unknown - The parameters \(\hat{\beta}_0\) and \(\hat{\beta}_1\) of the linear equation can be estimated with different methods. \\ \#\#\# Least Squares \\ - One of the most used methods for computing \(\hat{\beta}_0\) and \(\hat{\beta}_1\) is the criterion of ``least squares'' minimization. - The data is composed of \(n\) pairs of observations \((x_i, y_i)\) - Given an observation \(y_i\) and its corresponding estimation \(\hat{y}_i\), the \emph{residual} \(e_i\) is defined as \(e_i = y_i - \hat{y}_i\) - the Residual Sum of Squares is defined as \(RSS=e_1^2+\dots + e_i^2+\dots+e_n^2\) - the Least Squares Approach minimizes the RSS - as a result of that minimization, the estimates of \(\hat{\beta}_1\) and \(\hat{\beta}_0\) can be obtained by means of calculus as \(\hat{\beta}_1=\frac{\sum_{i=1}^{n}{(x_i-\bar{x})(y_i-\bar{y})}}{\sum_{i=1}^{n}(x_i-\bar{x})^2}\) and \(\hat{\beta}_0=\bar{y}-\hat{\beta}_1\bar{x} \) where \(\bar{y}\) and \(\bar{x}\) are the sample means.
- the variance \(\sigma^2\) is estimated by \(\hat\sigma^2 = {RSS}/{(n-2)}\) where \(n\) is the number of observations - The \emph{Residual Standard Error} is defined as \(RSE = \sqrt{{RSS}/{(n-2)}}\) - The equation \( Y = \beta_0 + \beta_1 X + \epsilon\) defines the linear model, i.e., the \emph{population regression line} - The \emph{least squares line} is \(\hat{Y}=\hat{\beta_0} + \hat{\beta}_1X\) - \emph{Confidence intervals} are computed using the \emph{standard errors} of the intercept and the slope. - The \(95\%\) confidence interval for the slope is computed as \([\hat{\beta}_1 - 2 \cdot SE(\hat{\beta}_1), \hat{\beta}_1 + 2 \cdot SE(\hat{\beta}_1)]\) - where \( SE(\hat{\beta}_1) = \sqrt{\frac{\sigma^2}{\sum_{i=1}^{n}(x_i-\bar{x})^2}}\) \\ \#\#\# Linear regression in R \\ The following are the basic commands in R: \\ - The basic function is \texttt{lm()}, that returns an object with the model. - Other commands: \texttt{summary} prints out information about the regression, \texttt{coef} gives the coefficients for the linear model, \texttt{fitted} gives the predicted value of \(y\) for each value of \(x\), \texttt{residuals} contains the differences between observed and fitted values. - \href{https://stat.ethz.ch/R-manual/R-devel/library/stats/html/predict.lm.html}{\texttt{predict}} will generate predicted values of the response for the values of the explanatory variable. \\ \#\# Linear Regression Diagnostics \\ - Several plots help to evaluate the suitability of the linear regression + \emph{Residuals vs fitted}: The residuals should be randomly distributed around the horizontal line representing a residual error of zero; that is, there should not be a distinct trend in the distribution of points. + \emph{Standard Q-Q plot}: residual errors are normally distributed + \emph{Square root of the standardized residuals vs the fitted values}: there should be no obvious trend.
This plot is similar to the residuals versus fitted values plot, but it uses the square root of the standardized residuals. + \emph{Leverage}: measures the importance of each point in determining the regression result. Smaller values mean that removing the observation has little effect on the regression result. \\ \#\#\# Simulation example \\ \#\#\#\# Simulate a dataset \\ ```r set.seed(3456) \# equation is y = -6.6 + 0.13 x + e \# range x 100,400 a \textless- -6.6 b \textless- 0.13 num\_obs \textless- 60 xmin \textless- 100 xmax \textless- 400 x \textless- sample(seq(from=xmin, to = xmax, by =1), size= num\_obs, replace=FALSE) \\ sderror \textless- 9 \# sigma for the error term in the model e \textless- rnorm(num\_obs, 0, sderror) \\ y \textless- a + b * x + e \\ newlm \textless- lm(y\textasciitilde x) summary(newlm) ``` \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ lm(formula\ =\ y\ \textasciitilde{}\ x)\ \#\#\ \#\#\ Residuals:\ \#\#\ \ \ \ \ Min\ \ \ \ \ \ 1Q\ \ Median\ \ \ \ \ \ 3Q\ \ \ \ \ Max\ \#\#\ -26.518\ \ -5.645\ \ \ 0.363\ \ \ 5.695\ \ 18.392\ \#\#\ \#\#\ Coefficients:\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Estimate\ Std.\ Error\ t\ value\ Pr(\textgreater{}\textbar{}t\textbar{})\ \#\#\ (Intercept)\ \ -7.9060\ \ \ \ \ 3.3922\ \ \ -2.33\ \ \ \ 0.023\ *\ \#\#\ x\ \ \ \ \ \ \ \ \ \ \ \ \ 0.1331\ \ \ \ \ 0.0132\ \ \ 10.05\ \ 2.6e-14\ ***\ \#\#\ -\/-\/-\ \#\#\ Signif.\ codes:\ \ 0\ \textquotesingle{}***\textquotesingle{}\ 0.001\ \textquotesingle{}**\textquotesingle{}\ 0.01\ \textquotesingle{}*\textquotesingle{}\ 0.05\ \textquotesingle{}.\textquotesingle{}\ 0.1\ \textquotesingle{}\ \textquotesingle{}\ 1\ \#\#\ \#\#\ Residual\ standard\ error:\ 8.48\ on\ 58\ degrees\ of\ freedom\ \#\#\ Multiple\ R-squared:\ \ 0.635,\ \ Adjusted\ R-squared:\ \ 0.629\ \#\#\ F-statistic:\ \ 101\ on\ 1\ and\ 58\ DF,\ \ p-value:\ 2.57e-14} \\ ```r cfa1 \textless- coef(newlm){[}1{]} cfb2 \textless- coef(newlm){[}2{]} plot(x,y, xlab=``x axis'', ylab= ``y axis'', xlim = c(xmin, xmax), ylim = c(0,60), 
sub = ``Line in black is the actual model'') title(main = paste(``Line in blue is the Regression Line for'', num\_obs, '' points.'')) \\ abline(a = cfa1, b = cfb2, col= ``blue'', lwd=3) abline(a = a, b = b, col= ``black'', lwd=1) \#original line ``` \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-81-1.pdf} \\ \#\#\#\#\# Subset a set of points from the same sample \\ ```r \# sample from the same x to compare least squares lines \# change the denominator in newsample to see how the least square lines changes accordingly. newsample \textless- as.integer(num\_obs/8) \# number of pairs x,y \\ idxs\_x1 \textless- sample(1:num\_obs, size = newsample, replace = FALSE) \#sample indexes x1 \textless- x{[}idxs\_x1{]} e1 \textless- e{[}idxs\_x1{]} y1 \textless- a + b * x1 + e1 xy\_obs \textless- data.frame(x1, y1) names(xy\_obs) \textless- c(``x\_obs'', ``y\_obs'') \\ newlm1 \textless- lm(y1\textasciitilde x1) summary(newlm1) ``` \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ lm(formula\ =\ y1\ \textasciitilde{}\ x1)\ \#\#\ \#\#\ Residuals:\ \#\#\ \ \ \ \ \ 1\ \ \ \ \ \ 2\ \ \ \ \ \ 3\ \ \ \ \ \ 4\ \ \ \ \ \ 5\ \ \ \ \ \ 6\ \ \ \ \ \ 7\ \#\#\ \ 3.968\ -8.537\ \ 3.141\ -8.723\ \ 7.294\ -0.235\ \ 3.092\ \#\#\ \#\#\ Coefficients:\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Estimate\ Std.\ Error\ t\ value\ Pr(\textgreater{}\textbar{}t\textbar{})\ \#\#\ (Intercept)\ \ \ 2.9107\ \ \ \ \ 7.7166\ \ \ \ 0.38\ \ \ \ 0.722\ \#\#\ x1\ \ \ \ \ \ \ \ \ \ \ \ 0.0913\ \ \ \ \ 0.0328\ \ \ \ 2.79\ \ \ \ 0.039\ *\ \#\#\ -\/-\/-\ \#\#\ Signif.\ codes:\ \ 0\ \textquotesingle{}***\textquotesingle{}\ 0.001\ \textquotesingle{}**\textquotesingle{}\ 0.01\ \textquotesingle{}*\textquotesingle{}\ 0.05\ \textquotesingle{}.\textquotesingle{}\ 0.1\ \textquotesingle{}\ \textquotesingle{}\ 1\ \#\#\ \#\#\ Residual\ standard\ error:\ 6.89\ on\ 5\ degrees\ of\ freedom\ \#\#\ Multiple\ R-squared:\ \ 0.609,\ \ Adjusted\ R-squared:\ \ 0.53\ \#\#\ F-statistic:\ 7.77\ on\ 1\ and\ 5\ DF,\ \ p-value:\ 0.0385} \\ ```r cfa21 \textless- 
coef(newlm1){[}1{]} cfb22 \textless- coef(newlm1){[}2{]} \\ plot(x1,y1, xlab=``x axis'', ylab= ``y axis'', xlim = c(xmin, xmax), ylim = c(0,60)) title(main = paste(``New line in red with'', newsample, '' points in sample'')) \\ abline(a = a, b = b, col= ``black'', lwd=1) \# True line abline(a = cfa1, b = cfb2, col= ``blue'', lwd=1) \#sample abline(a = cfa21, b = cfb22, col= ``red'', lwd=2) \#new line ``` \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-82-1.pdf} \\ \#\#\#\#\# Compute a confidence interval on the original sample regression line \\ ```r newx \textless- seq(xmin, xmax) ypredicted \textless- predict(newlm, newdata=data.frame(x=newx), interval= ``confidence'', level= 0.90, se = TRUE) \\ plot(x,y, xlab=``x axis'', ylab= ``y axis'', xlim = c(xmin, xmax), ylim = c(0,60)) \# points(x1, fitted(newlm1)) abline(newlm) \\ lines(newx,ypredicted\$fit{[},2{]},col=``red'',lty=2) lines(newx,ypredicted\$fit{[},3{]},col=``red'',lty=2) ``` \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-83-1.pdf} \\ \texttt{r\ \#\ Plot\ the\ residuals\ or\ errors\ ypredicted\_x\ \textless{}-\ predict(newlm,\ newdata=data.frame(x=x))\ plot(x,y,\ xlab="x\ axis",\ ylab=\ "y\ axis",\ xlim\ =\ c(xmin,\ xmax),\ ylim\ =\ c(0,60),\ sub\ =\ "",\ pch=19,\ cex=0.75)\ title(main\ =\ paste("Residuals\ or\ errors",\ num\_obs,\ "\ points."))\ abline(newlm)\ segments(x,\ y,\ x,\ ypredicted\_x)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-83-2.pdf} \\ \#\#\#\#\# Take another sample from the model and explore \\ ```r \# equation is y = -6.6 + 0.13 x + e \# range x 100,400 num\_obs \textless- 35 xmin \textless- 100 xmax \textless- 400 x3 \textless- sample(seq(from=xmin, to = xmax, by =1), size= num\_obs, replace=FALSE) sderror \textless- 14 \# sigma for the error term in the model e3 \textless- rnorm(num\_obs, 0, sderror) \\ y3 \textless- a + b * x3 + e3 \\ newlm3 \textless- lm(y3\textasciitilde x3) summary(newlm3) ``` \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ lm(formula\ =\ y3\ \textasciitilde{}\ x3)\ \#\#\ \#\#\ Residuals:\ \#\#\ \ \ \ Min\ \ \ \ \ 1Q\ Median\ \ \ \ \ 3Q\ \ \ \ Max\ \#\#\ -40.87\ \ -9.20\ \ -2.28\ \ 12.08\ \ 47.17\ \#\#\ \#\#\ Coefficients:\ \#\#\ \ \ \ \ \ \ \ \ \ \ \ \ Estimate\ Std.\ Error\ t\ value\ Pr(\textgreater{}\textbar{}t\textbar{})\ \#\#\ (Intercept)\ \ -0.9284\ \ \ \ \ 8.7458\ \ \ -0.11\ \ \ 0.9161\ \#\#\ x3\ \ \ \ \ \ \ \ \ \ \ \ 0.1193\ \ \ \ \ 0.0345\ \ \ \ 3.45\ \ \ 0.0015\ **\ \#\#\ -\/-\/-\ \#\#\ Signif.\ codes:\ \ 0\ \textquotesingle{}***\textquotesingle{}\ 0.001\ \textquotesingle{}**\textquotesingle{}\ 0.01\ \textquotesingle{}*\textquotesingle{}\ 0.05\ \textquotesingle{}.\textquotesingle{}\ 0.1\ \textquotesingle{}\ \textquotesingle{}\ 1\ \#\#\ \#\#\ Residual\ standard\ error:\ 17.2\ on\ 33\ degrees\ of\ freedom\ \#\#\ Multiple\ R-squared:\ \ 0.266,\ \ Adjusted\ R-squared:\ \ 0.243\ \#\#\ F-statistic:\ 11.9\ on\ 1\ and\ 33\ DF,\ \ p-value:\ 0.00153} \\ ```r cfa31 \textless- coef(newlm3){[}1{]} cfb32 \textless- coef(newlm3){[}2{]} plot(x3,y3, xlab=``x axis'', ylab= ``y axis'', xlim = c(xmin, xmax), ylim = c(0,60)) title(main = paste(``Line in red is the Regression Line for'', num\_obs, '' points.'')) abline(a = cfa31, b = cfb32, col= ``red'', lwd=3) abline(a = a, b = b, col= ``black'', lwd=2) \#original line abline(a = cfa1, b = cfb2, col= ``blue'', lwd=1) \#first sample \\ \# confidence intervals for the new sample \\ newx \textless- seq(xmin, xmax) ypredicted \textless- predict(newlm3, newdata=data.frame(x3=newx), interval= ``confidence'', level= 0.90, se = TRUE) \\ lines(newx,ypredicted\$fit{[},2{]},col=``red'',lty=2, lwd=2) lines(newx,ypredicted\$fit{[},3{]},col=``red'',lty=2, lwd=2) ``` \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-84-1.pdf} \\ \#\#\# Diagnostics for assessing the regression line \\ \#\#\#\# Residual Standard Error - It gives us an idea of the typical or average error of the model. It is the estimated standard deviation of the residuals.
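The Residual Standard Error can be checked by hand from any \texttt{lm} fit, since \(RSE = \sqrt{RSS/(n-2)}\) for simple linear regression. A quick base-R sketch on synthetic data:

```r
# RSE by hand: sqrt(RSS / (n - 2)), which matches summary(fit)$sigma.
set.seed(42)
x <- 1:50
y <- 2 + 0.5 * x + rnorm(50, sd = 3)   # known line plus noise

fit <- lm(y ~ x)
rss <- sum(residuals(fit)^2)           # Residual Sum of Squares
rse <- sqrt(rss / (length(x) - 2))     # n - 2: two estimated parameters

all.equal(rse, summary(fit)$sigma)     # TRUE
```

The divisor \(n-2\) reflects the two parameters (intercept and slope) estimated from the data.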
\\ \#\#\#\# \(R^2\) statistic - This is the proportion of variability in the data that is explained by the model. Best values are those close to 1. \\ \#\# Multiple Linear Regression \\ \#\#\# Partial Least Squares - If several predictors are highly correlated, the least squares approach has high variability. - PLS finds linear combinations of the predictors, called \emph{components} or \emph{latent} variables. \\ \#\# Linear regression in Software Effort estimation \\ Fitting a linear model to the log-log transformed data corresponds to the predictive power equation \(y= e^{b_0}*x^{b_1}\), ignoring the bias corrections. Note: depending on how the error term behaves we could try another general linear model (GLM) or other model that does not rely on the normality of the residuals (quantile regression, etc.) - First, we are fitting the model to the whole dataset. But it is not the right way to do it, because of overfitting. \\ \texttt{r\ library(foreign)\ china\ \textless{}-\ read.arff("./datasets/effortEstimation/china.arff")\ china\_size\ \textless{}-\ china\$AFP\ summary(china\_size)} \\ \texttt{\#\#\ \ \ \ Min.\ 1st\ Qu.\ \ Median\ \ \ \ Mean\ 3rd\ Qu.\ \ \ \ Max.\ \#\#\ \ \ \ \ \ \ 9\ \ \ \ \ 100\ \ \ \ \ 215\ \ \ \ \ 487\ \ \ \ \ 438\ \ \ 17518} \\ \texttt{r\ china\_effort\ \textless{}-\ china\$Effort\ summary(china\_effort)} \\ \texttt{\#\#\ \ \ \ Min.\ 1st\ Qu.\ \ Median\ \ \ \ Mean\ 3rd\ Qu.\ \ \ \ Max.\ \#\#\ \ \ \ \ \ 26\ \ \ \ \ 704\ \ \ \ 1829\ \ \ \ 3921\ \ \ \ 3826\ \ \ 54620} \\ \texttt{r\ par(mfrow=c(1,2))\ hist(china\_size,\ col="blue",\ xlab="Adjusted\ Function\ Points",\ main="Distribution\ of\ AFP")\ hist(china\_effort,\ col="blue",xlab="Effort",\ main="Distribution\ of\ Effort")} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-85-1.pdf} \\ \texttt{r\ boxplot(china\_size)\ boxplot(china\_effort)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-85-2.pdf} \\ \texttt{r\ qqnorm(china\_size)\ qqline(china\_size)\ qqnorm(china\_effort)\ 
qqline(china\_effort)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-85-3.pdf} \\ Applying the \texttt{log} function (it computes natural logarithms, base \(e\)) \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-86-1.pdf} \includegraphics{DASE_files/figure-latex/unnamed-chunk-86-2.pdf} \\ \texttt{r\ linmodel\_logchina\ \textless{}-\ lm(logchina\_effort\ \textasciitilde{}\ logchina\_size)\ par(mfrow=c(1,1))\ plot(logchina\_size,\ logchina\_effort)\ abline(linmodel\_logchina,\ lwd=3,\ col=3)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-87-1.pdf} \\ \texttt{r\ par(mfrow=c(1,2))\ plot(linmodel\_logchina,\ ask\ =\ FALSE)} \\ \includegraphics{DASE_files/figure-latex/unnamed-chunk-87-2.pdf} \includegraphics{DASE_files/figure-latex/unnamed-chunk-87-3.pdf} \\ \texttt{r\ linmodel\_logchina} \\ \texttt{\#\#\ \#\#\ Call:\ \#\#\ lm(formula\ =\ logchina\_effort\ \textasciitilde{}\ logchina\_size)\ \#\#\ \#\#\ Coefficients:\ \#\#\ \ \ (Intercept)\ \ logchina\_size\ \#\#\ \ \ \ \ \ \ \ \ 3.301\ \ \ \ \ \ \ \ \ \ 0.768} \\ \#\# References \\ - The New Statistics with R, Andy Hector, 2015 - An Introduction to R, W.N. Venables and D.M. Smith and the R Development Core Team - Practical Data Science with R, Nina Zumel and John Mount - G. James et al, An Introduction to Statistical Learning with Applications in R, Springer, 2013 \\ \\ \# (PART) Unsupervised Models \{-\} \\ \# Unsupervised or Descriptive modeling \\ From the descriptive (unsupervised) point of view, patterns are found to predict future behaviour or estimate. This include association rules, clustering, or tree clustering which aim at grouping together objects (e.g., animals) into successively larger clusters, using some measure of similarity or distance. 
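As a minimal sketch of this idea (toy data; all object names are illustrative), R's built-in hierarchical clustering merges the closest objects into successively larger clusters starting from a distance matrix:

```r
# Toy data: two well-separated groups of 5 two-dimensional points each
set.seed(1)
m <- rbind(matrix(rnorm(10, mean = 0), ncol = 2),
           matrix(rnorm(10, mean = 5), ncol = 2))

d  <- dist(m, method = "euclidean")  # pairwise distance measure
hc <- hclust(d, method = "average")  # agglomerative (tree) clustering
groups <- cutree(hc, k = 2)          # cut the dendrogram into 2 clusters
groups
```

Cutting the resulting tree at different heights yields coarser or finer groupings of the same objects.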
The dataset is like the previous table, but without the \(C\) class attribute:

\begin{longtable}[]{@{}lll@{}}
\toprule
Att\textsubscript{1} & \(\ldots\) & Att\textsubscript{n} \\
\midrule
\endhead
a\textsubscript{11} & \(\ldots\) & a\textsubscript{1n} \\
a\textsubscript{21} & \(\ldots\) & a\textsubscript{2n} \\
\(\ldots\) & \(\ldots\) & \(\ldots\) \\
a\textsubscript{m1} & \(\ldots\) & a\textsubscript{mn} \\
\bottomrule
\end{longtable}

\hypertarget{clustering}{%
\section{Clustering}\label{clustering}}

\begin{verbatim}
library(foreign)
library(fpc)

kc1 <- read.arff("./datasets/defectPred/D1/KC1.arff")

# Split into training and test datasets
set.seed(1)
ind <- sample(2, nrow(kc1), replace = TRUE, prob = c(0.7, 0.3))
kc1.train <- kc1[ind==1, ]
kc1.test <- kc1[ind==2, ]

# No class
kc1.train$Defective <- NULL

ds <- dbscan(kc1.train, eps = 0.42, MinPts = 5)

kc1.kmeans <- kmeans(kc1.train, 2)
\end{verbatim}

\hypertarget{k-means}{%
\subsection{k-Means}\label{k-means}}

\begin{verbatim}
library(reshape, quietly=TRUE)
library(graphics)
kc1kmeans <- kmeans(sapply(na.omit(kc1.train), rescaler, "range"), 10)
#plot(kc1kmeans, col = kc1kmeans$cluster)
#points(kc1kmeans$centers, col = 1:5, pch = 8)
\end{verbatim}

\hypertarget{association-rules}{%
\section{Association rules}\label{association-rules}}

\begin{verbatim}
library(arules)

# x <- as.numeric(kc1$LOC_TOTAL)
# str(x)
# summary(x)
# hist(x, breaks=30, main="LoC Total")
# xDisc <- discretize(x, categories=5)
# table(xDisc)

for(i in 1:21) kc1[,i] <- discretize(kc1[,i], method = "interval", breaks = 5)

rules <- apriori(kc1,
                 parameter = list(minlen=3, supp=0.05, conf=0.35),
                 appearance = list(rhs=c("Defective=Y"), default="lhs"),
                 control = list(verbose=F))

#rules <- apriori(kc1,
#                 parameter = list(minlen=2, supp=0.05, conf=0.3),
#                 appearance = list(rhs=c("Defective=Y", "Defective=N"),
#                                   default="lhs"))

inspect(rules)
\end{verbatim}

\begin{verbatim}
##     lhs                               rhs           support confidence coverage lift count
## [1] {HALSTEAD_CONTENT=[38.6,77.2),
##      HALSTEAD_LEVEL=[0,0.4)}       => {Defective=Y}  0.0539      0.370    0.146 2.39   113
## [2] {LOC_CODE_AND_COMMENT=[0,2.4),
##      HALSTEAD_CONTENT=[38.6,77.2)} => {Defective=Y}  0.0525      0.377    0.139 2.43   110
## [3] {LOC_CODE_AND_COMMENT=[0,2.4),
##      HALSTEAD_CONTENT=[38.6,77.2),
##      HALSTEAD_LEVEL=[0,0.4)}       => {Defective=Y}  0.0515      0.374    0.138 2.41   108
\end{verbatim}

\begin{verbatim}
library(arulesViz)
plot(rules)
\end{verbatim}

\includegraphics{DASE_files/figure-latex/unnamed-chunk-90-1.pdf}

\part{Evaluation}

\hypertarget{evaluation-of-models}{%
\chapter{Evaluation of Models}\label{evaluation-of-models}}

Once we obtain the model with the training data, we need to evaluate it with some new data (testing data).

\begin{quote}
\textbf{No Free Lunch theorem}: in the absence of any knowledge about the prediction problem, no model can be said to be uniformly better than any other.
\end{quote}

\hypertarget{building-and-validating-a-model}{%
\section{Building and Validating a Model}\label{building-and-validating-a-model}}

We cannot use the same data for training and testing (it is like evaluating a student with the exercises previously solved in class; the student's marks will be ``optimistic'' and we will not know about the student's capability to generalise the learned concepts).

Therefore, we should, at a minimum, divide the dataset into \emph{training} and \emph{testing}, learn the model with the training data and test it with the rest of the data, as explained next.

\hypertarget{holdout-approach}{%
\subsection{Holdout approach}\label{holdout-approach}}

The \textbf{holdout approach} consists of dividing the dataset into \emph{training} (typically approx.
2/3 of the data) and \emph{testing} (approx. 1/3 of the data).

\begin{itemize}
\tightlist
\item
  Problems: if the split is done randomly, the data can be skewed, classes can be missing, etc. Stratification ensures that each class is represented with approximately equal proportions in both subsets (e.g., if the data contain approximately 45\% of positive cases, the training and testing datasets should maintain a similar proportion of positive cases).
\end{itemize}

The holdout estimate can be made more reliable by repeating the process with different subsamples (repeated holdout method). The error rates of the different iterations are then averaged into an overall error rate.

\begin{itemize}
\tightlist
\item
  Usually, part of the data points are used for building the model and the remaining points are used for validating the model. There are several approaches to this process.
\item
  \emph{Validation Set approach}: the simplest method. It consists of randomly dividing the available set of observations into two parts, a \emph{training set} and a \emph{validation set} (or hold-out set). Usually 2/3 of the data points are used for training and 1/3 for testing purposes.
\end{itemize}

\includegraphics{figures/validation.png}

\hypertarget{cross-validation-cv}{%
\subsection{Cross Validation (CV)}\label{cross-validation-cv}}

\emph{k-fold Cross-Validation} involves randomly dividing the set of observations into \(k\) groups, or folds, of approximately equal size. One fold is treated as a validation set and the method is trained on the remaining \(k-1\) folds; this is repeated \(k\) times, each time holding out a different fold. When \(k\) equals \(n\), we obtain the Leave-One-Out approach described below.
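The fold construction and the averaging of the per-fold results can be sketched in base R (simulated data; names such as \texttt{dat}, \texttt{folds} and \texttt{cv\_err} are illustrative, not from the text):

```r
# Simulated data (illustrative only): y depends linearly on x
set.seed(123)
n <- 100
k <- 10
dat <- data.frame(x = rnorm(n))
dat$y <- 3 * dat$x + rnorm(n)

# Randomly assign each observation to one of k folds
folds <- sample(rep(1:k, length.out = n))

cv_err <- numeric(k)
for (i in 1:k) {
  train <- dat[folds != i, ]   # training part: all folds except i
  test  <- dat[folds == i, ]   # validation part: fold i
  fit   <- lm(y ~ x, data = train)
  cv_err[i] <- mean((test$y - predict(fit, newdata = test))^2)
}
mean(cv_err)  # cross-validated estimate of the test MSE
```

With a classifier instead of \texttt{lm}, the per-fold quantity would be the accuracy on the held-out fold rather than the squared error.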
\begin{itemize}
\tightlist
\item
  1st step: split the dataset \(\mathcal{D}\) into \(k\) subsets of approximately equal size, \(C_1, \dots, C_k\)
\item
  2nd step: for each fold \(i\), construct the training set \(D_i = \mathcal{D}-C_i\), train the classifier on \(D_i\), and test its accuracy on the held-out subset \(C_i\)
\end{itemize}

Having done this for all \(k\) folds, we estimate the accuracy of the method by averaging the accuracy over the \(k\) cross-validation trials.

\includegraphics{figures/kfold.png}

\hypertarget{leave-one-out-cross-validation-loo-cv}{%
\subsection{Leave-One-Out Cross-Validation (LOO-CV)}\label{leave-one-out-cross-validation-loo-cv}}

\emph{Leave-One-Out Cross-Validation} (LOO-CV) is a special case of CV. Instead of creating two subsets for training and testing, a single observation is used as the validation set and the remaining observations make up the training set. This is repeated \(n\) times (the total number of observations) and the estimate of the test mean squared error is the average of the \(n\) test estimates.

\includegraphics{figures/leaveone.png}

\bottomrule
\end{longtable}

\hypertarget{evaluation-of-classification-models}{%
\section{Evaluation of Classification Models}\label{evaluation-of-classification-models}}

The confusion matrix (which can be extended to multiclass problems) is a table that presents the results of a classification algorithm. The following table shows the possible outcomes for binary classification problems:

\begin{longtable}[]{@{}lll@{}}
\toprule
& Actual Positive & Actual Negative \\
\midrule
\endhead
Predicted Positive & \(TP\) & \(FP\) \\
Predicted Negative & \(FN\) & \(TN\) \\
\bottomrule
\end{longtable}

where \emph{True Positives} (\(TP\)) and \emph{True Negatives} (\(TN\)) are respectively the number of positive and negative instances correctly classified, \emph{False Positives} (\(FP\)) is the number of negative instances misclassified as positive (also called Type~I errors), and \emph{False Negatives} (\(FN\)) is the number of positive instances misclassified as negative (Type~II errors).
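As a minimal sketch (made-up labels, not from the chapter's datasets), the matrix and the four counts can be obtained with base R's \texttt{table()}:

```r
# Made-up actual and predicted labels (illustrative only)
actual    <- factor(c("pos","pos","neg","neg","pos","neg","neg","pos"),
                    levels = c("pos","neg"))
predicted <- factor(c("pos","neg","neg","neg","pos","pos","neg","pos"),
                    levels = c("pos","neg"))

cm <- table(Predicted = predicted, Actual = actual)
cm

TP <- cm["pos","pos"]; FP <- cm["pos","neg"]   # rows index the prediction
FN <- cm["neg","pos"]; TN <- cm["neg","neg"]   # columns index the actual class
(TP + TN) / sum(cm)   # accuracy: 6 correct out of 8 = 0.75
```

The same \texttt{table()} call generalises directly to multiclass problems; the diagonal then holds the correctly classified counts.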
\begin{itemize}
\tightlist
\item
  \href{https://en.wikipedia.org/wiki/Confusion_matrix}{Confusion Matrix in Wikipedia}
\end{itemize}

From the confusion matrix, we can calculate:

\begin{itemize}
\item
  \emph{True positive rate}, or \emph{recall} (\(TP_r = recall = \frac{TP}{TP+FN}\)), is the proportion of positive cases correctly classified as belonging to the positive class.
\item
  \emph{False negative rate} (\(FN_r=\frac{FN}{TP+FN}\)) is the proportion of positive cases misclassified as belonging to the negative class.
\item
  \emph{False positive rate} (\(FP_r=\frac{FP}{FP+TN}\)) is the proportion of negative cases misclassified as belonging to the positive class.
\item
  \emph{True negative rate} (\(TN_r=\frac{TN}{FP+TN}\)) is the proportion of negative cases correctly classified as belonging to the negative class.
\end{itemize}

There is a trade-off between \(FP_r\) and \(FN_r\), as the objective is to minimise both metrics (or, conversely, to maximise the true positive and true negative rates). It is possible to combine both metrics into a single figure, predictive \(accuracy\):

\[accuracy = \frac{TP + TN}{TP + TN + FP + FN}\]

to measure the performance of classifiers (or the complementary value, the \emph{error rate}, defined as \(1-accuracy\)).

\begin{itemize}
\item
  \emph{Precision}, the fraction of relevant instances among the retrieved instances, \(\frac{TP}{TP+FP}\)
\item
  \emph{Recall} (\emph{sensitivity}, or probability of detection, \(PD\)), the fraction of relevant instances that have been retrieved over the total of relevant instances, \(\frac{TP}{TP+FN}\)
\item
  \emph{f-measure}, the harmonic mean of precision and recall, \(2 \cdot \frac{precision \cdot recall}{precision + recall}\)
\item
  G-mean: \(\sqrt{PD \times Precision}\)
\item
  G-mean2: \(\sqrt{PD \times Specificity}\)
\item
  J coefficient: \(j\mbox{-}coeff = sensitivity + specificity - 1 = PD-PF\)
\item
  A suitable and interesting performance metric for binary classification when data are imbalanced is the Matthews Correlation Coefficient
(\(MCC\))~\cite{Matthews1975Comparison}:
\end{itemize}

\[MCC=\frac{TP\times TN - FP\times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}\]

\(MCC\) is calculated directly from the confusion matrix. Its range goes from -1 to +1: the closer to one the better, as it indicates perfect prediction, whereas a value of 0 means that the classification is not better than random prediction, and negative values mean that predictions are worse than random.

\hypertarget{prediction-in-probabilistic-classifiers}{%
\subsection{Prediction in probabilistic classifiers}\label{prediction-in-probabilistic-classifiers}}

A probabilistic classifier estimates the probability of each of the possible class values given the attribute values of the instance, \(P(c|{x})\). Then, given a new instance \({x}\), the class value with the highest a posteriori probability will be assigned to that new instance (the \emph{winner takes all} approach):

\(\psi({x}) = argmax_c \, (P(c|{x}))\)

\hypertarget{other-metrics-used-in-software-engineering-with-classification}{%
\section{Other Metrics used in Software Engineering with Classification}\label{other-metrics-used-in-software-engineering-with-classification}}

In the domain of defect prediction, when two classes are considered, it is also customary to refer to the \emph{probability of detection} (\(pd\)), which corresponds to the True Positive rate (\(TP_{rate}\) or \emph{Sensitivity}), as a measure of the goodness of the model, and to the \emph{probability of false alarm} (\(pf\)) as performance measures~\cite{Menzies07}. The objective is to find techniques that maximise \(pd\) and minimise \(pf\). As stated by Menzies et al., the balance between these two measures depends on the project characteristics (e.g.,~real-time systems vs.~information management systems). This balance is formulated as the Euclidean distance from the sweet spot (\(pf=0\), \(pd=1\)) to the observed pair \((pf,pd)\):
\[balance=1-\frac{\sqrt{(0-pf)^2+(1-pd)^2}}{\sqrt{2}}\]

The distance is normalised by the maximum possible distance across the ROC square (\(\sqrt{2}\)), subtracted from 1, and expressed as a percentage.

\hypertarget{graphical-evaluation}{%
\section{Graphical Evaluation}\label{graphical-evaluation}}

\hypertarget{receiver-operating-characteristic-roc}{%
\subsection{Receiver Operating Characteristic (ROC)}\label{receiver-operating-characteristic-roc}}

The \emph{Receiver Operating Characteristic} (\(ROC\)) curve~\citep{Fawcett2006} provides a graphical visualisation of the results.

\begin{figure}
\centering
\includegraphics{figures/roc.png}
\caption{Receiver Operating Characteristic}
\end{figure}

The Area Under the ROC Curve (AUC) also provides a quality measure between positive and negative rates in a single value. A simple way to approximate the AUC is with the following equation:

\(AUC=\frac{1+TP_{r}-FP_{r}}{2}\)

\hypertarget{precision-recall-curve-prc}{%
\subsection{Precision-Recall Curve (PRC)}\label{precision-recall-curve-prc}}

Similarly to ROC, another widely used evaluation technique is the Precision-Recall Curve (PRC), which depicts the trade-off between precision and recall and can also be summarised in a single value as the Area Under the Precision-Recall Curve (AUPRC)~\cite{Davis2006}. AUPRC is more informative than ROC when dealing with imbalanced datasets; moreover, optimising AUC values does not necessarily optimise AUPRC values, i.e., a good classifier in ROC space may not be as good in PRC space.

The weighted average uses weights proportional to the class frequencies in the data: the measure of each class (TP rate, precision, recall, \ldots) is weighted by the proportion of instances in that class. Computing the simple average can sometimes be misleading.
For instance, if class 1 has 100 instances and you achieve a recall of 30\%, and class 2 has 1 instance and you achieve a recall of 100\% (you predicted the only instance correctly), then taking the simple average (65\%) will inflate the recall score because of the one instance you predicted correctly. Taking the weighted average gives 30.7\%, which is a much more realistic measure of the performance of the classifier.

\hypertarget{numeric-prediction-evaluation}{%
\section{Numeric Prediction Evaluation}\label{numeric-prediction-evaluation}}

In numeric prediction (e.g., predicting effort or a number of defects), what matters is the difference between the predicted value and the actual value. Common performance metrics used for numeric prediction are as follows, where \(\hat{y}_i\) represents the predicted value and \(y_i\) the actual one.

Mean Square Error (\(MSE\)): \(MSE = \frac{(\hat{y}_1 - y_1)^2 + \ldots +(\hat{y}_n - y_n)^2}{n} = \frac{1}{n}\sum_{i=1}^n(\hat{y}_i - y_i)^2\)

Root Mean Squared Error (\(RMSE\)): \({RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^n (\hat{y}_i - y_i)^2}\)

Mean Absolute Error (\(MAE\)): \(MAE = \frac{|\hat{y}_1 - y_1| + \ldots +|\hat{y}_n - y_n|}{n} = \frac{1}{n}\sum_{i=1}^n |\hat{y}_i - y_i|\)

Relative Absolute Error (\(RAE\)): \(RAE = \frac{ \sum^N_{i=1} | \hat{\theta}_i - \theta_i | } { \sum^N_{i=1} | \overline{\theta} - \theta_i | }\)

Root Relative Squared Error (\(RRSE\)): \(RRSE = \sqrt{ \frac{ \sum^N_{i=1} ( \hat{\theta}_i - \theta_i )^2 } { \sum^N_{i=1} ( \overline{\theta} - \theta_i )^2 } }\), where \(\overline{\theta}\) is the mean value of \(\theta\).

Relative Squared Error (\(RSE\)): \(RSE = \frac{(p_1-a_1)^2 + \ldots +(p_n-a_n)^2}{(a_1-\bar{a})^2 + \ldots + (a_n-\bar{a})^2}\), where \(p_i\) are the predicted values, \(a_i\) the actual values and \(\bar{a}\) the mean value over the training data.

\emph{Correlation coefficient}: the correlation coefficient between two random variables \(X\) and \(Y\) is defined as \(\rho(X,Y) = \frac{{\bf Cov}(X,Y)}{\sqrt{{\bf Var}(X){\bf Var}(Y)}}\).
The sample correlation coefficient \(r\) between two samples \(x_i\) and \(y_i\) is defined as \(r = S_{xy}/\sqrt{S_{xx}S_{yy}}\).

Example: is there any linear relationship between the effort estimates (\(p_i\)) and the actual effort (\(a_i\))?

\(p: 39,43,21,64,57,47,28,75,34,52\)

\(a: 65,78,52,82,92,89,73,98,56,75\)

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{p}\OtherTok{\textless{}{-}}\FunctionTok{c}\NormalTok{(}\DecValTok{39}\NormalTok{,}\DecValTok{43}\NormalTok{,}\DecValTok{21}\NormalTok{,}\DecValTok{64}\NormalTok{,}\DecValTok{57}\NormalTok{,}\DecValTok{47}\NormalTok{,}\DecValTok{28}\NormalTok{,}\DecValTok{75}\NormalTok{,}\DecValTok{34}\NormalTok{,}\DecValTok{52}\NormalTok{)}
\NormalTok{a}\OtherTok{\textless{}{-}}\FunctionTok{c}\NormalTok{(}\DecValTok{65}\NormalTok{,}\DecValTok{78}\NormalTok{,}\DecValTok{52}\NormalTok{,}\DecValTok{82}\NormalTok{,}\DecValTok{92}\NormalTok{,}\DecValTok{89}\NormalTok{,}\DecValTok{73}\NormalTok{,}\DecValTok{98}\NormalTok{,}\DecValTok{56}\NormalTok{,}\DecValTok{75}\NormalTok{)}
\CommentTok{\#}
\FunctionTok{cor}\NormalTok{(p,a)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 0.84
\end{verbatim}

For simple linear regression, the \(R^2\) statistic is the square of the sample correlation coefficient, \(r^2\).

\hypertarget{evaluationSE}{%
\chapter{Measures of Evaluation in Software Engineering}\label{evaluationSE}}

\hypertarget{effort-estimation-evaluation-metrics}{%
\section{Effort estimation evaluation metrics}\label{effort-estimation-evaluation-metrics}}

There are several measures typically used in software engineering. In particular for effort estimation, the following metrics are extensively used in addition to, or instead of, statistical measures.
\begin{itemize}
\item
  \emph{Mean of the Absolute Error (MAR)}: compute the absolute errors and take the mean.
\item
  \emph{Geometric Mean of the Absolute Error (gMAR)}: more appropriate when the distribution is skewed.
\item
  \emph{Mean Magnitude of the Relative Error (MMRE)}: \(\frac{\sum_{i=1}^{n}{|{\hat{y}_i-y_i}|}/y_i}{n}\). This measure has been criticised many times as a biased measure.
\item
  \emph{Median Magnitude of the Relative Error (MdMRE)}: using the median instead of the mean.
\item
  \emph{Level of Prediction} (\(Pred(l)\)): the percentage of estimates that are within the percentage level \(l\) of the actual values. The level of prediction is typically set at 25\% below and above the actual value, and an estimation method is considered good if it gives a result of more than 75\%.
\item
  \emph{Standardised Accuracy (SA)} (proposed by Shepperd and MacDonell): this measure overcomes the problems of the MMRE. It is defined as the MAR relative to random guessing: \(SA=\left(1-\frac{MAR}{\overline{MAR}_{P_0}}\right)\times 100\)
\item
  \emph{Random guessing}: \(\overline{MAR}_{P_0}\) is defined as: predict a \(\hat{y}_t\) for the target case \(t\) by randomly sampling (with equal probability) over all the remaining \(n-1\) cases and take \(\hat{y}_t=y_r\), where \(r\) is drawn randomly from \(1\) to \(n\) and \(r\neq t\).
\item
  \emph{Exact \(\overline{MAR}_{P_0}\)}: an improvement over \(\overline{MAR}_{P_0}\). For small datasets the ``random guessing'' can be computed exactly by iterating over all data points.
\end{itemize} \hypertarget{evaluation-of-the-model-in-the-testing-data}{% \section{Evaluation of the Model in the Testing data}\label{evaluation-of-the-model-in-the-testing-data}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \NormalTok{gm\_mean }\OtherTok{=} \ControlFlowTok{function}\NormalTok{(x, }\AttributeTok{na.rm=}\ConstantTok{TRUE}\NormalTok{)\{} \FunctionTok{exp}\NormalTok{(}\FunctionTok{sum}\NormalTok{(}\FunctionTok{log}\NormalTok{(x[x }\SpecialCharTok{\textgreater{}} \DecValTok{0}\NormalTok{]), }\AttributeTok{na.rm=}\NormalTok{na.rm) }\SpecialCharTok{/} \FunctionTok{length}\NormalTok{(x))\}} \NormalTok{chinaTrain }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/effortEstimation/china3AttSelectedAFPTrain.arff"}\NormalTok{)} \NormalTok{logchina\_size }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(chinaTrain}\SpecialCharTok{$}\NormalTok{AFP)} \NormalTok{logchina\_effort }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(chinaTrain}\SpecialCharTok{$}\NormalTok{Effort)} \NormalTok{linmodel\_logchina\_train }\OtherTok{\textless{}{-}} \FunctionTok{lm}\NormalTok{(logchina\_effort }\SpecialCharTok{\textasciitilde{}}\NormalTok{ logchina\_size)} \NormalTok{chinaTest }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/effortEstimation/china3AttSelectedAFPTest.arff"}\NormalTok{)} \NormalTok{b0 }\OtherTok{\textless{}{-}}\NormalTok{ linmodel\_logchina\_train}\SpecialCharTok{$}\NormalTok{coefficients[}\DecValTok{1}\NormalTok{]} \NormalTok{b1 }\OtherTok{\textless{}{-}}\NormalTok{ linmodel\_logchina\_train}\SpecialCharTok{$}\NormalTok{coefficients[}\DecValTok{2}\NormalTok{]} \NormalTok{china\_size\_test }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{AFP} \NormalTok{actualEffort }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{Effort} \CommentTok{\# predEffort \textless{}{-} 
exp(b0+b1*log(china\_size\_test)) wr} \NormalTok{predEffort }\OtherTok{\textless{}{-}} \FunctionTok{exp}\NormalTok{(b0)}\SpecialCharTok{*}\NormalTok{china\_size\_test}\SpecialCharTok{\^{}}\NormalTok{b1} \NormalTok{err }\OtherTok{\textless{}{-}}\NormalTok{ actualEffort }\SpecialCharTok{{-}}\NormalTok{ predEffort }\CommentTok{\#error or residual} \NormalTok{ae }\OtherTok{\textless{}{-}} \FunctionTok{abs}\NormalTok{(err)} \FunctionTok{hist}\NormalTok{(ae, }\AttributeTok{main=}\StringTok{"Absolute Error in the China Test data"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-92-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mar }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(ae)} \NormalTok{mre }\OtherTok{\textless{}{-}}\NormalTok{ ae}\SpecialCharTok{/}\NormalTok{actualEffort} \NormalTok{mmre }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(mre)} \NormalTok{mdmre }\OtherTok{\textless{}{-}} \FunctionTok{median}\NormalTok{(mre)} \NormalTok{gmar }\OtherTok{\textless{}{-}} \FunctionTok{gm\_mean}\NormalTok{(ae)} \NormalTok{mar} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1867 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mmre} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1.15 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{mdmre} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.551 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{gmar} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 833 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{level\_pred }\OtherTok{\textless{}{-}} \FloatTok{0.25} \CommentTok{\#below and above (both)} \NormalTok{lowpred }\OtherTok{\textless{}{-}}\NormalTok{ actualEffort}\SpecialCharTok{*}\NormalTok{(}\DecValTok{1}\SpecialCharTok{{-}}\NormalTok{level\_pred)} \NormalTok{uppred }\OtherTok{\textless{}{-}}\NormalTok{ 
actualEffort}\SpecialCharTok{*}\NormalTok{(}\DecValTok{1}\SpecialCharTok{+}\NormalTok{level\_pred)} \NormalTok{pred }\OtherTok{\textless{}{-}}\NormalTok{ predEffort }\SpecialCharTok{\textless{}=}\NormalTok{ uppred }\SpecialCharTok{\&}\NormalTok{ predEffort }\SpecialCharTok{\textgreater{}=}\NormalTok{ lowpred }\CommentTok{\#pred is a vector with logical values } \NormalTok{Lpred }\OtherTok{\textless{}{-}} \FunctionTok{sum}\NormalTok{(pred)}\SpecialCharTok{/}\FunctionTok{length}\NormalTok{(pred)} \NormalTok{Lpred} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.186 \end{verbatim} \hypertarget{building-a-linear-model-on-the-telecom1-dataset}{% \section{Building a Linear Model on the Telecom1 dataset}\label{building-a-linear-model-on-the-telecom1-dataset}} \begin{itemize} \tightlist \item Although there are few data points we split the file into Train (2/3) and Test (1/3) \end{itemize} \begin{Shaded} \begin{Highlighting}[] \NormalTok{telecom1 }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\StringTok{"./datasets/effortEstimation/Telecom1.csv"}\NormalTok{, }\AttributeTok{sep=}\StringTok{","}\NormalTok{,}\AttributeTok{header=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{stringsAsFactors=}\ConstantTok{FALSE}\NormalTok{, }\AttributeTok{dec =} \StringTok{"."}\NormalTok{) }\CommentTok{\#read data} \NormalTok{samplesize }\OtherTok{\textless{}{-}} \FunctionTok{floor}\NormalTok{(}\FloatTok{0.66}\SpecialCharTok{*}\FunctionTok{nrow}\NormalTok{(telecom1))} \FunctionTok{set.seed}\NormalTok{(}\DecValTok{012}\NormalTok{) }\CommentTok{\# to make the partition reproducible} \NormalTok{train\_idx }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(}\FunctionTok{seq\_len}\NormalTok{(}\FunctionTok{nrow}\NormalTok{(telecom1)), }\AttributeTok{size =}\NormalTok{ samplesize)} \NormalTok{telecom1\_train }\OtherTok{\textless{}{-}}\NormalTok{ telecom1[train\_idx, ]} \NormalTok{telecom1\_test }\OtherTok{\textless{}{-}}\NormalTok{ 
telecom1[}\SpecialCharTok{{-}}\NormalTok{train\_idx, ]} \FunctionTok{par}\NormalTok{(}\AttributeTok{mfrow=}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))} \CommentTok{\# transformation of variables to log{-}log} \NormalTok{xtrain }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(telecom1\_train}\SpecialCharTok{$}\NormalTok{size)} \NormalTok{ytrain }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(telecom1\_train}\SpecialCharTok{$}\NormalTok{effort)} \NormalTok{lmtelecom1 }\OtherTok{\textless{}{-}} \FunctionTok{lm}\NormalTok{( ytrain }\SpecialCharTok{\textasciitilde{}}\NormalTok{ xtrain)} \FunctionTok{plot}\NormalTok{(xtrain, ytrain)} \FunctionTok{abline}\NormalTok{(lmtelecom1, }\AttributeTok{lwd=}\DecValTok{2}\NormalTok{, }\AttributeTok{col=}\StringTok{"blue"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-94-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{b0\_tel1 }\OtherTok{\textless{}{-}}\NormalTok{ lmtelecom1}\SpecialCharTok{$}\NormalTok{coefficients[}\DecValTok{1}\NormalTok{]} \NormalTok{b1\_tel1 }\OtherTok{\textless{}{-}}\NormalTok{ lmtelecom1}\SpecialCharTok{$}\NormalTok{coefficients[}\DecValTok{2}\NormalTok{]} \CommentTok{\# calculate residuals and predicted values} \NormalTok{res }\OtherTok{\textless{}{-}} \FunctionTok{signif}\NormalTok{(}\FunctionTok{residuals}\NormalTok{(lmtelecom1), }\DecValTok{5}\NormalTok{)} \NormalTok{xtest }\OtherTok{\textless{}{-}}\NormalTok{ telecom1\_test}\SpecialCharTok{$}\NormalTok{size} \NormalTok{ytest }\OtherTok{\textless{}{-}}\NormalTok{ telecom1\_test}\SpecialCharTok{$}\NormalTok{effort} \CommentTok{\# pre\_tel1 \textless{}{-} exp(b0\_tel1+b1\_tel1*log(xtest))} \NormalTok{pre\_tel1 }\OtherTok{\textless{}{-}} \FunctionTok{exp}\NormalTok{(b0\_tel1)}\SpecialCharTok{*}\NormalTok{xtest}\SpecialCharTok{\^{}}\NormalTok{b1\_tel1} \CommentTok{\# plot distances between points and the regression line} 
\FunctionTok{plot}\NormalTok{(xtest, ytest)} \FunctionTok{curve}\NormalTok{(}\FunctionTok{exp}\NormalTok{(b0\_tel1}\SpecialCharTok{+}\NormalTok{b1\_tel1}\SpecialCharTok{*}\FunctionTok{log}\NormalTok{(x)), }\AttributeTok{from=}\DecValTok{0}\NormalTok{, }\AttributeTok{to=}\DecValTok{300}\NormalTok{, }\AttributeTok{add=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{col=}\StringTok{"blue"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{2}\NormalTok{)} \FunctionTok{segments}\NormalTok{(xtest, ytest, xtest, pre\_tel1, }\AttributeTok{col=}\StringTok{"red"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-94-2.pdf} \hypertarget{building-a-linear-model-on-the-telecom1-dataset-with-all-observations}{% \section{Building a Linear Model on the Telecom1 dataset with all observations}\label{building-a-linear-model-on-the-telecom1-dataset-with-all-observations}} \begin{itemize} \tightlist \item Just to visualize results \end{itemize} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mfrow=}\FunctionTok{c}\NormalTok{(}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))} \NormalTok{effort\_telecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1}\SpecialCharTok{$}\NormalTok{effort} \NormalTok{size\_telecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1}\SpecialCharTok{$}\NormalTok{size} \NormalTok{lmtelecom }\OtherTok{\textless{}{-}} \FunctionTok{lm}\NormalTok{(effort\_telecom1 }\SpecialCharTok{\textasciitilde{}}\NormalTok{ size\_telecom1)} \FunctionTok{plot}\NormalTok{(size\_telecom1, effort\_telecom1)} \FunctionTok{abline}\NormalTok{(lmtelecom, }\AttributeTok{lwd=}\DecValTok{3}\NormalTok{, }\AttributeTok{col=}\StringTok{"blue"}\NormalTok{)} \CommentTok{\# calculate residuals and predicted values} \NormalTok{res }\OtherTok{\textless{}{-}} \FunctionTok{signif}\NormalTok{(}\FunctionTok{residuals}\NormalTok{(lmtelecom), }\DecValTok{5}\NormalTok{) } \NormalTok{predicted }\OtherTok{\textless{}{-}} 
\FunctionTok{predict}\NormalTok{(lmtelecom)} \CommentTok{\# plot distances between points and the regression line} \FunctionTok{segments}\NormalTok{(size\_telecom1, effort\_telecom1, size\_telecom1, predicted, }\AttributeTok{col=}\StringTok{"red"}\NormalTok{)} \NormalTok{level\_pred }\OtherTok{\textless{}{-}} \FloatTok{0.25} \CommentTok{\#below and above (both)} \NormalTok{lowpred }\OtherTok{\textless{}{-}}\NormalTok{ effort\_telecom1}\SpecialCharTok{*}\NormalTok{(}\DecValTok{1}\SpecialCharTok{{-}}\NormalTok{level\_pred)} \NormalTok{uppred }\OtherTok{\textless{}{-}}\NormalTok{ effort\_telecom1}\SpecialCharTok{*}\NormalTok{(}\DecValTok{1}\SpecialCharTok{+}\NormalTok{level\_pred)} \NormalTok{predict\_inrange }\OtherTok{\textless{}{-}}\NormalTok{ predicted }\SpecialCharTok{\textless{}=}\NormalTok{ uppred }\SpecialCharTok{\&}\NormalTok{ predicted }\SpecialCharTok{\textgreater{}=}\NormalTok{ lowpred }\CommentTok{\#pred is a vector with logical values } \NormalTok{Lpred }\OtherTok{\textless{}{-}} \FunctionTok{sum}\NormalTok{(predict\_inrange)}\SpecialCharTok{/}\FunctionTok{length}\NormalTok{(predict\_inrange)} \NormalTok{Lpred} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.444 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#Visually plot lpred} \FunctionTok{segments}\NormalTok{(size\_telecom1, lowpred, size\_telecom1, uppred, }\AttributeTok{col=}\StringTok{"red"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{3}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-95-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{err\_telecom1 }\OtherTok{\textless{}{-}} \FunctionTok{abs}\NormalTok{(effort\_telecom1 }\SpecialCharTok{{-}}\NormalTok{ predicted)} \NormalTok{mar\_tel1 }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(err\_telecom1)} \NormalTok{mar\_tel1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 125 \end{verbatim} \hypertarget{standardised-accuracy-examples}{% 
\section{Standardised Accuracy Examples}\label{standardised-accuracy-examples}}

\hypertarget{standardised-accuracy-marp0-using-the-china-test-dataset}{%
\subsection{Standardised Accuracy MARP0 using the China Test dataset}\label{standardised-accuracy-marp0-using-the-china-test-dataset}}

\begin{itemize}
\tightlist
\item
  Computing \(MARP_0\) in the China Test data
\end{itemize}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{estimEffChinaTest }\OtherTok{\textless{}{-}}\NormalTok{ predEffort }\CommentTok{\# This will be overwritten, no problem}
\NormalTok{numruns }\OtherTok{\textless{}{-}} \DecValTok{9999}
\NormalTok{randguessruns }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\DecValTok{0}\NormalTok{, numruns)}
\ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{numruns) \{ }
  \ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\FunctionTok{length}\NormalTok{(estimEffChinaTest)) \{}
\NormalTok{    estimEffChinaTest[j] }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(actualEffort[}\SpecialCharTok{{-}}\NormalTok{j],}\DecValTok{1}\NormalTok{)\}}\CommentTok{\#replacement with random guessing}
\NormalTok{  randguessruns[i] }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(}\FunctionTok{abs}\NormalTok{(estimEffChinaTest}\SpecialCharTok{{-}}\NormalTok{actualEffort))}
\NormalTok{  \} }
\NormalTok{marp0Chinatest }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(randguessruns)}
\NormalTok{marp0Chinatest}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] 3949
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{hist}\NormalTok{(randguessruns, }\AttributeTok{main=}\StringTok{"MARP0 distribution of the China dataset"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\includegraphics{DASE_files/figure-latex/unnamed-chunk-96-1.pdf}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{saChina }\OtherTok{=}\NormalTok{ 
(}\DecValTok{1}\SpecialCharTok{{-}}\NormalTok{ mar}\SpecialCharTok{/}\NormalTok{marp0Chinatest)}\SpecialCharTok{*}\DecValTok{100} \NormalTok{saChina} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 52.7 \end{verbatim} \hypertarget{standardised-accuracy.-marp0-using-the-telecom1-dataset}{% \subsection{Standardised Accuracy. MARP0 using the Telecom1 dataset}\label{standardised-accuracy.-marp0-using-the-telecom1-dataset}} \begin{itemize} \tightlist \item Computing \(MARP_0\) \end{itemize} \begin{Shaded} \begin{Highlighting}[] \NormalTok{telecom1 }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\StringTok{"./datasets/effortEstimation/Telecom1.csv"}\NormalTok{, }\AttributeTok{sep=}\StringTok{","}\NormalTok{,}\AttributeTok{header=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{stringsAsFactors=}\ConstantTok{FALSE}\NormalTok{, }\AttributeTok{dec =} \StringTok{"."}\NormalTok{) }\CommentTok{\#read data} \CommentTok{\#par(mfrow=c(1,2))} \CommentTok{\#size \textless{}{-} telecom1[1]$size not needed now} \NormalTok{actualEffTelecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1[}\DecValTok{2}\NormalTok{]}\SpecialCharTok{$}\NormalTok{effort} \NormalTok{estimEffTelecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1[}\DecValTok{3}\NormalTok{]}\SpecialCharTok{$}\NormalTok{EstTotal }\CommentTok{\# this will be overwritten} \NormalTok{numruns }\OtherTok{\textless{}{-}} \DecValTok{9999} \NormalTok{randguessruns }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\DecValTok{0}\NormalTok{, numruns)} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{numruns) \{ } \ControlFlowTok{for}\NormalTok{ (j }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\FunctionTok{length}\NormalTok{(estimEffTelecom1)) \{} \NormalTok{ estimEffTelecom1[j] }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(actualEffTelecom1[}\SpecialCharTok{{-}}\NormalTok{j],}\DecValTok{1}\NormalTok{)\}}\CommentTok{\#replacement with random 
guessing} \NormalTok{  randguessruns[i] }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(}\FunctionTok{abs}\NormalTok{(estimEffTelecom1}\SpecialCharTok{{-}}\NormalTok{actualEffTelecom1))} \NormalTok{  \} } \NormalTok{marp0telecom1 }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(randguessruns)} \NormalTok{marp0telecom1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 271 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{hist}\NormalTok{(randguessruns, }\AttributeTok{main=}\StringTok{"MARP0 distribution of the Telecom1 dataset"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-97-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{saTelecom1 }\OtherTok{\textless{}{-}}\NormalTok{ (}\DecValTok{1}\SpecialCharTok{{-}}\NormalTok{ mar\_tel1}\SpecialCharTok{/}\NormalTok{marp0telecom1)}\SpecialCharTok{*}\DecValTok{100} \NormalTok{saTelecom1} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 53.9 \end{verbatim} \hypertarget{standard-accuracy-marp0-using-the-atkinson-dataset}{% \subsection{Standardised Accuracy MARP0 using the Atkinson Dataset}\label{standard-accuracy-marp0-using-the-atkinson-dataset}} \begin{itemize} \tightlist \item To check the results, you may compare against the Atkinson figure in Shepperd \& MacDonell \end{itemize} \begin{verbatim} ## [1] 281 \end{verbatim} \includegraphics{DASE_files/figure-latex/unnamed-chunk-98-1.pdf} \hypertarget{exact-marp0}{% \section{Exact MARP0}\label{exact-marp0}} Langdon et al.~\citeyearpar{Langdon2016} provide a solution to calculate Shepperd and MacDonell's \(MAR_{P_0}\) exactly. An R code implementation is as follows.
\begin{Shaded} \begin{Highlighting}[] \CommentTok{\#example dataset} \NormalTok{atkinson\_actual\_effort }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{670}\NormalTok{,}\DecValTok{912}\NormalTok{,}\DecValTok{218}\NormalTok{,}\DecValTok{595}\NormalTok{,}\DecValTok{267}\NormalTok{,}\DecValTok{344}\NormalTok{,}\DecValTok{229}\NormalTok{,}\DecValTok{190}\NormalTok{,}\DecValTok{869}\NormalTok{,}\DecValTok{109}\NormalTok{,}\DecValTok{289}\NormalTok{,}\DecValTok{616}\NormalTok{,}\DecValTok{557}\NormalTok{,}\DecValTok{416}\NormalTok{,}\DecValTok{578}\NormalTok{,}\DecValTok{438}\NormalTok{)} \NormalTok{myabs }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(x,y) }\FunctionTok{abs}\NormalTok{(x}\SpecialCharTok{{-}}\NormalTok{y)} \CommentTok{\#diffs is square array whose i,jth element = abs(actual\_i {-} actual\_j)} \CommentTok{\#in practice this is good enough but could be made more efficient by not} \CommentTok{\#explicitly storing the matrix and only using the values below the diagonal.} \NormalTok{diffs }\OtherTok{\textless{}{-}} \FunctionTok{outer}\NormalTok{(atkinson\_actual\_effort,atkinson\_actual\_effort,myabs)} \NormalTok{marp0 }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(diffs)} \NormalTok{marp0} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 264 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\#\# same procedure without using the outer function} \NormalTok{act\_effort }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{670}\NormalTok{,}\DecValTok{912}\NormalTok{,}\DecValTok{218}\NormalTok{,}\DecValTok{595}\NormalTok{,}\DecValTok{267}\NormalTok{,}\DecValTok{344}\NormalTok{,}\DecValTok{229}\NormalTok{,}\DecValTok{190}\NormalTok{,}\DecValTok{869}\NormalTok{,}\DecValTok{109}\NormalTok{,}\DecValTok{289}\NormalTok{,}\DecValTok{616}\NormalTok{,}\DecValTok{557}\NormalTok{,}\DecValTok{416}\NormalTok{,}\DecValTok{578}\NormalTok{,}\DecValTok{438}\NormalTok{)} \NormalTok{n 
}\OtherTok{\textless{}{-}} \FunctionTok{length}\NormalTok{(act\_effort)} \NormalTok{diffs\_guess }\OtherTok{\textless{}{-}} \FunctionTok{matrix}\NormalTok{(}\AttributeTok{nrow=}\NormalTok{n, }\AttributeTok{ncol=}\NormalTok{n)} \FunctionTok{colnames}\NormalTok{(diffs\_guess) }\OtherTok{\textless{}{-}}\NormalTok{ act\_effort} \FunctionTok{rownames}\NormalTok{(diffs\_guess) }\OtherTok{\textless{}{-}}\NormalTok{ act\_effort } \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{1}\SpecialCharTok{:}\NormalTok{n)\{} \NormalTok{  diffs\_guess[i,] }\OtherTok{\textless{}{-}}\NormalTok{ act\_effort }\SpecialCharTok{{-}}\NormalTok{ act\_effort[i]} \NormalTok{\}} \NormalTok{diffs\_guess }\OtherTok{\textless{}{-}} \FunctionTok{abs}\NormalTok{(diffs\_guess)} \NormalTok{means\_per\_point }\OtherTok{\textless{}{-}} \FunctionTok{apply}\NormalTok{(diffs\_guess, }\DecValTok{2}\NormalTok{, mean)} \NormalTok{marp0 }\OtherTok{\textless{}{-}} \FunctionTok{mean}\NormalTok{(means\_per\_point)} \NormalTok{marp0} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 264 \end{verbatim} \hypertarget{computing-the-bootstraped-confidence-interval-of-the-mean-for-the-test-observations-of-the-china-dataset}{% \section{Computing the bootstrapped confidence interval of the mean for the Test observations of the China dataset}\label{computing-the-bootstraped-confidence-interval-of-the-mean-for-the-test-observations-of-the-china-dataset}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(boot)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Attaching package: 'boot' \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:survival': ## ##     aml \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:lattice': ## ##     melanoma \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:sm': ## ##     dogs \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{hist}\NormalTok{(ae,
}\AttributeTok{main=}\StringTok{"Absolute Errors of the China Test data"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-101-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{level\_confidence }\OtherTok{\textless{}{-}} \FloatTok{0.95} \NormalTok{repetitionsboot }\OtherTok{\textless{}{-}} \DecValTok{9999} \NormalTok{samplemean }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(x, d)\{}\FunctionTok{return}\NormalTok{(}\FunctionTok{mean}\NormalTok{(x[d]))\}} \NormalTok{b\_mean }\OtherTok{\textless{}{-}} \FunctionTok{boot}\NormalTok{(ae, samplemean, }\AttributeTok{R=}\NormalTok{repetitionsboot)} \NormalTok{confint\_mean\_China }\OtherTok{\textless{}{-}} \FunctionTok{boot.ci}\NormalTok{(b\_mean)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Warning in boot.ci(b_mean): bootstrap variances needed for studentized intervals \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{confint\_mean\_China} \end{Highlighting} \end{Shaded} \begin{verbatim} ## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS ## Based on 9999 bootstrap replicates ## ## CALL : ## boot.ci(boot.out = b_mean) ## ## Intervals : ## Level      Normal              Basic ## 95%   (1420, 2316 )   (1386, 2284 ) ## ## Level     Percentile            BCa ## 95%   (1450, 2348 )   (1496, 2419 ) ## Calculations and Intervals on Original Scale \end{verbatim} \begin{itemize} \tightlist \item Computing the bootstrapped geometric mean \end{itemize} \begin{Shaded} \begin{Highlighting}[] \NormalTok{boot\_geom\_mean }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(error\_vec)\{} \NormalTok{  log\_error }\OtherTok{\textless{}{-}} \FunctionTok{log}\NormalTok{(error\_vec[error\_vec }\SpecialCharTok{\textgreater{}} \DecValTok{0}\NormalTok{])} \NormalTok{  log\_error }\OtherTok{\textless{}{-}}\NormalTok{log\_error[}\FunctionTok{is.finite}\NormalTok{(log\_error)] }\CommentTok{\#remove the {-}Inf value before calculating the mean, just in case} \NormalTok{  samplemean
}\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(x, d)\{}\FunctionTok{return}\NormalTok{(}\FunctionTok{mean}\NormalTok{(x[d]))\}} \NormalTok{ b }\OtherTok{\textless{}{-}} \FunctionTok{boot}\NormalTok{(log\_error, samplemean, }\AttributeTok{R=}\NormalTok{repetitionsboot) }\CommentTok{\# with package boot} \CommentTok{\# this is a boot for the logs} \FunctionTok{return}\NormalTok{(b)} \NormalTok{\}} \CommentTok{\# BCAconfidence interval for the geometric mean} \NormalTok{BCAciboot4geommean }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(b)\{ } \NormalTok{ conf\_int }\OtherTok{\textless{}{-}} \FunctionTok{boot.ci}\NormalTok{(b, }\AttributeTok{conf=}\NormalTok{level\_confidence, }\AttributeTok{type=}\StringTok{"bca"}\NormalTok{)}\SpecialCharTok{$}\NormalTok{bca }\CommentTok{\#following 10.9 of Ugarte et al.\textquotesingle{}s book} \NormalTok{ conf\_int[}\DecValTok{5}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{exp}\NormalTok{(conf\_int[}\DecValTok{5}\NormalTok{]) }\CommentTok{\# the boot was computed with log. 
Now take the measure back to its previous units} \NormalTok{ conf\_int[}\DecValTok{4}\NormalTok{] }\OtherTok{\textless{}{-}} \FunctionTok{exp}\NormalTok{(conf\_int[}\DecValTok{4}\NormalTok{])} \FunctionTok{return}\NormalTok{ (conf\_int)} \NormalTok{\}} \CommentTok{\# this is a boot object} \NormalTok{b\_gm }\OtherTok{\textless{}{-}} \FunctionTok{boot\_geom\_mean}\NormalTok{(ae) }\CommentTok{\#"ae" is the absolute error in the China Test data} \FunctionTok{print}\NormalTok{(}\FunctionTok{paste0}\NormalTok{(}\StringTok{"Geometric Mean of the China Test data: "}\NormalTok{, }\FunctionTok{round}\NormalTok{(}\FunctionTok{exp}\NormalTok{(b\_gm}\SpecialCharTok{$}\NormalTok{t0), }\AttributeTok{digits=}\DecValTok{3}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Geometric Mean of the China Test data: 832.55" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{b\_ci\_gm }\OtherTok{\textless{}{-}} \FunctionTok{BCAciboot4geommean}\NormalTok{(b\_gm)} \FunctionTok{print}\NormalTok{(}\FunctionTok{paste0}\NormalTok{(}\StringTok{"Confidence Interval: "}\NormalTok{, }\FunctionTok{round}\NormalTok{(b\_ci\_gm[}\DecValTok{4}\NormalTok{], }\AttributeTok{digits=}\DecValTok{3}\NormalTok{), }\StringTok{" {-} "}\NormalTok{, }\FunctionTok{round}\NormalTok{(b\_ci\_gm[}\DecValTok{5}\NormalTok{], }\AttributeTok{digits=}\DecValTok{3}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "Confidence Interval: 679.439 - 1016.691" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Make a \% confidence interval bca} \CommentTok{\# BCAciboot \textless{}{-} function(b)\{ } \CommentTok{\# conf\_int \textless{}{-} boot.ci(b, conf=level\_confidence, type="bca")$bca \#following 10.9 of Ugarte et al.\textquotesingle{}s book} \CommentTok{\# return (conf\_int)} \CommentTok{\# \}} \end{Highlighting} \end{Shaded} \hypertarget{defect-prediction-evaluation-metrics}{% \section{Defect prediction evaluation 
metrics}\label{defect-prediction-evaluation-metrics}} In addition to the standard machine learning metrics for classification, Jiang \emph{et al.} provide a survey of evaluation measures for defect prediction \citep{Jiang2008}. \hypertarget{part-advanced-topics}{% \part{Advanced Topics}\label{part-advanced-topics}} \hypertarget{feature-selection}{% \chapter{Feature Selection}\label{feature-selection}} This technique consists of selecting the most relevant attributes. The reasons for applying Feature Selection (FS) include the following: \begin{itemize} \item A reduced volume of data allows different data mining or searching techniques to be applied. \item Irrelevant and redundant attributes can produce less accurate and more complex models; removing them also allows data mining algorithms to be executed faster. \item It becomes possible to avoid collecting data for the irrelevant and redundant attributes in the future. \end{itemize} FS algorithms designed with different evaluation criteria broadly fall into two categories {[}21, 14, 13, 3, 1, 12, 5{]}: \begin{itemize} \item The \emph{filter model} relies on general characteristics of the data to evaluate and select feature subsets without involving any data mining algorithm. \item The \emph{wrapper model} requires one predetermined mining algorithm and uses its performance as the evaluation criterion. It searches for features better suited to the mining algorithm, aiming to improve mining performance, but it also tends to be more computationally expensive than the filter model {[}11, 12{]}.
\end{itemize} \hypertarget{instance-selection}{% \section{Instance Selection}\label{instance-selection}} \hypertarget{missing-data-imputation}{% \section{Missing Data Imputation}\label{missing-data-imputation}} \hypertarget{feature-selection-example}{% \chapter{Feature Selection Example}\label{feature-selection-example}} Feature Selection in R and Caret \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(caret)} \FunctionTok{library}\NormalTok{(doParallel) }\CommentTok{\# parallel processing} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Loading required package: foreach \end{verbatim} \begin{verbatim} ## Loading required package: iterators \end{verbatim} \begin{verbatim} ## Loading required package: parallel \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(dplyr) }\CommentTok{\# Used by caret} \FunctionTok{library}\NormalTok{(pROC) }\CommentTok{\# plot the ROC curve} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Type 'citation("pROC")' for a citation. 
\end{verbatim} \begin{verbatim} ## ## Attaching package: 'pROC' \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:stats': ## ##     cov, smooth, var \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \DocumentationTok{\#\#\# Use the KC1 defect dataset} \CommentTok{\# Load the data and construct indices to divide it into training and test data sets.} \CommentTok{\#set.seed(10)} \NormalTok{kc1 }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/defectPred/D1/KC1.arff"}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \NormalTok{ inTrain }\OtherTok{\textless{}{-}} \FunctionTok{createDataPartition}\NormalTok{(}\AttributeTok{y =}\NormalTok{ kc1}\SpecialCharTok{$}\NormalTok{Defective,} \DocumentationTok{\#\# the outcome data are needed} \AttributeTok{p =}\NormalTok{ .}\DecValTok{75}\NormalTok{,} \DocumentationTok{\#\# The percentage of data in the} \DocumentationTok{\#\# training set} \AttributeTok{list =} \ConstantTok{FALSE}\NormalTok{)} \end{Highlighting} \end{Shaded} The function \texttt{createDataPartition} creates stratified partitions.
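Since the partition is stratified, the proportion of defective modules should be roughly the same in the full dataset and in the training subset. A quick sanity check (a sketch using the objects defined above; not part of the original analysis):

\begin{verbatim}
# Class proportions should be similar in the full data and the training split
prop.table(table(kc1$Defective))           # whole dataset
prop.table(table(kc1$Defective[inTrain]))  # rows selected by createDataPartition
\end{verbatim}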
\begin{Shaded} \begin{Highlighting}[] \NormalTok{training }\OtherTok{\textless{}{-}}\NormalTok{ kc1[inTrain,]} \FunctionTok{nrow}\NormalTok{(training)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 1573 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{testing }\OtherTok{\textless{}{-}}\NormalTok{ kc1[}\SpecialCharTok{{-}}\NormalTok{inTrain, ]} \FunctionTok{nrow}\NormalTok{(testing)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 523 \end{verbatim} The \texttt{train} function can be used to:

\begin{itemize}
\tightlist
\item evaluate, using resampling, the effect of model tuning parameters on performance
\item choose the ``optimal'' model across these parameters
\item estimate model performance from a training set
\end{itemize}

\begin{Shaded} \begin{Highlighting}[] \NormalTok{fitControl }\OtherTok{\textless{}{-}} \FunctionTok{trainControl}\NormalTok{(}\DocumentationTok{\#\# 10{-}fold CV} \AttributeTok{method =} \StringTok{"repeatedcv"}\NormalTok{,} \AttributeTok{number =} \DecValTok{10}\NormalTok{,} \DocumentationTok{\#\# repeated ten times} \AttributeTok{repeats =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded}

An analogous call for a boosted tree model (not run here):

\begin{verbatim}
gbmFit1 <- train(Defective ~ ., data = training,
                 method = "gbm",
                 trControl = fitControl,
                 ## This last option is actually one
                 ## for gbm() that passes through
                 verbose = FALSE)
gbmFit1
\end{verbatim}

\begin{Shaded} \begin{Highlighting}[] \NormalTok{plsFit }\OtherTok{\textless{}{-}} \FunctionTok{train}\NormalTok{(Defective }\SpecialCharTok{\textasciitilde{}}\NormalTok{ .,} \AttributeTok{data =}\NormalTok{ training,} \AttributeTok{method =} \StringTok{"pls"}\NormalTok{,} \DocumentationTok{\#\# Center and scale the predictors for the training} \DocumentationTok{\#\# set and all future samples.} \AttributeTok{preProc =} \FunctionTok{c}\NormalTok{(}\StringTok{"center"}\NormalTok{, }\StringTok{"scale"}\NormalTok{)} \NormalTok{)} \end{Highlighting} \end{Shaded}

Still to be fixed: evaluating the fitted model on the test set (not run):

\begin{verbatim}
testPred <- predict(plsFit, testing)
postResample(testPred, testing$Defective)
sensitivity(testPred, testing$Defective)
confusionMatrix(testPred, testing$Defective)
\end{verbatim}

When there are three or more classes, \texttt{confusionMatrix} will show the confusion matrix and a set of ``one-versus-all'' results. \hypertarget{advanced-models}{% \chapter{Advanced Models}\label{advanced-models}} \hypertarget{genetic-programming-for-symbolic-regression}{% \section{Genetic Programming for Symbolic Regression}\label{genetic-programming-for-symbolic-regression}} This technique is inspired by Darwin's theory of evolution:

\begin{itemize}
\tightlist
\item 1960s: I. Rechenberg introduces evolution strategies in his work ``Evolution Strategies''
\item 1975: Genetic Algorithms (GAs) invented by J. Holland and published in his book ``Adaptation in Natural and Artificial Systems''
\item 1992: J. Koza uses genetic algorithms to evolve programs to perform certain tasks, a method he calls ``genetic programming''
\end{itemize}

Another reference for GP: Langdon WB, Poli R (2001) \emph{Foundations of Genetic Programming}. Springer. \includegraphics{figures/gpEvolution.png} \begin{itemize} \tightlist \item Depending on the function set used and the function to be minimised, GP can generate almost any type of curve \end{itemize} \includegraphics{figures/gp1.png} \includegraphics{figures/gp2.png} \begin{figure} \centering \includegraphics{figures/evoAlg.png} \caption{Evolutionary Algorithms} \end{figure} In R, we can use the \texttt{rgp} package. \hypertarget{genetic-programming-example}{% \section{Genetic Programming Example}\label{genetic-programming-example}} \hypertarget{load-data}{% \subsection{Load Data}\label{load-data}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \CommentTok{\#read data} \NormalTok{telecom1 }\OtherTok{\textless{}{-}} \FunctionTok{read.table}\NormalTok{(}\StringTok{"./datasets/effortEstimation/Telecom1.csv"}\NormalTok{, }\AttributeTok{sep=}\StringTok{","}\NormalTok{,}\AttributeTok{header=}\ConstantTok{TRUE}\NormalTok{, }\AttributeTok{stringsAsFactors=}\ConstantTok{FALSE}\NormalTok{,
}\AttributeTok{dec =} \StringTok{"."}\NormalTok{) } \NormalTok{size\_telecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1}\SpecialCharTok{$}\NormalTok{size} \NormalTok{effort\_telecom1 }\OtherTok{\textless{}{-}}\NormalTok{ telecom1}\SpecialCharTok{$}\NormalTok{effort} \NormalTok{chinaTrain }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/effortEstimation/china3AttSelectedAFPTrain.arff"}\NormalTok{)} \NormalTok{china\_train\_size }\OtherTok{\textless{}{-}}\NormalTok{ chinaTrain}\SpecialCharTok{$}\NormalTok{AFP } \NormalTok{china\_train\_effort }\OtherTok{\textless{}{-}}\NormalTok{ chinaTrain}\SpecialCharTok{$}\NormalTok{Effort} \NormalTok{chinaTest }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/effortEstimation/china3AttSelectedAFPTest.arff"}\NormalTok{)} \NormalTok{china\_size\_test }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{AFP} \NormalTok{actualEffort }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{Effort} \end{Highlighting} \end{Shaded} \hypertarget{genetic-programming-for-symbolic-regression-china-dataset.}{% \subsection{Genetic Programming for Symbolic Regression: China dataset.}\label{genetic-programming-for-symbolic-regression-china-dataset.}} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(}\StringTok{"rgp"}\NormalTok{)} \FunctionTok{options}\NormalTok{(}\AttributeTok{digits =} \DecValTok{5}\NormalTok{)} \NormalTok{stepsGenerations }\OtherTok{\textless{}{-}} \DecValTok{100} \NormalTok{initialPopulation }\OtherTok{\textless{}{-}} \DecValTok{100} \NormalTok{Steps }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\DecValTok{10}\NormalTok{)} \NormalTok{y }\OtherTok{\textless{}{-}}\NormalTok{ china\_train\_effort }\CommentTok{\#} \NormalTok{x }\OtherTok{\textless{}{-}}\NormalTok{ china\_train\_size }\CommentTok{\# } \NormalTok{data2 }\OtherTok{\textless{}{-}} 
\FunctionTok{data.frame}\NormalTok{(y, x) }\CommentTok{\# create a data frame with effort, size} \CommentTok{\# newFuncSet \textless{}{-} mathFunctionSet} \CommentTok{\# alternatives to mathFunctionSet} \CommentTok{\# newFuncSet \textless{}{-} expLogFunctionSet \# sqrt", "exp", and "ln"} \CommentTok{\# newFuncSet \textless{}{-} trigonometricFunctionSet} \CommentTok{\# newFuncSet \textless{}{-} arithmeticFunctionSet} \NormalTok{newFuncSet }\OtherTok{\textless{}{-}} \FunctionTok{functionSet}\NormalTok{(}\StringTok{"+"}\NormalTok{,}\StringTok{"{-}"}\NormalTok{,}\StringTok{"*"}\NormalTok{, }\StringTok{"/"}\NormalTok{,}\StringTok{"sqrt"}\NormalTok{, }\StringTok{"log"}\NormalTok{, }\StringTok{"exp"}\NormalTok{) }\CommentTok{\# ,, )} \NormalTok{gpresult }\OtherTok{\textless{}{-}} \FunctionTok{symbolicRegression}\NormalTok{(y }\SpecialCharTok{\textasciitilde{}}\NormalTok{ x, } \AttributeTok{data=}\NormalTok{data2, }\AttributeTok{functionSet=}\NormalTok{newFuncSet,} \AttributeTok{populationSize=}\NormalTok{initialPopulation,} \AttributeTok{stopCondition=}\FunctionTok{makeStepsStopCondition}\NormalTok{(stepsGenerations))} \NormalTok{bf }\OtherTok{\textless{}{-}}\NormalTok{ gpresult}\SpecialCharTok{$}\NormalTok{population[[}\FunctionTok{which.min}\NormalTok{(}\FunctionTok{sapply}\NormalTok{(gpresult}\SpecialCharTok{$}\NormalTok{population, gpresult}\SpecialCharTok{$}\NormalTok{fitnessFunction))]]} \NormalTok{wf }\OtherTok{\textless{}{-}}\NormalTok{ gpresult}\SpecialCharTok{$}\NormalTok{population[[}\FunctionTok{which.max}\NormalTok{(}\FunctionTok{sapply}\NormalTok{(gpresult}\SpecialCharTok{$}\NormalTok{population, gpresult}\SpecialCharTok{$}\NormalTok{fitnessFunction))]]} \NormalTok{bf1 }\OtherTok{\textless{}{-}}\NormalTok{ gpresult}\SpecialCharTok{$}\NormalTok{population[[}\FunctionTok{which.min}\NormalTok{((gpresult}\SpecialCharTok{$}\NormalTok{fitnessValues))]]} \FunctionTok{plot}\NormalTok{(x,y)} \FunctionTok{lines}\NormalTok{(x, }\FunctionTok{bf}\NormalTok{(x), 
}\AttributeTok{type =} \StringTok{"l"}\NormalTok{, }\AttributeTok{col=}\StringTok{"blue"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{3}\NormalTok{)} \FunctionTok{lines}\NormalTok{(x,}\FunctionTok{wf}\NormalTok{(x), }\AttributeTok{type =} \StringTok{"l"}\NormalTok{, }\AttributeTok{col=}\StringTok{"red"}\NormalTok{, }\AttributeTok{lwd=}\DecValTok{2}\NormalTok{)} \NormalTok{x\_test }\OtherTok{\textless{}{-}}\NormalTok{ china\_size\_test} \NormalTok{estim\_by\_gp }\OtherTok{\textless{}{-}} \FunctionTok{bf}\NormalTok{(x\_test)} \NormalTok{ae\_gp }\OtherTok{\textless{}{-}} \FunctionTok{abs}\NormalTok{(actualEffort }\SpecialCharTok{{-}}\NormalTok{ estim\_by\_gp)} \FunctionTok{mean}\NormalTok{(ae\_gp)} \end{Highlighting} \end{Shaded} \hypertarget{genetic-programming-for-symbolic-regression.-telecom1-dataset.}{% \subsection{Genetic Programming for Symbolic Regression. Telecom1 dataset.}\label{genetic-programming-for-symbolic-regression.-telecom1-dataset.}} \begin{itemize} \tightlist \item For illustration purposes only. We use all data points. 
\end{itemize} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# y \textless{}{-} effort\_telecom1 \# all data points} \CommentTok{\# x \textless{}{-} size\_telecom1 \# } \CommentTok{\# } \CommentTok{\# data2 \textless{}{-} data.frame(y, x) \# create a data frame with effort, size} \CommentTok{\# \# newFuncSet \textless{}{-} mathFunctionSet} \CommentTok{\# \# alternatives to mathFunctionSet} \CommentTok{\# newFuncSet \textless{}{-} expLogFunctionSet \# sqrt", "exp", and "ln"} \CommentTok{\# \# newFuncSet \textless{}{-} trigonometricFunctionSet} \CommentTok{\# \# newFuncSet \textless{}{-} arithmeticFunctionSet} \CommentTok{\# \# newFuncSet \textless{}{-} functionSet("+","{-}","*", "/","sqrt", "log", "exp") \# ,, )} \CommentTok{\# } \CommentTok{\# gpresult \textless{}{-} symbolicRegression(y \textasciitilde{} x, } \CommentTok{\# data=data2, functionSet=newFuncSet,} \CommentTok{\# populationSize=initialPopulation,} \CommentTok{\# stopCondition=makeStepsStopCondition(stepsGenerations))} \CommentTok{\# } \CommentTok{\# bf \textless{}{-} gpresult$population[[which.min(sapply(gpresult$population, gpresult$fitnessFunction))]]} \CommentTok{\# wf \textless{}{-} gpresult$population[[which.max(sapply(gpresult$population, gpresult$fitnessFunction))]]} \CommentTok{\# } \CommentTok{\# bf1 \textless{}{-} gpresult$population[[which.min((gpresult$fitnessValues))]]} \CommentTok{\# plot(x,y)} \CommentTok{\# lines(x, bf(x), type = "l", col="blue", lwd=3)} \CommentTok{\# lines(x,wf(x), type = "l", col="red", lwd=2)} \end{Highlighting} \end{Shaded} \hypertarget{neural-networks}{% \section{Neural Networks}\label{neural-networks}} A neural network (NN) simulates some of the learning functions of the human brain. It can recognize patterns and ``learn'' . Through the use of a trial and error method the system ``learns'' to become an ``expert'' in the field. 
A NN is composed of a set of nodes (units, neurons, processing elements):

\begin{itemize}
\tightlist
\item Each node has inputs and an output
\item Each node performs a simple computation defined by its node function
\end{itemize}

Nodes are linked by weighted connections:

\begin{itemize}
\tightlist
\item The connectivity gives the structure/architecture of the net
\item What can be computed by a NN is primarily determined by the connections and their weights
\end{itemize}

\includegraphics{figures/neuralnet.png} \includegraphics{figures/neuralnet2.png} There are several packages in R to work with NNs:

\begin{itemize}
\tightlist
\item \href{https://cran.r-project.org/web/packages/neuralnet/index.html}{neuralnet}
\item \href{https://cran.r-project.org/web/packages/nnet/index.html}{nnet}
\item \href{https://cran.r-project.org/web/packages/RSNNS/index.html}{RSNNS}
\end{itemize}

TO BE FIXED!!!: The following is an example with the \texttt{neuralnet} package (TO DO: denormalise the predictions). Neural nets need scaling of variables to work properly. \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \FunctionTok{library}\NormalTok{(neuralnet)} \NormalTok{chinaTrain }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"datasets/effortEstimation/china3AttSelectedAFPTrain.arff"}\NormalTok{)} \NormalTok{afpsize }\OtherTok{\textless{}{-}}\NormalTok{ chinaTrain}\SpecialCharTok{$}\NormalTok{AFP} \NormalTok{effort\_china }\OtherTok{\textless{}{-}}\NormalTok{ chinaTrain}\SpecialCharTok{$}\NormalTok{Effort} \NormalTok{chinaTest }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"datasets/effortEstimation/china3AttSelectedAFPTest.arff"}\NormalTok{)} \NormalTok{AFPTest }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{AFP} \NormalTok{actualEffort }\OtherTok{\textless{}{-}}\NormalTok{ chinaTest}\SpecialCharTok{$}\NormalTok{Effort} \NormalTok{trainingdata }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(afpsize,effort\_china)} \FunctionTok{colnames}\NormalTok{(trainingdata) }\OtherTok{\textless{}{-}}
\FunctionTok{c}\NormalTok{(}\StringTok{"Input"}\NormalTok{,}\StringTok{"Output"}\NormalTok{)} \NormalTok{testingdata }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(AFPTest,actualEffort)} \FunctionTok{colnames}\NormalTok{(testingdata) }\OtherTok{\textless{}{-}} \FunctionTok{c}\NormalTok{(}\StringTok{"Input"}\NormalTok{,}\StringTok{"Output"}\NormalTok{)} \CommentTok{\#Normalize data} \NormalTok{norm.fun }\OtherTok{=} \ControlFlowTok{function}\NormalTok{(x)\{(x }\SpecialCharTok{{-}} \FunctionTok{min}\NormalTok{(x))}\SpecialCharTok{/}\NormalTok{(}\FunctionTok{max}\NormalTok{(x) }\SpecialCharTok{{-}} \FunctionTok{min}\NormalTok{(x))\}} \NormalTok{data.norm }\OtherTok{=} \FunctionTok{apply}\NormalTok{(trainingdata, }\DecValTok{2}\NormalTok{, norm.fun)} \CommentTok{\#data.norm} \NormalTok{testdata.norm }\OtherTok{\textless{}{-}} \FunctionTok{apply}\NormalTok{(testingdata, }\DecValTok{2}\NormalTok{, norm.fun)} \CommentTok{\#testdata.norm} \CommentTok{\#Train the neural network} \CommentTok{\#One hidden layer with 10 units} \CommentTok{\#Threshold is a numeric value specifying the threshold for the partial} \CommentTok{\#derivatives of the error function as stopping criteria.} \CommentTok{\#net\_eff \textless{}{-} neuralnet(Output\textasciitilde{}Input,trainingdata, hidden=5, threshold=0.25)} \NormalTok{net\_eff }\OtherTok{\textless{}{-}} \FunctionTok{neuralnet}\NormalTok{(Output}\SpecialCharTok{\textasciitilde{}}\NormalTok{Input, data.norm, }\AttributeTok{hidden=}\DecValTok{10}\NormalTok{, }\AttributeTok{threshold=}\FloatTok{0.01}\NormalTok{)} \CommentTok{\# Print the network} \CommentTok{\# print(net\_eff)} \CommentTok{\#Plot the neural network} \FunctionTok{plot}\NormalTok{(net\_eff)} \CommentTok{\#Test the neural network on some training data} \CommentTok{\#testdata.norm\textless{}{-}data.frame((testdata[,1] {-} min(data[, \textquotesingle{}displ\textquotesingle{}]))/(max(data[, \textquotesingle{}displ\textquotesingle{}]){-}min(data[,
\textquotesingle{}displ\textquotesingle{}]))/(max(data[, \textquotesingle{}displ\textquotesingle{}]){-}min(data[, \textquotesingle{}displ\textquotesingle{}])),(testdata[,2] {-} min(data[, \textquotesingle{}year\textquotesingle{}]))/(max(data[, \textquotesingle{}year\textquotesingle{}]){-}min(data[, \textquotesingle{}year\textquotesingle{}])),(testdata[,3] {-} min(data[, \textquotesingle{}cyl\textquotesingle{}]))/(max(data[, \textquotesingle{}cyl\textquotesingle{}]){-}min(data[, \textquotesingle{}cyl\textquotesingle{}])),(testdata[,4] {-} min(data[, \textquotesingle{}hwy\textquotesingle{}]))/(max(data[, \textquotesingle{}hwy\textquotesingle{}]){-}min(data[, \textquotesingle{}hwy\textquotesingle{}])))} \CommentTok{\# Run them through the neural network} \CommentTok{\# net.results \textless{}{-} compute(net\_eff, testdata.norm[,2]) } \CommentTok{\#net.results \textless{}{-} compute(net\_eff, dataTest.norm) \# With normalized data} \CommentTok{\#Lets see what properties net.sqrt has} \CommentTok{\#ls(net.results)} \CommentTok{\#Lets see the results} \CommentTok{\#print(net.results$net.result)} \CommentTok{\#Lets display a better version of the results} \CommentTok{\#cleanoutput \textless{}{-} cbind(testdata.norm[,2],actualEffort,} \CommentTok{\#                     as.data.frame(net.results$net.result))} \CommentTok{\#colnames(cleanoutput) \textless{}{-} c("Input","Expected Output","Neural Net Output")} \CommentTok{\#print(cleanoutput)} \end{Highlighting} \end{Shaded} \hypertarget{support-vector-machines}{% \section{Support Vector Machines}\label{support-vector-machines}} SVM \hypertarget{ensembles}{% \section{Ensembles}\label{ensembles}} Ensembles or meta-learners combine multiple models to obtain better predictions; i.e., this technique consists of combining single classifiers (sometimes also called weak classifiers). A problem with ensembles is that their models are difficult to interpret (they behave as black boxes) in comparison to decision trees or rules, which provide an explanation of their decision-making process.
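The core idea of combining models fitted on resampled data can be sketched in a few lines of R (a toy example for a regression setting; the function name and the use of \texttt{lm} as base learner are our own illustrative choices, not part of the original material):

\begin{verbatim}
# Toy ensemble sketch: fit a base learner on B bootstrap resamples
# and combine their predictions by averaging (illustrative only)
bagged_predict <- function(formula, data, newdata, B = 25) {
  preds <- replicate(B, {
    idx <- sample(nrow(data), replace = TRUE)  # bootstrap resample
    fit <- lm(formula, data = data[idx, ])     # base learner
    predict(fit, newdata)
  })
  rowMeans(preds)                              # combine the B predictions
}
\end{verbatim}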
They are typically classified as Bagging, Boosting and Stacking (Stacked generalization).

\hypertarget{bagging}{%
\subsection{Bagging}\label{bagging}}

Bagging (also known as Bootstrap aggregating) is an ensemble technique in which a base learner is applied to multiple datasets of equal size created from the original data using bootstrapping. Predictions are based on voting over the individual predictions. An advantage of bagging is that it does not require any modification to the learning algorithm and takes advantage of the instability of the base classifier to create diversity among the individual models, so that members of the ensemble perform well in different regions of the data. Bagging does not perform well with classifiers whose output is robust to perturbations of the data, such as nearest-neighbour (NN) classifiers.

\hypertarget{boosting}{%
\subsection{Boosting}\label{boosting}}

Boosting techniques generate multiple models that complement each other, inducing models that improve on regions of the data where previously induced models performed poorly. This is achieved by increasing the weights of wrongly classified instances, so that new learners focus on those instances. Finally, classification is based on a weighted vote among all members of the ensemble. In particular, AdaBoost.M1 {[}15{]} is a popular boosting algorithm for classification. The set of training examples is assigned equal weights at the beginning, and the weight of each instance is then either increased or decreased depending on whether the learner classified that instance incorrectly or not. The following iterations focus on those instances with higher weights. AdaBoost.M1 can be applied to any base learner.

\hypertarget{rotation-forests}{%
\subsection{Rotation Forests}\label{rotation-forests}}

Rotation Forests {[}40{]} combine randomly chosen subsets of attributes (random subspaces) and bagging approaches with principal components feature generation to construct an ensemble of decision trees.
Principal Component Analysis is used as a feature selection technique, combining subsets of attributes which are used with a bootstrapped subset of the training data by the base classifier.

\hypertarget{boosting-in-r}{%
\subsection{Boosting in R}\label{boosting-in-r}}

In R, there are three packages to deal with Boosting: the gbm, ada and mboost packages. The following is an example of gbm using the caret package.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# load libraries}
\FunctionTok{library}\NormalTok{(caret)}
\FunctionTok{library}\NormalTok{(pROC)}
\DocumentationTok{\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#}
\CommentTok{\# model it}
\DocumentationTok{\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#}
\CommentTok{\# Get names of caret supported models (just a few {-} head)}
\FunctionTok{head}\NormalTok{(}\FunctionTok{names}\NormalTok{(}\FunctionTok{getModelInfo}\NormalTok{()))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "ada" "AdaBag" "AdaBoost.M1" "adaboost" "amdai"
## [6] "ANFIS"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Show model info and find out what type of model it is}
\FunctionTok{getModelInfo}\NormalTok{()}\SpecialCharTok{$}\NormalTok{gbm}\SpecialCharTok{$}\NormalTok{tags}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Tree-Based Model" "Boosting"
## [3] "Ensemble Model" "Implicit Feature Selection"
## [5] "Accepts Case Weights"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{getModelInfo}\NormalTok{()}\SpecialCharTok{$}\NormalTok{gbm}\SpecialCharTok{$}\NormalTok{type}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Regression" "Classification"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(foreign)}
\FunctionTok{library}\NormalTok{(caret)}
\FunctionTok{library}\NormalTok{(pROC)}
\NormalTok{kc1 }\OtherTok{\textless{}{-}}
\FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/defectPred/D1/KC1.arff"}\NormalTok{)} \CommentTok{\# Split data into training and test datasets} \CommentTok{\# }\AlertTok{TODO}\CommentTok{: Improve this with createDataParticion from Caret} \FunctionTok{set.seed}\NormalTok{(}\DecValTok{1234}\NormalTok{)} \NormalTok{ind }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(}\DecValTok{2}\NormalTok{, }\FunctionTok{nrow}\NormalTok{(kc1), }\AttributeTok{replace =} \ConstantTok{TRUE}\NormalTok{, }\AttributeTok{prob =} \FunctionTok{c}\NormalTok{(}\FloatTok{0.7}\NormalTok{, }\FloatTok{0.3}\NormalTok{))} \NormalTok{kc1.train }\OtherTok{\textless{}{-}}\NormalTok{ kc1[ind}\SpecialCharTok{==}\DecValTok{1}\NormalTok{, ]} \NormalTok{kc1.test }\OtherTok{\textless{}{-}}\NormalTok{ kc1[ind}\SpecialCharTok{==}\DecValTok{2}\NormalTok{, ]} \CommentTok{\# create caret trainControl object to control the number of cross{-}validations performed} \NormalTok{objControl }\OtherTok{\textless{}{-}} \FunctionTok{trainControl}\NormalTok{(}\AttributeTok{method=}\StringTok{\textquotesingle{}cv\textquotesingle{}}\NormalTok{, }\AttributeTok{number=}\DecValTok{3}\NormalTok{, }\AttributeTok{returnResamp=}\StringTok{\textquotesingle{}none\textquotesingle{}}\NormalTok{, }\AttributeTok{summaryFunction =}\NormalTok{ twoClassSummary, }\AttributeTok{classProbs =} \ConstantTok{TRUE}\NormalTok{)} \CommentTok{\# run model} \NormalTok{objModel }\OtherTok{\textless{}{-}} \FunctionTok{train}\NormalTok{(Defective }\SpecialCharTok{\textasciitilde{}}\NormalTok{ .,} \AttributeTok{data =}\NormalTok{ kc1.train,} \AttributeTok{method =} \StringTok{\textquotesingle{}gbm\textquotesingle{}}\NormalTok{, } \AttributeTok{trControl =}\NormalTok{ objControl, } \AttributeTok{metric =} \StringTok{"ROC"} \CommentTok{\#,} \CommentTok{\#preProc = c("center", "scale")} \NormalTok{ )} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8395 -nan 0.1000 0.0121 ## 
2 0.8125 -nan 0.1000 0.0122 ## 3 0.7898 -nan 0.1000 0.0085 ## 4 0.7698 -nan 0.1000 0.0072 ## 5 0.7568 -nan 0.1000 0.0057 ## 6 0.7477 -nan 0.1000 0.0048 ## 7 0.7414 -nan 0.1000 0.0012 ## 8 0.7354 -nan 0.1000 0.0018 ## 9 0.7297 -nan 0.1000 0.0021 ## 10 0.7259 -nan 0.1000 0.0004 ## 20 0.6798 -nan 0.1000 0.0011 ## 40 0.6539 -nan 0.1000 0.0004 ## 60 0.6409 -nan 0.1000 -0.0008 ## 80 0.6314 -nan 0.1000 -0.0005 ## 100 0.6228 -nan 0.1000 -0.0008 ## 120 0.6129 -nan 0.1000 -0.0004 ## 140 0.6054 -nan 0.1000 -0.0002 ## 150 0.6027 -nan 0.1000 -0.0014 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8274 -nan 0.1000 0.0181 ## 2 0.7993 -nan 0.1000 0.0085 ## 3 0.7788 -nan 0.1000 0.0068 ## 4 0.7534 -nan 0.1000 0.0073 ## 5 0.7411 -nan 0.1000 0.0064 ## 6 0.7307 -nan 0.1000 0.0023 ## 7 0.7179 -nan 0.1000 0.0042 ## 8 0.7090 -nan 0.1000 0.0030 ## 9 0.7003 -nan 0.1000 0.0019 ## 10 0.6883 -nan 0.1000 0.0038 ## 20 0.6319 -nan 0.1000 0.0013 ## 40 0.5943 -nan 0.1000 -0.0005 ## 60 0.5686 -nan 0.1000 -0.0007 ## 80 0.5471 -nan 0.1000 -0.0004 ## 100 0.5339 -nan 0.1000 -0.0003 ## 120 0.5164 -nan 0.1000 -0.0007 ## 140 0.5031 -nan 0.1000 -0.0003 ## 150 0.4959 -nan 0.1000 -0.0004 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8276 -nan 0.1000 0.0191 ## 2 0.7982 -nan 0.1000 0.0080 ## 3 0.7726 -nan 0.1000 0.0088 ## 4 0.7516 -nan 0.1000 0.0060 ## 5 0.7286 -nan 0.1000 0.0102 ## 6 0.7114 -nan 0.1000 0.0076 ## 7 0.6994 -nan 0.1000 0.0029 ## 8 0.6901 -nan 0.1000 0.0032 ## 9 0.6769 -nan 0.1000 0.0052 ## 10 0.6693 -nan 0.1000 0.0009 ## 20 0.6233 -nan 0.1000 -0.0015 ## 40 0.5712 -nan 0.1000 -0.0007 ## 60 0.5373 -nan 0.1000 0.0009 ## 80 0.5079 -nan 0.1000 -0.0010 ## 100 0.4864 -nan 0.1000 -0.0008 ## 120 0.4654 -nan 0.1000 -0.0007 ## 140 0.4444 -nan 0.1000 -0.0009 ## 150 0.4324 -nan 0.1000 -0.0001 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8396 -nan 0.1000 0.0109 ## 2 0.8169 -nan 0.1000 0.0084 ## 3 0.7981 -nan 0.1000 0.0065 ## 4 0.7839 -nan 0.1000 0.0051 ## 5 
0.7743 -nan 0.1000 0.0037 ## 6 0.7665 -nan 0.1000 0.0025 ## 7 0.7540 -nan 0.1000 0.0050 ## 8 0.7452 -nan 0.1000 0.0045 ## 9 0.7393 -nan 0.1000 0.0024 ## 10 0.7329 -nan 0.1000 0.0021 ## 20 0.6974 -nan 0.1000 0.0011 ## 40 0.6736 -nan 0.1000 -0.0011 ## 60 0.6574 -nan 0.1000 -0.0008 ## 80 0.6460 -nan 0.1000 -0.0001 ## 100 0.6348 -nan 0.1000 0.0005 ## 120 0.6258 -nan 0.1000 -0.0003 ## 140 0.6159 -nan 0.1000 -0.0000 ## 150 0.6120 -nan 0.1000 -0.0003 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8351 -nan 0.1000 0.0163 ## 2 0.8086 -nan 0.1000 0.0111 ## 3 0.7881 -nan 0.1000 0.0106 ## 4 0.7709 -nan 0.1000 0.0074 ## 5 0.7576 -nan 0.1000 0.0060 ## 6 0.7454 -nan 0.1000 0.0047 ## 7 0.7349 -nan 0.1000 0.0047 ## 8 0.7303 -nan 0.1000 0.0009 ## 9 0.7212 -nan 0.1000 0.0028 ## 10 0.7147 -nan 0.1000 0.0011 ## 20 0.6731 -nan 0.1000 -0.0008 ## 40 0.6194 -nan 0.1000 -0.0002 ## 60 0.5928 -nan 0.1000 -0.0012 ## 80 0.5713 -nan 0.1000 -0.0002 ## 100 0.5540 -nan 0.1000 -0.0009 ## 120 0.5374 -nan 0.1000 -0.0006 ## 140 0.5247 -nan 0.1000 -0.0009 ## 150 0.5169 -nan 0.1000 -0.0003 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8275 -nan 0.1000 0.0158 ## 2 0.8018 -nan 0.1000 0.0124 ## 3 0.7813 -nan 0.1000 0.0094 ## 4 0.7602 -nan 0.1000 0.0077 ## 5 0.7446 -nan 0.1000 0.0038 ## 6 0.7322 -nan 0.1000 0.0028 ## 7 0.7218 -nan 0.1000 0.0023 ## 8 0.7100 -nan 0.1000 0.0030 ## 9 0.6981 -nan 0.1000 0.0038 ## 10 0.6911 -nan 0.1000 0.0017 ## 20 0.6455 -nan 0.1000 -0.0013 ## 40 0.5885 -nan 0.1000 -0.0010 ## 60 0.5562 -nan 0.1000 -0.0008 ## 80 0.5251 -nan 0.1000 -0.0012 ## 100 0.4970 -nan 0.1000 -0.0001 ## 120 0.4705 -nan 0.1000 -0.0009 ## 140 0.4522 -nan 0.1000 -0.0009 ## 150 0.4433 -nan 0.1000 -0.0012 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8394 -nan 0.1000 0.0118 ## 2 0.8188 -nan 0.1000 0.0072 ## 3 0.7986 -nan 0.1000 0.0065 ## 4 0.7875 -nan 0.1000 0.0038 ## 5 0.7759 -nan 0.1000 0.0012 ## 6 0.7731 -nan 0.1000 -0.0002 ## 7 0.7589 -nan 0.1000 0.0048 ## 8 
0.7511 -nan 0.1000 0.0029 ## 9 0.7471 -nan 0.1000 0.0015 ## 10 0.7381 -nan 0.1000 0.0041 ## 20 0.7008 -nan 0.1000 0.0002 ## 40 0.6788 -nan 0.1000 -0.0009 ## 60 0.6652 -nan 0.1000 -0.0001 ## 80 0.6550 -nan 0.1000 -0.0003 ## 100 0.6453 -nan 0.1000 -0.0007 ## 120 0.6399 -nan 0.1000 -0.0012 ## 140 0.6335 -nan 0.1000 0.0000 ## 150 0.6315 -nan 0.1000 -0.0003 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8354 -nan 0.1000 0.0150 ## 2 0.8132 -nan 0.1000 0.0093 ## 3 0.7961 -nan 0.1000 0.0045 ## 4 0.7781 -nan 0.1000 0.0082 ## 5 0.7684 -nan 0.1000 0.0040 ## 6 0.7528 -nan 0.1000 0.0057 ## 7 0.7415 -nan 0.1000 0.0042 ## 8 0.7334 -nan 0.1000 0.0038 ## 9 0.7255 -nan 0.1000 0.0003 ## 10 0.7150 -nan 0.1000 0.0026 ## 20 0.6734 -nan 0.1000 0.0005 ## 40 0.6304 -nan 0.1000 -0.0001 ## 60 0.6008 -nan 0.1000 -0.0003 ## 80 0.5804 -nan 0.1000 -0.0011 ## 100 0.5621 -nan 0.1000 -0.0001 ## 120 0.5404 -nan 0.1000 -0.0006 ## 140 0.5303 -nan 0.1000 -0.0012 ## 150 0.5231 -nan 0.1000 -0.0008 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8381 -nan 0.1000 0.0104 ## 2 0.8087 -nan 0.1000 0.0090 ## 3 0.7875 -nan 0.1000 0.0045 ## 4 0.7677 -nan 0.1000 0.0085 ## 5 0.7504 -nan 0.1000 0.0063 ## 6 0.7400 -nan 0.1000 0.0023 ## 7 0.7307 -nan 0.1000 0.0008 ## 8 0.7172 -nan 0.1000 0.0045 ## 9 0.7089 -nan 0.1000 0.0022 ## 10 0.6994 -nan 0.1000 0.0034 ## 20 0.6439 -nan 0.1000 0.0003 ## 40 0.5957 -nan 0.1000 -0.0005 ## 60 0.5590 -nan 0.1000 -0.0001 ## 80 0.5348 -nan 0.1000 -0.0004 ## 100 0.5080 -nan 0.1000 -0.0014 ## 120 0.4793 -nan 0.1000 -0.0009 ## 140 0.4563 -nan 0.1000 -0.0016 ## 150 0.4475 -nan 0.1000 -0.0006 ## ## Iter TrainDeviance ValidDeviance StepSize Improve ## 1 0.8374 -nan 0.1000 0.0124 ## 2 0.8182 -nan 0.1000 0.0086 ## 3 0.8004 -nan 0.1000 0.0083 ## 4 0.7852 -nan 0.1000 0.0060 ## 5 0.7742 -nan 0.1000 0.0036 ## 6 0.7620 -nan 0.1000 0.0048 ## 7 0.7528 -nan 0.1000 0.0047 ## 8 0.7464 -nan 0.1000 0.0026 ## 9 0.7441 -nan 0.1000 -0.0001 ## 10 0.7362 -nan 0.1000 0.0034 ## 20 
0.7017 -nan 0.1000 0.0015 ## 40 0.6737 -nan 0.1000 0.0000 ## 60 0.6611 -nan 0.1000 -0.0004 ## 80 0.6530 -nan 0.1000 -0.0001 ## 100 0.6468 -nan 0.1000 -0.0002 ## 120 0.6410 -nan 0.1000 -0.0004 ## 140 0.6335 -nan 0.1000 -0.0000 ## 150 0.6312 -nan 0.1000 -0.0003 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Find out variable importance} \FunctionTok{summary}\NormalTok{(objModel)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-110-1.pdf} \begin{verbatim} ## var rel.inf ## NUM_OPERANDS NUM_OPERANDS 14.28 ## HALSTEAD_CONTENT HALSTEAD_CONTENT 12.88 ## HALSTEAD_DIFFICULTY HALSTEAD_DIFFICULTY 12.45 ## HALSTEAD_VOLUME HALSTEAD_VOLUME 9.38 ## LOC_COMMENTS LOC_COMMENTS 6.03 ## NUM_UNIQUE_OPERATORS NUM_UNIQUE_OPERATORS 5.77 ## NUM_UNIQUE_OPERANDS NUM_UNIQUE_OPERANDS 5.32 ## LOC_CODE_AND_COMMENT LOC_CODE_AND_COMMENT 4.64 ## HALSTEAD_EFFORT HALSTEAD_EFFORT 4.45 ## LOC_EXECUTABLE LOC_EXECUTABLE 4.08 ## LOC_BLANK LOC_BLANK 3.74 ## CYCLOMATIC_COMPLEXITY CYCLOMATIC_COMPLEXITY 3.65 ## ESSENTIAL_COMPLEXITY ESSENTIAL_COMPLEXITY 3.41 ## LOC_TOTAL LOC_TOTAL 3.00 ## HALSTEAD_LENGTH HALSTEAD_LENGTH 2.15 ## NUM_OPERATORS NUM_OPERATORS 1.67 ## BRANCH_COUNT BRANCH_COUNT 1.59 ## DESIGN_COMPLEXITY DESIGN_COMPLEXITY 1.52 ## HALSTEAD_ERROR_EST HALSTEAD_ERROR_EST 0.00 ## HALSTEAD_LEVEL HALSTEAD_LEVEL 0.00 ## HALSTEAD_PROG_TIME HALSTEAD_PROG_TIME 0.00 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# find out model details} \NormalTok{objModel} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Stochastic Gradient Boosting ## ## 1500 samples ## 21 predictor ## 2 classes: 'N', 'Y' ## ## No pre-processing ## Resampling: Cross-Validated (3 fold) ## Summary of sample sizes: 1000, 1000, 1000 ## Resampling results across tuning parameters: ## ## interaction.depth n.trees ROC Sens Spec ## 1 50 0.803 0.978 0.145 ## 1 100 0.809 0.982 0.171 ## 1 150 0.814 0.982 0.179 ## 2 50 0.807 0.979 0.167 ## 2 100 0.805 0.976 0.222 ## 2 150 
0.801 0.966 0.269 ## 3 50 0.808 0.968 0.179 ## 3 100 0.809 0.961 0.209 ## 3 150 0.803 0.957 0.278 ## ## Tuning parameter 'shrinkage' was held constant at a value of 0.1 ## ## Tuning parameter 'n.minobsinnode' was held constant at a value of 10 ## ROC was used to select the optimal model using the largest value. ## The final values used for the model were n.trees = 150, interaction.depth = ## 1, shrinkage = 0.1 and n.minobsinnode = 10. \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \DocumentationTok{\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#} \CommentTok{\# evalutate model} \DocumentationTok{\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#} \CommentTok{\# get predictions on your testing data} \CommentTok{\# class prediction} \NormalTok{predictions }\OtherTok{\textless{}{-}} \FunctionTok{predict}\NormalTok{(}\AttributeTok{object=}\NormalTok{objModel, kc1.test[,}\SpecialCharTok{{-}}\DecValTok{22}\NormalTok{], }\AttributeTok{type=}\StringTok{\textquotesingle{}raw\textquotesingle{}}\NormalTok{)} \FunctionTok{head}\NormalTok{(predictions)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] N N N N N N ## Levels: N Y \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{postResample}\NormalTok{(}\AttributeTok{pred=}\NormalTok{predictions, }\AttributeTok{obs=}\FunctionTok{as.factor}\NormalTok{(kc1.test[,}\DecValTok{22}\NormalTok{]))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Accuracy Kappa ## 0.864 0.228 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# probabilities } \NormalTok{predictions }\OtherTok{\textless{}{-}} \FunctionTok{predict}\NormalTok{(}\AttributeTok{object=}\NormalTok{objModel, kc1.test[,}\SpecialCharTok{{-}}\DecValTok{22}\NormalTok{], }\AttributeTok{type=}\StringTok{\textquotesingle{}prob\textquotesingle{}}\NormalTok{)} \FunctionTok{head}\NormalTok{(predictions)} \end{Highlighting} \end{Shaded} 
\begin{verbatim}
## N Y
## 1 0.920 0.0800
## 2 0.961 0.0395
## 3 0.840 0.1604
## 4 0.752 0.2484
## 5 0.825 0.1749
## 6 0.933 0.0670
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{postResample}\NormalTok{(}\AttributeTok{pred=}\NormalTok{predictions[[}\DecValTok{2}\NormalTok{]], }\AttributeTok{obs=}\FunctionTok{ifelse}\NormalTok{(kc1.test[,}\DecValTok{22}\NormalTok{]}\SpecialCharTok{==}\StringTok{\textquotesingle{}Y\textquotesingle{}}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## RMSE Rsquared MAE
## 0.205 NA 0.140
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{auc }\OtherTok{\textless{}{-}} \FunctionTok{roc}\NormalTok{(}\FunctionTok{ifelse}\NormalTok{(kc1.test[,}\DecValTok{22}\NormalTok{]}\SpecialCharTok{==}\StringTok{"Y"}\NormalTok{,}\DecValTok{1}\NormalTok{,}\DecValTok{0}\NormalTok{), predictions[[}\DecValTok{2}\NormalTok{]])}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Setting levels: control = 0, case = 1
\end{verbatim}

\begin{verbatim}
## Setting direction: controls < cases
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{print}\NormalTok{(auc}\SpecialCharTok{$}\NormalTok{auc)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Area under the curve: 0.792
\end{verbatim}

\hypertarget{further-classification-models}{%
\chapter{Further Classification Models}\label{further-classification-models}}

\hypertarget{multilabel-classification}{%
\section{Multilabel classification}\label{multilabel-classification}}

Some datasets, for example reviews from mobile application repositories such as the App Store or Google Play, contain documents that can have several labels at the same time (e.g.~bug reports, feature requests, etc.).
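A simple way to handle such data is the binary relevance approach: train one independent binary classifier per label, so that a document can receive several labels at once. The sketch below uses made-up review features and labels (\texttt{nWords}, \texttt{rating}, \texttt{isBug} and \texttt{isFeature} are all illustrative; in a real setting the review text would first be turned into numeric features, e.g.~a term matrix).

```r
library(rpart)  # decision trees, one of R's recommended packages

set.seed(1)
# Made-up feature/label data standing in for preprocessed reviews
reviews <- data.frame(
  nWords    = rpois(100, 20),
  rating    = sample(1:5, 100, replace = TRUE),
  isBug     = factor(sample(c("N", "Y"), 100, replace = TRUE)),
  isFeature = factor(sample(c("N", "Y"), 100, replace = TRUE))
)
# Binary relevance: one binary model per label
bugModel     <- rpart(isBug ~ nWords + rating, data = reviews, method = "class")
featureModel <- rpart(isFeature ~ nWords + rating, data = reviews, method = "class")
# A new review may be assigned several labels at the same time
newRev <- data.frame(nWords = 25, rating = 2)
predict(bugModel, newRev, type = "class")
predict(featureModel, newRev, type = "class")
```

Binary relevance ignores correlations between labels; dedicated multilabel packages model those dependencies explicitly.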
\hypertarget{semi-supervised-learning}{% \section{Semi-supervised Learning}\label{semi-supervised-learning}} Self train a model on semi-supervised data \url{http://www.inside-r.org/packages/cran/dmwr/docs/SelfTrain} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(DMwR2)} \DocumentationTok{\#\# Small example with the Iris classification data set} \FunctionTok{data}\NormalTok{(iris)} \DocumentationTok{\#\# Dividing the data set into train and test sets} \NormalTok{idx }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(}\DecValTok{150}\NormalTok{,}\DecValTok{100}\NormalTok{)} \NormalTok{tr }\OtherTok{\textless{}{-}}\NormalTok{ iris[idx,]} \NormalTok{ts }\OtherTok{\textless{}{-}}\NormalTok{ iris[}\SpecialCharTok{{-}}\NormalTok{idx,]} \DocumentationTok{\#\# Learn a tree with the full train set and test it} \NormalTok{stdTree }\OtherTok{\textless{}{-}} \FunctionTok{rpartXse}\NormalTok{(Species}\SpecialCharTok{\textasciitilde{}}\NormalTok{ .,tr,}\AttributeTok{se=}\FloatTok{0.5}\NormalTok{)} \FunctionTok{table}\NormalTok{(}\FunctionTok{predict}\NormalTok{(stdTree,ts,}\AttributeTok{type=}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{),ts}\SpecialCharTok{$}\NormalTok{Species)} \DocumentationTok{\#\# Now let us create another training set with most of the target} \DocumentationTok{\#\# variable values unknown} \NormalTok{trSelfT }\OtherTok{\textless{}{-}}\NormalTok{ tr} \NormalTok{nas }\OtherTok{\textless{}{-}} \FunctionTok{sample}\NormalTok{(}\DecValTok{100}\NormalTok{,}\DecValTok{70}\NormalTok{)} \NormalTok{trSelfT[nas,}\StringTok{\textquotesingle{}Species\textquotesingle{}}\NormalTok{] }\OtherTok{\textless{}{-}} \ConstantTok{NA} \DocumentationTok{\#\# Learn a tree using only the labelled cases and test it} \NormalTok{baseTree }\OtherTok{\textless{}{-}} \FunctionTok{rpartXse}\NormalTok{(Species}\SpecialCharTok{\textasciitilde{}}\NormalTok{ 
.,trSelfT[}\SpecialCharTok{{-}}\NormalTok{nas,],}\AttributeTok{se=}\FloatTok{0.5}\NormalTok{)}
\FunctionTok{table}\NormalTok{(}\FunctionTok{predict}\NormalTok{(baseTree,ts,}\AttributeTok{type=}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{),ts}\SpecialCharTok{$}\NormalTok{Species)}
\DocumentationTok{\#\# The user{-}defined function that will be used in the self{-}training process}
\NormalTok{f }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(m,d) \{ }
\NormalTok{ l }\OtherTok{\textless{}{-}} \FunctionTok{predict}\NormalTok{(m,d,}\AttributeTok{type=}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{)}
\NormalTok{ c }\OtherTok{\textless{}{-}} \FunctionTok{apply}\NormalTok{(}\FunctionTok{predict}\NormalTok{(m,d),}\DecValTok{1}\NormalTok{,max)}
\FunctionTok{data.frame}\NormalTok{(}\AttributeTok{cl=}\NormalTok{l,}\AttributeTok{p=}\NormalTok{c)}
\NormalTok{\}}
\DocumentationTok{\#\# Self train the same model using the semi{-}supervised data and test the}
\DocumentationTok{\#\# resulting model}
\NormalTok{treeSelfT }\OtherTok{\textless{}{-}} \FunctionTok{SelfTrain}\NormalTok{(Species}\SpecialCharTok{\textasciitilde{}}\NormalTok{ .,trSelfT,}\FunctionTok{learner}\NormalTok{(}\StringTok{\textquotesingle{}rpartXse\textquotesingle{}}\NormalTok{,}\FunctionTok{list}\NormalTok{(}\AttributeTok{se=}\FloatTok{0.5}\NormalTok{)),}\StringTok{\textquotesingle{}f\textquotesingle{}}\NormalTok{)}
\FunctionTok{table}\NormalTok{(}\FunctionTok{predict}\NormalTok{(treeSelfT,ts,}\AttributeTok{type=}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{),ts}\SpecialCharTok{$}\NormalTok{Species)}
\end{Highlighting}
\end{Shaded}

\hypertarget{social-network-analysis-in-se}{%
\chapter{Social Network Analysis in SE}\label{social-network-analysis-in-se}}

In this example, we will use data from the MSR14 challenge.
Further information and datasets: \url{http://openscience.us/repo/msr/msr14.html}

Similar databases can be obtained using MetricsGrimoire or other tools.

In this simple example, we create a network from the users and followers extracted from GitHub and stored in a MySQL database.

We can read the data directly from the MySQL database:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(RMySQL)}
\CommentTok{\# Connecting to MySQL}
\NormalTok{mydb }\OtherTok{=} \FunctionTok{dbConnect}\NormalTok{(}\FunctionTok{MySQL}\NormalTok{(), }\AttributeTok{user=}\StringTok{\textquotesingle{}msr14\textquotesingle{}}\NormalTok{, }\AttributeTok{password=}\StringTok{\textquotesingle{}msr14\textquotesingle{}}\NormalTok{, }\AttributeTok{dbname=}\StringTok{\textquotesingle{}msr14\textquotesingle{}}\NormalTok{, }\AttributeTok{host=}\StringTok{\textquotesingle{}localhost\textquotesingle{}}\NormalTok{)}
\CommentTok{\# Retrieving data from MySQL}
\NormalTok{sql }\OtherTok{\textless{}{-}} \StringTok{"select user\_id, follower\_id from followers limit 100;"}
\NormalTok{rs }\OtherTok{=} \FunctionTok{dbSendQuery}\NormalTok{(mydb, sql)}
\NormalTok{data }\OtherTok{\textless{}{-}} \FunctionTok{fetch}\NormalTok{(rs, }\AttributeTok{n=}\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

Alternatively, we can create a CSV file directly from MySQL and load it:

\begin{Shaded}
\begin{Highlighting}[]
\SpecialCharTok{$}\NormalTok{mysql }\SpecialCharTok{{-}}\NormalTok{u msr14 }\SpecialCharTok{{-}}\NormalTok{pmsr14 msr14}
\SpecialCharTok{\textgreater{}}\NormalTok{ SELECT }\StringTok{\textquotesingle{}user\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}follower\textquotesingle{}}
\NormalTok{UNION ALL}
\NormalTok{SELECT user\_id,follower\_id }
\NormalTok{ FROM followers }
\NormalTok{ LIMIT }\DecValTok{1000}
\NormalTok{ INTO OUTFILE }\StringTok{"/tmp/followers.csv"}
\NormalTok{ FIELDS TERMINATED BY }\StringTok{\textquotesingle{},\textquotesingle{}}
\NormalTok{
LINES TERMINATED BY }\StringTok{\textquotesingle{}}\SpecialCharTok{\textbackslash{}n}\StringTok{\textquotesingle{}}\NormalTok{;} \end{Highlighting} \end{Shaded} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Data already extracted and stored as CSV file (for demo purposes)} \NormalTok{dat }\OtherTok{=} \FunctionTok{read.csv}\NormalTok{(}\StringTok{"./datasets/sna/followers.csv"}\NormalTok{, }\AttributeTok{header =} \ConstantTok{FALSE}\NormalTok{, }\AttributeTok{sep =} \StringTok{","}\NormalTok{)} \NormalTok{dat }\OtherTok{\textless{}{-}} \FunctionTok{head}\NormalTok{(dat,}\DecValTok{100}\NormalTok{)} \end{Highlighting} \end{Shaded} We can now create the graph \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(igraph)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Attaching package: 'igraph' \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:arules': ## ## union \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:class': ## ## knn \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:modeltools': ## ## clusters \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:lubridate': ## ## %--%, union \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:dplyr': ## ## as_data_frame, groups, union \end{verbatim} \begin{verbatim} ## The following objects are masked from 'package:stats': ## ## decompose, spectrum \end{verbatim} \begin{verbatim} ## The following object is masked from 'package:base': ## ## union \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Create a graph} \NormalTok{g }\OtherTok{\textless{}{-}} \FunctionTok{graph.data.frame}\NormalTok{(dat, }\AttributeTok{directed =} \ConstantTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} Some values: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{summary}\NormalTok{(g); } \end{Highlighting} \end{Shaded} \begin{verbatim} ## 
IGRAPH 0554337 DN-- 95 100 --
## + attr: name (v/c)
\end{verbatim}

Plotting the graph:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{layout1 }\OtherTok{\textless{}{-}} \FunctionTok{layout.fruchterman.reingold}\NormalTok{(g)}
\FunctionTok{plot}\NormalTok{(g, }\AttributeTok{layout=}\NormalTok{layout1)}
\end{Highlighting}
\end{Shaded}

\includegraphics{DASE_files/figure-latex/unnamed-chunk-117-1.pdf}

Another layout:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{plot}\NormalTok{(g, }\AttributeTok{layout=}\NormalTok{layout.kamada.kawai)}
\end{Highlighting}
\end{Shaded}

\includegraphics{DASE_files/figure-latex/unnamed-chunk-118-1.pdf}

A Tk application can be launched to show the plot interactively:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{tkplot}\NormalTok{(g, }\AttributeTok{layout =}\NormalTok{ layout.fruchterman.reingold)}
\end{Highlighting}
\end{Shaded}

Some metrics:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{metrics }\OtherTok{\textless{}{-}} \FunctionTok{data.frame}\NormalTok{(}
\AttributeTok{deg =} \FunctionTok{degree}\NormalTok{(g),}
\AttributeTok{bet =} \FunctionTok{betweenness}\NormalTok{(g),}
\AttributeTok{clo =} \FunctionTok{closeness}\NormalTok{(g),}
\AttributeTok{eig =} \FunctionTok{evcent}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{vector,}
\AttributeTok{cor =} \FunctionTok{graph.coreness}\NormalTok{(g)}
\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Warning in closeness(g): At centrality.c:2784 :closeness centrality is not well-
## defined for disconnected graphs
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{head}\NormalTok{(metrics)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## deg bet clo eig cor
## 6183 1 0 0.000113 0.00000 1
## 49199 1 0 0.000113 0.00000 1
## 71080 1 0 0.000113 0.00000 1
## 162983 1 0 0.000113 0.00000 1
## 772 3 0 0.000116 0.10409 2
## 907 1 0 0.000113 0.00814 1
\end{verbatim}

To do: explain the metrics and produce better graphs.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(ggplot2)}
\CommentTok{\# residuals of regressing eigenvector on betweenness centrality}
\CommentTok{\# (high residuals point at potential key actors)}
\NormalTok{res }\OtherTok{\textless{}{-}} \FunctionTok{lm}\NormalTok{(eig }\SpecialCharTok{\textasciitilde{}}\NormalTok{ bet, }\AttributeTok{data=}\NormalTok{metrics)}\SpecialCharTok{$}\NormalTok{residuals}
\FunctionTok{ggplot}\NormalTok{(}
\NormalTok{ metrics,}
\FunctionTok{aes}\NormalTok{(}\AttributeTok{x=}\NormalTok{bet, }\AttributeTok{y=}\NormalTok{eig,}
\AttributeTok{label=}\FunctionTok{rownames}\NormalTok{(metrics),}
\AttributeTok{colour=}\NormalTok{res, }\AttributeTok{size=}\FunctionTok{abs}\NormalTok{(res))}
\NormalTok{)}\SpecialCharTok{+}
\FunctionTok{xlab}\NormalTok{(}\StringTok{"Betweenness Centrality"}\NormalTok{)}\SpecialCharTok{+}
\FunctionTok{ylab}\NormalTok{(}\StringTok{"Eigenvector Centrality"}\NormalTok{)}\SpecialCharTok{+}
\FunctionTok{geom\_text}\NormalTok{()} \SpecialCharTok{+}
\FunctionTok{ggtitle}\NormalTok{(}\StringTok{"Key Actor Analysis"}\NormalTok{)}
\FunctionTok{V}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{label.cex }\OtherTok{\textless{}{-}} \FloatTok{2.2} \SpecialCharTok{*} \FunctionTok{degree}\NormalTok{(g) }\SpecialCharTok{/} \FunctionTok{max}\NormalTok{(}\FunctionTok{degree}\NormalTok{(g))}\SpecialCharTok{+}\NormalTok{ .}\DecValTok{2}
\FunctionTok{V}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{label.color }\OtherTok{\textless{}{-}} \FunctionTok{rgb}\NormalTok{(}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, .}\DecValTok{2}\NormalTok{, .}\DecValTok{8}\NormalTok{)}
\FunctionTok{V}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{frame.color }\OtherTok{\textless{}{-}} \ConstantTok{NA}
\CommentTok{\# edges are assumed to carry a weight attribute here}
\NormalTok{egam }\OtherTok{\textless{}{-}}\NormalTok{ (}\FunctionTok{log}\NormalTok{(}\FunctionTok{E}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{weight)}\SpecialCharTok{+}\NormalTok{.}\DecValTok{4}\NormalTok{) }\SpecialCharTok{/} \FunctionTok{max}\NormalTok{(}\FunctionTok{log}\NormalTok{(}\FunctionTok{E}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{weight)}\SpecialCharTok{+}\NormalTok{.}\DecValTok{4}\NormalTok{)}
\FunctionTok{E}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{color }\OtherTok{\textless{}{-}} \FunctionTok{rgb}\NormalTok{(.}\DecValTok{5}\NormalTok{, .}\DecValTok{5}\NormalTok{, }\DecValTok{0}\NormalTok{, egam)}
\FunctionTok{E}\NormalTok{(g)}\SpecialCharTok{$}\NormalTok{width }\OtherTok{\textless{}{-}}\NormalTok{ egam}
\CommentTok{\# plot the graph in layout1}
\FunctionTok{plot}\NormalTok{(g, }\AttributeTok{layout=}\NormalTok{layout1)}
\end{Highlighting}
\end{Shaded}

Further information: \url{http://sna.stanford.edu/lab.php?l=1}

\hypertarget{text-mining-software-engineering-data}{%
\chapter{Text Mining Software Engineering Data}\label{text-mining-software-engineering-data}}

In software engineering, there is a lot of information in plain text, such as requirements, bug reports, mails, and reviews of applications. Typically this information resides in Software Configuration Management systems (SCM), Bug Tracking Systems (BTS) such as Bugzilla, or application stores such as Google Play or Apple's AppStore, and it can be mined to extract relevant information. Here we briefly explain the text mining process and how this can be done with R.

A well-known package for \emph{text mining} is \texttt{tm} \citep[\citet{FeinererHM08}]{FeinererH15}. Another popular package is \texttt{wordcloud}.

\hypertarget{terminology}{%
\section{Terminology}\label{terminology}}

The workflow that we follow for analyzing a set of text documents is:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Importing data. A \emph{Corpus} is a collection of text documents, implemented as VCorpus (corpora are R objects held fully in memory). The \texttt{tm} package provides several corpus constructors: \texttt{DirSource}, \texttt{VectorSource}, or \texttt{DataframeSource} (\texttt{getSources()}).
\end{enumerate}

There are several parameters that control the creation of a \emph{Corpus}.
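For instance, the import step can be sketched with a corpus built in memory from a character vector (the three review strings below are invented for illustration):

```r
library(tm)  # text mining framework

# Three invented review snippets standing in for real documents
docs <- c("Crashes when opening the map",
          "Great app, works fine",
          "Please add an offline mode")
# VectorSource treats each element of the vector as a document;
# readerControl sets the (optional) language of the corpus
corp <- VCorpus(VectorSource(docs), readerControl = list(language = "en"))
inspect(corp)
```

The same constructor works with `DirSource` to read every file in a directory, which is the usual setting for bug reports exported as plain text.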
The parameter \texttt{readerControl} of the corpus constructor has to be a list with the named components \texttt{reader} and \texttt{language}.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item
  Preprocessing: in this step we may remove common words and punctuation, and we may perform other operations. We may also do these operations after creating the DocumentTermMatrix.
\item
  Inspecting and exploring data: individual documents can be accessed via \texttt{{[}{[}}.
\item
  Transformations: transformations are done via the \texttt{tm\_map()} function.

  \begin{itemize}
  \tightlist
  \item
    \texttt{tm\_map(\_\_\_\_\_,\ stripWhitespace)}
  \item
    \texttt{tm\_map(\_\_\_\_\_,\ content\_transformer(tolower))}
  \item
    \texttt{tm\_map(\_\_\_\_\_,\ removeWords,\ stopwords("english"))}
  \item
    \texttt{tm\_map(\_\_\_\_\_,\ stemDocument)}
  \end{itemize}
\item
  Creating \texttt{Term-Document} matrices: TermDocumentMatrix and DocumentTermMatrix.

  \begin{itemize}
  \tightlist
  \item
    A document-term matrix is a matrix with documents as the rows and terms as the columns. Each cell of the matrix contains the frequency count of a word. We use DocumentTermMatrix() to create the matrix.
  \item
    \texttt{inspect(DocumentTermMatrix(\ newsreuters,\ list(dictionary\ =\ c("term1",\ "term2",\ "term3"))))} displays detailed information on a corpus or a term-document matrix.
  \end{itemize}
\item
  Relationships between terms.

  \begin{itemize}
  \tightlist
  \item
    \texttt{findFreqTerms(\_\_\_\_\_,\ anumber)}
  \item
    \texttt{findAssocs(Mydtm,\ "aterm",\ anumbercorrelation)}
  \item
    A dictionary is a (multi-)set of strings. It is often used to denote relevant terms in text mining.
  \end{itemize}
\item
  Clustering and Classification
\end{enumerate}

\hypertarget{example-of-classifying-bugs-from-bugzilla}{%
\section{Example of classifying bugs from Bugzilla}\label{example-of-classifying-bugs-from-bugzilla}}

Bugzilla is an Issue Tracking System that allows us to follow the evolution of a project. The following example shows how to work with entries from Bugzilla.
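Before turning to the Bugzilla data, the preprocessing and matrix-creation steps of the workflow above can be sketched end to end on a toy corpus (the documents are invented for illustration):

```r
library(tm)  # text mining framework

docs <- c("The app crashes on start. Annoying bug!",
          "It crashes again after the update",
          "Nice maps and clear directions")
corp <- VCorpus(VectorSource(docs))
# Typical preprocessing transformations
corp <- tm_map(corp, stripWhitespace)
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, removeWords, stopwords("english"))
# Document-term matrix: documents as rows, terms as columns
dtm <- DocumentTermMatrix(corp)
inspect(dtm)
# Terms appearing at least twice overall
findFreqTerms(dtm, 2)
```

On real bug reports the same pipeline applies unchanged; only the source of the corpus differs.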
It is assumed that the data has been extracted and we have the records in a flat file (this can be done using Web crawlers or directly using the SQL database). \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(foreign)} \CommentTok{\# path\_name \textless{}{-} file.path("C:", "datasets", "textMining")} \CommentTok{\# path\_name} \CommentTok{\# dir(path\_name)} \CommentTok{\#Import data} \FunctionTok{options}\NormalTok{(}\AttributeTok{stringsAsFactors =} \ConstantTok{FALSE}\NormalTok{)} \NormalTok{d }\OtherTok{\textless{}{-}} \FunctionTok{read.arff}\NormalTok{(}\StringTok{"./datasets/textMining/reviewsBugs.arff"}\NormalTok{ )} \FunctionTok{str}\NormalTok{(d) }\CommentTok{\#print out information about d} \end{Highlighting} \end{Shaded} \begin{verbatim} ## 'data.frame': 789 obs. of 2 variables: ## $ revContent: chr "Can't see traffic colors now With latest updates I can't see the traffic green/red/yellow - I have to pull over"| __truncated__ "Google Map I like it so far, it has not steered me wrong." "Could be 100X better Google should start listening to customers then they'd actually build a proper product." "I like that! Easily more helpful than the map app that comes with your phone." ... ## $ revBug : Factor w/ 2 levels "N","Y": 2 1 1 1 1 1 2 1 2 1 ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{head}\NormalTok{(d,}\DecValTok{2}\NormalTok{) }\CommentTok{\# the first two rows of d. } \end{Highlighting} \end{Shaded} \begin{verbatim} ## revContent ## 1 Can't see traffic colors now With latest updates I can't see the traffic green/red/yellow - I have to pull over and zoom in the map so that one road fills the entire screen. Traffic checks are (were) the only reason I use google maps! ## 2 Google Map I like it so far, it has not steered me wrong. 
##   revBug
## 1      Y
## 2      N
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# fifth entry}
\NormalTok{d}\SpecialCharTok{$}\NormalTok{revContent[}\DecValTok{5}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] "Just deleted No I don't want to sign in or sign up for anything stop asking"
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{d}\SpecialCharTok{$}\NormalTok{revBug[}\DecValTok{5}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## [1] N
## Levels: N Y
\end{verbatim}

Next, a Document-Term Matrix (DTM) is created; the matrix \texttt{dtm} used below is assumed to have been built from the reviews with \texttt{DocumentTermMatrix()} (code not shown). Now, we can explore things such as ``which words are associated with `bug'?''

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# which words are associated with "bug"?}
\FunctionTok{findAssocs}\NormalTok{(dtm, }\StringTok{\textquotesingle{}bug\textquotesingle{}}\NormalTok{, .}\DecValTok{3}\NormalTok{) }\CommentTok{\# minimum correlation of 0.3. Change accordingly. }
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## $bug
##     it?    mini   major   users causing    ipad
##    1.00    0.92    0.91    0.80    0.62    0.57
\end{verbatim}

And find frequent terms.

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{findFreqTerms}\NormalTok{(dtm, }\DecValTok{15}\NormalTok{) }\CommentTok{\#terms that appear 15 or more times, in this case}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##  [1] "google"    "map"       "like"      "app"       "just"      "good"
##  [7] "crashes"   "maps"      "time"      "get"       "much"      "really"
## [13] "update"    "great"     "nice"      "best"      "ever"      "fun"
## [19] "review"    "love"      "awesome"   "cool"      "amazing"   "game"
## [25] "clans"     "clash"     "game."     "game!"     "addicting" "play"
## [31] "playing"   "addictive"
\end{verbatim}

Remove some terms:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{sparseparam }\OtherTok{\textless{}{-}} \FloatTok{0.90} \CommentTok{\# will make the matrix 90\% empty space, maximum.
Change this, as you like.} \NormalTok{dtm\_sprs }\OtherTok{\textless{}{-}} \FunctionTok{removeSparseTerms}\NormalTok{(dtm,}\AttributeTok{sparse=}\NormalTok{sparseparam)} \FunctionTok{inspect}\NormalTok{(dtm\_sprs)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## <<DocumentTermMatrix (documents: 789, terms: 9)>> ## Non-/sparse entries: 1233/5868 ## Sparsity : 83% ## Maximal term length: 7 ## Weighting : term frequency - inverse document frequency (normalized) (tf-idf) ## Sample : ## Terms ## Docs app awesome best clash fun game good great love ## 159 0 0.00 0 1.6 0 0 0.0 0 1.46 ## 163 0 3.12 0 0.0 0 0 0.0 0 0.00 ## 178 0 3.12 0 0.0 0 0 0.0 0 0.00 ## 400 0 0.00 0 0.0 0 0 3.1 0 0.00 ## 421 0 0.00 0 0.0 0 0 3.1 0 0.00 ## 472 0 0.00 0 0.0 0 0 3.1 0 0.00 ## 50 0 0.00 0 0.0 0 0 3.1 0 0.00 ## 525 0 1.56 0 0.0 0 0 0.0 0 1.46 ## 527 0 0.00 0 0.0 0 0 3.1 0 0.00 ## 532 0 0.00 0 1.6 0 0 0.0 0 1.46 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{maintitle }\OtherTok{\textless{}{-}}\FunctionTok{paste0}\NormalTok{(}\StringTok{"Most frequent terms (sparseness="}\NormalTok{ ,sparseparam , }\StringTok{" )"}\NormalTok{)} \FunctionTok{barplot}\NormalTok{(}\FunctionTok{as.matrix}\NormalTok{(dtm\_sprs),}\AttributeTok{xlab=}\StringTok{"terms"}\NormalTok{,}\AttributeTok{ylab=}\StringTok{"number of occurrences"}\NormalTok{, }\AttributeTok{main=}\NormalTok{maintitle)} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-123-1.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# organize terms by their frequency } \NormalTok{freq\_dtm\_sprs }\OtherTok{\textless{}{-}} \FunctionTok{colSums}\NormalTok{(}\FunctionTok{as.matrix}\NormalTok{(dtm\_sprs))} \FunctionTok{length}\NormalTok{(freq\_dtm\_sprs)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 9 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{sorted\_freq\_dtm\_sprs }\OtherTok{\textless{}{-}} \FunctionTok{sort}\NormalTok{(freq\_dtm\_sprs, 
}\AttributeTok{decreasing =} \ConstantTok{TRUE}\NormalTok{)}
\NormalTok{sorted\_freq\_dtm\_sprs}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##    good   great    game awesome     fun    best    love   clash     app
##    77.8    68.8    68.7    64.6    55.8    54.1    45.4    42.5    31.3
\end{verbatim}

Create a data frame that will serve as input to the classifier; the last column will hold the class label:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\#dtmdf \textless{}{-} as.data.frame(dtm.90)}
\CommentTok{\#dtmdf \textless{}{-} as.data.frame(inspect(dtm\_sprs))}
\NormalTok{dtmdf }\OtherTok{\textless{}{-}} \FunctionTok{as.data.frame}\NormalTok{(}\FunctionTok{as.matrix}\NormalTok{(dtm\_sprs))}
\CommentTok{\# rownames(dtm)\textless{}{-} 1:nrow(dtm)}
\NormalTok{class }\OtherTok{\textless{}{-}}\NormalTok{ d}\SpecialCharTok{$}\NormalTok{revBug}
\NormalTok{dtmdf }\OtherTok{\textless{}{-}} \FunctionTok{cbind}\NormalTok{(dtmdf,class)}
\FunctionTok{head}\NormalTok{(dtmdf, }\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

Use any classifier now:

\begin{itemize}
\tightlist
\item
  split the data frame into training and testing subsets
\item
  build the classification model using the training subset
\item
  apply the model to the testing subset and obtain the Confusion Matrix
\item
  analyse the results
\end{itemize}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(caret)}
\FunctionTok{library}\NormalTok{(randomForest)}
\NormalTok{inTraining }\OtherTok{\textless{}{-}} \FunctionTok{createDataPartition}\NormalTok{(dtmdf}\SpecialCharTok{$}\NormalTok{class, }\AttributeTok{p =}\NormalTok{ .}\DecValTok{75}\NormalTok{, }\AttributeTok{list =} \ConstantTok{FALSE}\NormalTok{)}
\NormalTok{training }\OtherTok{\textless{}{-}}\NormalTok{ dtmdf[ inTraining,]}
\NormalTok{testing }\OtherTok{\textless{}{-}}\NormalTok{ dtmdf[}\SpecialCharTok{{-}}\NormalTok{inTraining,]}
\NormalTok{fitControl }\OtherTok{\textless{}{-}} \FunctionTok{trainControl}\NormalTok{(}\DocumentationTok{\#\# 5{-}fold CV}
                           \AttributeTok{method =} \StringTok{"repeatedcv"}\NormalTok{,}
                           \AttributeTok{number =} \DecValTok{5}\NormalTok{,}
                           \DocumentationTok{\#\# repeated five times}
                           \AttributeTok{repeats =} \DecValTok{5}\NormalTok{)}
\NormalTok{gbmFit1 }\OtherTok{\textless{}{-}} \FunctionTok{train}\NormalTok{(class }\SpecialCharTok{\textasciitilde{}}\NormalTok{ ., }\AttributeTok{data =}\NormalTok{ training,}
                 \AttributeTok{method =} \StringTok{"gbm"}\NormalTok{,}
                 \AttributeTok{trControl =}\NormalTok{ fitControl,}
                 \DocumentationTok{\#\# This last option is actually one}
                 \DocumentationTok{\#\# for gbm() that passes through}
                 \AttributeTok{verbose =} \ConstantTok{FALSE}\NormalTok{)}
\NormalTok{gbmFit1}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Stochastic Gradient Boosting 
## 
## 593 samples
##   9 predictor
##   2 classes: 'N', 'Y' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold, repeated 5 times) 
## Summary of sample sizes: 475, 474, 474, 475, 474, 474, ... 
## Resampling results across tuning parameters:
## 
##   interaction.depth  n.trees  Accuracy  Kappa 
##   1                   50      0.798     0.0026
##   1                  100      0.801     0.0830
##   1                  150      0.806     0.2030
##   2                   50      0.810     0.1918
##   2                  100      0.808     0.2585
##   2                  150      0.808     0.2743
##   3                   50      0.813     0.2629
##   3                  100      0.809     0.2733
##   3                  150      0.807     0.2822
## 
## Tuning parameter 'shrinkage' was held constant at a value of 0.1
## 
## Tuning parameter 'n.minobsinnode' was held constant at a value of 10
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were n.trees = 50, interaction.depth =
##  3, shrinkage = 0.1 and n.minobsinnode = 10.
\end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# trellis.par.set(caretTheme())} \CommentTok{\# plot(gbmFit1)} \CommentTok{\# } \CommentTok{\# trellis.par.set(caretTheme())} \CommentTok{\# plot(gbmFit1, metric = "Kappa")} \FunctionTok{head}\NormalTok{(}\FunctionTok{predict}\NormalTok{(gbmFit1, testing, }\AttributeTok{type =} \StringTok{"prob"}\NormalTok{))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## N Y ## 1 0.718 0.2816 ## 2 0.901 0.0993 ## 3 0.677 0.3233 ## 4 0.677 0.3233 ## 5 0.596 0.4039 ## 6 0.950 0.0499 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{conf\_mat }\OtherTok{\textless{}{-}} \FunctionTok{confusionMatrix}\NormalTok{(testing}\SpecialCharTok{$}\NormalTok{class, }\FunctionTok{predict}\NormalTok{(gbmFit1, testing))} \NormalTok{conf\_mat} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Confusion Matrix and Statistics ## ## Reference ## Prediction N Y ## N 152 5 ## Y 31 8 ## ## Accuracy : 0.816 ## 95% CI : (0.755, 0.868) ## No Information Rate : 0.934 ## P-Value [Acc > NIR] : 1 ## ## Kappa : 0.231 ## ## Mcnemar's Test P-Value : 3.09e-05 ## ## Sensitivity : 0.831 ## Specificity : 0.615 ## Pos Pred Value : 0.968 ## Neg Pred Value : 0.205 ## Prevalence : 0.934 ## Detection Rate : 0.776 ## Detection Prevalence : 0.801 ## Balanced Accuracy : 0.723 ## ## 'Positive' Class : N ## \end{verbatim} We may compute manually all derived variables from the Confusion Matrix. 
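As a quick arithmetic check, taking \(N\) as the positive class, the figures reported by \texttt{confusionMatrix} above can be recomputed by hand from the table (TP = 152, FP = 5, FN = 31, TN = 8):

\begin{align*}
\text{Accuracy} &= \frac{TP + TN}{TP + FP + FN + TN} = \frac{152 + 8}{196} \approx 0.816\\
\text{Sensitivity} &= \frac{TP}{TP + FN} = \frac{152}{183} \approx 0.831\\
\text{Specificity} &= \frac{TN}{TN + FP} = \frac{8}{13} \approx 0.615\\
\text{Precision (PPV)} &= \frac{TP}{TP + FP} = \frac{152}{157} \approx 0.968
\end{align*}

These match the values printed above, and they are exactly the quantities derived step by step in the code that follows.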
See Section -- with the description of the Confusion Matrix \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# str(conf\_mat)} \NormalTok{TruePositive }\OtherTok{\textless{}{-}}\NormalTok{ conf\_mat}\SpecialCharTok{$}\NormalTok{table[}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{]} \NormalTok{TruePositive} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 152 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{FalsePositive }\OtherTok{\textless{}{-}}\NormalTok{ conf\_mat}\SpecialCharTok{$}\NormalTok{table[}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{]} \NormalTok{FalsePositive} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 5 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{FalseNegative }\OtherTok{\textless{}{-}}\NormalTok{ conf\_mat}\SpecialCharTok{$}\NormalTok{table[}\DecValTok{2}\NormalTok{,}\DecValTok{1}\NormalTok{]} \NormalTok{FalseNegative} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 31 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{TrueNegative }\OtherTok{\textless{}{-}}\NormalTok{ conf\_mat}\SpecialCharTok{$}\NormalTok{table[}\DecValTok{2}\NormalTok{,}\DecValTok{2}\NormalTok{]} \NormalTok{TrueNegative} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 8 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# Sum columns in the confusion matrix} \NormalTok{ConditionPositive }\OtherTok{\textless{}{-}}\NormalTok{ TruePositive }\SpecialCharTok{+}\NormalTok{ FalseNegative} \NormalTok{ConditionNegative }\OtherTok{\textless{}{-}}\NormalTok{ FalsePositive }\SpecialCharTok{+}\NormalTok{ TrueNegative} \NormalTok{TotalPopulation }\OtherTok{\textless{}{-}}\NormalTok{ ConditionPositive }\SpecialCharTok{+}\NormalTok{ ConditionNegative} \NormalTok{TotalPopulation} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 196 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\#Sum rows in the confusion matrix} \NormalTok{PredictedPositive 
}\OtherTok{\textless{}{-}}\NormalTok{ TruePositive }\SpecialCharTok{+}\NormalTok{ FalsePositive} \NormalTok{PredictedNegative }\OtherTok{\textless{}{-}}\NormalTok{ FalseNegative }\SpecialCharTok{+}\NormalTok{ TrueNegative} \CommentTok{\# Total Predicted must be equal to the total population} \NormalTok{PredictedPositive}\SpecialCharTok{+}\NormalTok{PredictedNegative} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 196 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{SensitivityRecall\_TPR }\OtherTok{\textless{}{-}}\NormalTok{ TruePositive }\SpecialCharTok{/}\NormalTok{ ConditionPositive} \NormalTok{SensitivityRecall\_TPR} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.831 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{Specificity\_TNR\_SPC }\OtherTok{\textless{}{-}}\NormalTok{ TrueNegative }\SpecialCharTok{/}\NormalTok{ ConditionNegative} \NormalTok{Specificity\_TNR\_SPC} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.615 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{Precision\_PPV }\OtherTok{\textless{}{-}}\NormalTok{ TruePositive }\SpecialCharTok{/}\NormalTok{ PredictedPositive} \NormalTok{Precision\_PPV } \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.968 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{NegativePredictedValue\_NPV }\OtherTok{\textless{}{-}}\NormalTok{ TrueNegative }\SpecialCharTok{/}\NormalTok{ PredictedNegative} \NormalTok{NegativePredictedValue\_NPV} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.205 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{Prevalence }\OtherTok{\textless{}{-}}\NormalTok{ ConditionPositive }\SpecialCharTok{/}\NormalTok{ TotalPopulation} \NormalTok{Prevalence} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.934 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{Accuracy\_ACC }\OtherTok{\textless{}{-}}\NormalTok{ (TruePositive }\SpecialCharTok{+}\NormalTok{ 
TrueNegative) }\SpecialCharTok{/}\NormalTok{ TotalPopulation} \NormalTok{Accuracy\_ACC} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.816 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{FalseDiscoveryRate\_FDR }\OtherTok{\textless{}{-}}\NormalTok{ FalsePositive }\SpecialCharTok{/}\NormalTok{ PredictedPositive} \NormalTok{FalseDiscoveryRate\_FDR} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.0318 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{FalseOmisionRate\_FOR }\OtherTok{\textless{}{-}}\NormalTok{ FalseNegative }\SpecialCharTok{/}\NormalTok{ PredictedNegative } \NormalTok{FalseOmisionRate\_FOR} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.795 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{FallOut\_FPR }\OtherTok{\textless{}{-}}\NormalTok{ FalsePositive }\SpecialCharTok{/}\NormalTok{ ConditionNegative} \NormalTok{FallOut\_FPR} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.385 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{MissRate\_FNR }\OtherTok{\textless{}{-}}\NormalTok{ FalseNegative }\SpecialCharTok{/}\NormalTok{ ConditionPositive} \NormalTok{MissRate\_FNR } \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.169 \end{verbatim} And finally, a word cloud as an example that appears everywhere these days. 
\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(wordcloud)}
\CommentTok{\# calculate the frequency of words and sort in descending order.}
\NormalTok{wordFreqs}\OtherTok{=}\FunctionTok{sort}\NormalTok{(}\FunctionTok{colSums}\NormalTok{(}\FunctionTok{as.matrix}\NormalTok{(dtm\_sprs)),}\AttributeTok{decreasing=}\ConstantTok{TRUE}\NormalTok{)}
\FunctionTok{wordcloud}\NormalTok{(}\AttributeTok{words=}\FunctionTok{names}\NormalTok{(wordFreqs),}\AttributeTok{freq=}\NormalTok{wordFreqs)}
\end{Highlighting}
\end{Shaded}

\includegraphics{DASE_files/figure-latex/WordCloud-1.pdf}

\hypertarget{extracting-data-from-twitter}{%
\section{Extracting data from Twitter}\label{extracting-data-from-twitter}}

The hardest part is linking with Twitter. The use of the TwitteR package is explained in this \href{./twitter.Rmd}{example}.

\hypertarget{time-series}{%
\chapter{Time Series}\label{time-series}}

Many sources of information are time related: for example, data from Software Configuration Management (SCM) systems such as Git or \href{http://www.github.com}{GitHub}, or from dashboards such as \href{http://metricsgrimoire.github.io/}{Metrics Grimoire} from \href{http://bitergia.com/}{Bitergia} or \href{http://www.sonarqube.org/}{SonarQube}. With MetricsGrimoire or SonarQube we can extract datasets or dumps of databases. For example, a dashboard for the OpenStack project is located at \url{http://activity.openstack.org/dash/browser/} and provides datasets as MySQL dumps or JSON files.
With R we can read a JSON file as follows: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{library}\NormalTok{(jsonlite)} \CommentTok{\# Get the JSON data } \CommentTok{\# gm \textless{}{-} fromJSON("http://activity.openstack.org/dash/browser/data/json/nova.git{-}scm{-}rep{-}evolutionary.json")} \NormalTok{gm }\OtherTok{\textless{}{-}} \FunctionTok{fromJSON}\NormalTok{(}\StringTok{\textquotesingle{}./datasets/timeSeries/nova.git{-}scm{-}rep{-}evolutionary.json\textquotesingle{}}\NormalTok{)} \FunctionTok{str}\NormalTok{(gm)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## List of 13 ## $ added_lines : num [1:287] 431874 406 577 697 7283 ... ## $ authors : int [1:287] 1 1 4 2 7 5 4 9 8 11 ... ## $ branches : int [1:287] 1 1 1 1 1 1 1 1 1 1 ... ## $ commits : int [1:287] 3 4 16 11 121 38 35 90 66 97 ... ## $ committers : int [1:287] 1 1 4 2 7 5 4 9 8 11 ... ## $ date : chr [1:287] "May 2010" "May 2010" "Jun 2010" "Jun 2010" ... ## $ files : int [1:287] 1878 9 13 7 144 111 28 1900 89 101 ... ## $ id : int [1:287] 0 1 2 3 4 5 6 7 8 9 ... ## $ newauthors : int [1:287] 1 1 2 0 4 1 0 4 2 3 ... ## $ removed_lines: num [1:287] 864 530 187 326 2619 ... ## $ repositories : int [1:287] 1 1 1 1 1 1 1 1 1 1 ... ## $ unixtime : chr [1:287] "1274659200" "1275264000" "1275868800" "1276473600" ... ## $ week : int [1:287] 201021 201022 201023 201024 201025 201026 201027 201028 201029 201030 ... \end{verbatim} Now we can use time series packages. First, after loading the libraries, we need to create a time series object. 
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# TS libraries}
\FunctionTok{library}\NormalTok{(xts)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## 
## Attaching package: 'xts'
\end{verbatim}

\begin{verbatim}
## The following objects are masked from 'package:dplyr':
## 
##     first, last
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{library}\NormalTok{(forecast)}
\CommentTok{\# Library to deal with dates}
\FunctionTok{library}\NormalTok{(lubridate)}
\CommentTok{\# Create a time series object}
\NormalTok{gmts }\OtherTok{\textless{}{-}} \FunctionTok{xts}\NormalTok{(gm}\SpecialCharTok{$}\NormalTok{commits,}\FunctionTok{seq}\NormalTok{(}\FunctionTok{ymd}\NormalTok{(}\StringTok{\textquotesingle{}2010{-}05{-}22\textquotesingle{}}\NormalTok{),}\FunctionTok{ymd}\NormalTok{(}\StringTok{\textquotesingle{}2015{-}11{-}16\textquotesingle{}}\NormalTok{), }\AttributeTok{by =} \StringTok{\textquotesingle{}1 week\textquotesingle{}}\NormalTok{))}
\CommentTok{\# TS Object}
\FunctionTok{str}\NormalTok{(gmts)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## An 'xts' object on 2010-05-22/2015-11-14 containing:
##   Data: int [1:287, 1] 3 4 16 11 121 38 35 90 66 97 ...
##   Indexed by objects of class: [Date] TZ: UTC
##   xts Attributes:
##  NULL
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{head}\NormalTok{(gmts, }\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
##            [,1]
## 2010-05-22    3
## 2010-05-29    4
## 2010-06-05   16
\end{verbatim}

Visualise the time series object:

\begin{Shaded}
\begin{Highlighting}[]
\FunctionTok{plot}\NormalTok{(gmts)}
\end{Highlighting}
\end{Shaded}

\includegraphics{DASE_files/figure-latex/unnamed-chunk-127-1.pdf}

Arima model:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fit }\OtherTok{\textless{}{-}} \FunctionTok{auto.arima}\NormalTok{(gmts)}
\NormalTok{fit}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Series: gmts 
## ARIMA(0,1,2) 
## 
## Coefficients:
##          ma1     ma2
##       -0.312  -0.307
## s.e.
0.058 0.064 ## ## sigma^2 estimated as 1341: log likelihood=-1435 ## AIC=2876 AICc=2876 BIC=2887 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{forecast}\NormalTok{(fit, }\DecValTok{5}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95 ## 2010 7.75 -39.2 54.7 -64.0 79.5 ## 2017 15.16 -41.8 72.1 -72.0 102.3 ## 2024 15.16 -44.6 74.9 -76.2 106.5 ## 2031 15.16 -47.2 77.5 -80.2 110.5 ## 2038 15.16 -49.7 80.0 -84.0 114.3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{plot}\NormalTok{(}\FunctionTok{forecast}\NormalTok{(fit, }\DecValTok{5}\NormalTok{))} \end{Highlighting} \end{Shaded} \includegraphics{DASE_files/figure-latex/unnamed-chunk-129-1.pdf} \hypertarget{web-tutorials-about-time-series}{% \section{Web tutorials about Time Series:}\label{web-tutorials-about-time-series}} \url{http://www.statoek.wiso.uni-goettingen.de/veranstaltungen/zeitreihen/sommer03/ts_r_intro.pdf} \url{http://www.statmethods.net/advstats/timeseries.html} \url{http://a-little-book-of-r-for-time-series.readthedocs.org/en/latest/} \url{https://media.readthedocs.org/pdf/a-little-book-of-r-for-time-series/latest/a-little-book-of-r-for-time-series.pdf} \url{http://www.stat.pitt.edu/stoffer/tsa3/} \hypertarget{part-bibliography}{% \part{Bibliography}\label{part-bibliography}} \bibliography{./references/dataSources.bib,./references/RandStatistics.bib,./references/RPackages.bib,./references/referencesFS.bib,./references/referencesIS.bib,./references/evaluation.bib,./references/machinelearning.bib,./references/effortEstimation.bib,./references/DefectPrediction.bib,./references/SEmetrics.bib} \end{document}
\section{conflicting types for function}\label{sec:conflicting-types} \begin{figure}[htb] \begin{lstlisting} int printHouse(void); void printHouse(void) { printf("/ \\\n"); printf("|_|\n"); } \end{lstlisting} \errmsg{conflicting types for 'printHouse'} \label{ex:conflict-types} \end{figure} This error means that the function declaration and implementation have different return types, but they are required to have the same type. In Example \ref{ex:conflict-types}, this can be fixed by changing the declared type of \func{printHouse} from \keyword{int} to \keyword{void}. \newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Structured General Purpose Assignment % LaTeX Template % % This template has been downloaded from: % http://www.latextemplates.com % % Original author: % Ted Pavlic (http://www.tedpavlic.com) % % Note: % The \lipsum[#] commands throughout this template generate dummy text % to fill the template out. These commands should all be removed when % writing assignment content. % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %---------------------------------------------------------------------------------------- % PACKAGES AND OTHER DOCUMENT CONFIGURATIONS %---------------------------------------------------------------------------------------- \documentclass{article} \usepackage{listings} \usepackage{mathtools} \usepackage{fancyhdr} % Required for custom headers \usepackage{lastpage} % Required to determine the last page for the footer \usepackage{extramarks} % Required for headers and footers \usepackage{graphicx} % Required to insert images \usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template \usepackage{caption} \usepackage{subcaption} \usepackage{amsmath} % Margins \topmargin=-0.45in \evensidemargin=0in \oddsidemargin=0in \textwidth=6.5in \textheight=9.0in \headsep=0.25in \linespread{1.1} % Line spacing % Set up the header and footer \pagestyle{fancy} \lhead{\hmwkAuthorName} % Top left header \chead{\hmwkClass} % Top center header --> \ (\hmwkClassInstructor\ \hmwkClassTime): \hmwkTitle \rhead{\firstxmark} % Top right header \lfoot{\lastxmark} % Bottom left footer \cfoot{} % Bottom center footer \rfoot{Page\ \thepage\ of\ \pageref{LastPage}} % Bottom right footer \renewcommand\headrulewidth{0.4pt} % Size of the header rule \renewcommand\footrulewidth{0.4pt} % Size of the footer rule \setlength\parindent{0pt} % Removes all indentation from paragraphs %---------------------------------------------------------------------------------------- % DOCUMENT STRUCTURE COMMANDS % Skip this unless you know 
what you're doing %---------------------------------------------------------------------------------------- % Header and footer for when a page split occurs within a problem environment \newcommand{\enterProblemHeader}[1]{ \nobreak\extramarks{#1}{#1 continued on next page\ldots}\nobreak \nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak } % Header and footer for when a page split occurs between problem environments \newcommand{\exitProblemHeader}[1]{ \nobreak\extramarks{#1 (continued)}{#1 continued on next page\ldots}\nobreak \nobreak\extramarks{#1}{}\nobreak } \setcounter{secnumdepth}{0} % Removes default section numbers \newcounter{homeworkProblemCounter} % Creates a counter to keep track of the number of problems \newcommand{\homeworkProblemName}{} \newenvironment{homeworkProblem}[1][Problem \arabic{homeworkProblemCounter}]{ % Makes a new environment called homeworkProblem which takes 1 argument (custom name) but the default is "Problem #" \stepcounter{homeworkProblemCounter} % Increase counter for number of problems \renewcommand{\homeworkProblemName}{#1} % Assign \homeworkProblemName the name of the problem \section{\homeworkProblemName} % Make a section in the document with the custom problem count \enterProblemHeader{\homeworkProblemName} % Header and footer within the environment }{ \exitProblemHeader{\homeworkProblemName} % Header and footer after the environment } \newcommand{\problemAnswer}[1]{ % Defines the problem answer command with the content as the only argument \noindent\framebox[\columnwidth][c]{\begin{minipage}{0.98\columnwidth}#1\end{minipage}} % Makes the box around the problem answer and puts the content inside } \newcommand{\homeworkSectionName}{} \newenvironment{homeworkSection}[1]{ % New environment for sections within homework problems, takes 1 argument - the name of the section \renewcommand{\homeworkSectionName}{#1} % Assign \homeworkSectionName to the name of the section from the environment argument 
\subsection{\homeworkSectionName} % Make a subsection with the custom name of the subsection \enterProblemHeader{\homeworkProblemName\ [\homeworkSectionName]} % Header and footer within the environment }{ \enterProblemHeader{\homeworkProblemName} % Header and footer after the environment } %---------------------------------------------------------------------------------------- % NAME AND CLASS SECTION %---------------------------------------------------------------------------------------- \newcommand{\hmwkTitle}{Project\ \#2} % Assignment title \newcommand{\hmwkDueDate}{Wednesday,\ March\ 4,\ 2015} % Due date \newcommand{\hmwkClass}{CSCI\ 8810} % Course/class \newcommand{\hmwkClassTime}{08:00am} % Class/lecture time \newcommand{\hmwkClassInstructor}{Prof. Arabnia} % Teacher/lecturer \newcommand{\hmwkAuthorName}{Sina Solaimanpour} % Your name %---------------------------------------------------------------------------------------- % TITLE PAGE %---------------------------------------------------------------------------------------- \title{ \vspace{2in} \textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\ \normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate}\\ \vspace{0.1in}\large{\textit{\hmwkClassInstructor\ \hmwkClassTime}} \vspace{3in} } \author{\textbf{\hmwkAuthorName}} \date{} % Insert date here if you want it to appear below your name %---------------------------------------------------------------------------------------- \begin{document} \maketitle %---------------------------------------------------------------------------------------- % TABLE OF CONTENTS %---------------------------------------------------------------------------------------- %\setcounter{tocdepth}{1} % Uncomment this line if you don't want subsections listed in the ToC \newpage \tableofcontents \newpage %---------------------------------------------------------------------------------------- % Introduction %---------------------------------------------------------------------------------------- 
% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Introduction]

In this paper, I will discuss the second phase of the ColonD image processing project. The first phase of the project was about designing the overall structure of the project and implementing some basic operations such as read/write, gray-scale conversion, thresholding and smoothing images by applying filters. In this phase of the project, I will expand the program to include some other filtering operations and another slightly more complicated feature called Connected Component Labelling, or CCL. The following list shows the new features added to the program in phase 2:

\begin{itemize}
\itemsep1pt \parskip0pt \parsep0pt
\item Connected Component Labelling (CCL)
\item Invert pixel colors
\item Robert's operator
\item Sobel's operator
\item Prewitt's operator
\item Kirsch's operator
\item Laplacian filter
\item Gaussian filter
\end{itemize}

These features have all been implemented and tested with different images, individually and in combination with other operations. In the next sections, I will go over each of these features in more detail and present some results from the program.

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Component Labelling
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Connected Component Labelling]

Connected Component Labelling is a well-known algorithm used to find individual components in an image. The idea behind this version of the algorithm is to check the surrounding pixels (in this case, the west and the north pixels) of a white pixel and assign an appropriate label to the pixel at hand.
The labelling rules are as follows:

\begin{enumerate}
	\itemsep1pt \parskip0pt \parsep0pt
	\item If the north and west pixels are both black: assign a new label to the pixel.
	\item If the west pixel is white and the north pixel is black: assign the label of the west pixel to the pixel at hand.
	\item If the north pixel is white and the west pixel is black: assign the label of the north pixel to the pixel at hand.
	\item If both the west and north pixels are white: assign the smaller of the two labels to the pixel at hand, and record that the two labels are equivalent.
\end{enumerate}

The procedure described above is the first pass of the CCL algorithm. After labelling each pixel during the first pass, we need to go over all of the pixels again and resolve the equivalent labels found in the first pass. As the number of labels created during the first pass can grow very large, I am using a \textbf{disjoint-set data structure} as implemented in the \textbf{Boost} framework. The overall idea of the disjoint-set data structure is to keep equivalent elements in a set represented by its smallest member. When a new label is created, it starts out as its own singleton set. Later in the program, when an equivalence is found between two labels, the sets representing those two labels are merged with a union operation. This allows us to know exactly which labels are equivalent to each other. During the second pass, we find the set that each label belongs to and use that set's representative label as the final label.

Just for better visualization of the components, I use a very simple coloring technique. It might fail from time to time, depending on the number of components found and their order, but it tries to assign distinct colors to the different components found in the image.

Figure \ref{fig:CCL1} shows the process of labelling the Lena image. First, the image is converted to gray-scale.
Then, I apply some Gaussian smoothing to the image and use the simple thresholding feature to convert the image to a binary image with some disjoint components. Finally, I use the CCL feature of the program to label the components in the image resulting from the previous steps.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{CCL1-1}
\includegraphics[width=0.3\textwidth]{CCL1-2}
\includegraphics[width=0.3\textwidth]{CCL1-3}
\includegraphics[width=0.3\textwidth]{CCL1-4}
\includegraphics[width=0.3\textwidth]{CCL1-5}
\caption{The resulting images from using the CCL algorithm on the Lena image after applying a gray-scale conversion, two Gaussian filters and a simple thresholding. The different colors in the last image show the different components found. Here, we have \textbf{29} distinct components with an average component area of \textbf{300.276} pixels.}
\label{fig:CCL1}
\end{figure}

In the next example, shown in Figure \ref{fig:CCL2}, I have tried to show that CCL can be used to find important parts of an image of a human face. The image used in this example was obtained from a random result of a Google search for ``Face Image'' and was in gray-scale originally. Two Gaussian filters have been applied to the image, after which the image has been converted to binary using a thresholding technique. CCL has then been applied after inverting the image pixels. The result is very interesting: you can clearly see that the important parts of the face (her eyes, lips, nose and a partial boundary around the face) have been highlighted as different components.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{CCL2-1}
\includegraphics[width=0.3\textwidth]{CCL2-2}
\includegraphics[width=0.3\textwidth]{CCL2-3}
\includegraphics[width=0.3\textwidth]{CCL2-4}
\includegraphics[width=0.3\textwidth]{CCL2-5}
\caption{Applying CCL to an image of a face. The different colors in the last image show the different components found.
Here, we have \textbf{10} distinct components with an average component area of \textbf{926} pixels.}
\label{fig:CCL2}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Invert
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Invert]

Inverting an image has been implemented in the program as an extra feature. It subtracts each pixel value from 255 and uses the resulting value as the new pixel intensity. This function is illustrated in Figure \ref{fig:invert}.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{invert1}
\includegraphics[width=0.3\textwidth]{invert2}
\caption{Inverting the pixel values of the Lena image.}
\label{fig:invert}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Roberts operator
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Roberts Operator]

The Roberts operator acts as an edge detector. It consists of separate applications of two different $2 \times 2$ filters. The idea behind this operator is to approximate the gradient of an image. The matrices used as filters are shown below.

\[
F1 =\begin{bmatrix}
+1 & 0 \\
0 & -1
\end{bmatrix}
,
F2 =\begin{bmatrix}
0 & +1 \\
-1 & 0
\end{bmatrix}
\]

These two filters are applied to the image individually and, at the end, the final result is the sum of the two filtered images. I will present two examples of this operator used on different images. The first application of the operator was on an image of a bike in front of a brick wall.
The operator is applied without any other modifications to the image and the result is shown in Figure \ref{fig:rob1}.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{rob1-1}
\includegraphics[width=0.3\textwidth]{rob1-2}
\caption{Applying the Roberts operator to an image without any other modifications.}
\label{fig:rob1}
\end{figure}

Next, I show the result of applying the Roberts operator to the same image, but with a Gaussian filter applied beforehand. The results are shown in Figure \ref{fig:rob2}.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{rob2-1}
\includegraphics[width=0.3\textwidth]{rob2-2}
\includegraphics[width=0.3\textwidth]{rob2-3}
\caption{Applying the Roberts operator to an image with a Gaussian filter applied to it.}
\label{fig:rob2}
\end{figure}

As you can see, the difference between the results of the Roberts operator on an image with or without the Gaussian filter is not big. However, on closer inspection, the difference lies in the amount of noise in the two results. The image with the Gaussian filter applied has a much smoother surface, which allows us to focus on the important information without being confused by the noise.

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Sobel operator
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Sobel Operator]

The Sobel operator is another edge detector. It consists of separate applications of two different $3 \times 3$ filters. The idea behind this operator is to use two filters, one focusing on horizontal edges and the other on vertical edges, to find all of the edges in the image.
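Mechanically, applying such a filter amounts to sliding it over the image (cross-correlation, as is conventional in image processing) and combining the two responses. The following sketch is my own illustration, not the program's code; it combines the two Sobel responses by summing their absolute values and clamping to the displayable range [0, 255].

```python
def convolve(img, kernel):
    """Apply a small square kernel to a grayscale image (list of rows).
    Border pixels, where the kernel does not fit, are left as 0."""
    kh = len(kernel)
    off = kh // 2
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(off, h - off):
        for x in range(off, w - off):
            s = 0
            for i in range(kh):
                for j in range(kh):
                    s += kernel[i][j] * img[y + i - off][x + j - off]
            out[y][x] = s
    return out

# The two Sobel kernels from the text.
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def sobel(img):
    """Apply both Sobel kernels and combine by summing the absolute
    responses, clamped to [0, 255] (one common combination choice)."""
    gy, gx = convolve(img, GY), convolve(img, GX)
    h, w = len(img), len(img[0])
    return [[min(255, abs(gx[y][x]) + abs(gy[y][x])) for x in range(w)]
            for y in range(h)]
```

On a sharp vertical edge the horizontal-gradient kernel $GX$ dominates the response, while a perfectly flat region produces zero in both kernels, which is what makes the combined output an edge map.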
The filters used in this operator are as follows:

\[
GY =\begin{bmatrix}
-1 & -2 & -1 \\
0 & 0 & 0 \\
1 & 2 & 1
\end{bmatrix}
,
GX =\begin{bmatrix}
-1 & 0 & 1 \\
-2 & 0 & 2 \\
-1 & 0 & 1
\end{bmatrix}
\]

These two filters are applied to the image individually and, at the end, the final result is the sum of the two filtered images. I will present two examples of this operator used on different images. The first application of the operator is on the same image used in the Roberts operator section (the bike image). The operator is applied without any other modifications to the image and the result is shown in Figure \ref{fig:sob1}. As you can see, the edges are much brighter and stronger compared to the edges found using the Roberts operator.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{sob1-1}
\includegraphics[width=0.3\textwidth]{sob1-2}
\caption{Applying the Sobel operator to an image without any other modifications.}
\label{fig:sob1}
\end{figure}

The second example of the Sobel operator is on the face image used in the previous sections. This illustrates how an edge detection algorithm performs on an image containing a face. The results can be seen in Figure \ref{fig:sob2}. The face image is first smoothed with an application of a Gaussian filter. The result clearly shows that the important areas of the image are again highlighted; indeed, the main purpose of edge detection algorithms is to find and highlight the important parts of an image, namely the boundaries between its differently colored regions.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{sob2-1}
\includegraphics[width=0.3\textwidth]{sob2-2}
\includegraphics[width=0.3\textwidth]{sob2-3}
\caption{Applying the Sobel operator to a face image with a Gaussian filter applied to it.}
\label{fig:sob2}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Prewitt operator
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Prewitt Operator]

The Prewitt operator is yet another edge detector, like the previous two operators. It consists of separate applications of two different $3 \times 3$ filters. The idea behind this operator is very similar to that of the Sobel operator: it uses two filters, one focusing on horizontal edges and the other on vertical edges, to find all of the edges in the image. The filters used in this operator are as follows:

\[
GY =\begin{bmatrix}
-1 & -1 & -1 \\
0 & 0 & 0 \\
1 & 1 & 1
\end{bmatrix}
,
GX =\begin{bmatrix}
-1 & 0 & 1 \\
-1 & 0 & 1 \\
-1 & 0 & 1
\end{bmatrix}
\]

These two filters are applied to the image individually and, at the end, the final result is the sum of the two filtered images. Figure \ref{fig:prew1} depicts how this operator performs on the image of the bike used in the previous sections. The result is very similar to that of the Sobel operator, but it can be argued that the result of the Prewitt operator is less biased towards the pixels in the middle of the filter.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{prew1}
\includegraphics[width=0.3\textwidth]{prew2}
\caption{Applying the Prewitt operator to an image without any other modifications.}
\label{fig:prew1}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Kirsch operator
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Kirsch Operator]

The Kirsch operator is another edge detector, but it works a little differently from the others. It consists of separate applications of four different $3 \times 3$ filters. Two of the filters are biased towards horizontal and vertical edges, and the other two are simply 45-degree rotations of these filters. The filters are as follows:

\[
F1 =\begin{bmatrix}
1 & 1 & 1 \\
0 & 0 & 0 \\
-1 & -1 & -1
\end{bmatrix}
,
F2 =\begin{bmatrix}
-1 & 0 & 1 \\
-1 & 0 & 1 \\
-1 & 0 & 1
\end{bmatrix}
,
F3 =\begin{bmatrix}
0 & 1 & 1 \\
-1 & 0 & 1 \\
-1 & -1 & 0
\end{bmatrix}
,
F4 =\begin{bmatrix}
1 & 1 & 0 \\
1 & 0 & -1 \\
0 & -1 & -1
\end{bmatrix}
\]

These filters are applied to the image individually and, at the end, to combine the results, I take for each pixel the maximum value among the four filtered images and use it as the final pixel value. Besides combining the resulting images from the four filters, I give the user the option to save both the separate and the combined images to the file system. Figure \ref{fig:kirs1} depicts how this operator performs on the image of the bike used in the previous sections, showing all four separate images and the combined result at the end. You can clearly see, especially in the first two filtered images, that the filters favor horizontal and vertical edges, respectively.
The other two filtered images show the results for the rotated filters, and the last image shows the combined image using the max operator.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{kirs1-1}
\includegraphics[width=0.3\textwidth]{kirs1-2}
\includegraphics[width=0.3\textwidth]{kirs1-3}
\includegraphics[width=0.3\textwidth]{kirs1-4}
\includegraphics[width=0.3\textwidth]{kirs1-5}
\includegraphics[width=0.3\textwidth]{kirs1-6}
\caption{Applying the Kirsch operator to an image without any other modifications.}
\label{fig:kirs1}
\end{figure}

Figure \ref{fig:kirs2} shows the use of the Kirsch operator on the Lena image. Again, all four separate images and the combined image are shown in this figure. Paying attention to the third and fourth images, one can clearly see that these are the results of the rotated filters, because the edges with 45-degree orientations are highlighted much more strongly than other edges.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{kirs2-1}
\includegraphics[width=0.3\textwidth]{kirs2-2}
\includegraphics[width=0.3\textwidth]{kirs2-3}
\includegraphics[width=0.3\textwidth]{kirs2-4}
\includegraphics[width=0.3\textwidth]{kirs2-5}
\includegraphics[width=0.3\textwidth]{kirs2-6}
\caption{Applying the Kirsch operator to the Lena image without any other modifications.}
\label{fig:kirs2}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Laplacian operator
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Laplacian Operator]

In this section, I have implemented the Laplacian operator on discrete images.
This operator creates a new matrix out of an image. The matrix can be displayed as an image, but the display may be misleading, because we map the resulting values into the visible range with a maximum pixel value of 255. The Laplacian operation can be performed using different filters, but the one used in the ColonD Image Processing program is as follows:

\[
F1 =\begin{bmatrix}
0 & 1 & 0 \\
1 & -4 & 1 \\
0 & 1 & 0
\end{bmatrix}
\]

In the rest of this section, I provide some examples of the Laplacian operator applied to different images. Figures \ref{fig:lap1}, \ref{fig:lap2} and \ref{fig:lap3} depict these examples.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{lap1-1}
\includegraphics[width=0.3\textwidth]{lap1-2}
\includegraphics[width=0.3\textwidth]{lap1-3}
\caption{Applying a Gaussian filter and the Laplacian filter to the bike image.}
\label{fig:lap1}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{lap2-1}
\includegraphics[width=0.3\textwidth]{lap2-2}
\includegraphics[width=0.3\textwidth]{lap2-3}
\caption{Applying gray-scale conversion and the Laplacian filter to the Lena image.}
\label{fig:lap2}
\end{figure}

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{lap3-1}
\includegraphics[width=0.3\textwidth]{lap3-2}
\includegraphics[width=0.3\textwidth]{lap3-3}
\includegraphics[width=0.3\textwidth]{lap3-4}
\caption{Applying gray-scale conversion, a Gaussian filter and the Laplacian filter to the Lena image. Applying just one step of the Gaussian filter has a drastic effect on the result of the Laplacian operator.
The highlighted areas become faded and weaker.}
\label{fig:lap3}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Gaussian filter
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Gaussian Filter]

This is another smoothing (low-pass) filter, implemented in this program as an extra feature. Examples of this filter applied once, twice and three times to the same image are depicted in Figure \ref{fig:guas}.

\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{guas1}
\includegraphics[width=0.3\textwidth]{guas2}
\includegraphics[width=0.3\textwidth]{guas3}
\includegraphics[width=0.3\textwidth]{guas4}
\caption{Applying the Gaussian filter once, twice and three times to the Lena image.}
\label{fig:guas}
\end{figure}

\end{homeworkProblem}

%----------------------------------------------------------------------------------------
%	Conclusion
%----------------------------------------------------------------------------------------

% To have just one problem per page, simply put a \clearpage after each problem

\begin{homeworkProblem}[\Roman{homeworkProblemCounter}. Conclusion]

In this phase of the project, I implemented a number of high-pass filters, which flatten out regions with no significant change in pixel values while emphasizing regions with rapid changes in pixel values.

\end{homeworkProblem}

\end{document}
\section{Background}\label{sec:background}

We briefly explain the concept of edit distance and its relation to the longest common subsequence. We then present the sequential algorithm by Myers, which was the basis for our parallel implementation.

% Give a short, self-contained summary of necessary
% background information. For example, assume you present an
% implementation of sorting algorithms. You could organize into sorting
% definition, algorithms considered, and asymptotic runtime statements. The goal of the
% background section is to make the paper self-contained for an audience
% as large as possible. As in every section
% you start with a very brief overview of the section. Here it could be as follows:

% Explain the algorithm you use including their costs.

% As an aside, don't talk about "the complexity of the algorithm.'' It's incorrect, problems have a complexity, not algorithms.

% \mypar{Longest common subsequence}
% The longest common subsequence problem is the problem of finding the longest sequence that two input sequences have in common. A related problem is finding the longest common substring. However, in that case the solution sequence needs to appear consecutively within the input string.

\mypar{Edit distance}
The edit distance refers to the minimum number of changes required to transform one string into another. In the case where we only consider insertions and deletions, and exclude substitutions, the longest common subsequence (LCS) problem is equivalent to finding the edit distance: all symbols that do not appear in the LCS count towards the edit distance. To transform the first string into the second, we perform deletions for all symbols of the first sequence that are not part of the LCS, and insertions for all symbols of the second sequence that are not. These changes are referred to as an edit script.
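The correspondence can be made concrete: with insertions and deletions only, the minimum edit distance $d$ for sequences of lengths $n$ and $m$ satisfies $d = n + m - 2\,|LCS|$, since every symbol outside the LCS costs exactly one edit. The following straightforward quadratic-time sketch is for illustration only (the more efficient algorithm we actually use is the one discussed below):

```python
def lcs_length(a, b):
    """Classic O(n*m) dynamic program for the LCS length."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1          # extend a common subsequence
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[n][m]

def edit_distance(a, b):
    """Insert/delete-only edit distance via the LCS identity
    d = n + m - 2*|LCS|."""
    return len(a) + len(b) - 2 * lcs_length(a, b)
```

For Myers' running example, the sequences "abcabba" and "cbabac" have an LCS of length 4 and therefore an edit distance of 5.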
% and the total number of changes is the edit distance.

%\mypar{Diff program}
%In the context of computer science, it is a common problem to compare source files to different versions. In order to visualize changes better, a diff program can display the minimal number of changes by highlighting all symbols that are part of the edit script.

\mypar{Myers' algorithm}
Myers \cite{myers_anond_1986} takes a greedy approach to finding the solution in the edit graph of the two sequences. It has an asymptotic runtime of $O(nd)$, where $n$ is the total length of the two sequences and $d$ is the length of the minimum edit script. This algorithm performs well when the sequences are similar and $d$ is small.

To construct the edit graph, shown in figure \ref{edit_graph}, the two sequences A and B are placed along a grid. The shortest edit script corresponds to a shortest path from $(0,0)$ to the bottom right corner. A horizontal or vertical segment represents an edit. A diagonal can only be taken if the corresponding symbols of the two input sequences match.

\begin{figure}[hbt]\centering
  \includegraphics[width=0.9\linewidth]{images/edit-graph.pdf}
  \caption{Edit graph for comparison of two sequences \cite{edit_graph}. The solution for the shortest edit script is shown in red.
}
  \label{edit_graph}
\end{figure}

% Myers' algorithm pseudocode
\begin{algorithm}
\caption{Myers' LCS algorithm \cite{myers_anond_1986}}
\label{myers_algorithm}
\SetAlgoNoLine
\SetAlgoNoEnd
\DontPrintSemicolon
\scriptsize % decrease font size
\KwSty{Constant} $MAX \in [0, M+N]$\;
$V:\ \KwSty{Integer}[-MAX\ ..\ MAX]$\;
\;
$V[1]\leftarrow 0$\;
\For{$d\leftarrow 0$ \KwTo $MAX$}{
	\For{$k\leftarrow -d$ \KwTo $d$ in steps of 2}{
		\eIf{$k=-d$ or $k\ne d$ and $V[k-1]<V[k+1]$}{
			$x\leftarrow V[k+1]$\;
		}{
			$x\leftarrow V[k-1]+1$\;
		}
		$y\leftarrow x-k$\;
		\While{$x < N$ and $y < M$ and $a_{x+1} = b_{y+1}$}{$(x,y)\leftarrow (x+1,y+1)$\;}
		$V[k]\leftarrow x$\;
		\If{$x\ge N$ and $y \ge M$}{
			Length of shortest edit script is $d$\;
			\KwSty{STOP}\;
		}
	}
}
\end{algorithm}

Myers' algorithm, shown in algorithm \ref{myers_algorithm}, goes through the diagonals shown in figure \ref{edit_graph} with increasing edit distance and stores the coordinates of the furthest reachable point on each diagonal. The variables $N$ and $M$ represent the lengths of the sequences A and B.\\
To compute an entry on diagonal $k$ at distance $d$, the previous paths can connect to it either by a horizontal or a vertical move from its neighbors on the diagonals $k \pm 1$. Out of those two entries from the table for distance $d-1$, the algorithm takes the one with the higher x-value and then follows the diagonal as long as the corresponding symbols of the two sequences match. The new coordinates are then stored in the DP table. The solution has been found once an entry reaches the coordinates $(N,M)$; the distance $d$ of that entry is the minimum edit distance. The actual edit script, however, has to be computed recursively.

\begin{figure}[hbt]\centering
  \includegraphics[width=0.95\linewidth]{images/dp_table.pdf}
  \caption{The DP table represents the furthest points on each diagonal that can be reached for a given edit distance. The arrows denote the dependencies of a single entry.
The solution is marked in red and has an edit distance of 5.}
  \label{dp_table}
\end{figure}

Every entry depends on at most two entries from the previous row of the DP table (marked as arrows in figure \ref{dp_table}). If we only want the LCS length, it is not necessary to store the entire table, but only the previous row for distance $d-1$. Thus the memory consumption is linear and no longer the limiting factor for very large input sequences.
Myers presents an approach (``linear refinement'') to compute the solution recursively \cite{myers_anond_1986}. However, we will not go into further detail, because it lies outside the scope of this report.

% Additionally, it is not necessary to store both coordinates. If we only store the x-coordinate for an entry, we can compute the intersection with the diagonal $k$ as $y = x - k$.
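For reference, the greedy forward pass of the pseudocode above translates almost line-for-line into the following sketch, which returns only the edit distance (using 0-based indexing, and keeping the array $V$ as a dict indexed by the diagonal $k$):

```python
def myers_distance(a, b):
    """Length of the shortest edit script (insertions/deletions only),
    following the greedy forward pass of Myers' algorithm."""
    n, m = len(a), len(b)
    max_d = n + m
    v = {1: 0}                       # furthest x per diagonal k; V[1] = 0
    for d in range(max_d + 1):
        for k in range(-d, d + 1, 2):
            # pick the predecessor on diagonal k-1 or k+1 (the one further along)
            if k == -d or (k != d and v[k - 1] < v[k + 1]):
                x = v[k + 1]         # vertical move (down) from diagonal k+1
            else:
                x = v[k - 1] + 1     # horizontal move (right) from diagonal k-1
            y = x - k
            # follow the diagonal while the corresponding symbols match
            while x < n and y < m and a[x] == b[y]:
                x, y = x + 1, y + 1
            v[k] = x
            if x >= n and y >= m:    # reached (N, M): d is the edit distance
                return d
    return max_d
```

Only the entries for distance $d-1$ are ever read when computing distance $d$, which is exactly the linear-memory property discussed above.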
\documentclass[12pt,a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{enumitem}
\usepackage{listings}
\usepackage[a4paper, portrait, left=0.7in]{geometry}
\usepackage{titling}
\setlength{\droptitle}{-10em}
\pagestyle{myheadings}
\author{Alex Dalgleish amd96 Robinson}

\begin{document}
\title{Part II Project Proposal}
\maketitle

\section*{Introduction and Description of the Work}
A common practice is to have a text document that requires editing by multiple people. For example, to schedule a meeting a manager may give their subordinates a list of possible times and require them to indicate which of those they would be able to attend. However, the common applications for this purpose involve editing a document through a central server such that the server has a record of all edits made to the document. To the very privacy-conscious, this may not be desirable. From the Google Drive terms of service:\footnote{Google Terms of Service - Privacy \& Terms: https://www.google.com/intl/en/policies/terms/}\begin{quote}
When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.\\
\end{quote}
In distributed systems, only two of consistency, availability and partition-tolerance can be guaranteed simultaneously.\footnote{Seth Gilbert and Nancy Lynch, “Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services”, ACM SIGACT News, Volume 33 Issue 2 (2002), pg. 51-59.} Often, availability and partition-tolerance are prioritised, sacrificing strong consistency, as service providers may prefer a service to be fault-tolerant and always accessible.
Therefore, given that strong consistency is provably impossible, other weaker forms of consistency need to be maximised as much as possible.\\\\
Operation-based Conflict-free Replicated Data Types (CRDTs) are data structures designed for systems needing strong eventual consistency. That is, two nodes that receive the same set of updates are guaranteed to be in the same state as soon as the final one arrives at each.\footnote{Proposition 2.2, Marc Shapiro, Nuno Preguiça, Carlos Baquero, Marek Zawirski. A comprehensive study of Convergent and Commutative Replicated Data Types. [Research Report] RR-7506, Inria – Centre Paris-Rocquencourt; INRIA. 2011, pp.50. \textless inria-00555588\textgreater} The idea is that the data structure holds the update to the state (and not the state itself), and this update is broadcast to all listening nodes (e.g.\ ``add 10 to x''). A merge operation is then performed on the node's current state and the incoming CRDT state update. These operations must necessarily be commutative, so that two nodes receiving two state updates in different orders can reach the same state. In addition, there must be some middleware to guarantee exactly-once message delivery. As these state updates are not idempotent by nature, message duplication will result in erroneous state. For example, if one node broadcasts ``add 10 to x'' and another receives it twice (because the network duplicated the message), then the receiver will have added 20 to x, and the two nodes won't converge to a consistent state.\\\\
I plan to develop a library for a collaborative text editor application that uses operation-based CRDTs to achieve strong eventual consistency and does not send data to a central server, so that only members invited to edit a document can know its contents.

\section*{Resources required}
I will be using my own machine: 4x2GHz CPU, 8GB RAM, 300GB disk, Ubuntu 16.04 LTS.
I will keep everything in a git repository on GitHub.
Weekly backups will also be done to an external hard drive.
I will also need support for Tor on my machine.

\section*{Starting Point}
\begin{itemize}
\item \textbf{Computer Science Tripos} \\
My project will draw on knowledge gained from parts IA and IB of the Tripos. In particular, Concurrent and Distributed Systems provides theory background for CRDTs, and Networking for TCP/IP and peer-to-peer architectures. This is supplemented by Topics in Concurrency and Principles of Communications.
\item \textbf{Code by Martin Kleppmann} \\
A server for connecting clients over websockets and sending data between them (https://github.com/trvedata)\\
\end{itemize}

\section*{Substance and Structure of the Project}
I plan to incrementally build a library for a peer-to-peer (P2P), collaborative, text editor. It will allow the concurrent editing of a single string of text by 2 clients over a network.\\\\
Initially, I will evaluate different possible platforms for my application to be built on, e.g. web, desktop and Android. Then, I will make sure I am familiar with a suitable language/framework for that platform.\\\\
I will first write a library for editing a single string of text (an ordered list of characters) using operation-based CRDTs (Conflict-free Replicated Data Types). This will involve devising a data structure to represent the string, defining the allowed operations on strings, and specifying how to perform those operations on a string.\\\\
Next, I will use this to write code that can send and receive CRDT updates over a network. Martin Kleppmann has written a server application that can connect clients like these and enable them to communicate, so I can run my implementation with this.\\\\
Then, I will create a service that simply allows discovery of other clients editing the same document as you, and make clients communicate peer-to-peer.
This will involve knowing through some other channel a common identifier for a document, and a service which can link this identifier to others who know it. \\\\ Extensions: \begin{enumerate}[label=\alph*)] \item Make a simple application that uses this library - this will have a very minimal UI, but will make it easier to use and test the library I have built \item Support \textgreater 2 users \& allow users to come and go at any time \item Use the Tor network for all communication - this will involve sending data through a client-side proxy server, and using/creating Tor hidden services. \item Make a more substantial graphical user interface.\\ \end{enumerate} In order to evaluate my project: \begin{itemize} \item I plan to measure different communication metrics such as latency and bandwidth usage for my application, and potentially how these scale with the number of peers. \item I will compare these across the different versions of my application, as well as against the Google Realtime API, which provides concurrent editing functionality for collaborative applications. \item I will write unit tests for the CRDT library to test its correctness in specific cases. \item If the Tor extension is implemented, I can also inspect packets coming into one client and observe whether there is any data identifying the host from which they came. \end{itemize} % % % % % % \section*{Success Criteria} By the end of the project I aim: \begin{itemize} \item To have a tested library for operation-based CRDTs which represent an ordered list of characters \item To have 2 versions of a library which a text editor application might use to collaboratively edit a document with other similar applications over a network. One which will use a central server to pass data, and another where data is sent directly between clients.
\item To have a library which, when used by an application, provides collaborative editing functionality with sufficiently small latency between clients to be considered ``realtime'' \end{itemize} % % % % % % % % % \pagebreak \section*{Plan of work} \begin{description} \item[17/10 - 31/10] \hfill \begin{itemize} \item Evaluate the available platforms/languages/frameworks (at the moment these are Android, web/JS and Python) and choose one to work in \item Research CRDTs, peer-to-peer architectures and existing applications, and Tor \item Set up development environment (includes testing frameworks and build tools) \item Start writing CRDT library \end{itemize} \item[31/10 - 14/11] \hfill \begin{itemize} \item Finish CRDT library \item Start writing client-server version \end{itemize} \textbf{GOAL : A working and tested CRDT library} \item[14/11 - 28/11] \hfill \begin{itemize} \item Finish client-server version \item Start Peer-to-peer version \end{itemize} \item[28/11 - 5/12 + vacation] \hfill \begin{itemize} \item Finish Peer-to-peer version \item Start on extensions \end{itemize} \textbf{GOAL : 2 versions of library} \item[23/1 - 3/2] \hfill \begin{itemize} \item Write progress report \item Continue with extensions \end{itemize} \textbf{GOAL : Submitted progress report} \item[3/2 - 20/2] \hfill \begin{itemize} \item Finish writing current extension \item Start collecting evaluation data \end{itemize} \textbf{GOAL : Implementation code completed} \item[20/2 - 6/3] \hfill \begin{itemize} \item Collect evaluation data and interpret/display results \end{itemize} \textbf{GOAL : A set of data displaying the project's success} \item[6/3 - 15/3 + vacation] \hfill \begin{itemize} \item Write first draft of dissertation \end{itemize} \item[24/4 - 8/5] \hfill \begin{itemize} \item Second draft of dissertation \end{itemize} \item[8/5 - 19/5] \hfill \begin{itemize} \item Final draft of dissertation \end{itemize} \end{description} \end{document}
\documentclass{article} \usepackage{mathrsfs, amsmath} % need for subequations \usepackage{verbatim} % useful for program listings \usepackage{color} % use if color is used in text \usepackage{hyperref} % use for hypertext links, including those to external documents and URLs \allowdisplaybreaks \begin{document} \section*{Feed Forward Neural Networks} In this part of the tutorial I will go through the components of a fully connected feed forward neural network and show how gradient descent and backpropagation can be used to optimize the parameters of the neural network for a certain task. Our training data will be a collection of $N$ objects represented by input and output vectors $(\textbf{x}_1, \textbf{y}_1),...,(\textbf{x}_N, \textbf{y}_N)$. Each vector $\textbf{x}_n$ has length $M$, where each entry of the vector represents a different feature (or measurement) of object $n$. The network itself has trainable parameters $\textbf{W}^l$ and $\textbf{b}^l$. The weights of the network are stored in the series of matrices $\textbf{W}^l$, where the $W_{ij}^l$ entry of the matrix represents the weight of the connection from neuron $j$ in layer $l-1$ to neuron $i$ in layer $l$. The series of vectors $\textbf{b}^l$ represents the offsets of the neurons in layer $l$, where the entry $b_{i}^l$ is the offset of neuron $i$ in layer $l$. The activations of and inputs to the neurons in the network can be represented with the following equations \begin{equation} \begin{split} \textbf{a}^0 &= \textbf{x} \\ \textbf{z}^l &= \textbf{W}^l \textbf{a}^{l-1} + \textbf{b}^l \\ \textbf{a}^l &= \sigma_l (\textbf{z}^l) \\ \hat{\textbf{y}} &= \textbf{a}^{L}, \end{split} \end{equation} where the subscript $n$, which denotes the training example, is suppressed so as to make the equations more transparent. Here $\textbf{z}^l$ are the inputs to the neurons in layer $l$.
By applying a nonlinear activation function $\sigma_l$ to the inputs $\textbf{z}^l$ the activations $\textbf{a}^l$ of the neurons in layer $l$ are obtained. In practice this activation function is usually a tanh, logistic, or relu function. This function can also be different for each layer of the network. For simplicity of notation the activations in the layer $l=0$ are the input vector $\textbf{x}$, and the activations of the last layer $l=L$ are the predicted output $\hat{\textbf{y}}$. In component form these equations are \begin{equation} \label{eq:nn_components} \begin{split} a_j^0 &= x_j \\ z_i^l &= \sum_{j} W_{ij}^l a_j^{l-1} + b_i^l \\ a_i^l &= \sigma_l (z_i^l) \\ \hat{y}_i &= a_i^{L} \end{split} \end{equation} \section*{Cost Functions} The difference between the predicted output and the known output, for the $n^{th}$ training example, for a given set of parameters $\textbf{W}^l$ and $\textbf{b}^l$, can be quantified by defining the loss function $E_n(\textbf{x}_n, \textbf{y}_n ; \textbf{W}^l, \textbf{b}^l)$. For regression problems the cost function is often chosen to be the squared error loss \begin{equation} E_n = \frac{1}{2} \left |\hat{\textbf{y}}_n - \textbf{y}_n \right|^2. \end{equation} For classification problems with multiple classes the cross entropy loss is used. This requires the use of the softmax activation function in the last layer to ensure the outputs of the network are probabilities between $0$ and $1$. The softmax activation function is \begin{equation} \hat{\textbf{y}} = \sigma_L(\textbf{z}^{L}) = \frac{e^{\textbf{z}^{L}}}{\text{sum} \left( e^{\textbf{z}^{L}} \right)}, \end{equation} where $\text{sum} \left( e^{\textbf{z}^{L}} \right)$ is the sum of the elements of the vector $e^{\textbf{z}^{L}}$.
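As a concreteness check, the forward pass above, including a softmax output layer, can be sketched in a few lines of NumPy. This is a hedged illustration: the layer sizes, the random parameters, and the choice of tanh for hidden layers are arbitrary, not prescribed by the text.

```python
import numpy as np

def forward(x, Ws, bs):
    """Forward pass: a^0 = x, z^l = W^l a^{l-1} + b^l, a^l = sigma_l(z^l),
    with tanh hidden layers and a softmax output layer."""
    a = x
    for l, (W, b) in enumerate(zip(Ws, bs)):
        z = W @ a + b
        if l < len(Ws) - 1:
            a = np.tanh(z)              # hidden-layer activation
        else:
            e = np.exp(z - z.max())     # softmax, shifted for stability
            a = e / e.sum()
    return a                            # y_hat: a probability vector

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]  # M=3 inputs, 2 classes
bs = [np.zeros(4), np.zeros(2)]
y_hat = forward(rng.normal(size=3), Ws, bs)
assert np.isclose(y_hat.sum(), 1.0) and (y_hat > 0).all()
```

The shift by `z.max()` before exponentiating leaves the softmax value unchanged but avoids overflow for large inputs.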
The cross entropy loss of the $n^{th}$ training example is then given by the dot product between the true label vector $\textbf{y}_n$ and the log of the predicted labels from the network $\log{\hat{\textbf{y}}_n}$ \begin{equation} E_n = - \textbf{y}_n \cdot \log{\hat{\textbf{y}}_n}. \end{equation} The cost function for the entire training set is then the average of the losses for the individual training examples \begin{equation} E \left(\textbf{W}^l, \textbf{b}^l \right) = \frac{1}{N} \sum_{n=1}^{N} E_n(\textbf{x}_n, \textbf{y}_n ; \textbf{W}^l, \textbf{b}^l) \end{equation} \subsection*{Minimizing the Cost Function with Backpropagation} Optimizing the network for a particular training set means minimizing the cost function $E \left(\textbf{W}^l, \textbf{b}^l \right)$ as a function of the weight and offset parameters. In practice the number of trainable parameters can be very large so it is impractical to use brute force to minimize the cost function. Backpropagation is the traditional gradient descent algorithm combined with the use of the chain rule to calculate the derivatives of the cost function with respect to the training parameters. Gradient descent is based on the observation that from any given point in parameter space the fastest way to get to a local minimum of the cost function is to travel in the negative direction of the gradient of the cost function at that point. Thus by updating the parameters according to the rules \begin{equation} \begin{split} W^l_{ij} & \rightarrow W^l_{ij} - \alpha \frac{\partial E}{\partial W^l_{ij}} \\ b^l_i & \rightarrow b^l_{i} - \alpha \frac{\partial E}{\partial b^l_i} \end{split} \end{equation} eventually the values of the trainable parameters will be such that the cost function is at a local minimum. The quantity $\alpha$ is called the learning rate. It should be tuned to a value that ensures the gradient descent algorithm reaches a local minimum in a reasonable amount of time.
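Concretely, the update rules amount to one subtraction per parameter. A minimal sketch (a hypothetical helper of my own naming, demonstrated on a toy one-parameter cost $E(w) = \frac{1}{2}(w-3)^2$ with derivative $w - 3$):

```python
def gradient_descent_step(Ws, bs, dWs, dbs, alpha):
    """One step of W <- W - alpha dE/dW and b <- b - alpha dE/db."""
    new_Ws = [W - alpha * dW for W, dW in zip(Ws, dWs)]
    new_bs = [b - alpha * db for b, db in zip(bs, dbs)]
    return new_Ws, new_bs

# Toy example: minimize E(w) = (w - 3)^2 / 2, so dE/dw = w - 3.
w = 0.0
for _ in range(100):
    (w,), _ = gradient_descent_step([w], [], [w - 3.0], [], alpha=0.1)
assert abs(w - 3.0) < 1e-3
```

With $\alpha = 0.1$ each step shrinks the error $w - 3$ by a factor of $0.9$, so the iterate converges geometrically to the minimum at $w = 3$; a learning rate that is too large would instead overshoot and diverge.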
From the above update rules it can be seen that gradient descent requires knowledge of the derivatives of the cost function with respect to the training parameters. In theory these derivatives can be calculated numerically, but this approach is inefficient and prone to numerical error. A better approach is to use the chain rule to write the derivatives as \begin{equation} \begin{split} \frac{\partial E_n}{\partial W^l_{ij}} &= \frac{\partial E_n}{\partial z_i^l } \frac{\partial z_i^l}{\partial W^l_{ij}} \\ \frac{\partial E_n}{\partial b^l_{i}} &= \frac{\partial E_n}{\partial z_i^l} \frac{\partial z_i^l}{\partial b^l_i} \end{split} \end{equation} From the definition of $z_i^l$ the inputs to the neurons in layer $l$ can be written $z_i^l=\sum_{k} W_{ik}^l a_k^{l-1} + b_i^l$. Thus \begin{equation} \begin{split} \frac{\partial z_i^l}{\partial W^l_{ij}} &= \frac{\partial}{\partial W^l_{ij}} \left( \sum_{k} W_{ik}^l a_k^{l-1} + b_i^l \right) = a_j^{l-1} \\ \frac{\partial z_i^l}{\partial b^l_i} &= \frac{\partial}{\partial b^l_{i}} \left( \sum_{k} W_{ik}^l a_k^{l-1} + b_i^l \right) = 1 \end{split} \end{equation} Furthermore the derivative $\frac{\partial E_n}{\partial z_i^l }$ has the interpretation of being the ``error'' of the network in layer $l$. This quantity is usually given the name $\delta_i^l = \frac{\partial E_n}{\partial z_i^l }$. Inserting these expressions into the above equations, the derivatives of the cost function at layer $l$ can be written in terms of the activations at layer $l-1$ and the errors at layer $l$. \begin{equation} \begin{split} \frac{\partial E_n}{\partial W^l_{ij}} &= \delta_i^l a_j^{l-1} \\ \frac{\partial E_n}{\partial b^l_{i}} &= \delta_i^l \end{split} \end{equation} Now all that's left is to write the errors in terms of the activations using the chain rule.
For a hidden layer this error can be written as \begin{equation} \delta_j^{l-1} = \frac{\partial E_n}{\partial z_j^{l-1} } = \sum_k \frac{\partial E_n}{\partial z_k^l } \frac{\partial z_k^l}{\partial z_j^{l-1} } \end{equation} Using the definition of the inputs to the neurons in layer $l$ \begin{equation} \begin{split} \frac{\partial z_k^l}{\partial z_j^{l-1}} &= \frac{\partial }{\partial z_j^{l-1}} \left(\sum_m W_{km}^l a_m^{l-1} + b_k^l \right) \\ &= \frac{\partial }{\partial z_j^{l-1}} \left(\sum_m W_{km}^l \sigma_{l-1} (z_m^{l-1}) + b_k^l \right) \\ &=W_{kj}^l \sigma_{l-1} '(z_j^{l-1}) \end{split} \end{equation} Thus \begin{equation} \begin{split} \delta_j^{l-1} &= \sum_k \frac{\partial E_n}{\partial z_k^l } W_{kj}^l \sigma_{l-1} '(z_j^{l-1}) \\ &= \sigma_{l-1} '(z_j^{l-1}) \sum_k \delta_k^l W_{kj}^l \end{split} \end{equation} For the output layer \begin{equation} \delta_i^{L} = \frac{\partial E_n}{\partial z_i^{L}} = \frac{\partial E_n}{\partial \hat{y}_i} \frac{\partial \hat{y}_i}{\partial z_i^{L}} \end{equation} For the squared error loss function (suppressing the $n$ subscript) this becomes \begin{equation} \delta_i^{L} = \frac{\partial}{\partial \hat{y}_i} \left(\frac{1}{2} \left |\hat{\textbf{y}} - \textbf{y} \right|^2 \right) \frac{\partial \hat{y}_i}{\partial z_i^{L}} = \left( \hat{y}_i - y_i \right) \sigma_L'(z_i^L) \end{equation} For the cross entropy loss this becomes \begin{equation} \delta_i^{L} = - \frac{\partial}{\partial \hat{y}_i} \left(\textbf{y} \cdot \log{\hat{\textbf{y}}} \right) \frac{\partial \hat{y}_i}{\partial z_i^{L}} = -\frac{y_i}{\hat{y}_i} \sigma_L'(z_i^L) \end{equation} \end{document}
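The derivation can be sanity-checked end to end. The sketch below is an illustration under assumed choices (tanh activations in every layer, squared error loss, arbitrary layer sizes): it implements the forward pass, runs the $\delta$ recursion backwards, and compares one analytic weight derivative against a central finite difference.

```python
import numpy as np

def forward(x, Ws, bs):
    """Forward pass with tanh activations; returns every z^l and a^l."""
    a, zs, acts = x, [], [x]
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = np.tanh(z)
        zs.append(z)
        acts.append(a)
    return zs, acts

def backprop(x, y, Ws, bs):
    """Gradients of E = |y_hat - y|^2 / 2 via the delta recursion."""
    zs, acts = forward(x, Ws, bs)
    dtanh = lambda z: 1.0 - np.tanh(z) ** 2
    delta = (acts[-1] - y) * dtanh(zs[-1])      # output-layer error delta^L
    dWs, dbs = [], []
    for l in reversed(range(len(Ws))):
        dWs.append(np.outer(delta, acts[l]))    # dE/dW^l_ij = delta_i a^{l-1}_j
        dbs.append(delta)                       # dE/db^l_i  = delta_i
        if l > 0:                               # propagate error backwards
            delta = dtanh(zs[l - 1]) * (Ws[l].T @ delta)
    return dWs[::-1], dbs[::-1]

rng = np.random.default_rng(1)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
x, y = rng.normal(size=3), rng.normal(size=2)
dWs, dbs = backprop(x, y, Ws, bs)

# Central finite-difference check of one weight derivative.
eps, (i, j) = 1e-6, (1, 2)
loss = lambda: 0.5 * np.sum((forward(x, Ws, bs)[1][-1] - y) ** 2)
Ws[0][i, j] += eps; up = loss()
Ws[0][i, j] -= 2 * eps; down = loss()
Ws[0][i, j] += eps                              # restore the weight
assert abs((up - down) / (2 * eps) - dWs[0][i, j]) < 1e-5
```

Agreement between the analytic and numerical derivative is exactly the kind of check the text recommends against: it is too slow to use for training, but it is a useful one-off test of the backpropagation code.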
\documentclass[twocolumn]{article} \input{preamble} \pagestyle{plain} % Use autocite to get footnotes. % This requires switching to biber from bibtex \usepackage[natbib=true,backend=biber]{biblatex} \addbibresource{no-edit.bib} \addbibresource{new.bib} % fnpct converts things like "bah\footnote{bah}." into "bah.\footnote{bah}. \usepackage{fnpct} \setfnpct{add-punct-marks=!?:;} % . and , included by default \usepackage{amssymb} \newenvironment{claim}[1]% {\smallskip\noindent\textbf{{#1}:}}% {\smallskip} \newcommand\independent{\protect\mathpalette{\protect\independenT}{\perp}} \def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} \DeclareMathOperator*{\argmax}{arg\,max} %\newenvironment{claim}[1]% %{\renewtheorem*{MCTTHM}{#1:}\begin{MCTTHM}} %{\end{MCTTHM}} \begin{document} \title{Chapter 3: Games and Rules of Information Flow} \author{Sebastian Benthall} \date{} \maketitle In this chapter, I build on the prior sections to develop a general, formal model of economic information flow. This builds on prior work identifying the gaps in social theoretical understanding of privacy (Chapter 1) and advancing a formal definition of information flow compatible with concepts of security and privacy in computer science (Chapter 2). I argue that this model is well suited to capturing the economic impact of information flows through mechanism design, which can inform both regulation and privacy by design. Section \ref{sec:limitations} considers the social theory of privacy and notes, based on prior work in Chapter 1 of this dissertation, that cross-context information flows remain an unresolved theoretical problem in privacy. When societal expectations are organized according to the boundaries of social contexts, they cannot easily anticipate flows that violate those contexts. In particular, the kinds of information flows in technical infrastructure and their impact on society are difficult to conceptualize and therefore difficult to regulate socially.
Section \ref{sec:law} outlines the legal frameworks for data protection. These do offer rationales for preventing cross-context information flow in particular contexts, such as health and legal advice, through confidentiality. These sectoral privacy laws have not prevented cross-context flows that fall through the gaps of the law, such as those facilitated by data brokers. These flows are driven by actors who, unregulated by the law of society or the law of the state, are beholden instead to the law of the market. Section \ref{sec:economics} addresses the existing economics of privacy and information. This literature is also organized into analyses of single economic contexts. I argue that this is due to a lack of formal modeling tools for addressing the complex reality of economic information flow. Drawing on the formal model of information flow in Chapter 2 based on Dretske, Pearl, and Nissenbaum, and multi-agent influence models \cite{koller2003multi}, I develop a framework for mechanism design of games involving information flows. Section \ref{sec:single-context} uses the framework developed in Section \ref{sec:economics} to model economic games of information flow. This section includes models of: a principal hiring an agent of uncertain quality; price differentiation based on personal information; and the provision of expert advice to a client. These models demonstrate the expressivity of the modeling framework. Section \ref{sec:cross-context} builds on the prior sections to model a case of cross-context information flow and its social effects. The model shows how one firm purchasing an information flow from another firm can have a negative impact on consumers who are otherwise not involved in that transaction. This shows that cross-context information flows can have market externalities, suggesting that the economy of information flow is prone to market failure.
Section \ref{sec:discussion} concludes the chapter with a discussion of broader implications and future work. \section{The limitations of contextualized privacy} \label{sec:limitations} Privacy is a many-meaninged term that is difficult to define concisely \cite{solove2005taxonomy}. \cite{mulligan2016privacy} argues that privacy is an essentially pluralistic and contestable concept that should be defined at the ``retail'' rather than ``wholesale'' level. We will call a theory of privacy that maintains that the term has many distinct and perhaps irreconcilable meanings a \emph{particularist} account. The field of contextual integrity \cite{nissenbaum09book} accounts for the variety of meanings of the term by (1) defining privacy as \emph{appropriate information flow} and (2) noting that what is ``appropriate'' depends on socially situated expectations or \emph{norms}, which are indexed to social contexts. According to this theory, privacy refers to these social expectations that vary from context to context. This theory of privacy both stands up to empirical tests \cite{martin2016measuring} and has been useful in privacy engineering (e.g. \cite{shvartzshnaider2016learning}, \cite{benthall2017contextual}). We will refer to this kind of theory, in which privacy has a single meaning that is parameterized by social context, as a \emph{contextualized} account. Both particularist and contextualist accounts have trouble addressing legal, ethical, and technical privacy challenges arising from social platforms, technologies that mediate multiple different contexts \cite{benthall2017contextual}.
The commonly felt awkwardness of social media due to the unexpected participation of different audiences, known as \emph{context collapse} \cite{marwick2011tweet} \cite{davis2014context}, is a symptom of the more general problem that the digital infrastructure mediating so many of our social and commercial interactions is often indifferent to our contextualized social expectations because it is not ``in'' any one social context. In many cases technology makes our \emph{situation} much more complex and interconnected in ways that go beyond any social expectations of our social \emph{spheres}. Contextual Integrity is socially meaningful and psychologically compelling. For most people, our ubiquitous and complex technical infrastructure, in actuality, is neither. It is perhaps for precisely this reason that social norms are not enough to regulate privacy in our technical infrastructure. Beyond society's expectations of privacy, there are also legal limits to the collection and use of personal data. \section{Information in the Law} \label{sec:law} This section will briefly survey relevant legal positions on information and data protection. \subsection{Information as property} There is a sense of the word ``information'' that corresponds to physical records--papers on file, patterns recorded electrically in databases. This sense of information as a \emph{thing} perhaps encourages privacy solutions that frame personal information as a good that could be protected by property rights and thereby allocated more efficiently \cite{murphy1995property, samuelson2000privacy}. Private property rights create a legal relationship between a person and a thing that generally transcends social spheres; robbery of private property is illegal in almost all social contexts. The closest existing legal framework for property rights in information is intellectual property law.
However, intellectual property rights such as those for copyright, patents, and trade secrets are motivated by the economic incentivization of innovation, not by privacy. They are not designed to protect ownership of data in general. For example, copyright specifically does not pertain to mere data or the organization of facts.\footnote{Feist v. Rural, 499 U.S. 340 (1991)} So a data subject does not by default own facts about themselves. Databases may be protected as a compilation if the selection of the data constitutes individual, creative expression. Data in general presents a conceptually more difficult case than the kinds of intellectual goods considered in intellectual property law. I will make the case that this is due to data's ontological slipperiness, a slipperiness that makes it difficult to reason about it economically. \subsection{Confidentiality and sectoral privacy law} \label{sec:confidentiality} United States law has many provisions for the confidentiality of personal information gathered in specific professional contexts. For example, HIPAA has special provisions for psychotherapy notes that do not apply to personal health information more generally. Attorney-client privilege, which protects personal information disclosed to one's lawyer, is another example of strongly protected confidentiality \cite{hazard1978historical} \cite{allen1990positive} \cite{richards2007privacy}. Confidentiality in these domains is meant to ensure that the protected client can freely divulge personal information to the service provider without concern that their information may be used in a secondary way that harms them. This is necessary for the effective execution of these services. It is notable that in all these cases of expert services, data protection is mandated by law, not left for market selection or self-regulation. These confidentiality cases are perhaps the clearest cut examples of contextual privacy.
These are examples of \emph{sectoral} privacy laws, meaning laws that apply only to a single business sector. This indexes them to a particular social context, the one defined by that sector's activity. In the language of Contextual Integrity, it is clear to which abstract social \emph{sphere} each law applies. Notably, these laws generally do not apply to data collection performed by online services. Furthermore, confidentiality is a restriction on information flow. Restrictions on information flow, when observed, prevent the collapse of otherwise separate social \emph{situations} into a more complex and perhaps conflicted one. Contextual integrity is specific about how information norms need not be restrictive, and sectoral privacy laws indeed do recognize cases where some kind of information flow is mandatory (for example, when a hospital must comply with law enforcement). But it may be the throttling of information flow between situations that keeps sphere- or sector-based information flow rules enforceable. \subsection{Notice and consent} On-line services that do not fall under the rubric of any sectoral privacy laws are regulated in the United States by the Federal Trade Commission (FTC). The FTC has encouraged a self-regulatory regime of ``notice and consent'' whereby on-line services must transparently describe how they will use personal data and get consent before collecting it. The company must abide by the terms of the notice, even in the case of a corporate merger \cite{hine_2015}, or risk being in violation of the FTC Act Section 5 prohibitions against unfair or deceptive practices. The effectiveness of the notice and consent framework has been widely panned \cite{barocas2009notice} \cite{reidenberg2015privacy}, as empirically users do not read legal notices, which are often written in dense and complex legal and technical language.
This complex language may indeed reflect the complexity with which collected personal data may be used \cite{schaub2015design} in practice. Nevertheless, the scholarly consensus is that the notice and consent framework does little to protect privacy in practice. \subsection{GDPR and purpose-binding} \label{sec:GDPR} The European Union's General Data Protection Regulation, which at the time of this writing has not yet gone into effect, promises to set a significant new standard for data protection in on-line services. While it protects only EU citizens, its extraterritorial enforcement means that many companies that are not based in the EU must still take significant steps to be compliant or risk facing heavy fines. A notable feature of the GDPR is its use of ``purpose binding'' \cite{hildebrandt2013slaves} \cite{herrmann2016privacy}: data subjects must consent to particular purposes of use by data processors before the data may be collected. Exceptions to this rule are also framed in terms of purposes (such as the purpose to protect the ``vital interests'' of the data subject). Purpose binding is combined with \emph{data minimization}, the requirement that data may not be held or processed in excess of what is needed for the original purposes of collection. The efficacy of this regulation is still untested. However, it compares favorably with existing U.S. law. Narrowing the complexity of notices to be about particular purposes may be an improvement over the more complex legal and technical conditions in notices typical under the FTC's notice and consent framework. While some U.S. sectoral privacy policies include purpose restrictions on information use \cite{tschantz2012formalizing}, the fact that the GDPR is an omnibus law means that its purpose restrictions apply even to those businesses that fall through the gaps of sectoral regulation. In effect, the GDPR formalizes new privacy \emph{rights}, which are akin to, but distinct from, other rights such as property rights.
\section{Economics and mechanism design} \label{sec:economics} Privacy is a complex social phenomenon and the importance of nuanced social theories like contextual integrity cannot be overstated. However, it is also a fact that technical infrastructure that spans social contexts is most often developed by private companies that are more responsive to economic principles than social norms. Having motivated the inquiry by reflecting on philosophical and legal theories of privacy, I will now turn to the economics of privacy, as economics are at the core of the social and legal questions that have concerned other scholars. Though narrower in scope, the field of economics has provided a rich literature on privacy that lends precision to claims about how interests or incentives shape outcomes. Modern economics of privacy concerns itself mainly with the economics of personal information as it is used by businesses employing information technology. Specifically, it most often addresses what Acquisti, Taylor, and Wagman \cite{acquisti2016economics} call \emph{tangible} impacts of privacy, those impacts that have objectively measurable and modelable costs and benefits and effects on market structure. While others acknowledge the possible importance of \emph{intangible} impacts, such as a psychological concern about how one's personal information may be used (which may be modeled as a subjective preference for privacy \cite{calo2011boundaries} \cite{cofone2017dynamic}) and other more global social effects, we will limit the discussion in this paper to tangible impacts. Even so narrowly scoped, there are many different economic contexts in which the presence or absence of personal information is critically relevant. There are so many different contexts, each represented in their own sophisticated scholarly literatures, that some \cite{acquisti2016economics} argue that a comprehensive economics of privacy cannot be achieved.
Essentially, this is an argument that the economics of privacy should be contextualized, echoing the contextualized account of privacy outlined in Section \ref{sec:limitations}. But what if we want to understand the economic impact of information flowing \emph{between economic contexts}? In order to accomplish this, we need an economic framework that can model many different kinds of economic contexts, as well as the ways in which they may interact. More concretely, economics has so far failed to come up with a theory explaining why people \emph{buy and sell data}, and how they price it. Such a question is critical for explaining the way personal information transfers from business to business in the case of, say, online behavioral advertising. I posit that this is in part because of the slippery ontological properties of data: it is not a thing that one can hold as property and its main economic value is informing the actions of other agents (such as pricing decisions). In other words, the value of data is often the value of the strategic advantage provided by the data. This may help explain why companies are often more interested in buying and selling flows of data, such as those provided by a web-based Application Programming Interface (API), than any particular piece of data. One tool in the economics toolkit for understanding policy decisions is mechanism design \cite{hurwicz2006designing} \cite{nisan2007introduction}. Mechanism design is an ``inverted game theory'', wherein the designer defines a range of possible economic games and chooses the structure of the game that maximizes some predetermined goal or objective function. The objective is a function of the outcome of the game assuming the players are operating according to strategies that are rationally optimized for their self-interest, such as the strategies of a Nash Equilibrium.
This in turn provides insight about what kind of rules can be imposed on an economic transaction such that socially preferred outcomes result, even when the economic actors are self-interested. In this section, I will develop a framework for mechanism design of economic situations involving information flows. This framework will extend the Multi-Agent Influence Diagram (MAID) framework \cite{koller2003multi}, which is a game-theoretic extension of Bayesian networks. This framework, which was briefly introduced in Chapter 2, models information flow in a social context as information is understood by engineers and economists. For regulatory regimes to be most effective, they must be reducible, in the scientific sense, to something like this model. In Section ???, I will reflect back on whether and how this model can reflect the restrictions of information law. \subsection{Formalizing information flow mechanisms} \label{sec:formalizing} We have motivated the need for a general framework for mechanism design for economic contexts involving (personal) information flow. In this section, I will specify that framework. Summarizing prior work (see \emph{Dissertation Chapter 2}, \cite{benthall2017origin}), I synthesize a formal representation of information flow from Nissenbaum, Dretske \cite{dretske1981knowledge}, Shannon, and Pearl \cite{pearl1988probabilistic}. In this representation, information flow is a causal flow that carries nomic associations, which can be represented precisely using Bayesian networks. I then propose the use of Multi-Agent Influence Diagrams, a game-theoretic extension to Bayesian networks, as a framework for mechanism design in privacy economics \cite{koller2003multi}. \subsection{Formal theory of information flow} An upshot of CI is that it identifies privacy as a property of information flows, which when unpacked proves to be a more substantive claim than it may first appear.
When we speak about ``consumer information'' or ``personal information'', we are faced with the ambiguity of the meaning of the word ``information'', which can mean alternatively either a medium of representation (such as paper or electronic records, ``data'') or a mathematical relationship between events or objects such that one is sufficient for inferences about the other \cite{nunberg1996farewell}. \cite{benthall2017origin} provides a mathematical analysis of the concept of \emph{information flow} on robust foundations: Dretske's philosophical theory of information flow and Pearl's account of statistical causation. Dretske's \cite{dretske1981knowledge} formulation is that a message carries information about something it represents if and only if messages of its kind bear a regular or ``nomic'' relationship to what is represented. Dretske develops this philosophical account of information flow to be consistent with classical information theory \cite{shannon1948mathematical}, in which an information channel establishes a correspondence between the probability distributions of two random events. The emphasis on the regularity of the probabilistic relationship suggests the need for an account of how messages can flow in a structured way. Just such a theory of structured probabilistic relationships can be found in Pearl's theory of statistical probability and causation \cite{pearl1988probabilistic} and, more generally, the theory of Bayesian networks. Bayesian networks provide a formulation of precisely how causally linked events can be correlated without being directly caused by each other. For example, two events that share a common cause can be correlated. This means that the nomic associations of a message depend not just on who sent the message but on how the message is situated in a larger context of messages.
Information flow therefore decomposes into two related parts, the \emph{causal flow} of events and their relationship to each other, and the \emph{nomic associations} between events. Both of these properties of information flow can be deduced from a model of information's context as a Bayesian network. A fully specified Bayesian network, complete with conditional probability distributions at every node, will determine not just the existence of a nomic association (or, equivalently, a conditional independence), but also the strength of the association. Many measures of associative strength are possible, but one useful measure that is very well understood is Shannon's \emph{mutual information}: \begin{dfn}[Mutual information] The mutual information of two discrete random variables $X$ and $Y$ is $$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)}$$ In particular, $I(X;Y) = 0 \iff X \independent Y$. \end{dfn} See Appendix \ref{appendix:information-theory-theorems} for theorems concerning the ways bounds on mutual information between variables can be read off of Bayesian networks. \subsection{MAIDs and Mechanism Design} Introduced briefly in Chapter 2, the Multi-Agent Influence Diagram (MAID) framework developed by \cite{koller2003multi} provides a game-theoretic extension to Bayesian networks. As a formalism, it is well suited for modeling how information flows, which we have detailed as causal flows with nomic associations, play a role in strategic games. A full account of the formalism is in Appendix \ref{appendix:maid}. MAIDs extend Bayesian networks with two new node types, decision variables and utility variables, alongside the familiar chance variables. Chance variables are much like the nodes in a Bayesian network: a conditional probability distribution (CPD) is defined for each chance variable, conditioned on its parent nodes. Utility variables are much like chance variables, but they are each assigned to an agent $a \in \mc{A}$ and they may not have children.
The utility for each player in the game defined by the MAID is the sum of the values of the utility nodes assigned to them. Decision variables are assigned to an agent $a \in \mc{A}$. Their CPD functions are not defined as part of the MAID. Rather, the choice of CPD function for each decision variable is a strategic choice of the agent. The strategy profile $\sigma_a$ for each agent is their assignment of a CPD to each decision variable. Taken together, the strategies $\sigma$ of all the players induce a Bayesian network from the MAID, from which the expected utilities of all players may be computed. In this work we have extended the MAID framework in a few respects. First, we have introduced the device of an \emph{optional edge}, represented in our diagrams as a dotted edge. \begin{center} \begin{tikzcd} A \arrow[r, dotted] & B \\ \end{tikzcd} \end{center} A dotted edge represents a potential information flow whose value is the focus of the study. An optional edge means that a single diagram represents two distinct MAIDs, one with the edge ``open'' or present in the graph, and one with the edge ``closed''. We will look at the outcomes of the open and closed cases and evaluate them according to values like efficiency and equity. Intuitively, there is a difference between information that becomes suddenly available, as in a data breach, and well-established information flows to which everyone is accustomed, such as security cameras in malls. In both cases the information flow will have an effect on outcomes, but the cases are subtly different. I try to capture this difference by distinguishing between the tactical and strategic value of information (this is formalized in Appendix \ref{sec:value-of-information}). The tactical value of information is its value to an agent assuming all other agents' strategies remain fixed.
The strategic value of information is the difference in an agent's utilities in the open and closed cases, considering a strategic equilibrium of all players in each case. In the cases discussed in this chapter, I will consider the strategic value of information flow except when specifically stated otherwise. \section{Single-context economic models} \label{sec:single-context} In this section, I will demonstrate ... ... using simple economic models of well-known phenomena in privacy economics. The innovation in these models is that they use MAIDs to model the strategic decisions, and that this makes explicit the relationship between information flow and contextual outcomes. \subsection{Agent quality uncertainty} \label{sec:agent-quality} One of the first contexts studied under the term ``privacy economics'' was labor markets \cite{posner1981economics}. In labor, insurance, and credit markets, a firm must evaluate natural persons for their individual capacities (to perform a certain kind of work, to avoid risk, or to repay a loan) and decide whether to invest resources in them. The firm generally benefits from having more information about the persons under consideration. The effect of privacy, or lack of it, is uneven across the population being considered by the firm. Paradigmatically, more suitable employees benefit if their suitability is known to potential employers, while conversely less suitable employees are harmed by the same. Analogous results hold for credit and insurance. We can model this interaction with the following graph: \begin{center} \begin{tikzcd} & V \arrow[d, dotted] \arrow[ddr, bend left = 20] & \\ & \tilde{B_p} \arrow[dr] \arrow[dl] &\\ \breve{U_a} & & \breve{U_p}\\ \end{tikzcd} \end{center} In this model, $V$ represents the value to a principal of a service or contract with an agent. For simplicity, in the model $V$ is normalized with a predetermined price, so the value of $V$ may be negative.
At $\tilde{B_p}$, the principal decides whether or not to buy the contract; $dom(\tilde{B_p}) = \{0,1\}$. The utility awarded to the principal is the normalized value of the contract if the principal buys and zero otherwise. $$U_p = \begin{cases} V & \text{if } \tilde{B_p} = 1 \\ 0, & \text{otherwise}\\ \end{cases} = \tilde{B_p} V$$ The utility for the agent is, for simplicity, a fixed amount (for example $1$) if the principal buys the contract, and zero otherwise; so $U_a = \tilde{B_p}$. This model affords some simplifications through backwards induction. The optimal strategy for the principal is to buy the contract if the expected value of it is positive. If the dotted edge is open, then the principal is able to use the known value of $V$ to make this decision. If the dotted edge is closed, then the optimal decision $\hat{B_p}$ depends only on the distribution of $V$. \begin{equation} \begin{split} \hat{B_p} & = \argmax_{b_p \in \{0,1\}} \E(U_p) \\ & = \argmax_{b_p \in \{0,1\}} b_p \E(V) = \begin{cases} 1 & \text{if } \E(V) \geq 0 \\ 0, & \text{otherwise}\\ \end{cases} \end{split} \end{equation} If the dotted edge is open, then the decision to buy the contract will be better informed. \begin{equation} \begin{split} \hat{B_p} \vert (V = v) & = \begin{cases} 1 & \text{if } v \geq 0 \\ 0, & \text{otherwise} \end{cases} \\ & = [v \geq 0] \end{split} \end{equation} \begin{center} \begin{tabular}{ |c|c|c| } \hline $\E(\cdot)$ & Open & Closed \\ \hline $U_p$ & $\E(V \vert V \geq 0) P(V \geq 0)$ & $[\E(V) \geq 0] V_E$ \\ $U_a$ & $P(V \geq 0)$ & $[\E(V) \geq 0]$ \\ $U_a \vert V \geq 0$ & $1$ & $[\E(V) \geq 0]$ \\ $U_a \vert V < 0$ & $0$ & $[\E(V) \geq 0]$ \\ \hline \end{tabular} \end{center} where $V_E = \E(V)$ \begin{exm} Let $V$ range over $\{-1,1\}$ with even odds. $V_E = 0$, $[\E(V) \geq 0] = 1$. So the utility to the agent in the closed case, whatever their quality, is $1$, and the expected utility to the principal is $0$.
In the open case, the principal's utility is $\E(V \vert V \geq 0) P(V \geq 0) = 1 \cdot (.5) = .5$. High-quality agents get utility $1$, and low-quality agents get utility $0$. \end{exm} From this example, we can see that principals and agents who can offer more valuable contracts benefit from more openness, while agents with low-value contracts suffer. \subsubsection{Values} Early work on privacy economics reasoned that flow of personal information in labor markets leads to greater economic efficiency \cite{posner1981economics}. The MAID model in Section \ref{sec:agent-quality} does reflect this reasoning. More flow of personal information (the open condition) brings greater utility to the principal on average, and this is a form of market surplus. It must also be noted that personal information flow has an unequal effect on the contract agents. Less valuable contract agents are negatively impacted by the flow of their personal information. In this narrowly considered economic context there is a global tradeoff between economic productivity, lubricated by flows of personal information, and equality. This model is general enough to extend to cases where agents are not natural persons but rather firms. Indeed, the situation may be flipped: a single natural person may have to choose among many firms in order to, for example, contract an improvement to their home. The model therefore generalizes from cases of privacy economics to other cases where there is quality uncertainty and the buyer has market power. A question for policy designers is whether individual privacy is any more worthy of protection than information about firms to those who would hire their services, and why. One reason to be wary of a hiring or other contract choice depending on personal information is indirect discrimination.
If contract value is negatively correlated with membership in a protected class of persons, choosing contracts solely on the basis of value might compound an injustice, which Hellman argues there is a duty to avoid doing \cite{hellman2017indirect}. Modeling historical injustice with MAIDs is a problem left for future work. \subsection{Price differentiation} \label{sec:price-differentiation} It is well known that personal information is used by on-line retailers for price differentiation \cite{shapiro1998information, varian2001economics}. According to classic economic theory, when a firm charges all consumers the same price it leaves some unserved by the market (because the price exceeds their demand) and some accruing a consumer surplus (because their demand exceeds the price). With differentiated prices, a firm can charge individuals or groups of consumers closer to their reservation prices. This reduces deadweight loss by charging consumers with very low willingness to pay a price they can afford, while transforming consumer surplus formerly accrued by those with high reservation prices into producer surplus for the firm. We can model this context graphically like so: \begin{center} \begin{tikzcd} & V \arrow[d, dotted] \arrow[dd, bend left = 40] \arrow[dddl, bend right = 20] & \\ & \tilde{R}_f \arrow[d] \arrow[ddl, bend right = 20] \arrow[ddr, bend left = 20] & \\ & \tilde{B_c} \arrow[dr] \arrow[dl] &\\ \breve{U_c} & & \breve{U_f}\\ \end{tikzcd} \end{center} In this model, $V$ represents a consumer's demand for a product. $\tilde{R}_f$ is the price offered by the firm for the product (costs normalized out) based on the information available to it. At $\tilde{B_c}$, the consumer decides whether or not to buy the product; $dom(\tilde{B_c}) = \{0,1\}$. The firm's utility is the offered price of the product if it is bought and zero otherwise; $U_f = B_c R_f$.
The consumer's utility is their demand minus the price if they buy the product and zero otherwise; $U_c = B_c (V - R_f)$. Once again, we can consider two cases. In the ``closed'' case, the firm does not know the demand of the individual consumer. They only know the general distribution. The consumer will buy the product if and only if the price is lower than their demand or reservation price; $\hat{B} = [V > R]$. The firm must choose $\hat{R}$ that maximizes their expected revenue: $$\hat{R} = \argmax_{r \in \R} \E(r [V > r])$$ If $V \geq \hat{R}$, then the consumer will find the price agreeable and purchase the good, accruing $V - \hat{R}$ utility. Otherwise, they will not purchase the good. In the ``open'' case, the producer knows the reservation price $V = v$ when deciding their price $\hat{R}$. $$\hat{R} = \argmax_{r \in \R} r [v > r] = v - \epsilon$$ This value approaches $v$ from below, and for simplicity of presentation we will use $\epsilon$ to represent a vanishingly small value. \begin{center} \begin{tabular}{ |c|c|c| } \hline $\E(\cdot)$ & Open & Closed \\ \hline $U_f$ & $V_E - \epsilon$ & $\hat{R} P(V \geq \hat{R})$ \\ $U_c$ & $\epsilon$ & $(V - \hat{R}) [V \geq \hat{R}]$ \\ $U_c \vert V \geq V_E$ & $\epsilon$ & $V - \hat{R}$ \\ $U_c \vert V < V_E$ & $\epsilon$ & 0 \\ \hline \end{tabular} \end{center} where $V_E = \E(V)$ \begin{exm} Let $V$ be a uniform distribution ranging over $[0,1]$. In the closed condition, $\E(U_f) = r (1 - r) = r - r^2$, implying $\hat{R} = .5$ and $\E(U_f) = .25$. $\E(U_c) = .125$. In the open condition, $\hat{R} = V - \epsilon$ and $\E(U_f) = .5 - \epsilon$. Consumer utility is $\epsilon$. \end{exm} In this price differentiation case, the strategic value of information to the producer is positive. The strategic value of the information to the consumer depends on the consumer's reservation price: it may be negative, or it may be very slightly positive.
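The uniform-demand example can be checked numerically. The following is a minimal sketch, not part of the formal model: a Monte Carlo simulation in Python, in which the names (\texttt{demands}, \texttt{closed\_revenue}, \texttt{r\_hat}) and the grid of candidate prices are all illustrative choices, and \texttt{eps} stands in for the vanishingly small undercut $\epsilon$.

```python
import random

# Numerical check of the uniform-demand example (V ~ Uniform[0, 1]).
random.seed(0)
N = 100_000
demands = [random.random() for _ in range(N)]

# Closed case: the firm picks a single price r maximizing r * P(V > r).
def closed_revenue(r):
    return r * sum(v > r for v in demands) / N

r_hat = max((i / 100 for i in range(101)), key=closed_revenue)
u_f_closed = closed_revenue(r_hat)                              # ~ .25
u_c_closed = sum(v - r_hat for v in demands if v > r_hat) / N   # ~ .125

# Open case: the firm prices each consumer at v - eps, capturing
# (nearly) all of the surplus.
eps = 1e-9
u_f_open = sum(v - eps for v in demands) / N                    # ~ .5

print(r_hat, u_f_closed, u_c_closed, u_f_open)
```

The simulation recovers the analytic values: the closed-case price is near $.5$ with firm utility near $.25$ and consumer utility near $.125$, while the open-case firm utility is near $.5$ with consumer utility vanishing.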
In general, allowing information flow for price differentiation is better for producers than for consumers. \subsubsection{Values} This model shows the tradeoffs of allowing information flow in the economic context of price differentiation. The outcomes for different agents can be inferred from the model. It's clear that information flow for the purpose of price differentiation primarily serves the firm selling a good or service. Arguably, this is valuable because it allows firms to recoup fixed costs for product development with greater sales. However, this model shows that price differentiation is on the whole bad for consumers. While it's true that consumers with low willingness to pay have access to the good with price differentiation, they are charged a price that makes them almost indifferent to the transaction. Meanwhile, consumer surplus has been drained from those who valued the good highly. This is a case where the purpose of a market context may be hotly contested by different actors within it. If the contextual purpose of the market transaction is to satisfy as much consumer demand as possible while rewarding productive suppliers, then allowing information flow for price differentiation is wise policy. But this may be contested by consumer advocates who would argue that consumer satisfaction is more important than economic growth. This context is so raw with economic intent that it may be that no societal consensus is possible. \subsection{Expertise} \label{sec:expertise} Doctors, lawyers, and financial services professionals all have something in common. Their clients consult them for their expertise. In the schematic interaction we'll consider in this section, these clients must divulge personal information to an expert in order to get a personalized recommendation or response. In many of these domains, there are already strong data protection laws in place in the United States.
HIPAA in health care, GLBA in personal finance, and FERPA in education all place restrictions on institutions' ability to disclose personal information that's collected as a part of that institution's normal professional service. (See Section \ref{sec:confidentiality}.) Notably, there is no similar data protection law for search engine queries, which may also be considered a kind of expert recommendation service. The MAID modeling tool we have been using can capture the difference in knowledge between the client and the expert and the consequences that has for the service market. \begin{center} \begin{tikzcd} & W \arrow[ddl, bend right = 40] \arrow[ddr, bend left = 40]& \\ & C \arrow[dl] \arrow[dd] \arrow[dr, bend left = 20, dotted]& \\ V \arrow[dd] \arrow[ddr, bend right = 20] & & \tilde{R}_e \arrow[dl] \\ & \tilde{A}_c \arrow[dl] \arrow[d] &\\ \breve{U_e} & \breve{U_c} &\\ \end{tikzcd} \end{center} In this model, $W$ are facts about the world that determine the relationship between personal qualities of clients and the best course of action for them to take. For example, this may be thought of as parameters in a function from symptoms of illness to appropriate prescribed remedies. Its domain is some flavor of $n$-by-$m$ matrices, where $n$ is the number of personal characteristics or types in the model, $m$ is the number of actions available to the client, and $W_{i,j}$ is the reward to a client of type $i$ of action $j$. The variable $C$ encodes those personal qualities known to and communicable by the client. The domain of this variable is an integer from 1 to $n$. The variable $V$ encodes the value to a particular client of the variety of courses of action that they might take. It depends on $W$ and $C$, and in the simple version of the model considered here it is a deterministic function: $V$ is the row of $W$ indexed by $C$.
$\tilde{R}_e$ is the strategically determined decision of the expert to recommend a course of action based on their knowledge of $W$ and optionally $C$. Its domain is an integer from 1 to $m$. $\tilde{A}_c$ is the decision of which action the client takes. It also has domain from 1 to $m$. $U_c$ and $U_e$ are the utilities awarded to the client and expert, respectively, which take their value from the vector $V$ indexed by the value of $\hat{A}_c$. Perhaps idealistically, we have modeled the utility of the expert as depending only on the utility of the client. We imagine that the client pays for the expertise up front, that this is normalized into the value of the action taken $V$, and that the expert benefits from the positive recommendations of satisfied clients. Future work and other models may explore other possible configurations of incentives. For an action taken $a \in A$, we will specify that $U_c = V(a)$. We will once again consider two cases. In the closed case, there is no edge from $C$ to $\tilde{R}_e$. In this case, the expert still has specialized knowledge (the value of $W$), but no personal information about the client with which to tailor their recommendation. Their best recommendation is the action that would benefit a random client the most in expectation. $$\hat{R}_{closed} = \argmax_{a \in A} \E(V(a) \vert W)$$ The client, on the other hand, has access to information about their symptoms $C$ but not the expert knowledge $W$. By the assumption of the model, the client does have access to the expert's recommendation, $\hat{R}_{closed}$. So their choice of action is: $$\hat{A}_{closed} = \argmax_{a \in A} \E(V(a) \vert C, \hat{R}_{closed})$$ In the alternative ``open'' condition, there is an edge between $C$ and $\tilde{R}_e$. $$\hat{R}_{open} = \argmax_{a \in A} \E(V(a) \vert W,C)$$ $$\hat{A}_{open} = \argmax_{a \in A} \E(V(a) \vert C, \hat{R}_{open})$$ The specific utility outcomes depend heavily on the parameters of the model.
We can make a few general observations about bounds. If the model is such that individual symptoms carry no information about the value of actions taken even with expert knowledge taken into account ($V \independent C \vert W$, $V \independent C$), then the welfare outcomes in the closed case and the open case will be the same. If the expert knowledge $W$ has information about the action values given the symptoms ($I(W;V \vert C) > 0$), then the expert recommendation $\hat{R}_{open}$ will generally be better in expectation than $\hat{A}_{closed}$, and indeed $\hat{A}_{open} = \hat{R}_{open}$. Note that there is an interaction between the strategies of the expert and the client. The optimality of the expert's strategy at $\tilde{R}_e$ depends on how its signal will be ``interpreted'' at $\tilde{A}_c$. Interpreting $\tilde{R}_e$ as a recommendation implies some correspondence between the value taken at that variable and the values of actions according to $V$. But in some cases an alternative encoding of the information in $W$ (and $C$) may be more efficient. An example of the instantiated model will illustrate these points. \begin{center} \begin{tabular}{ |c|c|c| } \hline $\E(\cdot)$ & Open & Closed \\ \hline $U_e$ & $\E(V(\hat{A}_{open}))$ & $\E(V(\hat{A}_{closed}))$ \\ $U_c$ & $\E(V(\hat{A}_{open}))$ & $\E(V(\hat{A}_{closed}))$ \\ \hline \end{tabular} \end{center} \begin{exm} Let $n$ and $m$ both equal 2. Let the domain of $W$ be binary 2-by-2 matrices with the restriction that each row contains one 0 and one 1. Let the distribution of $W$ be uniform over the four possible matrices in its domain. In the closed case, $\tilde{R}_e$ does not depend on $C$. $\tilde{R}_e$ is therefore a strategically chosen encoding of only the information in $W$. Notably, whereas the random variable $W$ has 2 bits of information, $\tilde{R}_e$ ranges over 0 and 1 and can carry at most 1 bit of information.
One such encoding, communicating one bit from $W$, is: $$\hat{R}_{closed} \vert (W = w) = \begin{cases} 1 & \text{if } w_{0,1} = 1 \\ 0, & \text{otherwise}\\ \end{cases}$$ At $\tilde{A}_c$, the client knows both $C$ and $\tilde{R}_e$ and must choose an action that optimizes their expected utility. At this node the client knows if they are of type 0 or 1. If they are of type 0, they know the recommendation applies to them, and they are certain of a reward of 1 if they take the suggested action. $$\hat{A}_c \vert (C = 0) = \hat{R}_{closed}$$ But if the client knows they are of type 1 (probability $.5$), then the recommendation does not encode information about the value of their action. Whatever action they take has an even chance of having utility of 0 or 1. Expected utility in the closed case is $.5 \cdot 1 + .5 \cdot .5 = .75$. In the open case, the expert's recommendation $\tilde{R}_e$ depends on both $W$ and $C$. From this information, the expert can deduce the one bit of information relevant to the client's decision, which is $V$. $$\hat{R}_{open} \vert (W = w, C = c) = \begin{cases} 1 & \text{if } w_{c,1} = 1 \\ 0, & \text{otherwise}\\ \end{cases}$$ In this case, when $\hat{A} = \hat{R}_{open}$, the value of the action is guaranteed to be $1$, which implies that the expected utility in the open case is $1$. The strategic value of the information flow from $C$ to $R$ is the difference in expected utilities in the two cases, which in this example is $1 - .75 = .25$. \end{exm} In this example, the expert chooses a strategy at $\tilde{R}$ that maximizes the flow of information, in the Shannon sense of the term, to $\tilde{A}$ about another variable of interest, $V$. Or, formally: $$\hat{R} = \arg \max_{R} I(R;V)$$ In the closed case, the limited domain of $\tilde{R}$, which permits the flow of at most one bit of information, restricts the expert's ability to provide an adequate recommendation to the client.
If the number of bits in $R$ were greater than or equal to the number of bits in $W$, the expert would be able to communicate the entirety of their expertise to the client, who could then make a perfect judgment of action taking $C$ into account. This information theoretic lens provides a new view into personalized expert advice. Personalization is useful to the client only because the client lacks expertise, but this lack of expertise is due in part to the fact that the expert cannot communicate all the information they know to the client. Personalization allows the expert to provide the highest value information through the narrow bandwidth of communication. The constraints on information flow are due mathematically to the Data Processing Inequality and its consequences for Bayesian networks, which are discussed at length in Appendix \ref{appendix:information-theory-theorems}. \subsubsection{Values} This model of expert services has been simplified to exclude cases of expert conflicts of interest that might engage societal values in mechanism design. We accomplished this simplification by directly aligning client and expert incentives. Despite this simplification, the model shows some of the difficulty in modeling the welfare outcomes of expert services. The principal difficulty is that the outcomes depend on general facts and the quality of expertise in a particular domain. Because it is hard to encode an actual field of expertise into a simple model, we can prove only very general properties of such a field. Despite these difficulties, this simple model shows that when expert and client incentives are aligned, greater flow of information from client to expert enables better outcomes for both parties. In a later section, we will elaborate on this model by introducing the possibility of a breach of confidentiality.
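The worked $n = m = 2$ example above can also be verified by exhaustive enumeration. The following is a minimal sketch, not part of the formal model: the strategy functions \texttt{rec\_closed}, \texttt{rec\_open}, and \texttt{act\_closed} are one hand-coded pair of equilibrium strategies consistent with the encodings given in the example, and all names are illustrative.

```python
from itertools import product

# Exhaustive check of the expert/client example (n = m = 2).
# W ranges over 2x2 0/1 matrices with exactly one 1 per row; C is the
# client type; the reward of action a to a client of type c is w[c][a].
rows = [(0, 1), (1, 0)]
worlds = list(product(rows, rows))   # four equiprobable W matrices
types = [0, 1]                       # C uniform over client types

def rec_closed(w):
    # Expert sees only W; the signal is one bit of W (the entry w[0][1]).
    return w[0][1]

def rec_open(w, c):
    # Expert sees W and C; signal 1 iff action 1 rewards type c.
    return w[c][1]

def act_closed(c, r):
    # A type-0 client follows the recommendation (it encodes their own
    # row); a type-1 client learns nothing and defaults to action 0.
    return r if c == 0 else 0

total_closed = 0.0
total_open = 0.0
for w in worlds:
    for c in types:
        p = 1 / (len(worlds) * len(types))
        total_closed += p * w[c][act_closed(c, rec_closed(w))]
        total_open += p * w[c][rec_open(w, c)]  # client follows open rec

print(total_closed, total_open)  # expected utilities: .75 and 1.0
```

The enumeration reproduces the expected utilities computed analytically in the example: $.75$ in the closed case and $1$ in the open case, for a strategic value of $.25$.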
\section{Cross-context information flow and secondary use of personal data} \label{sec:cross-context} In the above models, we have shown how in a variety of economic contexts the flow of information can have tangible effects on welfare outcomes. What these models have in common is that they show that the relevance of information to outcomes depends on the process by which it is generated and how elements of that process affect outcomes. While we have provided narrative cover stories for each example where we have described the information flows in terms of particular types of documents or events (job applications, symptoms, etc.), what really gives information its semantics are its associational relationships with other variables. These are given by the conditional probability distributions governing the model. A reason for modeling games with information flow in this way is to begin to address the challenge of modeling the economic impact of the \emph{secondary use} of data. It is the cases where data collected for one purpose or context is used in another that are often presented as alarming. \subsection{Cooperating Firms: Price differentiation and agent quality} \label{sec:cooperating-firms} Consider the following model, constructed as a combination of the agent quality uncertainty and price differentiation models. Here $c$ is a natural person who is \emph{both} potentially a consumer of firm $f$'s products and potentially involved in a contract with principal $p$. This person's willingness to pay for the product $V^1$ and the value of their contract $V^2$ both depend on a prior variable $W$ that encapsulates many factors about the background of the person.
\begin{center} \begin{tikzcd} & & W \arrow[dl] \arrow[dr] & & \\ & V^1 \arrow[d] \arrow[dddl, bend right = 20] \arrow[dd, bend left = 40] & & V^2 \arrow[dddr, bend left = 20]& \\ & \tilde{R}_f \arrow[d] \arrow[drr, dotted] \arrow[ddl, bend right = 20] \arrow[dd, bend left = 40] & & & \\ & \tilde{B_c} \arrow[d] \arrow[dl] & & \tilde{B_p} \arrow[dr] \arrow[d] &\\ \breve{U_f} & \breve{U_c^1} & & \breve{U_c^2} & \breve{U_p}\\ \end{tikzcd} \end{center} As before, $V^1 \rightarrow \tilde{R}_f$ represents the ability of the firm to know the customer's demand before choosing their price. There is also a principal that decides at $\tilde{B}_p$ whether or not to buy a contract with the customer, who in this case is also an agent for hire. In this model, the principal cannot know the value of the contract $V^2$ directly. Rather, there is a new edge $\tilde{R}_f \rightarrow \tilde{B}_p$ that represents the option of the product-selling firm $f$ to share its pricing information with the contract principal $p$. Why would two companies ever interact in this way? If the principal does \emph{not} know the value of a potential contract $V^2$ directly, then the pricing information $\hat{R}_f$ potentially contains information about $V^2$ in a way that the principal $p$ can use. Here, ``contains information about'' can be read to mean ``has mutual information with'', i.e. $I(\hat{R}_f; V^2) > 0$. This information may be valuable to the principal by allowing them to avoid bad contracts. Since the principal's and the producing firm's utilities do not interact directly in any other way, we can imagine that the principal would be willing to purchase the pricing data from the producing firm for the \emph{value of the data to the principal}. Though this data relates directly to a natural person, it is not data collected from that person; it is data derived from the producing firm's pricing algorithm.
Nevertheless, sharing this data has a function analogous to sharing personal data that could be used in a hiring decision or in offering a loan. In this situation, the firm's incentives are the same as in the simple price differentiation case in Section \ref{sec:price-differentiation}. By assumption, the firm knows the customer's demand $V^1$, and therefore prices at $\hat{R} = V^1 - \epsilon$. \begin{center} \begin{tabular}{ |c|c|c| } \hline $\E(\cdot)$ & Open & Closed \\ \hline $U_f$ & $V^1_E - \epsilon$ & $V^1_E - \epsilon$ \\ $U_c^1$ & $\epsilon$ & $\epsilon$ \\ $U_p$ & $\E(V^2 \vert V^2 \geq 0) P(V^2 \geq 0 \vert \hat{R})$ & $[\E(V^2) \geq 0] V^2_E$ \\ $U_c^2$ & $P(V^2 \geq 0)$ & $[\E(V^2) \geq 0]$ \\ $U_c^2 \vert V^2 \geq 0$ & $1$ & $[\E(V^2) \geq 0]$ \\ $U_c^2 \vert V^2 < 0$ & $0$ & $[\E(V^2) \geq 0]$ \\ \hline \end{tabular} \end{center} where $V^1_E = \E(V^1)$ and $V^2_E = \E(V^2)$. \begin{exm} Let $W$ vary over $\{0,1\}$ with even probability, with the value corresponding to one of two socioeconomic classes, \emph{low} and \emph{high}. In this example, we will assume that higher class people have access to better education and wealth, and therefore both have higher reservation prices and offer higher value contracts to creditors, employers, and insurance providers. $$V^1(w) = \begin{cases} 10 & \text{if } w = 1 \\ 1 & \text{otherwise}\\ \end{cases}$$ $$V^2(w) = \begin{cases} 1 & \text{if } w = 1 \\ -1 & \text{otherwise}\\ \end{cases}$$ The firm has access to the reservation price $V^1$ at $\tilde{R}_f$ and so will maximize their utility by pricing at slightly below the customer's willingness to pay. $$\hat{R}(v) = v - \epsilon$$ The optional edge being considered in this case runs from $\tilde{R}_f$ to $\tilde{B}_p$. In the closed case, the principal has no information about $V^2$ on which to decide except the base rate provided by the game structure. The expected value to the principal of providing the contract is $0$.
In the open case, $\tilde{B}_p$ is conditional on $\tilde{R}_f$. Crucially, $V^2$ is conditionally dependent on $\hat{R}$. In particular: $$P(V^2 = 1 \vert \hat{R} > 1) = 1$$ $$P(V^2 = -1 \vert \hat{R} \leq 1) = 1$$ The optimal strategy for the principal is to hire the agent when the firm reveals to them that they offered a high price for the good, and to reject the agent otherwise. The high quality contract is purchased half the time with reward $1$ to the principal. So the strategic value to the principal of the information flow from $\tilde{R}_f$ to $\tilde{B}_p$ is $0.5$. The strategic value to the average customer/agent of this information flow is negative, as it results in many customer/agents not getting hired. \end{exm} In this simple example, in strategic equilibrium the firm's offered price and the customer/agent's contract quality are highly correlated. This means that a \emph{causal flow} between the firm's price and the principal's hiring decision carries a \emph{nomic association} with the contract value. That association has strategic value for the principal similar to the value of having the contract value causally flow directly to the hiring decision. The secondary use of data to determine the social class of natural persons is not an academic hypothetical. Facebook has filed a patent to use information about users' hardware specifications, presumably collected originally to optimize service performance, to predict social class, presumably useful for targeting advertisements \cite{cb_insights_research_2018}. The cross-context use of personal data for targeted advertising is arguably the fundamental business proposition of on-line advertising companies like Facebook and Google. The strategic value of the information to the principal can be interpreted as the price at which the principal would be willing to purchase this information flow from the firm.
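The worked example above can be reproduced numerically. Below is a minimal Python sketch, assuming only the hypothetical values stated in the example ($W$ uniform on $\{0,1\}$, $V^1 \in \{1,10\}$, $V^2 \in \{-1,1\}$); the function and variable names are illustrative, not from any implementation discussed in the text.

```python
# Numeric sketch of the cooperating-firms example.
# Hypothetical values from the text: W uniform on {0, 1},
# V1 (reservation price) in {1, 10}, V2 (contract value) in {-1, 1}.
EPS = 0.01

def v1(w):
    """Reservation price (willingness to pay) of a class-w customer."""
    return 10 if w == 1 else 1

def v2(w):
    """Value of a class-w agent's contract to the principal."""
    return 1 if w == 1 else -1

def price(w):
    """The firm knows V1 and prices just below it."""
    return v1(w) - EPS

worlds = [0, 1]          # the two equiprobable socioeconomic classes
p_w = 1 / len(worlds)

# Closed case: the principal sees only the base rate E[V2] = 0, so
# buying the contract is worth no more than abstaining.
closed_value = max(0.0, sum(p_w * v2(w) for w in worlds))

# Open case: the principal observes the firm's price and buys the
# contract exactly when the price reveals a high-class customer.
open_value = sum(p_w * v2(w) for w in worlds if price(w) > 1)

# Strategic value to the principal of the R_f -> B_p information flow:
print(open_value - closed_value)  # 0.5
```

The printed difference, $0.5$, is the strategic value of the optional edge to the principal computed in the example.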
This price depends on the causal structure of the environment, including the distribution of qualities of natural persons. Even though natural persons are on average worse off as a result of this information flow, this is not a factor in its value or price to the principal. This can be considered a market externality, though because it involves a new flow of information, it may also correct a market inefficiency. \subsubsection{Values} The outcomes of this game are similar to the outcomes in the simple principal agent case. The difference is that the information flow runs between two cooperating firms. This models the flow of information from one economic context (price differentiation on a consumer good) to another (a principal-agent hiring decision). Because the two contexts are part of a shared causal environment, the data from one context can carry meaningful information relevant to another. It's notable that the natural person in this example, who is both a customer and an agent to be hired, is (on average) disadvantaged by the hypothetical transaction taking place between two firms. This is a market externality, though the flow of information corrects a market inefficiency in the second economic context. There is a tradeoff between market efficiency, which is good for firms, and the privacy of natural persons, especially the most vulnerable natural persons. \subsection{Secondary use of queries to experts} Section \ref{sec:cooperating-firms} detailed the potential negative impact on vulnerable natural persons of an information flow that crosses economic contexts. It is possibly because of the potential negative impact of secondary uses of information that so many market segments are protected by sectoral privacy and confidentiality laws. HIPAA in health care, GLBA in personal finance, and FERPA in education all place restrictions on firms' ability to disclose personal information. (See Section \ref{sec:confidentiality}.)
What these sectors all have in common is that they do not function without significant disclosures of personal information to firms, because they provide personalized services otherwise unavailable to the consumer. The expertise model in Section \ref{sec:expertise} captures why personal information is necessary for the functioning of these services in fundamental mathematical and economic terms: personalization allows expert service providers to deliver more value to clients given a tight information bottleneck relative to the body of knowledge of the expert. Section \ref{sec:cooperating-firms} provides a template for understanding why disclosure of sensitive information from the context of expert services into other domains could have negative externalities for natural persons. Indeed, the information we provide our doctors, lawyers, financial advisors, and search engines is sensitive precisely because this information is potentially impactful in other contexts in ways that are surprising and/or unwelcome. Whereas the firms in Section \ref{sec:cooperating-firms} both benefit from the sale of personal information, the sale of personal information from expert services may have secondary effects that negatively impact experts. If clients are aware of a harmful information flow, they may be reluctant to engage the expert. In the terminology introduced earlier, an expert may get tactical value from selling the personal information of its clients, but if clients can adjust their behavior according to new expectations, the strategic value of this information flow to experts will be negative. \section{Discussion} \label{sec:discussion} This chapter has developed a framework for modeling economic games with information flow. This framework expands MAIDs with optional edges, which results in a system for modeling mechanism design with Bayesian networks.
This framework can model well understood cases of privacy economics (principal agent, price differentiation) as well as the lesser understood case of expert services. The framework makes it clear how the fundamental limits of information theory, as well as the nature of information flow as causal flow with nomic associations, relate to the economics of information services. The models show that sometimes personal information flow improves market efficiency at the expense of consumers and riskier agents. These models allow for a direct comparison between social values and the outcomes of policies allowing or disallowing personal information flows. (Section \ref{sec:single-context}.) This framework can also model cases where information flows between economic contexts (Section \ref{sec:cross-context}.) In particular, secondary use of personal information can play an economic role similar to primary use of personal information if and when the processes that generate the data result in reliable and useful statistical correlations. These correlations can occur when society is stratified into socioeconomic classes, as it is in reality. Central to this modeling system is a conceptual shift in how to understand the role of information flow in economics. In a framework like Contextual Integrity, information flow gets its meaning from its social context, or the way the information plays a role in an abstractly and normatively understood social sphere. This captures social expectations well, but not the reality of information flow when businesses develop infrastructure technologies that span social contexts. For this, we need a model of information flow that is more realistic. We accomplish this by modeling information flows as situated within larger causal structures. These causal structures give each individual flow its nomic associations, which are what make the information strategically useful.
The model makes clear that the strategic choices of agents in the economy are among the elements that determine the causal structure that gives information its meaning. This indicates a major source of confusion in the economics of information. Information is not a good that is bought and sold for consumption. Information is a strategic resource, part of the social and economic fabric. When information flows are bought and sold, it changes the strategic landscape of the economy. Market externalities abound, as information flows affect many parties who are not party to the transactions. Beyond these general conclusions, there are a number of more specific implications of these models which indicate directions of future work. \subsection{Privacy concerns and privacy competence} A robust empirical result is that there are different segments of the general population that have different privacy concerns. These are often presented as the marginally concerned, the pragmatic majority, and the privacy fundamentalists \cite{ackerman1999privacy,berendt2005privacy,sheehan2002toward}. This matches expectations from the economic models: some populations are more vulnerable than others to the negative effects of personal information flow. Further work is needed to test to what extent different preferences or concerns about information flow are determined by the economic situation of data subjects. Class differences may have significant effects, with implications for value-driven policy design. There is also evidence that consumers are generally not making privacy decisions in rational and informed self-interest but, rather, become much more concerned with their privacy when told facts about how personal information is used \cite{hoofnagle2014alan}. There is a disconnect between consumer expectations and fact. This may be because the most prominent privacy threats are beyond user comprehension.
Many serious privacy threats, whether they be Big Data analytics drawing conclusions from aggregated data for an unforeseen business end, a network of companies engaged in secondary uses of data shared between them, or an illicit dark web of hackers and fraudsters, are due to cross-context information flows in which the data subject plays little active role. Section \ref{sec:cross-context} shows the mechanics of how companies can gain strategic advantage by reusing personal data to the detriment of consumers. However, if consumer privacy expectations are tied to the normative expectations in specific social spheres, perhaps because these expectations are encoded as mental models or causally structured frames, then consumers cannot be expected to be competent stewards of their own personal information. Consumers cannot act strategically in their interest, individually let alone collectively, unless they are aware of how their information is being used. The true mechanics of information flow, represented here by Bayesian networks, are opaque and largely unknown. The framework provided here can be extended to take into account different degrees of knowledge about the causal structure that gives information flow its meaning. Further work is needed to understand the implications of knowledge and information asymmetry in data economy market equilibria. \subsection{Market failure} By the preceding argument, consumers are not competent to make decisions about how to control their personal information because their privacy expectations are tied to contexts that are routinely violated in practice. Potential secondary uses of personal data depend on associational properties of the data that are beyond users' comprehension. In the case of large, data-rich firms, these associational properties are discovered through aggregation and data mining by the very firms that attract consumer interaction through the expert services that they offer.
This data is then used in two-sided markets, which act as intermediaries in many other economic contexts, further complicating any prediction of the benefits and harms of disclosure. Quantitative, let alone qualitative, prediction of these harms and benefits is beyond what an individual can accomplish. In the absence of a more concrete culprit for privacy threats, security considerations raise a general case for needing to limit secondary use of personal information. On the one hand, we can consider security to be another context where personal information is used, perhaps in a secondary way. Uses of personal information which are harmful to all affected consumers include those that facilitate security threats like spearphishing (when attackers use personal information to manipulate a person into revealing security-related information or otherwise becoming a vector for a further attack) and identity theft. On the other hand, it is the possibility of harmful secondary use \emph{across all potential contexts} that makes security of personal information so important in the first place. Security in this sense is necessary for an implementation of confidentiality. The conditions appear to be ripe for classic market failure, or rather they would be if there were a market to begin with. As has been mentioned, property rights for personal data are weak if not nonexistent (see Section \ref{sec:law}). Personal data is not a good being produced by anybody in the privacy economics ecosystem. It is rather information in the \emph{strategic} sense of allowing some market actors to perform more effectively. There is no sense in which the market for personal information has the properties that would lead us to believe the market would allocate resources efficiently. Perhaps rather than ask if there is a market failure, we should be asking: what is happening, if not a market at all?
As an alternative to regulating personal data as a kind of property, some have proposed regulating personal data through tort \cite{posner1981economics, cofone2017dynamic}. Certainly some meanings of ``privacy'', such as those that refer to protection from libel, are enforceable in this way. To the extent that considering personal data to be a \emph{thing} is misleading, it may be more effective to craft data protection regulation through the framework of dignitary privacy \cite{post2017data}. However, as we have discussed, it seems unlikely that the scope of consumer harm or benefit can be adequately assessed given the scale of the empirical problems involved. Another alternative is strengthened data protection laws for two-sided markets involving targeting, such as advertising on social media platforms. As we have noted, in most expert service sectors, including health care, finance, education, and so on, there are existing sectoral data protection laws ensuring confidentiality. The existence of these laws is an indication that without them, these expert service markets would implode in market failure. If protecting confidential information from secondary use (through austere prohibitions on disclosure and security investments) is a form of service \emph{quality}, and this quality is difficult for consumers to assess independently, then this information asymmetry about service quality would result in a market failure along the lines of Akerlof's market for ``lemons'' \cite{akerlof1970market}. Since unregulated two-sided markets are in the senses described above equivalent to providing unrestricted secondary use to other firms, perhaps present economic conditions are just such a market failure.
\subsection{Purposes} As discussed in Section \ref{sec:GDPR}, the EU's GDPR attempts to limit the kinds of privacy violations due to secondary use of personal information through \emph{purpose restrictions}, which place restrictions on the goals for which collected data may be used. Personal data may be processed only for purposes to which the data subjects consent (with some exceptions). Further, data minimization requirements reduce the extent to which data is exposed to unintended purposes. As a way of creating agreement between the expectations of data subjects and the activities of data processors, this can be seen as a refinement of the notice-and-consent framework. It may be argued that purposes are easier to understand than the complexity of legal and technical reality. The rising importance of purpose binding as a privacy requirement raises the question of how the purpose of data processing can be formalized to facilitate privacy engineering. Tschantz, Datta, and Wing (2012, 2013) do formalize purpose in order to provide the basis of automatic enforcement of privacy policies. In their approach, ``an action is for a purpose if the action is part of a plan for achieving that purpose.'' They then go on to formalize this in terms of a Markov Decision Process (MDP), a way of modeling the relationship between actions, environment, policies, and outcomes that allows for a formal definition of optimal policy. A promising direction for future work is to formalize purpose binding in terms of Bayesian causality and incentives, extending the mechanism design framework introduced in this chapter. \section{Acknowledgements} This work draws on collaboration with Michael Tschantz, Helen Nissenbaum, and Anupam Datta. I thank John Chuang as well as Ignacio Cofone, Yafit Lev-Aretz, Helen Nissenbaum, John Nay, Julia Powles, Madelyn R.
Sanfilippo, Yan Shvartzshnaider, Katherine Strandberg, Bendert Zevenberger, and other members of the Privacy Research Group at the NYU School of Law for their helpful comments. I gratefully acknowledge funding support from the U.S. Defense Advanced Research Projects Agency (DARPA) under award FA8750-16-2-0287. The opinions in this paper are those of the author and do not necessarily reflect the opinions of any funding sponsor or the United States Government. \printbibliography \appendix \section{Information theory theorems} \label{appendix:information-theory-theorems} This appendix contains proofs for several theorems extending the well-known Data Processing Inequality in information theory to configurations of random variables beyond a triplet Markov chain. The motivation for these theorems is the desire to model information flow through a world modeled as a Bayesian network, where information flow is determined by causal flows and nomic associations. Nomic association here is measured as mutual information between two variables. The Chain Rule for mutual information and the Markov properties of a Bayesian network make it possible to prove several theorems that are, as far as we know, new. \subsection{Triplet Structures} The Data Processing Inequality is a standard theorem in information theory. It concerns the mutual information of three variables arranged in a Markov chain. \begin{dfn} Random variables $X, Y, Z$ are said to \emph{form a Markov chain in that order} (denoted $X \rightarrow Y \rightarrow Z$) if the conditional distribution of $Z$ depends only on $Y$ and is conditionally independent of $X$.
Specifically, $X,Y,Z$ form a Markov chain $X \rightarrow Y \rightarrow Z$ if the joint probability mass function can be written as \begin{equation} p(x,y,z) = p(x)p(y \vert x)p(z \vert y) \end{equation} \cite{cover2012elements} \end{dfn} \begin{thm}[Data Processing Inequality] Given a probability model defined by the following (Markov chain): \begin{center} \begin{tikzcd} X \arrow[r] & Y \arrow[r] & Z \\ \end{tikzcd} \end{center} where $X \independent Z \vert Y$, then it must be that $I(X;Y) \geq I(X;Z)$. \end{thm} \begin{proof} From \cite{cover2012elements}. By the Chain Rule, mutual information can be expanded in two different ways: \begin{equation} \begin{split} I(X;Y,Z) & = I(X;Z) + I(X;Y \vert Z) \\ & = I(X;Y) + I(X;Z \vert Y) \end{split} \end{equation} Since $X$ and $Z$ are conditionally independent given $Y$, we have $I(X;Z \vert Y) = 0$. Since $I(X;Y \vert Z) \geq 0$, we have \begin{equation} I(X;Y) \geq I(X;Z) \end{equation} We have equality if and only if $I(X;Y \vert Z) = 0$ (i.e. $X \rightarrow Z \rightarrow Y$ forms a Markov chain). Similarly, one can prove that $I(Y;Z) \geq I(X;Z)$. \end{proof} Bayesian networks are a generalization of Markov chains. A Bayesian network models the probability distribution between many random variables as a directed acyclic graph. The conditional probability distribution of each variable is defined in terms of the graphical parents $pa(\cdot)$ of each variable, i.e. each $X_i$ is distributed according to $P(X_i \vert pa(X_i))$. The joint distribution is \begin{equation} P(X_1, X_2, ..., X_n) = \prod_{i=1}^{n}P(X_i \vert pa(X_i)) \end{equation} We can now prove several theorems that are similar to the Data Processing Inequality but for other probabilistic structures besides Markov chains. \begin{thm}[Data Sourcing Inequality] Given a probability model defined by the following (Common Cause): \begin{center} \begin{tikzcd} & Y \arrow[dl] \arrow[dr] & \\ X & & Z \end{tikzcd} \end{center} then it must be that $I(X;Y) \geq I(X;Z)$.
\end{thm} \begin{proof} The implication of the common cause structure is that \begin{equation} p(x,y,z) = p(x \vert y )p(y)p(z \vert y). \end{equation} It follows that $X \independent Z \vert Y$. The rest of the proof is identical to the previous proof. \end{proof} \begin{thm}[Unobserved common effect inequality] Given variables $X,Y,Z$ with the common effect structure \begin{center} \begin{tikzcd} X \arrow[dr] & & \arrow[dl] Z \\ & Y & & \end{tikzcd} \end{center} then it must be that $I(X;Y) \geq I(X;Z) = 0$. \end{thm} \begin{proof} The implication of the structure is that \begin{equation} p(x,y,z) = p(x) p(y \vert x,z) p(z). \end{equation} It follows that $X \independent Z$, therefore $I(X;Z) = 0$. Because the mutual information of two variables is always nonnegative, $$I(X;Y) \geq I(X;Z)$$ \end{proof} We note that while a similar property holds for a ``collider'' or common effect structure, its proof is different from the chain and common cause cases because, in general, it is not the case that $X \independent Z \vert Y$ for a common effect structure. For example, when $X$ and $Z$ are both fair coin tosses and $Y = X \oplus Z$, $X$ and $Z$ are independent of each other but not when conditioned on $Y$. When a common effect is in the conditioning set, the two causes depend probabilistically on each other. The extent to which these dependencies are limited can be characterized by a few equations. \begin{lem} \label{lemma:common-effect-1} Given variables $X_1,X_2,Y$ with the common effect structure $X_1 \rightarrow Y \leftarrow X_2$, then $I(X_1;X_2,Y) = I(X_1;Y \vert X_2)$. \end{lem} \begin{proof} By the Chain Rule for mutual information, $$I(X_1;X_2,Y) = I(X_1;X_2) + I(X_1;Y \vert X_2)$$ Because of the common effect structure, $I(X_1;X_2) = 0$. Therefore, $I(X_1;X_2,Y) = I(X_1;Y \vert X_2)$.
\end{proof} \begin{lem} \label{lemma:common-effect-2} Given variables $X_1,X_2,Y$ with the common effect structure $X_1 \rightarrow Y \leftarrow X_2$, then \begin{equation} \begin{split} I(Y; X_1, X_2) & = I(X_1;X_2,Y) + I(X_2;Y) \\ & = I(X_2;X_1,Y) + I(X_1;Y) \end{split} \end{equation} \end{lem} \begin{proof} \begin{equation} \begin{split} & I(X_1;X_2,Y) \\ & = I(X_1;Y \vert X_2) \\ & = H(Y \vert X_2) - H(Y \vert X_1, X_2)\\ & = H(Y \vert X_2) - H(Y) + I(Y ; X_1, X_2)\\ & = I(Y ; X_1, X_2) - I(X_2;Y) \end{split} \end{equation} which implies that $$I(X_1;X_2,Y) + I(X_2;Y) = I(Y ; X_1, X_2)$$ The proof works symmetrically for $I(X_2;X_1,Y) + I(X_1;Y) = I(Y ; X_1, X_2)$ \end{proof} \begin{lem} \label{lemma:common-effect-3} Given variables $X_1,X_2,Y$ with the common effect structure $X_1 \rightarrow Y \leftarrow X_2$, then $I(X_1;X_2 \vert Y) \leq I(X_1;X_2,Y)$. \end{lem} \begin{proof} \begin{equation} \begin{split} & I(X_1;X_2 \vert Y) \\ & = H(X_1 \vert Y) - H(X_1 \vert X_2, Y) \\ & = H(X_1) - I(X_1;Y) - H(X_1) + I(X_1;X_2,Y)\\ & = I(X_1;X_2,Y) - I(X_1;Y) \\ & \leq I(X_1;X_2,Y) \\ \end{split} \end{equation} \end{proof} \begin{thm} \label{thm:common-effect-inequality} Given variables $X_1,X_2,Y$ with the common effect structure $X_1 \rightarrow Y \leftarrow X_2$, then $$I(X_1,X_2 ; Y) \geq I(X_1;X_2,Y) = I(X_1 ; Y \vert X_2) \geq I(X_1;X_2 \vert Y)$$ \end{thm} \begin{proof} Follows from Lemmas \ref{lemma:common-effect-1}, \ref{lemma:common-effect-2}, and \ref{lemma:common-effect-3}. \end{proof} \subsection{Quartet Structures} While triplet structures (chain, common effect, and common cause) are the building blocks of larger paths in Bayesian networks, an analysis of larger, quartet structures will help us develop general theorems about the mutual information along paths. Recall that a path with a common effect is not blocked if \emph{either} the common effect \emph{or} a descendant of the effect is in the conditioning set.
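The collider behavior described above is easy to verify numerically. The following Python sketch (the helper functions are ours, written for illustration) computes the relevant mutual information terms directly from the joint distribution of the XOR example, confirming that $I(X;Z) = 0$ while $I(X;Z \vert Y) = 1$ bit:

```python
import itertools
import math

# The XOR collider: X and Z are fair coins and Y = X xor Z.
# We check I(X;Z) = 0 while I(X;Z|Y) = 1 bit.

def H(dist):
    """Entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Joint distribution over (x, z, y) with y = x xor z.
joint = {(x, z, x ^ z): 0.25 for x, z in itertools.product([0, 1], repeat=2)}

def marginal(keep):
    """Marginalize the joint onto the coordinate indices in `keep`."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# I(X;Z) = H(X) + H(Z) - H(X,Z)
i_xz = H(marginal([0])) + H(marginal([1])) - H(marginal([0, 1]))

# I(X;Z|Y) = H(X,Y) + H(Z,Y) - H(X,Z,Y) - H(Y)
i_xz_given_y = (H(marginal([0, 2])) + H(marginal([1, 2]))
                - H(marginal([0, 1, 2])) - H(marginal([2])))

print(i_xz, i_xz_given_y)  # prints: 0.0 1.0
```

Conditioning on the common effect $Y$ creates a full bit of dependence between the marginally independent causes.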
Let's look at the following structure, which we will call a \emph{wishbone} structure. \begin{center} \begin{tikzcd} X_0 \arrow[dr] & & \arrow[dl] X_1 \\ & Y \arrow[d] & \\ & Z & \end{tikzcd} \end{center} Here, $Y$ is a common effect of $X_0$ and $X_1$, and $Z$ is a descendant of $Y$. How much information flows from $X_0$ to $X_1$ when $Z$ is known? \begin{thm} \label{thm:wishbone} For variables $X_0, X_1, Y, Z$ in a wishbone structure, $$I(X_0;X_1 \vert Z) \leq I(Y;Z)$$ \end{thm} \begin{proof} Consider the quantity $I(X_0, Y; X_1, Z)$, expanded by the Chain Rule. One expansion is: \begin{equation} \begin{split} & I(X_0, Y; X_1, Z) \\ & = I(Y;Z) + I(X_0;Z \vert Y) + I(Y; X_1 \vert Z) + I(X_0; X_1 \vert Y,Z) \\ & \leq I(Y;Z) + I(X_0,Y;X_1) \end{split} \end{equation} where the inequality holds because $I(X_0;Z \vert Y) = 0$, $I(X_0;X_1 \vert Y,Z) = I(X_0;X_1 \vert Y)$, and $I(Y;X_1 \vert Z) \leq I(Y;X_1)$, all consequences of $Z$ depending only on $Y$. Another expansion is: \begin{equation} \begin{split} & I(X_0, Y; X_1, Z) \\ & = I(X_0;Z) + I(Y;X_1 \vert X_0) + \\ & I(X_0; X_1 \vert Z) + I(Y; Z \vert X_0, X_1) \\ & \geq I(Y; X_1 \vert X_0) + I(X_0; X_1 \vert Z) \end{split} \end{equation} By Theorem \ref{thm:common-effect-inequality}, we know that $I(Y;X_1 \vert X_0) = I(X_0,Y;X_1)$ for three variables in a common effect structure, as these variables are in the wishbone structure. So we can combine the two expansions and reduce: \begin{equation} \begin{split} I(Y; X_1 \vert X_0) + I(X_0; X_1 \vert Z) & \leq I(Y;Z) + I(X_0,Y;X_1) \\ I(X_0, Y; X_1) + I(X_0; X_1 \vert Z) & \leq I(Y;Z) + I(X_0,Y;X_1) \\ I(X_0; X_1 \vert Z) & \leq I(Y;Z) \end{split} \end{equation} \end{proof} \subsection{Paths} We can now look at mutual information of nodes connected by longer paths. We start with an arbitrarily long Markov chain. \begin{center} \begin{tikzcd} X_0 \arrow[r] & X_1 \arrow[r] & ... \arrow[r] & X_{n-1} \arrow[r] & X_n \end{tikzcd} \end{center} \begin{thm}[Chain Data Processing Inequality] \label{cdpi-thm} Given a Markov chain of variables $X_0, ..., X_n$ such that $X_0 \rightarrow ... \rightarrow X_n$.
It must be the case that $$I(X_0;X_n) \leq \min_i I(X_i;X_{i+1}).$$ \end{thm} \begin{proof} \label{cdpi-prf} For all $i$, by the Chain Rule for mutual information and the independence properties of the Markov chain (each conditional term below vanishes by the Markov property), \begin{equation} \label{cdpi-prf-eq1} \begin{split} & I(X_0, ..., X_{i} ; X_{i+1},...,X_n) \\ & = I(X_0,...,X_{i}; X_{i+1}) + \sum_{j=i+2}^{n} I(X_0,...,X_{i}; X_j \vert X_{i+1},...,X_{j-1}) \\ & = I(X_0,...,X_{i}; X_{i+1}) \\ & = I(X_i;X_{i+1}) + \sum_{j=i-1}^{0} I(X_{i+1}; X_{j} \vert X_i,...,X_{j+1}) \\ & = I(X_i;X_{i+1}) \end{split} \end{equation} The Chain Rule can expand the variables in arbitrary order. So we can also derive (using the fact that mutual information is always nonnegative): \begin{equation} \label{cdpi-prf-eq2} \begin{split} & I(X_0, ..., X_{i} ; X_{i+1},...,X_n) \\ &= I(X_0, ..., X_{i} ; X_n) + \sum_{j=n-1}^{i+1} I(X_{0},...,X_{i}; X_j \vert X_{j+1},...,X_n) \\ &\geq I(X_0, ..., X_{i} ; X_n) \\ &= I(X_n; X_0) + \sum_{j=1}^{i} I(X_n; X_j \vert X_{j-1}, ..., X_0) \\ &\geq I(X_n; X_0) \end{split} \end{equation} Combining these two results and generalizing across all $i$, \begin{equation} \forall i, I(X_0;X_n) \leq I(X_i;X_{i+1}) \end{equation} which entails that which is to be proven, \begin{equation} I(X_0;X_n) \leq \min_i I(X_i;X_{i+1}) \end{equation} \end{proof} Our goal is to generalize this theorem to Bayesian paths with other structures, just as in the previous section we found equivalents to the Data Processing Inequality in other triplet structures. \begin{dfn}[Path] A \emph{path} between two nodes \(X_1\) and \(X_2\) in a graph is a sequence of nodes starting with \(X_1\) and ending with \(X_2\) such that successive nodes are connected by an edge (traversing in either direction). \end{dfn} In this section, we will only consider paths isolated from any other variables.
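The Chain Data Processing Inequality can be spot-checked numerically. The sketch below (all names are illustrative) builds a short binary Markov chain with random transition matrices and confirms that the endpoint mutual information never exceeds that of the weakest link:

```python
import math
import random

# Numerically check I(X0;Xn) <= min_i I(X_i;X_{i+1}) on a
# random binary Markov chain X0 -> X1 -> X2 -> X3.
random.seed(0)

def mi(joint):
    """Mutual information (bits) of a 2x2 joint distribution."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

n = 4
# Random transition matrices T[k][a][b] = P(X_{k+1}=b | X_k=a).
T = []
for _ in range(n - 1):
    rows = []
    for _ in range(2):
        p = random.random()
        rows.append([p, 1 - p])
    T.append(rows)

# Forward marginals and pairwise link joints along the chain.
marg = [0.5, 0.5]  # X0 uniform
links = []
for t in T:
    joint = [[marg[a] * t[a][b] for b in range(2)] for a in range(2)]
    links.append(mi(joint))
    marg = [sum(joint[a][b] for a in range(2)) for b in range(2)]

# Joint of (X0, Xn): propagate P(Xn | X0) through the chain.
cond = [[1.0, 0.0], [0.0, 1.0]]
for t in T:
    cond = [[sum(cond[a][k] * t[k][b] for k in range(2)) for b in range(2)]
            for a in range(2)]
end_joint = [[0.5 * cond[a][b] for b in range(2)] for a in range(2)]

assert mi(end_joint) <= min(links) + 1e-12
print(mi(end_joint), min(links))
```

The endpoint mutual information is bounded by the weakest link, as the theorem requires, for any choice of transition matrices.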
We are interested in how to derive useful bounds on the mutual information of a path based on the mutual information of links within the path. \begin{dfn}[Mutual information of a path] The \emph{mutual information of a path} between two nodes \(X\) and \(Y\) is $I(X;Y)$. \end{dfn} \begin{thm}[Unobserved Path Data Processing Inequality] \label{thm:updpi} Given a path between $X_0$ and $X_n$ of variables $X_0, ..., X_n$, with no other connected variables. It must be the case that $$I(X_0;X_n) \leq \min_{i} I(X_i;X_{i+1}).$$ \end{thm} \begin{proof} This proof mirrors the proof of Theorem \ref{cdpi-thm}. For any $i$, consider $I(X_0,...,X_i;X_{i+1},...,X_n)$. By the logic of Equation \ref{cdpi-prf-eq1}, $I(X_0,...,X_i;X_{i+1},...,X_n) = I(X_i;X_{i+1})$. By the logic of Equation \ref{cdpi-prf-eq2}, $I(X_0,...,X_i;X_{i+1},...,X_n) \geq I(X_0;X_n)$. Therefore, $\forall i, I(X_0;X_n) \leq I(X_i;X_{i+1})$ and $I(X_0;X_n) \leq \min_i I(X_i;X_{i+1})$. \end{proof} Theorem \ref{thm:updpi} applies to any paths on the condition that none of the variables are observed. Its proof is identical to the proof for Markov chains because isolated, unobserved paths are Markov equivalent to Markov chains. Some proofs extending this result follow from the theory of Bayesian networks. Recall that there are two conditions under which a path between two variables is blocked. First, an unobserved head-to-head connection on the path blocks the path and makes the terminal nodes conditionally independent. Second, an observation of a head-to-tail or tail-to-tail node blocks the path and makes the terminal nodes conditionally independent. If the only paths between two variables are blocked, then they are d-separated and therefore independent, with zero mutual information. \begin{thm}[Blocked Path Mutual Information] \label{thm:bpmi} For any blocked path between $X_0$ and $X_n$ of variables $X_0, ..., X_n$ with no other connected variables, $I(X_0;X_n) = 0$.
\end{thm}
\begin{proof}
If the only path between $X_0$ and $X_n$ is blocked, then $X_0$ and $X_n$ are d-separated and conditionally independent.
If $X_0$ and $X_n$ are conditionally independent, then $I(X_0; X_n) = 0$.
\end{proof}
The difficult case for determining the mutual information of a path is the case where there are observed common effects on the path.
This breaks the conditions for the proof of Theorem \ref{thm:updpi}.
It is possible for $I(X_{i-1};X_{i+1}) = 0$ but $I(X_{i-1};X_{i+1} \vert X_i) > 0$.
As a simple example, consider again the case where $X_{i-1}$ and $X_{i+1}$ are fair coin tosses and $X_i = X_{i-1} \oplus X_{i+1}$.
If there are many common effect nodes on the path and only some of them are observed, then the path is blocked and the mutual information is given by Theorem \ref{thm:bpmi}; the mutual information of the path is zero.
Similarly, if there are common cause or chain triplets on the path and the central node of the triplet is observed, the mutual information of the path is trivially zero.
So we need only consider the case of a path where \emph{all and only} the common effect nodes are observed.
\begin{thm}[Path Mutual Information Theorem (PMIT)]
\label{thm:path-mutual-information}
Given a path between $X_0$ and $X_n$ of variables $\{X_0, \dots, X_n\} = \mathcal{X}$ with no other connected variables, let $\mathcal{X}_E$ be the common effect nodes, meaning only those nodes $X_i$ such that the edge structure of the path is $X_{i-1} \rightarrow X_i \leftarrow X_{i+1}$.
The mutual information of the path when all the common effects are observed is:
$$I(X_0; X_n \vert \mathcal{X}_E) \leq \min_{i}
\begin{cases}
I(X_i;X_{i+1}) & \text{if } X_i, X_{i+1} \notin \mathcal{X}_E \\
I(X_{i-1};X_{i+1} \vert X_i) & \text{if } X_i \in \mathcal{X}_E \\
\end{cases}$$
\end{thm}
\begin{proof}
For any $i$, consider
$$I(X_0,\dots, X_i;X_{i+1},\dots, X_n \vert \mathcal{X}_E)$$
By the Chain Rule for mutual information, this can be expanded as
$$\sum_{j=0}^i I(X_j;X_{i+1},\dots, X_n \vert X_0,\dots, X_{j-1},\mathcal{X}_E)$$
Consider two cases.
In the first case, $X_i \notin \mathcal{X}_E$ and $X_{i+1} \notin \mathcal{X}_E$.
By logic similar to Equation \ref{cdpi-prf-eq1} and Equation \ref{cdpi-prf-eq2}, then as before $I(X_0;X_n \vert \mathcal{X}_E) \leq I(X_i;X_{i+1})$.
In the second case, $X_i$ is a common effect node, i.e., $X_i \in \mathcal{X}_E$.
It is not possible to have two common effect nodes adjacent on a path.
So in any case where either $X_{i-1}$ or $X_{i+1}$ is in the conditioning set, the path is blocked.
We can therefore compute the mutual information and its Chain Rule expansion as:
\begin{equation}
\begin{split}
I(X_0,\dots, X_{i-1};X_{i+1},\dots, X_n \vert \mathcal{X}_E)
&= \sum_{j=i-1}^0 I(X_j;X_{i+1},\dots, X_n \vert X_{i-1},\dots, X_{j+1},\mathcal{X}_E) \\
&= I(X_{i-1};X_{i+1},\dots, X_n \vert \mathcal{X}_E) \\
&= I(X_{i-1};X_{i+1} \vert \mathcal{X}_E) \\
&= I(X_{i-1};X_{i+1} \vert X_i)
\end{split}
\end{equation}
Since once again by the logic of Equation \ref{cdpi-prf-eq2} this value is greater than or equal to the mutual information of the path, we have
$$I(X_0;X_n \vert \mathcal{X}_E) \leq I(X_{i-1};X_{i+1} \vert X_i)$$
for the cases when $X_i \in \mathcal{X}_E$.
Combining these results, we get the bound on the mutual information of the path.
\end{proof}
Note that Theorem \ref{thm:updpi} is a special case of PMIT, or Theorem \ref{thm:path-mutual-information}, where the set of common effects on the path $\mathcal{X}_E$ is empty.
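Both the collider example and the PMIT bound can be illustrated numerically. In this sketch (our own construction; the variable choices, noise level, and helper functions are illustrative assumptions), the path is $X_0 \rightarrow X_1 \leftarrow X_2 \rightarrow X_3$ with the collider $X_1$ observed, so $\mathcal{X}_E = \{X_1\}$:

```python
import math
from itertools import product

def mi(joint):
    """I(A;B) in bits from {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

def cmi(joint):
    """I(A;B|C) in bits from {(a, b, c): p}."""
    pc = {}
    for (a, b, c), p in joint.items():
        pc[c] = pc.get(c, 0.0) + p
    out = 0.0
    for c0, w in pc.items():
        if w > 0:
            sub = {(a, b): p / w for (a, b, c), p in joint.items() if c == c0}
            out += w * mi(sub)
    return out

# X0, X2: independent fair coins; X1 = X0 XOR X2 (the collider);
# X3: X2 passed through a binary symmetric channel with flip probability e.
e = 0.1
joint4 = {(x0, x0 ^ x2, x2, x3): 0.25 * (1 - e if x3 == x2 else e)
          for x0, x2, x3 in product((0, 1), repeat=3)}

def marg(idx):
    """Marginal joint over the variables at positions idx."""
    out = {}
    for xs, p in joint4.items():
        k = tuple(xs[i] for i in idx)
        out[k] = out.get(k, 0.0) + p
    return out

assert abs(mi(marg((0, 2)))) < 1e-9            # collider blocks: I(X0;X2) = 0
assert abs(cmi(marg((0, 2, 1))) - 1.0) < 1e-9  # observing X1 opens: I(X0;X2|X1) = 1
path_mi = cmi(marg((0, 3, 1)))                 # I(X0;X3 | X_E)
bound = min(cmi(marg((0, 2, 1))), mi(marg((2, 3))))
assert path_mi <= bound + 1e-9                 # the PMIT bound holds
```

Here the binding term of the minimum is the unobserved link $I(X_2;X_3) = 1 - H(e)$, and in fact the bound is met with equality because, given $X_1$, $X_2$ is a deterministic function of $X_0$.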
%*** What about if there is a common effect node,
%and a descendant of it is observed? ***
%
%\begin{center}
% \begin{tikzcd}
% X_0 \arrow[dr] & & & & \\
% & Y_0 \arrow[r] & ... \arrow[r] & Y_n \arrow[r] & Z \\
% X_1 \arrow[ur] & & & &
% \end{tikzcd}
%\end{center}
%
%\emph{TODO: Proof for this case}
%
%
%Note that Theorem \ref{thm:wishbone} generalizes to any
%variables such that the Markov
%properties implied by the wishbone structure hold.
%This can be the case even when the variables are
%part of a larger, more complex structure.
%For example, consider the structure:
%
%
%The inequality $I(X_0;X_1 \vert Z) \leq I(Y;Z)$
%still holds in this case.
%The difference is that $I(Y;Z)$ depends
%on the mutual information of the intermediate
%connections between $Y$ and $Z$.
%This brings us to our study of the mutual information
%along Bayesian network paths in general.
%
\section{Multi-Agent Influence Diagrams (MAIDs)}
\label{appendix:maid}
Multi-Agent Influence Diagrams (MAIDs) are a game-theoretic extension of Bayesian networks developed by Koller and Milch \cite{koller2003multi}.
A MAID is defined by:
\begin{enumerate}
\item A set $\mc{A}$ of agents
\item A set $\mc{X}$ of chance variables
\item A set $\mc{D}_a$ of decision variables for each agent $a \in \mc{A}$, with $\mc{D} = \bigcup_{a \in \mc{A}} \mc{D}_a$
\item A set $\mc{U}_a$ of utility variables for each agent $a \in \mc{A}$, with $\mc{U} = \bigcup_{a \in \mc{A}} \mc{U}_a$
\item A directed acyclic graph $\mc{G}$ that defines the parent function $Pa$ over $\mc{V} = \mc{X} \cup \mc{D} \cup \mc{U}$
\item For each chance variable $X \in \mc{X}$, a CPD $Pr(X \vert Pa(X))$
\item For each utility variable $U \in \mc{U}$, a CPD $Pr(U \vert Pa(U))$
\end{enumerate}
The decision variables represent moments where agents can make decisions about how to act given only the information provided by the variable's parents.
\begin{dfn}[Decision rules]
\label{dfn:decision-rule}
A \emph{decision rule} $\delta$ is a function that maps each instantiation $\vec{pa}$ of $Pa(D)$ to a probability distribution over $dom(D)$.
\end{dfn}
\begin{dfn}[Strategy]
\label{dfn:strategy}
An assignment of decision rules to every decision $D \in \mc{D}_a$ for a particular agent $a \in \mc{A}$ is called a \emph{strategy}.
\end{dfn}
\begin{dfn}[Strategy profile]
An assignment $\sigma$ of decision rules to every decision $D \in \mc{D}$ is called a \emph{strategy profile}.
A \emph{partial strategy profile} $\sigma_\mc{E}$ is an assignment of decision rules to a subset $\mc{E} \subset \mc{D}$.
$\sigma_{-\mc{E}}$ refers to the restriction of $\sigma$ to variables not in $\mc{E}$.
\end{dfn}
Decision rules have the same form as CPDs, so a MAID can be transformed into a Bayesian network by replacing every decision variable with a random variable whose CPD is the decision rule given by a strategy profile.
\begin{dfn}
If $\mc{M}$ is a MAID and $\sigma$ is a strategy profile for $\mc{M}$, then the \emph{joint distribution for $\mc{M}$ induced by $\sigma$}, denoted $P_{\mc{M}[\sigma]}$, is the joint distribution over $\mc{V}$ defined by the Bayes net where:
\begin{itemize}
\item the set of variables is $\mc{V}$;
\item for $X, Y \in \mc{V}$, there is an edge $X \rightarrow Y$ if and only if $X \in Pa(Y)$;
\item for all $X \in \mc{X} \cup \mc{U}$, the CPD for $X$ is $Pr(X \vert Pa(X))$;
\item for all $D \in \mc{D}$, the CPD for $D$ is $\sigma(D)$.
\end{itemize}
\end{dfn}
\begin{dfn}
Let $\mc{E}$ be a subset of $\mc{D}_a$ and let $\sigma$ be a strategy profile.
We say that $\sigma^*_{\mc{E}}$ is \emph{optimal for the strategy profile} $\sigma$ if, in the induced MAID $\mc{M}[\sigma_{-\mc{E}}]$, where the only remaining decisions are those in $\mc{E}$, the strategy $\sigma^*_{\mc{E}}$ is optimal, i.e., for all strategies $\sigma'_{\mc{E}}$:
$$EU_a((\sigma_{-\mc{E}},\sigma^*_{\mc{E}})) \geq EU_a((\sigma_{-\mc{E}}, \sigma'_{\mc{E}}))$$
\end{dfn}
A major contribution of \cite{koller2003multi} is their analysis of how to efficiently discover Nash equilibrium strategy profiles for MAIDs.
Their method involves analyzing the qualitative graphical structure of the MAID to discover the \emph{strategic reliance} of decision variables.
When a decision variable $D$ strategically relies on $D'$, then in principle the choice of the optimal decision rule for $D$ depends on the choice of the decision rule for $D'$.
\begin{dfn}[Strategic reliance]
\label{dfn:strategic-reliance}
Let $D$ and $D'$ be decision nodes in a MAID $\mc{M}$.
$D$ \emph{strategically relies on} $D'$ if there exist two strategy profiles $\sigma$ and $\sigma'$ and a decision rule $\delta$ for $D$ such that:
\begin{itemize}
\item $\delta$ is optimal for $\sigma$;
\item $\sigma'$ differs from $\sigma$ only at $D'$;
\end{itemize}
but no decision rule $\delta^*$ that agrees with $\delta$ on all parent instantiations $\vec{pa} \in dom(Pa(D))$ where $P_{\mc{M}[\sigma]}(\vec{pa}) > 0$ is optimal for $\sigma'$.
\end{dfn}
\begin{dfn}[s-reachable]
\label{dfn:s-reachable}
A node $D'$ is \emph{s-reachable} from a node $D$ in a MAID $\mc{M}$ if there is some utility node $U \in \mc{U}_D$ such that if a new parent $\widehat{D'}$ were added to $D'$, there would be an active path in $\mc{M}$ from $\widehat{D'}$ to $U$ given $Pa(D) \cup \{D\}$, where a path is active in a MAID if it is active in the same graph, viewed as a BN.
\end{dfn}
\begin{thm}
\label{thm:strategic-non-reliance}
If $D$ and $D'$ are two decision nodes in a MAID $\mc{M}$ and $D'$ is not s-reachable from $D$ in $\mc{M}$, then $D$ does not strategically rely on $D'$.
\end{thm}
\subsection{Tactical independence}
In our analysis in this paper, we will not compute equilibrium strategies for MAIDs.
Rather, we will focus on a newly defined property of MAIDs, tactical independence.
\begin{dfn}[Tactical independence]
\label{dfn:tactical-independence}
For decision variables $D$ and $D'$ in MAID $\mc{M}$, $D$ and $D'$ are \emph{tactically independent} for conditioning set $\mc{C}$ iff for all strategy profiles $\sigma$ on $\mc{M}$, in $P_{\mc{M}[\sigma]}$, the joint distribution for $\mc{M}$ induced by $\sigma$,
$$D \independent D' \vert \mc{C}$$
\end{dfn}
Because tactical independence depends on the independence of variables in an induced probability distribution that is representable by a Bayesian network, the d-separation tests for independence apply readily.
\begin{thm}
For decision variables $D$ and $D'$ in MAID $\mc{M}$, and for conditioning set $\mc{C}$, if $D$ and $D'$ are d-separated given $\mc{C}$ on $\mc{M}$ considered as a Bayesian network, then $D$ and $D'$ are tactically independent given $\mc{C}$.
\end{thm}
\begin{proof}
Suppose $D$ and $D'$ are d-separated given $\mc{C}$ on $\mc{M}$ considered as a Bayesian network.
For any strategy profile $\sigma$, the joint distribution for $\mc{M}$ induced by $\sigma$, $P_{\mc{M}[\sigma]}$, has the same graphical structure as $\mc{M}$ considered as a Bayesian network.
Therefore, $D$ and $D'$ are d-separated given $\mc{C}$ in the graph corresponding to $P_{\mc{M}[\sigma]}$ for all $\sigma$.
Because $D$ and $D'$ are d-separated given $\mc{C}$ in the Bayesian network, $D \independent D' \vert \mc{C}$.
\end{proof}
% Utility CPDs are deterministic
% Utility nodes have to be leaves (I bend this in my models!)
\subsection{Notation}
\label{sec:maid-notation}
We will use a slightly different graphical notation than that used by \cite{koller2003multi}.
In the models in this paper, we will denote random variables with undecorated capital letters, e.g. $A, B, C$.
We will denote strategic nodes with a tilde over a capital letter, e.g. $\tilde{A}, \tilde{B}, \tilde{C}$.
The random variable defined by the optimal strategy at a decision node, when such a variable is well-defined, will be denoted with a hat, e.g. $\hat{A}, \hat{B}, \hat{C}$.
Nodes that represent the payoff or utility to an agent will be denoted with a breve, e.g. $\breve{A}, \breve{B}, \breve{C}$.
Particular agents will be identified by a lower case letter, and the assignment of strategic and utility nodes to them will be denoted by subscript.
E.g., $\tilde{A}_q$ and $\breve{U}_q$ denote an action taken by agent $q$ and a payoff awarded to $q$, respectively.
A dotted arrow in a diagram indicates an optional edge: two separate models, one including the edge and one without it, are being considered.
When considering an instantiation of the model with the dotted edge present, we will say the model or edge is \emph{open}.
When the edge is absent, we'll say it's \emph{closed}.
\subsection{Value of Data}
\label{sec:value-of-data}
Using the MAID framework, we can develop the tools to analyze the value of data.
We proceed with several considerations.
First, we note that the meaning of data depends on the context in which it flows.
This is a consequence of the fact that information flow consists of both causal flow and the nomic associations of the information, which are due to the causal structure in which the flow takes place.
Similarly, we cannot determine the value of data except in the context of a system of causal relations that gives data its meaning.
Second, data is valuable not primarily as a good to be consumed but as a resource for determining correct actions.
The value of data is in its strategic and/or tactical value.
This means that data is valuable only insofar as it is available to an active agent.
The value of the data to that agent will be the value of the strategic or tactical advantage that the data provides.
Third, as a consequence of the second point, the effect of data on an agent's action may or may not affect outcomes for other agents.
A particular flow of data that is valuable to one agent may have negative value \emph{to other agents}.
These three points can be seen through an analysis of the MAID framework we have introduced in this section.
In this framework, the flow of data is represented by an edge in the causal graph.
The value of a data flow can be computed as the difference in utility to each agent between the open and closed cases of the game.
\subsubsection{Strategic and tactical value}
As we have distinguished between strategic reliance and tactical independence, we can distinguish between the strategic and tactical value of information.
The strategic value of an information flow to an agent is the difference in utility to that agent in the open and closed conditions of the game, given that each game is at strategic equilibrium for all players.
\begin{dfn}[Strategic value of information]
\label{dfn:strategic-value}
Given two MAID diagrams $\mc{M}_{o}$ and $\mc{M}_{c}$ that differ only by a single edge, $e$, and a strategy profile for each diagram, $\sigma_{o}$ and $\sigma_{c}$, the \emph{strategic value of $e$ to $a$} is the difference in expected utility to $a$ under the two respective induced joint distributions:
$$E(P_{\mc{M}_{o}[\sigma_{o}]}(U_a)) - E(P_{\mc{M}_{c}[\sigma_{c}]}(U_a))$$
\end{dfn}
*** This is a sketch definition.
It has some notational problems, and it does not match the descriptive text above it because there is not yet a definition of an equilibrium strategy. ***
In contrast with the strategic value of information, the tactical value of information is the value of the information to an agent given an otherwise fixed strategy profile.
We allow the agent receiving the data to make a tactical adjustment to their strategy at the decision variable at the head of the new information flow.
\begin{dfn}[Best tactical response to information]
Given two MAID diagrams $\mc{M}_{o}$ and $\mc{M}_{c}$ differing only in optional edge $e$ with head in decision variable $D$, the \emph{best tactical response to $e$} given strategy profile $\sigma$, $\hat{\delta}_{\sigma,e}$, is the decision rule $\delta$ for $D$ such that $\delta$ is optimal for $\sigma$.
\end{dfn}
*** Problem: this definition assumes a unique best $\delta$, which may not exist. ***
\begin{dfn}[Tactical value of information]
\label{def:tactical-value}
Given two MAID diagrams $\mc{M}_{o}$ and $\mc{M}_{c}$ differing only in optional edge $e$ with head in decision variable $D$, the \emph{tactical value of $e$} to agent $a$ given strategy profile $\sigma$ is the difference between the expected utility of the open condition with the best tactical response to $e$ and that of the closed condition using the original strategy:
$$EU_a((\sigma_{-D},\hat{\delta}_{\sigma,e})) - EU_a(\sigma)$$
\end{dfn}
\end{document}
\documentclass[12pt]{article}
\textwidth=163mm
\textheight=243mm
\setlength{\voffset}{-20mm}
\oddsidemargin -5mm
\evensidemargin -5mm
\usepackage{epsf}
%\usepackage[dvips]{graphicx}
\newcommand{\SDVA}{\langle S_2 \rangle}
\begin{document}
\begin{center}
{\bfseries AZIMUTHAL STRUCTURES OF PRODUCED PARTICLES\\
IN HEAVY ION INTERACTIONS}
\vskip 5mm
S.~Vok\'al$^{1 \dag}$, A.~Krav\v c\'akov\'a$^{2}$, S.~Lehock\'a$^{2}$ and G.~I.~Orlova$^{3}$
\vskip 5mm
{\small (1) {\it VBLHE, JINR, Dubna, Russia } \\
(2) {\it University of P.~J.~\v{S}af\'arik, Ko\v{s}ice, Slovakia } \\
(3) {\it Lebedev Physical Institute, Moscow, Russia } \\
$\dag$ {\it E-mail: [email protected] }}
\end{center}
\vskip 5mm
\begin{center}
\begin{minipage}{150mm}
\centerline{\bf Abstract}
The angular substructures of particles produced in ${}^{208}$Pb at 158~A~GeV/c and ${}^{197}$Au at 11.6~A~GeV/c induced interactions with Ag(Br) nuclei in an emulsion detector have been investigated.
Nonstatistical ring-like substructures of produced particles in the azimuthal plane of a collision have been found and their parameters have been determined.
\end{minipage}
\end{center}
\vskip 10mm
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
An important aim of nucleus collision investigation at high energies is to search for phenomena connected with the large densities obtained in such collisions.
As an example, the transition from the QGP (quark-gluon plasma) back to the normal hadronic phase is predicted to give large fluctuations in the number of produced particles in local regions of phase space \cite{bib01,bib02}.
The observed effects of this type are dominated by statistical fluctuations.
Significant deviations from them are only observed after removing the statistical part of the fluctuations \cite{bib03}.
In the investigation of angular structures of produced particles, two different classes were revealed, which could be referred to as jet-like and ring-like substructures.
The goal of our work was to study the ring-like substructures of produced particles in the azimuthal plane.
They occur when many particles are produced in a narrow region along the rapidity axis while at the same time being diluted over the whole azimuth.
The jet-like substructures consist of cases where particles are focused in both dimensions \cite{bib04}. \\
For the first time, individual nucleus-nucleus collisions with a ring-like substructure of produced particles in the azimuthal plane were observed more than 20 years ago in cosmic ray experiments \cite{bib05}.
Later, many nucleus-nucleus collisions with the ring-like substructure were observed in accelerator experiments at high energy \cite{bib03,bib06,bib07,bib08,bib19,bib20}.
A new mechanism of multiparticle production at high energies was proposed in \cite{bib09,bib10,bib11}.
This mechanism is similar to that of Cherenkov electromagnetic radiation.
As a hadronic analogue, one may treat an impinging nucleus as a bunch of confined quarks, each of which can emit gluons when traversing a target nucleus \cite{bib12,bib13}.
The idea of possible Cherenkov gluons relies \cite{bib09} on the experimental observation of the positive real part of the elastic forward scattering amplitude of all hadronic processes at high energies.
This is a necessary condition for such a process because, in the commonly used formula for the refractivity index, its excess over 1 is proportional to this real part.
Later, I.~M.~Dremin \cite{bib10} noticed that for such thin targets as nuclei a similar effect can appear due to the small confinement length, thus giving us a new tool for its estimate.
If the number of emitted gluons is large enough and each of them generates a mini-jet, the ring-like substructure will be observed in the target (azimuthal) diagram.
If the number of emitted gluons is not large, we will see several jets correlated in their polar, but not in their azimuthal, angles.
Central collisions of nuclei are preferred for the observation of such effects because of the large number of participating partons.
In the present study, the ring-like substructures of charged produced particles from ${}^{208}$Pb and ${}^{197}$Au induced interactions with Ag(Br) target nuclei in an emulsion detector at 158~A~GeV/c and 11.6~A~GeV/c, respectively, have been analyzed.
The comparison with the FRITIOF calculations \cite{bib14} has been made.
All used data were obtained in the framework of the EMU01 Collaboration.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiment}
The stacks of NIKFI~BR-2 nuclear photoemulsions have been irradiated horizontally by the ${}^{208}$Pb beam at 158~A~GeV/c (the CERN SPS accelerator -- experiment EMU12) and by the ${}^{197}$Au beam at 11.6~A~GeV/c (the BNL AGS accelerator -- experiment E863).
The photoemulsion method allows one to measure:
\textit{multiplicities of any charged particles}: produced particles ($N_s$) with $\beta > 0.7$, projectile fragments ($N_F$) with $\beta \approx 0.99$ and target fragments ($N_h$) with $\beta < 0.7$,
\textit{angles of particles} with a resolution of $\Delta\eta = 0.010-0.015$ rapidity units in the central region, where pseudo-rapidity is given by $\eta = -\ln(\tan(\theta/2))$ and $\theta$ is the emission angle with respect to the beam direction,
\textit{charges of projectile fragments} $Z_F$.
Further details on both experiments, the measurements and the experimental criteria can be found in \cite{bib15,bib16}.
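The pseudo-rapidity formula above is straightforward to implement; this small sketch (an illustration, not part of the experimental procedure) encodes it directly:

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), where theta is the emission angle
    with respect to the beam direction, in radians."""
    return -math.log(math.tan(theta / 2.0))

# A particle emitted perpendicular to the beam has eta = 0;
# smaller emission angles give larger pseudo-rapidities.
assert abs(pseudorapidity(math.pi / 2)) < 1e-12
assert pseudorapidity(0.1) > pseudorapidity(0.2)
```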
In this work we have analyzed:
\begin{itemize}
\item 628 Pb+Ag(Br) collisions found by along-the-track scanning.
From these collisions we have selected three centrality groups determined by the multiplicity of the produced particles: $350 \leq N_s < 700$, $700 \leq N_s < 1000$ and $N_s \geq 1000$.
As was shown in our previous paper \cite{bib18}, the criterion $N_s \geq 350$ selects only the interactions of lead nuclei at 158~A~GeV/c with the heavy emulsion targets Ag and Br with $b_{imp} < 8$~fm.
Moreover, the group with $N_s \geq 1000$ comprises the central Pb+Ag(Br) interactions with impact parameter $b_{imp} \approx (0 - 2)$~fm.
\item 1128 Au+Ag(Br) collisions found by along-the-track scanning.
From these collisions we have selected three analogous centrality groups determined by the multiplicity of the produced particles: $100 \leq N_s < 200$, $200 \leq N_s < 300$ and $N_s \geq 300$.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Method}
The method we use to search for ring-like substructures and to determine their parameters was devised in \cite{bib03}.
The produced particle multiplicity $N_d$ of the analyzed subgroup from an individual event is kept fixed.
Each $N_d$-tuple of consecutive particles along the $\eta$-axis of an individual event can then be considered as a subgroup characterized by
\textit{a size}: $\Delta\eta = \eta_{max} - \eta_{min}$, where $\eta_{min}$ and $\eta_{max}$ are the pseudo-rapidity values of the first and last particles in the subgroup,
by \textit{a density}: $\rho_d = \frac{N_d}{\Delta\eta}$,
and by \textit{an average pseudo-rapidity} (or subgroup position): $\eta_m = \frac{\sum\eta_i}{N_d}$.
Another way is to keep the $\Delta\eta$ interval fixed.
This method has been used by G.~L.~Gogiberidze et al. in papers \cite{bib19,bib20}.
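The fixed-$N_d$ sliding-window characterization described above can be sketched as follows (an illustration with toy values; the function and variable names are ours):

```python
def subgroups(etas, n_d):
    """Slide a window of n_d consecutive particles along the eta axis and
    return, for each window, the tuple
    (size d_eta, density n_d/d_eta, position mean eta)."""
    etas = sorted(etas)
    out = []
    for i in range(len(etas) - n_d + 1):
        win = etas[i:i + n_d]
        d_eta = win[-1] - win[0]          # eta_max - eta_min
        out.append((d_eta, n_d / d_eta, sum(win) / n_d))
    return out

# Toy event with four particles and subgroups of n_d = 3.
groups = subgroups([0.0, 1.0, 2.0, 4.0], 3)
assert groups[0] == (2.0, 1.5, 1.0)          # particles at 0, 1, 2
assert groups[1] == (3.0, 1.0, 7.0 / 3)      # particles at 1, 2, 4
```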
To parameterize the azimuthal structure of the subgroup in a suitable way, a parameter of the azimuthal structure $S_2 = \sum(\Delta\Phi_i)^2$ has been suggested, where $\Delta\Phi_i$ is the difference between the azimuthal angles of two neighboring particles in the investigated subgroup (starting from the first and second and ending with the last and first).
For the sake of simplicity, $\Delta\Phi$ is counted in units of full revolutions, so that $\sum(\Delta\Phi_i) = 1$.
The parameter $S_2$ is large $(S_2 \rightarrow 1)$ for particle groups with a jet-like structure and is small $(S_2 \rightarrow 1/N_d)$ for particle groups with a ring-like structure.
The expectation value of the parameter $S_2$, in the case of a stochastic scenario with independent particles in the investigated group, can be expressed analytically as
$$\SDVA = \frac{2}{N_d + 1}.$$
This expectation value can be derived from the distribution of gaps between neighbors.
What can one expect to see in the experimental $S_2$ -- distributions under different scenarios?
As was shown in \cite{bib18}, in the case of a pure stochastic scenario the normalized $S_2/\SDVA$ -- distribution would have its peak position at $S_2/\SDVA = 1$.
The existence of jet-like substructures in collisions results in an additional contribution to the $S_2/\SDVA$ -- distribution, shifted to the right in comparison with the stochastic distribution.
Analogously, the existence of ring-like substructures results in an additional contribution shifted to the left.
As a result, the summary $S_2/\SDVA$ -- distribution from these three effects may take different forms depending on their mutual order and sizes \cite{bib18}.
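The limiting cases of $S_2$ and the stochastic expectation $\SDVA = 2/(N_d+1)$ can be checked with a short simulation (an illustration only; the sample sizes and seed are arbitrary choices of ours):

```python
import random

def s2(phis):
    """S2 = sum of squared azimuthal gaps between neighbouring particles,
    with gaps measured in units of full revolutions (so they sum to 1)."""
    g = sorted(p % 1.0 for p in phis)
    gaps = [g[i + 1] - g[i] for i in range(len(g) - 1)]
    gaps.append(1.0 - g[-1] + g[0])   # wrap-around gap: last to first
    return sum(d * d for d in gaps)

n_d = 35
# Extreme cases: evenly spread azimuths (ring-like) vs all aligned (jet-like).
assert abs(s2([i / n_d for i in range(n_d)]) - 1.0 / n_d) < 1e-9
assert abs(s2([0.1] * n_d) - 1.0) < 1e-9

# Stochastic scenario: independent uniform azimuths, <S2> = 2/(n_d + 1).
random.seed(1)
trials = [s2([random.random() for _ in range(n_d)]) for _ in range(4000)]
mean_s2 = sum(trials) / len(trials)
assert abs(mean_s2 - 2.0 / (n_d + 1)) < 0.003
```

The uniform case follows because the gaps of $N_d$ independent uniform points on a circle have second moment $2/(N_d(N_d+1))$ each, summing to $2/(N_d+1)$.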
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}
The first detailed study of the average values of the parameter $S_2$ was performed in \cite{bib03}.
The azimuthal substructures of particles produced within dense and dilute groups along the rapidity axis in the central ${}^{16}$O and ${}^{32}$S induced collisions with Ag(Br) and Au targets at 200~A~GeV/c (EMU01 data sets) were analyzed.
The results were compared with different FRITIOF calculations including $\gamma$-conversion and the HBT effects.
It was concluded that jet-like and ring-like events do not exhibit significant deviations from what can be expected from stochastic emission.
The study of the $S_2$-parameter distributions for subgroups of the particles produced in ${}^{197}$Au interactions at 11.6~A~GeV/c with Ag(Br) targets in an emulsion detector was done in \cite{bib17}.
Nonstatistical ring-like substructures were found, and cone emission angles as well as their other parameters were determined.
In Fig.~\ref{fig04}(a -- c) the $S_2$ -- distributions for groups with $N_d = 35$ are shown for Au+Ag(Br) collisions with multiplicities of the produced particles $N_s > 300\; (a),\; 200 \leq N_s < 300\; (b)$ and $100 \leq N_s < 200\; (c)$.
The analogous $S_2$ spectra for subgroups with $N_d = 90$ obtained in Pb+Ag(Br) collisions are shown in Fig.~\ref{fig04}(d -- f) for the different centrality groups $N_s > 1000\; (d),\; 700 \leq N_s < 1000\; (e)$ and $350 \leq N_s < 700\; (f)$.
As one can see, in all three cases of different centralities the $S_2$ -- distributions have their peak position around the value corresponding to the stochastic scenario $(S_2/\SDVA = 1)$ and tails on the right side.
In order to study the ring-like substructures, only the left part of the $S_2$ -- distribution, where a signal of ring-like substructure may be expected, is essential.
As one can see, only the central collisions $(N_s > 300$ in Au- and $N_s > 1000$ in Pb-induced collisions with Ag(Br) targets) show a pronounced additional peak in the left part.
This indicates that certain ring-like substructures are present in these two experiments.\\
\begin{figure}[!h]
%\vspace*{8.0cm}
%\epsfysize=20cm
%\epsfxsize=8cm
\begin{center}
\epsffile{fig/sest.eps}
\end{center}
\caption[*]{
(a, b, c) The $S_2$ -- distributions for subgroups with $N_d = 35$ and different groups of shower particle multiplicity $N_s$ in Au+Ag(Br) collisions at 11.6~A~GeV/c;\\
(d, e, f) The $S_2$ -- distributions for subgroups with $N_d = 90$ and different groups of shower particle multiplicity $N_s$ in Pb+Ag(Br) collisions at 158~A~GeV/c.
}
\label{fig04}
\end{figure}
The experimental normalized $S_2/\SDVA$ -- distributions compared with the ones calculated by the FRITIOF model for the most central groups of events measured in ${}^{197}$Au and ${}^{208}$Pb induced collisions with Ag(Br) nuclei at 11.6 and 158~A~GeV/c are presented in Fig.~\ref{fig06}.
The model distributions were aligned with the experimental ones according to the position of the peak.
The FRITIOF model includes neither the ring-like nor the jet-like effects, so the model distributions are used as the statistical background.\\
\begin{figure}[!t]
%\vspace*{8.0cm}
\epsfysize=7cm
%\epsfxsize=8cm
\begin{center}
\epsffile{fig/s22.eps}
\end{center}
\caption[*]{
The experimental and FRITIOF model normalized $S_2/\SDVA$ -- distributions for central ${}^{208}$Pb and ${}^{197}$Au induced collisions with Ag(Br) targets in the emulsion detector.
Here, $N_s$ is the number of produced particles and $N_d$ is the number of particles in the analysed subgroup.
}
\label{fig06}
\end{figure}
One can see that both experimental distributions are shifted to the right, have a tail in the right part and are broader than the spectra calculated by FRITIOF.
The left parts of both experimental distributions are not as smooth as in the model, and there are some shoulders that correspond to a surplus of events in this region.
The results obtained from the experimental data after the subtraction of the statistical background are also shown in this figure.
The resultant distributions have two well-distinguishable hills, the first in the region $S_2/\SDVA < 1$, where the ring-like effects are expected, and the second in the jet-like region, $S_2/\SDVA > 1$.
The probability of the formation of the nonstatistical ring-like substructures can be estimated as the ratio of the area of the ring-like part to the full area of the experimental distribution.\\
Our preliminary results for ${}^{208}$Pb+Ag(Br) collisions at 158~A~GeV/c show that the estimated contribution of the events with nonstatistical ring-like substructures in the emission of produced particles is about $10 - 12 \%$ in the most central group of collisions with $N_s \geq 1000$.
This value slowly decreases in the two other groups of less central events with $350 \leq N_s < 700$ and $700 \leq N_s < 1000$.\\
\begin{figure}[!t]
%\vspace*{8.0cm}
\epsfysize=7cm
%\epsfxsize=8cm
\begin{center}
\epsffile{fig/pm.eps}
\end{center}
\caption[*]{
The ring-like subgroup position $(\eta_m)$ distributions for central experimental (solid histogram) and FRITIOF model (dashed histogram) ${}^{208}$Pb+Ag(Br) and ${}^{197}$Au+Ag(Br) collisions.
}
\label{fig07}
\end{figure}
To analyze the ring-like subgroup position on the pseudorapidity axis, the $\eta_m$ -- distributions for subgroups with $S_2/\SDVA < 1$ from central collisions are presented for the experimental data and the FRITIOF model in Fig.~\ref{fig07}.
One can see that the experimental distributions have a dip in the central region, where the produced particle and FRITIOF distributions have their maximum, and two symmetrical hills on both sides of the center.
For central ${}^{208}$Pb+Ag(Br) collisions the $\eta_m$ -- distribution has two peaks, one at $\eta = 1.6 - 3.2$ and the other at $\eta = 3.6 - 5.2$; the center of the produced-particle distribution is at $\eta \approx 3.5$. For central ${}^{197}$Au+Ag(Br) collisions the $\eta_m$ -- distribution has two peaks, one at $\eta = 1.2 - 2.0$ and the other at $\eta = 2.2 - 3.0$; the center of the distribution is at $\eta \approx 2.2$. The dip in the $\eta_m$ -- distributions is more pronounced for ${}^{208}$Pb+Ag(Br) interactions, which is probably connected with the larger cross section of the effect in collisions with higher multiplicity, realized at higher beam energy and for more central collisions.\\
To investigate the ring-like subgroup size $\Delta\eta$, Fig.~\ref{fig09} shows the $\Delta\eta$ -- distributions in the region of the ring-like effects $(S_2/\SDVA < 1)$ for the most central ${}^{208}$Pb $(N_s \geq 1000, N_d = 90)$ and ${}^{197}$Au $(N_s \geq 300, N_d = 35)$ induced collisions with Ag(Br) targets, compared with the FRITIOF model. One can see that there are some distinctions between the shapes of the experimental and model distributions for Pb-induced collisions: 3 or 4 peaks appear in the experimental $\Delta\eta$ -- distributions in the ring-like effect region that are not seen in the other cases. The difference for the ${}^{197}$Au data is not so obvious.
Moreover, in our previous paper \cite{bib18} it was shown that, on the one hand, there are some distinctions between the shapes of the experimental distributions for the regions $S_2/\SDVA < 1$ and $S_2/\SDVA > 1$, while, on the other hand, there are no differences in the $\Delta\eta$ -- distributions calculated by the model for the two classes of events ($S_2/\SDVA < 1$ and $S_2/\SDVA > 1$).\\
\begin{figure}[!h]
%\vspace*{8.0cm}
\epsfysize=7cm
%\epsfxsize=8cm
\begin{center}
\epsffile{fig/de.eps}
\end{center}
\caption[*]{
Comparison of the experimental and the FRITIOF model $\Delta\eta$ -- distributions for central interactions of ${}^{208}$Pb and ${}^{197}$Au nuclei with Ag(Br) targets in the ring-like region ($S_2/\SDVA < 1$).
}
\label{fig09}
\end{figure}
If the ring-like substructures appear due to an effect analogous to Cherenkov light, there may be two such substructures in a collision, forming two cones of produced particles -- one in the forward and another in the backward direction in the center-of-mass system. In that case the cones must have equal emission angles because, as is well known, the Cherenkov emission angle depends only on the refractive index of the matter. In our case, that of nuclear matter, this offers a way to measure the refractive index of nuclear matter. It is interesting to note that the refractive index of nuclear matter should change when the properties of the nuclear matter change, for example, in the case of a phase transition from normal hadronic matter to quark-gluon plasma.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}
The azimuthal ring-like substructures of produced particles from collisions induced by the 11.6~A~GeV/c ${}^{197}$Au and 158~A~GeV/c ${}^{208}$Pb beams on Ag(Br) targets in the emulsion detector have been investigated.
\begin{itemize}
\item Additional subgroups of produced particles in the region of the ring-like substructures ($S_2/\SDVA < 1$), in comparison to the FRITIOF model calculations, have been observed.
\item The difference from the FRITIOF model calculations in the $\eta_m$ -- distributions in the ring-like region $S_2/\SDVA < 1$ indicates the existence of two symmetrical $\eta_m$ -- regions of preferred emission of ring-like substructures -- one in the forward and one in the backward direction in the center-of-mass system.
\item The $\Delta\eta$ -- distribution, which gives information about the ring-like substructure size on the pseudorapidity scale, differs for the experimental data in the ring-like region ($S_2/\SDVA < 1$) from the FRITIOF model calculations.
\item The formation of nonstatistical ring-like substructures is more visible for central collisions and at higher energies.
\item The results are in good agreement with the idea that the ring-like substructures appear due to an effect analogous to Cherenkov light.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{Acknowledgements}
This work was supported by the Science and Technology Assistance Agency (Slovak Republic) under contract No. APVT-51-010002 and by RFBR grants No. 03-02-17079 and No. 04-02-16593.
\begin{thebibliography}{99}
\bibitem{bib01} E.~Stenlund {\it et al.}, Nucl. Phys. {\bf A498} (1989) 541c.
\bibitem{bib02} L.~van Hove, Z.~Phys. {\bf C21} (1984) 93.
\bibitem{bib03} M.~I.~Adamovich {\it et al.}, J.~Phys. {\bf G19} (1993) 2035.
\bibitem{bib04} E.~Stenlund {\it et al.}, Talk given at the XXII. Multiparticle Dynamics Symposium, Santiago de Compostella, Spain, July 13-17 (1992).
\bibitem{bib05} A.~B.~Apanasenko, N.~A.~Dobrotin, I.~M.~Dremin, K.~A.~Kotelnikov, Pisma v ZhETF {\bf 30} (1979) 157.
\bibitem{bib06} A.~El-Naghy and K.~S.~Abdel-Khalek, Phys. Lett.
{\bf B299} (1993) 370.
\bibitem{bib07} A.~El-Naghy {\it et al.}, J.~Phys. Soc. Jpn. {\bf 58} (1989) Suppl. 741.
\bibitem{bib08} N.~M.~Astafyeva, I.~M.~Dremin, K.~A.~Kotelnikov, hep-ex/9795003, (1997).
\bibitem{bib19} G.~L.~Gogiberidze, E.~K.~Sarkisyan, L.~K.~Gelovani, Nucl. Phys. Proc. Suppl. {\bf 92} (2001) 75.
\bibitem{bib20} G.~L.~Gogiberidze, L.~K.~Gelovani, E.~K.~Sarkisyan, Phys. Atom. Nuclei {\bf 64} (2001) 143; Yad. Fiz. {\bf 64} (2001) 147.
\bibitem{bib09} I.~M.~Dremin, Pisma v ZhETF {\bf 30} (1979) 152.
\bibitem{bib10} I.~M.~Dremin, Yad. Fiz. {\bf 33} (1981) 1357.
\bibitem{bib11} I.~M.~Dremin, Pisma v ZhETF {\bf 34} (1981) 617.
\bibitem{bib12} I.~M.~Dremin, hep-ph/0011110, (Nov 2000).
\bibitem{bib13} I.~M.~Dremin {\it et al.}, Phys. Lett. {\bf B499} (2001) 97.
\bibitem{bib14} B.~Nilsson-Almquist, E.~Stenlund, Comput. Phys. Commun. {\bf 43} (1987) 387.
\bibitem{bib15} A.~Sh.~Gaitinov {\it et al.}, Talk given at the XVII. Meeting of the EMU01 Collaboration, Dubna, Russia, May 18-20 (1999).
\bibitem{bib16} M.~I.~Adamovich {\it et al.}, Eur.~Phys.~J. {\bf A5} (1999) 429.
\bibitem{bib17} G.~I.~Orlova, A.~Krav\v c\'akov\'a, S.~Vok\'al, V.~V.~Prosin, Proc. of the Hadron Structure 2002, Her\v lany, Slovakia, 22-27 Sept 2002, edited by P.~J.~\v Saf\'arik University Ko\v sice (2003) 155.
\bibitem{bib18} S.~Vok\'al, A.~Vok\'alov\'a, A.~Krav\v c\'akov\'a, S.~Lehock\'a, G.~I.~Orlova, Preprint JINR, E1-2004-173, Dubna (2004).
\end{thebibliography}
\end{document}
%!TEX root = ../main.tex
\objective{Use but also know the limitations of inverse trigonometric functions}
The standard trigonometric functions receive angles as input and output the appropriate ratio of sides from the Unit Circle. Often we instead have the ratio of sides and wish to know the angle between them. Thinking in terms of functions, this is the inverse of the usual situation: asking for the input corresponding to a given output. But as we saw last section, the trigonometric functions all repeat each output an infinite number of times. They are not invertible. That is to say, they do not invert to functions across their entire domains. We must limit their possible inputs if their inverses are to be functions. The criteria for deciding the restricted domains are:
\begin{itemize}
\item All possible outputs must be represented
\item Be as continuous as possible
\item Be centered around the origin
\end{itemize}
insert 6 inverse relations and highlighted ranges
insert unit circle with relevant semi-circles
\subsection{Composing}
\subsubsection{Inverse inside Normal}
It is straightforward to interpret $\sin(\sin^{-1}(\frac{1}{2}))$: ``What is sine when sine is one-half?'' Very easy: it is one-half! What about $\sin(\cos^{-1}(\frac{1}{2}))$? ``What is sine, when cosine is one-half?'' We could find the angle where cosine has that value ($60^\circ$) and take the sine of that angle, but we could also recognize that arccosine is giving us adjacent-over-hypotenuse, and sine is asking for opposite-over-hypotenuse, an easy problem to solve with Pythagoras's help. It is $\frac{\sqrt{3}}{2}$. The second approach --- drawing a right triangle with two of the side-lengths known --- is a much more powerful tool, extendable to more circumstances. Consider $\cos(\sin^{-1}(x))$. ``What is cosine when sine is $x$?'' Here, arcsine is giving us opposite over hypotenuse. We can imagine a right triangle with one angle, call it $\theta$.
Sine of $\theta$ being $x$ means the opposite is $x$ and the hypotenuse is 1. The Pythagorean Theorem allows us to find the adjacent: it must be $\sqrt{1-x^2}$.
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\draw (0,0) -- (2,0) -- (2,1) -- cycle;
\node (A) at (1.2,0) [anchor=north] {$\sqrt{1-x^2}$};
\node (B) at (1.2,.5) [anchor=south] {1};
\node (D) at (2,.5) [anchor=west] {$x$};
\node (C) at (.5,.2) [anchor=west] {$\theta$};
\draw (1.9,0) -- (1.9,.1) -- (2.0,.1);
\end{tikzpicture}
\caption{A visualization of $\cos(\sin^{-1}(x))$.}
\end{centering}
\end{figure}
\subsubsection{Normal inside Inverse}
Perhaps you would be surprised at the answer if you put $\sin^{-1}(\sin(3))$ into your TI-8*. Would you expect it to answer 3? How can we verbalize what this expression is asking? ``Extend an arc 3 units and record the resulting height above the $x$-axis. What angle produces this height above the $x$-axis?'' We could interpret it visually on the unit circle like this:
\begin{figure}[h]
\begin{centering}
\begin{tikzpicture}[scale=1.5]
\draw[<->] (-1.1,0) -- (1.1,0) node [anchor=west] {$x$};
\draw[<->] (0,-1.1) -- (0,1.1) node[anchor=south] {$y$};
\draw (0,0) circle (1);
\draw[->] (0,0) ++ (0:1.1) arc (0:170:1.1) node[midway,xshift=1.5cm] {3 radians};
\draw[dotted] (-1.1,0.14) -- (1.1,0.14);
\end{tikzpicture}
\caption{3 radians is a height arcsine can find in \texttt{QI}.}
\end{centering}
\end{figure}
Notice that the height above the $x$-axis -- the sine of the angle -- is achievable in the first quadrant. Arcsine --- because it is a function and can only return one value per input --- must answer with a first-quadrant angle for this positive input: here $\pi - 3 \approx 0.14$.
\subsection{Derivatives}
How can we find the derivatives of inverse trigonometric functions? The definition of an inverse --- swapping $x$ and $y$ --- is very helpful. Inverse sine is just $x=\sin(y)$. Using implicit differentiation, we get $dx=\cos(y)\,dy$. Therefore,
$$ \frac{dy}{dx} = \frac{1}{\cos(y)} = \frac{1}{\cos(\sin^{-1}(x))} $$
As we saw above, this can be simplified.
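Carrying out that simplification with the triangle from earlier (opposite $x$, hypotenuse 1, adjacent $\sqrt{1-x^2}$) gives the closed form:

```latex
\[
  \frac{d}{dx}\,\sin^{-1}(x)
  \;=\; \frac{1}{\cos(\sin^{-1}(x))}
  \;=\; \frac{1}{\sqrt{1-x^2}},
  \qquad -1 < x < 1.
\]
```

The restriction $-1 < x < 1$ matters: at the endpoints the tangent line is vertical and the derivative is undefined.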
You will find the derivatives of all six inverse trigonometric functions in the exercises.
\documentclass{article} \usepackage{marvosym} \usepackage{dingbat} \title{OpenMP parallel for scheduling} \date{} \begin{document} \maketitle The OpenMP parallel for worksharing construct supports different types of scheduling. The objective of this exercise is to define a scheduling strategy that is best suited for a specific code, propose a parallelization and analyze the resulting behavior. \section{Package content} The \texttt{Schedules} directory contains a single file named \texttt{schedules.c}. The relevant part of this file is represented by the following two lines of code \begin{verbatim} for (n=size; n>=100; n-=100){ mat_mult(n, A+ptr[n], B+ptr[n], C+ptr[n]); \end{verbatim} At every iteration of this loop a matrix-matrix product of the form $C=A\cdot B$ is performed, where $A$, $B$ and $C$ are dense matrices of size \texttt{n}. Note that the $A$, $B$ and $C$ matrices are different at every iteration and thus there are no loop-carried dependencies. In the code, \texttt{size} is set to be equal to 1500. The code can be compiled using the \texttt{make} command: just type \texttt{make} inside the \texttt{Schedules} directory. This will generate an executable \texttt{main} file that can be launched like this \begin{verbatim} $ ./main \end{verbatim} Upon execution, the \texttt{main} program will print the execution time for the loop above. \section{Assignment} \begin{itemize} \item \smallpencil assuming the OpenMP parallel for construct has been used to parallelize the loop above, identify the scheduling type that is best suited for this code and explain your choice in the \texttt{responses.txt} file. Report in this file also the execution time of the code using 1, 2 and 4 threads with the scheduling type you have chosen and with the default, static scheduling. The observed results confirm what you expected? \item {\huge \Keyboard} use the OpenMP parallel for construct to parallelize the loop above in the \texttt{schedules.c} file. 
Compile and run the program using 1, 2 and 4 threads, with both the scheduling type you have chosen and the default static scheduling; report the corresponding execution times in the \texttt{responses.txt} file as described above. \end{itemize} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
\documentclass{article} % For LaTeX2e %\usepackage{icml2018} \usepackage{nips_2018} \usepackage{hyperref} \usepackage{url} \usepackage{epsfig} \usepackage{amssymb} \usepackage[tbtags]{amsmath} \usepackage{amsthm} \usepackage{graphics,eepic,epic} \usepackage{latexsym} \usepackage{euscript} \usepackage{subfigure} \usepackage{graphics,eepic,epic,psfrag} \usepackage{bm} \usepackage{dsfont} \usepackage{algorithm} \usepackage{wrapfig} \usepackage{tikz} \usetikzlibrary{positioning} \usetikzlibrary{shapes} \usepackage{listings} \usepackage{color} \usepackage{setspace} \usepackage{caption} \usepackage{booktabs} \usepackage{nicefrac} \RequirePackage[noend]{algorithmic} % ADDED TO THIS LINE \definecolor{mydarkblue}{rgb}{0,0.08,0.45} \hypersetup{colorlinks=true, linkcolor=mydarkblue, citecolor=mydarkblue, filecolor=mydarkblue, urlcolor=mydarkblue} \newcommand{\argmax}{\mathop{\mathrm{argmax}}\limits} \newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits} \newcommand{\minN}{\mathop{\mathrm{min}}\limits} \newcommand{\maxN}{\mathop{\mathrm{max}}\limits} \newcommand{\jon}[1]{{\color{green} #1}} \newcommand{\david}[1]{{\color{blue} #1}} \newcommand{\todo}[1]{{\color{red} #1}} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}{Corollary}[theorem] \newtheorem{lemma}[theorem]{Lemma} \newcommand{\dataSymbol}{D} %The symbol to use for data distributions. \newcommand{\train}{\dataSymbol_{Train}} %The distribution of training data. \newcommand{\test}{\dataSymbol_{Valid.}} %The distribution of testing data. % \newcommand{\prior}[1]{p \left( #1 \right)} % The prior belief distribution of a variable.(1) A parameter or hyper parameter. %\newcommand{\x}{x} %A variable data point. \newcommand{\param}{\mathrm{w}} %The model parameters. \newcommand{\paramFixed}{\param} %A fixed model parameter. \newcommand{\paramDist}{\prior{\param}} %The distribution of model parameters. \newcommand{\hyper}{\lambda} %A variable hyper parameter. \newcommand{\hyperFixed}{\hyper} %A fixed hyper parameter. 
\newcommand{\hyperDist}{\prior{\hyper}} %The distribution of hyper parameters. \newcommand{\hyperHyper}{\hyper'} %A fixed hyper hyper parameter. % \newcommand{\innerOptParamSymbol}{\param^{*}} %The symbol for the result of the inner optimization. \newcommand{\innerOptParam}[1]{\param^{*} \! \left( #1 \right)} %The optimal parameters of the model for the inner optimization.(1) Variable hyper parameters or fixed hyper parameters. \newcommand{\optParam}[1]{\param^{**}}% \! \left( #1 \right)} %The optimal parameters of the model after specifying the hyper parameter.(1) Variable hyper hyper parameters or fixed hyper hyper parameters. \newcommand{\optHyper}[1]{\hyper^{*}}% \! \left( #1 \right)} %The optimal hyper parameter according to loss and the hyper parameter.(1) A variable or fixed hyper hyper parameter. % \newcommand{\lossSymbol}{\mathop{\mathcal{L}}} %The symbol to use for loss functions. \newcommand{\lossSymbolInner}{\lossSymbol_{\mathrm{Train}}} %The symbol to use for the inner loss functions. \newcommand{\lossSymbolOuter}{\lossSymbol_{\mathrm{Valid.}}} %The symbol to use for the outer loss functions. \newcommand{\loss}[3]{\lossSymbol_{#1} \left(#2, #3 \right)} %The total loss for a particular objective.(1) Training or Testing.(2) The variable model parameters or optimal model parameters.(3) Hyper parameters or hyper hyper parameters. % \newcommand{\innerLoss}[2]{\lossSymbolInner \! \left( #1, #2 \right)} %The loss function for the inner optimization.(1) The model parameters or the hypernets output.(2) A variable hyper parameter, or a fixed hyper parameter. \newcommand{\innerOpt}{\argmin_{\param} \innerLoss{\param}{\hyper}} %The result of the inner optimization. \newcommand{\outerLoss}[1]{\lossSymbolOuter \! \left( #1 \right)}%, \hyperHyper \right)} %The loss function for the outer optimization.(1) The optimal model parameters, or the inner optimization. 
\newcommand{\outerOpt}[1]{\argmin_{\hyper} \outerLoss{#1}} %The result of the inner optimization.(1) The optimal model parameters, or the inner optimization. % \newcommand{\predictionLoss}[2]{\lossSymbol_{\mathrm{Pred}} ( #1, #2 )} %The loss from predictions of the model.(1) Training or Testing or data point. \newcommand{\regLoss}[2]{\lossSymbol_{\mathrm{Reg}} ( #1, #2 )}%The loss for the regularization of the model.(1) Hyper parameters or hyper hyper parameters.This is typically introduced in a principled fashion with MAP. %\newcommand{\sumPredictionLoss}[1]{\sum_{\x \in #1} \predictionLoss{\x}{\param}} %The expanded prediction loss for each data point.Typically arises due to assumptions about data points being independent. % \newcommand{\outerUpdateSymbol}{\Lambda} %The symbol for the outer update function. \newcommand{\outerUpdate}[1]{\outerUpdateSymbol \! \left( \!#1 \!\right)} %The function for updating the outer optimization parameter. %\newcommand{\innerGrad}[1]{\nabla_{\param} \! \left( \innerLossExpand{\param}{#1} \right)} %The gradient for the inner optimization.(2) A variable hyper parameter, or a fixed hyper parameter. \newcommand{\innerUpdateSymbol}{g} %The symbol for the inner update function. \newcommand{\innerUpdate}[1]{\innerUpdateSymbol \! \left( #1 \right)} %The function for updating the inner optimization parameter. % \newcommand{\resultList}{S} % A symbol for the list which will hold the results (\hyper, \param) in the optimization procedure. \newcommand{\arbitraryData}{Train} % A symbol for an arbitrary data set like training or testing. \newcommand{\variableData}{\bf{x}} % A variable data point \newcommand{\ETrain}[1]{\mathop{\mathbb{E}}_{\variableData \sim \mathrm{Train}} \! \left[ #1 \right]} \newcommand{\EValid}[1]{\mathop{\mathbb{E}}_{\variableData \sim \mathrm{Valid.}} \! 
\left[ #1 \right]} % \newcommand{\standardReturn}{\argmin_{\hyperFixed^{(k)},\, \paramFixed^{(k)}} \outerLoss{\paramFixed^{(k)}}} %The result returned by the standard optimization algorithms. % \newcommand{\innerLossEExpand}[2]{\ETrain{\predictionLoss{\variableData}{#1}} + \regLoss{#1}{#2}} % The expected expanded training loss.(1) The model parameters that are fixed of hypernet learned.(2) The hyperparamter. \newcommand{\outerLossEExpand}[1]{\EValid{\predictionLoss{\variableData}{#1}}} % The expected expanded validation loss.(1) The hyperparamter. \newcommand{\innerLossExpand}[2]{\predictionLoss{\variableData}{#1} + \regLoss{#1}{#2}} % The expanded training loss.(1) The model parameters that are fixed of hypernet learned.(2) The hyperparamter. \newcommand{\outerLossExpand}[1]{\predictionLoss{\variableData}{#1}} % The expanded validation loss.(1) The hyperparameter. % \newcommand{\outerIter}{T_{\mathrm{outer}}} % The number of iterations for the outer optimization. \newcommand{\innerIter}{T_{\mathrm{inner}}} % The number of iterations for the inner optimization. % \newcommand{\responseParam}{\phi} %The parameters of the response function. \newcommand{\responseParamFixed}{\responseParam} %The fixed parameters of the response function. \newcommand{\responseParamDist}{\prior{\responseParam}} %The prior distribution on parameters of the response function. \newcommand{\approxResponseSymbol}[1]{\param_{#1}} %The symbol for the approximate response function.(1) A variable hypernet, or a fixed hypernet. \newcommand{\approxResponse}[2]{\approxResponseSymbol{#2} ( #1 )} %The approximate response function.(1) A variable hyperparameter or fixed hyperparameter.(2) A variable hypernet, or a fixed hypernet. \newcommand{\newReturn}{\argmin_{\hyperFixed,\, \approxResponse{\hyperFixed}{\responseParamFixed}} \outerLoss{\approxResponse{\hyperFixed}{\responseParamFixed}}} %The value returned by the new optimization algorithm. 
% \newcommand{\hypernetIter}{T_{\mathrm{hypernet}}} % The number of iterations to train the hypernet for. \newcommand{\hyperIter}{T_{\mathrm{hyperparameter}}} % The number of iterations to train the hyperparameter for. % \newcommand{\jointIter}{T_{\mathrm{joint}}} % The number of iterations for the joint hyper-training. \newcommand{\sampleRename}[1]{#1} %Rename the sampled lambdas for algLocal \newcommand{\curRename}[1]{\smash{\hat{#1}}} %Rename the current lambdas for algLocal \newcommand{\hyperDistVar}{p ( \sampleRename{\hyper} | \curRename{\hyper} )} %The distribution of hyper parameters. \newcommand{\outerGrad}{\nabla_{\curRename{\hyper}} \outerLossExpand{\approxResponse{\curRename{\hyper}}{\responseParam}}} %The gradient for the outer optimization. % \newcommand{\argminTargetVary}{\responseParam}%\approxResponse{\cdot}{\responseParam}} \newcommand{\argminTargetFix}{\responseParam}%\approxResponse{\hyper}{\responseParam}} \newcommand{\arbitrary}{x} \newcommand{\argminTargetArbitrary}{\responseParam}%\approxResponse{\arbitrary}{\responseParam}} \newcommand{\approxResponseOutput}[1]{\approxResponseSymbol{\responseParam^{*}} ( #1 )} \newcommand{\approxResponseOutputNew}[1]{\approxResponseSymbol{\responseParam^{\dagger}} ( #1 )} \newcommand{\proofLoss}{\innerLoss{\approxResponse{\hyper}{\responseParam}}{\hyper}} \newcommand{\proofTargetLoss}{\outerLoss{\approxResponse{\hyper}{\responseParam}}} %\newcommand{\proofLossVary}{\innerLoss{\approxResponse{\cdot}{\responseParam}}{\cdot}} \newcommand{\proofLossOutput}{\innerLoss{\approxResponseOutput{\hyper}}{\hyper}} \newcommand{\proofTargetLossOutput}{\outerLoss{\approxResponseOutput{\hyper}}} \newcommand{\targetLoss}{\outerLoss{\innerOptParam{\hyper}}} \newcommand{\phyper}{p \left( \hyper \right)} \newcommand{\hyperSupport}{\mathrm{support} \! \left( \phyper \right)} %\phyper \newcommand{\hyperDomain}{\textnormal{for all } \hyper \in \hyperSupport} \newcommand{\Ehyper}[1]{\mathop{\mathbb{E}}_{\phyper} \! 
\left[ #1 \right]} \newcommand{\rename}[1]{#1'} \newcommand{\EhyperFix}[1]{\mathop{\mathbb{E}}_{p \left( \rename{\hyper} \right)} \! \left[ #1 \right]} \newcommand{\proofLossFix}{\innerLoss{\approxResponse{\rename{\hyper}}{\responseParam}}{\rename{\hyper}}} \newcommand{\responseDom}{\Phi} \newcommand{\lossTrainData}[2]{\lossSymbolInner \! ( #1, \! #2, \!\variableData \!)} %The loss from predictions of the model.(1) Training or Testing or data point. \newcommand{\lossValidData}[1]{\lossSymbolOuter \! ( #1, \!\variableData \!)} %The loss from predictions of the model.(1) Training or Testing or data point. \newcommand{\lossValidDataChange}[2]{\lossSymbolOuter ( #1, #2)} \title{Stochastic Hyperparameter Optimization through Hypernetworks} % The \author macro works with any number of authors. There are two % commands used to separate the names and addresses of multiple % authors: \And and \AND. % % Using \And between authors leaves it to LaTeX to determine where to % break the lines. Using \AND forces a line break at that point. So, % if LaTeX puts 3 of 4 authors names on the first line, and the last % on the second line, try using \AND instead of \And before the third % author name. 
\author{ Jonathan Lorraine\\ Department of Computer Science\\ University of Toronto\\ \texttt{[email protected]} \And David Duvenaud\\ Department of Computer Science\\ University of Toronto\\ \texttt{[email protected]} } \begin{document} \maketitle %\twocolumn[ %\icmltitle{Stochastic Hyperparameter Optimization through Hypernetworks} %\begin{icmlauthorlist} %\icmlauthor{Jonathan Lorraine}{to} %\icmlauthor{David Duvenaud}{to} %\end{icmlauthorlist} %\icmlaffiliation{to}{Department of Computer Science, University of Toronto, Toronto, Canada} %\icmlcorrespondingauthor{Jonathan Lorraine}{[email protected]} %\icmlkeywords{Machine Learning, Meta-learning} %] %\printAffiliationsAndNotice{} \begin{abstract} Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of weights and hyperparameters. Our process trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our technique converges to locally optimal weights and hyperparameters for sufficiently large hypernetworks. We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters. 
\end{abstract} % \begin{wrapfigure}[21]{r}{0.465\textwidth} %\centering \vspace{-0.06\textheight} \begin{tikzpicture} \node[shape=circle,draw=black] (optParams) at (-0.5,0.1) {$\alpha$}; \node[shape=circle,draw=black, fill=gray] (x) at (-0.5,1) {$x$}; \node[shape=circle,draw=black, fill=gray] (t) at (-0.5,2) {$t$}; \node[shape=circle,draw=black] (lambda) at (1,-0.7) {$\hyper$}; \node[shape=rectangle,draw=black] (train) at (1,0.1) {Training}; \node[shape=circle,draw=black] (w) at (1,1) {$\param^{*}$}; \node[shape=circle,draw=black] (y) at (1,2) {$y$}; \node[shape=rectangle,draw=black] (L) at (1,2.8) {$\lossSymbolOuter$}; \path [->] (optParams) edge node {} (train); \path [->] (lambda) edge node {} (train); \path [->] (train) edge node {} (w); \path [->] (x) edge node {} (y); \path [->] (w) edge node {} (y); \path [->] (t) edge node {} (L); \path [->] (y) edge node {} (L); \node[shape=circle,draw=black] (phi) at (2.5,0.1) {{\color{red}$\responseParam$}}; \node[shape=circle,draw=black, fill=gray] (xNEW) at (2.5,1) {$x$}; \node[shape=circle,draw=black, fill=gray] (tNEW) at (2.5,2) {$t$}; \node[shape=circle,draw=black] (lambdaNEW) at (4.5,-0.7) {$\hyper$}; \node[shape=rectangle,draw=black] (hypernet) at (4.5,0.1) {{\color{red}Hypernetwork}}; \node[shape=circle,draw=black] (wNEW) at (4.5,1) {$\param^{*}$}; \node[shape=circle,draw=black] (yNEW) at (4.5,2) {$y$}; \node[shape=rectangle,draw=black] (LNEW) at (4.5,2.8) {$\lossSymbolOuter$}; \path [->] (phi) edge node {} (hypernet); \path [->] (lambdaNEW) edge node {} (hypernet); \path [->] (hypernet) edge node {} (wNEW); \path [->] (xNEW) edge node {} (yNEW); \path [->] (wNEW) edge node {} (yNEW); \path [->] (tNEW) edge node {} (LNEW); \path [->] (yNEW) edge node {} (LNEW); \node[above] at (current bounding box.north) {\,\,\,\,\,\,\,\,Cross-validation \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,Hyper-training}; \end{tikzpicture} \caption{%A standard computational graph is shown in black, while our additions are in {\color{red} 
red}. %Note that $t$ is the target for some data point $x$, which can be sampled from the training or validation sets. %We want to make predictions $y$ for each data point that are close to the target, with closeness being determined by our loss $\lossSymbol$ and hyperparameters $\hyper$. %The prediction is parameterized by $\param$ which is optimized to minimize the training loss, while the hyperparameters are optimized to minimize validation loss. %This makes the optimal parameters dependent on the hyperparameters and the optimal hyperparameters dependent on the optimal parameters. %We approximate this relationship with a hypernet parameterized by $\responseParam$ and leverage differentiability of the hypernetwork for hyperparameter optimization. \emph{Left:} A typical computational graph for cross-validation, where $\alpha$ are the optimizer parameters, and $\hyper$ are training loss hyperparameters. It is expensive to differentiate through the entire training procedure. % - see \citet{maclaurin2015gradient}. \emph{Right:} The proposed computational graph with our changes in {\color{red} red}, where $\responseParam$ are the hypernetwork parameters. We can cheaply differentiate through the hypernetwork to optimize the validation loss $\lossSymbolOuter$ with respect to hyperparameters $\hyper$. We use $x$, $t$, and $y$ to refer to a data point, its label, and a prediction respectively.} \end{wrapfigure} % \section{Introduction} Model selection and hyperparameter tuning is a significant bottleneck in designing predictive models. Hyperparameter optimization is a nested optimization: The inner optimization finds model parameters $\param$ which minimize the training loss $\lossSymbolInner$ given hyperparameters $\hyper$. 
The outer optimization chooses $\hyper$ to reduce a validation loss $\lossSymbolOuter$: % \begin{align} \outerOpt{\innerOpt} \label{eq nested} \end{align} % Standard practice in machine learning solves~\eqref{eq nested} by gradient-free optimization of hyperparameters, such as grid search or random search. %There are also methods using gradients of surrogate functions like Bayesian optimization. Each set of hyperparameters is evaluated by re-initializing weights and training the model to completion. Re-training a model from scratch is wasteful if the hyperparameters change by a small amount. Some approaches, such as Hyperband~\citep{li2016hyperband} and freeze-thaw Bayesian optimization~\citep{swersky2014freeze}, can pause model training and not waste this effort. However, these methods often scale poorly beyond 10 to 20 dimensions. How can we avoid re-training from scratch each time? Note that the optimal parameters $\param$ are a deterministic function of the hyperparameters $\hyper$: % \begin{align} \innerOptParam{\hyper} = \innerOpt \label{best response equation} \end{align} % We propose to \emph{learn this function}. Specifically, we train a neural network that takes hyperparameters as input, and outputs an approximately optimal set of weights. This formulation provides two major benefits: First, we can train the hypernetwork to convergence using stochastic gradient descent (SGD) without training any particular model to completion. Second, differentiating through the hypernetwork allows us to optimize hyperparameters with stochastic gradient-based optimization. % %\section{Background} %The validation loss gives us an estimate of the performance on unseen data, while the training loss gives us an estimate of performance on observed data. %We want to find models which perform well on unobserved data given observed data, so optimization of model parameters for training loss is nested in an optimization of models for validation loss. 
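As a minimal illustration of this standard practice, the following sketch runs random search on a hypothetical one-dimensional problem of our own construction (the toy losses, constants, and helper names below are illustrative, not taken from our experiments), re-training the inner parameters from scratch for every candidate hyperparameter:

```python
import math
import random

# Hypothetical 1-D problem (ours, for illustration only):
#   L_train(w, lam) = (w - 1)^2 + exp(lam) * w^2,
# whose exact best response is w*(lam) = 1 / (1 + exp(lam)), and
#   L_valid(w) = (w - 0.6)^2.

def train_to_convergence(lam, steps=500, lr=0.1):
    """Inner optimization: gradient descent on w from a fresh initialization."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 1.0) + 2 * math.exp(lam) * w  # dL_train/dw
        w -= lr * grad
    return w

def valid_loss(w):
    return (w - 0.6) ** 2

# Standard practice: gradient-free outer search, paying for a full
# inner optimization per candidate hyperparameter.
random.seed(0)
candidates = [random.uniform(-3.0, 1.5) for _ in range(20)]
best_lam = min(candidates, key=lambda lam: valid_loss(train_to_convergence(lam)))
```

Every candidate pays for an entire inner optimization even when it differs only slightly from a previously evaluated one; hyper-training instead amortizes this cost by learning the map from $\hyper$ to approximately optimal weights.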
\begin{figure*} %\vspace{-0.02\textheight} \centering \begin{minipage}{1.0\textwidth} \begin{tabular}{c|l} \phantom{aaaaaaaaaaaaaa} Training loss surface & \phantom{aaaaaaaa} Validation loss surface\\ \includegraphics[width=0.5\textwidth, clip, trim=10mm 0mm 0mm 15mm]{train_loss_manifold.pdf} & \includegraphics[width=0.5\textwidth, clip, trim=35mm 0mm 0mm 18mm]{valid_loss_manifold.pdf} \end{tabular} \end{minipage} \caption{ A visualization of exact (blue) and approximate (red) optimal weights as a function of hyperparameters. % \emph{Left:} The training loss surface. % \emph{Right:} The validation loss surface. The approximately optimal weights $\param_{\phi^{*}}$ are output by a linear model fit at $\curRename{\lambda}$. The true optimal hyperparameter is $\lambda^{*}$, while the hyperparameter estimated using approximately optimal weights is nearby at $\lambda_{\phi^{*}}$. \label{fig:theory1} } \vspace{-0.02\textheight} \end{figure*} \section{Training a network to output optimal weights} \label{sec.new_formulation} How can we teach a \emph{hypernetwork}~\citep{ha2016hypernetworks} to output approximately optimal weights to another neural network? The basic idea is that at each iteration, we ask a hypernetwork to output a set of weights given some hyperparameters: $\approxResponseSymbol{} = \approxResponse{\hyperFixed}{\responseParamFixed}$. Instead of updating the weights $\param$ using the training loss gradient $\nicefrac{\partial \lossSymbolInner (\param)}{\partial \param}$, we update the hypernetwork weights $\responseParamFixed$ using the chain rule: $\nicefrac{\partial \lossSymbolInner(\param_\responseParamFixed)}{\partial \param_\responseParamFixed} \nicefrac{\partial \param_\responseParamFixed}{\partial \responseParamFixed}$. 
This formulation allows us to optimize the hyperparameters $\hyper$ with the validation loss gradient $\nicefrac{\partial \lossSymbolOuter(\param_\responseParamFixed (\hyper ))}{\partial \param_\responseParamFixed (\hyper )} \nicefrac{\partial \param_\responseParamFixed (\hyper )}{\partial \hyper}$. We call this method \emph{hyper-training} and contrast it with standard training methods. % %In game theory terms, it can be seen as a best-response function of a secondary player with strategy $\param$ to a leading player with strategy $\hyper$. %\emph{Hypergradients} are gradients for updating hyperparameters. %Our methods optimize hyperparameters with hypergradients through a hypernetwork, so we call them \emph{hyper-training}. We call the function $\innerOptParam{\hyper}$ that outputs optimal weights for hyperparameters a \emph{best-response function}. At convergence, we want our hypernetwork $\approxResponse{\hyperFixed}{\responseParamFixed}$ to match the best-response function closely. Our method is closely related to the concurrent work of \citet{brock2017smash}, whose SMASH algorithm also approximates the optimal weights as a function of model architectures, to perform a gradient-free search over discrete model structures. %The first section of Algorithm~\ref{algGlobal} is identical to the SMASH algorithm. Their work focuses on efficiently estimating the performance of a variety of model architectures, while we focus on efficiently exploring continuous spaces of models. We further extend this idea by formulating an algorithm to optimize the hypernetwork and hyperparameters jointly. Joint optimization of parameters and hyperparameters addresses one of the main weaknesses of SMASH, which is that the hypernetwork must be very large to learn approximately optimal weights for many different settings.
During joint optimization, the hypernetwork need only model approximately optimal weights for the neighborhood around the current hyperparameters, allowing us to use even linear hypernetworks. %This section shows the inner optimization loop can be avoided by training a neural network to output approximately optimal weights for hyperparameters. %Let $\hyperDist$ be a distribution of hyperparameters. %A hypernetwork is constructed, $\approxResponse{\hyperFixed}{\responseParamFixed}$, parameterized by $\responseParam$ which takes hyperparameter $\hyper \sim \hyperDist$ as input and tries to output best-responding weights $\innerOptParam{\hyper}$. %Locally optimal hyperparameters and weights are returned by Algorithm~\ref{algGlobal} if the hypernetwork mapping from hyperparameters to weights is a best-response ($\approxResponseSymbol{\responseParamFixed} = \innerOptParamSymbol$). %We can do gradient descent on the weights returned by the hypernetwork to fine-tune them. \subsection{Advantages of hypernetwork-based optimization} Hyper-training is a method to learn a mapping from hyperparameters to validation loss which is differentiable and cheap to evaluate. We can compare hyper-training to other model-based hyperparameter schemes. Bayesian optimization (e.g., \citet{lizotte2008practical, snoek2012practical}) builds a model of the validation loss as a function of hyperparameters, usually using a Gaussian process (e.g., \citet{rasmussen2006gaussian}) to track uncertainty. This approach has several disadvantages compared to hyper-training. First, obtaining data for standard Bayesian optimization requires optimizing models from initialization for each set of hyperparameters. In contrast, hyper-training never needs to optimize any one model fully, avoiding difficult choices such as how many models to train and for how long. Second, standard Bayesian optimization treats the validation loss as a black-box function: ${\mathcal{\hat \lossSymbolOuter(\lambda)} = f(\lambda)}$. 
In contrast, hyper-training takes advantage of the fact that the validation loss is a known, differentiable function: ${\mathcal{\hat \lossSymbolOuter}(\lambda) = \mathcal{\lossSymbolOuter}(\approxResponse{\hyperFixed}{\responseParamFixed})}$. This information removes the need to learn a model of the validation loss. This function can also be evaluated stochastically by sampling points from the validation set. Hyper-training has the benefit of learning a mapping from hyperparameters to optimized weights, which is then substituted into the validation loss. This often provides a better inductive bias for modeling the relationship between hyperparameters and validation loss than learning the loss directly. Also, the hypernetwork learns continuous best-responses, which may be a beneficial prior for finding weights by enforcing stability. %A continuous best-response will guarantee the weights do not change much when the hyperparameter is modified slightly, which may be connected with generalization. \subsection{Limitations of hypernetwork-based optimization} We can apply this method to unconstrained continuous bi-level optimization problems with an inner loss function with inner parameters, and an outer loss function with outer parameters. What sort of parameters can be optimized by our approach? Hyperparameters typically fall into two broad categories: 1) Optimization hyperparameters, such as learning rates and initialization, which affect the choice of locally optimal point converged to, and 2) regularization or model architecture parameters which change the set of locally optimal points. Hyper-training \emph{does not have inner optimization parameters} because there is no internal training loop, so we cannot optimize these. However, we must still choose optimization parameters for the fused optimization loop. In principle, hyper-training can handle discrete hyperparameters, but it does not offer the particular advantages for them that it does for continuous hyperparameters.
%A limitation is that we may need to sample a number of hyperparameters that is exponential in their dimensionality. %We can only optimize continuous hyperparameters for the training loss - the architecture of a network is often discrete. %The hyperparameters being optimized must affect the training loss - this excludes optimization hyperparameters like learning rate. Another limitation is that our approach only proposes making local changes to the hyperparameters, and does not do uncertainty-based exploration. Uncertainty can be incorporated into the hypernetwork by using stochastic variational inference as in \citet{blundell2015weight}, and we leave this for future work. %Also, we do not optimize hyperparameters of the optimizer like learning rate and momentum. %We want to have a way to guarantee the hypernetwork is a sufficiently close approximation each time hyperparameters are updated. %Also, parameters for optimization (ex.optimizer parameters, network architecture, conditional hyperparameter distribution) can have a large impact on speed and stability of Algorithm~\ref{algLocal}. %Finally, we may need a prohibitively large number of parameters in our hypernetwork. Finally, it is not obvious how to choose the training distribution of hyperparameters $p(\hyper)$. If we do not sample a sufficient range of hyperparameters, the implicit estimated gradient of the validation loss w.r.t.\ the hyperparameters may be inaccurate. We discuss several approaches to this problem in section~\ref{joint optimization}. A clear difficulty of this approach is that hypernetworks can require several times as many parameters as the original model. For example, training a fully-connected hypernetwork with 1 hidden layer of $H$ units to output $D$ parameters requires at least $D \times H$ hypernetwork parameters. To address this problem, in section~\ref{joint optimization}, we propose an algorithm that only trains a linear model mapping hyperparameters to model weights. 
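To make the parameter-count argument concrete, the small sketch below (the helper-function names are ours) computes both counts from the layer shapes; for the $7{,}850$-weight elementary model used later in the experiments, they match the hypernetwork sizes quoted there.

```python
# Parameter counts for hypernetworks mapping n_hyper hyperparameters to
# n_weights model weights. Helper names are illustrative, not from the paper.

def hypernet_params(n_hyper, n_hidden, n_weights):
    """One-hidden-layer fully connected hypernetwork: input-to-hidden
    weights and biases, plus hidden-to-output weights and biases."""
    return n_hyper * n_hidden + n_hidden + n_hidden * n_weights + n_weights

def linear_hypernet_params(n_hyper, n_weights):
    """Linear hypernetwork: one weight matrix and one bias vector."""
    return n_hyper * n_weights + n_weights

D = 28 * 28 * 10 + 10                  # 7,850 elementary model weights
full = hypernet_params(1, 50, D)       # 400,450: dominated by the H x D term
linear = linear_hypernet_params(1, D)  # 15,700: feasible even for large D
```

The hidden-to-output block of size $H \times D$ dominates the full hypernetwork's count, which is why restricting to a linear (hidden-layer-free) hypernetwork shrinks it so dramatically.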
% \newcommand{\shiftIn}{\hspace{-0.08\textwidth}} \begin{figure*}[t] \vspace{-0.02\textheight} \begin{minipage}{0.33\textwidth} \begin{algorithm}[H] \begin{algorithmic} \captionof{algorithm}{Standard cross-validation with stochastic optimization\label{alg1}} \FOR{$i = 1, \dots, \outerIter$} \STATE $\textnormal{initialize } \paramFixed$ \STATE $\hyperFixed \!=\! \outerUpdate{\hyperFixed^{\!(1:i)}\!\!\!,\! \outerLoss{\!\paramFixed^{(1:i)\!}}} \vphantom{\curRename{\hyper}}$ \LOOP \STATE $\variableData \!\sim\! \textnormal{Training data} \vphantom{\hat{a^{i}}}$ \STATE $\paramFixed \!\mathrel{-}=\! \alpha \! \nabla_{\param} \! \lossTrainData{\param}{\hyper}$ \ENDLOOP \STATE $\hyper^{i}, \param^{i} = \hyper, \param$ \ENDFOR \STATE \STATE $i = \argmin_{i} \lossValidData{\param^{(i)}}$ \STATE \STATE Return $\hyperFixed^{(i)}, \param^{(i)}$ \end{algorithmic} \end{algorithm} \end{minipage} \begin{minipage}{0.32\textwidth} \begin{algorithm}[H] \begin{algorithmic} \captionof{algorithm}{Optimization of hypernetwork, then hyperparameters \label{algGlobal}} \STATE $\vphantom{i = 1, \dots, \outerIter}$ \STATE $\textnormal{initialize } {\color{blue}\responseParamFixed}$ \STATE ${\color{blue}\textrm{initialize } \curRename{\hyper}}$ \LOOP%FOR{$T_{\mathrm{{\color{blue}hypernetwork}}}$ steps} \STATE \shiftIn$\variableData \!\sim\! \textnormal{Training data}${\color{blue}, $\hyperFixed \!\sim\! \hyperDist$} \STATE \shiftIn${\color{blue}\responseParamFixed} \!\mathrel{-}=\! \alpha \! \nabla_{{\color{blue}\responseParam}} \!\lossTrainData{ \approxResponseSymbol{{\color{blue}\responseParam}} {\color{blue} ( \hyperFixed )}}{\hyper}$ \ENDLOOP%ENDFOR \STATE $\vphantom{\hyper^{i}, \param^{i} = \hyper, \param}$ \LOOP%FOR{{\color{blue}$T_{\mathrm{hyperparameter}}$ steps}} \STATE \shiftIn${\color{blue}\variableData \!\sim\! \textnormal{Validation data} \vphantom{\outerLoss{\paramFixed^{(i)}} < \outerLoss{\paramFixed}}}$ \STATE \shiftIn${\color{blue}\curRename{\hyperFixed} \!\mathrel{-}=\! 
\beta \nabla_{\curRename{\hyper}} \!\lossValidData{\approxResponse{\curRename{\hyper}}{\responseParamFixed}}}$ \ENDLOOP%\\%ENDFOR\\ \STATE Return $\curRename{\hyperFixed}, \approxResponseSymbol{{\color{blue}\responseParam}} {\color{blue} ( \curRename{\hyperFixed} )}$ \end{algorithmic} \end{algorithm} \end{minipage} % \begin{minipage}{0.32\textwidth} \begin{algorithm}[H] \begin{algorithmic} \captionof{algorithm}{Joint optimization of hypernetwork and hyperparameters\label{algLocal}} \STATE $\vphantom{i = 1, \dots, \outerIter}$ \STATE $\textrm{initialize } \responseParamFixed$ \STATE $\textrm{initialize } \curRename{\hyperFixed}$ %\STATE %$\vphantom{\curRename{\hyper}}$ \LOOP%FOR{$T_{{\color{blue}\mathrm{joint}}}$ steps} \STATE \shiftIn$\variableData \!\sim\! \textnormal{Training data}$, \!$\sampleRename{\hyper} \!\sim\! p ( \sampleRename{\hyper} {\color{red}| \curRename{\hyper}} )$ \STATE \shiftIn$\responseParamFixed \!\mathrel{-}=\! \alpha \! \nabla_{\responseParam} \! \lossTrainData{\approxResponse{\hyperFixed}{\responseParamFixed}}{\hyper}$ \STATE \shiftIn \STATE \shiftIn \STATE \shiftIn$\variableData \!\sim\! \textnormal{Validation data}$ \STATE \shiftIn$\curRename{\hyperFixed} \!\mathrel{-}=\! \beta \nabla_{\curRename{\hyper}} \!\lossValidData{\approxResponse{\curRename{\hyper}}{\responseParamFixed}}$ \ENDLOOP%\\%ENDFOR \\ \STATE Return $\curRename{\hyperFixed}, \approxResponse{\curRename{\hyperFixed}}{\responseParamFixed}$ \end{algorithmic} \end{algorithm} \end{minipage} \caption*{A comparison of standard hyperparameter optimization, our first algorithm, and our joint algorithm. Here, $\outerUpdateSymbol$ refers to a generic hyperparameter optimization. 
Instead of updating weights $\param$ using the loss gradient $\nicefrac{\partial \mathcal{L}(\param)}{\partial \param}$, we update hypernetwork weights $\responseParamFixed$ and hyperparameters $\hyper$ using the chain rule: $\nicefrac{\partial \lossSymbolInner (\param_\responseParamFixed)}{\partial \param_\responseParamFixed} \nicefrac{\partial \param_\responseParamFixed}{\partial \responseParamFixed}$ or $\nicefrac{\partial \lossSymbolOuter (\param_\responseParamFixed (\hyper))}{\partial \param_\responseParamFixed (\hyper)} \nicefrac{\partial \param_\responseParamFixed (\hyper) }{\partial \hyper}$ respectively. This allows our method to use gradient-based hyperparameter optimization. } \vspace{-0.02\textwidth} \label{compare with cv} \end{figure*} \subsection{Asymptotic convergence properties} Algorithm~\ref{algGlobal} trains a hypernetwork using SGD, drawing hyperparameters from a fixed distribution $p(\lambda)$. This section proves that Algorithm~\ref{algGlobal} converges to a local best-response under mild assumptions. In particular, we show that, for a sufficiently large hypernetwork, the choice of $p(\lambda)$ does not matter as long as it has sufficient support. For simplicity, our notation treats $\param_{\responseParam}$ as having a unique solution for $\responseParam$ or $\param$, although this is not true in general. % %Algorithm~\ref{algGlobal} solves for $\param_{\responseParam^{*}}$, where $\responseParam^{*} = \mathrm{argmin}_{\argminTargetVary} \Ehyper{\proofLoss}$. %We show $\hyperDomain, \proofTargetLossOutput = \targetLoss$ to prove a suitable surrogate is learned for hyperparameter optimization. %Notation as if $\param_{\responseParam}$ has a unique solution to $\responseParam$ or $\param$ is used for simplicity, but is not true in general. % \begin{theorem} \label{amoritizedExactness} Sufficiently powerful hypernetworks can learn continuous best-response functions, which minimize the expected loss for all hyperparameter distributions with convex support.
% \begin{align*} \textnormal{There exists } \responseParam^{*}, \textnormal{ such that } \hyperDomain,\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!&\\ \innerLoss{\param_{\responseParam^{*}} \left( \hyper \right)}{\hyper} = \min_{\param} \innerLoss{\param}{\hyper} \textnormal{\,\, and \,\,}\responseParam^{*} = \argmin_{\argminTargetFix} &\EhyperFix{\proofLossFix} \end{align*} \end{theorem} \vspace{-0.02\textheight} % \begin{proof} If $\param_{\responseParam}$ is a universal approximator~\citep{hornik1991approximation} and the best-response is continuous in $\hyper$ (which allows approximation by $\param_{\responseParam}$), then there exist optimal hypernetwork parameters $\responseParam^{*}$ such that for all hyperparameters $\hyper$, $\param_{\responseParam^{*}} (\hyper) = \mathrm{argmin}_{\param} \innerLoss{\param}{\hyper}$. Thus, $\innerLoss{\param_{\responseParam^{*}} \left( \hyper \right)}{\hyper} = \min_{\param} \innerLoss{\param}{\hyper}$. In other words, universal approximator hypernetworks can learn continuous best-responses. Substituting $\responseParam^{*}$ into the training loss gives ${\Ehyper{\innerLoss{\approxResponse{\hyper}{\responseParam^{*}}}{\hyper}} = \Ehyper{\min_{\argminTargetFix} \proofLoss}}$. By Jensen's inequality, $\min_{\argminTargetFix} \Ehyper{\proofLoss} \geq \Ehyper{\min_{\argminTargetFix} \proofLoss}$. To satisfy Jensen's requirements, we have $\min_{\argminTargetFix}$ as our concave function on the convex vector space of functions $\{\proofLoss \textnormal{ for } \hyper \in \hyperSupport \}$. To guarantee convexity of the vector space we require that $\hyperSupport$ is convex and $\innerLoss{\param}{\hyper} = \innerLossEExpand{\param}{\hyper}$ with $\regLoss{\param}{\hyper} = \hyper \cdot \lossSymbol ( \param)$. Thus, $\responseParam^{*} = \mathrm{argmin}_{\argminTargetFix} \Ehyper{\proofLoss}$. In other words, if the hypernetwork learns the best-response it will simultaneously minimize the loss for every point in $\hyperSupport$.
\end{proof} % Thus, having a universal approximator and a continuous best-response implies $\hyperDomain$, $\proofTargetLossOutput = \targetLoss$, because $\approxResponseOutput{\hyper} = \innerOptParam{\hyper}$. Hence, under mild conditions, we will learn a best-response in the support of the hyperparameter distribution. If the best-response is differentiable, then there is a neighborhood about each hyperparameter where the best-response is approximately linear. If the support of the hyperparameter distribution is this neighborhood, then we can learn the best-response locally with linear regression. %Since $p$ was arbitrary up to support, the probability of sampling each point in the support does not matter. %Algorithm~\ref{algGlobal} has the choice of hyperparameter distribution, $\hyperDist$, and this theorem says we only have to care about the support of $\hyperDist$ if our network is a universal approximator. In practice, there are no guarantees about the network being a universal approximator or the finite-time convergence of optimization. The optimal hypernetwork will depend on the hyperparameter distribution $p(\hyper)$, not just the support of this distribution. We appeal to experimental results to show that our method is feasible in practice. \begin{wrapfigure}[14]{r}{0.41\textwidth} \vspace{-0.065\textheight} \begin{minipage}{0.41\textwidth} \begin{algorithm}[H] \begin{algorithmic} \captionof{algorithm}{Simplified joint training of hypernetwork and hyperparameters\label{algMin}} \STATE $\textrm{initialize } \responseParamFixed, \curRename{\hyperFixed}$ \LOOP \STATE \shiftIn$\variableData \!\sim\! \textnormal{Training data}, \variableData' \!\sim\!
\textnormal{Validation data}$ \STATE \shiftIn$\responseParamFixed \mathrel{-}= \alpha \nabla_{\responseParam} \lossTrainData{\approxResponse{{\color{blue} \curRename{\hyperFixed}}}{\responseParamFixed}}{{\color{blue} \curRename{\hyperFixed}}}$ \STATE \shiftIn$\curRename{\hyperFixed} \mathrel{-}= \beta \nabla_{\curRename{\hyper}} \lossValidDataChange{\approxResponse{\curRename{\hyper}}{\responseParamFixed}}{\variableData'}$ \ENDLOOP \STATE Return $\curRename{\hyperFixed}, \approxResponse{\curRename{\hyperFixed}}{\responseParamFixed}$ \end{algorithmic} \end{algorithm} \vspace{-0.02\textheight} \caption*{Algorithm~\ref{algMin} builds on Algorithm~\ref{algLocal} by using gradient updates on $\curRename{\hyper}$ as a source of noise. This variant does not have asymptotic guarantees, but performs similarly to Algorithm~\ref{algLocal} in practice. } %\vspace{-1cm} \end{minipage} \end{wrapfigure} % \subsection{Jointly train parameters and hyperparameters} \label{joint optimization} %Algorithm~\ref{algGlobal} has limitations. %If there is limited training time, the hypernetwork's capacity may be limited. %Thus, a hypernetwork with no (or few) hidden units is used. Theorem~\ref{amoritizedExactness} holds for any $\hyperDist$. In practice, we should choose a $\hyperDist$ that puts most of its mass on promising hyperparameter values, because it may not be possible to learn a best-response for all hyperparameters due to limited hypernetwork capacity. %Also, it may be difficult to stabilize training when hyperparameters are sampled with a large support. Thus, we propose Algorithm~\ref{algLocal}, which only tries to match a best-response locally. We introduce a ``current'' hyperparameter $\curRename{\hyperFixed}$, which is updated each iteration, and a conditional hyperparameter distribution, $\hyperDistVar$, which only puts mass close to $\curRename{\hyperFixed}$.
This allows us to focus on hyperparameters that are promising transitions from our current $\curRename{\hyperFixed}$, which an unconditional $\hyperDist$ cannot do. This focus is critical for allowing the use of a reduced-capacity hypernetwork, which in turn makes large problems feasible. Algorithm~\ref{algLocal} combines the two phases of Algorithm~\ref{algGlobal}. Instead of first learning a hypernetwork that can output weights for any hyperparameter then optimizing the hyperparameters, Algorithm~\ref{algLocal} only samples hyperparameters near the current guess. This means the hypernetwork just has to be trained to estimate good-enough weights for a small set of hyperparameters. There is an extra cost of having to re-train the hypernetwork each time we update $\curRename{\hyperFixed}$. The locally-trained hypernetwork can then be used to provide gradients to update the hyperparameters based on validation set performance. %This motivates the joint training in Algorithm~\ref{algLocal}, which optimizes hyperparameters more efficiently than Algorithm~\ref{algGlobal} by only learning how to update locally. How simple can we make the hypernetwork, and still obtain useful gradients to optimize hyperparameters? Suppose the hypernetwork is a (reduced-rank) linear function of the hyperparameters and the conditional hyperparameter distribution is $\hyperDistVar = \mathcal{N} ( \curRename{\hyper}, \sigma \mathbb{I} )$ for a small $\sigma$. This hypernetwork learns a tangent hyperplane to a best-response function and only needs to make minor adjustments at each step if the hyperparameter updates are sufficiently small. Algorithm~\ref{algMin} uses simultaneous updates and achieves nearly identical results to Algorithm~\ref{algLocal} in our experiments. %In practice, trade-offs are made about the $\hyper$ distribution, the capacity of $\param_{\responseParam}$, and optimization parameters (batch size, learning rate, etc.).
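To make linear-hypernetwork hyper-training concrete, here is a minimal sketch on the same kind of hypothetical one-dimensional problem used above (toy losses, step sizes, and names are ours, not from our experiments). It follows the two-phase structure of Algorithm~\ref{algGlobal}: first train a linear hypernetwork on sampled hyperparameters, then descend the validation loss through it; the joint and simplified variants interleave these two loops instead.

```python
import math
import random

# Hypothetical 1-D problem (illustrative only):
#   L_train(w, lam) = (w - 1)^2 + exp(lam) * w^2, with exact best response
#   w*(lam) = 1 / (1 + exp(lam));  L_valid(w) = (w - 0.6)^2.
# The linear hypernetwork w_phi(lam) = a*lam + b plays the role of phi.

random.seed(0)
a, b = 0.0, 0.0          # hypernetwork parameters phi
alpha, beta = 0.05, 5.0  # hypernetwork / hyperparameter step sizes (ours)

# Phase 1: train the hypernetwork on sampled hyperparameters.
for _ in range(3000):
    lam_s = random.uniform(-2.0, 1.0)
    w = a * lam_s + b
    g = 2 * (w - 1.0) + 2 * math.exp(lam_s) * w  # dL_train/dw
    a -= alpha * g * lam_s                       # chain rule: dw/da = lam_s
    b -= alpha * g                               # chain rule: dw/db = 1

# Phase 2: descend the validation loss through the (frozen) hypernetwork.
lam = 1.0
for _ in range(500):
    w = a * lam + b
    lam -= beta * 2 * (w - 0.6) * a              # dL_valid/dlam via dw/dlam = a

# At convergence the hypernetwork output matches the validation target,
# a*lam + b ~ 0.6, with lam near the true optimum log(2/3).
```

The fitted slope $a$ approximates the derivative of the best-response, which is exactly what the hypergradient in phase 2 needs; no inner training run is ever carried to completion.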
%It is often true that as $\param_{\responseParam}$'s capacity is restricted, the $\hyper$ distribution must be densely centered on the current $\curRename{\hyper}$ to stabilize training. %Also, as $\param_{\responseParam}$'s capacity is increased or the $\hyper$ distribution spreads out, we often must train longer. \section{Related work} Our work is complementary to the SMASH algorithm of \citet{brock2017smash}; the differences are discussed in Section~\ref{sec.new_formulation}. %The first section of Algorithm~\ref{algGlobal} is identical to the SMASH algorithm. %Their work focuses on efficiently evaluating the performance of a variety of discrete model architectures, while we focus on efficiently exploring continuous spaces of models. %We also compare the ability of a hypernetwork to predict validation performance with standard black-box methods such as Gaussian processes. % proposes builds on SMASH by doing stochastic gradient descent, denoted SGD, on hyperparameters after optimizing the hypernetwork. %This as a generalization of Algorithm~\ref{alg1} where $\phyper = \delta_{\curRename{\hyper}}$ (a Dirac Delta distribution about some $\curRename{\hyper}$) and $\approxResponse{\hyper}{\responseParam} = \responseParam$ (weights not dependent on $\hyper$ because $\hyper$ can assume one value). \paragraph{Model-free approaches} Model-free approaches use only trial-and-error to explore the hyperparameter space. Simple model-free approaches applied to hyperparameter optimization include grid search and random search~\citep{bergstra2012random}. Hyperband~\citep{li2016hyperband} combines bandit approaches with modeling the learning procedure. \paragraph{Model-based approaches} Model-based approaches try to build a surrogate function, which can allow gradient-based optimization or active learning. %Model-based approaches to hyperparameter learning can be gradient-based or facilitate active learning. A common example is Bayesian optimization.
Freeze-thaw Bayesian optimization can condition on partially-optimized model performance. %add modeling optimization to Bayesian optimization. \paragraph{Optimization-based approaches} Attempts have been made to directly approximate gradients of the validation loss with respect to hyperparameters. \citet{domke2012generic} proposes to differentiate through unrolled optimization to approximate best-responses in nested optimization and \citet{maclaurin2015gradient} differentiate through unrolled learning procedures. DrMAD~\citep{fu2016drmad} approximates differentiating through an unrolled learning procedure to relax memory usage for deep neural networks. HOAG~\citep{pedregosa2016hyperparameter} finds hyperparameter gradients with implicit differentiation by deriving an implicit equation for the gradient with optimality conditions. \citet{franceschi2017forward} study forward and reverse-mode differentiation for constructing hyperparameter gradients. Also, \citet{feng2017gradient} establish conditions where the validation loss of best-responding weights is smooth, allowing gradient-based hyperparameter training. The $T1 - T2$ method of \citet{luketina2016scalable} provides an algorithm for stochastic gradient-based optimization of hyperparameters. Convergence of $T1 - T2$ depends on approximating the Hessian of the training loss for parameters with the identity matrix. \paragraph{Game theory} Best-response functions are extensively studied as a solution concept in discrete and continuous multi-agent games (e.g., \citet{fudenberg1998theory}). Games where learning a best-response can be applied include adversarial training~\citep{goodfellow2014generative}, or Stackelberg competitions (e.g., \citet{bruckner2011stackelberg}). For adversarial training, the analog of our method is a discriminator that observes the generator's parameters.
%Cross-validation is a method for model selection surveyed in \citet{arlot2010survey}, while \citet{kunapuli2008bilevel} consider the relationship between model selection and bi-level optimization problems. \newcommand{\imSize}{28} %The number of pixels in an image length/width. \newcommand{\numClass}{10} %The number of classes that images can belong to. \newcommand{\numModelParams}{7,850} %This is \imSize \cdot \imSize \cdot \numClass + \numClass. \newcommand{\regressionType}{linear } %The type of regression we are doing. \newcommand{\numValidSmall}{10,000} \newcommand{\numTestSmall}{10,000} % \newcommand{\numTrainSmall}{10} % The number of samples to use for the small training experiments. \newcommand{\hyperDimSmall}{1} %The dimensionality of the hyperparameter space. \newcommand{\realIter}{1,000} %The number of iterations to use when solving for the true model parameters. \newcommand{\stepSizeReal}{0.0001} %The step size to use for finding the real loss. \newcommand{\numHiddenGlobal}{50} %The number of hidden units in the hypernetwork. \newcommand{\batchSizeGlobal}{2} %The batch size for each update (in terms of hyperparameters). \newcommand{\stepSizeHypernetGlobal}{0.0001} %The step size to use when training the hyperparameter. \newcommand{\hyperMeanGlobal}{0} %The mean of the hyperparameter distribution for sampling hyperparameters for training the hypernet. \newcommand{\hyperVarGlobal}{1.5} %The variance of the hyperparameter distribution for sampling hyperparameters for training the hypernet. \newcommand{\numHypernetParamsGlobal}{400,450} %This is \hyperDimSmall \cdot \numHiddenGlobal + \numHiddenGlobal + \numHiddenGlobal \cdot \numModelParams + \numModelParams. % \newcommand{\batchSizeLocal}{2} %The batch size for each update (in terms of hyperparameters). \newcommand{\stepSizeHypernetLocal}{\stepSizeHypernetGlobal} %The step size to use when training the hypernet. 
\newcommand{\hyperVarLocal}{0.00001} %The variance of the conditional hyperparameter distribution for sampling hyperparameters for training the hypernet. \newcommand{\numHypernetParamsLocal}{15,700} %This is \hyperDimSmall \cdot \numModelParams + \numModelParams. \newcommand{\numHypernetItersSmall}{10} %The number of iterations to use when training the hypernet. \newcommand{\stepSizeHyperSmall}{0.1} %The step size to use when training the hyperparameter. \newcommand{\numHyperItersSmall}{1} %The number of iterations to use when training the hypernet. % \newcommand{\hyperDimMedium}{10} %The dimensionality of the hyperparameter space for the medium dimensionality hyperparameter experiment. \newcommand{\numTrainMedium}{50,000} \newcommand{\minibatchMedium}{100} \newcommand{\numValidMedium}{\numValidSmall} \newcommand{\numTestMedium}{\numTestSmall} \newcommand{\numHiddenMedium}{1} %The number of hidden units in the hypernet. \newcommand{\batchSizeMedium}{\batchSizeGlobal} %The batch size for each update (in terms of hyperparameters). \newcommand{\stepSizeHypernetMedium}{\stepSizeHypernetGlobal} %The step size to use when training the hypernet. \newcommand{\hyperVarMedium}{\hyperVarLocal} %The variance of the conditional hyperparameter distribution for sampling hyperparameters for training the hypernet. \newcommand{\numHypernetParamsMedium}{86,350} %This is \hyperDimSmall \cdot \numModelParams + \numModelParams. \newcommand{\numHypernetItersMedium}{10} %The number of iterations to use when training the hypernet. \newcommand{\numHyperItersMedium}{1} %The number of iterations to use when training the hyperparameter. \newcommand{\stepSizeHyperMedium}{0.0001} %The step size to use when training the hyperparameter. % \newcommand{\hyperDimLarge}{\numModelParams} %The dimensionality of the hyperparameter space for the large dimensionality hyperparameter experiment. 
\newcommand{\minibatchLarge}{\minibatchMedium}
\newcommand{\numTrainLarge}{\numValidMedium}
\newcommand{\numValidLarge}{\numValidMedium}
\newcommand{\numTestLarge}{\numTestMedium}
\newcommand{\numHiddenLarge}{10} %The number of hidden units in the hypernet.
\newcommand{\batchSizeLarge}{\batchSizeMedium} %The batch size for each update (in terms of hyperparameters).
\newcommand{\stepSizeHypernetLarge}{\stepSizeHypernetMedium} %The step size to use when training the hypernet.
\newcommand{\hyperVarLarge}{\hyperVarMedium} %The variance of the conditional hyperparameter distribution for sampling hyperparameters for training the hypernet.
\newcommand{\numHypernetParamsLarge}{164,860} %This is \hyperDimLarge \cdot \numHiddenLarge + \numHiddenLarge + \numHiddenLarge \cdot \numModelParams + \numModelParams.
\newcommand{\numHypernetItersLarge}{\numHypernetItersMedium} %The number of iterations to use when training the hypernet.
\newcommand{\stepSizeHyperLarge}{\stepSizeHyperMedium} %The step size to use when training the hyperparameter.
%
\newcommand{\numTrainVs}{25}
\newcommand{\numValidVs}{10,215}
\newcommand{\numHiddenLossVs}{\numModelParams}

\section{Experiments}
\label{sec.experiments}
In our experiments, we examine the standard example of stochastic gradient-based optimization of neural networks with a weight regularization penalty.
Some gradient-based methods explicitly use the gradient of a loss, while others use the gradient of a learned surrogate loss.
Hyper-training learns a surrogate best-response function and substitutes it into the real loss.
We contrast our algorithm with methods that learn the loss, like Bayesian optimization; gradient-based methods that handle only hyperparameters affecting the training loss; and gradient-based methods that can also handle optimization parameters.
The most direct comparison for hyper-training is to gradient-based methods that handle only parameters affecting the training loss, because the other methods apply to a more general set of problems.
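The parameter counts quoted in the macro comments above can be verified with a short computation (an illustrative Python check of the architectures described in this section, not code from the paper):

```python
# Sanity check (illustrative, not the paper's code) of the parameter
# counts referenced throughout this section.

im_size, num_class = 28, 10

# Elementary model: linear map from 28*28 pixels to 10 classes, plus biases.
num_model_params = im_size * im_size * num_class + num_class        # 7,850

# Global hypernetwork: 1-d hyperparameter -> 50 hidden units -> weights.
hyper_dim_small, num_hidden_global = 1, 50
global_params = (hyper_dim_small * num_hidden_global + num_hidden_global
                 + num_hidden_global * num_model_params + num_model_params)

# Local (linear) hypernetwork: 1-d hyperparameter -> weights directly.
local_params = hyper_dim_small * num_model_params + num_model_params

# Medium experiment: linear map from a 10-d hyperparameter to the weights.
hyper_dim_medium = 10
medium_params = hyper_dim_medium * num_model_params + num_model_params

# Large experiment: factorized linear hypernetwork with a 10-unit
# bottleneck on a 7,850-d hyperparameter.
hyper_dim_large, num_hidden_large = num_model_params, 10
large_params = (hyper_dim_large * num_hidden_large + num_hidden_large
                + num_hidden_large * num_model_params + num_model_params)

assert num_model_params == 7850
assert global_params == 400450
assert local_params == 15700
assert medium_params == 86350
assert large_params == 164860
```

These values match the macros \verb|\numModelParams|, \verb|\numHypernetParamsGlobal|, \verb|\numHypernetParamsLocal|, \verb|\numHypernetParamsMedium|, and \verb|\numHypernetParamsLarge|.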
Here, we write the training and validation losses as:
%
\begin{align*}
\innerLoss{\param}{\hyper} = \innerLossEExpand{\param}{\hyper},
\hspace{0.09\textwidth}
\outerLoss{\param} = \outerLossEExpand{\param}
\end{align*}
%
\begin{wrapfigure}[24]{r}{0.4\textwidth}
%\centering
\vspace{-0.01\textheight}
\includegraphics[width=0.4\textwidth]{hypernets_global_small.pdf}
%\includegraphics[width=0.4\textwidth]{hypernets_local_small.pdf}
\caption{
The validation loss of a neural net, estimated by cross-validation (crosses) or by a hypernetwork (line), which outputs $7,850$-dimensional network weights.
Cross-validation requires optimizing from scratch each time.
The hypernetwork can be used to evaluate the validation loss cheaply.
\label{fig:exp1}
}
\end{wrapfigure}
\begin{wrapfigure}[23]{r}{0.4\textwidth}
\vspace{-0.06\textheight}
%\centering
%\includegraphics[width=0.4\textwidth]{hypernets_global_small.pdf}
\includegraphics[width=0.4\textwidth]{hypernets_local_small.pdf}
\caption{
The losses of a neural net, estimated by cross-validation or a hypernetwork, which outputs $7,850$-dimensional network weights.
A linear hypernetwork has limited capacity, making it accurate only where the hyperparameter distribution puts mass.
\label{fig:exp2}
}
\end{wrapfigure}
%
In all experiments, Algorithm~\ref{algGlobal} or \ref{algLocal} is used to optimize weights with a mean squared error on MNIST~\citep{lecun1998gradient} with $\lossSymbol_{\mathrm{Reg}}$ as an $L_{2}$ weight decay penalty weighted by $\exp(\hyper)$.
The elementary model has $\numModelParams$ weights.
All hidden units in the hypernetwork have a ReLU activation~\citep{nair2010rectified} unless otherwise specified.
Autograd~\citep{maclaurin2015autograd} was used to compute all derivatives.
For each experiment, the minibatch samples $\batchSizeGlobal$ pairs of hyperparameters and up to $1,000$ training data points.
We used Adam for training the hypernetwork and hyperparameters, with a step size of $\stepSizeHypernetGlobal$.
We ran all experiments on a CPU.
%
\subsection{Learning a global best-response}
Our first experiment, shown in Figure~\ref{fig:exp1}, demonstrates learning a global approximation to a best-response function using Algorithm~\ref{algGlobal}.
To make visualization of the regularization loss easier, we use $\numTrainSmall$ training data points to exacerbate overfitting.
We compare the performance of weights output by the hypernetwork to those trained by standard cross-validation (Algorithm~\ref{alg1}).
For cross-validation, elementary weights were randomly initialized for each hyperparameter choice and optimized using Adam~\citep{kingma2014adam} for $\realIter$ iterations with a step size of $\stepSizeReal$.
When training the hypernetwork, hyperparameters were sampled from a broad Gaussian distribution: $\hyperDist = \mathcal{N} ( \hyperMeanGlobal, \hyperVarGlobal )$.
The hypernetwork has $\numHiddenGlobal$ hidden units, resulting in $\numHypernetParamsGlobal$ hypernetwork parameters.
The minimum of the best-response in Figure~\ref{fig:exp1} is near the real minimum of the validation loss, which shows a hypernetwork can satisfactorily approximate a global best-response function in small problems.

\subsection{Learning a local best-response}
Figure~\ref{fig:exp2} shows the same experiment, but using Algorithm~\ref{algLocal}.
The fused updates find a best-response approximation whose minimum matches the actual minimum faster than in the prior experiment.
The conditional hyperparameter distribution is given by $\hyperDistVar = \mathcal{N} ( \curRename{\hyperFixed}, 0.00001)$.
The hypernetwork is a linear model, with only $\numHypernetParamsLocal$ weights.
We use the same optimizer as the global best-response to update both the hypernetwork and the hyperparameters.
Again, the minimum of the best-response at the end of training minimizes the validation loss.
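The fused update scheme can be sketched end-to-end on a scalar toy problem (a minimal pure-Python sketch; the quadratic losses, step sizes, and hand-derived gradients are illustrative assumptions, not the paper's MNIST setup):

```python
import math
import random

# A minimal sketch of the fused local best-response loop: alternate a
# hypernetwork step (fit a linear best-response near the current
# hyperparameter) with a hyperparameter step (descend the validation
# loss through the hypernetwork).  All losses here are toy assumptions.

def d_train_loss_dw(w, lam):
    """d/dw of the regularized training loss (w - 1)^2 + exp(lam) * w^2."""
    return 2.0 * (w - 1.0) + 2.0 * math.exp(lam) * w

def valid_loss(w):
    """Unregularized validation loss (w - 0.7)^2."""
    return (w - 0.7) ** 2

random.seed(0)
phi0, phi1 = 0.0, 0.0   # linear hypernetwork: w(lam) = phi0 + phi1 * lam
lam_hat = 0.0           # current hyperparameter
sigma = 0.5             # scale of the conditional sampling distribution
lr_phi, lr_lam = 0.05, 0.01

for _ in range(5000):
    # Hypernetwork step: sample near lam_hat, descend the training loss
    # through w(lam) (chain rule done by hand for this tiny model).
    lam = random.gauss(lam_hat, sigma)
    g = d_train_loss_dw(phi0 + phi1 * lam, lam)
    phi0 -= lr_phi * g
    phi1 -= lr_phi * g * lam
    # Hyperparameter step: d L_V / d lam_hat = L_V'(w(lam_hat)) * phi1.
    w_hat = phi0 + phi1 * lam_hat
    lam_hat -= lr_lam * 2.0 * (w_hat - 0.7) * phi1

# The analytic best response is w*(lam) = 1 / (1 + exp(lam)), so the
# validation optimum sits where exp(lam) = 3/7, i.e. lam near -0.85.
```

On this toy, the hyperparameter drifts toward the validation optimum even though the linear hypernetwork is only ever fit locally around the current hyperparameter.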
This experiment shows that using only a locally-trained linear best-response function can give sufficient gradient information to optimize hyperparameters on a small problem.
Algorithm~\ref{algLocal} is also less computationally expensive than Algorithms~\ref{alg1} or \ref{algGlobal}.
%
\begin{wrapfigure}[38]{r}{0.45\textwidth}
\vspace{-0.05\textheight}
\includegraphics[width=0.45\textwidth]{hypernets_local_large.pdf}
\includegraphics[width=0.45\textwidth]{compare_number_layers.png}
\caption{
Validation and test losses during hyperparameter optimization with a separate $L_{2}$ weight decay applied to each weight in the model.
Thus, models with more parameters have more hyperparameters.
\emph{Top:} We solve the $7,850$-dimensional hyperparameter optimization problem with a linear model and multiple algorithms.
%The weights $\param_{\phi^{*}}$ are output by the hypernet for current hyperparameter $\curRename{\lambda}$, while random losses are for the best result of a random search.
Hypernetwork-based optimization converges to a sub-optimal solution faster than unrolled optimization from \citet{maclaurin2015gradient}.
%We also observe significant overfitting of the hyperparameters on the validation set, which may be reduced be introducing hyperhyperparameters (parameters of the hyperparameter prior).
\emph{Bottom:} Hyper-training is applied to different layer configurations in the model.
%The hand-tuned regularization parameters on the 784-10, 784-100-10, and 784-100-100-10 models have a validation losses of $0.434, 0.157$ and $0.206$ respectively.
\label{fig:exp3}
}
\end{wrapfigure}
\subsection{Hyper-training and unrolled optimization}
To compare hyper-training with other gradient-based hyperparameter optimization methods, we train models with $\hyperDimLarge$ hyperparameters and a separate $L_{2}$ weight decay applied to each weight in a 1-layer (linear) model.
The conditional hyperparameter distribution and optimizer for the hypernetwork and hyperparameters are the same as in the prior experiment.
We factorize the weights for the model by selecting a linear hypernetwork with $\numHiddenLarge$ hidden units, giving $\numHypernetParamsLarge$ weights.
Each hypernetwork iteration is $2 \cdot \numHiddenLarge$ times as expensive as an iteration on just the model, because there are as many hyperparameters as model parameters and we have a $\numHiddenLarge$-unit bottleneck.
Figure~\ref{fig:exp3}, top, shows that Algorithm~\ref{algLocal} converges more quickly than the unrolled reverse-mode optimization introduced in \citet{maclaurin2015gradient} and implemented by \citet{franceschi2017forward}.
Hyper-training reaches sub-optimal solutions because of limitations on how many hyperparameters can be sampled for each update, but overfits the validation data less than unrolling.
Standard Bayesian optimization cannot be scaled to this many hyperparameters.
Thus, this experiment shows Algorithm~\ref{algLocal} can efficiently partially optimize thousands of hyperparameters.
It may be useful to combine these methods by using a hypernetwork to output initial parameters and then unrolling several steps of optimization to differentiate through.

\subsection{Optimizing with deeper networks}
To see if we can optimize deeper networks with hyper-training, we optimize models with 1, 2, and 3 layers and a separate $L_{2}$ weight decay applied to each weight.
The conditional hyperparameter distribution and optimizer for the hypernetwork and hyperparameters are the same as in the prior experiment.
We factorize the weights for each model by selecting a hypernetwork with $\numHiddenLarge$ hidden units.
Figure~\ref{fig:exp3}, bottom, shows that Algorithm~\ref{algLocal} can scale to networks with multiple hidden layers and outperform hand-tuned settings.
As we add more layers, the difference between the validation and test losses decreases, and the model performs better on the validation set.
Future work should compare other architectures like recurrent or convolutional networks.
Additionally, note that models with more layers achieve lower training (not shown), validation, and test losses, rather than a lower training loss with higher validation or test losses.
This indicates that using weight decay on each weight could be a prior for generalization, or that hyper-training enforces another useful prior like the continuity of a best-response.

\subsection{Estimating weights versus estimating loss}
Our approach differs from Bayesian optimization, which attempts to directly model the validation loss of optimized weights, whereas we learn to predict optimal weights.
In this experiment, we untangle the reason for the better performance of our method: Is it because of a better inductive bias, or because our method can see more hyperparameter settings during optimization?
%learning optimized weights is compared to learning optimized loss for the predicted loss on held-out hyperparameters.
%Also, sampling hyperparameters is compared with using a fixed set of hyperparameters.
%This allows a decomposition of the benefits of hyper-training.
\begin{wrapfigure}[32]{r}{0.55\textwidth} %\centering %\hspace{-0.02\textwidth} \begin{tabular}{@{\hskip3pt}c @{\hskip3pt}c @{\hskip3pt}c @{\hskip3pt}c} &GP mean\hspace{0.02\textwidth}&Hyper-training fixed&\hspace{0.02\textwidth}Hyper-training\\ \rotatebox{90}{\,\,\,\,\,\,\,\,\,\,\,\, Inferred loss}\hspace{-0.02\textwidth}&\includegraphics[width=0.14\textwidth]{ax0_scatter.pdf}&\includegraphics[width=0.14\textwidth]{ax1_scatter.pdf}&\includegraphics[width=0.14\textwidth]{ax2_scatter.pdf}\\ &&True loss&\\ \rotatebox{90}{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, Frequency}&\includegraphics[width=0.14\textwidth]{ax0_hist.pdf}&\includegraphics[width=0.14\textwidth]{ax1_hist.pdf}&\includegraphics[width=0.14\textwidth]{ax2_hist.pdf}\\ &&Inferred - true loss&\\ \end{tabular} \caption{ Comparing three approaches to inferring validation loss. \emph{First column:} {\color{red}A Gaussian process}, fit on $\numTrainVs$ hyperparameters and the corresponding validation losses. \emph{Second column:} {\color{green}A hypernetwork, fit on the same $\numTrainVs$ hyperparameters} and the corresponding optimized weights. \emph{Third column:} Our proposed method, {\color{blue}a hypernetwork trained with stochastically sampled hyperparameters}. \emph{Top row:} The distribution of inferred and true losses. The diagonal black line is where predicted loss equals true loss. \emph{Bottom row:} The distribution of differences between inferred and true losses. The Gaussian process often under-predicts the true loss, while the hypernetwork trained on the same data tends to over-predict the true loss. \label{fig:exp5} } \end{wrapfigure} First, we constructed a hyper-training set: We optimized $\numTrainVs$ sets of weights to completion, given randomly-sampled hyperparameters. We chose $\numTrainVs$ samples since that is the regime in which we expect Gaussian process-based approaches to have the most significant advantage. 
We also constructed a validation set of $\numValidVs$ (optimized weight, hyperparameter) tuples generated in the same manner.
%tuples for validation are generated with the same optimizer parameters as the global experiment.
{\color{red}We then fit a Gaussian process (GP)} regression model with an RBF kernel from sklearn on the validation loss data.
{\color{green}A hypernetwork is fit to the same set of hyperparameters and data}.
%However, this hypernetwork was trained to fit optimized training weights, not optimized validation loss.
Finally, {\color{blue}we optimize another hypernetwork using Algorithm~\ref{algGlobal}}, for the same amount of time as building the GP training set.
The two hypernetworks were linear models and trained with the same optimizer parameters as the $\hyperDimLarge$-dimensional hyperparameter optimization.
%
Figure~\ref{fig:exp5} shows the distribution of prediction errors of these three models.
We can see that the Gaussian process tends to underestimate loss.
The hypernetwork trained with the same small fixed set of examples tends to overestimate loss.
We conjecture that this is due to the hypernetwork producing bad weights in regions where it does not have enough training data.
Because the hypernetwork must provide actual weights to predict the validation loss, poorly-fit regions will overestimate the validation loss.
Finally, the hypernetwork trained with Algorithm~\ref{algGlobal} produces errors tightly centered around 0.
The main takeaway from this experiment is that a hypernetwork can learn a more accurate surrogate function than a GP for an equal compute budget, because it views (noisy) evaluations of more points.
%Also, if a point is explored which minimizes the predicted validation loss, then hyper-training explores points which perform better than Bayesian optimization.
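The over-prediction of a poorly-fit weight surrogate has a simple structural component, which a toy example can isolate (the quadratic losses and interpolation surrogate here are illustrative assumptions, not the paper's setup): the predicted loss is the true validation loss of \emph{some} weights, so it is bounded below by the validation optimum, whereas a loss-predicting surrogate has no such floor.

```python
import math

# Toy illustration (assumed quadratic losses, not the paper's MNIST setup)
# of why a poorly-fit weight-predicting surrogate over-predicts: its
# predicted loss L_V(w_hat) is the true validation loss of *some* weights,
# so it can never fall below min_w L_V(w).

def best_response(lam):
    """Minimizer w*(lam) of the training loss (w - 1)^2 + exp(lam) * w^2."""
    return 1.0 / (1.0 + math.exp(lam))

def valid_loss(w):
    """Validation loss (w - 0.7)^2, minimized at w = 0.7."""
    return (w - 0.7) ** 2

# Hyperparameter whose best response hits the validation optimum:
# 1 / (1 + e^lam) = 0.7  =>  lam* = ln(3/7).
lam_star = math.log(3.0 / 7.0)

# A crude weight surrogate fit at two distant hyperparameters
# (piecewise-linear interpolation stands in for a poorly-fit hypernetwork).
lams = [-2.0, 2.0]
ws = [best_response(l) for l in lams]

def w_surrogate(lam):
    t = (lam - lams[0]) / (lams[1] - lams[0])
    return (1.0 - t) * ws[0] + t * ws[1]

pred_loss = valid_loss(w_surrogate(lam_star))    # surrogate's estimate
true_loss = valid_loss(best_response(lam_star))  # exactly 0 here

assert pred_loss >= true_loss  # bounded below by the truth
assert pred_loss > 0.0         # strictly over-predicts when misfit
```

This one-sided bound only explains the over-prediction direction; the GP's tendency to under-predict depends on its posterior mean and is not captured by this sketch.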
% %Table~\ref{gp_table} shows how varying the number of training tuples affects the hyperparameter which minimizes the predicted loss, where fixed input hyper-training uses the same fixed inputs as the Gaussian process. %Algorithm~\ref{algGlobal} consistently identifies hyperparameters with a better true performance than the other two approaches. %Code for all experiments will be made available upon publication. %\begin{table} %\begin{tabular}{r | c c c c c} % & \multicolumn{5}{c}{Evaluations of Validation Loss}\\ % Method & 10 & 25 & 100 & 250 &1000\\ %\midrule %{\color{red}Gaussian process} & 0.90 & 0.67 & 0.60 & 0.60 & 0.62\\ %{\color{green}hypernetwork trained on same evaluations} & 0.65 & \textbf{0.60} & \textbf{0.59} & \textbf{0.59} & \textbf{0.59}\\ %{\color{blue}hypernetwork trained stochastically for equivalent time} & \textbf{0.60} & 0.61 & \textbf{0.59} & \textbf{0.59} & \textbf{0.59} %Actual best hyperparameters & 0.593 & 0.593& 0.593 & 0.593 & 0.593 %\end{tabular} %\caption{Actual validation loss at the best-predicted hyperparameter setting, according to each model. %To estimate the true best validation loss, a random search was run for 10,000 hyperparameters, which did not find a better hyperparameter than a sampled hypernetwork. %\label{gp_table}} %\vspace{-0.5cm} %\end{table} \section{Conclusions and Future Work} In this paper, we addressed the question of tuning hyperparameters using gradient-based optimization, by replacing the training optimization loop with a differentiable hypernetwork. %Furthermore, we %\paragraph{Contributions} %The following contributions are made for continuous hyperparameters: %\begin{itemize} %\item Presented algorithms that efficiently learn a differentiable approximation to a best-response without nested optimization. We gave a theoretical justification that sufficiently large networks will learn the best-response for all hyperparameters viewed in training. 
We also presented a simpler and more scalable method that jointly optimizes both hyperparameters and hypernetwork weights, allowing our method to work with manageably-sized hypernetworks.
Experimentally, we showed that hypernetworks could provide a better inductive bias for hyperparameter optimization than Gaussian processes that fit the validation loss.
%\end{itemize}

There are many directions to extend the proposed methods.
%Non-spherical conditional hyperparameter distributions can be explored, because it may drastically affect the speed a best-response is learned at.
%Maximum a priori (MAP) estimates for weights and maximum likelihood estimates (MLE) for hyperparameters are learned, but it may be possible to learn a distribution of weights for each hyperparameter, or a distribution of hyperparameters for Bayesian learning~\citep{neal2012bayesian}.
For instance, the hypernetwork could be composed with several iterations of optimization, as an easily-differentiable fine-tuning step.
Or, hypernetworks could be incorporated into meta-learning schemes, such as MAML~\citep{finn2017model}, which finds weights that perform well on a variety of tasks after unrolling gradient descent.
We also note that the prospect of optimizing thousands of hyperparameters raises the question of \emph{hyper-regularization}, or regularization of hyperparameters.
%Additionally, it may be useful to introduce hyperhyperparameters (parameters of the hyperparameter prior) for hyperparameters.
%
%We hope this initial exploration of stochastic hyperparameter optimization will inspire further refinements, such as hyper-regularization methods, or uncertainty-aware exploration using Bayesian hypernetworks.
%\subsubsection*{Acknowledgments}
%We thank Matthew MacKay, Paul Vicol, Dougal Maclaurin, Daniel Flam-Shepard, Daniel Roy, and Jack Klys for helpful discussions.
\bibliography{main_bib}
%\bibliographystyle{icml2018}
%
\newpage
\appendix
\section{Extra Experiments}
\subsection{Random Search and Bayesian Optimization Baselines}
Here, we optimize models with $\hyperDimLarge$ and $\hyperDimMedium$ hyperparameters, in which a separate $L_{2}$ weight decay is applied to each weight, or to the weights of each digit class, in a linear regression model, to assess hyper-training's performance against other hyperparameter optimization algorithms.
The conditional hyperparameter distribution and optimizer for the hypernetwork and hyperparameters are the same as in the prior experiments.
Algorithm~\ref{algLocal} is compared against random search and Bayesian optimization.
%
Figure~\ref{fig:exp_a1} shows that, in both cases, our method converges more quickly and to a better optimum than either alternative method, demonstrating that medium-sized hyperparameter optimization problems can be solved with Algorithm~\ref{algLocal}.
%
\begin{figure}[h]
\centering
Optimizing $\hyperDimLarge$ hyperparameters
\hspace{0.1\textwidth}Optimizing $\hyperDimMedium$ hyperparameters\\
\includegraphics[width=0.49\textwidth]{hypernets_local_large_appendix.pdf}
\includegraphics[width=0.49\textwidth]{hypernets_local_medium_appendix.pdf}
\caption{
Validation and test losses during hyperparameter optimization.
A separate $L_{2}$ weight decay is applied to each weight (\emph{left}, $\hyperDimLarge$ hyperparameters) or to the weights of each digit class (\emph{right}, $\hyperDimMedium$ hyperparameters).
The weights $\param_{\phi^{*}}$ are output by the hypernetwork for current hyperparameter $\curRename{\lambda}$, while random losses are for the best result of a random search.
Hypernetwork-based optimization converges faster than random search or Bayesian optimization.
We also observe significant overfitting of the hyperparameters on the validation set, which may be reduced by introducing hyperhyperparameters (parameters of the hyperparameter prior). The runtime includes the inner optimization for gradient-free approaches so that equal cumulative computational time is compared for each method. \label{fig:exp_a1} } \end{figure} %Factors affecting this include removing the overhead of constructing tuples of hyperparameters and optimized weights, viewing more hyperparameter samples, or having a better inductive bias from learning weights. %% %%\subsection{Optimizing with deeper networks} % %\begin{figure} %\vspace{-1.0cm} %\centering %\begin{tabular}{cc} %784-100-10 Model & 784-100-100-10 Model\\ %\includegraphics[width=0.47\textwidth]{hypernets_local_large.pdf} &%_100.pdf} & %\includegraphics[width=0.47\textwidth]{hypernets_local_large.pdf}%_100_100.pdf} %\end{tabular} %\caption{ %Validation and test losses during hyperparameter optimization with a separate $L_{2}$ weight decay applied to each weight in the model. %Thus, models with more parameters have more hyperparameters. %\emph{Left:} Hyper-training compared to unrolled optimization with the 784-100-10 model in prior experiments. %\emph{Right:} Hyper-training compared to unrolled optimization with the 784-100-100-10 model in prior experiments. %\label{fig:exp3} % %} %\vspace{-0.5cm} %\end{figure} %\section{Appendix - Example code} %\lstinputlisting[language=Python]{../code/hypernets_small.py} % \end{document}
\documentclass{article}
\usepackage{booktabs, graphicx, hyperref, fontspec}
\usepackage{sectsty}
\allsectionsfont{\sffamily}
\usepackage[margin=1in]{geometry}
\hypersetup{
    colorlinks = true,
    urlcolor = cyan,
}
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\newcommand*{\authorfont}{\fontfamily{phv}\selectfont}
\setmainfont{Fira Sans}

\begin{document}

\sffamily

\centerline{\Huge Microeconomic Analysis}
\vspace{3 mm}
\centerline{\large Dr.~Ryan Safner}
\vspace{2 mm}
\centerline{\large \href{http://microS22.classes.ryansafner.com}{microS22.classes.ryansafner.com}}
\vspace{5 mm}

\begin{tabular}{@{}p{3.5in}p{3.5in}}
\textbf{Course}: ECON 306 Spring 2022 & \textbf{Email:} \href{mailto:[email protected]}{\nolinkurl{[email protected]}}\\
\textbf{Room}: Rosenstock 212 & \textbf{Office:} Rosenstock 110\\
\textbf{Meets}: MW 11:30 AM---12:55 PM & \textbf{Hours:} MW 1:30 PM---2:30 PM \& by appt\\
\end{tabular}

\vspace{5 mm}
\hrule

\begin{quote}
``The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.''

{--- F.A. Hayek (1974 Economics Nobel Laureate)}
\end{quote}

\textbf{Microeconomic Analysis} is a collection of analytical tools for modeling and understanding how decisions made by agents (individuals and firms) in a context of market institutions lead to social coordination and the creation of value.
This toolkit is the analytical foundation for exploring \emph{all} fields and applications of economics as a social science; it has many practical applications in business, finance, and politics, and offers insights into any realm of human endeavor.
Economics, after all, is the study of human action.

At the intermediate level, these concepts are made more rigorous through the use of mathematical models, but we will first and foremost lay the theoretical foundations with economic intuition.
As such, this class assumes you have met the \textbf{required prerequisite courses} by taking principles of microeconomics (\textbf{ECON 206}).
I reserve the right to change any part of this syllabus and course, at my discretion, with proper advance warning.

\hypertarget{course-format-and-covid}{%
\section{Course Format (and Covid)}\label{course-format-and-covid}}

As of Fall 2021, all students are expected to be on campus except those with special approved exemptions.
As such, this course will be taught \textbf{in-person} and \textbf{synchronously} until or unless otherwise announced.
You are expected to come to class except due to medical reasons or other legitimate conflicts.
Watching videos is not a substitute for attending class.\footnote{On average, even for students who complete all assignments, those that do not regularly attend class suffer by a full letter grade (\href{https://www.aeaweb.org/articles?id=10.1257/jep.7.3.167}{Levitt 1993}).}
Please see the \protect\hyperlink{attendance}{attendance policy} for more.
In any event that we are unable to meet in person, I will hold class meetings at the same day/time live on Zoom, and post all recorded lectures via Panopto on Blackboard, and all assignments will be submitted online (often via Blackboard).

\hypertarget{learning-during-a-global-pandemic}{%
\subsection{Learning During a Global Pandemic}\label{learning-during-a-global-pandemic}}

While we have made some progress in returning to normal, this remains a unique semester and a lot of things are still awful right now.
None of us signed up for this.
None of us are really okay; we're all just pretending for everyone else.
Many of you may be dealing with hardships at home and at work, and are generally juggling many more problems than usual.
Everyone's future plans have been completely put on hold or cancelled to a large degree.
I am prioritizing us supporting each other as human beings during this crazy era, and will try to use simple, accessible solutions that make sense for most people, and above all, to be flexible.
If you tell me you're having trouble, I will do whatever I can to help, and not judge you or think less of you.
I hope you will extend me the same courtesy.

You never \emph{owe} me personal information about your health (mental or physical).
You are, however, always welcome to talk to me about things that you're going through.
If I can't help you, I usually know somebody who can.

I want you to learn a lot from this course, but it is more important for you to remain healthy, balanced, and grounded during this crisis.

\hypertarget{course-objectives}{%
\section{Course objectives}\label{course-objectives}}

{By the end of this course,} you will:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  apply the models of microeconomics (constrained optimization and equilibrium) towards explaining real world behavior of individuals, firms, and governments
\item
  explore the effects of economic and political processes on market performance (competition, market prices, profits and losses, property rights, entrepreneurship, market power, market failures, public policy, government failures)
\item
  apply the economic way of thinking to real world issues in writing
\end{enumerate}

Given these objectives, this course fulfills all three of the learning outcomes for \href{https://www.hood.edu/academics/departments/george-b-delaplaine-jr-school-business/student-learning-outcomes}{the George B. Delaplaine, Jr.~School of Business} Economics B.A.
program:

\begin{itemize}
\tightlist
\item
  Use quantitative tools and techniques in the preparation, interpretation, analysis and presentation of data and information for problem solving and decision making {[}\ldots{]}
\item
  Apply economic reasoning and models to understand and analyze problems of public policy {[}\ldots{]}
\item
  Demonstrate effective oral and written communications skills for personal and professional success {[}\ldots{]}
\end{itemize}

{Fair warning:} \textbf{Economics is hard.}
\emph{This, in particular, may be one of the hardest courses that you will take, primarily due to the mathematical content.}
Using the economic way of thinking is a skill; it is literally retraining your brain to interpret and analyze the world in a novel way, and is not something that can be memorized.
I will do my best to make this class intuitive and helpful, if not interesting.

If at any point you find yourself struggling in this course for any reason, please come see me.
Do not suffer in silence!
Coming to see me for help does not diminish my view of you; in fact, I will hold you in \emph{higher} regard for understanding your own needs and taking charge of your own learning.
There are also some fantastic resources on campus, such as the \href{http://www.hood.edu/campus-services/academic-services/index.html}{Center for Academic Achievement and Retention (CAAR)} and the \href{http://www.hood.edu/library/}{Beneficial-Hodson Library}.

Free tutoring in many subjects is available for Hood students through Thinking Storm professional tutoring and Hood peer tutoring.
Hood peer tutoring is located in the Tutoring Center adjacent to the Student Success Center in the Beneficial Hodson Library and Learning Commons.
Appointments and online tutoring can be accessed through Blackboard and located under tools/academic tutoring.
Tutoring is available evenings and weekends as well as during the day to help meet your needs.
For more information about tutoring and to see the specific subject tutoring offered, please visit the student success website here or contact the student success center at \href{mailto:[email protected]}{\nolinkurl{[email protected]}}.

See my \href{/resources}{tips for success in this course}.

\hypertarget{required-course-materials}{%
\section{Required Course materials}\label{required-course-materials}}

You can find all course materials at my \textbf{dedicated website} for this course: \href{https://microS22.classes.ryansafner.com}{microS22.classes.ryansafner.com}.
Links to the website are posted on our Blackboard course page.
Please familiarize yourself with the website, and see that it contains this \href{syllabus/}{syllabus}, \href{resources/}{resources} to help you, and our \href{schedule/}{schedule}.
On the schedule page, you can find each module with its own class page (\textbf{start there!}) along with all related readings, lecture slides, practice problems, and assignments.

My lecture slides will be shared with you, and serve as your primary resource, but our main ``textbook'' below is \textbf{recommended} as the next best resource and will be available from the campus bookstore.
I will discuss more about textbooks and materials in the first module.

\hypertarget{books}{%
\subsection{Books}\label{books}}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Goolsbee, Austan, Steven Levitt, and Chad Syverson, 2012, \emph{Microeconomics} 2nd ed, USA: Worth Publishers, ISBN: 978-1464187025
\end{enumerate}

You are welcome to purchase the book by other means (e.g.~Amazon, half.com, etc).
I have no financial stake in requiring you to purchase this book.
You are welcome to use previous versions of the book, but carefully verify the reading assignments, as the material may be different across versions.
\hypertarget{assignments-and-grades}{% \section{Assignments and Grades}\label{assignments-and-grades}} Your final course grade is the weighted average of the following assignments. You can find general descriptions for all the assignments on the \href{https://microS22.classes.ryansafner.com/assignments/}{assignments page} and more specific information and examples on each assignment's page on the \href{https://microS22.classes.ryansafner.com/schedule/}{schedule page}. \begin{tabular}{l|l|l} \hline Frequency & Assignment & Weight\\ \hline 1 & Opinion-Editorial & 20\%\\ \hline 5 & Homeworks & 20\% (using average HW grade)\\ \hline 3 & Exams & 20\% (each)\\ \hline \end{tabular} Each assignment is graded on a 100-point scale. Letter-grade equivalents are based on the following scale: \begin{table} \centering \begin{tabular}{l|c|l|c} \hline Grade & Range & Grade & Range\\ \hline A & 93–100\% & C & 73–76\%\\ \hline A− & 90–92\% & C− & 70–72\%\\ \hline B+ & 87–89\% & D+ & 67–69\%\\ \hline B & 83–86\% & D & 63–66\%\\ \hline B− & 80–82\% & D− & 60–62\%\\ \hline C+ & 77–79\% & F & < 60\%\\ \hline \end{tabular} \end{table} See also my \href{https://ryansafner.shinyapps.io/306_grade_calculator/}{ \texttt{Grade\ Calculator}} app where you can calculate your overall grade using existing assignment grades and forecast ``what if'' scenarios. These grades are firm cutoffs, but I do of course round upwards (\(\geq 0.5\)) for final grades. A necessary reminder: as an academic, I am not in the business of \emph{giving} out grades; I merely report the grade that you \emph{earn}. I will not alter your grade unless you provide a reasonable argument that I am in error (which does happen from time to time). \textbf{No extra credit is available.} \hypertarget{policies-and-expectations}{% \section{Policies and Expectations}\label{policies-and-expectations}} This syllabus is a contract between you, the student, and me, your instructor. It has been carefully and deliberately thought out.
(A syllabus can and will be used as a legal document for disputes tried in a court of law. Ask me how I know.) I will uphold my end of the agreement and expect you to uphold yours. In the language of game theory, this syllabus is my commitment device. I am a very understanding person, and I know that exceptions to rules often need to be made for students. However, to be \emph{fair} to \emph{all} students, the syllabus artificially constrains my ability to make exceptions at a whim for anyone. This prevents clever students from exploiting my congenial personality at everyone else's expense. Please read and familiarize yourself with the course policies and expectations of you. Chances are, if you have a question, it is answered herein. \hypertarget{attendence}{% \subsection{Attendance}\label{attendence}} Your day-to-day classroom attendance is not graded. My philosophy is that you are all adults and must take ownership of your own learning or else you will not succeed. Some assignments may require in-class participation for credit, and an (unexcused) absence may be detrimental to your grade. Attending class is one of the strongest predictors of success. However, as required under Hood College's ``Promise of Fall Plan'' (Ch. 2-C), \textbf{your classroom attendance will be recorded at every class meeting.} This is primarily to facilitate contact tracing. If you know you will be absent, you are not \emph{required} to let me know, but it is polite to give notice (note: if I do not reply to an email of yours letting me know, I am probably busy, but I will still see it and appreciate your email). Your absence will be noted and recorded for the purposes stated above. If, however, we have an assignment due in class, you \emph{must} notify me ahead of time in order to make alternate arrangements to still receive credit. Hasty ex-post attempts to notify me will generate little sympathy.
\hypertarget{late-assignments}{% \subsection{Late Assignments}\label{late-assignments}} I will accept late assignments, but will subtract a specified number of points as a penalty. Even if it is the last week of the semester, I encourage you to turn in late work: some points are better than no points! \textbf{Homeworks}: If you turn in a homework after it is due but before it is graded or the answer key posted, I generally will not take off any points. However, \textbf{if you turn in a homework \emph{after} the answer key is posted, I will automatically deduct 20 points (so the maximum grade you can earn on it is an 80).} \textbf{Exams}: If you know that you will be unable to complete an \emph{in-class exam} as scheduled for a legitimate reason, please notify me at least \emph{one week} in advance, and we will schedule a make-up exam date. Failure to do so, including desperate attempts to make arrangements only \emph{after} the exam, will result in a grade of 0 and little sympathy. \textbf{Op-eds}: Starting at the deadline, I will take off 1 point for every hour that your Op-ed is late. I reserve the right to re-weight assignments for students who I believe are legitimately unable to complete a particular assignment. \hypertarget{grading}{% \subsection{Grading}\label{grading}} I will try my best to post grades on Blackboard's Grading Center and return graded assignments to you within about one week of you turning them in. There will be exceptions. Where applicable, I will post answer keys once I know most homeworks are turned in (see Late Assignments above for penalties). Blackboard's Grading Center is the place to look for your most up-to-date grades. See also my \href{https://ryansafner.shinyapps.io/306_grade_calculator/}{ \texttt{Grade\ Calculator}} app where you can calculate your overall grade using existing assignment grades and forecast ``what if'' scenarios.
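The grade arithmetic is simple enough to spell out. The following Python snippet is my own rough illustration of how the weights and letter scale combine (hypothetical function names; this is not the actual code behind the Grade Calculator app): the op-ed counts 20\%, the homework average counts 20\%, each of the three exams counts 20\%, and the final percentage rounds up at \(\geq 0.5\) before mapping to a letter grade.

```python
def course_grade(op_ed, homeworks, exams):
    """Weighted average: op-ed 20%, homework average 20%, each of 3 exams 20%."""
    hw_avg = sum(homeworks) / len(homeworks)
    return 0.20 * op_ed + 0.20 * hw_avg + sum(0.20 * exam for exam in exams)

def letter(percent):
    """Map a percentage to a letter grade per the syllabus scale,
    rounding up when the fractional part is >= 0.5 (e.g. 92.5 becomes 93)."""
    rounded = int(percent + 0.5)  # round half up, per the final-grade policy
    cutoffs = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
               (77, "C+"), (73, "C"), (70, "C-"), (67, "D+"), (63, "D"), (60, "D-")]
    for low, grade in cutoffs:
        if rounded >= low:
            return grade
    return "F"
```

For example, an op-ed of 90, a homework average of 90, and exams of 88, 92, and 85 combine to 89\%, a B+.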
\hypertarget{communication-email-slack-and-virtual-office-hours}{% \subsection{Communication: Email, Slack, and Virtual Office Hours}\label{communication-email-slack-and-virtual-office-hours}} Students must regularly monitor their \textbf{Hood email accounts} to receive important college information, including messages related to this class. Email through the Blackboard system is my main method of communicating announcements and deadlines regarding your assignments. \textbf{Please do not reply to any automated Blackboard emails - I may not receive them!} My Hood email (\texttt{[email protected]}) is the best means of contacting me. I will do my best to respond within 24 hours. If I do not reply within 48 hours, do not take it personally, and \emph{feel free to send a follow-up email} in the very likely event that I genuinely did not see your original message. Our \href{https://hoodcollegeeconomics.slack.com}{Slack channel} is available to all students and faculty in Economics and Business. I have invited all of my classes and advisees. It will not be extended to non-Business/Economics students or faculty. All users must use their \textbf{Hood emails} and \textbf{true first and last names}. Each course has its own channel, exclusive to verified students in the course and myself, by my invite only. Because Slack is a third-party platform, by using it you agree to its Terms of Service. I have created this space as a way to stay connected, to help one another, and to foster community. Behaviors such as posting inappropriate content, harassing others, or engaging in academic dishonesty, to be determined solely at my discretion, will result in one warning and deletion of the content; subsequent behavior will result in a ban. In addition to in-person office hours, you can also make an appointment for \textbf{``office hours''} on Zoom. You can join in with video, audio, and/or chat, whichever you feel comfortable with.
Of course, if you are not available during those times, we can schedule our own time if you prefer this method over email or Slack. If you want to go over material from class, please have \emph{specific} questions you want help with. I am not in the business of giving private lectures (particularly if you missed class without a valid excuse). Watch the excellent and accurate video explaining office hours \href{https://gamef21.classes.ryansafner.com/syllabus/\#communication-email-slack-and-virtual-office-hours}{on the website}. \hypertarget{netiquette}{% \subsection{Netiquette}\label{netiquette}} When using Zoom and Slack, please follow appropriate internet etiquette (``Netiquette''). Written communications, like blog posts or use of the Zoom chat, lack important nonverbal cues (such as body language, tone of voice, sarcasm, etc.). Above all else, please respect one another, and think carefully about (and reread) how others may see your post before you submit a comment. You are expected to disagree and have different opinions; this is inherently valuable in a discussion. Please be civil and constructive in responding to others' comments: writing \emph{``have you considered `X'?''} is a lot more helpful to all involved than just writing \emph{``well you're just wrong.''} Posting content that is willfully incendiary, illegal, or that constitutes academic dishonesty (such as plagiarism) will automatically earn a grade of 0 and may be elevated to other authorities on campus. When using the chat function on Zoom or public Slack channels, please treat it as official course communications, even though I may not be grading it. It may be a quick and informal tool - don't feel you need to worry about spelling or perfect grammar - but please try to avoid \emph{too} informal ``text-speak'' (e.g.~say ``That's good for you'' instead of ``thas good 4 u'').
\hypertarget{privacy}{% \subsection{Privacy}\label{privacy}} \href{https://www.execvision.io/blog/maryland-call-recording-laws/}{Maryland law} \href{https://law.justia.com/codes/maryland/2005/gcj/10-402.html}{requires} the consent of all parties for a conversation or meeting to be recorded. If you join in, and certainly if you participate, \textbf{you are consenting to be recorded.} However, as described below, videos are \emph{not accessible} beyond our class. Live lectures are recorded on Zoom and posted to Blackboard via Panopto, a secure course management system for video. Among other nice features (such as multiple video screens, closed captioning, and time-stamped search functions!), Panopto is authenticated via your Blackboard credentials, ensuring that \emph{our course videos are not accessible to the open internet.} For the privacy of your peers, and to foster an environment of trust and academic freedom to explore ideas, \textbf{do not record our course lectures or discussions.} You are already getting my official copies. The \href{https://www2.ed.gov/policy/gen/guid/fpco/ferpa/index.html}{Family Educational Rights and Privacy Act} prevents me from disclosing or discussing any student information, including grades and records about student performance. If the student is at least 18 years of age, \emph{parents (or spouses) do not have a right to obtain this information}, except with the student's consent. Many of you may be tuning in remotely, living with parents, and may have occasional interruptions due to sharing a space. This is normal and fine, but know that I will protect your privacy and not discuss your performance when parents (or anyone other than you, for that matter) are present, without your explicit consent. \hypertarget{enrollment}{% \subsection{Enrollment}\label{enrollment}} Students are responsible for verifying their enrollment in this class. The last day to add or drop this class with no penalty is \textbf{Wednesday, September 1}.
Be aware of \href{https://www.hood.edu/offices-services/registrars-office/academic-calendar}{important dates}. \hypertarget{honor-code}{% \subsection{Honor Code}\label{honor-code}} Hood College has an Academic Honor Code which requires all members of this community to maintain the highest standards of academic honesty and integrity. Cheating, plagiarism, lying, and stealing are all prohibited. All violations of the Honor Code are taken seriously, will be reported to the appropriate authority, and may result in severe penalties, including expulsion from the college. See \href{http://hood.smartcatalogiq.com/en/2016-2017/Catalog/The-Spirit-of-Hood/The-Academic-Honor-Code-and-Code-of-Conduct}{here} for more detailed information. \hypertarget{van-halen-and-mms}{% \subsection{Van Halen and M\&Ms}\label{van-halen-and-mms}} When you have completed reading the syllabus, email me a picture of the band Van Halen and a picture of a bowl of M\&Ms.~If you do this \emph{before} the date of the first exam, you will get bonus points on the exam. If 75--100\% of the class does this, you each get 2 points. If 50--75\% of the class does this, you each get 4 points. If 25--50\% of the class does this, you each get 6 points. If 0--25\% of the class does this, you each get 8 points. Yes, you read this correctly. \hypertarget{accessibility-equity-and-accommodations}{% \subsection{Accessibility, Equity, and Accommodations}\label{accessibility-equity-and-accommodations}} College courses can, and should, be challenging and bring you out of your comfort zone in a safe and equitable environment. If, however, you feel at any point in the semester that certain assignments or aspects of the course will be disproportionately uncomfortable or burdensome for you due to any factor beyond your control, please come see me or email me. I am a very understanding person and am happy to work out a solution together.
I reserve the right to modify and reweight assignments at my sole discretion for students who I believe would legitimately be at a disadvantage, through no fault of their own, in completing them as described. If you are unable to afford required textbooks or other resources for any reason, come see me and we can find a solution that works for you. This course is intended to be accessible for all students, including those with mental, physical, or cognitive disabilities, illness, injuries, impairments, or any other condition that tends to negatively affect one's equal access to education. If at any point in the term you find yourself not able to fully access the space, content, and experience of this course, you are welcome to contact me to discuss your specific needs. I also encourage you to contact the \href{https://www.hood.edu/academics/josephine-steiner-center-academic-achievement-retention/accessibility-services}{Office of Accessibility Services} (301-696-3421). If you have a diagnosis or history of accommodations in high school or previous postsecondary institutions, Accessibility Services can help you document your needs and create an accommodation plan. By making a plan through Accessibility Services, you can ensure appropriate accommodations without disclosing your condition or diagnosis to course instructors. \hypertarget{acbsp-information}{% \subsection{ACBSP Information}\label{acbsp-information}} \begin{figure}[h!] \centering \includegraphics{../images/acbsp.png} \end{figure} Hood College is an accredited member of the Accreditation Council for Business Schools and Programs (ACBSP), an organization devoted to enhancing business education. In receiving and maintaining this accreditation, the faculty has made a commitment to the continuous improvement, innovation, and scholarship of the Department of Economics and Management.
For you, this means that your educational experience undergoes ongoing validation to ensure it meets the most rigorous international standards of business education. Only a select group of institutions have received this status, and it is an attribute of Hood in which you should take great pride. Pragmatically, our accreditation means that we will engage in the ongoing use of measures, both quantitative and qualitative in nature, to assess the performance of our students and the program. We ask that you take very seriously the surveys and other measurement devices we will use -- your best work and honest responses will help us best assess and improve our program. \hypertarget{tentative-schedule}{% \section{Tentative Schedule}\label{tentative-schedule}} \textbf{You can find a full schedule} with much more detail, including the readings, appendices, and other further resources for each class meeting, on the \href{https://microS22.classes.ryansafner.com/schedule/}{schedule page}. \end{document}