%================ch2====================================== \chapter{Genomic Data and Databases}\label{ch:ch2}

\section{What Are Genomic Data?}
Genomic data are the data describing the genome and DNA of an organism. They are used in bioinformatics for collecting, storing and processing the genomes of living things. Genomic data generally require a large amount of storage and purpose-built software to analyse.

\section{Genomic Databases}
Genomic databases allow for the storing, sharing and comparison of data across research studies, across data types, across individuals and across organisms. These are not a new invention: even before the popularisation of the modern internet, ‘online’ databases were available for sharing data on key organisms, such as Escherichia coli (Blattner et al., 1997) and Saccharomyces cerevisiae (Cherry et al., 2012). Recent advances in both data sharing technology and genome sequencing technology have created an explosion of databases, based around particular organisms, as has historically been the case, as well as around particular data types, such as transcriptional data or short-read sequencing data\cite{gutierrez2019genome}.

\section{Specific Organism Databases and the GMOD Project}
It is perhaps unsurprising that, with the evolution of sequencing technology and the power to sequence the genome of almost any organism given a reasonable amount of time and research effort, individual databases have developed around the genomes of specific organisms \cite{ranganathan2018encyclopedia}. In the past, this was mostly focussed on so-called ‘model’ organisms, that is, ones with large research bases, such as the mouse (Mus musculus) (Smith et al., 2018) and the nematode (Caenorhabditis elegans) (Lee et al., 2018). In many cases, these databases were created by their own research communities to suit their own needs, both in terms of how the data could be accessed and in terms of the tools provided to dissect the data. As efforts continued, there have been moves to create some consistency between databases and the tools they offer, so that new organism databases are not required to ‘re-invent the wheel’, so to speak. In this regard, the Generic Model Organism Database (GMOD) project has served to provide a framework of tools and database methods from which new databases can be created. The ‘users’ of the GMOD project are no longer limited to ‘model’ organisms, and now consist of a variety of different species and databases. The GMOD project also has its own associated genome browser, GBrowse (discussed further below), which can be integrated into participating databases as a web-based genome browser\cite{ranganathan2018encyclopedia}.

\section{Human Genome Databases}
The breadth and depth of human genome databases is vast, as is to be expected when an organism attempts to study itself and analyse its own biological problems. These databases are often structured around particular data sources, such as transcriptional data, as is the case for the H-Invitational database (H-InvDB) (4). Particular study types have also given rise to specific databases: genome-wide association study databases such as GWASCentral (5), and structural variant study databases such as dbVar (6) and DGV (7). As is the case with other organisms, there are also some databases which seek to be more comprehensive in scope: DNA element databases such as ENCODE (8), and the 1000 Genomes project database, now hosted as the International Genome Sample Resource (IGSR) (9).
Databases for even more specific purposes exist, such as a wealth of databases on cancer genomic data, and will need to be searched for on a case-by-case basis depending on need.

\section{The Main Three Databases}
The main three databases are the National Center for Biotechnology Information (NCBI, \url{www.ncbi.nlm.nih.gov/}), the DNA Data Bank of Japan (DDBJ, \url{www.ddbj.nig.ac.jp/}) and the European Bioinformatics Institute (EBI, \url{www.ebi.ac.uk/}). In addition to offering complete microbial genome sequences with links to corresponding publications, these databases provide online tools for analysing genome sequences. As of February 24, 2014, 12 272 genome sequences from 2897 bacterial species are available online (\url{www.genomesonline.org/}, \url{https://gold.jgi.doe.gov}). For some species, several genomes have been sequenced. For 31 species, more than 50 genomes are available, including 16 species for which more than 100 genomes have been sequenced; the species holding the record is Escherichia coli, with 1261 currently available genomes. Sequenced genomes include the most significant human bacterial pathogens, covering all the phylogenetic domains of bacteria. In addition, more than 27 000 sequencing projects are ongoing (\url{www.genomesonline.org/})\cite{ray2003}. Moreover, new sequencing technologies are making possible the sequencing of random community DNA and of single bacterial cells without the need for cloning or cultivation.

\section{Genome Browsers}
Data access and quality mean very little if no meaning can be gained from the data. In a field with data as complex and abstract as those of genomics, methods for data visualisation and analysis are of even greater importance. These must be able to cope with vast amounts of data, on the order of gigabytes or terabytes, as well as be able to connect these data to tangible, biological meaning in the form of genes and products. Genome browsers seek to fill this need by providing a pre-existing software basis to visualise and analyse genomic data. Due to the sheer variety of researchers, purposes, expectations, and goals involved in the field, a number of genome browsers are available. For the new user, there are three broad-class, easy-to-pick-up browsers for generic uses that stand out at present: the UCSC Genome Browser, managed by the University of California, Santa Cruz (Casper et al., 2018); GBrowse, managed by the GMOD project (see “Relevant Website section”); and Ensembl, managed by EMBL-EBI and the Wellcome Trust Sanger Institute (Zerbino et al., 2018). This section will consider each of these browsers in turn, and then give an overview of the more specific browsers which have been created based on these three forerunners\cite{ranganathan2018encyclopedia}.

\subsection{UCSC Genome Browser}
The UCSC Genome Browser is one of the most widely regarded broad-class browsers, and has been integrated into a number of major databases. It was initially conceived in 2000 to visualise the first working draft of the Human Genome Project, but it has been adapted in the following years to include a broad variety of organisms and a vast suite of tools for visualising and analysing data (\url{https://genome.ucsc.edu/}).

\subsection{GBrowse}
Due to the ‘generic’ nature of the GMOD project (discussed above at 2.2), there was a need for a generic browser to accompany the suite of tools provided for new databases. GBrowse developed from this idea, and is therefore one of the more flexible genome browsers available.
It has had a number of spin-off browsers created since its conception, tailored for particular purposes. As it is a part of the GMOD project, it is also available across many different databases\cite{ranganathan2018encyclopedia} (\url{http://gmod.org/wiki/GBrowse}).

\subsection{Ensembl}
The Ensembl genome browser, created by EMBL-EBI and the Wellcome Trust Sanger Institute, is the native genome browser for the Ensembl Genomes databases. Due to the broad nature of the databases it is used for, it contains a wide variety of tools for visualisation and analysis across a variety of kingdoms of organisms (\url{http://ensemblgenomes.org/}).

\subsection{Specialised Browsers}
Browsers for more specialised purposes have been developed by particular groups, largely based on one of the primary three browsers. Due to its generic nature, the majority of these ‘subsidiary’ browsers are based on GBrowse in particular. Lighter implementations have been created, such as JBrowse, as well as browsers more suited to collaboration and annotation, such as Apollo.
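Regardless of which database or browser is used, many of these resources can also be accessed programmatically. As a simple illustration (not taken from any one database's documentation), the short Python sketch below uses Biopython's Entrez module to retrieve a single FASTA record from NCBI; the accession NC\_000913.3 (an Escherichia coli K-12 genome assembly) and the contact e-mail address are illustrative placeholders.

\begin{verbatim}
# Minimal sketch: fetch one FASTA record from NCBI using Biopython's
# Entrez utilities (assumes Biopython is installed; the accession and
# e-mail address below are placeholders chosen for illustration).
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # NCBI asks clients to identify themselves

handle = Entrez.efetch(db="nucleotide", id="NC_000913.3",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq))  # accession and sequence length
\end{verbatim}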
{ "alphanum_fraction": 0.8003947693, "avg_line_length": 172.4680851064, "ext": "tex", "hexsha": "6511355e3df96e9a555e6b7fd0395201e2a61757", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d803fd0d70aabc51a92030f029d67286ce1b33d1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "datasticslab/ISCB1002", "max_forks_repo_path": "03-Thesis/chapters/ch2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d803fd0d70aabc51a92030f029d67286ce1b33d1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "datasticslab/ISCB1002", "max_issues_repo_path": "03-Thesis/chapters/ch2.tex", "max_line_length": 1249, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d803fd0d70aabc51a92030f029d67286ce1b33d1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "datasticslab/ISCB1002", "max_stars_repo_path": "03-Thesis/chapters/ch2.tex", "max_stars_repo_stars_event_max_datetime": "2020-11-28T15:36:29.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-28T15:36:29.000Z", "num_tokens": 1768, "size": 8106 }
\documentclass{beamer} \usetheme{CambridgeUS} \usefonttheme{structurebold} \setbeamerfont{subtitle}{family=\ttfamily} \title{Getting started with competitive coding} \subtitle{bfs(competitive\_programming)} \author{Balajiganapathi S} \institute{code-drills.com} \date{\today} \begin{document} \begin{frame} \titlepage{} \end{frame} \begin{frame} \frametitle{Outline} \tableofcontents \end{frame} \section{Introduction} \subsection{About this video series} \begin{frame} \frametitle{Introduction} Hello World! \end{frame} \end{document}
{ "alphanum_fraction": 0.7522123894, "avg_line_length": 16.6176470588, "ext": "tex", "hexsha": "cd52b91d0763881682fc6cd6163dc350332fca28", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ba868a5665383cf23d0af4421be43ef206183b45", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Balajiganapathi/bfscp_slides", "max_forks_repo_path": "learn/template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ba868a5665383cf23d0af4421be43ef206183b45", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Balajiganapathi/bfscp_slides", "max_issues_repo_path": "learn/template.tex", "max_line_length": 47, "max_stars_count": null, "max_stars_repo_head_hexsha": "ba868a5665383cf23d0af4421be43ef206183b45", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Balajiganapathi/bfscp_slides", "max_stars_repo_path": "learn/template.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 164, "size": 565 }
\documentclass{memoir}
\usepackage{notestemplate}

%\logo{~/School-Work/Auxiliary-Files/resources/png/logo.png}
%\institute{Rice University}
%\faculty{Faculty of Whatever Sciences}
%\department{Department of Mathematics}
%\title{Class Notes}
%\subtitle{Based on MATH xxx}
%\author{\textit{Author}\\Gabriel \textsc{Gress}}
%\supervisor{Linus \textsc{Torvalds}}
%\context{Well, I was bored...}
%\date{\today}
%\makeindex

\begin{document}
% \maketitle
% Notes taken on

\subsection{Free Homotopy of Curves}
\label{sub:free_homotopy_of_curves}
In order to state Cauchy's theorem in complete generality, we need to construct a notion of homotopy for curves.
\begin{defn}[Free Homotopy]
A \textbf{free homotopy} of closed curves in \(\Omega \) is a continuous map \(\gamma(\tau,t)\) from \([0,1]\times [t_0,t_1]\) to \(\Omega \) such that
\begin{align*}
\gamma(\tau,t_0) = \gamma(\tau,t_1)
\end{align*}
for every \(\tau\in [0,1]\). We can denote \(\gamma_\tau(t) := \gamma(\tau,t)\).\\
We say \(\gamma_0,\gamma_1\) are \textbf{homotopic} if there exists a free homotopy with \(\gamma(0,t) = \gamma_0(t)\) and \(\gamma(1,t) = \gamma_1(t)\).
\end{defn}
Of course, if \(\Omega \) is convex, then any two closed curves are automatically freely homotopic: the straight-line homotopy \(\gamma(\tau,t) = (1-\tau)\gamma_0(t) + \tau\gamma_1(t)\) stays in \(\Omega \) by convexity, and it satisfies \(\gamma(\tau,t_0) = \gamma(\tau,t_1)\) for every \(\tau \) because \(\gamma_0\) and \(\gamma_1\) are themselves closed.

% \printindex
\end{document}
{ "alphanum_fraction": 0.7085624509, "avg_line_length": 30.3095238095, "ext": "tex", "hexsha": "924e67f39ccdfaa827ef8ed4b9e150d28a6e2a74", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "gjgress/LibreMath", "max_forks_repo_path": "Complex Analysis/Notes/source/CurveHomotopy.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_issues_repo_issues_event_max_datetime": "2021-05-20T23:23:22.000Z", "max_issues_repo_issues_event_min_datetime": "2021-05-20T22:09:37.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "gjgress/Libera-Mentis", "max_issues_repo_path": "Complex Analysis/Notes/source/CurveHomotopy.tex", "max_line_length": 152, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d9f1bfd9e6ea62a9d56292f7890f99c450b54c9b", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "gjgress/Libera-Mentis", "max_stars_repo_path": "Complex Analysis/Notes/source/CurveHomotopy.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-16T23:18:15.000Z", "max_stars_repo_stars_event_min_datetime": "2021-07-16T23:18:15.000Z", "num_tokens": 415, "size": 1273 }
\documentclass[]{beamer} \setbeamertemplate{caption}[numbered] % For numbered figures \mode<presentation> % Setup appearance: \usetheme[secheader]{Boadilla} % Standard theme for UIUC package %\usetheme{Darmstadt} %My standard for HEP %\usetheme{Madrid} %For information titles at bottom of screen \usefonttheme[onlylarge]{structurebold} \setbeamerfont*{frametitle}{size=\normalsize,series=\bfseries} %\setbeamertemplate{navigation symbols}{} % Suppresses all navigation symbols \logo{\includegraphics[height=1.0cm]{SMU_Logo_Red.pdf}} %Inserts a logo in the lower right corner of the screen %Comment out for faster compile \setbeamercovered{transparent} % Standard packages \usepackage[english]{babel} \usepackage{ifxetex} \ifxetex %\usepackage{fontspec} \else \usepackage[T1]{fontenc} \usepackage[latin1]{inputenc} \usepackage{lmodern} \fi \usepackage{times} \usepackage{amsmath} % For math \usepackage{amssymb} % For more math \usepackage{mathrsfs} %Fancy math, needed for Lagrangian scripts \usepackage[normalem]{ulem} % For strikethrough \usepackage{verbatim} % For verbatim comments Note: Must Declare the frame fragile first. \begin{frame}[fragile] %\usepackage{multimedia} % For movies \usepackage{media9} %Alternative movie package \usepackage{calligra} % For generating 'script r' \DeclareMathAlphabet{\mathcalligra}{T1}{calligra}{m}{n} \DeclareFontShape{T1}{calligra}{m}{n}{<->s*[2.2]callig15}{} % A note about minted: You must compile from the command line as: % pdflatex -shell-escape <file name here> %\usepackage{minted} % Provides syntax highlighting % Custom Packages \usepackage{custom} % For custom math commands \usepackage{InfolinesThemeJJ} % Modified infolines outertheme; Modified by Juan Jottar UIUC 2010 \usepackage{JJmath} % Custom math commands; Created by Juan Jottar UIUC 2010 \usepackage{beamercolorthemeSMU} % Modified color palette to generate SMU colors % Additional packages for uiuc_beamer: \usepackage{dsfont} \usepackage{pstricks} \usepackage{pst-node} \usepackage{slashed} \usepackage{graphicx} % For images \graphicspath{{Images/}} %\usepackage{mathtools} % Setup TikZ \usepackage{tikz}% \usetikzlibrary{fit} \usetikzlibrary{calc} %\usetikzlibrary{arrows} %\tikzstyle{block}=[draw opacity=0.7,line width=1.4cm] % Author, Title, etc. \title[\LaTeX~Workshop for UT Austin SEG] {% \LaTeX~Workshop for UT Austin\\ Geophysical Society% } \author[Matthew Feickert] % For Sekula Group meetings %\author[S. Sekula, M. Feickert] % For meetings outside of SMU { % Stephen~Sekula\inst{1} \and \textcolor{blue!50!black}{\href{http://www.matthewfeickert.com/}{Matthew~Feickert}}\inst{1}\\ \vspace{0.25cm}\href{https://twitter.com/HEPfeickert}{@HEPfeickert} } \institute[SMU Physics] { \inst{1}% \href{http://www.physics.smu.edu/web/}{Southern Methodist University} } \date[March 12th, 2015] {March 12th, 2015} \AtBeginSection[] { \begin{frame}<beamer>{Outline} \tableofcontents[currentsection] \end{frame} } % The main document \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame} \titlepage \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Introduction to \LaTeX} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{What is \LaTeX?} \LaTeX~is a markup language written for the \TeX~typesetting program. 
\begin{columns}[c] \column{2.50in} \begin{block}{Features of \LaTeX} \begin{itemize} \item Open source \item Allows public to produce high-quality documents individually \item Provides identical output on all computers (both now and in the future) \item Widely used in academia \begin{itemize} \item Ex. Mathematics, computer science, economics, engineering, physics, and statistics. \end{itemize} \end{itemize} \end{block} % \column{1.50in} %%%%%%%%%%%%%%%%% FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!] \centering %\includegraphics[scale=0.35]{CTAN_Lion.pdf} \includegraphics[width=\textwidth]{CTAN_Lion.pdf} \caption{\href{http://www.ctan.org/lion/}{CTAN} lion drawing by Duane Bibby.} %\label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{columns} \end{frame} \subsection*{Why \LaTeX{}?} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Why would I want to use \LaTeX{}?} \textbf{Q.} Why would I want to learn how to use \LaTeX{} when I can just use Word? \begin{block}{Answer(s)} \begin{itemize} \item Open source \item Output is in PDF (through \href{http://en.wikipedia.org/wiki/PdfTeX}{pdf\TeX}) \begin{itemize} \item Can view on any computer and never have to worry about presentation not showing up right. \end{itemize} % \item Ability to typeset mathematical equations quickly and cleanly \item Professional standard in academia \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Aren't these really just cosmetic reasons?} Yes, but to some degree you will be judged by the quality of the way your present your work. Also, ease of dissemination of information is very important in research. \begin{block}{Bottom line} %%%%%%%%%%%%%%%%% FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \includegraphics[scale=0.30]{latex_word_compare.pdf} \caption{Credit: \href{http://home.gwu.edu/~jinho10/CS_Tips/Entries/2012/1/2_Latex_Tips.html}{\tiny http://home.gwu.edu/~jinho10/Home.html}} %\label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{block} \ldots which would you rather read? \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Setting Up \LaTeX~On Your Computer} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Setup and Installation} \begin{block}{To get a \LaTeX~distribution on your computer} \begin{itemize} \item Download and install the \href{http://www.latex-project.org/ftp.html}{distribution} for your operating system. \begin{itemize} \item Probably the latest version of \href{http://www.tug.org/texlive/}{\TeX~Live}.\end{itemize} \item Download and install a front end (or be hardcore and just use vim/emacs. :P) \begin{itemize} \item Windows: \href{http://miktex.org/}{MiKTeX} \item Mac: \href{http://pages.uoregon.edu/koch/texshop/}{TeXShop} \item Linux: \href{http://www.tug.org/texworks/}{TeXworks} \end{itemize} % \item Install any packages you want in the proper directory \item Start creating! 
\end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Creating a \LaTeX~Document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection*{A simple document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Open The (Web) Editor: Overleaf} For today's workshop we're going to be using a web front end: \textcolor{green!80!black}{Overleaf} \begin{enumerate} \item Navigate to \href{https://www.overleaf.com/}{www.overleaf.com} \item Click the \href{https://www.overleaf.com/docs?template=overleaf}{``Create A New Paper'' button} \begin{figure}[!htbp] \centering \includegraphics[scale=0.40]{Overleaf_Create_Paper.png} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \item Once the template is loaded, click the ``Source'' button in the upper left corner \item We're going to start from scratch, so select all and delete \end{enumerate} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Creating a (simple) document} \begin{verbatim} \documentclass{article} \begin{document} Hello, World! \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Creating a (simple) document} \begin{verbatim} \documentclass{article} \begin{document} Hello, World!\\ The quick brown fox jumps over the lazy dog. \end{document} \end{verbatim} \end{frame} \subsection*{Expanding: The Preamble} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Creating a document} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} \usepackage{graphicx} % For images \usepackage{float} % For tables and other floats \usepackage{amsmath} % For math \usepackage{amssymb} % For more math \usepackage{fullpage} % Set margins & place page numbers \usepackage{mathrsfs} % For nice math calligraphy fonts \usepackage{verbatim} % Allows commenting out \begin{document} Hello, World!\\ $\sin x \cdot \tan x \times \alpha \beta$\\ $\gamma \sum \prod \sqrt{y}$ \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Creating a paper} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} \title{The Title of a Paper of Great Importance} \author{First Author} \begin{document} \maketitle \begin{abstract} Here is a collection of words that form a run on sentence to test an idea that was formed in a brain made of cells made of molecules made of atoms made of patrons and leptons and nope screw string theory. \end{abstract} \clearpage Stuff and things go here \end{document} \end{verbatim} \end{frame} \subsection*{Figure Basics} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Inserting Figures} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} ... \usepackage{graphicx} %\graphicspath{{Images/}} %Overleaf has frog.jpg in the %local dir but in general set %to your dir with figures ... \begin{document} ... \begin{figure} \includegraphics[width=\textwidth]{frog.jpg} \end{figure} ... \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Inserting Figures} \begin{figure} \includegraphics[width=0.50\textwidth]{frog.pdf} %\label{Fig:frogA} \end{figure} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Better Figures} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} ... 
\usepackage{graphicx} %\graphicspath{{Images/}} ... \begin{document} ... \begin{figure}[!htbp] %here, top, bottom, page. \centering \includegraphics[width=\textwidth]{frog.jpg} \caption{This is a frog!} \label{Fig:frog} \end{figure} ... \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Better Figures} \begin{figure}[!htbp] \centering \includegraphics[width=0.50\textwidth]{frog.pdf} \caption{This is a frog!} \label{Fig:frogB} \end{figure} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Rescaling Figures} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} ... \usepackage{graphicx} %\graphicspath{{Images/}} ... \begin{document} ... \begin{figure}[h!tbp] \centering \includegraphics[scale=0.40]{frog.jpg} \caption{This is a frog!} \label{Fig:frog} \end{figure} ... \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Rescaling Figures} \begin{figure}[h!tbp] \centering \includegraphics[scale=0.40]{frog.pdf} \caption{This is a frog!} \label{Fig:frogC} \end{figure} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Rescaling Figures} \small\begin{verbatim} \documentclass[letterpaper,12pt]{article} ... \usepackage{graphicx} %\graphicspath{{Images/}} ... \begin{document} ... \begin{figure}[h!tbp] \centering \includegraphics[width=0.80\textwidth]{frog.jpg} \caption{This is a frog!} \label{Fig:frog} \end{figure} ... \end{document} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Rescaling Figures} \begin{figure}[h!tbp] \centering \includegraphics[width=0.35\textwidth]{frog.pdf} \caption{This is a frog!} \label{Fig:frogD} \end{figure} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Math Mode} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Typesetting in Math Mode} Typesetting in Math Mode is what really gives \LaTeX{} its power. It is intuitive, quick, clean, and produces wonderful results! To initiate Math Mode enclose whatever you want to typeset between two ``\$''. \begin{block}{Some examples} \begin{itemize} \item \$\textbackslash Sigma\$ = $\Sigma$ \item \$\textbackslash int f(x) \textbackslash,dx\$ = $\int f(x)\,dx$ \item \$e\textasciicircum x\_\{\textbackslash text\{text\}\}\$ = $e^x_{\text{text}}$ \item \$\textbackslash frac\{\textbackslash sin x\}\{\textbackslash prod\_\{i=0\}\textasciicircum n\}\$ = $\frac{\sin x}{\prod_{i=0}^n}$ \end{itemize} \end{block} and so\ldots \[ \imply \int\limits_{\rho}^R \frac{\sqrt[3]{r}\,\ln r}{r^2 + 1}~dr + \int_{L_2} f(z)~dz = e^{i \pi/3} \int\limits_{\rho}^R \frac{\sqrt[3]{r}\,\ln r + i \pi \sqrt[3]{r}}{r^2 + 1}~dr% = \frac{i\pi^2}{2}\,e^{i \pi/6} - \int_{C_\rho} f(z)~dz - \int_{C_R} f(z)~dz \] \end{frame} %%%%% \begin{frame}{Math Mode: Display Style} The displaymath environment will pop your equation out of the text and display it prominently in the center of the page. \begin{block}{In Line vs. 
Displayed} \begin{itemize} \item It is seen that this requires $C = -k^2$, where $k^2>0$, reducing to $\rho\partiald{}{\rho} \parenthsqr{\rho \partiald{R\parenth{\rho}}{\rho}} - \parenth{n^2 - \rho^2 k^2} R\parenth{\rho} = 0.$ \item It is seen that this requires $C = -k^2$, where $k^2>0$, reducing to \[\rho\partiald{}{\rho} \parenthsqr{\rho \partiald{R\parenth{\rho}}{\rho}} - \parenth{n^2 - \rho^2 k^2} R\parenth{\rho} = 0.\] \end{itemize} \end{block} \end{frame} %%%%% \begin{frame}{Math Mode: Display Style} The displaymath environment is entered by enclosing whatever you want to typeset between opening ``\textbackslash['' and closing ``\textbackslash]''. \begin{block}{In Line vs. Displayed} \begin{itemize} \item \$\textbackslash int \textbackslash limits\_\{-\textbackslash infty\}\textasciicircum \{\textbackslash infty\} \textbackslash psi\textasciicircum* \textbackslash psi \textbackslash,dx \$ = $\infint \psi^* \psi\,dx$ \item \textbackslash[ \textbackslash int \textbackslash limits\_\{-\textbackslash infty\}\textasciicircum \{\textbackslash infty\} \textbackslash psi\textasciicircum* \textbackslash psi \textbackslash,dx \textbackslash] = \[\infint \psi^* \psi\,dx \] \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Packages \& Fun!} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Fun with Packages!} There are a ton of really useful packages, and just fun packages that you can use to make your documents stand out and entertaining. \begin{block}{One useful and N fun packages} \begin{itemize} \item My ``\href{http://dl.dropbox.com/u/6351966/custom.sty}{custom}'' style file (what I use for \TeX ing~my homework). \item \href{http://hanno-rein.de/archives/349}{Coffee stains} \item \href{http://tug.ctan.org/tex-archive/usergrps/uktug/baskervi/4_4/}{The Simpsons} \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Summary} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% \begin{frame}{What Can I Do With \LaTeX?} \begin{block}{``Fancy'' looking equations} The Lagrangian for the ABC ``toy'' theory containing three elementary particles of spin 0, masses $m_{A,B,C}$, and one primitive vertex with the vertex factor $-ig$. 
\[ \begin{split} \Lagrangian &= \parenthsqr{\frac{1}{2} \parenth{\partial_\mu \phi_A} \parenth{\partial^\mu \phi_A} - \frac{1}{2} \parenth{\frac{m_A c}{\hbar}}^2 \phi_A^2} \\ &+ \parenthsqr{\frac{1}{2} \parenth{\partial_\mu \phi_B} \parenth{\partial^\mu \phi_B} - \frac{1}{2} \parenth{\frac{m_B c}{\hbar}}^2 \phi_B^2}\\ &+ \parenthsqr{\frac{1}{2} \parenth{\partial_\mu \phi_C} \parenth{\partial^\mu \phi_C} - \frac{1}{2} \parenth{\frac{m_C c}{\hbar}}^2 \phi_C^2} - g \phi_A \phi_B \phi_C \end{split} \] \end{block} \end{frame} %%%%% \begin{frame}{What Can I Do With \LaTeX?} \begin{block}{Homework\ldots with fancy looking equations} \begin{figure}[h!tbp] \centering \includegraphics[width=\textwidth]{Homework_Example.pdf} %caption{} %label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{block} \end{frame} %%%%% \begin{frame}{What Can I Do With \LaTeX?} \begin{block}{Draw All The Things (with \href{http://en.wikipedia.org/wiki/PGF/TikZ}{TikZ})} %%%%%%%%%%%%%%%%% TIKZ FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \begin{tikzpicture} %%% Edit the following coordinate to change the shape of your %%% cuboid %% Vanishing points for perspective handling \coordinate (P1) at (-7cm,1.5cm); % left vanishing point (To pick) \coordinate (P2) at (8cm,1.5cm); % right vanishing point (To pick) %% (A1) and (A2) defines the 2 central points of the cuboid \coordinate (A1) at (0em,0cm); % central top point (To pick) \coordinate (A2) at (0em,-2cm); % central bottom point (To pick) %% (A3) to (A8) are computed given a unique parameter (or 2) .8 % You can vary .8 from 0 to 1 to change perspective on left side \coordinate (A3) at ($(P1)!.8!(A2)$); % To pick for perspective \coordinate (A4) at ($(P1)!.8!(A1)$); % You can vary .8 from 0 to 1 to change perspective on right side \coordinate (A7) at ($(P2)!.7!(A2)$); \coordinate (A8) at ($(P2)!.7!(A1)$); %% Automatically compute the last 2 points with intersections \coordinate (A5) at (intersection cs: first line={(A8) -- (P1)}, second line={(A4) -- (P2)}); \coordinate (A6) at (intersection cs: first line={(A7) -- (P1)}, second line={(A3) -- (P2)}); %%% Depending of what you want to display, you can comment/edit %%% the following lines %% Possibly draw front faces % \fill[orange] (A1) -- (A8) -- (A7) -- (A2) -- cycle; % face 1 % \node at (barycentric cs:A1=1,A8=1,A7=1,A2=1) {\tiny f1}; \fill[gray!50,opacity=0.2] (A1) -- (A2) -- (A3) -- (A4) -- cycle; % f2 %\node at (barycentric cs:A1=1,A2=1,A3=1,A4=1) {\tiny f2}; \fill[gray!90,opacity=0.2] (A1) -- (A4) -- (A5) -- (A8) -- cycle; % f5 %% Possibly draw back faces \fill[gray!90] (A2) -- (A3) -- (A6) -- (A7) -- cycle; % face 6 %\node at (barycentric cs:A2=1,A3=1,A6=1,A7=1) {\tiny f6}; \fill[gray!50] (A3) -- (A4) -- (A5) -- (A6) -- cycle; % face 3 %\node at (barycentric cs:A3=1,A4=1,A5=1,A6=1) {\tiny f3}; \fill[gray!30] (A5) -- (A6) -- (A7) -- (A8) -- cycle; % face 4 %\node at (barycentric cs:A5=1,A6=1,A7=1,A8=1) {\tiny f4}; \draw[thick,dashed] (A5) -- (A6); \draw[thick,dashed] (A3) -- (A6); \draw[thick,dashed] (A7) -- (A6); %% Possibly draw front faces \node at (barycentric cs:A1=1,A4=1,A5=1,A8=1) {\footnotesize $V_0$}; %{\tiny f5}; %% Possibly draw front lines \draw[thick] (A1) -- (A2); \draw[thick] (A3) -- (A4); \draw[thick] (A7) -- (A8); \draw[thick] (A1) -- (A4); \draw[thick] (A1) -- (A8); \draw[thick] (A2) -- (A3); \draw[thick] (A2) -- (A7); \draw[thick] (A4) -- (A5); \draw[thick] (A8) -- (A5); % Possibly draw points % (it can help you understand the cuboid structure) \foreach \i in {1,2,...,8} { 
\draw[fill=black] (A\i) circle (0.15em); % node[above right] {\tiny \i}; } % \draw[fill=black] (P1) circle (0.1em) node[below] {\tiny p1}; % \draw[fill=black] (P2) circle (0.1em) node[below] {\tiny p2}; \end{tikzpicture} %\caption{} \label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%% TIKZ FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \begin{tikzpicture} \shade[ball color=blue!10!white,opacity=0.20] (0,0) circle (1cm); \draw (-1,0) arc (180:360:1cm and 0.5cm); \draw[dashed] (-1,0) arc (180:0:1cm and 0.5cm); \draw (0,1) arc (90:270:0.5cm and 1cm); \draw[dashed] (0,1) arc (90:-90:0.5cm and 1cm); \draw (0,0) circle (1cm); % \shade[ball color=blue!10!white,opacity=0.20] (0,0) circle (1cm); \end{tikzpicture} %\caption{} %\label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{What Can I Do With \LaTeX?} \begin{block}{More Like ``What Can't You Do!?''} \begin{itemize} \item Typeset professional documents \begin{itemize} \item Ex. Thesis, \href{http://matthewfeickert.github.io/mfeickert-CV.pdf}{CV}, Reports \end{itemize} % \item Create presentations in pdf \begin{itemize} \item Ex. \href{https://github.com/matthewfeickert/UTAustin-LaTeX-Workshop-2015}{This presentation}! (You can even embed video using the \href{https://www.ctan.org/pkg/media9}{media9} package. 0\_0) \end{itemize} % \item Circuit diagrams (\href{http://www.ctan.org/tex-archive/graphics/pgf/contrib/circuitikz}{circuitikz} package) \item Feynman diagrams (\href{http://osksn2.hep.sci.osaka-u.ac.jp/~taku/osx/feynmp.html}{feynmp} package) \item \href{http://casa.colorado.edu/~danforth/comp/cardtex.html}{Business cards} \item Conference posters (\href{http://www-i6.informatik.rwth-aachen.de/~dreuw/latexbeamerposter.php}{beamerposter} package) \item \ldots basically everything! \end{itemize} \end{block} \end{frame} %####################### %%%%%%%%%%%%%%%% % END MAIN PRESENTATION %%%%%%%%%%%%%%%% %####################### %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Backup Slides} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Backup Slides} \begin{center} {\alert{\Huge{Backup}}} \end{center} \end{frame} \subsection*{\LaTeX{} Resources} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{How Can I Learn More About \LaTeX{}?} \begin{itemize} \item \href{https://www.ctan.org/}{Comprehensive \TeX{} Archive Network (CTAN)} \item There are lots of great online resources. A great place to start is \href{https://www.overleaf.com/}{Overleaf}. \begin{itemize} \item Free student accounts \item Free \href{https://www.overleaf.com/latex/learn/free-online-introduction-to-latex-part-1\#.VPkJ7c3d-oo}{\LaTeX{} tutorial} very similar to one given tonight \end{itemize} \item Questions or confused? Check the \href{http://tex.stackexchange.com/}{\TeX{} Stack Exchange} \item Ask people who have used \LaTeX{}. They're usually happy to share their tricks of the trade. \end{itemize} \end{frame} \subsection*{Packages} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Installing packages} \begin{itemize} \item Most distributions will come with a package manager. It let's you search and install packages by query. \item If you don't know where to install a package so it can be universally accessed by your operating system, just save it in the same directory as your \TeX~file. 
\item If you're working on a Windows machine you'll probably need to run ``\href{http://tex.stackexchange.com/questions/48292/about-texhash-and-proper-installation-of-texlive}{texhash}'' from the command prompt before your computer will recognize the new package. \end{itemize} \end{frame} \subsection*{Macros} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Figure} \begin{verbatim} %%%%%%%%%%%%%%%%% FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \includegraphics[scale=1.00]{} %\caption{} %\label{fig:} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Figure: Example} \begin{verbatim} %%%%%%%%%%%%%%%%% FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \includegraphics[scale=0.40]{frog.pdf} \caption{Here is a very interesting picture of a frog.} \label{fig:frog} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Noting Figure~\ref{fig:frog}, it is seen that\ldots \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Figure: Example} %%%%%%%%%%%%%%%%% FIGURE %%%%%%%%%%%%%%%%% \begin{figure}[h!tbp] \centering \includegraphics[scale=0.40]{frog.pdf} \caption{Here is a very interesting picture of a frog.} \label{fig:frog} \end{figure} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Noting Figure~\ref{fig:frog}, it is seen that\ldots \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Table} \begin{verbatim} %%%%%%%%%%%%%%%%% TABLE %%%%%%%%%%%%%%%%% \begin{table}[!hbtp] \caption{Caption} \centering \scalebox{1.00}{ \begin{tabular}[!hbtp]{ | c | c | c | } \hline\hline \hline\hline \end{tabular}} \label{table:TABLE} \end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Equation} \begin{verbatim} %%%%%%%%%%% EQUATION %%%%%%%%%%% \begin{equation} \label{eq:1} \end{equation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Equation: Example} \begin{verbatim} %%%%%%%%%%% EQUATION %%%%%%%%%%% \begin{equation} \alpha + \beta = \gamma \label{eq:alpha} \end{equation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Here is Equation~\ref{eq:alpha}. \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Equation: Example} %%%%%%%%%%% EQUATION %%%%%%%%%%% \begin{equation} \alpha + \beta = \gamma \label{eq:alpha} \end{equation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Here is Equation~\ref{eq:alpha}. 
\end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Matrix} \begin{verbatim} %%%%%%%%%%% MATRIX %%%%%%%%%%% \begin{pmatrix} X & X\\ \end{pmatrix} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Matrix: Example} \begin{verbatim} %%%%%%%%%%% MATRIX %%%%%%%%%%% \[ \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Matrix: Example} %%%%%%%%%%% MATRIX %%%%%%%%%%% \[ \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \] %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Split line equation} \begin{verbatim} \[ \begin{split} Y &= mx + b \\ &= mx +b \end{split} \] \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Split line equation: Example} \[ \begin{split} Y &= mx + b \\ &= \alpha + \epsilon \end{split} \] \end{frame} \subsection*{} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Parentheses: Non-enclosing} \begin{verbatim} \[ ( \frac{\alpha}{\beta} ) \] \[ [ \frac{\alpha}{\beta} ] \] \[ \{ \frac{\alpha}{\beta} \} \] \end{verbatim} \[ ( \frac{\alpha}{\beta} ) \] \[ [ \frac{\alpha}{\beta} ] \] \[ \{ \frac{\alpha}{\beta} \} \] \ldots This doesn't look very nice. \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{Parentheses: Enclosing} \begin{verbatim} \[ \left( \frac{\alpha}{\beta} \right) \] \[ \left[ \frac{\alpha}{\beta} \right] \] \[ \left\{ \frac{\alpha}{\beta} \right\} \] \end{verbatim} \[ \parenth{\frac{\alpha}{\beta}} \] \[ \parenthsqr{\frac{\alpha}{\beta}} \] \[ \parenthcurl{\frac{\alpha}{\beta}} \] \ldots This does! \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{How can I use the \$, \#, \_ , \textbackslash ~symbols in documents?} \centering You need to put the \textbackslash ~immediately before them: \[ \begin{split} \text{\textbackslash}\$ &= \$ \\ \text{\textbackslash}\# &= \# \\ \text{\textbackslash}\_ &= \_ \\ \text{\textbackslash}\text{textba}&\text{ckslash} = \text{\textbackslash} \end{split} \] \end{frame} %%%%% \begin{frame}{How is \LaTeX{} pronounced?} The original author, \href{http://en.wikipedia.org/wiki/Leslie_Lamport}{Leslie Lamport}, says in his book \href{http://www.amazon.com/LaTeX-Document-Preparation-System-Edition/dp/0201529831}{\emph{LaTeX: A document Preparation System}} \begin{block}{} One of the hardest things about \LaTeX{} is deciding how to pronounce it. This is also one of the few things I'm not going to tell you about \LaTeX, since pronunciation is best determined by usage, not fiat. \TeX{} is usually pronounced ``\emph{teck}'', making ``\emph{lah}-teck'', and ``\emph{lay}-teck'' the logical choices; but language is not always logical, so ``lay-tecks'' is also possible. 
\end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{Can I create a SVG in \LaTeX{}?} \centering Not directly.\\ However, if you're running Linux you can use this script to convert your \LaTeX{} markup output to SVG in about 1 second:\\ \vspace{1cm} \href{https://github.com/matthewfeickert/LaTeX2SVG}{\LaTeX2SVG} \end{frame} %########### %%%%%%%% \end{document} %########### %%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}[fragile]{} \begin{verbatim} \end{verbatim} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{} \begin{block}{} \begin{itemize} \item \end{itemize} \end{block} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame} \frametitle{Two Column Output} \begin{columns}[c] \column{1.5in} Practical \TeX\ 2005\\ Practical \TeX\ 2005\\ Practical \TeX\ 2005 \column{1.5in} \begin{figure} \includegraphics[scale=0.20]{first_neutrino_annotated.eps} \caption{Credit: tomsastroblog.com} \end{figure} \end{columns} \end{frame} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{frame}{What is \LaTeX?} \LaTeX~is a markup language written for the \TeX~typesetting program. \begin{block}{Features of \TeX} \begin{itemize} \item Open source \item Allows public to produce high-quality documents individually \item Provides identical output on all computers (both now and in the future) \item Widely used in academia \begin{itemize} \item Ex. Mathematics, computer science, economics, engineering, physics, and statistics. \end{itemize} \end{itemize} \end{block} \end{frame}
{ "alphanum_fraction": 0.6021706956, "avg_line_length": 30.8054711246, "ext": "tex", "hexsha": "31654bb140a1a51633deabce5c0acd34dd8fa202", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "12a9918973a593751e73d88b1484a79091ada72a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "matthewfeickert/UTAustin-LaTeX-Workshop-2015", "max_forks_repo_path": "2015-03-12_LaTeX_Workshop.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "12a9918973a593751e73d88b1484a79091ada72a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "matthewfeickert/UTAustin-LaTeX-Workshop-2015", "max_issues_repo_path": "2015-03-12_LaTeX_Workshop.tex", "max_line_length": 397, "max_stars_count": null, "max_stars_repo_head_hexsha": "12a9918973a593751e73d88b1484a79091ada72a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "matthewfeickert/UTAustin-LaTeX-Workshop-2015", "max_stars_repo_path": "2015-03-12_LaTeX_Workshop.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9403, "size": 30405 }
%mainfile: ../lecture_notes.tex \chapter{Recursive Least-Squares Adaptive Filters} \label{ch:RLS} \section{Review of the Basics} \subsection{The Matrix Inversion Lemma} \label{sec:woodbury} Let $\vect{X}$, $\vect{Y}$, $\vect{X}+\vect{UYV}$, and $\vect{Y}^{-1}+\vect{VX}^{-1}\vect{U}$ all be non-singular matrices. By equating element $(1,1)$ of the two block matrices in \eq{eq:block_inv1} and \eq{eq:block_inv2} and setting \begin{align} \vect{X} &= \vect{A}\\ \vect{U} &= \vect{B}\\ \vect{V} &= \vect{C}\\ \vect{Y} &= -\vect{D}^{-1}\ , \end{align} we obtain the matrix inversion lemma \bmath (\vect{X}+\vect{UYV})^{-1} = \vect{X}^{-1}-\vect{X}^{-1}\vect{U}(\vect{Y}^{-1}+\vect{VX}^{-1}\vect{U})^{-1}\vect{V}\vect{X}^{-1}\ . \label{eq:woodbury} \emath The matrix inversion lemma is also sometimes called the \textit{Woodbury matrix identity} or the \textit{Woodbury's identity}. \section{Method of Least-Squares} The adaptive filtering problem is shown in Fig.~\ref{fig:nonwss_block_diagram2}. \begin{figure}[htbp] \centering \inputTikZ{figures/nonwss_block_diagram} \caption{Block diagram of adaptive filtering in a non-WSS environment.} \label{fig:nonwss_block_diagram2} \end{figure} From the figure, we have that \begin{itemize} \item[$u(n)$:] zero-mean input signal \item[$w_m(n)$:] $M$-tap FIR-filter with impulse response $w_0(n),w_1(n),\ldots,w_{M-1}(n)$ \item[$y(n)$:] output signal given by $y(n) = \sum_{m=0}^{M-1}w_m(n) u(n-m)$ \item[$d(n)$:] zero-mean desired signal \item[$e(n)$:] error signal \end{itemize} Define \begin{align} \vect{w}(n) &= \bbmtx w_0(n) & w_{1}(n) & \cdots & w_{M-1}(n)\ebmtx^T\\ \vect{u}(n)&= \bbmtx u(n) & u(n-1) & \cdots & u(n-M+1)\ebmtx^T\ . \end{align} For $i=1,2,\ldots,n$ with $n\geq M$, define \begin{align} \vect{A}(n) &= \bbmtx \vect{u}(1) & \vect{u}(2) & \cdots & \vect{u}(n)\ebmtx^T\\ \vect{d}(n) &= \bbmtx d(1) & d(2) & \cdots & d(n)\ebmtx^T\\ \vect{e}(n) &= \bbmtx e(1) & e(2) & \cdots & e(n)\ebmtx^T\ . \end{align} Then \bmath \vect{e}(n) = \vect{d}(n)-\vect{A}(n)\vect{w}(n)\quad\iff\quad\vect{d}(n)=\vect{A}(n)\vect{w}(n)+\vect{e}(n)\ . \emath Note that we have made $\vect{A}(n)$, $\vect{d}(n)$, and $\vect{e}(n)$ time-dependent in order to emphasise that we are here concerned with an online algorithm. Therefore, we also formulate the squared error cost function $J_2(\vect{w}(n))$ in a slightly different way compared to \eq{eq:cf_squared_error}. Here, we define it as \bmath J_2(\vect{w}(n)) = \sum_{i=1}^{n}e^2(i)=\vect{e}^T(n)\vect{e}(n)\ . \emath In the method of least-squares, we wish to minimise $J_2(\vect{w}(n))$ which we can write as \begin{align} J_2(\vect{w}(n)) &= \vect{e}^T(n)\vect{e}(n) = (\vect{d}(n)-\vect{A}(n)\vect{w}(n))^T(\vect{d}(n)-\vect{A}(n)\vect{w}(n))\notag\\ &= \vect{d}^T(n)\vect{d}(n)+\vect{w}^T(n)\vect{A}^T(n)\vect{A}(n)\vect{w}(n)-2\vect{w}^T(n)\vect{A}^T(n)\vect{d}(n)\ . \end{align} The minimiser \bmath \vect{w}_o(n) = (\vect{A}^T(n)\vect{A}(n))^{-1}\vect{A}^T(n)\vect{d}(n) \emath is referred to as the least-squares solution, and it is the unique solution to the normal equations \bmath \vect{A}^T(n)\vect{A}(n)\vect{w}(n) = \vect{A}^T(n)\vect{d}(n)\ , \emath provided that $\vect{A}(n)$ has full rank. \subsection{Weighted Least-Squares} When the statistics of $u(n)$ and/or $d(n)$ is time dependent, the minimisation of the squared error $J_2(\vect{w}(n))$ may fail to give a good estimate at time $n$ since all data affect the value of $J_2(\vect{w}(n))$ with the same weight. 
Ideally, we would like that the new data are assigned a larger weight than the old data. In order to do this, we reformulate $J_2(\vect{w}(n))$ as a weighted cost function \bmath J_\beta(\vect{w}(n)) = \sum_{i=1}^{n}\beta(n,i) e^2(i) = \vect{e}^T(n)\vect{B}(n) \vect{e}(n) \emath where $\beta(n,i)$ contains the weight pertaining to the $i$'th error at time $n$, and $\vect{B}(n)$ is a diagonal matrix given by \bmath \vect{B}(n) = \diag{\beta(n,1), \beta(n,2), \ldots , \beta(n,n)}\ . \emath We use the five step recipe in Sec.~\ref{ssec:optimisation} to minimise $J_\beta(\vect{w}(n))$ w.r.t $\vect{w}(n)$. \begin{enumerate} \item Construct the cost function \begin{align} J_\beta(\vect{w}(n)) &= \vect{e}^T(n)\vect{B}(n) \vect{e}(n) = (\vect{d}(n)-\vect{A}(n)\vect{w}(n))^T\vect{B}(n)(\vect{d}(n)-\vect{A}(n)\vect{w}(n))\notag\\ &= \vect{d}(n)^T\vect{B}(n)\vect{d}(n)+\vect{w}^T(n)\vect{\Phi}(n)\vect{w}(n)-2\vect{w}^T(n)\vect{\varphi}(n) \end{align} where we have defined $\vect{\Phi}(n)$ and $\vect{\varphi}(n)$ as \begin{align} \vect{\Phi}(n) &= \vect{A}^T(n)\vect{B}(n)\vect{A}(n)\\ \vect{\varphi}(n) &= \vect{A}^T(n)\vect{B}(n)\vect{d}(n)\ . \end{align} We refer to $\vect{\Phi}(n)$ and $\vect{\varphi}(n)$ as the correlation matrix and the cross-correlation vector, respectively, since they are scaled and weighted estimates of $\vect{R}_u$ and $\vect{r}_{ud}$. \item Find the gradient \bmath \vect{g}(\vect{w}(n)) = (\vect{\Phi}(n)+\vect{\Phi}^T(n))\vect{w}(n)-2\vect{\varphi}(n)=2\vect{\Phi}(n)\vect{w}(n)-2\vect{\varphi}(n) \emath \item Solve $\vect{g}(\vect{w}(n))=\vect{0}$ for $\vect{w}(n)$ \begin{alignat}{3} &&\qquad \vect{g}(\vect{w}(n)) &= \mathrlap{2\vect{\Phi}(n)\vect{w}(n)-2\vect{\varphi}(n)}\notag\\ \ArrowBetweenLines &&\qquad \alignedbox{\vect{\Phi}(n)\vect{w}(n)}{=\vect{\varphi}(n)} &\qquad &\text{(Weighted Normal Equations)}\\ \ArrowBetweenLines &&\qquad \vect{w}_o(n) &= \vect{\Phi}^{-1}(n)\vect{\varphi}(n) &\qquad &\text{(If }\vect{\Phi}(n)\text{ is invertible)} \end{alignat} \item Find the Hessian \bmath \vect{H}(\vect{w}) = 2\vect{\Phi}(n) \emath which is p.d. for all $\vect{w}(n)$ if $\vect{A}(n)$ has full rank and $\beta(n,i)>0$ for all $n\geq i > 0$. \item This implies that \begin{itemize} \item $J_\beta(\vect{w}(n))$ is a convex function, and \item $\vect{w}_o(n) = \vect{\Phi}^{-1}(n)\vect{\varphi}(n)$ is the global minimiser. \end{itemize} \end{enumerate} The solution $\vect{w}_o(n)$ is often referred to as the weighted least-squares solution. \subsubsection{Estimation of the (Cross-)Correlation} Comparing the weighted normal equations with the Wiener-Hopf equations, we see that \begin{align} \hat{\vect{R}}_u(n) &= c(n,\beta)\vect{\Phi}(n) = c(n,\beta)\vect{A}^T(n)\vect{B}(n)\vect{A}(n) = c(n,\beta)\sum_{i=1}^{n}\beta(n,i)\vect{u}(i)\vect{u}^T(i)\label{eq:wls_Ru_est}\\ \hat{\vect{r}}_{ud}(n) &= c(n,\beta)\vect{\varphi}(n) = c(n,\beta)\vect{A}^T(n)\vect{B}(n)\vect{d}(n) = c(n,\beta)\sum_{i=1}^{n}\beta(n,i)\vect{u}(i)d(i)\label{eq:wls_rud_est} \end{align} are the estimates of the correlation matrix and the cross-correlation vector, respectively. The constant $c(n,\beta)$ depends on $n$ and the weighting function $\beta(n,i)$, and it can be selected such that $\hat{\vect{R}}_u(n)$ and $\hat{\vect{r}}_{ud}(n)$ are unbiased estimates of $\vect{R}_u(n)$ and $\vect{r}_{ud}(n)$. \subsection{Weight Functions} We consider three simple weight functions. 
\subsubsection{Growing Window Weight Function} \begin{figure}[htbp] \centering \inputTikZ{figures/growing_window} \caption{The growing window weight function.} \label{fig:growing_window} \end{figure} \noindent If we select the weight function as \bmath \beta(n,i) = \begin{cases} 1 & 0 < i \leq n\\ 0 & \text{otherwise} \end{cases}\ , \emath we obtain the growing window weight function, and it is sketched in Fig.~\ref{fig:growing_window}. Selecting the growing window weight function reduces the weighted least-squares problem to the standard least-squares problem. In order to obtain unbiased estimates of $\vect{R}_u(n)$ and $\vect{r}_{ud}(n)$ in \eq{eq:wls_Ru_est} and \eq{eq:wls_rud_est}, respectively, we have to use \bmath c(n,\beta) = \frac{1}{n}\ . \emath \subsubsection{Sliding Window Weight Function} \begin{figure}[htbp] \centering \inputTikZ{figures/sliding_window} \caption{The sliding window weight function.} \label{fig:sliding_window} \end{figure} \noindent If we select the weight function as \bmath \beta(n,i) = \begin{cases} 1 & n-L < i \leq n\\ 0 & \text{otherwise} \end{cases} \emath for $0 < L \leq n$, we obtain the sliding window weight function, and it is sketched in Fig.~\ref{fig:sliding_window}. If we select $L=n$, the sliding window weight function reduces to the growing window weight function. In order to obtain unbiased estimates of $\vect{R}_u(n)$ and $\vect{r}_{ud}(n)$ in \eq{eq:wls_Ru_est} and \eq{eq:wls_rud_est}, respectively, we have to use \bmath c(n,\beta) = \frac{1}{L}\ . \emath \subsubsection{Exponential Weight Function} \begin{figure}[htbp] \centering \inputTikZ{figures/exponential_window} \caption{The exponential weight function.} \label{fig:exponential_window} \end{figure} \noindent If we select the weight function as \bmath \beta(n,i) = \begin{cases} \lambda^{n-i} & 0 < i \leq n\\ 0 & \text{otherwise} \end{cases} \emath for $0<\lambda\leq 1$, we obtain the exponential weight function, and it is sketched in Fig.~\ref{fig:exponential_window}. The parameter $\lambda$ is called \textit{the forgetting factor}. If we select $\lambda=1$, the exponential weight function reduces to the growing window weight function. In order to obtain unbiased estimates of $\vect{R}_u(n)$ and $\vect{r}_{ud}(n)$ in \eq{eq:wls_Ru_est} and \eq{eq:wls_rud_est}, respectively, we have to use \bmath c(n,\beta) = \begin{cases} \displaystyle\frac{1-\lambda}{1-\lambda^n} & 0<\lambda< 1\\ \displaystyle\frac{1}{n} & \lambda=1 \end{cases}\ . \emath \section{The Recursive Least-Squares Algorithm with an Exponential Weight Function} In an online algorithm, we have to solve the weighted normal equations \bmath \vect{\Phi}(n)\vect{w}(n) = \vect{\varphi}(n) \emath for $\vect{w}(n)$ at every time index $n$. However, solving this equation directly as \bmath \vect{w}(n) = \vect{\Phi}^{-1}(n)\vect{\varphi}(n) \emath yields a high computational complexity of the algorithm for the following two reasons. \begin{enumerate} \item The matrices $\vect{A}(n)$ and $\vect{B}(n)$, and the vector $\vect{d}(n)$ grows with $n$. Computing $\vect{\Phi}(n)$ and $\vect{\varphi}(n)$ directly would therefore be infeasible for an online algorithm, unless we use a weight function with a finite duration window. \item For an $M$-tap FIR filter, it requires in the order of $\mathcal{O}(M^3)$ operations to solve the normal equations for the filter coefficient vector $\vect{w}(n)$. \end{enumerate} The recursive least-squares (RLS) algorithm bypasses these two problems. 
Below, we consider how this is obtained for an exponential weight function, which is the most common weight function. \subsubsection{Recursive Computation of $\vect{\Phi}(n)$ and $\vect{\varphi}(n)$} We may compute the correlation matrix recursively by rewriting it as \begin{align} \vect{\Phi}(n) &= \vect{A}^T(n)\vect{B}(n)\vect{A}(n) = \sum_{i=1}^{n}\lambda^{n-i}\vect{u}(i)\vect{u}^T(i)\\ &= \lambda^0 \vect{u}(n)\vect{u}^T(n) + \sum_{i=1}^{n-1}\lambda^{n-i}\vect{u}(i)\vect{u}^T(i) = \vect{u}(n)\vect{u}^T(n) + \lambda\sum_{i=1}^{n-1}\lambda^{n-1-i}\vect{u}(i)\vect{u}^T(i)\notag\\ &= \vect{u}(n)\vect{u}^T(n) + \lambda\vect{\Phi}(n-1)\ . \label{eq:rls_corr_recurs} \end{align} Similarly, we may compute the cross-correlation vector recursively by rewriting it as \begin{align} \vect{\varphi}(n) &= \vect{A}^T(n)\vect{B}(n)\vect{d}(n) = \sum_{i=1}^{n}\lambda^{n-i}\vect{u}(i)d(i)\\ &= \lambda^0 \vect{u}(n)d(n) + \sum_{i=1}^{n-1}\lambda^{n-i}\vect{u}(i)d(i) = \vect{u}(n)d(n) + \lambda\sum_{i=1}^{n-1}\lambda^{n-1-i}\vect{u}(i)d(i)\notag\\ &= \vect{u}(n)d(n) + \lambda\vect{\varphi}(n-1)\ . \end{align} These recursive formulations of the correlation matrix and the cross-correlation vector clearly reduce the computational complexity and are suitable to use in an online algorithm. \subsubsection{Inversion of $\vect{\Phi}(n)$} For the inversion of the correlation matrix, we use the matrix inversion lemma stated in Sec.~\ref{sec:woodbury}. Comparing the recursive formulation of the correlation matrix in \eq{eq:rls_corr_recurs} with the left side of \eq{eq:woodbury}, we obtain \begin{align} \vect{X} &= \lambda\vect{\Phi}(n-1)\\ \vect{U} &= \vect{u}(n)\\ \vect{V} &= \vect{u}^T(n)\\ \vect{Y} &= 1\ . \end{align} Thus, invoking the matrix inversion lemma, we have that \bmath \vect{\Phi}^{-1}(n) = \lambda^{-1}\vect{\Phi}^{-1}(n-1)-\lambda^{-2}\frac{\vect{\Phi}^{-1}(n-1)\vect{u}(n)\vect{u}^T(n)\vect{\Phi}^{-1}(n-1)}{1+\lambda^{-1}\vect{u}^T(n)\vect{\Phi}^{-1}(n-1)\vect{u}(n)}\ . \label{eq:rls_inv_phi} \emath Note that the computational complexity of the right side of the equation is much lower than that of the left side of the equation when $\vect{\Phi}^{-1}(n-1)$ is known. In order to simplify the notation, we define \begin{alignat}{2} \vect{P}(n) &= \vect{\Phi}^{-1}(n)&\quad&\text{(inverse correlation matrix)}\\ \vect{k}(n) &= \frac{\vect{P}(n-1)\vect{u}(n)}{\lambda+\vect{u}^T(n)\vect{P}(n-1)\vect{u}(n)}&\quad&\text{(gain vector)} \end{alignat} which leads to that we may write \eq{eq:rls_inv_phi} as \bmath \vect{P}(n) = \lambda^{-1}\left[\vect{P}(n-1)-\vect{k}(n)\vect{u}^T(n)\vect{P}(n-1)\right]\ . \emath By rearranging the expression for the gain vector, we obtain \begin{align} \vect{k}(n) &= \lambda^{-1}\left[\vect{P}(n-1)-\vect{k}(n)\vect{u}^T(n)\vect{P}(n-1)\right] \vect{u}(n)\notag\\ &= \vect{P}(n)\vect{u}(n)\ . \end{align} \subsubsection{Recursive computation of $\vect{w}(n)$} We can now develop the recursive update equation of the filter coefficient vector. 
We have that \begin{align} \vect{w}(n) &= \vect{P}(n)\vect{\varphi}(n)\\ &= \vect{P}(n)\left[\vect{u}(n)d(n) + \lambda\vect{\varphi}(n-1)\right]=\lambda\vect{P}(n)\vect{P}^{-1}(n-1)\vect{w}(n-1)+\vect{k}(n)d(n)\notag\\ &= \left[\vect{P}(n-1)-\vect{k}(n)\vect{u}^T(n)\vect{P}(n-1)\right]\vect{P}^{-1}(n-1)\vect{w}(n-1)+\vect{k}(n)d(n)\notag\\ &= \vect{w}(n-1)-\vect{k}(n)\vect{u}^T(n)\vect{w}(n-1)+\vect{k}(n)d(n)\notag\\ &= \vect{w}(n-1)+\vect{k}(n)\xi(n) \end{align} where we have defined the a priori error as \bmath \xi(n) = d(n)-\vect{u}^T(n)\vect{w}(n-1)\ . \emath \subsubsection{The RLS Algorithm} The RLS algorithm may now be formulated as the following set of equations \begin{align} \vect{\pi}(n) &= \vect{P}(n-1)\vect{u}(n)\\ \vect{k}(n) &= \frac{\vect{\pi}(n)}{\lambda+\vect{u}^T(n)\vect{\pi}(n)}\\ \xi(n) &= d(n)-\vect{u}^T(n)\vect{w}(n-1)\\ \vect{w}(n) &= \vect{w}(n-1)+\vect{k}(n)\xi(n)\\ \vect{P}(n) &= \lambda^{-1}\left[\vect{P}(n-1)-\vect{k}(n)\vect{\pi}^T(n)\right]\ . \end{align} For an $M$-tap FIR filter, it requires on the order of $\mathcal{O}(M^2)$ operations to run one iteration of the RLS algorithm. \subsubsection{Initialisation} In order to start the RLS algorithm, we need to select values for the initial inverse correlation matrix $\vect{P}(0)$, the initial filter coefficient vector $\vect{w}(0)$, and the input samples $u(n)$ for $n=-M+2,-M+3,\ldots,0$. Typically, we assume that \begin{align} \vect{P}(0) &= \delta^{-1}\vect{I}\\ \vect{w}(0) &= \vect{0}\\ u(n) & = 0\ ,\quad\text{for } -M+1<n<1\ . \end{align} The first assumption corresponds to assuming that $\vect{u}(n)$ for $n<1$ is a white random process with covariance matrix $\delta\vect{I}$. The value of $\delta$ should reflect the SNR of the input data, with $\delta$ being small for a high SNR and $\delta$ being large for a low SNR \cite[pp.~444--446]{Haykin2001}. This assumption introduces bias into the correlation matrix $\vect{\Phi}(n)$. However, this bias decreases to zero for an increasing $n$. An alternative initialisation, which does not introduce bias, is to estimate the correlation matrix and the cross-correlation vector as \cite[pp.~545--546]{Hayes1996} \begin{align} \vect{P}(0) &= \left[\sum_{i=-M+1}^{0}\lambda^{-i}\vect{u}(i)\vect{u}^T(i)\right]^{-1}\\ \vect{\varphi}(0) &= \sum_{i=-M+1}^{0}\lambda^{-i}\vect{u}(i)d(i) \end{align} prior to starting the RLS algorithm at time $n=1$. The initial value of the filter coefficient vector can be set to $\vect{w}(0)=\vect{P}(0)\vect{\varphi}(0)$. Note that this approach requires that we know the input signal from time $n=-2M+2$ and the desired signal from time $n=-M+1$. \subsection{Selection of the Forgetting Factor} At time $n$, the memory of the sliding window RLS algorithm is the $L$ newest samples indexed by $n-L+1,\ldots, n$. For the exponentially weighted RLS algorithm, the memory is controlled by the forgetting factor $\lambda$. Whereas the interpretation of $L$ is simple, the corresponding interpretation of $\lambda$ is less intuitive when we investigate the memory of the exponentially weighted RLS algorithm. That is, we would like to interpret the forgetting factor as a sliding window length. We call this window length the effective window length and denote it by $L_\textup{eff}$. A simple way of connecting $L_\textup{eff}$ and $\lambda$ is by requiring that \bmath \lim_{n\to\infty} \sum_{i=1}^{n}\lambda^{n-i} = \lim_{n\to\infty} \sum_{i=n-L_\textup{eff}+1}^{n}1\ .
\emath That is, when the RLS algorithm has reached steady-state, the area under the sliding window curve should equal the area under the exponential window curve. This leads to \bmath \lim_{n\to\infty} \sum_{i=n-L_\textup{eff}+1}^{n}1 = L_\textup{eff} \emath and \bmath \lim_{n\to\infty} \sum_{i=1}^{n}\lambda^{n-i}=\lim_{n\to\infty} \sum_{k=0}^{n-1}\lambda^{k} = \lim_{n\to\infty}\frac{1-\lambda^n}{1-\lambda} = \frac{1}{1-\lambda} \emath where the second last equality follows for $\lambda\neq 1$ from the geometric series, and the last equality follows if $0<\lambda<1$. Thus, we have that \bmath L_\textup{eff} = \frac{1}{1-\lambda}\ . \emath \subsection{Transient Analysis} The RLS algorithm is stable in the mean and the mean-square if $0<\lambda\leq 1$. It may also be shown that the rate of convergence of the RLS algorithm is typically an order of magnitude faster than the rate of convergence of the LMS algorithm. Moreover, the rate of convergence of the RLS algorithm is invariant to the condition number of the correlation matrix $\vect{R}_u$ of the input signal \cite[p.~463, ch.~14]{Haykin2001}. \subsection{Steady-State Analysis} It can be shown that \cite[p.~510]{Sayed2003} \begin{alignat}{2} &\text{EMSE:}&\quad J_\textup{ex} &= J_2(\vect{w}(\infty))-J_\textup{min}\notag\\ & &\quad &\approx J_\textup{min}\frac{(1-\lambda)M}{1+\lambda-(1-\lambda)M}\\ &\text{Misadjustment:}&\quad \mathcal{M} &= \frac{ J_\textup{ex}}{J_\textup{min}} \approx \frac{(1-\lambda)M}{1+\lambda-(1-\lambda)M}\\ &\text{MSD:} &\quad E[\|\vect{\Delta}\vect{w}(\infty)\|^2] &\approx J_\textup{ex}\sum_{m=1}^M\frac{1}{\lambda_m} \end{alignat} where $\lambda_m$ is the $m$th eigenvalue of the correlation matrix $\vect{R}_u$, not to be confused with the forgetting factor $\lambda$. The approximations hold under certain conditions, which may be found in \cite[pp.~508--510]{Sayed2003}. \subsection{Computational Cost} Table~\ref{tab:comp_cost_rls} shows the computational cost of the RLS algorithm in terms of the number of multiplications, additions or subtractions, and divisions. From the table, we see that the total number of flops is $5M^2+5M+1$. Thus, the RLS algorithm has a complexity of $\mathcal{O}(M^2)$. Note that there exist faster ways of implementing the RLS algorithm \cite[p.~247]{Sayed2003}. Some of them even achieve linear complexity \cite{Slock1991}. \begin{table}[htbp] \centering \begin{tabular}{l c c c} \toprule Term & $\times$ & $+$ or $-$ & $/$\\ \midrule $\vect{\pi}(n)=\vect{P}(n-1)\vect{u}(n)$ & $M^2$ & $M(M-1)$ &\\ $\vect{k}(n) = \vect{\pi}(n)/(\lambda+\vect{u}^T(n)\vect{\pi}(n))$ & $M$ & $M$ & $1$\\ $\xi(n)=d(n)-\vect{u}^T(n)\vect{w}(n-1)$ & $M$ & $M$ & \\ $\vect{w}(n) = \vect{w}(n-1)+\vect{k}(n)\xi(n)$ & $M$ & $M$ & \\ $\vect{P}(n) = (\vect{P}(n-1)-\vect{k}(n)\vect{\pi}^T(n))/\lambda$ & $M^2$ & $M^2$ & $M^2$ \\ \midrule Total & $2M^2+3M$ & $2M^2+2M$ & $M^2+1$\\ \bottomrule \end{tabular} \caption{Computational cost of the RLS algorithm.} \label{tab:comp_cost_rls} \end{table}
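To make the update equations concrete, the following minimal NumPy sketch implements one iteration of the exponentially weighted RLS algorithm exactly as listed above; the function and variable names (e.g., \texttt{rls\_update}, \texttt{lam} for $\lambda$) are illustrative and not taken from any particular toolbox.
\begin{verbatim}
import numpy as np

def rls_init(M, delta):
    # P(0) = delta^{-1} I and w(0) = 0
    return np.eye(M) / delta, np.zeros(M)

def rls_update(P, w, u, d, lam):
    # One iteration of the exponentially weighted RLS algorithm.
    pi = P @ u                        # pi(n) = P(n-1) u(n)
    k = pi / (lam + u @ pi)           # gain vector k(n)
    xi = d - u @ w                    # a priori error xi(n)
    w = w + k * xi                    # filter coefficient update
    P = (P - np.outer(k, pi)) / lam   # inverse correlation matrix update
    return P, w
\end{verbatim}
In practice, the forgetting factor is often chosen through the effective window length derived above, e.g., $\lambda = 1 - 1/L_\textup{eff}$.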
{ "alphanum_fraction": 0.6594751327, "avg_line_length": 58.9797101449, "ext": "tex", "hexsha": "0a32437cf967af13747bc3ed5f5fbd41d8bea8fd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "194706662078f810c163e403548395a532471d0c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "jkjaer/adaptiveFilteringLectureNotes", "max_forks_repo_path": "lecture_notes/lectures/RLS.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "194706662078f810c163e403548395a532471d0c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "jkjaer/adaptiveFilteringLectureNotes", "max_issues_repo_path": "lecture_notes/lectures/RLS.tex", "max_line_length": 678, "max_stars_count": 3, "max_stars_repo_head_hexsha": "194706662078f810c163e403548395a532471d0c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "jkjaer/adaptiveFilteringLectureNotes", "max_stars_repo_path": "lecture_notes/lectures/RLS.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-21T07:10:32.000Z", "max_stars_repo_stars_event_min_datetime": "2019-07-22T19:46:04.000Z", "num_tokens": 7437, "size": 20348 }
\setlength{\footskip}{8mm} \chapter{Introduction} \textit{This dissertation explores the possibilities and develops algorithms for modeling human behaviors and detecting anomalies without requiring large databases of training data. In this chapter, I discuss the background and motivation of my work, review real-world surveillance systems, present the contributions of the work, and provide an outline of this dissertation.} \section{Background and Motivation} Due to the increasing demand for surveillance of public areas, video surveillance is becoming ubiquitous in our lives. It is becoming increasingly important for preventing and responding to terrorist and other criminal activities. However, the resulting proliferation of surveillance cameras makes it difficult to monitor all channels continuously. Human monitoring is therefore becoming more expensive and less effective as the torrent of video data grows. For instance, in a CCTV monitoring room (see Figure \ref{fig:monitoring}), security operators are required to monitor 24 hours a day and be ready to take action when an alarm occurs. Security will be enhanced by an intelligent surveillance system able to filter and archive ``normal'' events and to automatically raise alarms for possible ``abnormal'' events or present such events to human security personnel for consideration as a security threat. The problems to consider in a surveillance system include pedestrian detection and tracking, unattended object detection, human behavior modeling, and anomaly detection. \begin{figure}[t] \centering \includegraphics[width=5in]{figures/monitoring} \caption[CCTV monitoring room.]{\small CCTV monitoring room. Reprinted from the Twenty First Security Web site (\url{http://www.twentyfirstsecurity.com.au/}).} \label{fig:monitoring} \end{figure} Human behavior understanding is one of the key components of intelligent video surveillance. However, the problem is difficult and remains unsolved, due to the wide range of activities possible in any given context and the large amount of variability within any particular activity. One limitation of much of the existing work is that it learns in batch mode and creates separate models, which remain static once trained, for each distinct class of behavior. This approach may have an advantage in explaining well-defined data, but it requires sufficiently large samples of behavior patterns before the system can be trained. Another limitation is that the number of ``normal'' behavior patterns needs to be known beforehand, because we cannot learn a model for ``abnormal'' behavior that is rare and diverse. Therefore, we must instead detect deviations from typical behavior. The ambiguity of human behavior makes the problem more challenging, since behavior is scene-dependent and can change over time. In particular, a behavior considered normal in one context might be considered unusual in another context, depending on the type of behavior and where and when it is observed. The characteristics of typical behavior also vary from scene to scene. Many researchers have attempted to build surveillance systems able to interpret and understand human behaviors. However, most of the work learns behaviors in an offline fashion and keeps the model static once trained. This is impractical for real-time surveillance applications, since offline training requires a great deal of computational time and resources. In addition, the number of behaviors needs to be known beforehand.
We therefore need to find a way to initially recognize and model each of the common behavior groups in a particular scene as well as to incrementally learn scene-specific models as new behavior is observed. Many open source and commercial video surveillance products are available. Most of the products do not implement any intelligent features; therefore, they cannot learn and understand the events occurring in a scene. Moreover, a great deal of configuration must be done before a user can deploy the system to operate in a real situation. For instance, a user may need to define restricted zones or rules for a specific scene. Three main challenges for human behavior understanding and anomaly detection are as follows. \begin{enumerate} \item It is impractical to store all data in a system; therefore, we need an efficient approach that learns scene-specific statistical models of human behavior without requiring storage of large databases of training data. \item Real human behavior is sometimes ambiguous; therefore, we need to keep humans such as security personnel in the loop to interpret it. \item Unusual behavior is rare and diverse, and it is also impractical to acquire sufficient data for a good model for it; therefore, we need to detect deviations from typical behavior instead. \end{enumerate} In this dissertation, we propose to explore, implement, and evaluate an efficient method for automatic identification of suspicious behavior in video surveillance data that incrementally learns scene-specific statistical models of human behavior without requiring storage of large databases of training data. The method is based on hidden Markov models (HMMs) with sufficient statistics and an optimal threshold on the likelihood of an event according to the human behavior model. We begin by building an initial set of models explaining the behaviors occurring in a small bootstrap data set. The bootstrap procedure partitions the bootstrap set into clusters then assigns new observation sequences to clusters based on statistical tests of HMM log likelihood scores. Cluster-specific likelihood thresholds are learned rather than set arbitrarily. After bootstrapping, each new sequence is used to incrementally update the sufficient statistics of the HMM it is assigned to. Our method is an effective solution to the problem of inducing scene-specific statistical models useful for bringing suspicious behavior to the attention of human security personnel. \section{Review of Real-World Surveillance Systems} We divide this section into open source and commercial products. \subsection{Open Source Products} ZoneMinder (ZM; Coombes, 2007)\nocite{zoneminder} is an open source video surveillance system for Linux. It has been released under the terms of the GNU general public license (GPL). ZM supports both IP cameras and USB cameras. Figure \ref{fig:zm-webcam} shows an example of a scene captured by an IP camera and a USB camera. The system consists of many independent modules including motion detection. Each component is designed to use as few resources as possible, maximizing the efficiency of the machine. It provides a PHP-based Web interface that allows us to control the cameras and monitor a scene from anywhere. Besides controlling and monitoring, we can also review, archive, or delete the events through its Web interface. 
\begin{figure}[t] \begin{center} \subfloat[]{\includegraphics[scale=0.6]{figures/zm-webcam01.jpg} \label{fig:zm-webcam01}} \hspace{0.1in} \subfloat[]{\includegraphics[scale=0.6]{figures/zm-webcam02.jpg} \label{fig:zm-webcam02}} \end{center} \caption[Example of a scene captured by ZoneMinder.]{\small Example of a scene captured by (a) an IP camera and (b) a USB camera from ZoneMinder.} \label{fig:zm-webcam} \end{figure} SecureCam \shortcite{bedecs09securecam} is an open source software package for the Windows platform providing a friendly user interface, as shown in Figure \ref{fig:securecam}. Similar to ZoneMinder, SecureCam supports multiple cameras and provides a few simple and fast algorithms such as motion detection. It has e-mail and sound notification features, and it also allows users to record video for later review. \begin{figure}[t] \begin{center} \includegraphics[width=4in]{figures/securecam.jpg} \caption[Screen shot of SecureCam.]{\small Screen shot of SecureCam. Reprinted from the SecureCam Web site (\url{http://sourceforge.net/projects/securecam/}).} \label{fig:securecam} \end{center} \end{figure} \subsection{Commercial Products} Vitamin D Video \shortcite{vitamind} is a commercial video surveillance system. It supports USB webcams and network cameras and provides advanced features such as creating rules for specific events. For example, the user could create a rule to record a clip only when a person opens the door, or a rule to send an email notification when the system senses movement in a defined region. The system also allows users to monitor a scene and to search for videos of interest. Figure \ref{fig:vitamind} shows a screen shot of the software. \begin{figure}[t] \begin{center} \includegraphics[width=4in]{figures/vitamind.jpg} \caption[Screen shot of Vitamin D Video.]{\small Screen shot of Vitamin D Video. Reprinted from the Vitamin D Video Web site (\url{http://www.vitamindinc.com/}).} \label{fig:vitamind} \end{center} \end{figure} It is interesting that Vitamin D Video does not use motion detection, but rather uses people detection. It uses hierarchical temporal memory (HTM; Hawkins, 2007)\nocite{jeff07htm} to detect people. This algorithm attempts to mimic the way the brain recognizes a person. Video Analytics \shortcite{dvtel} is a video surveillance software package that includes many different modules. Sample modules are intrusion detection, unattended baggage detection, object removal detection, autonomous pan-tilt-zoom (PTZ) tracking, stopped vehicle detection, loitering detection, and camera tampering detection. However, some modules such as intrusion detection do not have any intelligent features. SuperTrack \shortcite{supertrack} is an intelligent video surveillance system. SuperTrack is designed for automatically analyzing and monitoring events without the need for human attention and interaction. This system provides automatic object detection, abnormal behavior detection, tracking, and so on. However, it requires the user to configure and maintain a set of alarm trigger rules in advance. SuperTrack also supports various types of cameras such as USB cameras and network cameras. Figure \ref{fig:supertrack} shows a screen shot of the system. \begin{figure}[t] \begin{center} \includegraphics[width=3in]{figures/supertrack.jpg} \caption[Screen shot of SuperTrack.]{\small Screen shot of SuperTrack.
Reprinted from the Stratech Web site (\url{http://www.stratechsystems.com/iv_supertrack.asp}).} \label{fig:supertrack} \end{center} \end{figure} \subsection{Summary} Many open source and commercial video surveillance products are available; however, most of them do not provide intelligent features. Therefore, they cannot understand the events occurring in a scene or learn human activities. In addition, users are required to provide a great number of configuration parameters before they can deploy the system to operate in real-world situations. For instance, in all of the systems, restricted zones or rules for a scene need to be defined beforehand. This review of open source and commercial video surveillance products highlights the importance of and demand for human behavior modeling and anomaly detection. Nowadays, in order to solve real-world problems, research needs to focus on both academic issues and the commercial market. \section{Contributions} The contributions of this dissertation are as follows. \begin{enumerate} \item We propose an intelligent video surveillance system that is practical and fairly close to the ideal surveillance system, which should monitor and automatically raise alarms to security personnel; \item We develop an algorithm for extracting and tracking multiple moving foreground blobs in a scene using an appearance model; \item We propose a shadow detection method that uses a simple maximum likelihood classification approach based on color information; \item We propose a new method for clustering human behaviors that is suitable for bootstrapping an anomaly detection module for intelligent video surveillance systems; \item We propose and develop a new and effective algorithm for semi-supervised learning of common human behaviors and detection of anomalies in video sequences; \item We introduce a new and more accurate algorithm for incrementally profiling human behaviors in video sequences without requiring storage of large databases of training data; \item We provide the source code and datasets online\footnote{See \url{http://www.kanouivirach.com/#downloads}.} for researchers interested in evaluating or extending our work. \end{enumerate} \subsection{List of Publications} We provide a list of publications as part of this dissertation. We also include the list of works currently under review and in preparation for submission.
\subsection*{Published Works} \begin{itemize} \renewcommand\labelitemi{--} \item \textbf{Clustering Human Behaviors with Dynamic Time Warping and Hidden Markov Models for a Video Surveillance System}\nocite{kan10clustering}\\ Kan Ouivirach and Matthew N.\ Dailey\\ \textit{International Conference on Electrical Engineering/Electronics Computer Telecommunications and Information Technology (ECTI-CON)}, pages 884--888, 2010\\ Publisher: IEEE Computer Society\\ Included in Chapter~\ref{ch:clustering}\\ \item \textbf{Automatic Suspicious Behavior Detection from a Small Bootstrap Set}\nocite{kan12detection}\\ Kan Ouivirach, Shashi Gharti, and Matthew N.\ Dailey\\ \textit{International Conference on Computer Vision Theory and Applications (VISAPP)}, volume 1, pages 655--658, 2012\\ Publisher: Springer-Verlag\\ Included in Chapter~\ref{ch:batch}\\ \item \textbf{Incremental Behavior Modeling and Suspicious Activity Detection}\nocite{kan13incremental}\\ Kan Ouivirach, Shashi Gharti, and Matthew N.\ Dailey\\ \textit{Pattern Recognition}, 46(3): 671--680, 2013\\ Publisher: Elsevier\\ Included in Chapter~\ref{ch:incremental} \end{itemize} \subsection*{Manuscripts Currently under Review} \begin{itemize} \renewcommand\labelitemi{--} \item {\bf{Extracting the Object from the Shadows: Maximum Likelihood Object/Shadow Discrimination}}\\ Kan Ouivirach and Matthew N.\ Dailey\\ \textit{International Conference on Electrical Engineering/Electronics Computer Telecommunications and Information Technology (ECTI-CON)}, 2013\\ Publisher: IEEE Computer Society\\ Included in Chapter~\ref{ch:shadow} \end{itemize} \subsection*{Manuscripts Currently in Preparation} \begin{itemize} \renewcommand\labelitemi{--} \item {\bf{Incorporating Geometric and Shadow Region Shape Information for Shadow Detection}}\\ Kan Ouivirach and Matthew N.\ Dailey\\ To be submitted in \textit{International Conference on Computer Vision and Pattern Recognition (CVPR)}, 2013\\ Publisher: IEEE Computer Society \end{itemize} \section{Organization of Dissertation} I organize the rest of this dissertation as follows. In Chapter \ref{ch:blobanalysis}, I describe our blob-based motion analysis methods including blob extraction and appearance-based blob tracking. In Chapter \ref{ch:shadow}, I present a new method for detecting shadows using a simple maximum likelihood classification method based on color information. In Chapter \ref{ch:clustering}, I propose a new method for clustering behaviors in a scene. In Chapter \ref{ch:batch}, I present an automatic suspicious behavior detection method that uses a small bootstrap set. In Chapter \ref{ch:incremental}, I describe our extension of the suspicious behavior detection approach for incremental learning in which security personnel feedback is incorporated. Finally, in Chapter \ref{ch:conclusion}, I conclude and discuss the possible further extensions of my dissertation. \FloatBarrier % --- Maybe use later --- %(see Figure \ref{fig:zm-console}) %\begin{figure}[t] % \begin{center} % \includegraphics[width=4.5in]{figures/zm-console.jpg} % \caption{ZM Console} % \label{fig:zm-console} % \end{center} %\end{figure} %ZM has three main modules. The first is the ZM Capture (ZMC) module %whose job is to capture video frames from a video device as quickly as %possible. The second is the ZM Analysis (ZMA) module, which performs %video analysis. It finds the captured frames and checks whether the %detected motion should generate an alarm or not. The last is the ZM %Frame (ZMF) module. 
It is an optional module for writing the captured %frames to file system. This module could potentially reduce the %workload of ZMA, so ZMA can do more analysis work. If ZMF is not %enabled, ZMA will write captured frames to file system instead. %The core of ZM supports capturing and analyzing images. It also %provides a configurable set of parameters (see %Figure \ref{fig:zm-option}). ZM allows us to define ``zones'' for each %camera and vary the sensitivity and functionality (see %Figure \ref{fig:zm-webcam01-zones} %and \ref{fig:zm-webcam01-zones-define}). Therefore, we can eliminate %unnecessary regions and set the different thresholds for each zone. ZM %also provides a motion detection feature for detecting changes in a %scene and setting an alarm based on the user-defined threshold. %\begin{figure}[t] % \begin{center} % \includegraphics[width=4.5in]{figures/zm-options.jpg} % \caption{ZM Options} % \label{fig:zm-option} % \end{center} %\end{figure} %\begin{figure}[t] % \begin{center} % \subfloat[]{\includegraphics[scale=0.5]{figures/zm-webcam01-zones.jpg}\label{fig:zm-webcam01-zones}} % \hspace{0.1in} % \subfloat[]{\includegraphics[scale=0.5]{figures/zm-webcam01-zones-define.jpg}\label{fig:zm-webcam01-zones-define}} % \end{center} % \caption[Example of zone defined in ZoneMinder]{Example of (a) zone % defined and (b) configuring a zone in ZoneMinder.} % \label{fig:zm-webcam-zones} %\end{figure} %Here I review only the intrusion detection module. %The intrusion detection module automatically detects prohibited %intrusion behaviors while ignoring distractions that cause false %alarms, like small animals, swaying branches, %etc. Figure \ref{fig:intrusion-detection} shows a screen shot of the %result from this module. This module can be used on stationary cameras %as well as on PTZ cameras. There are two modes which are Regional %Entrance and Tripwire. The first mode provides a security alarm when %the system detects a person or vehicle moving within a prohibited %area. The second mode provides a security alarm when a person or %vehicle breaks through a demarcation line. User can configure this %mode to prohibit any crossover or to allow movement in a single %direction. %\begin{figure}[t] % \begin{center} % \includegraphics[width=3in]{figures/intrusion-detection.png} % \caption[Screen shot of DVTel's intrusion detection % module]{Screen shot of DVTel's intrusion detection % module. Captured from the video demonstration on DVTel's Web % site.} % \label{fig:intrusion-detection} % \end{center} %\end{figure} %For example, it has the knowledge that a head contains two %eyes and a mouth; so, if either an eye or mouth is detected, it will %assume that there is a head at that position. Vitamin D Video trained %a learning network with several hundred videos of humans. Also, %examples of vehicles and animals not people are trained. As a result, %Vitamin D Video can tolerate to real world problems.
{ "alphanum_fraction": 0.7663885578, "avg_line_length": 45.2494382022, "ext": "tex", "hexsha": "a55462f4669fd9a937e93e7558e3bd00de9cba07", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "458c5fce241973008bdcc3958bdf962b9197e593", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "zkan/dissertation", "max_forks_repo_path": "first-revision/intro.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "458c5fce241973008bdcc3958bdf962b9197e593", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "zkan/dissertation", "max_issues_repo_path": "first-revision/intro.tex", "max_line_length": 120, "max_stars_count": null, "max_stars_repo_head_hexsha": "458c5fce241973008bdcc3958bdf962b9197e593", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "zkan/dissertation", "max_stars_repo_path": "first-revision/intro.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4536, "size": 20136 }
Essential programming concepts and data structures are described in this chapter. The rest of the thesis relies on these terms. \section{Related Data Structures} \subsection{Binary Search Tree} A binary search tree is a data structure that implements \findop, \insertop, and \removeop operations on a set of keys. Key~$K$ can be of any type that has a total order. Throughout the thesis, trees with distinct keys are discussed, although, of course, this is not a necessary condition. A binary tree (not to be confused with a binary search tree) consists of nodes. A node~$N$ consists of a key (defined as~\func{key}{N}) and two references to other nodes~-- left child (defined as~\func{left}{N}) and right child (defined as~\func{right}{N}). A reference may contain a link to an existing node or a special value $nil$, which means that the reference is empty. A node that contains a link to a child is called a \emph{parent} of this child node. Nodes may contain other attributes as well, but those are not essential for this explanation. A node that has no children (\(\func{left}{N} = nil \land \func{right}{N} = nil\)) is called a \emph{leaf}. From the perspective of graph theory, a binary tree is a simple oriented acyclic graph, where vertices are represented as nodes and edges are represented as links between a node and its left and right children. Every vertex in such a graph has at most one incoming edge, i.e., every node can have at most one parent. Moreover, only one node has no parent~-- this node is called the \emph{root} of a binary tree. The length of the longest path from any leaf to the root is known as the \emph{height} of a binary tree. The \emph{subtree} with a root in $N$ is defined as $N$ together with the set of nodes that can be reached from $N$ by child links. A binary \emph{search} tree (\emph{BST}) is a binary tree that satisfies the following condition: for each $N$, subtrees with the root in \func{left}{N} and with the root in \func{right}{N} contain only nodes with keys that are less than or equal to \func{key}{N} and greater than or equal to \func{key}{N}, respectively. Using this property, it is possible to implement a fast lookup of a key $K$ in a binary search tree (see \Cref{alg:bst_find}). %\begin{float}{tbp} \begin{algorithm}[t] \caption{Lookup in $BST$}\label{alg:bst_find} \begin{algorithmic}[1] \Procedure{Find}{$root,K$}\Comment{The node with key = $K$ or $nil$} \State $node\gets root$ \While{$node\not= nil$} \If{$K = \func{key}{node}$} \State \textbf{return} $node$ \ElsIf{$K < \func{key}{node}$} \State $node \gets \func{left}{node}$ \Else \Comment{$K > \func{key}{node}$} \State $node \gets \func{right}{node}$ \EndIf \EndWhile \State \textbf{return} $nil$\Comment{The node was not found} \EndProcedure \end{algorithmic} \end{algorithm} %\end{float} The complexity of this search algorithm is~$\mathcal{O}(\func{height}{T})$ (assuming that key comparison takes~$\mathcal{O}(1)$). Furthermore, the two remaining operations of a binary search tree, \insertop and \removeop, are implemented in the same fashion, and both of those operations have complexity~$\mathcal{O}(\func{height}{T})$ as well. Their implementation is described in detail in~\cite[p.~327]{sedgewick}. The tree height can vary between $\mathcal{O}(\func{size}{T})$ and $\mathcal{O}(\log(\func{size}{T}))$, the latter in the case of a complete binary tree~\cite{complete_bt}, where~$\func{size}{T}$ is the count of nodes in~$T$.
A BST will maintain optimal operation time only if its structure is close to a complete binary tree and the height is bounded by $c \times \log(\func{size}{T})$, where $c$ is a constant factor greater than or equal to~$1$. To keep the height logarithmic, even in a worst-case scenario, \emph{tree rebalancing} has been invented. The idea is that a tree keeps track of its structure and, if it is not optimal, rebalancing is applied to restore an optimal structure. The rebalancing can be achieved with the \emph{tree rotation}\cite[p.~435]{sedgewick}~-- an operation that swaps a node with its parent in a way that preserves the $BST$ property. \subsection{AVL Tree} The AVL tree was invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis\cite{avl_tree}. It is a classic example of a self-balancing $BST$. In fact, the height of the AVL tree is never greater than $1.4405\times \log(\func{size}{T}) - 0.3277$ \cite[p.~460]{knuth3}. Self-balancing is achieved with the following approach: every node holds the difference between the heights of its left and right subtrees; this difference is called the \emph{balance factor}. The \emph{AVL property} requires the balance factor of every node to be $-1$, $0$, or~$1$. After every operation that modifies the tree structure~-- \insertop and \removeop~-- the balance factors are updated. If at any step the balance factor happens to be $-2$ or $2$, a rotation or a double rotation is applied. The rotation adjusts the heights of the left and right subtrees and, consequently, restores the AVL property. The exact AVL tree implementation is described in \cite[p.~458]{knuth3}. \subsection{Splay Tree} \label{sssec:splay} Another approach to tree balancing is presented in the work of Sleator and Tarjan\cite{splay_tree}~-- ``The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed.'' The term \emph{splaying} stands for the process of using rotations (similar to the ones in the AVL tree) to bring the last accessed node to the root. Sleator and Tarjan proved that by using this approach, all three basic operations (\findop, \insertop and \removeop) have a logarithmic amortized time bound. Another benefit of splaying is that the most frequently accessed items tend to gather near the root, therefore improving access speed, especially on skewed input sequences~-- sequences in which only a small number of items are accessed often while other items occur rarely. This property is exploited in the Splay eviction policy (\Cref{sssec:spolicy}). Even though splay trees show several interesting theoretical properties, in practice they are outperformed by more conventional BSTs, like the AVL or Red-Black tree\cite{splay_overview} (the performance evaluation of \numdbname (\Cref{sec:secanalysis}) reaffirms this statement). This is due to the fact that in the splay tree the structure of the tree is altered on every operation, including the find operation, while AVL, for instance, modifies the tree only during insertions and removals. The typical use scenario for those data structures is one where the vast majority of operations are searches, while updates are rare. AVL and Red-Black trees happen to be faster because they execute fewer instructions per find operation. Moreover, they do not make any writes to memory during the lookup, and, as a consequence, there is a lower load on the memory bus and the system cache.
Further research on splay trees has focused mainly on how to reduce the number of rotations during splaying. An extensive overview of those optimizations is provided in \cite{splay_overview}. One of the described techniques, partial splaying, is a modification of the conventional splay tree in which every node contains a counter that denotes the total number of accesses to this node. As usual, splaying is performed on every access, but the node is splayed only until its access count is less than the access count of its parent. W. Klostermeyer showed that this modification does not yield any noticeable advantage over a standard splay tree \cite{partial_splaying}. However, partial splaying and other derived modifications can have some interesting properties, specifically when applied to a numerical database. This will be discussed in \Cref{ch:alt}. \subsection{Hash Table} A hash table is another popular data structure that implements the dictionary abstract data type. It uses an entirely different approach to item storage and lookup. A hash table allocates a contiguous array $A$, whose size is bounded by the expected number of items to be stored, often multiplied by the \mbox{\emph{load~factor}~$\alpha$}. The items are stored in $A$. Firstly, let’s look at the simplified case: the key $K$ that is used in a hash table is of an integer type. Given $A$, $K$, and the value $V$ associated with $K$, it is possible to use the remainder of the division of $K$ by the size of $A$ as an index into $A$. Then, $V$ will be stored in $A$ at this index. This approach would give the best performance possible, as $V$ can be retrieved immediately and \emph{the~lookup time does not depend on~the~total count of~items} in the hash table. However, since the $modulo$ operation has been used, there can be several keys that point at the same index in $A$. This circumstance is called a \emph{collision}. To deal with a collision, it is necessary to store $K$ itself together with $V$, so that in case of a collision it is possible to tell whether the stored $V$ is actually associated with $K$ or with another key $K'$ that collides with $K$. Secondly, one must pick a strategy for dealing with the case when two different keys $K_1$ and $K_2$ that point at the same index are inserted. There are two main approaches: \begin{description} \item[Separate chaining (open hashing)]-- each element in $A$ is a linked list (or another data structure) that stores all colliding pairs $\langle K, V\rangle$. \pagebreak \item[Linear probing (closed hashing)]-- if during insertion of $K$ in $A$ at the index $i$ a collision occurs ($i$ is already occupied), a special function $F$ is used to determine a second index at which $K$ can be inserted. If that is also occupied, the third and all subsequent positions generated by $F$ are tried. \end{description} The approach described above can be generalized to keys of any type $T$. It is achieved with the help of a \emph{hash function}. This function takes an argument of type $T$ and maps it to an integer, called a \emph{hash}. This function must satisfy two properties: \begin{block-description} \blockitem[Determinism]-- it should \emph{always} map the same input to the same hash. \blockitem[Uniformity]-- if used with a uniformly generated random sequence of objects on input, the hash function should produce a uniformly distributed sequence of hashes.
\end{block-description} In \cite[p.~464]{sedgewick} Sedgewick and Wayne provide a detailed explanation of collision avoidance strategies as well as general information about hash tables. More information about hash function properties and hash function construction is presented in \cite{knott}. \section{Thread Synchronization} \subsection{Coarse-grained Locking} \label{sec:cgl} A trivial way to parallelize a sequential data structure is to eliminate concurrent access altogether. This can be achieved with a single mutual exclusion lock~-- a \emph{mutex}. While a thread holds a mutex, no other thread can lock the same mutex. The sequential data structure is wrapped in a helper type that locks the mutex at the beginning of every operation and releases it at the end, so that only one thread can access the data structure at a time, no matter how many threads are involved. This approach is called \emph{coarse-grained locking}, in contrast with \emph{fine-grained locking}, where many locks are used and each lock protects only a part of the data structure, so that other threads can freely access other parts. The advantages of this approach are a very simple implementation and the absence of any special requirements on the underlying data structure. However, coarse-grained locking is only suitable when a data structure has a supporting role in the program and is used occasionally. If the data structure is the key element of the application, then a single lock becomes the bottleneck in the software, drastically decreasing program scalability. In this case, one should use more sophisticated parallelization approaches. \subsection{Binning} \label{sec:pre_bin} The natural evolution of coarse-grained locking is \emph{binning}. The main drawback of the previous approach is that a single lock becomes the main point of contention between threads. One way to cope with this is to increase the number of locks. In contrast with fine-grained locking, binning does not involve any modifications of the underlying container. First, the number of bins~-- independent data structure instances~-- is chosen. Then a mapping between the item domain and a bin number is introduced. The mapping should yield a uniform distribution of mapped values. Every item is stored only in its assigned bin. Every bin has its own mutex; therefore, access to each bin is serialized. But since items are mapped uniformly, this is expected to produce much less contention than in the case of a single lock. \subsection{Fine-grained Locking} \label{sec:fgl} Fine-grained locking usually offers better scalability than the previously discussed approaches. Instead of a single lock, many mutexes are used simultaneously. Every mutex protects its own part of the data. The contention between threads is lower as it is unlikely that several threads will access the same portion of data at the same time. However, this is true only if every portion has the same probability of being accessed (like in a concurrent hash table). In some data structures, typically binary trees, there are some nodes that are accessed (and are locked before access) more often than others, e.g., the root of a binary tree. \emph{Substantial modifications} have to be made to the data structure to integrate fine-grained locking. Sometimes the overhead added by this approach is so large that it negates any potential speed-up. Fine-grained locking is not a silver bullet, but usually it offers a reasonable trade-off between implementation complexity and application scalability.
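To make the difference between the first two approaches concrete, the following short sketch (given here in Python for brevity; class and method names are illustrative and do not correspond to the implementation discussed later in the thesis) wraps a sequential dictionary with a single mutex and then partitions it into independently locked bins.
\begin{verbatim}
import threading

class CoarseGrainedDict:
    # Coarse-grained locking: one mutex serializes every operation.
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def insert(self, key, value):
        with self._lock:
            self._data[key] = value

    def find(self, key):
        with self._lock:
            return self._data.get(key)

class BinnedDict:
    # Binning: items are mapped onto independently locked bins.
    def __init__(self, num_bins=16):
        self._bins = [CoarseGrainedDict() for _ in range(num_bins)]

    def _bin(self, key):
        # hash() provides a (roughly) uniform mapping onto bin indices.
        return self._bins[hash(key) % len(self._bins)]

    def insert(self, key, value):
        self._bin(key).insert(key, value)

    def find(self, key):
        return self._bin(key).find(key)
\end{verbatim}
With binning, two threads contend only when their keys happen to map to the same bin, which is why the expected contention is much lower than with a single lock.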
{ "alphanum_fraction": 0.7739428082, "avg_line_length": 118.8389830508, "ext": "tex", "hexsha": "f734d730b5cb1acf75079c310528f29e66d45819", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7937368e8d34eb68b1e90a6097737d48ca72c174", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "metopa/bachelors_thesis", "max_forks_repo_path": "Text/tex/1_Preliminaries.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7937368e8d34eb68b1e90a6097737d48ca72c174", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "metopa/bachelors_thesis", "max_issues_repo_path": "Text/tex/1_Preliminaries.tex", "max_line_length": 663, "max_stars_count": null, "max_stars_repo_head_hexsha": "7937368e8d34eb68b1e90a6097737d48ca72c174", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "metopa/bachelors_thesis", "max_stars_repo_path": "Text/tex/1_Preliminaries.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3317, "size": 14023 }
\section*{Path Planning}
{ "alphanum_fraction": 0.7037037037, "avg_line_length": 6.75, "ext": "tex", "hexsha": "3d49344af0ede0f10d1fdd5fec720e3efeaca205", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-11-16T16:06:15.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-16T16:06:15.000Z", "max_forks_repo_head_hexsha": "d86848a037a07e97122c92e3c80c980c58c41d52", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kottz/D7039E", "max_forks_repo_path": "report/finished/chapters/path_planning.tex", "max_issues_count": 72, "max_issues_repo_head_hexsha": "d86848a037a07e97122c92e3c80c980c58c41d52", "max_issues_repo_issues_event_max_datetime": "2021-01-01T08:06:16.000Z", "max_issues_repo_issues_event_min_datetime": "2020-09-15T13:32:02.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kottz/D7039E", "max_issues_repo_path": "report/finished/chapters/path_planning.tex", "max_line_length": 24, "max_stars_count": null, "max_stars_repo_head_hexsha": "d86848a037a07e97122c92e3c80c980c58c41d52", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kottz/D7039E", "max_stars_repo_path": "report/finished/chapters/path_planning.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 27 }
\section{Case Studies} In this section, we use four case studies deployed on HydraMini to show the capabilities of our research platform. \subsection{Autonomous Driving using End-to-End Model} This case shows how to use our platform for AD with an end-to-end model. Several recent works replace the classic chain of perception, planning, and control with a neural network that directly maps sensor input to control output\cite{bojarski2016end, chi2017deep, eraqi2017end}, a methodology known as end-to-end driving. New approaches based on reinforcement learning are being actively developed\cite{kendall2019learning}. An end-to-end AI model is easy to use since it outputs control commands directly. The model contains mainly CNN and Dense layers\cite{keras} and maps camera input to control output. The structure of the model is shown in Fig.~\ref{ms}. \begin{figure}[b] \centerline{\includegraphics[width=0.5\textwidth]{ns.png}} \caption{Model Structure.} \label{ms} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=5in]{traditional_autonomous_driving} \caption{Autonomous Driving using Traditional Methods.} \label{fig:traditional_autonomous_driving} \end{figure*} CNN layers extract features from the images taken by the car's front camera. Several fully connected layers follow the CNN layers and extract the command information needed for autonomous driving. The activation function we use is ReLU. The last layer is a Softmax layer\cite{keras} for classification or a Dense layer for regression. Although the current model is not perfect, it is convenient to modify and optimize. Fig.~\ref{fig:end_to_end_autonomous_driving} shows the whole process. First, you control the car using your keyboard or another input device and save data from the car's sensors as training data; alternatively, you can obtain training data from the Internet or other sources. Second, after pre-processing, you feed the captured images to the AI model as input and use the labels, such as keyboard signals, as the output. Train the model using TensorFlow until you get a satisfactory model. Third, you use DNNDK\cite{dnndk} provided by Xilinx to perform compression and compilation and then copy the generated files to the car. Finally, the car is able to drive by itself, controlled by the AI model. \begin{figure}[b] \centering \includegraphics[width=3.5in]{lane_detect} \caption{Lane Detection.} \label{fig:lane_detection} \end{figure} \subsection{Autonomous Driving using Traditional Methods} In the first case we used an end-to-end model; in this one we show another way of doing AD. This time we use computer vision methods to detect road lanes and the classic YOLOv3 model to find objects. To detect road lanes, we first use a $5 \times 5$ Gaussian filter to remove image noise. Then Canny edge detection\cite{canny1986computational} is applied to find edges, and the Hough transform\cite{burns1986extracting} is employed to detect lines. Based on the orientation and relative position of the detected lines, the side lines and the stop line of the road are identified. To improve the robustness and accuracy of the algorithm, more techniques will be added, such as K-Means clustering\cite{selim1984k} for Hough lines, Kalman/Gabor filtering for sampled data, and an alternative IPM (inverse perspective mapping) lane detection method\cite{wang2014approach}.
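As a rough illustration of this pipeline, the following OpenCV sketch chains the Gaussian blur, Canny edge detection, and probabilistic Hough transform steps described above; the thresholds and Hough parameters are example values only, not the tuned values used on the car.
\begin{verbatim}
import cv2
import numpy as np

def detect_lane_lines(bgr_frame):
    # Return candidate lane line segments found in one camera frame.
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # 5x5 Gaussian filter
    edges = cv2.Canny(blurred, 50, 150)           # Canny edge detection
    # Probabilistic Hough transform; parameters are illustrative
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=30, minLineLength=20, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]
\end{verbatim}
The segments returned here would then be filtered by orientation and relative position to identify the side lines and the stop line.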
The car runs following the road lane; Fig.~\ref{fig:lane_detection} shows the final lanes extracted from all detected lines. The yellow line marks the middle of the picture, while the other lines are all the lines found in the image. Among all the detected lines, we choose one line for the left side and one for the right side, and they are painted purple. The red and blue ones are the rest. With this information, the car adjusts its direction and speed to the road. For example, in the picture the left lane chosen as the road line has a slope of $-0.869$, which is smaller than the slope datum, so the controller computes a steering value of $-0.42$, which means turn left. \begin{figure}[b] \centerline{\includegraphics[width=0.5\textwidth]{simulator.png}} \caption{Simulator Interface.} \label{si} \end{figure} YOLOv3 is used to find objects like people or cars to help the car make decisions such as braking to avoid an obstacle. The inference process is accelerated by DNNDK to meet more stringent real-time requirements; you can also replace the full YOLOv3 model with the tiny YOLOv3 model to achieve faster inference. Fig.~\ref{fig:traditional_autonomous_driving} shows the control process of this case. The captured images are processed by both computer vision threads and YOLOv3 threads; this time they do not produce commands directly. Instead, the information they extract is used to make decisions, and the commands are finally generated by the decision maker. \subsection{Simulator} The simulator is a tool which helps users test their designs more efficiently. The interface of the simulator is shown in Fig.~\ref{si}. The simulator is based on sdsandbox\cite{sdsandbox}, a simulator project for Donkey Car\cite{donkeycar}. As shown in the picture, users use this simulator to collect training data and test their models. When testing, you build a server that receives sensor data from the simulator client and generates control messages according to these data. We have already built one example: the server receives images taken by the simulator and handles them using the previously trained AI model; the model then outputs control commands, which are sent back to the simulator. It is also convenient for users to modify the source code of the simulator to define their own data formats or control methods. However, users should be familiar with C\# and Unity3d if they would like to do so, and we provide a coding tutorial manual to help. \subsection{Lidar in ROS} Due to the high impact of ROS, our platform also supports ROS projects. The version we use is ROS Melodic\cite{rosmelodic}, which is mainly used on Ubuntu 18.04. This time we show how to read LiDAR data and control the car using ROS. With Lidar data, the car is able to handle obstacle avoidance tasks and SLAM tasks, whose basic building block is Lidar data processing. The Lidar we use is the LeiShen LS01D\cite{leishen}; LeiShen already provides a ROS node for acquiring and publishing Lidar data. We read the laser point cloud data by subscribing to the published topic. The data can also be visualized with RViz\cite{rviz}; Fig.~\ref{ld} shows one example. \begin{figure}[b] \centerline{\includegraphics[width=0.5\textwidth]{laser.png}} \caption{Lidar Data.} \label{ld} \end{figure} The control node of the car is ported from the existing controller. It is easy to send control commands by publishing them to the corresponding topic, as illustrated in the sketch below.
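As an illustration of this publish/subscribe pattern, the following minimal rospy sketch subscribes to a laser scan topic and publishes velocity commands; the topic names (\texttt{/scan}, \texttt{/cmd\_vel}), the message type, and the stopping distance are illustrative assumptions rather than the exact interfaces of our nodes.
\begin{verbatim}
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

def scan_callback(scan, cmd_pub):
    # Stop if any return is closer than 0.5 m, otherwise drive forward slowly.
    cmd = Twist()
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    cmd.linear.x = 0.0 if valid and min(valid) < 0.5 else 0.2
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("lidar_stop_demo")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/scan", LaserScan,
                     lambda msg: scan_callback(msg, cmd_pub))
    rospy.spin()
\end{verbatim}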
As we can see, it is easy for users to build ROS-based projects on our platform. Since ROS is widely used, it is important that our platform supports users who want to try it.
{ "alphanum_fraction": 0.7947683317, "avg_line_length": 108.8939393939, "ext": "tex", "hexsha": "8932da3df898ad73d685bd1fa069dfa57b40fc4f", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2020-09-25T15:46:07.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-28T01:23:17.000Z", "max_forks_repo_head_hexsha": "ac3713e3b6d937dfff7c5cf0cdd24c0c9f7bc2a5", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "wutianze/HydraMini", "max_forks_repo_path": "_book/Papers/MetroCAD/data/5_case_study.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "4f1090e58a53e6d24b8c13bdf40e668f01634b64", "max_issues_repo_issues_event_max_datetime": "2020-09-18T05:25:01.000Z", "max_issues_repo_issues_event_min_datetime": "2020-07-05T05:17:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wutianze/pynq_car", "max_issues_repo_path": "Papers/MetroCAD/data/5_case_study.tex", "max_line_length": 701, "max_stars_count": 12, "max_stars_repo_head_hexsha": "4f1090e58a53e6d24b8c13bdf40e668f01634b64", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wutianze/pynq_car", "max_stars_repo_path": "Papers/MetroCAD/data/5_case_study.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-12T11:23:47.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-16T05:35:50.000Z", "num_tokens": 1670, "size": 7187 }
\subsubsection{\stid{6.03} Sandia ATDM Software Ecosystem and Delivery - Technology Demonstrator} \paragraph{Overview} This project is part of the NNSA/ASC program and is primarily focused on the integration of programming model technologies into existing applications. This project spans both the Code Development and Applications (CDA) and Architecture and Software Development (ASD) components of Sandia's ATDM program element, and this description only covers the ASD portion of the project. The purpose of this project is to broaden the impact of programming model technologies developed in ATDM by integrating these technologies into an existing ASC code suite, evaluating the ability of these technologies to be incrementally added into a legacy code base and to deliver performance improvements on current- and next-generation computing platforms. In particular, this subproject focuses on using and evaluating Kokkos and asynchronous many task (AMT) programming models in the NGP Contact code. Kokkos is an abstraction layer that implements a parallel programming model in C++ for writing performance portable applications targeting all major HPC platforms. NGP Contact is a performance-critical proximity search component relevant to many NNSA applications, including ASC production applications that invoke mechanical or thermal contact, coupled physics transfers, and particle algorithms. This project is important to the overall ECP effort because it evaluates how well technologies developed for next-generation applications can be leveraged by existing large application code bases, such as the ASC Sierra toolkit, in an effort to maximize the impact of ECP technologies. \paragraph{Key Challenges} The main challenge associated with this project is the incremental integration of a new programming model and programming system into an existing very large production application. This challenge is similar in many ways to the incremental integration of OpenMP into an existing large MPI application. As Kokkos is a higher-level abstraction layer that can be implemented using OpenMP, there are additional challenges around the ability of Kokkos' abstractions and interfaces to integrate into an existing application and around minimizing the performance penalty of the abstraction layer for different underlying hardware implementations. \paragraph{Solution Strategy} The strategy for this project is to produce demonstration applications that drive the development of an AMT scheduling toolset and to enable leveraging of Sierra-developed technologies to support ATDM application milestones. \paragraph{Recent Progress} The Technology Demonstrator team has continued progress toward a performant, optimized GPU contact implementation. Two final parts of the contact algorithm are being ported to run on the GPU: interaction determination and interaction enforcement. The Sierra Solid Mechanics and NimbleSM teams have begun, and are continuing, joint work on porting the existing NimbleSM CPU-only routines and expect to test GPU execution. Initial performance data indicated that a full GPU contact implementation could deliver significant performance improvements over CPU-only execution; further performance data will be gathered once the routines are ready. \paragraph{Next Steps} The team's next focus will be to add a frictional contact capability to NimbleSM using node-face contact.
This effort will include demonstrating correctness through a few canonical verification problems, optimization as needed, and evaluations of relative performance on hardware representative of LLNL's ATS-2 platform.
\section{Impact of COVID-19 on the Project}
In accordance with COVID-19 guidelines, our team had limited access to lab spaces throughout the project. While this did not significantly impact the project, given its focus on software, it did make access to hardware for testing purposes more difficult. Overall, little changed because of the pandemic, since the project timeline and scope were designed with the existing and potential restrictions in mind.
\input{header}
\begin{document}
\begin{center}
{\LARGE GrassMarlin}
\vspace{0.1in}\\
\end{center}

\section{Overview}
This lab provides an introduction to GrassMarlin, a software tool for discovering and cataloging Supervisory Control \& Data Acquisition (SCADA) and Industrial Control System (ICS) hosts on IP-based networks.

\subsection{Background}
The student is expected to have some familiarity with the Linux command line and some experience with the Wireshark tool (e.g., from the wireshark-intro lab).

\section{Lab Environment}
This lab runs in the Labtainer framework, available at http://my.nps.edu/web/c3o/labtainers. That site includes links to a pre-built virtual machine that has Labtainers installed; however, Labtainers can be run on any Linux host that supports Docker containers. From your labtainer-student directory, start the lab using:
\begin{verbatim}
labtainer grassmarlin
\end{verbatim}
\noindent A link to this lab manual will be displayed, along with a link to the GrassMarlin User Guide.

\section{Network Configuration}
The lab consists of a single computer that contains the GrassMarlin tool and a PCAP file.

\section{Lab Tasks}
Start the GrassMarlin application using this command on the terminal titled ``analyst@grassmarlin'':
\begin{verbatim}
grassmarlin
\end{verbatim}
\noindent Refer to the GrassMarlin User Guide for information about the tool. From the ``File'' menu, select ``Import Files'' and use the ``Add Files'' button and the file browser to add the PCAP file found at ``analysist/ics-trace.pcap''. Then use the ``Import Selected'' button on the ``Import'' dialog to import that file. It may take a minute to complete the import. When the import completes, close the ``Import'' dialog.

You should then see the ``Logical Graph'' of system components that the GrassMarlin tool discovered from the PCAP file. Right-click on components and select ``View Details'' to learn what the tool has discovered about each component. On the left-hand panel, expand the node lists to locate summaries of packet traffic sent and received by each node. Right-click on a node having substantial traffic, and select ``View Frames'' to view the captured packets. If you find interesting traffic, you can right-click on a frame and select ``Open in Wireshark''.

\section{Submission}
After finishing the lab, go to the terminal on your Linux system that was used to start the lab and type:
\begin{verbatim}
stoplab
\end{verbatim}
When you stop the lab, the system will display a path to the zipped lab results on your Linux system. Provide that file to your instructor, e.g., via the Sakai site.
\copyrightnotice
\end{document}
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} \PassOptionsToPackage{dvipsnames,svgnames*,x11names*}{xcolor} % \documentclass[ ]{article} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdfauthor={Fan Wang}, colorlinks=true, linkcolor=Maroon, filecolor=Maroon, citecolor=Blue, urlcolor=blue, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} 
\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}}
\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}}
\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}}
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
% Set default figure placement to htbp
\makeatletter
\def\fps@figure{htbp}
\makeatother
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{5}
\title{Generate R Package Example\\
roxygen2 and pkgdown}
\author{Fan Wang}
\date{2020-04-01}

\begin{document}
\maketitle

Go back to \href{http://fanwangecon.github.io/}{fan}'s
\href{https://fanwangecon.github.io/REconTools/}{REconTools} Package,
\href{https://fanwangecon.github.io/R4Econ/}{R4Econ} Repository, or
\href{https://fanwangecon.github.io/Stat4Econ/}{Intro Stats with R} Repository.

\hypertarget{objective}{%
\section{Objective}\label{objective}}

Document and generate a shareable R package.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Use \href{https://github.com/hadley/roxygen2}{roxygen2} from \href{https://github.com/hadley}{Hadley} to document the package.
\item
  Use \href{https://github.com/r-lib/pkgdown}{pkgdown} to package the files and publish them to GitHub pages.
\end{enumerate}

\hypertarget{file-structure-and-naming-convention}{%
\section{File Structure and Naming Convention}\label{file-structure-and-naming-convention}}

\textbf{Folders}:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  R function files go in the \emph{/R} folder
\item
  RData files go in the \emph{/data} folder
\end{enumerate}

\textbf{Naming Conventions}:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Function files and functions should all use \emph{snake\_case\_names}
\item
  Function name prefixes:

  \begin{itemize}
  \tightlist
  \item
    \emph{fs\_}: for non-project specific script files
  \item
    \emph{ffs\_}: for project specific functions script files
  \item
    \emph{fv\_}: for non-project specific vignettes files, generally RMD
  \item
    \emph{ffv\_}: for project specific functions vignettes files, generally RMD
  \item
    \emph{ff\_}: for non-project specific functions files
  \item
    \emph{ffd\_}: for project specific data description files
  \item
    \emph{ffp\_}: for project specific functions files
  \item
    \emph{ffy\_}: for project specific utility files
  \item
    Each function file can contain multiple functions; each file has a root function whose name matches the file name.
\end{itemize}
\end{enumerate}

Additionally, follow these general structures for \emph{functions} in the \emph{/R} folder:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  3 letter/digit project name
\item
  3 letter/digit current file name, in R4E, a three-letter function group name: \emph{ffp\_opt\_lin.R}
\item
  5 letter/digit function name within the file: \emph{function ffp\_opt\_lin\_solum}
\end{enumerate}

With this structure, we end up with potentially fairly long names, but hopefully they remain clear without becoming too long.

Additionally, follow these general structures for \emph{vignettes} in the \emph{/vignettes} folder:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Follow the conventions for function names; vignettes should be associated with functions
\item
  There should be a core vignette that has the same name as the R function it is trying to provide line-by-line details or math descriptions of. If the function is called \emph{ffp\_opt\_lin\_solum.r}, the vignette should be \emph{ffv\_opt\_lin\_solum.Rmd}.
\item
  Append 5 letters/digits onto the existing function name to describe what this vignette is supposed to achieve: \emph{ffv\_opt\_lin\_solum\_vign1.Rmd}.

  \begin{itemize}
  \tightlist
  \item
    note the file starts with \emph{ffv}
  \end{itemize}
\end{enumerate}

\hypertarget{create-r-project-directory}{%
\section{Create R Project Directory}\label{create-r-project-directory}}

\hypertarget{folder-does-not-exist-yet}{%
\subsection{Folder does not exist yet}\label{folder-does-not-exist-yet}}

If the project/folder does not yet exist:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  \emph{devtools::create()} the folder of interest
\item
  Move your files over to the \emph{/R} and \emph{/data} folders.

  \begin{itemize}
  \tightlist
  \item
    Write files following the conventions above and with R descriptions
  \end{itemize}
\item
  \emph{pkgdown::build\_site()}
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{create}\NormalTok{(}\StringTok{"C:/Users/fan/R4Econ"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{folder-already-exists}{%
\subsection{Folder already exists}\label{folder-already-exists}}

Suppose there is already a folder with a mix of R and non-R files that was previously not an R package and now needs to be converted into one. The idea is to use a folder somewhere else to generate a generic template folder. This template folder has the files and structure we need for our actual folder. Each time, we just copy that template folder's contents into the folder that we want to turn into an R package, and do a global search-and-replace of the template folder's name with the actual project name.

To avoid confusion, generate this template folder outside of any existing R package. If there is already an \emph{/R} folder in your existing project, delete or rename it, and move the files back into the \emph{/R} folder after completion. Make sure there are no duplicate folder or file names in the old and the new project. Search for the word \emph{rprjtemplate} inside your old project folder (now containing the new files) and replace it with your project name; it should appear in three different spots.

\hypertarget{r-project-template}{%
\subsubsection{R project template}\label{r-project-template}}

The \href{https://github.com/FanWangEcon/Tex4Econ/tree/master/nontex/rprjtemplate}{rprjtemplate} folder serves this templating role.
\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Create project}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{create}\NormalTok{(}\StringTok{"C:/Users/fan/Tex4Econ/nontex/rprjtemplate"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

Running the \emph{devtools::create()} command will create the core needed folder structure with:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  NAMESPACE
\item
  DESCRIPTION
\item
  \emph{.gitignore}
\item
  \emph{.Rbuildignore}
\item
  \emph{.Rproj}
\item
  an empty R folder
\end{enumerate}

Now customize this folder for future use with:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  a custom \emph{.gitignore}
\item
  an MIT LICENSE
\item
  etc.
\end{enumerate}

For adding vignettes later, add these to the DESCRIPTION file:

\begin{itemize}
\tightlist
\item
  Suggests: knitr, rmarkdown
\item
  VignetteBuilder: knitr
\end{itemize}

\hypertarget{develop-and-package-r-project}{%
\section{Develop and Package R Project}\label{develop-and-package-r-project}}

Developing a package means writing various vignettes and functions. My process is to write various vignettes first, to test out core functionalities, and then to convert parts of those vignettes to functions. As functions are developed, some vignettes can be invoked differently, taking advantage of newly written functions. Make sure to keep the original vignette that does not depend on any other functions, for ease of going back to the initial point and figuring out what is happening. Names can be written to indicate vignette vintage.

\hypertarget{development-process}{%
\subsection{Development Process}\label{development-process}}

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Write the original vignette with dependencies, but minimize dependencies; for example, do not import tidyverse, since the package will not allow all tidyverse components to be added (too much space is needed).

  \begin{itemize}
  \tightlist
  \item
    this is not really a vignette; it is a function-testing sandbox that does not call any package functions but provides line-by-line descriptions of the core functions with printed outputs.
  \end{itemize}
\item
  Test the vignette inside RStudio/VSCode, etc. (These vignettes do not depend on the functions to be written below; those do not exist yet.)
\item
  When the raw vignettes work, start converting vignette components to functions, including minimal dependencies:

  \begin{itemize}
  \tightlist
  \item
    in the function: \emph{@import}
  \item
    add to DESCRIPTION: \textbf{usethis::use\_package(``tidyr'')} if tidyr is needed as an \emph{IMPORT}
  \item
    \emph{DEPENDS} vs \emph{IMPORTS}
  \item
    \emph{important}: carefully check what the function depends on; for example, \emph{R4Econ} has its own dependencies, so when installing as a package, make those dependencies explicit in the new function if it calls \emph{R4Econ}. Otherwise, \emph{test()} might throw an error. This is resolved if \emph{R4Econ} adds several things under Depends.
  \end{itemize}
\item
  Run the documentation tool:

  \begin{itemize}
  \tightlist
  \item
    \textbf{setwd(`C:/Users/fan/PrjOptiAlloc')}
  \item
    \textbf{devtools::document()}
  \item
    this updates \emph{NAMESPACE}
  \item
    and generates new \emph{.Rd} files
  \end{itemize}
\item
  Test @examples from the roxygen documentation

  \begin{itemize}
  \tightlist
  \item
    \textbf{devtools::run\_examples()}
  \item
    if you also want to test vignettes, also run: \textbf{devtools::build\_vignettes()}
  \item
    this tests whether the \emph{@examples} are working.
  \end{itemize}
\item
  Repeat steps 3 and 4 to convert the various components of the vignette.
\item
  Build and load the package using:

  \begin{itemize}
  \tightlist
  \item
    \textbf{devtools::check()}
  \item
    this will run \emph{run\_examples} as well, and at the very end it will test everything for the package.\\
  \end{itemize}
\item
  Now install the package

  \begin{itemize}
  \tightlist
  \item
    \textbf{devtools::install()}
  \item
    \emph{library(PrjOptiAlloc)} should work now
  \item
    \emph{ls(``package:PrjOptiAlloc'')} to list all objects in the package\\
  \item
    \textbf{devtools::reload()}
  \end{itemize}
\item
  Publish the webpage and review it locally, using \emph{pkgdown}:

  \begin{itemize}
  \tightlist
  \item
    \textbf{pkgdown::build\_site()}
  \end{itemize}
\end{enumerate}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# Step 3}
\KeywordTok{rm}\NormalTok{(}\DataTypeTok{list =} \KeywordTok{ls}\NormalTok{(}\DataTypeTok{all.names =} \OtherTok{TRUE}\NormalTok{))}
\KeywordTok{setwd}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\CommentTok{# setwd('C:/Users/fan/REconTools')}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{document}\NormalTok{()}
\CommentTok{# Step 4}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{run_examples}\NormalTok{()}
\CommentTok{# devtools::build_vignettes()}
\CommentTok{# Step 6}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\CommentTok{# Step 7}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{install}\NormalTok{()}
\CommentTok{#library(PrjOptiAlloc)}
\CommentTok{#ls("package:PrjOptiAlloc")}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{reload}\NormalTok{()}
\CommentTok{# Step 8}
\NormalTok{pkgdown}\OperatorTok{::}\KeywordTok{build_site}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

During development, after new R functions have been written, test the latest functions in a vignette:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{load_all}\NormalTok{()}
\KeywordTok{library}\NormalTok{(PrjOptiAlloc)}
\end{Highlighting}
\end{Shaded}

For bookdown:

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{setwd}\NormalTok{(}\StringTok{'C:/Users/fan/R4Econ'}\NormalTok{)}
\NormalTok{bookdown}\OperatorTok{::}\KeywordTok{render_book}\NormalTok{(}\StringTok{'index.Rmd'}\NormalTok{, }\StringTok{'bookdown::gitbook'}\NormalTok{)}
\NormalTok{bookdown}\OperatorTok{::}\KeywordTok{render_book}\NormalTok{(}\StringTok{'index.Rmd'}\NormalTok{, }\StringTok{'bookdown::pdf_book'}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{attach-vs-import}{%
\subsection{Attach vs Import}\label{attach-vs-import}}

Packages that are loaded in under \emph{usethis::use\_package()} are imports, not attached. According to \href{http://r-pkgs.had.co.nz/namespace.html}{Wickham}:

\begin{itemize}
\tightlist
\item
  \emph{apkg::bfnc()}: when a package is loaded, its components can be accessed with \emph{::}. That is, to use \emph{document()} from \emph{devtools}: if I first attach the package, I can call document() directly; with loading alone, I have to call \emph{devtools::document()}.
\item
  \emph{attach(), library()}: attaching requires loading first. Once a package is attached, its function names, for example \emph{filter}, can be called directly because they are on the search path.
\end{itemize}

See \href{http://blog.obeautifulcode.com/R/How-R-Searches-And-Finds-Stuff/}{How R Searches and Finds Stuff}:

\begin{quote}
The better solution would have been to stuff reshape's cast() function into imports:ggplot2 using the Imports feature. In that case, we would have travelled from 2 to 3 and stopped. Now you can see why the choice between Imports and Depends is not arbitrary.
With so many packages on CRAN and so many of us working in related disciplines its no surprise that same-named functions appear in multiple packages. Depends is less safe. Depends makes a package vulnerable to whatever other packages are loaded by the user.
\end{quote}

This is discussed in this \href{https://stackoverflow.com/questions/8637993/better-explanation-of-when-to-use-imports-depends}{question} as well.

\hypertarget{dealing-with-datasets}{%
\subsection{Dealing with Datasets}\label{dealing-with-datasets}}

Datasets to be used with the project should be in the \emph{/data} folder. The name of the data file should appear in several spots and be consistent; suppose the data is called \emph{abc}:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Rdata file name: \emph{/data/abc.Rdata}
\item
  The Rdata file contains a data frame that has to be called abc as well; open the Rdata file inside RStudio or R and check, under the Environment pane, that the object is indeed called abc.
\item
  In \emph{/R/ffp\_abc.R}, the last line should be ``abc'', but the file name does not need to match.
\item
  If any function uses a dataset, this can be done in two ways:

  \begin{itemize}
  \tightlist
  \item
    a dataset is a parameter of a function; in @examples, data frames declared in dependencies can be loaded.
  \item
    if a data frame is used directly inside a function, that data frame should be declared in \emph{/data} following points 1 to 3 above; make sure it has the same name, and write \emph{data(data\_name)} at the beginning of the function to explicitly load the data in.
  \end{itemize}
\end{enumerate}

\hypertarget{logging-and-printing}{%
\subsection{Logging and Printing}\label{logging-and-printing}}

As usual, controlling logging and printing is important. See:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  \href{https://stackoverflow.com/questions/36699272/why-is-message-a-better-choice-than-print-in-r-for-writing-a-package/36700294\#36700294}{Why is message() a better choice than print() in R for writing a package?}
\item
  \href{https://adv-r.hadley.nz/conditions.html}{Conditions}
\end{enumerate}

For a simple package, use \emph{print} for regression results, table outputs, etc.; print statements can be suppressed by \emph{invisible(capture.output(abc \textless- ffy\_opt\_dtgch\_cbem4()))}.

\hypertarget{check-and-run-examples}{%
\subsection{Check and Run Examples}\label{check-and-run-examples}}

Run \href{http://r-pkgs.had.co.nz/check.html}{check} to make sure the file structure and related components are all correct. It will also test the @examples inside the R functions' roxygen comments.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# will check the file structure, but also @example in functions.}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{, }\DataTypeTok{manual=}\OtherTok{FALSE}\NormalTok{)}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{run_examples}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{build}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{add-depenencies}{%
\subsection{Add Dependencies}\label{add-depenencies}}

Assuming that we have written our functions in the roxygen2 format, we can now generate the \emph{.Rd} files automatically with the \emph{document()} function.
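To make this concrete, the sketch below shows what such a roxygen2 header might look like. The function \emph{ffp\_demo\_ssqr()} and its contents are a hypothetical illustration following the naming conventions above, not a function from an actual project.

\begin{verbatim}
#' Sum of squares of a numeric vector (hypothetical example)
#'
#' @param ar_x numeric vector whose elements are squared and summed.
#' @return a scalar equal to sum(ar_x^2).
#' @export
#' @examples
#' ffp_demo_ssqr(c(1, 2, 3))
ffp_demo_ssqr <- function(ar_x) {
  # square each element, then sum; no external dependencies are needed
  fl_ssqr <- sum(ar_x^2)
  return(fl_ssqr)
}
\end{verbatim}

Because of the \emph{@export} tag, \emph{document()} will also add an export entry for this function to \emph{NAMESPACE}, in addition to generating its \emph{.Rd} file under \emph{/man}.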
Running \emph{document()} should create a \emph{/man} folder in which the various \emph{.Rd} files are stored. Note that the \emph{.Rd} files are for specific R functions, not for files that contain multiple functions.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{# add dependencies:}
\KeywordTok{setwd}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"tidyr"}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"dplyr"}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"stringr"}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"broom"}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"ggplot2"}\NormalTok{)}
\NormalTok{usethis}\OperatorTok{::}\KeywordTok{use_package}\NormalTok{(}\StringTok{"R4Econ"}\NormalTok{)}
\CommentTok{# document() will generate a new NAMESPACE, so after dependencies are added, document again.}
\CommentTok{# function imports must be consistent with these}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{document}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\hypertarget{vignettes-generation}{%
\subsubsection{Vignettes Generation}\label{vignettes-generation}}

To generate vignettes, follow the instructions \href{https://kbroman.org/pkg_primer/pages/vignettes.html}{here}:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  create a vignette directory
\item
  put the RMD files in there; the RMD front matter should look like the example below
\item
  add to the DESCRIPTION file:

  \begin{itemize}
  \tightlist
  \item
    Suggests: knitr, rmarkdown
  \item
    VignetteBuilder: knitr
  \end{itemize}
\item
  run \emph{devtools::build\_vignettes()}; running \emph{build()} should also build the vignettes automatically
\end{enumerate}

\begin{verbatim}
title: "Put the title of your vignette here"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Put the title of your vignette here}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{build_vignettes}\NormalTok{(}\StringTok{'C:/Users/fan/PrjOptiAlloc'}\NormalTok{)}
\NormalTok{devtools}\OperatorTok{::}\KeywordTok{check}\NormalTok{()}
\end{Highlighting}
\end{Shaded}

\hypertarget{r-project-build-site-with-pkgdown}{%
\subsection{R project build site with pkgdown}\label{r-project-build-site-with-pkgdown}}

Once we have used \emph{pkgdown::build\_site()}, by default all HTML and other package presentation and reference files are saved in the \emph{/docs} folder. Note that \emph{build\_site()} is for building the site, not for generating the \emph{/man} documentation. If the R files change, we need to run \emph{document()} first, before rebuilding the site.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pkgdown}\OperatorTok{::}\KeywordTok{build_site}\NormalTok{(}\StringTok{"C:/Users/fan/PrjOptiAlloc"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\end{document}
\begin{document} \section{This is section 1} \subsection{This is section 1 subsection 1} (n.) A sacred fruit. Also known as red lemon, life \section{This is section 2} Nothing \end{document}
\documentclass[a4paper,english]{lipics-v2018} \usepackage[utf8]{inputenc} \usepackage{microtype} \usepackage{listings} \lstset{basicstyle=\ttfamily\small, backgroundcolor=\color{white}} \usepackage{srcltx} \usepackage[ttscale=.875]{libertine} %\usepackage[colorlinks=true]{hyperref} \usepackage{wrapfig} \newcommand\YAMLcolonstyle{\color{red}\mdseries} \newcommand\YAMLkeystyle{\color{black}\bfseries} \newcommand\YAMLvaluestyle{\color{blue}\mdseries} \hideLIPIcs \makeatletter \newcommand\language@yaml{yaml} \expandafter\expandafter\expandafter\lstdefinelanguage \expandafter{\language@yaml} { keywords={true,false,null,y,n}, keywordstyle=\color{darkgray}\bfseries, basicstyle=\YAMLkeystyle, % assuming a key comes first sensitive=false, comment=[l]{\#}, morecomment=[s]{/*}{*/}, commentstyle=\color{purple}\ttfamily, stringstyle=\YAMLvaluestyle\ttfamily, moredelim=[l][\color{orange}]{\&}, moredelim=[l][\color{magenta}]{*}, moredelim=**[il][\YAMLcolonstyle{:}\YAMLvaluestyle]{:}, % switch to value style at : morestring=[b]', morestring=[b]", literate = {---}{{\ProcessThreeDashes}}3 {>}{{\textcolor{red}\textgreater}}1 {|}{{\textcolor{red}\textbar}}1 {\ -\ }{{\mdseries\ -\ }}3, } % switch to key style at EOL \lst@AddToHook{EveryLine}{\ifx\lst@language\language@yaml\YAMLkeystyle\fi} \makeatother \newcommand\ProcessThreeDashes{\llap{\color{cyan}\mdseries-{-}-}} \let\accentvec\vec \let\spvec\vec \let\vec\accentvec %\addtolength{\textfloatsep}{-3mm} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsfonts} \usepackage{verbatim} \usepackage[textsize=tiny]{todonotes} %\usepackage[notref,notcite]{showkeys} %\usepackage[subtle]{savetrees} %\sp\vspace{-1em}orem{algorithm}{Algorithm}{\bfseries}{\rmfamily} \newcommand{\matodo}[1]{\todo[color=green!30]{#1}} \newcommand{\alectodo}[1]{\todo[color=red!30]{#1}} \newcommand{\eriktodo}[1]{\todo[color=blue!30]{#1}} % Mandatory. First: Use abbreviated first/middle names. Second (only in severe cases): Use first author plus 'et. al.' \authorrunning{M. Aumüller, E. Bernhardsson, A. Faithfull} % Mandatory, please use full first names. LIPIcs license is "CC-BY"; http://creativecommons.org/licenses/by/3.0/ \Copyright{ Martin Aumüller, Erik Bernhardsson, Alexander Faithfull} % Mandatory: Please choose ACM 1998 classifications from http://www.acm.org/about/class/ccs98-html . E.g., cite as "F.1.1 Models of Computation". \subjclass{H.3.3 Information Search and Retrieval} % Mandatory: Please provide 1-5 keywords \keywords{benchmarking, nearest neighbor search, evaluation} \usepackage{pgfplots} \usepgfplotslibrary{groupplots,colorbrewer} \pgfplotsset{ legend style = {font=\ttfamily} } \pgfplotsset{ % initialize Set1-5: cycle list/Set1-5, % combine it with ’mark list*’: cycle multiindex* list={ mark list*\nextlist Set1-5\nextlist }, } \usepackage{url} \title{ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms% \footnote{The research of the first and third authors has received funding from the European Research Council under the European Union's 7th Framework Programme (FP7/2007-2013) / ERC grant agreement no. 614331. 
A conference version of this work was published at SISAP'17 and is available at \url{http://dx.doi.org/10.1007/978-3-319-68474-1_3}.} } \titlerunning{ANN-Benchmarks: A Benchmarking Tool for ANN Algorithms} \author{Martin Aum\"{u}ller}{IT University of Copenhagen, Denmark}{[email protected]}{}{} \author{Erik Bernhardsson}{Better, Inc.}{[email protected]}{}{} \author{Alexander Faithfull}{IT University of Copenhagen, Denmark}{[email protected]}{}{} \begin{document} \maketitle \begin{abstract} This paper describes \textsf{ANN-Benchmarks}, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several different ways of integrating $k$-NN algorithms, and its configuration system automatically tests a range of parameter settings for each algorithm. Algorithms are compared with respect to many different (approximate) quality measures, and adding more is easy and fast; the included plotting front-ends can visualise these as images, \LaTeX\ plots, and websites with interactive plots. \textsf{ANN-Benchmarks} aims to provide a constantly updated overview of the current state of the art of $k$-NN algorithms. In the short term, this overview allows users to choose the correct $k$-NN algorithm and parameters for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different approaches to $k$-NN search yield comparable quality-performance trade-offs. The system is available at \url{http://ann-benchmarks.com}. \end{abstract} %\setlength{\abovecaptionskip}{0pt plus 2pt} \section{Introduction}\label{sec:introduction} \input{intro} \section{Problem Definition and Quality Measures}\label{sec:problem:desc} \input{problem} \section{System Design}\label{sec:system:design} \input{system} \section{Evaluation}\label{sec:evaluation} \input{evaluation} \section{Conclusion \& Further Work}\label{sec:conclusion} \input{conclusion} \bibliographystyle{splncs03} \bibliography{lit} \end{document}
\documentclass{../lib} \begin{document} \begin{ledCmnt} \section{Example} This is commentary in \textbf{\TeX} syntax. \end{ledCmnt} \begin{ledDef} \funDefNoWhere{thisIsLEDSyntax}{\aggrSum{\setMem{x}{\iv{1}{3}}}{\exp{x}{2}}} \end{ledDef} \begin{ledCmnt} The line above will be converted to $math$. test: \begin{verbatim} cmd:>pp(thisIsLEDSyntax) "14" \end{verbatim} \end{ledCmnt} \end{document}
\input{commandsFile}
\DeclareUnicodeCharacter{202F}{AAAAAA}

\begin{document}
\fontfamily{ppl}\selectfont

%TITLE PAGE
\setlength\parindent{18pt}
\begin{titlepage}

%LOGO
{\begin{table}[t!]
\centering
\begin{tabu} to \textwidth { X[1.3,r,p] X[1.7,l,p] }
\textcolor{Blue} {\textbf{\small{Software Engineering 2\break CLup project by\break Robert Medvedec\break Toma Sikora}}} & \includegraphics[scale=0.5]{Images/PolimiLogo}
\end{tabu}
\end{table}}~\\ [7cm]

%TITLE
\begin{flushleft}
%Replace the text string with your title
{\textcolor{Blue}{\textbf{\Huge{Design Document}}}} \\ [1cm]
\end{flushleft}

\end{titlepage}

%Define deliverable specific info
%Replace cell contents where needed
\begin{table}[h!]
\begin{tabu} to \textwidth { X[0.3,r,p] X[0.7,l,p] }
\hline
\break\textbf{Deliverable:} & \break DD\\
\break\textbf{Title:} & \break Design Document \\
\textbf{Authors:} & Robert Medvedec, Toma Sikora \\
\textbf{Version:} & 1.3 \\
\textbf{Date:} & 9-January-2021 \\
\textbf{Download page:} & \url{https://github.com/robertodavinci/Software_Engineering_2_Project_Medvedec_Sikora} \\
\break\textbf{Copyright:} & \break Copyright © 2021, R. Medvedec, T. Sikora – All rights reserved\break\\
\hline
\end{tabu}
\end{table}

\setcounter{page}{2}
%------------------------------------------------------------------------------------------------------------------------------------------------
\newpage
\addcontentsline{toc}{section}{Table of Contents}
\tableofcontents
\newpage
\addcontentsline{toc}{section}{List of Figures}
\listoffigures
\addcontentsline{toc}{section}{List of Tables}
\listoftables
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{Introduction}}}
\label{sect:introduction}
\input{Files/introduction}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{Architectural Design}}}
\label{sect:architecture}
\input{Files/architecture}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{User Interface Design}}}
\label{sect:interface}
\input{Files/interface}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{Requirements Traceability}}}
\label{sect:requirements}
\input{Files/requirements}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{Implementation, Integration and Test Plan}}}
\label{sect:implementation}
\input{Files/implementation}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
{\color{Blue}{\section{Effort Spent}}}
\label{sect:effort}
\input{Files/effort}
%------------------------------------------------------------------------------------------------------------------------------------------------
\clearpage
\addcontentsline{toc}{section}{References}
\bibliographystyle{plain}
\bibliography{main}
%------------------------------------------------------------------------------------------------------------------------------------------------
\end{document}
%% %% Copyright (c) Facebook, Inc. and its affiliates. %% %% This source code is licensed under the MIT license found in the LICENSE file %% in the root directory of this source tree. %% \documentclass[article]{jdssv} %% -- LaTeX packages and custom commands --------------------------------------- %% recommended packages \usepackage{lmodern} %% other packages \usepackage{amsmath,amsthm} \newtheorem{theorem}{Theorem}[] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{definition}[theorem]{Definition} \newtheorem{observe}[theorem]{Observation} \newtheorem{remark1}[theorem]{Remark} \newenvironment{remark}{\begin{remark1} \rm}{\end{remark1}} \extrafloats{128} \maxdeadcycles=200 %% -- Article metainformation (author, title, ...) ----------------------------- %% - \author{} with primary affiliation %% - \Plainauthor{} without affiliations %% - Separate authors by \And or \AND (in \author) or by comma (in \Plainauthor). %% - \AND starts a new line, \And does not. \author{Aaron Defazio\\Facebook Artificial Intelligence Research \And Mark Tygert\\Facebook AI Research \AND Rachel Ward\\University of Texas at Austin \And Jure Zbontar\\Facebook AI Research} \Plainauthor{Aaron Defazio, Mark Tygert, Rachel Ward, Jure Zbontar} %% - \title{} in title case %% - \Plaintitle{} without LaTeX markup (if any) %% - \Shorttitle{} with LaTeX markup (if any), used as running title \title{Compressed Sensing with a Jackknife, a~Bootstrap, and Visualization} \Plaintitle{Compressed Sensing with a Jackknife, a~Bootstrap, and Visualization} \Shorttitle{Compressed Sensing with a Jackknife, a~Bootstrap, and Visualization} %% - \Abstract{} almost as usual \Abstract{ Compressed sensing proposes to reconstruct more degrees of freedom in a signal than the number of values actually measured (based on a potentially unjustified regularizer or prior distribution). Compressed sensing therefore risks introducing errors --- inserting spurious artifacts or masking the abnormalities that medical imaging seeks to discover. Estimating errors using the standard statistical tools of a jackknife and a bootstrap yields error ``bars'' in the form of full images that are remarkably qualitatively representative of the actual errors (at least when evaluated and validated on data sets for which the ground truth and hence the actual error is available). These images show the structure of possible errors --- without recourse to measuring the entire ground truth directly --- and build confidence in regions of the images where the estimated errors are small. Further visualizations and summary statistics can aid in the interpretation of such error estimates. Visualizations include suitable colorizations of the reconstruction, as well as the obvious ``correction'' of the reconstruction by subtracting off the error estimates. The canonical summary statistic would be the root-mean-square of the error estimates. Unfortunately, colorizations appear likely to be too distracting for actual clinical practice in medical imaging, and the root-mean-square gets swamped by background noise in the error estimates. Fortunately, straightforward displays of the error estimates and of the ``corrected'' reconstruction are illuminating, and the root-mean-square improves greatly after mild blurring of the error estimates; the blurring is barely perceptible to the human eye yet smooths away background noise that would otherwise overwhelm the root-mean-square. 
} %% - \Keywords{} with LaTeX markup, at least one required %% - \Plainkeywords{} without LaTeX markup (if necessary) %% - Should be comma-separated and in sentence case. \Keywords{error estimation, visualization, medical imaging, magnetic resonance imaging, compressive sensing, compressive sampling} \Plainkeywords{error estimation, visualization, medical imaging, magnetic resonance imaging, compressive sensing, compressive sampling} %% - \Address{} of at least one author %% - May contain multiple affiliations for each author %% (in extra lines, separated by \emph{and}\\). %% - May contain multiple authors for the same affiliation %% (in the same first line, separated by comma). \Address{ Mark Tygert\\ Facebook Artificial Intelligence Research\\ 1 Facebook Way\\ Menlo Park, CA 94025\\ E-mail: \email{[email protected]}\\ URL: \url{http://tygert.com}\\ \ \\ } \begin{document} %% -- Introduction ------------------------------------------------------------- %% - In principle "as usual". %% - But should typically have some discussion of both _software_ and _methods_. %% - Use \proglang{}, \pkg{}, and \code{} markup throughout the manuscript. %% - If such markup is in (sub)section titles, a plain text version has to be %% added as well. %% - All software mentioned should be properly \cite-d. %% - All abbreviations should be introduced. %% - Unless the expansions of abbreviations are proper names (like "Journal %% of Statistical Software" above) they should be in sentence case (like %% "generalized linear models" below). \section{Introduction} Compressed sensing is the concept that many interesting signals are recoverable from undersampled measurements of the representations of those signals in a special basis. A widely touted potential application is to the acceleration of magnetic resonance imaging (MRI), as by~\citet{lustig-donoho-pauly}. In MRI, the special basis for representations of signals is the Fourier basis, and the goal of compressed sensing is to recover high-resolution images from relatively sparse measurements of the Fourier components of those images. Here, ``sparse'' means substantially fewer measurements of values in the Fourier domain than the numbers of pixels in the reconstructed images. Of course, recovering more degrees of freedom than the number of measured values is an ill-posed problem, yet has been rigorously proven to be solvable when the gradients of the images being recovered are known to be small except at a few pixels, for instance, when edges dominate the images (the proofs of~\citet{candes-romberg-tao} are especially well-known). This recovery is still non-trivial, as the small number of pixels where the gradients are non-trivial very well may vary from image to image, while the same reconstruction procedure works irrespective of where the gradients concentrate (so long as they concentrate on sparse subsets of all pixels in the reconstructed domain). The requirement that gradients be concentrated on sparse subsets is sufficient but may not be necessary, and much recent research --- including that of~\citet{hammernik-klatzer-kobler-recht-sodickson-pock-knoll} --- aims to generalize beyond this requirement by applying machine learning to representative data sets. Indeed, the literature on compressed sensing is vast and growing rapidly; see, for example, the recent review of~\citet{tropp} for explication of all this and more. 
Needless to say, compressed sensing risks introducing errors into the resulting reconstructions, especially if the assumption of sparsity is unfounded for the real data at hand. The works of~\citet{malioutov-sanghavi-willsky} and~\citet{ward} quantify these errors via a single scalar estimate of confidence in the reconstruction, namely, an estimate of the mean-square error. The present paper extends these methods, producing estimates of the entire image displaying the discrepancy between the reconstruction in compressed sensing and the actual ground truth. Of course, compressed sensing takes too few measurements to ascertain the actual ground truth, so only an estimate of the discrepancy --- an error ``bar'' in the form of an image --- is possible. However, the examples of the present paper show that ``jackknife'' and ``bootstrap'' estimates of the errors are reasonably representative of the reality, at least for the cases in MRI tested here, in which the ground truth is available for comparison and evaluation. Those unfamiliar with the jackknife and the bootstrap may wish to consult~\citet{efron-tibshirani}; that said, the presentation below is completely self-contained, not presuming any prior knowledge of either the jackknife or the bootstrap. The jackknife and bootstrap images highlight when, where, and what errors may have arisen in each reconstruction from compressed sensing for MRI, tailored to the specific object being imaged. The jackknife is similar to standard {\it a posteriori} tests for convergence of numerical methods; such numerical tests for convergence often serve as proxies for estimates of accuracy. The bootstrap leverages more extensive computation, simulating measurements that could have been taken but were not in fact (recall that compressed sensing involves taking fewer measurements than the number of degrees of freedom being reconstructed). The bootstrap simulates plausible alternative reconstructions from hypothetical measurements that are consistent with the reconstruction from the measurements actually made. The alternative reconstructions fluctuate around the reconstruction from the measurements actually made; the fluctuation is an estimate of the error, when averaged over various sampling patterns for the measurements being considered. The present paper also investigates user-friendly methods for generating visualizations and automatic interpretations of these error estimates, appropriate for display to medical professionals (especially radiologists). After testing several natural visual displays, we find that any nontrivial visualization is likely to be too distracting for physicians, as some have expressed reservations about having to look at any errors at all --- they would be much happier having a machine look at the estimates and flag potentially serious errors for special consideration. We might conclude that colorization is too distracting, that the best visualizations are simple displays of the error estimates, possibly supplemented with the error estimates subtracted from the reconstructions (thus showing how the error estimates can ``correct'' the reconstructions). Most of the results of the present paper about visualization could be regarded as negative, however natural and straightforward the colorizations may be. 
For circumstances in which visualizing errors is overkill (or unnecessarily bothersome), we find that an almost simplistic automated interpretation of the plots of errors --- reporting just the root-mean-square of the denoised error estimates --- works remarkably well. While background noise dominates the root-mean-square of the initial, noisy error estimates, even denoising that is almost imperceptible can remove the obfuscatory background noise; the root-mean-square can then focus on the remaining errors, which are often relatively sparsely distributed. When the root-mean-square of the denoised error estimates is large enough, a clinician could look at the visualizations mentioned above to fully understand the implications of the error estimates (or rescan the patient using a less error-prone sampling pattern). The structure of the remainder of the present paper is as follows: First, Section~\ref{methods} introduces the jackknife and the bootstrap for compressed sensing, together with methods for their visualization and automated summarization in scalar statistics. Then, Section~\ref{results} illustrates the performance of the methods on data sets from MRI, with copious additional examples provided in the appendices. Finally, Section~\ref{conclusion} discusses the results introduced in Section~\ref{results} and draws some conclusions. The \proglang{Python} package \pkg{fbooja} reproduces all our results from Section~\ref{results} and the appendices and is available at \url{https://github.com/facebookresearch/fbooja} (that GitHub repository also includes a re-distribution of the MRI scans of a patient's head from the data of~\citet{mri2}, \citet{mri1}, \citet{mri3}, and~\citet{mri4}). The software is available with a permissive (MIT) open-source license, though the package is meant solely for prototyping and demonstrating a proof of principle --- the package may not be industrial-strength or suited for deployment directly to clinical practice; the software makes no attempt to implement fail-safes or be fool-proof, but rather is most appropriate for use in research or as a template for the development of production-ready implementations. \section{Methods} \label{methods} \subsection{A jackknife and a bootstrap} We denote by $X$ a data set $(x_i)_{i \in I}$, where each $x_i$ is a scalar or a vector and $I$ is a set of indices. We consider a vector-valued (or image-valued) function $f = f(X, S)$ of both $X$ and a subset $S$ of the index set $I$ such that the value of $f$ depends only on $(x_i)_{i \in S}$. Compressed sensing approximates the full $f(X, I)$ with $f(X, S)$, where $S$ is a subset of $I$ collecting together independent uniformly random draws from $I$, perhaps plus some fixed subset $T$ of $I$. (Obviously, this construction makes $T$ a subset of $S$. However, $T$ need not be disjoint from the set of independent uniformly random draws.) In compressed sensing for MRI, measured observations in the Fourier domain of the object being imaged are $(x_i)_{i \in I}$, and $f(X, I)$ uses those measurements to reconstruct the object in the original domain (hence involving an inverse Fourier transform to map from $X$ to $f(X, I)$). The reconstruction $f(X, S)$ from a subset $S$ of $I$ commonly involves minimizing a total-variation objective function or deep learning of some sort, as discussed by~\citet{tao-yang}, \citet{yang-zhang}, \citet{hammernik-klatzer-kobler-recht-sodickson-pock-knoll}, and their references. 
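As a notational aid, the following sketch (all names hypothetical; \code{reconstruct} stands in for whichever total-variation or learned solver plays the role of $f$) indicates how the index set $I$, the fixed subset $T$, and a random sampling set $S$ of the kind just described might be represented.

\begin{Code}
# Hypothetical sketch of the index set I, the fixed subset T, and the set S.
import numpy as np

def draw_sampling_set(index_set, fixed_subset, num_draws, rng):
    """S: the union of fixed_subset and num_draws uniform draws from index_set."""
    draws = rng.choice(list(index_set), size=num_draws, replace=True)
    return set(fixed_subset) | set(draws.tolist())

def reconstruct(X, S):
    """Placeholder for f(X, S): reconstruct from the measurements x_i, i in S."""
    raise NotImplementedError("plug in a total-variation or learned solver here")

rng = np.random.default_rng(0)
I = range(-128, 128)   # indices of all possible measurements (here, 256 of them)
T = range(-4, 5)       # a fixed subset, e.g., a few of the lowest frequencies
S = draw_sampling_set(I, T, num_draws=64, rng=rng)   # T is always a subset of S
\end{Code}

The construction mirrors the description above: $T$ is always contained in $S$, and repeated draws collapse because $S$ is a set.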
With such undersampled measurements, the reconstruction is oblivious to much of the Fourier domain, sampling fewer measurements than at the usual Nyquist rate. We will tacitly be assuming that the procedure for reconstruction works not only for the set $S$ specifying the measurements actually used, but also for other sets of random observations, that is, for other random realizations of $S$. For machine-learned reconstructions, the model for reconstruction must train on measurements taken from many different possible samplings, not just one, as otherwise the model will be blind to parts of the Fourier domain. If we can simulate on a computer what could have happened with measurements that we do not take in reality, then we can construct error ``bars'' highlighting when, where, and what might have gone wrong in a reconstruction from actual measurements taken with only one realized sampling set $S$. The computational simulation allows us to gauge what could have happened with unseen measurements. While seeing the unseen (at least in part) may seem counterintuitive, in fact the field of statistics is all about what might have occurred given observations of what actually did happen. The bootstrap defined below follows this prescription literally. The jackknife is a somewhat simpler formulation. The goal of both the jackknife and the bootstrap is to provide an estimated bound on $f(X, S) - f(X, I)$, without having access to the full reconstruction $f(X, I)$. (The full reconstruction depends on all measurements $(x_i)_{i \in I}$ --- the whole $I$ --- so is unavailable when performing compressed sensing.) First, we define the jackknife error ``bar'' for $f$ on $S$ to be % \begin{equation} \label{jackknife} d = 2 \sum_{i \in S \backslash T} \Bigl( f(X, S \backslash \{i\}) - f(X, S) \Bigr), \end{equation} % where the sum ranges over every index $i \in S$ such that $i \notin T$, and $S \backslash \{i\}$ is just $S$ after removing $i$. The jackknife $d$ defined in~(\ref{jackknife}) characterizes what would happen to the output of $f$ if the input $S$ were slightly smaller; if $f(X, S)$ is close to converging on $f(X, I)$, then $f(X, S \backslash \{i\})$ in~(\ref{jackknife}) should also be close to $f(X, I)$, so $f(X, S \backslash \{i\})$ should be close to $f(X, S)$, aside from errors. We refer to $f(X, S \backslash \{i\}) - f(X, S)$ in the right-hand side of~(\ref{jackknife}) as a ``leave-one-out'' difference, as in ``leave-one-out'' cross-validation. We could empirically (or semi-empirically) determine a calibration constant $c$ such that $cd$ becomes of the same size as the actual discrepancy $f(X, S) - f(X, I)$ on average for a training set of exemplars (the training set could consist of many different $X$ together with the corresponding $f(X, S)$ and $f(X,I)$). We found that $c = 1$ works well in the experiments reported below. Next, we define the bootstrap, assuming that $S$ is the union of the set $T$ and a set of $\ell$ independent uniformly random samples from $I$ (where $\ell$ is a parameter, and the number of distinct members of the latter set may be less than $\ell$ due to repetition in the $\ell$ samples): First, having already computed $f(X, S)$, we solve for $\tilde{X} = (\tilde{x}_i)_{i \in I}$ such that % \begin{equation} \label{bootstrapped} f(\tilde{X}, I) = f(X, S). 
\end{equation} % Then, we form the set $R$ of $\ell$ independent uniformly random draws from $I$ (not all $\ell$ of which need be distinct), plus the fixed subset $T$ of $I$ (in so-called parallel MRI, as described by~\citet{brown-cheng-haacke-thompson-venkatesan}, $T$ would naturally contain all the lowest frequencies). We select a positive integer $k$ and repeat this resampling independently $k$ times, thus obtaining sets $R_1$, $R_2$, \dots, $R_k$. We define the bootstrap error ``bar'' to be % \begin{equation} \label{difference} e = \frac{3}{k} \sum_{j = 1}^k \Bigl( f(\tilde{X}, R_j) - f(\tilde{X}, I) \Bigr). \end{equation} % We could say that $f(\tilde{X}, R_j)$ arises from $f(\tilde{X}, I)$ in the same way as $f(X, S)$ arises from $f(X, I)$, having constructed $\tilde{X}$ assuming that $f(X, S)$ is ``correct'' in the sense of~(\ref{bootstrapped}); so the summand in~(\ref{difference}) is a proxy for the actual error $f(X, S) - f(X, I)$ (and the averaging over independent realizations reduces noise). We have no reason to believe that the scaling $3$ in~(\ref{difference}) is the ideal factor, but $3$ seems to work well in the experiments reported below. \begin{remark} Both $\bigl( f(X, S \backslash \{i\}) - f(X, S) \bigr)_{i \in S \backslash T}$ from~(\ref{jackknife}) and $\bigl( f(\tilde{X}, R_j) - f(\tilde{X}, I) \bigr)_{j = 1}^k$ from~(\ref{difference}) span whole spaces of errors that potentially could have happened given the actually observed measurements. (Here, ``potentially'' refers to being consistent with what the jackknife or bootstrap can generate.) While the sum and average in~(\ref{jackknife}) and~(\ref{difference}), respectively, of these sets of differences characterize leading modes of these spaces, principal component analysis can characterize all modes. However, looking at even just the leading modes seems somewhat overwhelming already; having to investigate more modes could really try the patience of a physician interpreting MRI scans, for instance. The present paper focuses on the leading modes observed in our experiments (namely, the sum or average). \end{remark} \subsection{Visualization in grayscale and in color} \label{viz} We include four kinds of plots displaying the full reconstructions and errors: % \begin{enumerate} \item ``Original'' is the original grayscale image. \item ``Reconstruction'' is the reconstruction via compressed sensing. \item ``Error of Reconstruction'' displays the difference between the original and reconstructed images, with black (or white) corresponding to extreme errors, and middling grays corresponding to the absence of errors. \item ``Bootstrap'' displays the errors estimated via the bootstrap, with black (or white) corresponding to extreme errors, and middling grays corresponding to the absence of errors. \end{enumerate} We visualize the errors in reconstruction and the bootstrap estimates using grayscale so that the phases of oscillatory artifacts are less apparent; colorized errors look very different for damped sine versus cosine waves, whereas the medical meaning of such waves is often similar. 
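Tying the estimators of the previous subsection to the grayscale plots just listed, the following sketch (hypothetical helper names; \code{reconstruct} again stands in for $f$) computes the jackknife~(\ref{jackknife}) and the bootstrap~(\ref{difference}) as images ready for such display.

\begin{Code}
# Sketch of the jackknife d and the bootstrap e defined above
# (hypothetical helpers; reconstruct(X, S) stands in for f(X, S)).
import numpy as np

def jackknife(X, S, T, reconstruct):
    """Jackknife d: twice the sum of leave-one-out differences over S minus T."""
    base = reconstruct(X, S)
    return 2 * sum(reconstruct(X, S - {i}) - base for i in S - set(T))

def bootstrap(X, S, T, I, reconstruct, num_draws, k, rng):
    """Bootstrap e: 3/k times the sum of k resampled differences."""
    # Assumption of this sketch (not a statement about fbooja): f(., I) inverts
    # the full set of Fourier measurements, so solving f(X~, I) = f(X, S) for X~
    # reduces to taking the forward Fourier transform of f(X, S).
    X_tilde = np.fft.fft2(reconstruct(X, S))
    full = reconstruct(X_tilde, set(I))   # f(X~, I), which equals f(X, S)
    total = np.zeros_like(full)
    for _ in range(k):
        draws = rng.choice(list(I), size=num_draws, replace=True)
        R = set(draws.tolist()) | set(T)  # T plus num_draws uniform draws from I
        total += reconstruct(X_tilde, R) - full
    return 3 * total / k
\end{Code}

Displaying $d$ or $e$ then amounts to plotting the resulting array with zero mapped to a middling gray, for instance via \code{matplotlib.pyplot.imshow} with \code{cmap='gray'} and symmetric \code{vmin} and \code{vmax}.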
We consider four methods for visualizing the effects of errors (estimated via the bootstrap) simultaneously with displaying the reconstruction, via manipulation of the hue-saturation-value color space described, for example, by~\citet{scikit-image}: % \begin{enumerate} \item ``Reconstruction -- Bootstrap'' is literally the bootstrap error estimate subtracted from the reconstruction, in some sense ``correcting'' or ``enhancing'' the reconstruction. \item ``Errors Over a Threshold Overlaid'' identifies the pixels in the bootstrap error estimate whose absolute values are in the upper percentiles (the upper two percentiles for horizontally retained sampling, the upper one for radially retained sampling), then replaces those pixels (retaining all other pixels unchanged) in the reconstruction with colors corresponding to the values of the pixels in the bootstrap. Specifically, the colors plotted are at the highest value possible and fully saturated, with a hue ranging from cyan to magenta, with blue in the middle (however, as we include only the upper percentiles, only hues very close to cyan or to magenta actually get plotted). This effectively marks the pixels corresponding to the largest estimated errors with eye-popping colors, leaving the other pixels at their gray values in the reconstruction. \item ``Bootstrap-Saturated Reconstruction'' sets the saturation of a pixel in the reconstruction to the corresponding absolute value of the pixel in the bootstrap error estimate (normalized by the greatest absolute value of any pixel in the bootstrap), with a hue set to red or green depending on the sign of the pixel in the bootstrap. The value of the pixel in the reconstruction stays the same. Thus, a pixel gets colored more intensely red or more intensely green when the absolute value of the pixel in the bootstrap is large, but always with the value in hue-saturation-value remaining the same as in the original reconstruction; a pixel whose corresponding absolute value in the bootstrap is relatively negligible stays unsaturated gray at the value in the reconstruction. \item ``Bootstrap-Interpolated Reconstruction'' leaves the value of each pixel at its value in the reconstruction, and linearly interpolates in the hue-saturation plane between green and magenta based on the corresponding value of the pixel in the bootstrap error estimate (normalized by the greatest absolute value of any pixel in the bootstrap). Pure gray is in the middle of the line between green and magenta, so that any pixel whose corresponding error estimate is zero will appear unchanged, exactly as it was in the original reconstruction; pixels whose corresponding error estimates are the largest have the same value as in the reconstruction but get colored magenta, while those whose corresponding error estimates are the most negative have the same value as in the reconstruction but get colored green. \end{enumerate} \subsection{Summarization in a scalar} \label{sum} The square root of the sum of the squares of slightly denoised error estimations summarizes in a single scalar the overall size of errors. 
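A minimal sketch of this summary, assuming the error estimate is stored as a two-dimensional \pkg{NumPy} array and borrowing \code{skimage.filters.gaussian} (the function used for blurring in Section~\ref{results}) for the mild denoising:

\begin{Code}
# Sketch of the scalar summary: blur the error estimate slightly, then take
# the square root of the sum of the squares (hypothetical names).
import numpy as np
from skimage.filters import gaussian

def summarize(error_estimate, sigma=1.0):
    """Blur with a Gaussian of standard deviation sigma (in pixels), then
    return the square root of the sum of the squares."""
    blurred = gaussian(error_estimate, sigma=sigma)
    return float(np.sqrt(np.sum(blurred ** 2)))
\end{Code}

Comparing the returned scalar against a threshold could then trigger the fuller visualizations only when warranted.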
Even inconspicuous denoising can greatly improve the root-mean-square: While the effect of blurring the bootstrap error estimates with a normalized Gaussian convolutional kernel of standard deviation one pixel is almost imperceptible to the human eye (or at least preserves the semantically meaningful structures in the images), the blur helps remove the background of noise that can otherwise dominate the root-mean-square of the error estimates. The blur largely preserves significant edges and textured areas, yet can eliminate much of the perceptually immaterial zero-mean background noise. Whereas background noise can overwhelm the root-mean-square of the initial, noisy bootstrap, the root-mean-square of the slightly blurred bootstrap captures the magnitude of the more important features in the error estimates. \section{Results} \label{results} The examples of this section illustrate the most commonly discussed compressed sensing for MRI, in which we reconstruct an image from measured observations of some of its values in the Fourier domain --- ``some'' meaning significantly less than usually required by the Nyquist-Shannon-Whittaker sampling theory. To reconstruct an image from measurements taken in the Fourier domain (with independent and identically distributed centered complex Gaussian noise of standard deviation $0.02\sqrt{2}$ added to mimic machine imprecision), we minimize the sum of deviations from the measurements plus a total-variation regularizer via Algorithm~1 at the end of Section~2.2 of~\citet{tao-yang} (which is based on the work of~\citet{yang-zhang}), with 100 iterations, using the typical parameter settings $\mu = 10^{12}$ and $\beta = 10$ ($\mu$ governs the fidelity to the measurements taken in the Fourier domain, and $\beta$ is the strength of the coupling in the operator splitting for the alternating-direction method of multipliers). As discussed by~\citet{tropp}, this is the canonical setting for compressed sensing. All computations take place in IEEE standard double-precision arithmetic. We use $k =$ 1,000 resamplings for the bootstrap in~(\ref{difference}). The \proglang{Python} package \pkg{fbooja} reproduces our results and is available at \url{https://github.com/facebookresearch/fbooja} \dots\ \pkg{fbooja} builds upon \proglang{PyTorch} of~\citet{pytorch}, which is available along with instructions for installation of dependencies at \url{https://pytorch.org} We consider two kinds of sampling patterns, radially retained and horizontally retained. All sampling takes place on an $m \times n$ Cartesian grid, allowing direct use of the fast Fourier transform for acceleration of the reconstruction (as described by~\citet{tao-yang}). Future implementations could consider sampling off the grid, too. With radially retained sampling, each $x_i$ in our data set $X = (x_i)_{i \in I}$ consists of all pixels on an $m \times n$ Cartesian grid in the Fourier domain that intersect a ray emanating from the origin (each angle corresponds to $x_i$ for a different index $i$). Figure~\ref{radialines} displays four examples of uniformly random subsets of $X$, sampling the angles of the rays uniformly at random. For radially retained sampling, we refrain from supplementing the subsampled set $S$ with any fixed subset; that is, the set $T$ is empty. To construct $S$, we generate $\frac{m + n}{5}$ angles uniformly at random (rounding $\frac{m + n}{5}$ to the nearest integer), which makes the errors easy to see in the coming figures, yet not too extreme. 
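The following simplified sketch (not the code in \pkg{fbooja}) constructs a boolean mask of this kind on an $m \times n$ grid, stepping along each randomly drawn ray from the center of the grid and marking every pixel the ray passes through.

\begin{Code}
# Simplified sketch of a radially retained sampling mask (not the fbooja code).
import numpy as np

def radial_mask(m, n, num_angles, rng):
    """Boolean m-by-n mask marking the pixels hit by rays at random angles."""
    mask = np.zeros((m, n), dtype=bool)
    center = np.array([(m - 1) / 2.0, (n - 1) / 2.0])
    angles = rng.uniform(0.0, 2.0 * np.pi, size=num_angles)
    for theta in angles:
        direction = np.array([np.sin(theta), np.cos(theta)])
        # Step outward from the center until the ray leaves the grid.
        for t in np.arange(0.0, float(max(m, n)), 0.5):
            r, c = np.rint(center + t * direction).astype(int)
            if 0 <= r < m and 0 <= c < n:
                mask[r, c] = True
            else:
                break
    return mask

rng = np.random.default_rng(0)
mask = radial_mask(256, 256, num_angles=round((256 + 256) / 5), rng=rng)
\end{Code}

Up to the details of how the rays are rasterized and where the zero frequency sits on the grid, the masks of Figure~\ref{radialines} are of this kind.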
With horizontally retained sampling, each $x_i$ in our data set $X = (x_i)_{i \in I}$ consists of a horizontal line $n$ pixels wide on an $m \times n$ Cartesian grid in the Fourier domain, with $I$ consisting of the $m$ integers from $-\frac{m}{2}$ to $\frac{m-2}{2}$. The subsampled set $S$ always includes all horizontal lines ranging from the $-\sqrt{2m}\,$th lowest frequency to the $\sqrt{2m}\,$th lowest frequency (rounding $\sqrt{2m}$ to the nearest integer); that is, the set $T$ consists of these low-frequency indices. To construct the remainder of $S$, we generate $\frac{m}{4}$ integers from $-\frac{m}{2}$ to $\frac{m-2}{2}$ uniformly at random (rounding $\frac{m}{4}$ to the nearest integer), which makes the errors easy to see in the coming figures, yet not too extreme. Recall that $S$ is a set: each member $i$ of $S$ occurs only once irrespective of how many times the sampling procedure just described chooses to include the index $i$. Figures~\ref{bigradial}--\ref{bighorizontal} display results for retaining radial and horizontal lines, using the same original images (the third and tenth --- ``lower'' and ``upper'' --- slices out of twenty). Further examples are available in Appendix~\ref{suppfigs}. The figures whose captions specify ``$2\times$'' use $\frac{2(m + n)}{5}$ random angles for radially retained sampling and $\frac{m}{2}$ random integers for horizontally retained sampling instead of the $\frac{m + n}{5}$ random angles and $\frac{m}{4}$ random integers used in all other figures. All figures concern MRI scans of a patient's head from the data of~\citet{mri2}, \citet{mri1}, \citet{mri3}, and~\citet{mri4}.\footnote{The \proglang{Python} software package \pkg{fbooja} includes a re-distribution of all data processed for the present paper, along with modular codes for automatically reproducing all our results, and is available at \url{https://github.com/facebookresearch/fbooja}\ \dots\ an extended data set with MRI scans from multiple patients (one of whom is the patient considered in the results reported here) is available at \url{http://www.medinfo.cs.ucy.ac.cy/old/doc/Publications/Datasets/MRIFreeDataset.zip}} The resolutions in pixels of the original images range from $376 \times 286$ to $456 \times 371$. The acquisitions used turbo spin-echo pulse sequences for T2 weighting on a 1.5 tesla scanner; the repetition time was 4408 ms, the echo time was 100 ms, and the echo spacing was 10.8 ms. The thickness of a cross-sectional slice was 5 mm. The resolution was 2.226 pixels per mm. Discrete Fourier transforms of the downloaded full reconstructions yielded proxies for the measurements in the spatial Fourier domain (``k-space''). The scans did not accelerate the acquisitions via any form of parallel imaging. In the figures, ``Original'' displays the original image, ``Reconstruction'' displays the reconstruction $f(X, S)$, ``Error of Reconstruction'' displays the difference between the original image and the reconstruction, ``Jackknife'' displays the jackknife $d$ from~(\ref{jackknife}), and ``Bootstrap'' displays the bootstrap $e$ from~(\ref{difference}). The values of the original pixels are normalized to range from $0$ to $1$ (that is, the pixels in the original data for each cross-sectional slice get normalized to range from $0$ to $1$; the images corresponding to the same cross-sectional slice do not get further normalized individually). In the images ``Original'' and ``Reconstruction,'' pure black corresponds to 0 while pure white corresponds to 1. 
In the images ``Error of Reconstruction,'' ``Jackknife,'' and ``Bootstrap,'' pure white and pure black correspond to the extreme values $\pm 1$, whereas 50\% gray (halfway to black or to white) corresponds to 0. Thus, in the images displaying errors and potential errors, middling halftone grays correspond to little or no error, while extreme pure white and pure black correspond to more substantial errors. For the visualizations from Subsection~\ref{viz}, we focus on two cross-sectional slices: the lower slice is the third of twenty in Appendix~\ref{suppfigs}, while the upper slice is the tenth. Figures~\ref{first_viz}--\ref{last_viz} display the visualizations from Subsection~\ref{viz}. Figures~\ref{blurredr} and~\ref{blurredh} depict the effects of the blur from Subsection~\ref{sum}, implemented with \code{skimage.filters.gaussian} (a function for convolution with a Gaussian) from \pkg{scikit-image} of~\citet{scikit-image}. Table~\ref{blurring} reports how drastically such a nearly imperceptible blur changes the square roots of the sums of the squares of the error estimates. Background noise clearly overwhelms the root-mean-square without any denoising of the error estimates --- the root-mean-square decreases dramatically even with just the mild denoising of blurring with a normalized Gaussian convolutional kernel whose standard deviation is one pixel, as in Table~\ref{blurring} and Figures~\ref{blurredr} and~\ref{blurredh}. Tables~\ref{blurs3} and~\ref{blurs10} report how blurring with wider Gaussians affects the root-mean-square; of course, wider Gaussian blurs are much more conspicuous and risk washing out important coherent features of the error estimates, while the last column of Table~\ref{blurs10} shows that denoising with wider Gaussian blurs brings diminishing returns. The width used in Table~\ref{blurring} and Figures~\ref{blurredr} and~\ref{blurredh} --- only one pixel --- may be safest. Appendix~\ref{blurredover} displays the grayscale reconstructions overlaid with the blurred bootstraps (blurring with a Gaussian whose standard deviation is one pixel), thresholded and colorized as in Subsection~\ref{viz}. \section{Discussion and Conclusion} \label{conclusion} The jackknife images are generally noisier than the bootstrap images. The bootstrap directly explores parts of the Fourier domain outside the observed measurements, whereas the jackknife is more like a convergence test or a differential approximation to the bootstrap --- see, for example, the review of~\citet{efron-tibshirani}. Both the jackknife and the bootstrap occasionally display artifacts where in fact the reconstruction was accurate. Moreover, they miss some anomalies; if the reconstruction completely washes out a feature of the original image, then neither the jackknife nor the bootstrap can detect the washed-out feature (consider, for instance, the stark long straight line in the upper-right quadrants of the errors in reconstruction from Figures~\ref{bigradial} and~\ref{afterbigradial} --- the jackknife and bootstrap fail to detect that line). Neither the jackknife nor the bootstrap can hallucinate or otherwise introduce a feature in a reconstruction that the reconstruction procedure is not designed to handle; in particular, any machine-learned model is practically guaranteed to miss any feature not represented in the data set used for training the model. That said, in most cases they show the actual errors nicely. The estimates bear an uncanny qualitative resemblance to the actual errors. 
Using both the jackknife and the bootstrap may be somewhat conservative, but if the jackknife misses an error, then the bootstrap usually catches it, and vice versa. Similarly, most clinical practice involves comparison of MRI scans of the same patient across proton density, T1, and T2 weighting, and sometimes also with other weightings (FLAIR or fat-suppression, for example) and other physical quantities such as susceptibility, diffusion, or densities of radiological contrast agents. Comparing across these different sequence types (and possibly different imaging modalities) affords an additional opportunity to check for inconsistencies that may indicate errors in the acquisition or reconstruction, including errors arising from compressed sensing. An anonymous reviewer also suggested that careful study of the errors introduced by particular schemes for reconstruction (schemes such as the minimization of total variation tested in Section~\ref{results}) could inform Bayesian priors useful for interpreting individual reconstructions in the clinical setting. The reviewer noted that the most prominent anomalies in the examples of Section~\ref{results} appear to focus on the boundary interfaces of different tissues, on artifacts due to motion of the cerebro-spinal fluid, and on idiosyncrasies in the geometry of the sampling scheme (radial versus horizontal). Clinicians will likely need to consider the reviewer's observations and perhaps develop suitable Bayesian priors. Broadly speaking, the bootstrap-saturated reconstructions and bootstrap-interpolated reconstructions look similar, even though the details of their constructions differ. Both the bootstrap-saturated reconstruction and the bootstrap-interpolated reconstruction highlight errors more starkly on pixels for which the reconstruction is bright; dark green, dark red, and dark magenta (that is, with a relatively low value in hue-saturation-value) simply do not jump out visually, even if the green, red, or magenta are fully saturated. That said, retaining the value of the pixel in the reconstruction makes the colorization of the bootstrap-saturated reconstruction and the bootstrap-interpolated reconstruction far less distracting than in errors over a threshold overlaid, with much higher fidelity to the form of the grayscale reconstruction in the colored regions. Of course, the errors over a threshold overlaid do not alter the grayscale reconstruction at all when the errors are within the threshold, so the fidelity to the grayscale reconstruction is perfect in those areas of the images with overlaid errors where the error estimates do not go beyond the threshold. Thus, none of the colorizations is uniformly superior to the others, and all may be too distracting for actual clinical practice. Alternatives include direct display of the bootstrap error estimates, possibly complemented by the bootstrap subtracted from the reconstruction (to illustrate the effects of ``correcting'' the reconstruction with the error estimates), which are readily interpretable and minimally distracting. The bootstrap subtracted from the reconstruction tends to sharpen the reconstruction and to add back some features such as lines or textures that the reconstruction obscured. However, this reconstruction that is ``corrected'' with the bootstrap estimations may contain artifacts not present in the original image --- the error estimates tend to be conservative, possibly suspecting errors in some regions where in fact the reconstruction is accurate. 
The ``corrected'' reconstruction (that is, the bootstrap subtracted from the reconstruction) can be illuminating, but only as a complement to plotting the bootstrap error estimates on their own, too. A sensible protocol could be to check whether the root-mean-square of the blurred bootstrap is large enough to merit further investigation and, if so, to look at the full bootstrap image together with the reconstruction ``corrected'' by subtracting off the bootstrap error estimates (or at the colorizations). Consulting the jackknife is another (albeit noisier) possibility. For obvious reasons, the clinical setting needs to be rather conservative in adopting novel techniques such as those proposed in the present paper. Medical research, however, might be able to leverage the proposed methods more unreservedly, screening for problems or otherwise quantifying uncertainty in advanced procedures for image acquisition, reconstruction, and analysis. The jackknife and the bootstrap can estimate errors in any scheme for reconstruction from undersampled measurements, without requiring any modification of the software for reconstruction --- a convenience that could favor accelerated adoption as a handy tool in research, whether as inputs to a pipeline of data analysis or merely for flagging egregious errors.

\section*{Computational details}

The results in this paper were obtained using the \proglang{Python} package \pkg{fbooja} available at \url{https://github.com/facebookresearch/fbooja} \dots\ \pkg{fbooja} builds upon \proglang{PyTorch} of~\citet{pytorch}, which is available at \url{https://pytorch.org} along with instructions for installation of the dependencies \pkg{Matplotlib} of~\citet{matplotlib}, \pkg{NumPy} of~\citet{numpy}, \pkg{Pillow} of~\citet{pillow}, \pkg{scikit-image} of~\citet{scikit-image}, and \pkg{SciPy} of~\citet{scipy}.

\section*{Acknowledgements}

We would like to thank Florian Knoll, Jerry Ma, Jitendra Malik, Matt Muckley, Mike Rabbat, Dan Sodickson, Larry Zitnick, and the anonymous referees and editor.
\begin{figure} \parbox{\textwidth}{\includegraphics[width=\textwidth]{mask}} \caption{Radially retained sampling --- sampling on a Cartesian grid along rays emanating from the origin} \label{radialines} \end{figure} \newlength{\vertsep} \setlength{\vertsep}{.0in} \newlength{\imsize} \setlength{\imsize}{.34\textwidth} \newlength{\imsizes} \setlength{\imsizes}{.49\textwidth} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_bootstrap}} \end{centering} \caption{Radially retained sampling --- lower slice} \label{bigradial} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_bootstrap}} \end{centering} \caption{Radially retained sampling --- upper slice} \label{afterbigradial} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- lower slice} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- upper slice} \label{bighorizontal} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_original}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_recon}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_corrected}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_overlaid}} \end{centering} \caption{Radially retained sampling --- lower slice (a)} \label{first_viz} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_saturated}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes] {../brainsout/r3_interpolated}} 
\vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_error}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_bootstrap}} \end{centering} \caption{Radially retained sampling --- lower slice (b)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_original}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_recon}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_corrected}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_overlaid}} \end{centering} \caption{Radially retained sampling --- upper slice (a)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_saturated}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes] {../brainsout/r10_interpolated}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_error}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_bootstrap}} \end{centering} \caption{Radially retained sampling --- upper slice (b)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_original}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_recon}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_corrected}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_overlaid}} \end{centering} \caption{Horizontally retained sampling --- lower slice (a)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_saturated}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes] {../brainsout/h3_interpolated}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_error}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- lower slice (b)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_original}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_recon}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_corrected}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_overlaid}} \end{centering} \caption{Horizontally retained sampling --- upper slice (a)} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_saturated}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes] {../brainsout/h10_interpolated}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_error}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- upper slice (b)} \label{last_viz} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_bootstrap}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_blurred}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_bootstrap}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_blurred}} \end{centering} \caption{Radially retained sampling --- upper plots display 
the upper slice; lower plots display the lower} \label{blurredr} \end{figure} \begin{figure} \begin{centering} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_bootstrap}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_blurred}} \vspace{\vertsep} \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_bootstrap}} \hfill \parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_blurred}} \end{centering} \caption{Horizontally retained sampling --- upper plots display the upper slice; lower plots display the lower} \label{blurredh} \end{figure} \begin{table} \caption{Square roots of the sums of the squares of the error estimates} \label{blurring} \begin{centering} \vspace{.125in} \begin{tabular}{llll} \hline Sampling & Slice & Bootstrap & Blurred Bootstrap \\\hline horizontally & lower & 12.9 & 6.25 \\ horizontally & upper & 13.8 & 7.34 \\ radially & lower & 17.5 & 10.5 \\ radially & upper & 18.0 & 11.6 \\\hline \end{tabular} \end{centering} \end{table} \begin{table} \vspace{.125in} \caption{Square roots of the sums of the squares of the error estimates for the lower slice blurred against a Gaussian convolutional kernel of the specified standard deviation (the standard deviation is in pixels), for sampling retained horizontally or radially} \label{blurs3} \begin{centering} \vspace{.125in} \begin{tabular}{lll} \hline Std.\ Dev. & Horizontally & Radially \\\hline 0.0 & 12.9 & 17.5 \\ 0.5 & 9.94 & 14.6 \\ 1.0 & 6.25 & 10.5 \\ 1.5 & 4.38 & 8.06 \\ 2.0 & 3.03 & 6.34 \\ 2.5 & 2.04 & 5.06 \\ 3.0 & 1.33 & 4.09 \\ 3.5 & .847 & 3.34 \\ 4.0 & .535 & 2.75 \\\hline \end{tabular} \end{centering} \end{table} \begin{table} \vspace{.125in} \caption{Square roots of the sums of the squares of the error estimates for the upper slice blurred against a Gaussian convolutional kernel of the specified standard deviation (the standard deviation is in pixels), for sampling retained horizontally or radially} \label{blurs10} \begin{centering} \vspace{.125in} \begin{tabular}{lll} \hline Std.\ Dev. & Horizontally & Radially \\\hline 0.0 & 13.8 & 18.0 \\ 0.5 & 10.9 & 15.3 \\ 1.0 & 7.34 & 11.6 \\ 1.5 & 5.35 & 9.50 \\ 2.0 & 3.82 & 7.97 \\ 2.5 & 2.63 & 6.79 \\ 3.0 & 1.75 & 5.87 \\ 3.5 & 1.14 & 5.13 \\ 4.0 & .745 & 4.54 \\\hline \end{tabular} \end{centering} \end{table} %% -- Bibliography ------------------------------------------------------------- %% - References need to be provided in a .bib BibTeX database. %% - All references should be made with \cite, \citet, \citep, \citealp etc. %% (and never hard-coded). See the FAQ for details. %% - JSS-specific markup (\proglang, \pkg, \code) should be used in the .bib. %% - Titles in the .bib should be in title case. %% - DOIs should be included where available. \clearpage \bibliography{fbooja} %% -- Appendix (if any) -------------------------------------------------------- %% - After the bibliography with page break. %% - With proper section titles and _not_ just "Appendix". \newpage \begin{appendix} \section{Supplementary jackknives and bootstraps} \label{suppfigs} This appendix supplements Figures~\ref{bigradial}--\ref{bighorizontal} of Section~\ref{results} with analogous figures for twenty cross-sectional slices through the head of the patient (Figures~\ref{bigradial}--\ref{bighorizontal} are for slices 3 and 10). 
\begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 1} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 2} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 3} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 4} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 5} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6_bootstrap}} \end{centering} \caption{Radially retained 
sampling --- slice 6} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 7} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 8} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 9} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 10} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 11} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12_error}} 
\parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 12} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 13} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 14} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 15} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 16} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 17} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18_recon}} 
\vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 18} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 19} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20_bootstrap}} \end{centering} \caption{Radially retained sampling --- slice 20} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 1} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 2} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 3} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4_original}} 
\parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 4} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 5} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 6} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 7} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 8} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 9} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10j_jackknife}} \vspace{\vertsep} 
\parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 10} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 11} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 12} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 13} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 14} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 15} \end{figure} \begin{figure} \begin{centering} 
\parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 16} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 17} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 18} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 19} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20j_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20j_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20_bootstrap}} \end{centering} \caption{Horizontally retained sampling --- slice 20} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r1x2_bootstrap}} 
\end{centering} \caption{2$\times$ radially retained sampling --- slice 1} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r2x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 2} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r3x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 3} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r4x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 4} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r5x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 5} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r6x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 6} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7x2_recon}} 
\vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r7x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 7} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r8x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 8} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r9x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 9} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r10x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 10} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r11jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r11x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 11} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r12jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r12x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 12} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] 
{../brainsout/r13jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r13x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 13} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r14jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r14x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 14} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r15jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r15x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 15} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r16jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r16x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 16} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r17jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r17x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 17} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r18jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r18x2_bootstrap}} \end{centering} \caption{2$\times$ 
radially retained sampling --- slice 18} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r19jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r19x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 19} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/r20jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/r20x2_bootstrap}} \end{centering} \caption{2$\times$ radially retained sampling --- slice 20} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h1x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 1} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h2x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 2} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h3x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 3} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4x2_recon}} 
\vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h4x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 4} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h5x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 5} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h6x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 6} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h7x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 7} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h8x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 8} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h9x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 9} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] 
{../brainsout/h10jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h10x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 10} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h11jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h11x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 11} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h12jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h12x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 12} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h13jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h13x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 13} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h14jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h14x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 14} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h15jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h15x2_bootstrap}} 
\end{centering} \caption{2$\times$ horizontally retained sampling --- slice 15} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h16jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h16x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 16} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h17jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h17x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 17} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h18jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h18x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 18} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h19jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h19x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 19} \end{figure} \begin{figure} \begin{centering} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20jx2_error}} \parbox{\imsize}{\includegraphics[width=\imsize] {../brainsout/h20jx2_jackknife}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20x2_original}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20x2_recon}} \vspace{\vertsep} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20x2_error}} \parbox{\imsize}{\includegraphics[width=\imsize]{../brainsout/h20x2_bootstrap}} \end{centering} \caption{2$\times$ horizontally retained sampling --- slice 20} \end{figure} \clearpage \section{Blurred errors over a threshold overlaid} \label{blurredover} For reference, this appendix displays the same errors over a threshold overlaid over the reconstruction as in Subsection~\ref{viz}, together with the blurred errors over a threshold overlaid over the reconstruction 
(blurring with a Gaussian convolutional kernel whose standard deviation is one pixel, as in Subsection~\ref{sum}).
The labeling conventions (``lower,'' ``upper,'' etc.) conform to those introduced in Section~\ref{results}.
The blurred errors introduce less distracting noise than the unblurred errors, yet the colors still appear quite distracting.
\begin{figure}
\begin{centering}
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_overlaid}}
\hfill
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r10_blurred_overlaid}}
\vspace{\vertsep}
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_overlaid}}
\hfill
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/r3_blurred_overlaid}}
\end{centering}
\caption{Radially retained sampling (b) --- upper plots show the upper slice; lower plots show the lower}
\end{figure}
\begin{figure}
\begin{centering}
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_overlaid}}
\hfill
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h10_blurred_overlaid}}
\vspace{\vertsep}
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_overlaid}}
\hfill
\parbox{\imsizes}{\includegraphics[width=\imsizes]{../brainsout/h3_blurred_overlaid}}
\end{centering}
\caption{Horizontally retained sampling (b) --- upper plots show the upper slice; lower plots show the lower}
\end{figure}
\end{appendix}
\newpage
\end{document}
{ "alphanum_fraction": 0.7572865794, "avg_line_length": 34.7050847458, "ext": "tex", "hexsha": "b61cda9495bdffaf9bc333d9ac4a8b0774be287a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c0a2293f39f3af37cd610eade80fbd61c531337f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vishalbelsare/fbooja", "max_forks_repo_path": "tex/fbooja.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c0a2293f39f3af37cd610eade80fbd61c531337f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vishalbelsare/fbooja", "max_issues_repo_path": "tex/fbooja.tex", "max_line_length": 87, "max_stars_count": null, "max_stars_repo_head_hexsha": "c0a2293f39f3af37cd610eade80fbd61c531337f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vishalbelsare/fbooja", "max_stars_repo_path": "tex/fbooja.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 28647, "size": 102380 }
\chapter{Formalism}
\label{chap:cpv:theory}

The measurement of \ARaw, as defined in \cref{eqn:cpv:introduction:araw}, can be contaminated by production and detection asymmetries.
By reconstructing \LcTopKK\ and \LcToppipi\ decays from \decay{\PLambdab}{\PLambdac\Pmuon X}\ decays, \ARaw\ can include the effects of the \PLambdab/\APLambdab\ production asymmetry, the \Pmuon/\APmuon\ detection asymmetry, and the detection asymmetry of the \PLambdac\ final state $f$/$\bar{f}$.
The \PLambdab\ production asymmetry is predicted to be non-zero due to the \pp\ collision environment~\cite{PhysRevD.90.014023}, and indeed \lhcb\ has found evidence of such an asymmetry as a function of \PLambdab\ rapidity~\cite{Aaij:2015fea}.
The two detection asymmetries are also expected to be non-zero, due to the known differences between the hadron/anti-hadron and muon/anti-muon cross-sections with matter~\cite{PDG2014}.
The final state detection asymmetry can be broken down into two terms: one due to the detection asymmetry of the proton; and another due to the detection asymmetry of the \KmKp\ pair and the \pimpip\ pair.
If the meson kinematics are equal within a given mode, for example if the \PKminus\ kinematics are identical to those of the \PKplus\ in the \pKK\ data, then the final state asymmetry reduces to the proton detection asymmetry.
Both the \PLambdab\ production and the muon detection asymmetries should be independent of the \PLambdac\ final state, given the same \PLambdac\ kinematics.
Different acceptance efficiencies between \pKK\ and \ppipi\ can create unequal \PLambdac\ kinematics, but the kinematics can be weighted to equalise them between the modes.
It will now be shown how the background asymmetries entering \ARaw\ can be removed in the difference \dACP\@.

The number of reconstructed \LcTof\ decays can be expressed as the product of several effective probabilities
\begin{equation}
N(f) = \prob(\PLambdab)\cdot \Gamma(\LbToLcmuX)\cdot \Gamma(\LcTof)\cdot \eff(\Pmuon)\cdot \eff(f),
\label{eqn:cpv:theory:yield}
\end{equation}
where $\prob(\PLambdab)$ is the probability of producing a $\PLambdab$ baryon given a \pp\ collision, and $\eff(\Pmuon)$ and $\eff(f)$ are the muon and $\PLambdac$ final-state detection efficiencies.
A similar expression exists for $N(\bar{f})$, where all particles are replaced by their charge conjugates.
It is assumed that the \LbToLcmuX\ decay is \CP-symmetric, but that all other factors may not be, for example $\eff(\Pmuon) \neq \eff(\APmuon)$.

To express \ARaw\ in terms of its component asymmetries, the notation of a general asymmetry parameter $X$ is defined, which describes the asymmetry between two quantities $x$ and $\bar{x}$
\begin{equation}
X = \frac{x - \bar{x}}{x + \bar{x}}.
\label{eqn:cpv:theory:generic_asym}
\end{equation}
This can be rearranged as
\begin{align}
x &= \frac{1}{2}(x + \bar{x})(1 + X),\ \text{and}\label{eqn:cpv:theory:asym_form_one}\\
\bar{x} &= \frac{1}{2}(x + \bar{x})(1 - X).\label{eqn:cpv:theory:asym_form_two}
\end{align}
An asymmetry parameter like that in \cref{eqn:cpv:theory:generic_asym} can be defined for each of the relevant terms in \cref{eqn:cpv:theory:yield}
\begin{align*}
\APLb(f) &= \frac{%
\prob(\PLambdab) - \prob(\APLambdab)
}{%
\prob(\PLambdab) + \prob(\APLambdab)
},\\
\ADmu(f) &= \frac{%
\eff(\Pmuon) - \eff(\APmuon)
}{%
\eff(\Pmuon) + \eff(\APmuon)
},\ \text{and}\\
\ADf(f) &= \frac{%
\eff(f) - \eff(\bar{f})
}{%
\eff(f) + \eff(\bar{f})
},
\end{align*}
and the asymmetry in $\Gamma(\LcTof)$ is \ACP\ as in \cref{eqn:cpv:introduction:acp}.
Each parameter is, at least implicitly, dependent on the detected \PLambdac\ final state, as each parameter can vary in quantities that may also vary between \PLambdac\ decay modes.

Substituting \cref{eqn:cpv:theory:yield} into \cref{eqn:cpv:introduction:araw} gives
\begin{equation*}
\ARaw(f) = \frac{%
\prob(\PLambdab)\Gamma(f)\eff(\Pmuon)\eff(f) -
\prob(\APLambdab)\Gamma(\bar{f})\eff(\APmuon)\eff(\bar{f})
}{%
\prob(\PLambdab)\Gamma(f)\eff(\Pmuon)\eff(f) +
\prob(\APLambdab)\Gamma(\bar{f})\eff(\APmuon)\eff(\bar{f})
}.
\end{equation*}
Substituting each quantity for its equivalent form as in \cref{eqn:cpv:theory:asym_form_one,eqn:cpv:theory:asym_form_two}, all factors of $\sfrac{1}{2}$ and all factors of the form $(x + \bar{x})$ cancel, leaving
\begin{equation}
\ARaw(f) = \frac{Y}{Z},
\end{equation}
where (dropping the final state parameter temporarily for compactness)
\begin{align}
Y = \APLb\ADmu\ADf &+ \APLb\ADmu\ACP + \APLb\ADf\ACP + \ADmu\ADf\ACP \nonumber\\
&+ \APLb + \ADmu + \ADf + \ACP,
\end{align}
and
\begin{align}
Z = 1 &+ \APLb\ADmu + \APLb\ADf + \APLb\ACP + \ADmu\ADf + \ADmu\ACP \nonumber\\
&+ \ADf\ACP + \APLb\ADmu\ADf\ACP.
\end{align}
Assuming that the individual asymmetries are small, of the order of \SI{1}{\percent}, the product of two or more asymmetries is negligible with respect to the leading order, and so
\begin{equation}
\ARaw(f) \approx \ACP(f) + \APLb(f) + \ADmu(f) + \ADf(f).
\label{eqn:cpv:theory:araw_approx}
\end{equation}
By assuming that these background asymmetries are mode-independent, that is to say $\AD(f) = \AD(g)$ and $\AP(f) = \AP(g)$, the difference between \ARaw\ for the \pKK\ and \ppipi\ final states will only have contributions from \ACP
\begin{align}
\dACP &= \ARaw(\pKK) - \ARaw(\ppipi),\label{eqn:cpv:theory:dacp}\\
&\approx \ACP(\pKK) - \ACP(\ppipi)\nonumber.
\end{align}

The assumption that the production and detection asymmetries in \cref{eqn:cpv:theory:araw_approx} are mode-independent is not true in general.
This can be seen by first making a weaker assumption that the background asymmetries are dependent only on the kinematics of the representative particles.
Different final states will in general have different acceptance, reconstruction and selection efficiencies as a function of \PLambdac\ kinematics.
The kinematics of the \PLambdac\ are correlated with those of the muon and the \PLambdab, and so two samples with different \PLambdac\ kinematics will likely also have different \PLambdab\ and muon kinematics.
Hence, there can still be net production and detection asymmetries in \dACP\@.

The assumption that the background asymmetries depend only on particle kinematics is not unreasonable.
The production asymmetry, for example, is a difference in cross-sections, which are usually parameterised by the kinematics of the produced particle (\pT\ and either \Eta\ or rapidity).
Similarly, a detection asymmetry describes the differences of material interactions between matter and antimatter, and these are dependent on the momentum of the particle in question and, assuming a non-uniform material distribution, its flight path.
As it is not given that the \PLambdac\ kinematics are the same between \LcTopKK\ and \LcToppipi\ decays in the data, the data can be weighted to equalise the \PLambdac\ kinematics.
The background asymmetries \AP\ and \ADmu\ will then cancel in the difference \dACP\@.
The proton kinematics will not necessarily agree after such a weighting, as the energy release in the \pKK\ and \ppipi\ decays is different.
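To make the kinematic weighting above concrete, the snippet below sketches one simple way such weights could be built, using a one-dimensional histogram ratio; it is an illustrative example only (the function name, variable names and binning are ours, and a real weighting would use several \PLambdac\ kinematic variables at once rather than a single one).
\begin{verbatim}
import numpy as np

# Toy sketch of histogram-ratio weights: reweight the source sample
# (e.g. pKK candidates) so that the distribution of one kinematic
# variable matches that of the target sample (e.g. ppipi candidates).
def kinematic_weights(x_target, x_source, bins=20):
    edges = np.histogram_bin_edges(
        np.concatenate([x_target, x_source]), bins=bins)
    h_target, _ = np.histogram(x_target, bins=edges, density=True)
    h_source, _ = np.histogram(x_source, bins=edges, density=True)
    # bin-by-bin ratio, guarding against empty source bins
    ratio = np.divide(h_target, h_source,
                      out=np.zeros_like(h_target), where=h_source > 0)
    # per-candidate weight: the ratio of the bin each candidate falls in
    idx = np.clip(np.digitize(x_source, edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]
\end{verbatim}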
These considerations govern the analysis strategy: measure the number of \PLambdac\ candidates in the \pKK\ and \ppipi\ samples after weighting them such that the \PLambdac\ kinematics look alike, such that the \PLambdab\ and \Pmuon kinematics also agree. Additional weighting may be required to equalise the proton kinematics. \section{Decay phase space} \label{chap:cpv:theory:phsp} The phase space of a decay is the set of variables which fully parameterises all possible dynamics. Although a three-body decay requires 12 parameters to describe the kinematics, under the assumptions of four-momentum conservation and that the masses of the three decay products are known, the number of free parameters is only five. When all particles involved have zero spin, the distribution of the momenta of the decay products is isotropic, such that there are only 2 degrees of freedom. These are usually taken to be two child-pair squared masses, which can be visualised as a Dalitz plot. In the case of \LcTophh, the spin \sfrac{1}{2} of the proton means that the three-body system is no longer rotationally symmetric, and the full five degrees of freedom are required to describe the phase space. For a meaningful comparison of the measurement of \dACP\ with theoretical predictions, which are not presently available, the efficiency of the \PLambdac\ selection across the five-dimensional phase space must be known. The definition of ``selection'' includes the effects of the \lhcb\ acceptance and the trigger, stripping, and offline requirements. The efficiency model can either be provided to theorists so that they can apply the same efficiencies to their phase space models, or it can be used to correct the data before the \dACP\ measurement is made. The five dimensions of the phase space are defined here in a similar way to the \LcTopKpi\ amplitude analysis performed by the \esno\ collaboration~\cite{Aitala:1999uq}. This defines two child-pair squared masses, of the proton and oppositely-charged child \msqphm\ and of the two opposite-sign pseudo-scalars \msqhh, and three decay angles. The child-pair squared masses are invariant under Lorentz transformations, but the decay angles are not and so a definition of the frame in which they are computed is required. The \esno\ analysis defines a coordinate system in the \PLambdac\ rest frame. The $z$-axis, also called the quantisation axis or polarisation axis \polzlcp, is perpendicular to the plane of production \begin{equation} z = \polzlcp = \phatbeam \times \phatlcp, \end{equation} where \phatbeam\ is the direction of the beam\footnotemark\ and \phatlcp\ is the direction of the \PLambdac\ measured in the laboratory frame. \footnotetext{% E791 was a fixed-target experiment, colliding a \SI{500}{\GeVc} pion beam with metal foils. } The $x$-axis of the \PLambdac\ rest frame is \phatlcp. As the \PLambdac\ candidates in this analysis are not produced directly from the \pp\ collision but in the decays of \PLambdab\ baryons, here the `beam direction' is defined as the direction of the \PLambdab\ momentum vector, which is equal to the direction vector pointing from the \pp\ primary vertex $v_{\pp}$ to the $\PLambdac\Pmuon$ vertex $v_{\PLambdac\Pmuon}$ \begin{equation} \phatbeam = \phatlbz = v_{\PLambdac\Pmuon} - v_{\pp}. 
\end{equation}
With this coordinate system and inertial frame, the three decay angles are defined as:
\begin{enumerate}
\item The angle \thetap\ between the proton momentum vector and the $z$-axis;
\item The angle \phip\ between the proton momentum vector and the $x$-axis; and
\item The angle \phihh\ between the plane containing the proton momentum vector and the $z$-axis and the plane containing the two pseudo-scalar meson momentum vectors.
\end{enumerate}
These definitions are illustrated in \cref{fig:cpv:theory:phsp:angles}.
The distributions of the five phase space variables will be presented in \cref{chap:cpv:phsp}, along with the evaluation of the efficiency as a function of the position in phase space.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{%
\input{figures/cpv/theory/phase_space_angles}
}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\resizebox{\textwidth}{!}{%
\input{figures/cpv/theory/phase_space_planes}
}
\end{subfigure}
\caption{%
Definition of inertial reference frame axes and \LcTophh\ phase space decay angles.
Adapted from Figure~1 in the \esno\ \LcTopKpi\ amplitude analysis paper~\cite{Aitala:1999uq}.
}
\label{fig:cpv:theory:phsp:angles}
\end{figure}
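As a concrete illustration of these definitions, the following sketch (our own, not the analysis code) builds the axes defined above and computes \thetap\ and \phip, assuming the proton momentum has already been boosted into the \PLambdac\ rest frame and taking \phip\ as the azimuthal angle about the $z$-axis measured from the $x$-axis; \phihh\ would follow analogously from the normals of the two planes.
\begin{verbatim}
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def frame_axes(p_lb_lab, p_lc_lab):
    # x-axis: Lambda_c direction in the lab;
    # z-axis: normal to the production plane, built here from the
    # Lambda_b flight direction (PV to Lambda_c mu vertex)
    x_hat = unit(p_lc_lab)
    z_hat = unit(np.cross(unit(p_lb_lab), x_hat))
    y_hat = np.cross(z_hat, x_hat)  # completes a right-handed frame
    return x_hat, y_hat, z_hat

def proton_angles(p_proton_rest, axes):
    # p_proton_rest: proton momentum in the Lambda_c rest frame
    x_hat, y_hat, z_hat = axes
    p_hat = unit(p_proton_rest)
    theta_p = np.arccos(np.clip(np.dot(p_hat, z_hat), -1.0, 1.0))
    phi_p = np.arctan2(np.dot(p_hat, y_hat), np.dot(p_hat, x_hat))
    return theta_p, phi_p
\end{verbatim}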
{ "alphanum_fraction": 0.7473222569, "avg_line_length": 45.9573643411, "ext": "tex", "hexsha": "cc3ca2c964f35d21727d69c1bddd7b77dc4bb42c", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-01-06T23:42:27.000Z", "max_forks_repo_forks_event_min_datetime": "2019-05-13T07:54:57.000Z", "max_forks_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alexpearce/Thesis", "max_forks_repo_path": "chapters/cpv/theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alexpearce/Thesis", "max_issues_repo_path": "chapters/cpv/theory.tex", "max_line_length": 119, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d727d04b7ee619ba0eb45c7faf1004eb418e046e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alexpearce/Thesis", "max_stars_repo_path": "chapters/cpv/theory.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-18T00:58:34.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-18T00:58:34.000Z", "num_tokens": 3541, "size": 11857 }
\documentclass[a4paper, 12pt]{report}

%%%%%%%%%%%%
% Packages %
%%%%%%%%%%%%

\usepackage[english]{babel}
\usepackage{silence}
%Disable all warnings issued by latex starting with "You have..."
\WarningFilter{latex}{You have requested package}
\WarningFilter{latex}{Font shape}
\WarningFilter{latex}{Some font shapes were not available}
\usepackage[noheader]{../packages/sleek}
\usepackage{../packages/sleek-title}
\usepackage{../packages/sleek-theorems}
\usepackage{../packages/sleek-listings}
\usepackage{../packages/pythonhighlight}
\usepackage{graphicx}
% \usepackage{biblatex}

%%%%%%%%%%%%%%
% Title-page %
%%%%%%%%%%%%%%

\logo{../resources/png/mit_logo.png}
\institute{Massachusetts Institute of Technology}
\faculty{6.864 Advanced Natural Language Processing}
\department{Professors Jacob Andreas and Yoon Kim}
\title{Generating Graphics Shaders from English Descriptions}
\subtitle{Final Project Proposal}
\author{\textit{Authors}\\Richard Muri, Saaketh Vedantam, Justin Yao, Vivienne Zhang}
% \context{Well, I was bored...}
\date{\today}

%%%%%%%%%%%%%%%%
% Bibliography %
%%%%%%%%%%%%%%%%

% \addbibresource{../resources/bib/references.bib}
\addbibresource{../references.bib}

%%%%%%%%%%
% Others %
%%%%%%%%%%

\FrameTBStyle{latex}
\def\tbs{\textbackslash}

%%%%%%%%%%%%
% Document %
%%%%%%%%%%%%

\begin{document}
\maketitle
\romantableofcontents
%\section*{Acknowledgement}
\input{content.tex}
\printbibliography
\end{document}
{ "alphanum_fraction": 0.696342305, "avg_line_length": 21.3088235294, "ext": "tex", "hexsha": "801d941fac1bd1ff7b047306687eec0889de2172", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "83d3118f2fa32e40af70ae9fdd35f99baac71de7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RichardMuri/ShaderGenerator", "max_forks_repo_path": "reports/proposal/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83d3118f2fa32e40af70ae9fdd35f99baac71de7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RichardMuri/ShaderGenerator", "max_issues_repo_path": "reports/proposal/main.tex", "max_line_length": 85, "max_stars_count": 1, "max_stars_repo_head_hexsha": "83d3118f2fa32e40af70ae9fdd35f99baac71de7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RichardMuri/ShaderGenerator", "max_stars_repo_path": "reports/proposal/main.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-15T02:34:28.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-15T02:34:28.000Z", "num_tokens": 379, "size": 1449 }
\documentclass{clazz} \usepackage{pkg2} \usepackage{pkg1} \usepackage{somepkg} \usepackage[op1,op2]{anotherpkg} \title{Title} \author{Author} \date{Date} \begin{document} \maketitle \begin{abstract} abstract text \end{abstract} \tableofcontents \begin{someEnv}{ba} ABC\newlineDEF Some text \newlinein raw file \c[opt]{arg} \end{someEnv} \section{Section} section text \paragraph*{Paragraph} paragraph text \end{document}
{ "alphanum_fraction": 0.7785714286, "avg_line_length": 17.5, "ext": "tex", "hexsha": "a350d7e1904af34a808775d77d27947bd1c46bd1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "08930a1ac82323a050fbb1791bf9e55f12514fe6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CanIHas/LaTeX", "max_forks_repo_path": "src/test/resources/expectedForModel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "08930a1ac82323a050fbb1791bf9e55f12514fe6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CanIHas/LaTeX", "max_issues_repo_path": "src/test/resources/expectedForModel.tex", "max_line_length": 32, "max_stars_count": null, "max_stars_repo_head_hexsha": "08930a1ac82323a050fbb1791bf9e55f12514fe6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CanIHas/LaTeX", "max_stars_repo_path": "src/test/resources/expectedForModel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 137, "size": 420 }
\documentclass[author-year, review, 11pt]{components/elsarticle} %review=doublespace preprint=single 5p=2 column %%% Begin My package additions %%%%%%%%%%%%%%%%%%% \usepackage[hyphens]{url} \usepackage{lineno} % add \linenumbers % turns line numbering on \bibliographystyle{elsarticle-harv} \biboptions{sort&compress} % For natbib \usepackage{graphicx} \usepackage{booktabs} % book-quality tables %% Redefines the elsarticle footer \makeatletter \def\ps@pprintTitle{% \let\@oddhead\@empty \let\@evenhead\@empty \def\@oddfoot{\it \hfill\today}% \let\@evenfoot\@oddfoot} \makeatother % A modified page layout \textwidth 6.75in \oddsidemargin -0.15in \evensidemargin -0.15in \textheight 9in \topmargin -0.5in %%%%%%%%%%%%%%%% end my additions to header \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[utf8]{inputenc} \else % if luatex or xelatex \usepackage{fontspec} \ifxetex \usepackage{xltxtra,xunicode} \fi \defaultfontfeatures{Mapping=tex-text,Scale=MatchLowercase} \newcommand{\euro}{€} \fi % use microtype if available \IfFileExists{microtype.sty}{\usepackage{microtype}}{} \ifxetex \usepackage[setpagesize=false, % page size defined by xetex unicode=false, % unicode breaks when used with xetex xetex]{hyperref} \else \usepackage[unicode=true]{hyperref} \fi \hypersetup{breaklinks=true, bookmarks=true, pdfauthor={}, pdftitle={R clients for working with scholarly metadata and full text}, colorlinks=true, urlcolor=blue, linkcolor=magenta, pdfborder={0 0 0}} \urlstyle{same} % don't use monospace font for urls \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} \setlength{\emergencystretch}{3em} % prevent overfull lines \setcounter{secnumdepth}{0} % Pandoc toggle for numbering sections (defaults to be off) \setcounter{secnumdepth}{0} % Pandoc header \begin{document} \begin{frontmatter} \title{R clients for working with scholarly metadata and full text} \author[cstar]{Scott Chamberlain\corref{c1}} \ead{scott(at)ropensci.org} \cortext[c1]{Corresponding author} \address[cstar]{University of California, Berkeley, CA, USA} \begin{abstract} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item xxx \item xxx \item xxx \item xxxx \end{enumerate} \end{abstract} \end{frontmatter} \section{Introduction}\label{introduction} xxxx \section{The packages}\label{the-packages} xxx \subsection{Package interface}\label{package-interface} xxx \subsubsection{fulltext}\label{fulltext} \subsubsection{rentrez}\label{rentrez} \subsubsection{rplos}\label{rplos} \subsubsection{rcrossref}\label{rcrossref} \section{Conclusions and future directions}\label{conclusions-and-future-directions} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item pt 1 \item pt 2 \item pt 3 \item pt 4 \end{itemize} \section{Acknowledgements}\label{acknowledgements} This project was supported in part by the Alfred P Sloan Foundation (Grant 2013-6-22). \section{Data Accessibility}\label{data-accessibility} All scripts and data used in this paper can be found in the permanent data archive Zenodo under the digital object identifier (DOI). This DOI corresponds to a snapshot of the GitHub repository at \href{https://github.com/sckott/mstext}{github.com/sckott/mstext}. 
Software can be found at \href{https://github.com/ropensci/fulltext}{github.com/ropensci/fulltext}, under the open and permissive MIT license. \section*{References}\label{references} \addcontentsline{toc}{section}{References} \end{document}
{ "alphanum_fraction": 0.7329617431, "avg_line_length": 25.6298701299, "ext": "tex", "hexsha": "d219c98476a06b2e53b85737663df8bdaa234ebf", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7edb909d25c56dc30176630b7a1bcef82b52682d", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "sckott/mstext", "max_forks_repo_path": "manuscript.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7edb909d25c56dc30176630b7a1bcef82b52682d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "sckott/mstext", "max_issues_repo_path": "manuscript.tex", "max_line_length": 112, "max_stars_count": 2, "max_stars_repo_head_hexsha": "7edb909d25c56dc30176630b7a1bcef82b52682d", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "sckott/mstext", "max_stars_repo_path": "manuscript.tex", "max_stars_repo_stars_event_max_datetime": "2015-11-14T23:43:45.000Z", "max_stars_repo_stars_event_min_datetime": "2015-11-14T19:05:32.000Z", "num_tokens": 1218, "size": 3947 }
\documentclass{report}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{float}
\usepackage{listings}
\begin{document}
\section{LinearRegression without regularization}
\subsection{Abstract}
Hello, everyone. I'm btobab.\\
When I started learning NLP, I found that I could not follow the formula derivations at all, so I decided to learn the theoretical derivations of machine learning from the beginning.\\
This chapter is mainly divided into two parts:
\begin{itemize}
\item[] The first part derives the closed-form solution of linear regression (the least squares method) from three perspectives (matrix, geometry and probability) and provides reference code.
\item[] The second part derives the closed-form solution of the least squares method with regularization from two perspectives (matrix and probability), constructs a complete linear regression class, and implements both the closed-form method and the gradient descent method in code.
\end{itemize}
\subsection{Introduction}
A linear regression model uses a linear function to fit the relationship between one or more independent variables and the dependent variable ($y$).\\
The target variable ($y$) is a continuous numerical quantity, such as housing price, population or rainfall. A regression model looks for a mapping function between the input variables and the output variable.\\
Learning a regression task amounts to function fitting: finding a curve that fits the known data well and predicts unknown data.\\
A regression task consists of two stages: model learning and prediction. A model is constructed from the given training dataset, and the corresponding output is then predicted for new input data.
\subsection{Algorithm}
\subsubsection{Matrix perspective}
Note: in general, the vectors discussed here are column vectors. Therefore, to keep the matrix shapes consistent throughout the derivation, transposes are used frequently.\\
Given the dataset $\mathcal{D}=\{(x_1,y_1),(x_2,y_2)...(x_n, y_n)\}$,\\
in which $x_i\in \mathcal{R}^p, y_i\in \mathcal{R}, i=1,2,...,n$,
$$
X=(x_1,x_2,...,x_n)^T=\begin{pmatrix}
x_{11}&x_{12}&...&x_{1p}\\
x_{21}&x_{22}&...&x_{2p}\\
\vdots&\vdots&\ddots&\vdots\\
x_{n1}&x_{n2}&...&x_{np}\\
\end{pmatrix}_{np}
$$
$$
Y=(y_1,y_2,...,y_n)_{n1}^T
$$
The model we construct is $f(w)=w^Tx+w_0 x_0$.\\
Generally we set $x_0=1$ and $b=w_0 x_0$, where $b$ is the bias and $w$ is the weight. Below, for convenience of derivation, we merge $w_0$ into $w$ and $x_0$ into $x$,\\
so the model becomes $f(w)=w^Tx$.
The loss function of the least squares method is:
$$
L(w)=\sum_{i=1}^{n}\|y_i-w^T x_i\|_2^2
$$
$$
=\begin{pmatrix}
y_1-w^Tx_1&y_2-w^Tx_2&...&y_n-w^Tx_n
\end{pmatrix}
\begin{pmatrix}
y_1-w^Tx_1\\
y_2-w^Tx_2\\
\vdots\\
y_n-w^Tx_n\\
\end{pmatrix}
$$
$$
=(Y^T-w^TX^T)(Y^T-w^TX^T)^T
$$
$$
=(Y^T-w^TX^T)(Y-Xw)
$$
$$
=Y^TY-w^TX^TY-Y^TXw+w^TX^TXw
$$
Note that the second and third terms are transposes of each other; observing the matrix shapes, $(1,p)(p,n)(n,1)=(1,1)$,\\
these two terms are scalars, and since the transpose of a scalar is itself, the two can be combined to get:
$$
L(w)=Y^TY-2w^TX^TY+w^TX^TXw
$$
so $\hat{w}=argmin(L(w))$.
Below, to find the minimum of $L(w)$, we need to differentiate $L(w)$.\\
Note that there are three terms in the formula. The first term has nothing to do with $w$ and can be removed.
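Before differentiating, it is worth sanity-checking the matrix form of $L(w)$ derived above. The snippet below is only an illustrative sketch of our own (the random data and shapes are made up for the check); it confirms numerically that the matrix expression equals the sum of squared residuals.
\begin{lstlisting}[language={python}]
import numpy as np

# Numerical sanity check of L(w) = Y'Y - 2 w'X'Y + w'X'Xw
# against the elementwise definition sum_i (y_i - w'x_i)^2.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, 1))
w = rng.normal(size=(p, 1))

loss_sum = np.sum((Y - X @ w) ** 2)
loss_mat = (Y.T @ Y - 2 * w.T @ X.T @ Y + w.T @ X.T @ X @ w).item()

print(np.isclose(loss_sum, loss_mat))  # True
\end{lstlisting}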
Then the remaining two terms involve matrix derivatives.\\\\
Regarding matrix derivatives, the author recommends three articles by a blogger (more detailed and rigorous than a textbook; every formula comes with a proof):
\begin{itemize}
\item \href{https://zhuanlan.zhihu.com/p/263777564}{essence}
\item \href{https://zhuanlan.zhihu.com/p/273729929}{basics}
\item \href{https://zhuanlan.zhihu.com/p/288541909}{advanced}
\end{itemize}
The following is the derivation of the two terms above.\\\\
Because $X,Y$ are constant matrices, the derivative of the second term can be obtained directly. However, since we differentiate with respect to $w$, the result must be transposed.
$$
\frac{d(2w^TX^TY)}{dw}=2X^TY
$$
Below, let's solve the third term.
$$
d(w^TX^TXw)=tr\left(d(w^TX^TXw)\right)=tr\left(d(w^T)X^TXw+w^TX^TX\,d(w)\right)
$$
$$
=tr\left((dw)^TX^TXw\right)+tr\left(w^TX^TX\,dw\right)=tr\left(w^TX^TX\,dw\right)+tr\left(w^TX^TX\,dw\right)=tr\left(2w^TX^TX\,dw\right)
$$
where we used that a scalar equals its own transpose and that $X^TX$ is symmetric.
so
$$
\frac{d(w^TX^TXw)}{dw}=2X^TXw
$$
so $\frac{dL(w)}{dw}=2X^TXw-2X^TY$\\\\
Set the derivative equal to 0 to get the closed-form solution of the least squares method:
$$
\hat{w}=(X^TX)^{-1}X^TY
$$
\subsubsection{Geometry perspective}
$$
X=(x_1,x_2,...,x_n)^T=\begin{pmatrix}
x_{11}&x_{12}&...&x_{1p}\\
x_{21}&x_{22}&...&x_{2p}\\
\vdots&\vdots&\ddots&\vdots\\
x_{n1}&x_{n2}&...&x_{np}\\
\end{pmatrix}_{n\times p}
$$

$$
Y=(y_1,y_2,...,y_n)^T_{n\times 1}
$$
From the geometric perspective, we regard $X$ as $p$ column vectors in $\mathbb{R}^n$: the first column of $X$ is $(x_{11},x_{21},...,x_{n1})^T$ and the $p$-th column of $X$ is $(x_{1p},x_{2p},...,x_{np})^T$. Here $Y$ is regarded as a single vector in $\mathbb{R}^n$.\\\\
Now we assume $p=2$ because it is easier to draw. The diagram is as follows (it really took me a long time to draw, so I humbly ask for a star).
\begin{figure}
\includegraphics[width=0.7 \textwidth]{/Users/btobab/TeX-Projects/figures/1}
\caption{figure 1}
\end{figure}

Change the model to $f(w)=Xw$, which means scaling the columns of $X$ with the weights $w$.\\\\
The geometric meaning of the least squares method is to find a $w$ such that the distance between the vector $Y$ and the column space of $X$ is smallest. The distance is smallest exactly when the residual $Y-Xw$ is perpendicular to the column space of $X$.\\\\
So we get the equation: $X^T(Y-Xw)=0$\\\\
and then the solution for $w$:
$$
X^TXw=X^TY
$$
$$
\hat{w}=(X^TX)^{-1}X^TY
$$
We can see that the solved $w$ is the same as the result of the matrix perspective.
\subsubsection{Probability perspective}
As we know, in reality it is hard to fit a distribution with a straight line exactly. Real data always contains some randomness, that is, noise.\\\\
So we assume noise $\epsilon\sim N(0,\sigma^2)$\\
so $y=f(w)+\epsilon=w^Tx+\epsilon$\\
so $y|x;w\sim N(w^Tx,\sigma^2)$\\
Substituting into the probability density function of the Gaussian distribution:
$$
p(y|x;w)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y-w^Tx)^2}{2\sigma^2}}
$$
Then we use $MLE$ (maximum likelihood estimation).\\\\
Note: the idea of MLE is that the relative frequency obtained from a large number of samples approximates the probability.\\\\
Let's define a function: $\zeta(w)=\log{p(Y|X;w)}$

Since the $n$ samples are independent, we can write the probability as a product:
$\zeta(w)=\log{\prod_{i=1}^np(y_i|x_i;w)}=\sum_{i=1}^n \log{p(y_i|x_i;w)}$

Substituting the probability density function of the Gaussian distribution into the formula:
$\zeta(w)=\sum_{i=1}^n\left(\log{\frac{1}{\sqrt{2\pi}\sigma}}-\frac{(y_i-w^Tx_i)^2}{2\sigma^2}\right)$

Since the first term does not depend on $w$, it can be ignored.
So:
$$
\hat{w}=argmax{\zeta(w)}
$$

$$
=argmax{
\sum_{i=1}^n -\frac{(y_i-w^Tx_i)^2}{2\sigma^2}}
$$

$$
=argmin{
\sum_{i=1}^n (y_i-w^Tx_i)^2}
$$
The conclusion obtained via maximum likelihood estimation is exactly the definition of the least squares method.\\\\
This also shows that the least squares method hides an assumption: the noise follows a Gaussian distribution.

\subsection{Implement}
\begin{lstlisting}[language={python}]
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# number of samples
n = 1000
# noise scale
epsilon = 1
X = np.expand_dims(np.linspace(0, 100, n), axis=-1)
w = np.asarray([5.2])
Y = X.dot(w)
# apply noise to X
X += np.random.normal(scale=epsilon, size=X.shape)
X_T = X.transpose()
# closed-form solution: w_hat = (X^T X)^{-1} X^T Y
w_hat = np.matmul(np.linalg.pinv(np.matmul(X_T, X)), np.matmul(X_T, Y))
print(w_hat)

plt.scatter(X, Y, s=3, c="y")
Y_hat = X.dot(w_hat)
plt.plot(X, Y_hat)
plt.show()
\end{lstlisting}
\newpage
\section{LinearRegression with regularization}
\subsection{Algorithm}
\subsubsection{Matrix perspective}
First, we are given a new loss function with regularization:
$$
\zeta(w)=\sum_{i=1}^{n}||y_i-w^T x_i||^2 + \lambda ||w||^2
$$
Then, reusing the matrix expansion of the loss function without regularization:
$$
\zeta(w)=Y^TY-2w^TX^TY+w^TX^TX w+\lambda ||w||^2
$$
so $\hat{w}=argmin(\zeta(w))$\\\\
Differentiate $\zeta(w)$:
$$
\frac{\partial \zeta(w)}{\partial w}=2X^TXw-2X^T Y+2\lambda w
$$
Set the derivative equal to 0 to get the closed-form solution of the least squares method with regularization:
$$
\hat{w}=(X^TX+\lambda I)^{-1} X^TY
$$
where $I$ is the identity matrix.
\subsubsection{Probability perspective}
Assume noise $\epsilon \sim N(0,\sigma_1^2)$ and prior $w \sim N(0,\sigma_2^2)$.

Since $y=w^T x + \epsilon$, we get $y|w \sim N(w^T x,\sigma_1^2)$
\\Next we use MAP (maximum a posteriori estimation). According to Bayes' theorem:
$$
P(w|Y)=\frac{P(Y|w) P(w)}{P(Y)}
$$
$P(w)$ is the prior probability, $P(Y|w)$ is the likelihood, and $P(Y)$ is the normalizing probability; the prior is multiplied by the likelihood and normalized to obtain the posterior probability $P(w|Y)$.

Since $P(Y)$ is actually a constant:
$$
\hat{w}=argmax(P(w|Y))=argmax(P(Y|w) P(w))=argmax(log(P(Y|w) P(w)))
$$
Since the samples are independent, we can write the likelihood as a product.
$$
=argmax(log(\prod_{i=1}^n P(y_i|w) \cdot P(w)))=argmax(\sum_{i=1}^n log(P(y_i|w))+ log(P(w)))
$$
Substituting the probability density function of the Gaussian distribution, we get:
$$
\hat{w}=argmax(\sum_{i=1}^n(log(\frac{1}{\sqrt{2\pi} \sigma_1})-\frac{(y_i-w^T x_i)^2}{2\sigma_1^2})+log(\frac{1}{\sqrt{2 \pi} \sigma_2})-\frac{||w||^2}{2\sigma_2^2})
$$
Since both $\sigma_1$ and $\sigma_2$ are hyperparameters, the constant terms can be dropped.
So:
$$
\hat{w}=argmin(\sum_{i=1}^n \frac{(y_i-w^T x_i)^2}{2\sigma_1^2}+\frac{||w||^2}{2\sigma_2^2})
$$
$$
=argmin(\sum_{i=1}^n (y_i-w^T x_i)^2+\frac{\sigma_1^2}{\sigma_2^2} ||w||^2)
$$
We can see that the result derived via $MAP$ is the definition of the least squares method with regularization.
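To make the closed-form solution above concrete before the class-based implementation in the next subsection, here is a minimal NumPy sketch that evaluates $\hat{w}=(X^TX+\lambda I)^{-1}X^TY$ directly (the toy data and variable names are illustrative only and not part of the reference implementation):
\begin{lstlisting}[language={python}]
import numpy as np

# toy data: y = 5.2 * x + Gaussian noise
n = 1000
X = np.expand_dims(np.linspace(0, 10, n), axis=-1)
Y = X.dot(np.asarray([5.2])) + np.random.normal(scale=1.0, size=n)

lambda_ = 10.0   # regularization strength
p = X.shape[1]
# closed-form ridge solution: w_hat = (X^T X + lambda * I)^{-1} X^T Y
w_hat = np.linalg.inv(X.T.dot(X) + lambda_ * np.eye(p)).dot(X.T).dot(Y)
print(w_hat)
\end{lstlisting}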
\subsection{Implement}
\begin{lstlisting}[language={python}]
import os
os.chdir("../")
from models.linear_models import LinearRegression
import numpy as np
import matplotlib.pyplot as plt

# build a design matrix with an explicit bias column of ones
X_ = np.expand_dims(np.linspace(0, 10, 1000), axis=-1)
X = np.c_[X_, np.ones(1000)]
w = np.asarray([5.2, 1])
Y = X.dot(w)
# append a few outlier points to show the effect of regularization
X = np.r_[X, np.asarray([[11, 1], [12, 1], [13, 1]])]
Y = np.r_[Y, np.asarray([100, 110, 120])]

# fit on the feature column only, dropping the explicit bias column
model = LinearRegression(l2_ratio=1e1, epoch_num=1000, lr=1e-2, batch_size=100, if_standard=False)
model.fit(X[:, :-1], Y)
print(model.get_params())
model.draw(X[:, :-1], Y)
\end{lstlisting}

\end{document}
{ "alphanum_fraction": 0.6966292135, "avg_line_length": 37.9133574007, "ext": "tex", "hexsha": "987bed1347f5790670fc1796fc31198c9d70ffce", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_forks_event_min_datetime": "2021-08-28T18:47:22.000Z", "max_forks_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "btobab/Machine-Learning-notes", "max_forks_repo_path": "EN-TeX_files/LinearRegression/05_linear_regression.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "btobab/Machine-Learning-notes", "max_issues_repo_path": "EN-TeX_files/LinearRegression/05_linear_regression.tex", "max_line_length": 305, "max_stars_count": 10, "max_stars_repo_head_hexsha": "bc064bd2fe3817444bb8850340ac1177f5fd71c4", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "btobab/Machine-Learning-notes", "max_stars_repo_path": "EN-TeX_files/LinearRegression/05_linear_regression.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-04T07:36:27.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-28T18:47:12.000Z", "num_tokens": 3527, "size": 10502 }
\documentclass{article}
\usepackage{tocloft}
\include{common_symbols_and_format}
\renewcommand{\cfttoctitlefont}{\Large\bfseries}
\begin{document}
\logo
\rulename{Relative Strength Index} %Argument is name of rule
\tblofcontents
\ruledescription{The Relative Strength Index (RSI), developed by J. Welles Wilder, is a momentum indicator that measures the speed and change of price movements. It is an extremely popular indicator used to signal overbought and oversold conditions. RSI is considered overbought when above 70 and oversold when below 30. RSI can also be used to identify the general trend of an asset and to spot bullish and bearish divergences.}

\ruleparameters %You can include however many arguments (in groups of 4) as you want!
{Overbought}{70}{Overbought Condition}{$O_b$}
{Oversold}{30}{Oversold Condition}{$O_s$}
{Time Length}{14 Days}{Time frame on which the RSI is calculated}{$L$}
\stoptable %must be included or Tex engine runs infinitely

\section{Equation}
Below are the equations which govern how this specific trading rule calculates a trading position.

\begin{equation}
RSI = 100-\frac{100}{(1+RS)}
\end{equation}
\\
with:

$RSI$ : the relative strength index at time $\currenttime$

$RS$ : the relative strength, calculated using the formula below.

\begin{equation}
RS = \frac{AvgU}{AvgD}
\end{equation}
\\
with:

$AvgU$ : average of all upward price movements in the last $L$ price bars

$AvgD$ : average of all downward price movements in the last $L$ price bars

$L$ : time length (number of price bars) over which the RSI is calculated

\keyterms
\furtherlinks %The footer
\end{document}
{ "alphanum_fraction": 0.7633540373, "avg_line_length": 34.2553191489, "ext": "tex", "hexsha": "e1edb0ba01ba6a1fae2fafbf5b0c3eed57d72632", "lang": "TeX", "max_forks_count": 28, "max_forks_repo_forks_event_max_datetime": "2021-11-10T18:21:14.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-26T14:26:04.000Z", "max_forks_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "parthgajjar4/infertrade", "max_forks_repo_path": "docs/strategies/tex/RelativeStrengthIndex.tex", "max_issues_count": 137, "max_issues_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_issues_repo_issues_event_max_datetime": "2022-01-28T19:36:30.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-25T10:59:46.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "parthgajjar4/infertrade", "max_issues_repo_path": "docs/strategies/tex/RelativeStrengthIndex.tex", "max_line_length": 391, "max_stars_count": 34, "max_stars_repo_head_hexsha": "2eebf2286f5cc669759de632970e4f8f8a40f232", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "parthgajjar4/infertrade", "max_stars_repo_path": "docs/strategies/tex/RelativeStrengthIndex.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-06T23:03:01.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-25T13:32:54.000Z", "num_tokens": 430, "size": 1610 }
\chapter{Appendix: LEMUR Results} \label{chap:appendix-utility} \begin{figure*} \begin{widepage} \begin{center} \includegraphics{build/lemur-sq/herb-master} \caption[Experimental results across six single-query motion planning instances for a 7-DOF robot arm. Top: expected planning cost $p$ (in seconds) vs. execution cost $x$ (in radians) for each parameterized planner for various values of their parameters. Bottom: the mean negative utility $-U = \lambda_U p + (1\!-\!\lambda_U) x$ (solid lines) measured for each planner (lower on plot is better) as the utility parameter $\lambda_U$ is varied. The 95\% confidence interval of the mean is also shown. Planners used the same parameter schedule across the problems as shown in Figure~\ref{fig:herbarm-schedules}. The per-problem maximum achievable mean utilities (\ref{eqn:oracle-param-schedule}) for each planner are shown as dotted lines. Planners are RRT-Connect with shortcutting, BIT*, LEMUR (no roadmap cache), and LEMUR (with roadmap cache). ][-30cm]{x} \label{fig:lemur:sq-herb-master} \end{center} \end{widepage} \vspace{0.1in} \smallskip\noindent\small Figure \ref{fig:lemur:sq-herb-master}: Experimental results across six single-query motion planning instances for a 7-DOF robot arm. Top: expected planning cost $p$ (in seconds) vs. execution cost $x$ (in radians) for each parameterized planner for various values of their parameters. Bottom: the mean negative utility $-U = \lambda_U p + (1\!-\!\lambda_U) x$ (solid lines) measured for each planner (lower on plot is better) as the utility parameter $\lambda_U$ is varied. The 95\% confidence interval of the mean is also shown. Planners used the same parameter schedule across the problems as shown in Figure~\ref{fig:herbarm-schedules}. The per-problem maximum achievable mean utilities (\ref{eqn:oracle-param-schedule}) for each planner are shown as dotted lines. Planners are \protect\tikz{\protect\node[fill=red,draw=black]{};}\;RRT-Connect with shortcutting, \protect\tikz{\protect\node[fill=green,draw=black]{};}\;BIT*, \protect\tikz{\protect\node[fill=blue,draw=black]{};}\;LEMUR (no roadmap cache), and \protect\tikz{\protect\node[fill=black!80,draw=black]{};}\;LEMUR (with roadmap cache). \end{figure*} \begin{figure*} \begin{widepage} \begin{center} \includegraphics{build/lemur-sq/herb-timing} \caption[Time breakdown of LEMUR for each of the six experimental problems across various values of $\lambda_p$. Cumulative time is shown performing the following operations: roadmap generation, graph search, collision checking, and unaccounted for. The bottom row shows the results of the algorithm over the same set of 50 roadmaps when reading the cached roadmap from disk to avoid online nearest-neighbor queries. ][-30cm]{x} \label{fig:lemur:sq-herb-timing} \end{center} \end{widepage} \vspace{0.1in} \smallskip\noindent\small Figure \ref{fig:lemur:sq-herb-timing}: Time breakdown of LEMUR for each of the six experimental problems across various values of $\lambda_p$. Cumulative time is shown performing the following operations: \protect\tikz{\protect\node[fill=red!20,draw=black]{};}\;roadmap generation, \protect\tikz{\protect\node[fill=blue!20,draw=black]{};}\;graph search, \protect\tikz{\protect\node[fill=green!20,draw=black]{};}\;collision checking, and \protect\tikz{\protect\node[fill=black!20,draw=black]{};}\;unaccounted for. The bottom row shows the results of the algorithm over the same set of 50 roadmaps when reading the cached roadmap from disk to avoid online nearest-neighbor queries. 
\end{figure*} \begin{figure*} \begin{widepage} \centering \includegraphics{build/lemur-sq/workcell-master} \end{widepage} \caption{Summary of $p$-vs-$x$ and utility results for each step in the workcell problem.} \label{fig:lemur:sq-workcell-master} \end{figure*} \begin{figure*} \begin{widepage} \centering \includegraphics{build/lemur-sq/workcell-timing} \end{widepage} \caption{Timing results for each step in the workcell problem.} \label{fig:lemur:sq-workcell-timing} \end{figure*}
{ "alphanum_fraction": 0.6944878375, "avg_line_length": 40.0089285714, "ext": "tex", "hexsha": "25f6337e8c751daa5696a4136c494df54554e998", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "siddhss5/phdthesis-dellin", "max_forks_repo_path": "thesis-ch05-lemur-appendix.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "siddhss5/phdthesis-dellin", "max_issues_repo_path": "thesis-ch05-lemur-appendix.tex", "max_line_length": 96, "max_stars_count": 1, "max_stars_repo_head_hexsha": "62ca559db0ad0a6285012708ef718f4fde4e1dcd", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "siddhss5/phdthesis-dellin", "max_stars_repo_path": "thesis-ch05-lemur-appendix.tex", "max_stars_repo_stars_event_max_datetime": "2018-09-06T21:45:42.000Z", "max_stars_repo_stars_event_min_datetime": "2018-09-06T21:45:42.000Z", "num_tokens": 1255, "size": 4481 }
%% !TEX root = manual.tex

\section{Network Model}
\label{sec:tutorial:networkmodel}

Network models can be divided into several categories.
SST/macro supports analytic models, which estimate network delays via basic latency/bandwidth formulas, and packet models, which model step-by-step the transit of individual packets through the interconnect.
A third class of models (flow models) was previously supported, but is now discontinued due to the much better scalability of packet models.

\subsection{Analytic Models: MACRELS}
\label{subsec:tutorial:macrels}

The analytic models in SST/macro are colloquially referred to as MACRELS (MTL for AnalytiC REally Lightweight Simulation).
The MTL (message transfer layer) moves entire network flows from point-to-point without packetizing them into smaller chunks.
Thus an entire 1 MB MPI message is transported as a single chunk of data.
The majority of MACRELS models are based on the LogP set of approximations:

\[
\Delta t = \alpha + \beta N
\]

where $\Delta t$ is the time delay, $\alpha$ is the minimum latency of the communication, $\beta$ is the inverse bandwidth (s/B), and $N$ is the number of bytes.
In abstract machine models, these methods are selected as:

\begin{ViFile}
congestion_model = logP
\end{ViFile}

\subsection{Packet Models: PISCES}
\label{subsec:tutorial:pisces}

PISCES (Packet-flow Interconnect Simulation for Congestion at Extreme Scale) breaks network flows (MPI messages) into individual packets and models each packet individually.
In abstract machine models, PISCES can be selected as:

\begin{ViFile}
congestion_model = pisces
\end{ViFile}

In reality, packets are further subdivided into flits (flow-control units).
Flit-level detail would be far too computationally intense for large-scale simulation.
All routing decisions are made on packets as a whole.
Two flits in the same packet cannot take different paths through the network.
However, they may not always travel together.
PISCES provides two mechanisms for treating flit-level flow control.

\subsubsection{PISCES simple model}
\label{subsubsec:tutorial:simplePisces}
In the simple model, each router uses a basic store-and-forward mechanism.
Flits are not allowed to ``separate'' and always travel as a single unit.
The entire packet has to be stored within a router before it can be forwarded to the next router.
The simple model affects the arbitrator that decides when and how to transmit flits.
To select the simple model:

\begin{ViFile}
arbitrator = simple
\end{ViFile}

The simple model is the least computationally expensive.
However, for large packet sizes, it can produce erroneously high latencies.
To tune the packet size for abstract machine models, set:

\begin{ViFile}
accuracy_parameter = 1024B
\end{ViFile}

or equivalently

\begin{ViFile}
mtu = 1024B
\end{ViFile}

which sets the packet size to 1024B.
For the simple model, packet sizes larger than 256-512B are not recommended.
Packet sizes on production supercomputers are often small (96-128B).
Small packet sizes with the simple model can be a good compromise, giving more fine-grained routing with cheaper congestion modeling in the arbitrator.

\subsubsection{PISCES cut-through model}
\label{subsubsec:tutorial:cutThroughPisces}

\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/pisces}
\caption{Timeline of four different packets passing through a PISCES cut-through bandwidth arbitrator. The incoming bandwidth (I) and outgoing bandwidth (O) are shown for each packet. Time is the horizontal axis.
Bandwidth consumed by a packet is shown by the vertical extent of each packet.
The individual events are 1) First packet arrives 2) Second packet arrives with reduced bandwidth but no available bandwidth 3) First packet finishes. Second packet can begin sending. 4) Third packet arrives and begins sending with remaining bandwidth. 5) Fourth packet arrives, but no available bandwidth. 6) Second packet finishes. Third packet increases bandwidth. Fourth packet can begin sending. 7) Third packet finishes. Fourth packet increases bandwidth. 8) Fourth packet finishes. Full details are given in the text.}
\label{fig:pisces}
\end{figure}

In the cut-through model, routing decisions still occur at the packet level.
However, some attempt is made to account for pipelining of flits across different router stages.
Somewhat similar to the LogP models used above, latency/bandwidth formulas are used to estimate packet delays.
However, the cut-through model adds more detail.
It is requested as:

\begin{ViFile}
arbitrator = cut_through
\end{ViFile}

Figure \ref{fig:pisces} shows a timeline for the data being transmitted through a crossbar, SerDes, or other network component with a ``fixed bandwidth.''
Each component is essentially a pipe with some flow bandwidth.
The arbitrator divides its limited bandwidth amongst incoming packets.
Packets fill the pipeline, consuming bandwidth.
In contrast to the completely discrete simple model, packets can ``multiplex'' in the component, sharing an arbitrary bandwidth partition.
Modeling a packet delay starts with two input parameters and computes three output parameters.

\begin{itemize}
\item $A$: Packet head arrival (input)
\item $I$: Packet incoming bandwidth (input)
\item $H$: Packet head departure (output)
\item $T$: Packet tail departure (output)
\item $O$: Packet outgoing bandwidth (output)
\end{itemize}

In the simple model, a packet either consumes all the bandwidth or none of the bandwidth.
To account for flit-level pipelining, the cut-through model allows packets to consume partial bandwidths.
Consider an arbitrator that has a maximum bandwidth of 1.0.
The first packet (purple, Figure \ref{fig:pisces}) arrives with a full incoming bandwidth of 1.0 and head arrival of t=0.0.
It therefore consumes all the available bandwidth.
The head of the packet can actually leave immediately (as it must to properly pipeline or cut-through).
The tail leaves after all bytes have been sent at t=1.0.
Thus for the first packet we have $H$=0.0, $T$=1.0, and $O$=1.0.

The second packet (orange) arrives at t=0.5.
Upon arrival there is no bandwidth available, as the first packet is consuming the maximum.
Only after the first packet finishes can the second packet begin.
The second packet therefore arrives and leaves with a reduced bandwidth of 0.5.
Thus we have $H$=1.0, $T$=3.0, and $O$=0.5.

The third packet (green) arrives at t=1.75.
Upon arrival there is some bandwidth, but not enough to match the incoming bandwidth of 0.57.
Thus the third packet is slowed initially.
After the second packet finishes, the third packet can send at increased bandwidth.
The computation here is a bit more complicated.
Packet 3 can actually consume more than its incoming bandwidth of 0.57 units.
Between steps 4 and 6, flits of packet 3 have accumulated in a local buffer in the router.
Thus even though the incoming bandwidth is only 0.57, several flits are waiting in the buffer and available to send immediately at full bandwidth.
This results in an effective bandwidth of 0.75 for the remainder of the packet's time in the arbitrator.
Thus we end up with $H$=1.75, $T$=3.5, and $O$=0.57. Even though the packet is initially delayed, the buffers compensate for the delay and allow the outgoing bandwidth to ``catch up'' with the incoming bandwidth. Finally, the fourth packet (blue) arrives at t=3.0. There is some available bandwidth. After the third packet finishes, the fourth packet can now send at maximum. Because of the initial delay, the outgoing bandwidth is somewhat reduced. We have $H$=3.0, $T$=4.38, and $O$=0.73. \subsection{Flow} \label{subsec:tutorial:flow} The flow model, in simple cases, corrects the most severe problems of the packet model. Instead of discrete chunks, messages are modeled as fluid flows moving through the network. Congestion is treated as a fluid dynamics problem, sharing bandwidth between competing flows. In contrast to LogP models, flow models can account fairly well for congestion. Without congestion, a flow only requires a FLOW START and FLOW STOP event to be modeled (see tutorial on discrete event simulation in \ref{sec:tutorial:des}). While the packet model would require many, many events to simulate a 1 MB message, the flow model might only require two. With congestion, flow update events must be scheduled whenever congestion changes on a network link. For limited congestion, only a few update events must occur. The flow model also corrects the latency and multiplexing problems in the PISCES simple model, providing higher-accuracy for coarse-grained simulation. The flow model starts to break down for large systems or under heavy congestion. In the packet model, all congestion events are ``local'' to a given router. The number of events is also constant in packet models regardless of congestion since we are modeling a fixed number of discrete units. In flow models, flow update events can be ``non-local,'' propagating across the system and causing flow update events on other routers. When congestion occurs, this ``ripple effect'' can cause the number of events to explode, overwhelming the simulator. For large systems or heavy congestion, the flow model is actually much slower than the packet model. Support for this model has been completely removed.
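As a concrete illustration of the delay formulas discussed in this section, the following minimal Python sketch (illustrative only; the latency and bandwidth values are made up and this is not SST/macro code) contrasts the MACRELS analytic delay with a textbook store-and-forward estimate at packet granularity:

\begin{verbatim}
import math

def logp_delay(nbytes, alpha=1e-6, beta=1e-9):
    # MACRELS-style analytic delay: latency plus inverse bandwidth times size
    return alpha + beta * nbytes

def store_and_forward_delay(nbytes, mtu=1024, hops=3, alpha=1e-6, beta=1e-9):
    # textbook store-and-forward pipelining at packet granularity:
    # the first packet crosses every hop, the remaining packets follow
    # one packet-serialization time apart (no congestion assumed)
    npackets = math.ceil(nbytes / mtu)
    packet_time = beta * mtu
    return hops * alpha + (hops + npackets - 1) * packet_time

print(logp_delay(1 << 20))               # 1 MB message as one flow
print(store_and_forward_delay(1 << 20))  # same message, packetized
\end{verbatim}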
{ "alphanum_fraction": 0.7936644758, "avg_line_length": 58.7405063291, "ext": "tex", "hexsha": "6deba4031a74ddf6d0bc403a699295ba9403eaa4", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-02-21T11:39:08.000Z", "max_forks_repo_forks_event_min_datetime": "2018-02-21T11:39:08.000Z", "max_forks_repo_head_hexsha": "fd2c52b3872b9c49af77f5f82b3177cc7bbe403c", "max_forks_repo_licenses": [ "BSD-Source-Code" ], "max_forks_repo_name": "minyee/sst-macro", "max_forks_repo_path": "docs/manual/NetworkModelTutorial.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fd2c52b3872b9c49af77f5f82b3177cc7bbe403c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-Source-Code" ], "max_issues_repo_name": "minyee/sst-macro", "max_issues_repo_path": "docs/manual/NetworkModelTutorial.tex", "max_line_length": 781, "max_stars_count": 4, "max_stars_repo_head_hexsha": "fd2c52b3872b9c49af77f5f82b3177cc7bbe403c", "max_stars_repo_licenses": [ "BSD-Source-Code" ], "max_stars_repo_name": "minyee/sst-macro", "max_stars_repo_path": "docs/manual/NetworkModelTutorial.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-22T18:33:50.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-18T22:49:38.000Z", "num_tokens": 2131, "size": 9281 }
\documentclass[11pt]{article}
\usepackage{semeval2014}
\usepackage{times}
\usepackage{url}
\usepackage{multirow}
\usepackage{latexsym}
\usepackage{rotating}
\usepackage{booktabs}
\usepackage{graphicx}

%\setlength\titlebox{5cm}

% You can expand the titlebox if you need extra space
% to show all the authors. Please do not make the titlebox
% smaller than 5cm (the original size); we will check this
% in the camera-ready version and ask you to change it back.

\newcommand{\wsname}{SemEval-2014}
\newcommand{\submissionpage}{\url{http://alt.qcri.org/semeval2014/index.php?id=cfp}}
\newcommand{\filename}{semeval2014}
\newcommand{\contact}{pnakov qf.org.qa}

\title{AI-KU: Using Co-Occurrence Modeling for Semantic Similarity}

\author{Osman Ba\c{s}kaya \\
  Artificial Intelligence Laboratory \\
  Ko\c{c} University, Istanbul, Turkey \\
  {\tt [email protected]} \\
%  \And
%  Deniz Yuret \\
%  Artificial Intelligence Laboratory \\
%  Ko\c{c} University, Istanbul, Turkey \\
%  {\tt [email protected]} \\
  }

\date{}

\begin{document}

\maketitle

\begin{abstract}
In this paper, we describe our unsupervised method submitted to the Cross-Level Semantic Similarity task in SemEval 2014 that computes semantic similarity between two text fragments of different sizes. Our method models each text fragment by using the co-occurrence statistics of either the observed words or their substitutes. The co-occurrence modeling step provides a dense, low-dimensional embedding for each fragment, which allows us to calculate semantic similarity using various similarity metrics. Although our current model ignores syntactic information, we achieved promising results and outperformed all baselines.
\end{abstract}

\section{Introduction}
\label{intro}

Semantic similarity is a measure that specifies the similarity of one text's meaning to another's. Semantic similarity plays an important role in various Natural Language Processing (NLP) tasks such as textual entailment \cite{berant2012entail}, summarization \cite{lin2003summarization}, question answering \cite{surdeanu2011questionans}, text classification \cite{sebastiani2002textclass}, word sense disambiguation \cite{schutze98automatic} and information retrieval \cite{park2005infret}.

There are three main approaches to computing the semantic similarity between two text fragments. The first approach uses Vector Space Models (see Turney \& Pantel \shortcite{turney10frequency} for an overview) where each text is represented as a bag-of-words model. The similarity between two text fragments can then be computed with various metrics such as cosine similarity. Sparsity of the input is the key problem for these models. Therefore, later works such as Latent Semantic Indexing \cite{deerwester90indexing} and Topic Models \cite{blei03latent} overcome the sparsity problem by reducing the dimensionality of the model through latent variables. The second approach blends various lexical and syntactic features and attacks the problem through machine learning models. The third approach is based on word-to-word similarity alignment~\cite{pilehvar2013align,islam2008semantic}.

The Cross-Level Semantic Similarity (CLSS) task in SemEval 2014\footnote{\url{http://alt.qcri.org/semeval2014/task3/}} \cite{jurgens14task3} provides an evaluation framework to assess similarity methods for texts of different volumes (i.e., lexical levels).
Unlike previous SemEval and *SEM tasks that were interested in comparing texts with similar volume, this task consists of four subtasks (paragraph2sentence, sentence2phrase, phrase2word and word2sense) that investigate the performance of systems based on pairs of texts of different sizes. A system should report the similarity score of a given pair, ranging from 4 (the two items have very similar meanings and the most important ideas, concepts, or actions in the larger text are represented in the smaller text) to 0 (the two items do not mean the same thing and are not on the same topic).

In this paper, we describe our two unsupervised systems that are based on co-occurrence statistics of words. The only difference between the systems is the input they use. The first system uses the words in the text directly (after lemmatization, stop-word removal and exclusion of non-alphanumeric characters), while the second system utilizes the most likely substitutes suggested by a 4-gram language model for each observed word position (i.e., context). Note that we participated in two subtasks, namely paragraph2sentence and sentence2phrase.
% It can be mentioned that we participated in 2 subtasks.

The remainder of the paper proceeds as follows. Section \ref{algorithm} explains the preprocessing part, the difference between the systems, co-occurrence modeling, and how we calculate the similarity between two texts after co-occurrence modeling has been done. Section \ref{evaluation} discusses the results of our systems and compares them to those of the other participants. Section \ref{conclusion} discusses the findings and concludes with plans for future work.

\section{Algorithm}
\label{algorithm}

This section explains the preprocessing steps of the data and the details of our two systems\footnote{The code to replicate our work can be found at \url{https://github.com/osmanbaskaya/semeval14-task3}.}. Both systems rely on co-occurrence statistics. The slight difference between the two is that the first one uses the words that occur in the given text fragment (e.g., paragraph, sentence), whereas the latter employs co-occurrence statistics on 100 substitute samples for each word within the given text fragment.

\subsection{Data Preprocessing}

\begin{table}
\begin{center}
\begin{tabular}{|r|c|}
\hline \bf Type-ID & \bf Lemma \\ \hline
Sent-33 & choose \\
Sent-33 & buy \\
Sent-33 & gift \\
Sent-33 & card \\
Sent-33 & hard \\
Sent-33 & decision \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:system1_input} Instance id-word pairs for a given sentence.}
\end{table}

\begin{table*}[htbp]
\resizebox{\textwidth}{!}{
\centering
\begin{tabular}{r||c||l}
\toprule
% \cmidrule{2-3} \cmidrule{3-3}
\textbf{Word} & \textbf{Context} & \textbf{\hspace{4cm} Substitutes} \\
\hline
the & $<$s$>$ \_\_\_ dog & The (0.12), A (0.11), If (0.02), As (0.07), Stray (0.001),..., $w_n$ (0.02) \\
dog & the \_\_\_ & cat (0.007), dog (0.005), animal (0.002), wolve (0.001), ..., $w_n$ (0.01) \\
bites & dog \_\_\_ . & runs (0.14), bites (0.13), catches (0.04), barks (0.001), ..., $w_n$ (0.01) \\
\bottomrule
\end{tabular}}
\caption{Contexts and substitute distributions when a bigram language model is used. $w$ and $n$ denote an arbitrary word in the vocabulary and the vocabulary size, respectively.}
\label{tab:subs_exp}
\end{table*}

The two AI-KU systems can be distinguished by their inputs. One uses the raw input words, whereas the other uses the words' likely substitutes according to a language model.

\paragraph{AI-KU$_1$:} This system uses the words that occur in the text.
All words are transformed into lower-case equivalents. Lemmatization\footnote{Lemmatization is carried out with Stanford CoreNLP and transforms a word into its canonical or base form.} and stop-word removal were performed, and non-alphanumeric characters were excluded. Table \ref{tab:system1_input} displays the pairs for the following sentence, which is an instance from the paragraph2sentence test set:

\begin{quote}
``Choosing what to buy with a \$35 gift card is a hard decision.''
\end{quote}

Note that the input that we used to model co-occurrence statistics consists of all such pairs for each fragment in a given subtask.

\paragraph{AI-KU$_2$:} Previously, the utilization of high probability substitutes and their co-occurrence statistics achieved notable performance on Word Sense Induction (WSI) \cite{baskaya13ai} and Part-of-Speech Induction \cite{yatbaz2012learning} problems. AI-KU$_2$ represents each context of a word by finding the 100 most likely substitutes suggested by the 4-gram language model we built from ukWaC\footnote{Available here: http://wacky.sslmit.unibo.it} \cite{ukWaC}, a 2-billion word web-gathered corpus. Since the S-CODE algorithm works with discrete input, for each context we sample 100 substitute words with replacement using their probabilities. Table~\ref{tab:subs_exp} illustrates contexts and the substitute distribution of each context using a bigram language model. No lemmatization, stop-word removal or lower-case transformation was performed.

\subsection{Co-Occurrence Modeling}

This subsection explains the unsupervised method we employed to model co-occurrence statistics: the Co-occurrence data Embedding (CODE) method \cite{globerson-CODE} and its spherical extension (S-CODE) proposed by Maron et al. \shortcite{Maron2010}. Unlike in our WSI work, where the co-occurrence modeling step produced an embedding for each word, in this task we model each text unit, such as a paragraph, a sentence or a phrase, to obtain an embedding for each instance.

Input data for the S-CODE algorithm consist of instance-id and word pairs for each word in the text unit for the first system (Table \ref{tab:system1_input} illustrates the pairs for only one text fragment), and instance-id and substitute pairs from 100 substitute samples of each word in the text for the second system. In the initial step, S-CODE puts all instance-ids and words (or substitutes, depending on the system) randomly on an n-dimensional sphere. If two different instances have the same word or substitute, then these two instances attract one another --- otherwise they repel each other. When S-CODE converges, instances that have similar words or substitutes will be located close to each other; otherwise, they will be distant from each other.

\paragraph{AI-KU$_1$:} According to the training set performances for various $n$ (i.e., the number of dimensions for the S-CODE algorithm), we picked 100 for both tasks.

\begin{table}[hl]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
& \bf System & \bf Pearson & \bf Spearman \\
% \hline
% \multicolumn{5}{|c|}{Test Set} \\
\hline
\multirow{6}{*}{\rotatebox{90}{\small{Paragraph-2-Sentence}}}
& AI-KU$_1$ & 0.671 & 0.676 \\
& AI-KU$_2$ & 0.542 & 0.531 \\
& LCS & 0.499 & 0.602 \\
& lch & 0.584 & 0.596 \\
& lin & 0.568 & 0.562 \\
& JI & 0.613 & 0.644 \\
\hline
\end{tabular}}
\end{center}
\caption{\label{table:results-p2s-train} Paragraph-2-Sentence subtask scores for the training data.
Subscripts in AI-KU systems specify the run number.}
\end{table}

\paragraph{AI-KU$_2$:} We picked $n$ to be 200 and 100 for the paragraph2sentence and sentence2phrase subtasks, respectively. Since this step is unsupervised, we tried to enrich the data with ukWaC; however, this enrichment did not work well on the training data. Therefore, the submitted scores were obtained using only the training and test data provided by the organizers.

\subsection{Similarity Calculation}

When S-CODE converges, there is an n-dimensional embedding for each textual level (e.g., paragraph, sentence, phrase) instance. We can use a similarity metric to calculate the similarity between these embeddings. For this task, systems should report only the similarity between two specific cross-level instances. Note that we used cosine similarity to calculate the similarity between two textual units. This is the final similarity for two instances; no further processing (e.g., scaling) was done.

\begin{table}[hr]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
& \bf System & \bf Pearson & \bf Spearman \\
% \hline
% \multicolumn{5}{|c|}{Test Set} \\
\hline
\multirow{6}{*}{\rotatebox{90}{\small{Sentence-2-Phrase}}}
& AI-KU$_1$ & 0.607 & 0.568 \\
& AI-KU$_2$ & 0.620 & 0.579 \\
& LCS & 0.500 & 0.582 \\
& lch & 0.484 & 0.491 \\
& lin & 0.492 & 0.470 \\
& JI & 0.465 & 0.465 \\
\hline
\end{tabular}}
\end{center}
\caption{\label{table:results-s2p-train} Sentence2phrase subtask scores for the training data.}
\end{table}

In this task, two correlation metrics were used to evaluate the systems: Pearson correlation and Spearman's rank correlation. Pearson correlation tests the degree of similarity between the system's similarity ratings and the gold standard ratings. Spearman's rank correlation measures the degree of similarity between two rankings: the similarity ratings provided by a system and the gold standard ratings.

\section{Evaluation Results}
\label{evaluation}

Tables \ref{table:results-p2s-train} and \ref{table:results-s2p-train} show the scores for the Paragraph-2-Sentence and Sentence-2-Phrase subtasks on the training data, respectively. These tables contain the best individual scores for the performance metrics, the Normalized Longest Common Substring (LCS) baseline, which was provided by the task organizers, and three additional baselines: lin \cite{lin1998information}, lch \cite{leacock1998combining}, and the Jaccard Index (JI) baseline.

lin uses the information content \cite{resnik1995inforcontent} of the least common subsumer of concepts A and B. Information content (IC) indicates the specificity of a concept; the least common subsumer of concepts A and B is the most specific concept from which A and B are inherited. lin similarity\footnote{lin similarity $= 2 * IC(lcs) / (IC(A) + IC(B))$ where $lcs$ indicates the least common subsumer of concepts A and B.} divides twice the IC of the least common subsumer of A and B by the sum of the IC of both concepts.
On the other hand, lch is a score denoting how similar two concepts are, calculated using the shortest path that connects the concepts and the maximum depth of the taxonomy in which the concepts occur\footnote{The exact formulation is $-log(L/2d)$ where $L$ is the shortest path length and $d$ is the taxonomy depth.}

\begin{table}[hl]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
& \bf System & \bf Pearson & \bf Spearman \\
% \hline
% \multicolumn{5}{|c|}{Test Set} \\
\hline
\multirow{9}{*}{\rotatebox{90}{\small{Paragraph-2-Sentence}}}
& Best & 0.837 & 0.821 \\
& $2^{nd}$ Best & 0.834 & 0.820 \\
& $3^{rd}$ Best & 0.826 & 0.817 \\
& AI-KU$_1$ & 0.732 & 0.727 \\
& AI-KU$_2$ & 0.698 & 0.700 \\
& LCS & 0.527 & 0.613 \\
& lch & 0.629 & 0.627 \\
& lin & 0.612 & 0.601 \\
& JI & 0.640 & 0.687 \\
\hline
\end{tabular}}
\end{center}
\caption{\label{table:results-p2s} Paragraph-2-Sentence subtask scores for the test data. \emph{Best} indicates the best correlation score for the subtask. LCS stands for Normalized Longest Common Substring. Subscripts in AI-KU systems specify the run number.}
\end{table}

(please see Pedersen et al. \shortcite{pedersen2004wordnet} for further details of these measures). These two baselines were calculated as follows. First, using the Stanford Part-of-Speech Tagger \cite{toutanova2000enriching} we tagged words across all textual levels. After tagging, we found the synsets of each word matched with its part-of-speech using WordNet 3.0 \cite{fellbaum98electronic}. For each synset of a word in the shorter textual unit (e.g., a sentence is shorter than a paragraph), we calculated the lin/lch measure against each synset of all words in the longer textual unit and picked the highest score. After the scores for all words were found, we calculated their mean as the similarity of the pair. Finally, the Jaccard Index baseline simply calculates the number of words that the two textual levels have in common (intersection), normalized by the total number of words (union).

Tables \ref{table:results-p2s} and \ref{table:results-s2p} show the AI-KU runs on the test data; below we present these results for each subtask.

\paragraph{Paragraph2Sentence:} Both systems outperformed all the baselines for both metrics. The best score for this subtask was .837, while our systems achieved .732 and .698 on Pearson and performed similarly on the Spearman metric. These scores are promising since our current unsupervised systems are based on a bag-of-words approach --- they do not utilize any syntactic information.

\begin{table}[hr]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|}
\hline
& \bf System & \bf Pearson & \bf Spearman \\
% \hline
% \multicolumn{5}{|c|}{Test Set} \\
\hline
\multirow{9}{*}{\rotatebox{90}{\small{Sentence-2-Phrase}}}
& Best & 0.777 & 0.642 \\
& $2^{nd}$ Best & 0.771 & 0.760 \\
& $3^{rd}$ Best & 0.760 & 0.757 \\
& AI-KU$_1$ & 0.680 & 0.646 \\
& AI-KU$_2$ & 0.617 & 0.612 \\
& LCS & 0.562 & 0.626 \\
& lch & 0.526 & 0.544 \\
& lin & 0.501 & 0.498 \\
& JI & 0.540 & 0.555 \\
\hline
\end{tabular}}
\end{center}
\caption{\label{table:results-s2p} Sentence2phrase subtask scores for the test data.}
\end{table}

\paragraph{Sentence2Phrase:} In this subtask, the AI-KU systems outperformed all baselines, with the exception of the AI-KU$_2$ system, which performed slightly worse than LCS on the Spearman metric.
The performances of the systems and baselines were lower than in the Paragraph2Sentence subtask, since smaller textual units (such as phrases) make the problem more difficult.

\section{Conclusion}
\label{conclusion}

In this work, we introduced two unsupervised systems that utilize co-occurrence statistics and represent textual units as dense, low-dimensional embeddings. Although the current systems are based on a bag-of-words approach and discard syntactic information, they achieved promising results in both the paragraph2sentence and sentence2phrase subtasks. For future work, we will extend our algorithm by adding syntactic information (e.g., dependency parsing output) into the co-occurrence modeling step.

% Good results were obtained without considering syntactic information; as future work, features that allow S-CODE to also handle syntax will be added.

% \ref, \noindent, \shortcite, \cite, \url

% \section*{Acknowledgements}
%
% \emph{You may want to thank Tubitak.}

% include your own bib file like this:
\bibliographystyle{acl}
\bibliography{references}

\end{document}
{ "alphanum_fraction": 0.7513395298, "avg_line_length": 69.8091603053, "ext": "tex", "hexsha": "8d442f34e1fee63fc6b0ef7bf08c0fd1e07d249d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2d440cadd701de3cbff3a6be630157d3ade05176", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "osmanbaskaya/semeval14-task3", "max_forks_repo_path": "paper/aiku-semeval14-task3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2d440cadd701de3cbff3a6be630157d3ade05176", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "osmanbaskaya/semeval14-task3", "max_issues_repo_path": "paper/aiku-semeval14-task3.tex", "max_line_length": 1029, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2d440cadd701de3cbff3a6be630157d3ade05176", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "osmanbaskaya/semeval14-task3", "max_stars_repo_path": "paper/aiku-semeval14-task3.tex", "max_stars_repo_stars_event_max_datetime": "2015-02-03T20:22:33.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-03T20:22:33.000Z", "num_tokens": 4853, "size": 18290 }
\section{\module{codeop} --- Compile Python code}

% LaTeXed from excellent doc-string.

\declaremodule{standard}{codeop}
\sectionauthor{Moshe Zadka}{[email protected]}
\sectionauthor{Michael Hudson}{[email protected]}
\modulesynopsis{Compile (possibly incomplete) Python code.}

The \module{codeop} module provides utilities upon which the Python
read-eval-print loop can be emulated, as is done in the
\refmodule{code} module.  As a result, you probably don't want to use
the module directly; if you want to include such a loop in your
program you probably want to use the \refmodule{code} module instead.

There are two parts to this job:

\begin{enumerate}
  \item Being able to tell if a line of input completes a Python
        statement: in short, telling whether to print
        `\code{>\code{>}>~}' or `\code{...~}' next.
  \item Remembering which future statements the user has entered, so
        subsequent input can be compiled with these in effect.
\end{enumerate}

The \module{codeop} module provides a way of doing each of these
things, and a way of doing them both.

To do just the former:

\begin{funcdesc}{compile_command}
                {source\optional{, filename\optional{, symbol}}}
Tries to compile \var{source}, which should be a string of Python
code, and returns a code object if \var{source} is valid Python code.
In that case, the filename attribute of the code object will be
\var{filename}, which defaults to \code{'<input>'}.  Returns
\code{None} if \var{source} is \emph{not} valid Python code, but is a
prefix of valid Python code.

If there is a problem with \var{source}, an exception will be raised.
\exception{SyntaxError} is raised if there is invalid Python syntax,
and \exception{OverflowError} or \exception{ValueError} if there is an
invalid literal.

The \var{symbol} argument determines whether \var{source} is compiled
as a statement (\code{'single'}, the default) or as an expression
(\code{'eval'}).  Any other value will cause \exception{ValueError} to
be raised.

\strong{Caveat:}
It is possible (but not likely) that the parser stops parsing with a
successful outcome before reaching the end of the source; in this
case, trailing symbols may be ignored instead of causing an error.
For example, a backslash followed by two newlines may be followed by
arbitrary garbage.  This will be fixed once the API for the parser is
better.
\end{funcdesc}

\begin{classdesc}{Compile}{}
Instances of this class have \method{__call__()} methods identical in
signature to the built-in function \function{compile()}, but with the
difference that if the instance compiles program text containing a
\module{__future__} statement, the instance 'remembers' and compiles
all subsequent program texts with the statement in force.
\end{classdesc}

\begin{classdesc}{CommandCompiler}{}
Instances of this class have \method{__call__()} methods identical in
signature to \function{compile_command()}; the difference is that if
the instance compiles program text containing a \code{__future__}
statement, the instance 'remembers' and compiles all subsequent
program texts with the statement in force.
\end{classdesc}

A note on version compatibility: the \class{Compile} and
\class{CommandCompiler} classes are new in Python 2.2.
If you want to enable the future-tracking features of 2.2 but also retain compatibility with 2.1 and earlier versions of Python you can either write \begin{verbatim} try: from codeop import CommandCompiler compile_command = CommandCompiler() del CommandCompiler except ImportError: from codeop import compile_command \end{verbatim} which is a low-impact change, but introduces possibly unwanted global state into your program, or you can write: \begin{verbatim} try: from codeop import CommandCompiler except ImportError: def CommandCompiler(): from codeop import compile_command return compile_command \end{verbatim} and then call \code{CommandCompiler} every time you need a fresh compiler object.
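For example, the following snippet (the input strings are chosen only for
illustration) shows the two non-exceptional outcomes of
\function{compile_command()}:

\begin{verbatim}
from codeop import compile_command

print compile_command("x = 1")   # complete statement: a code object is returned
print compile_command("if 1:")   # valid prefix, but incomplete: None is returned
\end{verbatim}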
{ "alphanum_fraction": 0.7646618418, "avg_line_length": 38.5288461538, "ext": "tex", "hexsha": "4267d3875abd5e9c160154e6852d135970d432b9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "deadsnakes/python2.3", "max_forks_repo_path": "Doc/lib/libcodeop.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "deadsnakes/python2.3", "max_issues_repo_path": "Doc/lib/libcodeop.tex", "max_line_length": 71, "max_stars_count": null, "max_stars_repo_head_hexsha": "0b4a6871ca57123c10aa48cc2a5d2b7c0ee3c849", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "deadsnakes/python2.3", "max_stars_repo_path": "Doc/lib/libcodeop.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 971, "size": 4007 }
\section*{Generative Models}
\subsection*{Variational Autoencoder (VAE)}
Sample random vector $\mathbf{z} \sim \mathcal{N}(0,\mathbf{I})$. Transform through (deterministic) DNN $F_\theta : \mathbb{R}^m \to \mathbb{R}^n$. Note: expectations $\mathbb{E}_x[f(x)] = \mathbb{E}_z [f(F_\theta(z))]$ (law of the unconscious statistician). Infeasible: would need to find the inverse Jacobian determinant.\\
$D_{KL}(P\|Q)=\sum_i P(i)\log\frac{P(i)}{Q(i)}=\mathbb{E}_{i\sim P} [\log\frac{P(i)}{Q(i)}]$ \\
More general: $p_\theta (\mathbf{x}|\mathbf{z}) $ instead of $F_\theta$: ELBO \\
$\log p_\theta(x^{(i)}) 
% = \mathbb{E}_{z \sim q_\phi (z|x)}[\log p_\theta (x^{(i)})]
= \mathbb{E}_{z \sim q_\phi (z|x)} \left[\log \frac{p_\theta(z) \cdot p_\theta(x^{(i)}|z)}{p_\theta(z|x^{(i)})}\frac{q_\phi(z|x^{(i)})}{q_\phi(z|x^{(i)})} \right] \allowbreak
= \mathbb{E}_z[\log p_\theta (x^{(i)}|z)] - D_{KL}(q_\phi(z|x^{(i)}) || p_\theta(z))$ \\
$ + D_{KL}(q_\phi(z|x^{(i)}) || p_\theta(z|x^{(i)}))$ (drop last part, it is $\geq 0$)\\
1st: reconstr. quality, 2nd: posterior close to prior. \\
Update: $\nabla_\theta \mathbb{E}_{q_\phi}[\log p_\theta(x|z)] = \mathbb{E}[\nabla_\theta \log p_\theta(x|z)] \allowbreak \approx \frac 1 L \sum_{r=1}^L \nabla_\theta \log p_\theta(x|z^{(r)}), z^{(r)} \sim^{iid} q_\phi (\cdot | x) $\\
\textbf{Reinforce trick:}\\
$\nabla_\phi\mathbb{E}_{q_\phi}[\mathcal{L}(\mathbf{x}, \mathbf{z})] = \mathbb{E}_{q_\phi}[\mathcal{L}(\mathbf{x}, \mathbf{z}) \nabla_\phi\log q_\phi(\mathbf{z};\mathbf{x})]$ \\
\textbf{Re-parameterization trick:} use variational distribution $q_\phi(\mathbf{z};\mathbf{x})=g_\phi(\zeta;\mathbf{x})$ for $\zeta$ simple \\
(e.g. $\zeta \sim \mathcal{N}(\mathbf{0},\mathbf{I}), \mathbf{z}=\mathbf{\mu} + \mathbf{U}\zeta \Rightarrow \mathbf{z}\sim \mathcal{N}(\mathbf{\mu}, \mathbf{UU}^T)$)\\
\textbf{Stochastic Backprop:} for $\zeta^{(r)} \sim^{iid} \text{simple}$ \\
$\nabla_\phi\mathbb{E}_{q_\phi}[\mathcal{L}(\mathbf{x}, \mathbf{z})] \approx \frac 1 L \sum_{r=1}^L \nabla_\phi \mathcal{L}(\mathbf{x}, g_\phi(\zeta^{(r)}))$

\subsection*{Generative Adversarial Network (GAN)}
$\min_G \max_D V(D,G) = \mathbb{E}_{\mathbf{x}\sim p_{data}(\mathbf{x})}[\log D(\mathbf{x}) ]$ \\
$+ \mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}[\log (1 - D(G(\mathbf{z}))) ] \\ $
$\theta^* := \argmin_{\theta\in \Theta} \{ \sup_{\phi \in \Phi} l(\theta, \phi)\}$
\textbf{SGD:} $\theta^{t+1} = \theta^t - \eta \nabla_\theta l(\theta^t, \phi^t)$ ; $\phi^{t+1} = \phi^t + \eta \nabla_\phi l(\theta^{t+1}, \phi^t)$

\subsection*{Autoregressive Models}
Generate output one variable at a time: $p(x_1, ..., x_m) = \prod_{t=1}^m p(x_t|x_{1:t-1})$ \\\textbf{PixelCNN:} uses exactly that over a window to predict the next pixel (slow process).
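Minimal NumPy sketch of the re-parameterization trick from the VAE subsection (illustrative only; shapes chosen arbitrarily):
\begin{verbatim}
import numpy as np
mu, U = np.zeros(2), np.eye(2)        # variational parameters
zeta = np.random.standard_normal(2)   # simple noise, independent of mu, U
z = mu + U @ zeta                     # z ~ N(mu, U U^T), differentiable in mu, U
\end{verbatim}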
{ "alphanum_fraction": 0.6199490353, "avg_line_length": 105.6538461538, "ext": "tex", "hexsha": "d2c207e4cf13d1f7a325a106cc52ed4b2f36cb3a", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-08-21T01:02:09.000Z", "max_forks_repo_forks_event_min_datetime": "2020-02-06T16:55:13.000Z", "max_forks_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_forks_repo_path": "Generative.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_issues_repo_path": "Generative.tex", "max_line_length": 317, "max_stars_count": 1, "max_stars_repo_head_hexsha": "2ef62f626fa83d1410fbb1b2f8a0501937c0dabe", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "anklinv/Computational-Intelligence-Lab-ETH-FS19", "max_stars_repo_path": "Generative.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-20T20:58:16.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-20T20:58:16.000Z", "num_tokens": 1159, "size": 2747 }
%!TEX root = ../dokumentation.tex
\chapter{Example LAN Application Implementation}\label{cha:ExampleLAN}

To develop an application, a plan is required that guides the development process, which is usually at least an architecture concept. The architecture is created based on the application components (cf. chapter \ref{cha:Requirement:lanpartyapplication}) and the technologies selected (cf. chapter \ref{cha:Implementation}). To fully exploit the potential of the technologies, the resulting \ac{LAN} Application will be implemented using the microservice architecture. An important principle is the single responsibility principle from Robert C. Martin: \cite[p.~4]{Bruce.2019}

\begin{quote}
	Gather together the things that change for the same reasons. Separate those things that change for different reasons.
\end{quote}

Taking this principle into account and combining it with the application components, each component will be isolated into its own microservice. Other microservice attributes that should be considered are, for example, that a microservice should know little about other microservices and that microservices should be independently deployable and replaceable. These attributes lead to providing one database per microservice \cite[p.~4ff.]{Bruce.2019}.

Another important aspect to consider when building a microservice architecture is the fact that it is a distributed system. False assumptions, such as that the network is reliable, must be avoided \cite[p.~16]{Bruce.2019}. Therefore, \ac{QoS} \textit{at-least-once} will be used and considered when creating the architecture. It was chosen because the application must be consistent and availability can be achieved by scaling. These are the general guidelines for creating the architecture; the next step is to evaluate the use of the different communication technologies.

The selected technologies are \ac{REST} for synchronous messaging and RabbitMQ for asynchronous messaging. Synchronous messaging is used for communication where direct feedback is required, such as client-server communication. Asynchronous messaging, on the other hand, is used to decouple communication partners. This allows dynamic growth and parallel work distribution within the microservice application \cite[p.~62f.]{Bruce.2019}. Therefore, asynchronous messaging is used for inter-service communication.

Another common principle in microservice architectures is the use of a boundary layer that provides abstraction over internal complexity and change \cite[p.~66]{Bruce.2019}. Therefore, an \ac{API} Gateway will be used as the single entry point for the application. This has advantages such as centralized handling of authentication, routing and transmission transformations. It also minimizes the exposed surface area of the application \cite[p.~68f.]{Bruce.2019}.

Now that the general structure of the services has been described, one last part remains, namely the \ac{UI}. The user interface can be either a web page or a full-fledged application, and both can be implemented via a monolith or micro frontend approach. The \ac{UI} will have cross-microservice concerns to ensure usability \cite[p.~71f.]{Bruce.2019}. Since this work focuses on communication technologies for a microservice environment, the approach used for the \ac{UI} should hardly affect communication. Therefore, a monolithic web page will be implemented, because it is simple to realize and the authors have experience with this kind of application.

The architecture of the \ac{LAN} application can be divided into two parts.
The first part shows the synchronous communication and the second part shows the asynchronous communication.

\textbf{Overview of the Synchronous Messaging Architecture}

An architectural overview of the synchronous messaging is shown in Figure \ref{img:lansyncoverview}. An \ac{API} gateway, which will be \textit{Nginx}, abstracts the microservice system. Depending on the domain used, \textit{Nginx} forwards the requests to the microservice system or to \textit{Grafana}. This means that \textit{domain.de} will return the \ac{LAN} application and \textit{metric.domain.de} will return \textit{Grafana}. Each microservice has its own REST endpoint which is accessible via \textit{/api/<name of service>}. \textit{Nginx} will use the service name within the \ac{URL} to route the request and truncate the \ac{URL}. For example, a request \textit{domain.de/api/account/1} will be routed to the account microservice and the forwarded request will be \textit{<ip of account>/1}. For metrics collection, Prometheus will be used. Each microservice will collect metrics about requests and provide an endpoint for Prometheus. This is visualized by the red box attached to each microservice. Finally, each microservice has its own MongoDB database. For simplicity, all databases run in one instance, but they could be distributed across different MongoDB instances.

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth, height=0.9\textheight, keepaspectratio]{LAN_Application_REST.png}
\caption{LAN application synchronous messaging architecture overview}
\label{img:lansyncoverview}
\end{figure}

\textbf{Asynchronous Messaging Events}

Before describing asynchronous messaging, it is important to note that every microservice must be able to validate any request from the client by itself. Therefore, each microservice needs the information necessary for any validation itself and must not be dependent on other microservices. This leads to the distribution of shared information, and that is the main concern of the following overview of asynchronous messaging.

Figure \ref{img:lanasyncevents} is a simplified overview of the available topics and how the microservices interact with them. An important design decision was made to allow the flexibility described above. Every microservice must subscribe to the \textit{Account} and \textit{Event} topics to receive all \textit{created} events (not included in the image for readability). This distributes the event and account IDs to all microservices. These IDs are central to almost all operations. In Figure \ref{img:lanasyncevents}, most of the publish arrows are numbered. Each number is associated with a reason for this connection, which is described in the following list.

\begin{enumerate}
\item \textit{Events} and \textit{catering} can cost money and must therefore inform \textit{billing}
\item The \textit{event} must know when the fees for an event are paid
\item The \textit{catering} and \textit{seating plan} must know when an account has registered for an event. This enables their functionality.
\item The \textit{scoreboard} must know when an event starts and when it ends
\item The \textit{scoreboard} publishes a summary of the event results to the event and account topics
\end{enumerate}

\begin{figure}
\centering
\includegraphics[width=0.9\textwidth, height=0.9\textheight, keepaspectratio]{LAN_Application_Events.png}
\caption{LAN application asynchronous messaging events overview}
\label{img:lanasyncevents}
\end{figure}

This concludes the architecture overview of the \ac{LAN} application. It is important to note that this was only a high-level overview of the actual application.

\textbf{Implementation Details}

For the actual implementation, the following extensions/frameworks are used in addition to the selected technologies mentioned above:

\begin{enumerate}
\item \textbf{express.js}: Server-side web framework for node.js (version \textit{4.17.1})
\item \textbf{mongoose}: MongoDB object modeling framework for node.js (version \textit{5.9.17})
\item \textbf{amqp-ts}: \ac{AMQP} communication library for node.js (version \textit{1.8.0})
\item \textbf{bcryptjs}: Password hashing library for node.js based on \textit{bcrypt} (version \textit{2.4.3})
\item \textbf{express-jwt}: Middleware for validating \ac{JSON} Web Tokens (version \textit{5.3.3})
\item \textbf{Angular}: Web application framework for frontends (version \textit{9.1.9})
\item \textbf{ng-Bootstrap}: Widgets for \textit{Angular} based on \textit{Bootstrap}, a styling framework (version \textit{6.1.0})
\end{enumerate}

In contrast to the services presented in Figure \ref{img:lansyncoverview}/\ref{img:lanasyncevents}, only three services are implemented, namely \textit{Event}, \textit{Account} and \textit{Billing}. Although the resulting implementation would not be a viable application for an actual LAN party, it was important to demonstrate the underlying architecture for communication and event flow. Consequently, it can be extended with additional services without many changes. Each implemented service is bundled into an image using \textit{Docker}, which allows containerization and flexible deployment via a \textit{docker-compose} script that also handles switching environments (e.g. for development). The current implementation consists of the following services:

\begin{enumerate}
\item Nginx
\item MongoDB
\item RabbitMQ
\item Prometheus
\item Grafana
\item Each implemented node.js service:
\subitem Account
\subitem Event
\subitem Billing
\end{enumerate}

As alluded to in Figure \ref{img:lansyncoverview}, each external request is forwarded using \textit{Nginx}, including access to additional informative services (\textit{mgnt} subdomain), such as \textit{Grafana}, and direct database access (separate port) to \textit{MongoDB}. However, the backend communicates with said services through internal endpoints, since each service is located on the same isolated network provided by \textit{Docker}. Consequently, metric reporting and database access are restricted to the microservice environment itself. Although metrics are already collected through Prometheus and accessible via \textit{Grafana}, a dashboard implementation in \textit{Grafana} was omitted.

Apart from the REST communication to the frontend, there is also inter-service communication over \ac{AMQP}. The binding selected for connecting a queue to an exchange entity is a regular expression (topic exchange).
Each event has its own expression to synchronize with the services, such as \textit{account.created}, which means that the main topic is \enquote{account} and the associated sub topic is \enquote{created}. Furthermore, each service listening on (consuming) a queue is implemented in such a way that the same event can be received more than once, but only the first occurrence is of relevance. This implementation detail is important, as the implementation of the \ac{AMQP} protocol in this work uses an \textit{at-least-once} \ac{QoS}.

An exemplary flow of both synchronous and asynchronous messages across the whole architecture is shown in Figure \ref{img:registeraccountevent_sequence}. The synchronous calls over REST are highlighted in blue, whereas the asynchronous publishing of messages using \ac{AMQP} is highlighted in orange. Moreover, the data sent in each step is also shown, although not all internal functions of each service are represented here. For a complete list of implemented and used events, as well as available REST endpoints, see \ref{chp:appendix:implementedRest} and \ref{chp:appendix:implementedEvent} in the appendix.

\begin{figure}
\centering
\includegraphics[width=\textwidth, height=0.6\textheight, keepaspectratio]{registeraccountevent_sequence.png}
\caption{Example of registering an account and registering for an event}
\label{img:registeraccountevent_sequence}
\end{figure}
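To make the combination of topic-based routing and \textit{at-least-once} delivery more concrete, the following listing sketches the pattern described above. It is not part of the project (which implements this logic in node.js with \textit{amqp-ts}); it is a minimal Python sketch assuming the \textit{pika} \ac{AMQP} client, and all exchange, queue, event and field names are purely illustrative.

\begin{verbatim}
# Illustrative sketch only; the project uses node.js with amqp-ts.
# Publish to a topic exchange and consume idempotently, because
# at-least-once QoS may deliver the same event more than once.
import json
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Publisher side: the account service announces a new account.
channel.exchange_declare(exchange="account", exchange_type="topic",
                         durable=True)
channel.basic_publish(
    exchange="account",
    routing_key="account.created",       # main topic and sub topic
    body=json.dumps({"accountId": 42}),
    properties=pika.BasicProperties(delivery_mode=2))  # persistent

# Consumer side: e.g. the billing service subscribes to account events.
channel.queue_declare(queue="billing.account", durable=True)
channel.queue_bind(queue="billing.account", exchange="account",
                   routing_key="account.*")

seen_ids = set()  # deduplication keeps the handler idempotent

def handle(ch, method, properties, body):
    event = json.loads(body)
    if event["accountId"] not in seen_ids:  # only the first delivery counts
        seen_ids.add(event["accountId"])
        # ... store the account ID for later request validation ...
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack after processing

channel.basic_consume(queue="billing.account", on_message_callback=handle)
channel.start_consuming()
\end{verbatim}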
\chapter{Implementation}
\label{s:impl}

The current prototype of the Nail parser generator can emit parsers in either C or C++. The implementation parses Nail grammars with Nail itself, using a 130-line Nail grammar feeding into a 2,000-line C++ program that emits the parser and generator code. Bootstrapping is performed with a subset of the grammar implemented using conventional grammars. In this chapter, we will discuss some particular features of our parser implementation.

A generated Nail parser makes two passes through the input: the first to validate and recognize the input, and the second to bind this data to the internal model. Currently, the parser uses a straightforward top-down algorithm, which can perform poorly on grammars that backtrack heavily. However, Packrat parsing~\cite{packrat-parsing:icfp02}, which achieves linear time even in the worst case, can be added relatively easily.

\paragraph{Parser input/output.}
Nail-generated parsers interact with their input through an abstract stream data type, which must support the operations shown in Figure~\ref{fig:stream-api}. In the C implementation, the programmer can provide an implementation of these operations, or fall back to using a default implementation backed by contiguous buffers in memory. Both the input stream and all intermediate streams must use the same implementation. While the concrete implementation is transparent to the Nail parser itself, transformations can and often must modify the stream data type through other interfaces. The standard library of Nail transformations is at present implemented only for memory-buffer-backed streams and often directly manipulates the underlying buffers. This makes many operations, such as splitting a stream into multiple substreams, much faster than if the substreams had to be copied.

\input{fig-stream-api}

\paragraph{C++ implementation.}
The C implementation forces all streams to have the same underlying data type. To address this shortcoming, we developed an optional C++ backend for Nail. Nail C++ parsers are template functions that are parametrized by the types of the streams they operate on. Transformations are template functors -- classes that can perform different computations and produce a different type of stream depending on the types of their inputs. Useful transformations are often only possible to write when specialized to their input types; a fully general version would either be very slow or not even implementable. Consider for example the standard library offset transform, which creates a substream starting at a given offset. Given just the stream interface shown in Figure~\ref{fig:stream-api}, the only way to implement offset would be either to make a copy of the underlying stream using reposition and read, or to reposition the underlying stream before and after each read. By specializing the transformation for in-memory streams, we can construct another memory stream that points to the correct range in the same memory. Similar specializations could exist for file-descriptor backed streams using a duplicate descriptor or sparse-fragment streams. Transformations can also be expressed through chaining together other transformations -- which is equivalent to composing them directly in the Nail grammar, but results in shorter and easier to understand Nail grammars.
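To illustrate why this specialization matters, the following sketch is a language-neutral rendering in Python of the offset transformation; it is not Nail's actual C/C++ code, and the stream interface is reduced to the two operations relevant here. A generic implementation must reposition and copy, whereas the specialized one merely creates a view into the same buffer.

\begin{verbatim}
# Not Nail's actual implementation (which is C/C++): a minimal Python
# sketch of why the offset transformation benefits from being
# specialized to in-memory streams. A memoryview slice is a zero-copy
# view into the same underlying buffer, analogous to a memory stream
# pointing into its parent.
class MemoryStream:
    """A stream backed by a contiguous in-memory buffer."""

    def __init__(self, buf, pos=0):
        self.buf = memoryview(buf)
        self.pos = pos

    def read(self, n):
        """Consume and return the next n bytes."""
        data = self.buf[self.pos:self.pos + n]
        self.pos += n
        return data

    def reposition(self, pos):
        """Move the read position to an absolute offset."""
        self.pos = pos


def offset_generic(stream, offset):
    """Offset via the generic interface: repositions and copies the tail."""
    stream.reposition(offset)
    copied = bytes(stream.read(len(stream.buf) - offset))  # O(n) copy
    return MemoryStream(copied)


def offset_specialized(stream, offset):
    """Offset specialized for memory streams: no copy at all."""
    return MemoryStream(stream.buf[offset:])  # view into the same memory


if __name__ == "__main__":
    packet = MemoryStream(b"\x00\x08payload-bytes")
    body = offset_specialized(packet, 2)
    print(bytes(body.read(7)))  # b'payload'
\end{verbatim}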
In addition to allowing complicated file formats to be captured more naturally by mapping their layout onto different stream types, for example, fragmented network packets can be stored as individual fragments and do not have to be assembled into a continuous buffer. \paragraph{Defense-in-depth.} Security exploits often rely on raw inputs being present in memory~\cite{shotgun-parser}, for example to include shell-code or crafted stack frames for ROP~\cite{phrack58:4-nergal} attacks in padding fields or the application executing a controlled sequence of heap allocations and de-allocations to place specific data at predictable addresses~\cite{jp-advanced, vudo-malloc}. Because the rest of the application or even Nail's generated code may contain memory corruption bugs, Nail carefully handles memory allocations as defense-in-depth to make exploiting such vulnerabilities harder. When parsing input, Nail uses two separate memory arenas. These arenas allocate memory from the system allocator in large, fixed-size blocks. Allocations are handled linearly and all data in the arena is zeroed and freed at the same time. Nail uses one arena for data used only during parsing, including dependent fields and temporary streams; this arena is released before the parser returns. The other arena is used to allocate the internal data type returned and is freed by the application once it is done processing an input. Furthermore, the internal representation does not include any references to the input stream, which can therefore be zeroed immediately after the parser succeeds, so an attacker has to write an exploit that works without referencing data from the raw input. Finally, Nail performs sophisticated error handling only in a special debug configuration and will print error messages about the input only to \cc{stderr}. Besides complicating the parser, advanced error handling invites programmers to attempt to fix malformed input, such as adding reasonable defaults for a missing field. Such error-fixing not only introduces parser inconsistencies, but also might allow an attacker to sneak inconsistent input past a parser. % \paragraph{Intermediate representation.} % Most parser generators, such as Bison, do not have to dynamically allocate temporary data on the % heap, as they evaluate a semantic action on every rule. However, as our goal is to perform as little % computation as possible before the input has been validated, and we do not want to mix temporary % objects with the results of our parse, we use an append-only trace to store intermediate parser % results. % Hammer solves this problem by storing a full abstract syntax tree. However, this % abstract syntax tree is at least an order of magnitude larger than the input, % because it stores a large tree node structure for each input byte and for each % rule reduced. This allows Hammer semantic actions to get all of the necessary % information without ever seeing the raw input stream. However, because we also % automatically generate our second pass, which corresponds to Hammer's semantic % actions, we can trust it as much as we trust the parser, and thus can expose it % to the raw input stream. % Under this premise, the actions need limited information from the % recognizer to correctly handle the input stream. In particular, the parser's % control flow branches only at the choice, repetition, and constant combinators. 
% Thus, for each of those combinators, we store the minimum amount of information % required to reconstruct the syntactic structure of the input. % The trace is an % array of integers. % Whenever the parser encounters a choice, it appends two integers to the trace: % the number of that choice and % the length of the trace when it began parsing that choice. When backtracking in % the input, the parser does not backtrack in the trace. This means that offsets % within the trace can be used for a Packrat hash table to memoize backtrack-heavy % parsers. % When encountering a repetition combinator, the parser records the number of % times the inner parser succeeded, and when encountering a constant parser of % variable size, it records how much input was consumed by the constant parser. % In a second pass, the parser then allocates the internal representation from an % arena allocator and binds the fields to values from the input, while following % the trace to determine how many array fields to parse and which choices to pick. % \paragraph{.} % During parsing, dependency fields occur before the context in which they are % used. The parser stores their values and retrieves them afterwards when % encountering the combinator that uses them. When generating output, the % dependency field is first filled with a filler value, then later when the first % combinator that determines this fields value is encountered, the field is % overwritten. Any further combinators using this dependency will then validate % that the dependency field is correct.
\begin{comment} \chapter{Reliability of Supervision Tree} \section{Single Node Analysis} Case 0: Single Failure, No restart \begin{equation} R(t) = \int_{t}^{\infty} f(x)dx \end{equation} \begin{equation} A(t) = \frac{\int_{0}^{t} xf(x)dx + t \cdot R(t) }{t} \end{equation} Case I: Every $T$ time, spend $t_R$ time to restart \begin{equation} A(t) = \frac{ n(\int_{0}^{T} x f(x) dx + T \cdot R(T)) + \int_{0}^{t_{tail}} x f(x) dx}{t} \end{equation} \hspace{7 cm} where $n = \left \lfloor \frac{t}{T+t_R} \right \rfloor$ and $t_{tail} = t-nT$ Case II: When failed, spend $t_R$ time to restart \begin{equation} A(t) = ? \end{equation} \hspace{7 cm} where $W(t) = \int_{0}^{t} xf(x)dx$ Case III: When failed or running for $T$ time, spend $t_R$ time to restart $A(t) = ? $ \end{comment}
\section{Exercises in Discrete Random Variables}\label{S:xsDiscreteRVs}

\begin{ExerciseList}
\Exercise[label={xRV1}]
One number in the following table for the probability function of a random variable $X$ is incorrect. Which is it, and what should the correct value be?
$$
\begin{array}{c|ccccc}
x&1&2&3&4&5\\\hline
\P(X=x)&0.07&0.10&1.10&0.32&0.40
\end{array}
$$
\Answer
$\P(X=3)$ does not satisfy the condition that $0\leq \P(A)\leq1$ for any event $A$. If $\Omega$ is the sample space, then $\P(\Omega)=1$ and so the correct probability is
\[ \P(X=3)\;=\;1-0.07-0.10-0.32-0.40\;=\;0.11 \enspace . \]

\Exercise
Let $X$ be the number of years before a particular type of machine will need replacement. Assume that $X$ has the probability function $f(1)=0.1$, $f(2)=0.2$, $f(3)=0.2$, $f(4)=0.2$, $f(5)=0.3$.
\be
\item Find the distribution function, $F$, for $X$, and graph both $f$ and $F$.
\item Find the probability that the machine needs to be replaced during the first 3 years.
\item Find the probability that the machine needs no replacement during the first 3 years.
\ee
\Answer
\be
\item Tabulate the values for the probability mass function as follows:
$$
\begin{array}{c|ccccc}
x&1&2&3&4&5\\\hline
\P(X=x)&0.1&0.2&0.2&0.2&0.3
\end{array}
$$
so the distribution function is:
\[F(x)\; = \;\P(X \leq x) \;=\;
\begin{cases}
0 & \text{ if } 0 \leq x < 1\\
0.1 & \text{ if } 1 \leq x < 2\\
0.3 & \text{ if } 2 \leq x < 3\\
0.5 & \text{ if } 3 \leq x < 4\\
0.7 & \text{ if } 4 \leq x < 5\\
1& \text{ if } x \geq 5
\end{cases}
\]
%The graphs of $f(x)$ and $F(x)$ for random variable $X$ are shown below:
%TODO\centering \makebox{\includegraphics[width=6.5in]{figures/fandFfor5YearMachine.png}}
\item The probability that the machine needs to be replaced during the first 3 years is:
$$\P(X \leq 3)\;=\;\P(X=1)+\P(X=2)+\P(X=3)\;=\;0.1+0.2+0.2\;=\;0.5\,.$$
(This answer is easily seen from the distribution function of $X$.)
\item The probability that the machine needs no replacement during the first three years is
\[ \P(X > 3)\;=\;\, 1-\P(X \leq 3)\,=\, 0.5 \,.\]
\ee

\Exercise
Of 200 adults, 176 own one TV set, 22 own two TV sets, and 2 own three TV sets. A person is chosen at random. What is the probability mass function of $X$, the number of TV sets owned by that person?
\Answer
Assuming that the probability model is built from the observed relative frequencies, the probability mass function is:
$$ f(x)\;=\;\begin{cases}\frac{176}{200}&x=1\\\frac{22}{200}&x=2\\\frac{2}{200}&x=3\end{cases}$$

\Exercise
Suppose a discrete random variable $X$ has probability function given by
$$\begin{array}{c|cccccccccccc}
x&3&4&5&6&7&8&9&10&11&12&13\\\hline
\P(X=x)&0.07&0.01&0.09&0.01&0.16&0.25&0.20&0.03&0.02&0.11&0.05
\end{array}
$$
\be
\item[(a)] Construct a row of cumulative probabilities for this table, that is, find the distribution function of $X$.
\item[(b)] Find the following probabilities.
\bcols{3} \be \item[(i)]$\P(X\leq 5)$ \item[(ii)] $\P(X<12)$ \item[(iii)] $\P(X>9)$ \item[(iv)] $\P(X\geq 9)$ \item[(v)] $\P(4 < X\leq 9)$ \item[(vi)] $\P(4<X<11)$ \ee \ecols \ee \Answer \begin{itemize} \item[(a)] $$\begin{array}{c|cccccccccccc} x&3&4&5&6&7&8&9&10&11&12&13\\\hline F(x) =\P(X\leq x)&0.07&0.08&0.17&0.18&0.34&0.59&0.79&0.82&0.84&0.95&1.00 \end{array} $$ \medskip \item[(b)] \be \item[(i)]$\P(X\leq 5)= F(5) = 0.17$ \\[3pt] \item[(ii)]$\P(X<12)=\P(X\leq11) = F(11) =0.84$\\[3pt] \item[(iii)] $\P(X>9)= 1 - \P(X\leq 9) = 1- F(9) =1-0.79=0.21$ \\[3pt] \item[(iv)] $\P(X\geq 9)=1-\P(X < 9)=1-\P(X\leq 8)=1-0.59=0.41$ \\[3pt] \item[(v)] $\P(4 < X\leq 9)= F(9) - F(4) =0.79-0.08=0.71$ \\[3pt] \item[(vi)] $\P(4<X<11)=\P(4 < X\leq 10)= F(10) -F(4) =0.82- 0.08=0.74$ \\[3pt] \ee \end{itemize} \Exercise A box contains 4 right-handed and 6 left-handed screws. Two screws are drawn at random without replacement. Let $X$ be the number of left-handed screws drawn. Find the probability mass function for $X$, and then calculate the following probabilities: \be \item $\P(X\leq 1)$ \item $\P(X\geq 1)$ \item $\P(X>1)$ \ee \Answer Since we are sampling without replacement, \begin{eqnarray*} \P(X=0) &=& \frac{4}{10}\cdot\frac{3}{9} = \frac{2}{15}\quad (\text{one way of drawing two right screws}), \\ \P(X=1) &=& \frac{6}{10}\cdot\frac{4}{9}+\frac{4}{10}\cdot\frac{6}{9} = \frac{8}{15}\quad(\text{two ways of drawing one left and one right screw}),\\ \P(X=2) &=& \frac{6}{10}\cdot\frac{5}{9} = \frac{1}{3}\quad(\text{one way of drawing two left screws}). \end{eqnarray*} So the probability mass function of $X$ is: \[f(x)\;=\;\P(X=x)\;=\; \begin{cases} \frac{2}{15} & \text{if } x=0\\[3pt] \frac{8}{15} & \text{if } x=1\\[3pt] \frac{1}{3} & \text{if } x=2 \end{cases}\] The required probabilities are: \be \item $$\P(X\leq1)\;=\;\P(X=0)+\P(X=1)\;=\;\frac{2}{15}+\frac{8}{15}\;=\;\frac{2}{3}$$ \item $$\P(X\geq 1)\;=\;\P(X=1)+\P(X=2)\;=\;\frac{8}{15}+\frac{1}{3}\;=\;\frac{13}{15}$$ \item $$\P(X>1)\;=\;\P(X=2)\;=\;\frac{1}{3}$$ \ee \Exercise Suppose that a random variable $X$ has geometric probability mass function, \[f(x)\;=\;\frac{k}{2^x}\quad (x=0,1,2,\dots)\,.\] \be \item Find the value of $k$. \item What is $\P(X\geq 4)$? \ee \Answer \be \item Since $f$ is a probability mass function, $$\sum_{x=0}^\infty\frac{k}{2^x}\;=\;1\,, \qquad \text{that is,}\qquad k\,\sum_{x=0}^\infty\frac{1}{2^x}\;=\;1\ \enspace.$$ Now $\displaystyle \sum_{x=0}^\infty\frac{1}{2^x}$ is a geometric series with common ratio $r= \frac{1}{2}$ and first term $a=1$, and so has sum \[ S\;=\; \frac{a}{1-r} \;=\;\frac{1}{1-\frac{1}{2} } \;=\; 2\] Therefore, \[ 2 k\;=\; 1 \,, \quad \text{that is,}\quad k\;=\; \frac{1}{2}\,.\] \item From (a), the probability mass function of $f$ is $$f(x)\;=\;\frac{\frac{1}{2}}{2^{x}}\;=\;\frac{1}{2^{x+1}}\,.\quad (x=0,1,2,\dots)$$ Now \[\P(X\geq 4)\;=\;1-\P(X<4)\;=\;1-\P(X\leq3)\] where \ba{\P(X\leq3)&=\;\sum^3_{x=0}\frac{1}{2^{x+1}}\\[3pt] &=\;\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}\\[3pt] &=\;\frac{8}{16}+\frac{4}{16}+\frac{2}{16}+\frac{1}{16}\\[3pt] &=\;\frac{15}{16}\enspace. } That is, $\P(X\geq4)\,=\,\displaystyle \frac{1}{16}$. \ee \Exercise Four fair coins are tossed simultaneously. If we count the number of heads that appear then we have a binomial random variable, $X=$ {\it the number of heads}. \be \item Find the probability mass function of $X$. \item Compute the probabilities of obtaining no heads, precisely 1 head, at least 1 head, not more than 3 heads. 
\ee
\Answer Note that $\theta=\frac{1}{2}$ here.
\be
\item $X$ has probability mass function
\[f(x)\;=\;\begin{cases}
\displaystyle \binom{4}{0}\left(\tfrac{1}{2}\right)^0\left(\tfrac{1}{2}\right)^4=\frac{1}{16}&x=0\\[16pt]
\displaystyle \binom{4}{1}\left(\tfrac{1}{2}\right)^1\left(\tfrac{1}{2}\right)^3=\frac{4}{16}&x=1\\[16pt]
\displaystyle \binom{4}{2}\left(\tfrac{1}{2}\right)^2\left(\tfrac{1}{2}\right)^2=\frac{6}{16}&x=2\\[16pt]
\displaystyle \binom{4}{3}\left(\tfrac{1}{2}\right)^3\left(\tfrac{1}{2}\right)^1=\frac{4}{16}&x=3\\[16pt]
\displaystyle \binom{4}{4}\left(\tfrac{1}{2}\right)^4\left(\tfrac{1}{2}\right)^0=\frac{1}{16}&x=4
\end{cases}
\]
\item The required probabilities are:
$\displaystyle \P(X=0)=f(0)=\frac{1}{16}$\\[3pt]
$\displaystyle \P(X=1)=f(1)=\frac{4}{16}$\\[3pt]
$\displaystyle \P(X\geq1)=1-\P(X=0)=1-f(0)=\frac{15}{16}$\\[3pt]
$\displaystyle \P(X\leq3)=f(0)+f(1)+f(2)+f(3)=\frac{15}{16}$
\ee

\Exercise
The distribution of blood types in a certain population is as follows:
$$
\begin{array}{c|cccc}
\text{Blood type}&\text{Type } O&\text{Type } A&\text{Type } B& \text{Type }AB\\\hline
\text{Proportion}&0.45&0.40&0.10&0.05
\end{array}
$$
A random sample of 15 blood donors is observed from this population. Find the probabilities of the following events.
\be
\item Only one type $AB$ donor is included.
\item At least three of the donors are type $B$.
\item More than ten of the donors are \emph{either} type $O$ \emph{or} type $A$.
\item Fewer than five of the donors are \emph{not} type $A$.
\ee
\Answer
\be
\item If the random variable $X$ denotes the number of type $AB$ blood donors in the sample of 15, then $X$ has a binomial distribution with $n=15$ and $\theta=0.05$. Therefore
\[\P(X=1) \;=\;\binom{15}{1} (0.05)^1 (0.95)^{14}\;=\;0.366 \quad (\text{3 sig. fig.}) \,.\]
\medskip
\item If the random variable $X$ denotes the number of type $B$ blood donors in the sample of 15, then $X$ has a binomial distribution with $n=15$ and $\theta=0.10$. Therefore
\ba{\P(X\geq 3) &\;=\; 1\, - \,\P(X=0)\, -\, \P(X=1)\, -\, \P(X=2) \\[3pt]
&\;=\; 1\; - \; \binom{15}{0} (0.1)^0 (0.9)^{15}\;-\; \binom{15}{1} (0.1)^1 (0.9)^{14}\;-\; \binom{15}{2} (0.1)^2 (0.9)^{13}\\[3pt]
&\;=\; 1\,-\, 0.2059 \,-\, 0.3432 \,-\, 0.2669\\[3pt]
&\;= \;0.184 \quad (\text{to 3 sig. fig.}) }
\medskip
\item If the random variable $X$ denotes the number of type $O$ or type $A$ blood donors in the sample of 15, then $X$ has a binomial distribution with $n=15$ and $\theta=0.85$. Therefore
\ba{\P(X > 10 ) &\;=\; \P(X=11)\, + \, \P(X=12)\, +\, \P(X=13) \, + \, \P(X=14)\, + \, \P(X=15) \\[3pt]
&\;=\; \binom{15}{11} (0.85)^{11} (0.15)^{4}\;+\; \binom{15}{12} (0.85)^{12} (0.15)^{3}\\[3pt]
&\;+\; \binom{15}{13} (0.85)^{13} (0.15)^{2}\;+\; \binom{15}{14} (0.85)^{14} (0.15)^{1}\;+\; \binom{15}{15} (0.85)^{15} (0.15)^{0}\\[3pt]
&\;=\; 0.1156 \,+\,0.2184 \,+\,0.2856 \,+\, 0.2312\,+\,0.0874\\[3pt]
&\;=\;0.938 \quad (\text{to 3 sig. fig.}) }
\medskip
\item If the random variable $X$ denotes the number of blood donors in the sample of 15 that are \emph{not} of type $A$, then $X$ has a binomial distribution with $n=15$ and $\theta=0.6$.
Therefore \ba{\P(X < 5) &\;=\; \P(X=0)\, +\, \P(X=1)\, + \, \P(X=2)\, + \, \P(X=3)\, + \, \P(X=4) \\[3pt] &\;=\; \binom{15}{0} (0.6)^0 (0.4)^{15}\;+\; \binom{15}{1} (0.6)^1 (0.4)^{14}\;+\; \binom{15}{2} (0.6)^2 (0.4)^{13}\\[3pt] &\;+\; \binom{15}{3} (0.6)^3 (0.4)^{12}\;+\; \binom{15}{4} (0.6)^4 (0.4)^{11}\\[3pt] &\;=\; 0.0000 \,+\, 0.0000\,+\, 0.0003\,+\, 0.0016\,+\,0.0074\\[3pt] &\;=\;0.009 \quad (\text{to 3 DP.}) } \ee \Exercise If the probability of hitting a target in a single shot is $10\%$ and 10 shots are fired independently, what is the probability that the target will be hit at least once? \Answer This is a Binomial experiment with parameters $\theta=0.1$ and $n=10$, and so \[\P(X\geq 1) = 1-\P(X<1) = 1-\P(X=0) \enspace ,\] where \[\P(X=0) = \binom{10}{0}0.1^0 0.9^{10} \approxeq 0.3487 \enspace .\] Therefore, the probability that the target will be hit at least once is \[1- 0.3487 \approxeq 0.6513 \enspace .\] %question (poisson) \Exercise Suppose that a certain type of magnetic tape contains, on the average, 2 defects per 100 meters. What is the probability that a roll of tape 300 meters long will contain no defects? \Answer Since 2 defects exist on every 100 meters, we would expect 6 defects on a 300 meter tape. If $X$ is the number of defects on a 300 meter tape, then $X$ is Poisson with $\lambda = 6$ and so the probability of zero defects is $$\P(X=0;6)\;=\;\frac{6^0}{0!}e^{-6}\;=\;0.0025\enspace.$$ %question (poisson) \Exercise In 1910, E.~Rutherford and H.~Geiger showed experimentally that the number of alpha particles emitted per second in a radioactive process is a random variable $X$ having a Poisson distribution. If the average number of particles emitted per second is 0.5, what is the probability of observing two or more particles during any given second? \Answer Since $X$ is $\poisson(\lambda)$ random variable with $\lambda=0.5$, $\P(X\geq2)$ is the probability of observing two or more particles during any given second. $$\P(X\geq 2)\;=\;1-\P(X<2)\;=\;1-\P(X=1)-\P(X=0)\enspace,$$ where $\P(X=1)$ and $\P(X=0)$ can be carried out by the Poisson probability mass function $$\P(X=x)\;=\; f(x)\;=\;\frac{\lambda^x}{x!}e^{-\lambda}\enspace.$$ Now \[\P(X=0)\;=\;\frac{0.5^0}{0!}\,\times \,e^{-0.5} \;=\; 0.6065\] and \[ \P(X=1)\;=\;\frac{0.5^1}{1!}\,\times \, e^{-0.5}\;=\; 0.3033\] and so $$\P(X\geq 2)\;=\; 1- 0.9098\;= \;0.0902\enspace.$$ %question (poisson) \Exercise The number of lacunae (surface pits) on specimens of steel, polished and examined in a metallurgical laboratory, is thought to have a Poisson distribution. \be \item Write down the formula for the probability that a specimen has $x$ defects, explaining the meanings of the symbols you use. \item Simplify the formula in the case $x=0$. \item In a large homogeneous collection of specimens, 10\% have one or more lacunae. Find (approximately) the percentage having exactly two. \item Why might the Poisson distribution not apply in this situation?\\[4pt] [{\scriptsize HINT: Recall the {\em emphasised sentence} in THINKING POISSON and what the continuum on which the number of events occur is for the problem, and what could possibly go wrong in your imagination of the manufacturing process of the steel specimens (normally you need to melt and manipulate iron with other elements and cast them in moulds and this needs energy and raw materials of possibly varying quality and the machines used in the process could break down, etc.) 
to violate the Poisson assumption about the occurrence of pits on the surface of the specimens.}]
\ee
\Answer
\be
\item The probability mass function for a $\poisson(\lambda)$ random variable $X$ is
\[\P(X=x)\;=\;f(x;\lambda)\;=\; \frac{e^{-\lambda}\lambda^x}{x!}\]
where $\lambda$ is the mean number of lacunae per specimen and $X$ is the random variable ``number of lacunae on a specimen''.
\item If $x=0$ then $x!=0!=1$ and $\lambda^x=\lambda^0=1$, and the formula becomes $\displaystyle \P(X=0) \,=\, e^{-\lambda}$.
\item Since $\P(X \geq 1) = 0.1$,
\[\P(X=0)\;=\; 1 - \P(X \geq 1)\;=\; 0.9\,.\]
Using (b) and solving for $\lambda$ gives:
\[ e^{-\lambda}\;=\; 0.9 \quad \text{that is,}\quad \lambda \;=\; -\ln(0.9) \;=\; 0.1\; (\text{approximately})\,.\]
Hence
\[\P(X=2)\;=\; \frac{e^{-0.1} (0.1)^2}{2!} \;=\;0.45\% \; (\text{approximately})\,. \]
\item Occurrences of lacunae may not always be independent. For example, a machine malfunction may cause them to be clumped.
\ee
\end{ExerciseList}
\section{Dynamic Bayesian networks}
\documentclass[12pt, letterpaper]{report}
\usepackage[margin=1in]{geometry}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{float}
\usepackage{subfig}
\graphicspath{ {./img/} }
\setlength\parindent{0pt}

\renewcommand\thesection{\Roman{section}.}
\renewcommand{\thesubsection}{\alph{subsection}.}

\title{CS1675 - Assignment 3}
\author{Zachary M. Mattis}

\begin{document}

\maketitle

\section{Problem 1 - Bernoulli Trials}

% A
\subsection{ML Estimate}
\[ \hat{\theta}(x) = 0.65 \]

% B
\subsection{$Beta(\theta | 1,1)$}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p1b.png}
\caption{Prior = 1,1}
\end{figure}

% C
\subsection{MAP Estimate}
\[ \textrm{MAP Estimate of } \theta = 0.65 \]

% D
\subsection{$Beta(\theta | 4,2)$}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p1d.png}
\caption{Prior = 4,2}
\end{figure}
\[ \textrm{MAP Estimate of } \theta = 0.6538 \]

\section{Problem 2 - Multivariate Gaussian}

% A
\subsection{Gaussian Scatter Plot}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p2a.png}
\caption{Gaussian Scatter Plot}
\end{figure}

% B
\subsection{ML Estimation}
\begin{table}[H]
\centering
\begin{tabular}{ |l|l| }
\hline
3.6377 & 7.8506 \\ \hline
\end{tabular}
\caption{Mean}
\end{table}

\begin{table}[H]
\centering
\begin{tabular}{ |l|l| }
\hline
3.6414 & 1.0779 \\ \hline
1.0779 & 3.7831 \\ \hline
\end{tabular}
\caption{Covariance}
\end{table}

\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p2b.png}
\caption{Gaussian 3-D}
\end{figure}

% C
\subsection{Individual Gaussian}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p2c.png}
\caption{Individual Gaussian}
\end{figure}

The first column of data from ``gaussian.txt'' is in blue, while the second is in red. The corresponding means are 3.6377 and 7.8506. The corresponding variances are 3.6414 and 3.7831.

% D
\subsection{Multivariate vs. Univariate}

I believe multivariate Gaussian models are a better model than two separate univariate models. Given a multivariate model, it is much easier to view any correlations between the data points as they are directly displayed together. Univariate models need to be examined side-by-side and any correlation must be interpreted by the viewer.

\section{Problem 3 - Exponential Distribution}

% A
\subsection{Density Function}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\columnwidth]{p3a.png}
\caption{Exponential Distribution}
\end{figure}

% B
\subsection{ML Estimate}

\[ f(x;\theta)=\frac{1}{\theta}{e}^{\frac{-x}{\theta}}, \quad 0<x<\infty, \quad \theta\in(0,\infty) \]
\[ L(\theta)=L\left(\theta;{x}_{1},{x}_{2}...{x}_{n} \right)=\left(\frac{1}{\theta}{e}^{\frac{{-x}_{1}}{\theta}}\right)\left(\frac{1}{\theta}{e}^{\frac{{-x}_{2}}{\theta}}\right)...\left(\frac{1}{\theta}{e}^{\frac{{-x}_{n}}{\theta}} \right)=\frac{1}{{\theta}^{n}}\exp\left(\frac{-\sum_{1}^{n}{x}_{i}}{\theta} \right) \]
\[ \ln L\left(\theta\right)=-\left(n \right)\ln\left(\theta\right) -\frac{1}{\theta}\sum_{1}^{n}{x}_{i}, \quad 0<\theta<\infty \]
\[ \frac{d\left[\ln L\left(\theta\right) \right]}{d\theta}=\frac{-\left(n \right)}{\left(\theta\right)} +\frac{1}{{\theta}^{2}}\sum_{1}^{n}{x}_{i}=0 \]
\[ \theta=\frac{\sum_{1}^{n}{x}_{i}}{n} \]
\[ \Theta=\frac{\sum_{1}^{n}{X}_{i}}{n} \]

\end{document}
\section{Networking} \begin{frame}{Networking in Rust} \pause Networking can be done using the \mintinline{rust}|std::net| standard library \pause \begin{itemize}[<+->] \item \mintinline{rust}|TcpListener| --- TCP Socket Server \item \mintinline{rust}|TcpStream| --- Stream between local and remote socket \item \mintinline{rust}|UdpSocket| --- Functionality for UDP communication \item \mintinline{rust}|IpAddr| --- IPv4 and IPV6 addresses \begin{itemize}[<+->] \item \mintinline{rust}|Ipv4Addr| \item \mintinline{rust}|Ipv6Addr| \end{itemize} \item \mintinline{rust}|SocketAddr| --- socket address of IPv4 and IPv6 \begin{itemize}[<+->] \item \mintinline{rust}|SocketAddrV4| \item \mintinline{rust}|SocketAddrV6| \end{itemize} \end{itemize} \end{frame} \begin{frame}{\mintinline{rust}|TcpListener|} \inputrust{code/tcplistener.rs} \end{frame} \begin{frame}{\mintinline{rust}|TcpStream|} \inputrust{code/tcpstream.rs} \end{frame} \begin{frame}{\mintinline{rust}|UdpSocket|} \inputrust{code/udpsocket.rs} \end{frame} \begin{frame}{Comparison with C} \resizebox{0.49\textwidth}{!}{\inputrust{code/server.rs}} \resizebox{0.49\textwidth}{!}{\inputc{code/server.c}} \end{frame}
\documentclass[crop=false,10pt,ngerman]{standalone} \usepackage{standard} \usepackage{tabularx} \begin{document} \section*{Symbol Table} \phantomsection \addcontentsline{toc}{section}{Symbol Table} \markboth{SYMBOL TABLE}{} \bigskip % \setlength\extrarowheight{5pt} \renewcommand{\arraystretch}{1.3} % \begin{table}[h] { \scriptsize \begin{tabularx}{\textwidth}{p{0.2\textwidth}p{0.79\textwidth}} \hline \textbf{Symbol} & \textbf{Definition} \\ \hline \hline \textbf{Logic} & \\ $\exists \ldots : \ldots$ & There exists $\ldots$, such that $\ldots$ \\ $a\define b$ & $a$ is defined by $b$. \\ \hline \textbf{Set Theory} & \\ $\set{\ldots}{}$ & Set Definition \\ $\set{\ldots}{\ldots}$ & Set Definition with Condition \\ $x\in A$ & $x$ is an element of the set $A$. \\ $A\subset B$ & The set $A$ is a subset of the set $B$. \\ $A\cap B$ & Intersection --- $\set{x}{x\in A \text{ and } x\in B}$ for sets $A,B$ \\ $A\cup B$ & Union --- $\set{x}{x\in A \text{ or } x\in B}$ for sets $A,B$ \\ $A\setminus B$ & Relative Complement --- $\set{x\in A}{x\not\in B}$ for sets $A,B$ \\ $A\times B$ & Cartesian Product --- $\set{(x,y)}{x\in A,y\in B}$ for sets $A$ and $B$ \\ $A^n$ & $n$-fold Cartesian Product of Set $A$ \\ $\emptyset$ & Empty set --- $\set{}{}$. \\ $\# A$ & Number of Elements in the Set $A$ \\ $\mathscr{P}(A)$ & Power Set of Set $A$ \\ \hline \textbf{Special Sets} & \\ $\setNatural$ & Set of Natural Numbers \\ $\setNatural_0$ & $\setNatural \cup \set{0}{}$ \\ $\mathds{P}$ & Set of Prime Numbers \\ $\setInteger$ & Set of Integers \\ $\setInteger_n$ & Set of Integers Modulo $n$ \\ $\mathds{F}_m$ & Finite Field with $m\in\mathds{P}$ Elements \\ $\mathds{F}^{p\times q}_m$ & Set of $p\times q$-Matrices over Finite Field $\mathds{F}_m$ \\ $\mathds{F}_2$ & Finite Field of Bits \\ $\mathds{F}_2^n$ & Set of $n$-bit Words \\ $\setReal$ & Set of Real Numbers \\ $\setReal^n$ & Set of $n$-dimensional Real Vectors \\ $\mathscr{S}^2$ & Set of Directions --- $\set{x\in\setReal^3}{\norm{x} = 1}$ \\ \hline \textbf{Functions} & \\ $\function{f}{X}{Y}$ & $f$ is a function with domain $X$ and range $Y$. 
\\ $\identity_X$ & Identity Function over the Set $X$ \\ $f\circ g$ & Composition of Functions $f$ and $g$ \\ $f^{-1}$ & Inverse Image of Function $f$ \\ $f^n$ & $n$-fold Composition of Function $f$ \\ \hline \textbf{Bit Arithmetic} & \\ $x_{n-1}\ldots x_{1}x_{0}$ & $n$-bit Word $x$ of Set $\mathds{F}_2^n$ \\ $x \leftarrow a$ & Left Shift of all Bits in $x$ by $a$ \\ $x \rightarrow a$ & Right Shift of all Bits in $x$ by $a$ \\ $x \circlearrowleft a$ & Circular Left Shift of all Bits in $x$ by $a$ \\ % $x \circleright a$ & Circular Right Shift of all Bits in $x$ by $a$ \\ $x \oplus y$ & Bit-Wise Addition of $x$ and $y$ \\ $x \odot y$ & Bit-Wise Multiplication of $x$ and $y$ \\ $x \ | \ y$ & Bit-Wise Or of $x$ and $y$ \\ \\ \hline \end{tabularx} % \end{table} % \begin{table}[H] % \scriptsize \newpage \noindent \begin{tabularx}{\textwidth}{p{0.2\textwidth}p{0.79\textwidth}} \hline \textbf{Symbol} & \textbf{Definition} \\ \hline \hline \textbf{Probability Theory} & \\ $\mathscr{B}(\setReal)$ & Borel σ-Algebra over $\setReal$ \\ $\roundBrackets{\Sigma,\mathscr{A}}$ & Measurable Space over $\Sigma$ with σ-Algebra $\mathscr{A}$ \\ $λ$ & Lebesgue Measure \\ $\integral{U}{}{f}{λ}$ & Lebesgue Integral of $f$ over $U$ \\ $\setIntegrable^2(U,λ)$ & Set of Square-Integrable Functions over the Set $U$ with Respect to the Lebesgue Measure λ \\ $\roundBrackets{\Omega,\mathscr{F},P}$ & Probability Space over $\Omega$ with σ-Algebra $\mathscr{A}$ and Probability Measure $P$ \\ $\integral{\Omega}{}{X}{P}$ & Integral of Random Variable $X$ with respect to Probability Space $\roundBrackets{\Omega,\mathscr{A},P}$ \\ $\integral{\Omega}{}{X(ω)}{P(ω)}$ & $\integral{\Omega}{}{X}{P}$ \\ $P_X$ & Distribution of Random Variable $X$ \\ $\expect X$ & Expectation Value of Random Variable $X$ \\ $\var X$ & Variance of Random Variable $X$ \\ $\stddev(X)$ & Standard Deviation of Random Variable $X$ \\ $\mathds{1}_A$ & Characteristic Function of Set $A$ \\ $δ_ω$ & Dirac Delta Distribution over $\mathscr{S}^2$ with respect to $ω\in\mathscr{S}^2$ \\ $\bigotimes_{n\in I} P_n$ & Product Measure of Measures $P_n$ Indexed by the Set $I$ \\ \hline \textbf{Miscellaneous} & \\ $(x_n)_{n\in I}$ & Sequence of Values $x_n$ with Index Set $I$ \\ $\absolute{x}$ & Absolute Value of $x$ \\ $\norm{x}$ & Norm of Vector $x$ \\ $x \mod y$ & $x$ Modulo $y$ \\ $\mathrm{gcd}(ρ,k)$ & Greatest Common Divisor of ρ and k \\ $\mathrm{max}(x,y)$ & Maximum of $x$ and $y$ \\ $\lim_{n\to\infty} x_n$ & Limit of Sequence $(x_n)_{n\in\setNatural}$ \\ $\sum_{k=1}^n x_k$ & Sum over Values $x_k$ for $k\in\setNatural$ with $k\leq n$ \\ $\dim X$ & Dimension of $X$ \\ $\ceilBrackets{x}$ & Ceiling Function \\ $\scalarProduct{x}{y}$ & Scalar Product \\ $[a,b]$ & $\set{x\in\setReal}{a\leq x \leq b}$ \\ $(a,b)$ & $\set{x\in\setReal}{a < x < b}$ \\ $[a,b)$ & $\set{x\in\setReal}{a\leq x < b}$ \\ \hline \textbf{Constants} & \\ $\infty$ & Infinity \\ $π$ & $3.1415926535\ldots$ --- Pi \\ \hline \textbf{Units} & \\ $1\appendUnit{B}$ & $1\appendUnit{Byte} = 8\appendUnit{bit}$ \\ $1\appendUnit{GiB}$ & $2^{30}\appendUnit{B}$ \\ $1\appendUnit{s}$ & $1\appendUnit{Seconds}$ \\ $1\appendUnit{min}$ & $1\appendUnit{Minutes} = 60\appendUnit{s}$ \\ $1\appendUnit{GHz}$ & $1\appendUnit{Gigahertz} = 10^9\appendUnit{Hertz}$ \\ \\ \hline \end{tabularx} } % \end{table} \end{document}
\section{Picture Processing Unit (PPU)}
\documentclass{article}

% if you need to pass options to natbib, use, e.g.:
% \PassOptionsToPackage{numbers, compress}{natbib}
% before loading neurips_2019

% ready for submission
% \usepackage{neurips_2019}

% to compile a preprint version, e.g., for submission to arXiv, add the
% [preprint] option:
% \usepackage[preprint]{neurips_2019}

% to compile a camera-ready version, add the [final] option, e.g.:
\usepackage[final]{neurips_2019}

% to avoid loading the natbib package, add option nonatbib:
% \usepackage[nonatbib]{neurips_2019}

\usepackage[utf8]{inputenc}    % allow utf-8 input
\usepackage[T1]{fontenc}       % use 8-bit T1 fonts
\usepackage{hyperref}          % hyperlinks
\usepackage{url}               % simple URL typesetting
\usepackage{booktabs}          % professional-quality tables
\usepackage{amsfonts}          % blackboard math symbols
\usepackage{nicefrac}          % compact symbols for 1/2, etc.
\usepackage{microtype}         % microtypography
\usepackage{graphicx}
\usepackage{xurl}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=blue,
    filecolor=magenta,
    urlcolor=blue,
}
\usepackage{bm}
\usepackage{enumitem}

\title{Dimensionality Reduction for Text Classification with Latent Dirichlet Allocation\thanks{Project repository: \url{https://github.com/KunyuHe/Dimensionality-Reduction-for-Text-Classification-with-LDA}}}

% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to LaTeX to determine where to break the
% lines. Using \AND forces a line break at that point. So, if LaTeX puts 3 of 4
% authors names on the first line, and the last on the second line, try using
% \AND instead of \And before the third author name.

\author{%
  Kunyu He\thanks{Kunyu holds a Bachelor of Science from Nanjing University and is pursuing his Master of Science in Computational Analysis and Public Policy. LinkedIn profile: \url{https://www.linkedin.com/in/kunyuhe/}, GitHub profile: \url{https://github.com/KunyuHe}.}\\
  Harris School of Public Policy\\
  The University of Chicago\\
  Chicago, IL 60637 \\
  \texttt{[email protected]}
}

\begin{document}

\maketitle

\begin{abstract}
The curse of dimensionality is not rare in text classification tasks, and overfitting can be a direct result. This project attempts to incorporate a topic modeling technique, Latent Dirichlet Allocation (LDA), as a preprocessing step in the classification pipeline to address the issue. Using extracted topic distributions as encoding vectors, each document is represented as a linear combination of the latent topics. Empirical results show that although incorporating LDA might harm model performance, it can reduce the dimensionality of the data and address overfitting significantly.
\end{abstract}

\section{Introduction}
\label{intro}

In the text classification problem, we wish to classify a \emph{document} (a sequence of $n_i$ words denoted by $w=(w_1, w_2, ..., w_{n_i})$) into two or more mutually exclusive classes. Use cases include labeling user reviews according to their sentiment, or categorizing unstructured documents into standardized subcategories. Since text data is typically not in a format that machine learning models can handle, we need to convert it into a numeric representation. \emph{Bag-of-words} comes in handy.
It discards most of the structure of the input text and forms the \emph{document-term matrix (DTM)}, with documents on the rows, unique words on the columns, and each entry representing the frequency of a \emph{term} in a document of the \emph{corpus} (a collection of $M$ documents denoted by $D = \{\bm{w_1}, \bm{w_2}, ..., \bm{w_M}\}$).

Treating individual words as features yields an informative but very large feature set (Joachims, 1999). A large vocabulary can create problems like data sparsity, which is problematic for out-of-sample tests: for documents with words that did not appear in the training corpus, maximum likelihood estimates of the multinomial parameters would assign zero probability to them. On the other hand, when the DTM is high-dimensional ($M< \bar{N}$, where $\bar{N}$ is the vocabulary size), classifiers might suffer from the curse of dimensionality and \emph{overfit}. Adding to that, sparse matrices can make subsequent computations harder and compromise the efficiency of data storage.

Significant progress has been made on this problem and researchers have proposed many approaches to find lower-dimensional representations of the DTM. \emph{Latent semantic indexing (LSI)} (Deerwester et al., 1990) is one of them. LSI applies \emph{singular value decomposition} to identify a subspace approximation of the \emph{term frequency-inverse document frequency (tf-idf)} (Salton and McGill, 1983) rescaled DTM. It also captures some aspects of basic linguistic notions through the linear combinations of the original \emph{tf-idf} features. From a generative probabilistic perspective, one significant step forward is the \emph{probabilistic LSI (pLSI)} model (Hofmann, 1999). However, pLSI requires a distribution for each document, resulting in a model where the set of parameters grows with the size of the corpus, and we cannot associate topics with new documents outside the training corpus. Blei et al. proposed the \emph{Latent Dirichlet allocation (LDA)} model as an improvement (Blei et al., 2003).

The report is organized as follows. In Section \ref{lda} we describe LDA. We demonstrate the workflow and summarize the empirical results in Section \ref{results} and discuss the caveats of the project in Section \ref{discussion}.

\section{Latent Dirichlet Allocation}
\label{lda}

The idea of LDA is that documents are represented as random mixtures over latent topics \emph{(latent multinomial variables)}, where each topic is characterized by a distribution over words. LDA assumes the following generative process for each document $w$ in a corpus $D$ (Blei et al., 2003):

\begin{enumerate}
  \item Choose $N \sim Poisson(\xi).$
  \item Choose $\theta \sim Dir(\alpha).$
  \item For each of the $N$ words $w_n$:
  \begin{enumerate}
    \item Choose a topic $z_n \sim Multinomial(\theta)$.
    \item Choose a word $w_n$ from $p(w_n | z_n, \beta)$, a multinomial probability conditioned on the topic $z_n$.
  \end{enumerate}
\end{enumerate}

The key inferential problem to be solved is computing the posterior distribution of the hidden variables given a document:
$$p(\theta, z|w, \alpha, \beta)=\frac{p(\theta, z, w|\alpha, \beta)}{p(w|\alpha, \beta)}$$
The posterior distribution is intractable for exact inference (Dickey, 1983). However, a wide variety of approximate inference algorithms can be considered, including \emph{Laplace approximation}, \emph{variational approximation}, and \emph{Markov chain Monte Carlo} (Jordan, 1999). These are beyond the scope of this report.
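As a concrete illustration of this generative story, the short listing below draws one synthetic document from the model. It is only a sketch under assumed values of $K$, $\alpha$, $\beta$, $\xi$ and a toy vocabulary; it is not part of the project code.

\begin{verbatim}
# Toy illustration of LDA's generative process (not project code).
# K, alpha, beta, xi and the vocabulary are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["film", "actor", "plot", "score", "scene", "camera"]
K, alpha, beta, xi = 3, 0.1, 0.01, 8

# Each topic is a distribution over the vocabulary.
topics = rng.dirichlet([beta] * len(vocab), size=K)

N = rng.poisson(lam=xi)              # 1. document length N ~ Poisson(xi)
theta = rng.dirichlet([alpha] * K)   # 2. topic proportions theta ~ Dir(alpha)

words = []
for _ in range(N):                   # 3. for each word position:
    z = rng.choice(K, p=theta)       #    (a) topic z_n ~ Multinomial(theta)
    words.append(rng.choice(vocab, p=topics[z]))  # (b) w_n ~ p(w | z_n, beta)

print("theta =", np.round(theta, 3))
print("document:", " ".join(words))
\end{verbatim}

Fitting LDA inverts this process: given only the observed documents, it recovers the topic distributions and each document's topic proportions, which is exactly the document-topics representation used in the next section.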
Given the hyperparameters $K$ (the assumed number of latent topics, $K\ll \bar{N}$), $\alpha$, and $\beta$, we can obtain an approximated posterior for each document and form the \emph{document-topics matrix (DtM)} of shape $M \times K$. The DtM is a lower-dimensional representation of the DTM, and it can be used as input for document classifiers like \emph{Logistic Regression}, \emph{Support Vector Machine}, or \emph{Na\"ive Bayes}.

\section{Problem Definition and Empirical Results}
\label{results}
This project incorporates LDA as the dimensionality reduction step in text classification pipelines. It also compares the empirical results against pipelines that only apply \emph{tf-idf} rescaling. The workflow for using LDA or LSI for dimensionality reduction is described in Figure \ref{fig:workflow} below.

First, we apply the same cleaning process to both the training and test corpora and transform them into their numeric representations. The cleaning process incorporates a bag-of-words model with \emph{bigrams} (pairs of words); it also removes punctuation, lemmatizes the words, and excludes a list of standard and \href{https://github.com/KunyuHe/Dimensionality-Reduction-for-Text-Classification-with-LDA/blob/master/data/extended_stopwords.txt}{extended stopwords}. Second, we split the original training set into a training and a validation set. We then fit a set of LDA (or LSI) models on the training DTM (or \emph{tf-idf} rescaled training DTM) with different $K$, transform it into its lower-dimensional representation, the DtM, and use it as input for a linear classifier. We set the document-topic prior $\alpha$ to $K/50$ and the topic-word prior $\beta$ to $0.01$ according to empirical research (Wei and Croft, 2006). With grid search on the validation set, we find the best $K$ for the LDA model and the best set of hyperparameters for the classifier by maximizing the evaluation metric. Third, we refit the pipeline on \emph{the original} training DTM and make predictions with the transformed test DTM.

\begin{figure}[h!]
\centering
\fbox{\includegraphics[width=\linewidth]{LDA_clf_workflow.png}}
\caption{Text Classification Pipeline using LDA or LSI as a Preprocessing Step}
\label{fig:workflow}
\end{figure}

We use a dataset of 50,000 movie reviews from the IMDb (Internet Movie Database) website collected by Andrew Maas (Maas et al., 2011). Each review in the dataset is labeled ``positive'' or ``negative'' according to its rating of the movie from 1 to 10. Reviews with a score of 6 or higher are labeled as positive. Half of the dataset is used as the training set and the other half as the test set. Both are perfectly \emph{balanced}. Our goal is to build a text classification pipeline that distinguishes positive reviews from negative ones. The classifier should generalize well enough to unseen movie reviews and tag them with the correct sentiment labels.

We use \emph{Logistic Regression} as our classifier. It is easy to interpret, fast to train, and performs well on large and high-dimensional datasets. The evaluation metric we use is \emph{AUC}, the area under the curve of the \emph{true positive rate} (\emph{recall}) plotted against the \emph{false positive rate}. $K$, $\alpha$, the regularization parameter $\lambda$, and other hyperparameters are determined by optimizing AUC on the validation set. We then refit the pipeline on the original training set, transform the test set, and report the training and test performance.

\begin{figure}[h!]
\centering
\fbox{\includegraphics[width=\linewidth]{topic_word.png}}
\caption{Illustration of the Output of an LDA Model}
\label{fig:topic_word}
\end{figure}

Figure \ref{fig:topic_word} illustrates the output of an LDA model. It shows the top 5 words (sorted by conditional probability on the corresponding topic) in each of 13 arbitrarily chosen topics, out of the 130 topics extracted through the LDA step of the ``best'' classification pipeline. From these words, we infer that the latent topics are organized mostly around movie genres. LDA takes an unsupervised approach, and the latent semantic structure that the DtM represents does not necessarily lead to better classification results. Figure \ref{fig:pr_roc} shows the \emph{precision-recall trade-off} and the \emph{Receiver Operating Characteristic (ROC)} curve of the best pipeline trained with the whole training corpus.

\begin{figure}[h!]
\centering
\fbox{\includegraphics[width=\linewidth]{pr_roc.png}}
\caption{Precision-recall Trade-off and ROC Curve for the ``Best'' Classification Pipeline}
\label{fig:pr_roc}
\end{figure}

As an experiment, we also use different proportions of the original training corpus to train the classification pipeline that incorporates LDA. We also list the results from pipelines that only use \emph{tf-idf}. Figure \ref{fig:comp} compares how their test AUC, the difference between test and training AUC, the number of features, and the training time change with increased proportions of the original training corpus used.

\begin{figure}[h!]
\centering
\fbox{\includegraphics[width=\linewidth]{comp.png}}
\caption{Comparing the LDA Pipeline with the tf-idf Pipeline on Different Training Corpus Sizes}
\label{fig:comp}
\end{figure}

The test AUC for the classification pipeline with LDA preprocessing is lower by approximately 0.07. Moreover, its training time grows linearly with the training set size, while that of the \emph{tf-idf} pipeline remains nearly constant. On the bright side, the difference between test and training AUC is smaller for the LDA classification pipeline by about 0.06, and it decreases steadily with sample size. In addition, the reduction in dimensionality is very significant.

\section{Discussion}
\label{discussion}
In this project we incorporate LDA as a dimensionality reduction tool in the text classification pipeline. We introduce the LDA model, describe the workflow of the classification pipeline with LDA/LSI preprocessing, and evaluate the pipeline through a sentiment classification task on the IMDb movie review data. From the empirical results, incorporating LDA can harm model performance and lead to longer training time. However, it reduces the dimensionality of the training data significantly and can help reduce overfitting.

The project has several caveats. Due to the long training time, we did not use grid search to find the best decision threshold for the classifier and might have failed to reach the optimal AUC. Since grid search for LSI takes very long, we did not include LSI in the classification pipeline performance comparison.

\clearpage
\section*{References}
\medskip
\small

[1] Blei, D.M., Ng, A.Y.\ \& Jordan, M.I.\ (2003) Latent Dirichlet allocation. {\it Journal of Machine Learning Research} {\bf 3}:993--1022.

[2] Dickey, J.\ (1983) Multiple hypergeometric functions: Probabilistic interpretations and statistical uses. {\it Journal of the American Statistical Association} {\bf 82}:773--781.

[3] Hofmann, T.\ (1999) Probabilistic latent semantic indexing. In {\it Proceedings of the Twenty-Second Annual International SIGIR Conference}.
[4] Jordan, M.\ (1999) {\it Learning in Graphical Models}. Cambridge, MA: MIT Press.

[5] Joachims, T.\ (1999) Making large-scale SVM learning practical. In {\it Advances in Kernel Methods -- Support Vector Learning}. Cambridge, MA: MIT Press.

[6] Maas, A.L., Daly, R.E., Pham, P.T., Huang, D., Ng, A.Y.\ \& Potts, C.\ (2011) Learning word vectors for sentiment analysis. In {\it Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}.

[7] Salton, G.\ \& McGill, M.\ (1983) {\it Introduction to Modern Information Retrieval}. Chicago, IL: McGraw-Hill.

[8] Wei, X.\ \& Croft, W.B.\ (2006) LDA-based document models for ad-hoc retrieval. In {\it SIGIR}.

\end{document}
\subsection{Head Mount}

As described earlier in Section \ref{sec:nao:vision}, the integrated vision hardware of the NAO is not sufficient to provide the precise three-dimensional data required by the complex algorithms that track human skeletal joints. Therefore, the Asus Xtion PRO LIVE 3D camera was chosen for this gesture recognition system. Section \ref{sec:sol:impl} shows the final architecture, which proposes to mount the Asus Xtion on the head of the robot.

\input{chapter/figures/xtion-mount}

\paragraph*{3D Printed NAO Xtion Mount}
We found a mount designed by emotion-robotics.com and used their 3D model to print it on a MakerBot Replicator 5th Generation 3D printer, as shown in Figure \ref{fg:xtion:mount}. The original base of the Asus Xtion is removed, and the camera is screwed onto the 3D-printed mount, which is then easily fixed onto the head of the robot.
\documentclass[paper=a4, fontsize=11pt]{scrartcl} % A4 paper and 11pt font size \usepackage[T1]{fontenc} % Use 8-bit encoding that has 256 glyphs \usepackage{fourier} % Use the Adobe Utopia font for the document - comment this line to return to the LaTeX default \usepackage[english]{babel} % English language/hyphenation \usepackage{amsmath,amsfonts,amsthm,amssymb} % Math packages \usepackage{algorithm, algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} %Use Input in the format of Algorithm \renewcommand{\algorithmicensure}{\textbf{Output:}} %UseOutput in the format of Algorithm \usepackage{graphicx} \usepackage{listings} \lstset{language=Matlab} \usepackage{lipsum} % Used for inserting dummy 'Lorem ipsum' text into the template \usepackage{sectsty} % Allows customizing section commands \allsectionsfont{\centering \normalfont\scshape} % Make all sections centered, the default font and small caps \usepackage{fancyhdr} % Custom headers and footers \pagestyle{fancyplain} % Makes all pages in the document conform to the custom headers and footers \fancyhead{} % No page header - if you want one, create it in the same way as the footers below \fancyfoot[L]{} % Empty left footer \fancyfoot[C]{} % Empty center footer \fancyfoot[R]{\thepage} % Page numbering for right footer \renewcommand{\headrulewidth}{0pt} % Remove header underlines \renewcommand{\footrulewidth}{0pt} % Remove footer underlines \setlength{\headheight}{13.6pt} % Customize the height of the header \numberwithin{equation}{section} % Number equations within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4) \numberwithin{figure}{section} % Number figures within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4) \numberwithin{table}{section} % Number tables within sections (i.e. 1.1, 1.2, 2.1, 2.2 instead of 1, 2, 3, 4) \setlength\parindent{0pt} % Removes all indentation from paragraphs - comment this line for an assignment with lots of text %---------------------------------------------------------------------------------------- % TITLE SECTION %---------------------------------------------------------------------------------------- \newcommand{\horrule}[1]{\rule{\linewidth}{#1}} % Create horizontal rule command with 1 argument of height \title{ \normalfont \normalsize \textsc{Shanghai Jiao Tong University, UM-SJTU JOINT INSTITUTE} \\ [25pt] % Your university, school and/or department name(s) \horrule{0.5pt} \\[0.4cm] % Thin top horizontal rule \huge Methods of Applied Mathematics I\\ HW7 \\ % The assignment title \horrule{2pt} \\[0.5cm] % Thick bottom horizontal rule } \author{Yu Cang \quad 018370210001\\ Zhiming Cui \quad 017370910006} % Your name \date{\normalsize \today} % Today's date or a custom date \begin{document} \maketitle % Print the title \section{Exercise7.1} \begin{enumerate} \item \begin{proof} Since $|\lambda|=1$, then \begin{equation} ||e_\lambda^{(N)}||_2 = \frac{1}{\sqrt{N+1}} \sqrt{\sum_{i=0}^{N} \lambda^{2i}} = \frac{1}{\sqrt{N+1}} \sqrt{N+1} = 1 \end{equation} \end{proof} \item \begin{proof} Since \begin{equation} \begin{aligned} (L - \lambda I) e_\lambda^{(N)} & = L e_\lambda^{(N)} - \lambda e_\lambda^{(N)}\\ & = \frac{1}{\sqrt{N+1}}\big[(\lambda, \lambda^2, ... , \lambda^N, 0, 0, ...) - (\lambda, \lambda^2, ... , \lambda^N, \lambda^{N+1}, 0, ...)\big]\\ & = \frac{-1}{\sqrt{N+1}}(0, 0, ..., 0, \lambda^{N+1}, 0, ...) 
\end{aligned} \end{equation} Together with $|\lambda|=1$, then \begin{equation} \lim\limits_{N\rightarrow\infty} ||(L - \lambda I) e_\lambda^{(N)}||_2 = \lim\limits_{N\rightarrow\infty} \frac{|\lambda|^{N+1}}{\sqrt{N+1}} = \lim\limits_{N\rightarrow\infty} \frac{1}{\sqrt{N+1}} = 0 \end{equation} \end{proof} \item Not figured out... \item \begin{equation} Ru = \lambda u \end{equation} Denote $u = (a_0, a_1, ..., a_n, ...)$, then \begin{equation} \begin{aligned} 0 &= \lambda a_0\\ a_0 &= \lambda a_1\\ ...\\ \end{aligned} \end{equation} \end{enumerate} \section{Exercise7.2} \begin{enumerate} \item \begin{proof} Take a subset of the domain of $L^{-1}$, which is denoted as $M$ and is defined as \begin{equation} M = \Big\{u \in L^2\big([0, 1]\big) \Big| \exists \xi >0 \ \forall x \in [0, \xi] \ u(x) = 0\Big\} \end{equation} Given any $u(x) \in L^2$, then \begin{equation} \int_{0}^{1} u^2(x) dx < \infty \end{equation} Hence $u(x)$ is bounded over $[0, 1]$. \\ Let \begin{equation} T \triangleq \sup \limits_{x\in [0, 1]} |u(x)| \end{equation} then, $\forall \epsilon > 0$, $\exists v \in M $ s.t. \begin{equation} v(x) = \Bigg\{\begin{aligned} 0, &\quad x \in [0, \delta]\\ u(x), &\quad x \in (\delta, 1] \end{aligned} \end{equation} where $\delta = \frac{\epsilon}{T^2}$. Therefore \begin{equation} d(u, v) \triangleq ||u-v||_2 = \int_{0}^{1} [u(x) - v(x)]^2 dx = \int_{0}^{\delta} u^2(x) dx \leq T^2 \delta = \epsilon \end{equation} which indicates that $M$ is dense in $L^2$, so do the domain of $L^{-1}$. \end{proof} \item \begin{proof} \begin{equation} ||L|| = \sup_{u\in L^2}\frac{||Lu||_2}{||u||_2} = \sup_{u\in L^2}\frac{||xu(x)||_2}{||u(x)||_2} = \sup_{u\in L^2}\frac{|x| ||u||_2}{||u||_2} = \sup_{x\in [0, 1]} |x| = 1 \end{equation} and \begin{equation} ||L^{-1}|| = \sup_{u\in L^2}\frac{||L^{-1}u||_2}{||u||_2} = \sup_{x\in [0, 1]} \frac{1}{|x|} = \infty \end{equation} hence $L^{-1}$ is unbounded. \end{proof} \item The state of $L$ is $(I, 1_n)$.\\ The state of $L^{-1}$ is $(I, 2_c)$. \item Yes \item Since \begin{equation} [(L - \lambda I)u] (x) = (Lu)(x) - \lambda u(x) = (x-\lambda) u(x) \end{equation} The inverse of $(L - \lambda I)$ always exists, therefore $\sigma(L) = \emptyset$. \end{enumerate} \section{Exercise7.3} \begin{enumerate} \item \begin{proof} It's clear that $L^{-1}$ is the differentiate operator, and $L^{-1}$ is unbounded. So $L$ has unbounded inverse.\\ Since the domain of $L$ is composed of square-integrable functions over $[0,1]$, say \begin{equation} \int_{0}^{1} f^2(x) dx < \infty \end{equation} An element within the range of $L$ is \begin{equation} g(x) = \int_{0}^{x} f(t) dt \end{equation} Then, if $f(x)$ is a polynomial in $L^2$ , it must be bounded over $[0, 1]$ as it is continous. Denote the supreme of $f(x)$ as $M$, then $g(x) \leq Mx$. Hence $g(x)$ is square-integrable over $[0,1]$, say \begin{equation} \int_{0}^{1} g^2(x) dx \leq M^2 \int_{0}^{1} x^2 dx < \infty \end{equation} Clearly, the domain of $L$ doesn't contain all the polynomials and therefore the range of $L$ is open and incomplete. The boundary of the range of $L$ are the limits of squences like $f_n(x) = nx$ when $n\rightarrow \infty$.\\ Hence, the state of $L$ is $(III, 1_n)$. 
\end{proof} \item \begin{equation} L^* = L \end{equation} \end{enumerate} \section{Exercise7.4} \begin{proof} For $p = 1$ \begin{equation} \begin{aligned} RHS & \triangleq ||(a_n)||_1 \cdot ||(b_n)||_1 = \sum_{i=0}^{\infty} |a_i| \cdot \sum_{j=0}^{\infty} |b_j| = \sum_{n=0}^{\infty} \sum_{i+j=n} |a_i||b_j|\\ & \geq \sum_{n=0}^{\infty} |\sum_{i+j=n} a_i b_j| = \sum_{n=0}^{\infty} |c_n| = ||(c_n)||_1 \triangleq LHS \end{aligned} \end{equation} For $p>1$, take $q > 0$ s.t. $\frac{1}{p} + \frac{1}{q} = 1$\\ Then, using the Holder's inequality \begin{equation} \begin{aligned} LHS & \triangleq ||(c_n)||_p = \Big(\sum_{n=0}^{\infty} |c_n|^p\Big)^{\frac{1}{p}} = \Big(\sum_{n=0}^{\infty} \Big|\sum_{i+j=n}a_i b_j\Big|^p\Big)^{\frac{1}{p}}\\ & = \Big(\sum_{n=0}^{\infty} \Big|\sum_{i+j=n}a_i b_j^\frac{1}{p} b_j^{\frac{1}{q}}\Big|^p\Big)^{\frac{1}{p}} \leq \Big(\sum_{n=0}^{\infty} \Big|\sum_{i+j=n}\Big(|a_i| |b_j|^\frac{1}{p}\Big) |b_j|^{\frac{1}{q}}\Big|^p\Big)^{\frac{1}{p}}\\ & \leq \Big(\sum_{n=0}^{\infty} \Big[\Big(\sum_{i+j=n}|a_i|^p |b_j|\Big)^\frac{1}{p} \Big(\sum_{j=0}^{n}|b_j|\Big)^\frac{1}{q}\Big]^p\Big)^{\frac{1}{p}} \\ & = \Bigg[\sum_{n=0}^{\infty} \Big(\sum_{i+j=n}|a_i|^p |b_j|\Big) \Big(\sum_{j=0}^{n}|b_j|\Big)^\frac{p}{q}\Bigg]^\frac{1}{p} \\ & \leq \Big(\sum_{i=0}^{\infty}|a_i|^p\Big)^\frac{1}{p}\cdot\Big(\sum_{j=0}^{\infty}|b_j|\Big)^\frac{1}{p}\cdot\Big(\sum_{j=0}^{\infty}|b_j|\Big)^\frac{1}{q}\\ & = ||(a_n)||_p \cdot ||(b_n)||_1 \triangleq RHS \end{aligned} \end{equation} \end{proof} \end{document}
\subsection{Full List of Keywords} \addtocounter{table}{1} \index{Keywords!full list of} \addcontentsline{lot}{table}{\protect\numberline{\thetable}{\ignorespaces Full list of Keywords used in MOPAC}} \begin{description} \item[\comp{\&}] Turn next line into keywords \item[\comp{+}] Add another line of keywords \item[\comp{0SCF}] Read in data, then stop \item[\comp{1ELECTRON}] Print final one-electron matrix \item[\comp{1SCF}] Do one scf and then stop \item[\comp{AIDER}] Read in ab-initio derivatives \item[\comp{AIGIN}] Geometry must be in gaussian format \item[\comp{AIGOUT}] In arc file, include ab-initio geometry \item[\comp{ALLBONDS}] Print final bond-order matrix, including bonds to hydrogen \item[\comp{ALLVEC}] Print all vectors (keywords vectors also needed) \item[\comp{ALT\_a={\em A}}] In pdb files with alternative atoms, select atoms {\em A} \item[\comp{ALT\_r={\em A}}] In pdb files with alternative residues, select residues {\em A} \item[\comp{ANALYT}] Use analytical derivatives of energy wrt geometry \item[\comp{AUTOSYM}] Symmetry to be imposed automatically \item[\comp{AM1}] Use the AM1 hamiltonian \item[\comp{BAR=$n.n$}] reduce bar length by a maximum of $n.n$ \item[\comp{BCC}] Solid is body-centered cubic (used by BZ) \item[\comp{BFGS}] Optimize geometries using bfgs procedure \item[\comp{BIGSCF}] If restart, do a full scf to re-form starting density matrix \item[\comp{BIGCYCLES}] Do a maximum of $n$ big steps \item[\comp{BIRADICAL}] System has two unpaired electrons \item[\comp{BONDS}] Print final bond-order matrix \item[\comp{C.I.}] A multi-electron configuration interaction specified \item[\comp{CHARGE=$n$}] Charge on system = $n$ (e.g.\ NH4 $\Rightarrow$ charge=1) \item[\comp{CIS}] C.I.\ uses 1 electron excitations only \item[\comp{CISD}] C.I.\ uses 1 and electron excitations \item[\comp{CISDT}] C.I.\ uses 1, 2 and 3 electron excitations \item[\comp{COMPFG}] Print heat of formation calculated in COMPFG \item[\comp{CONNOLLY}] Use Connolly surface \item[\comp{CROSS}] Determine intersystem crossing geometry \item[\comp{CUTOF1=$n.nn$}] Cutoff for polarization functions is $n.nn$ {\aa}ngstroms \item[\comp{CUTOF2=$n.nn$}] Cutoff for nddo approximation is $n.nn$ {\aa}ngstroms \item[\comp{CUTOFP=$n.nn$}] Madelung distance cutoff is $n.nn$ {\aa}ngstroms \item[\comp{CUTOFS=$n.nn$}] Overlap cutoff distance is $n.nn$ {\aa}ngstroms \item[\comp{CVB}] User defined chemical bonds (used in making Lewis structures) \item[\comp{CYCLES=$n$}] Do a maximum of $n$ steps \item[\comp{DAMP=$n.nn$}] Use damping factor $n.nn$ in SCF calculation \item[\comp{DCART}] Print part of working in DCART \item[\comp{DDMAX}] See EF code \item[\comp{DDMIN}] Minimum trust radius in a EF/TS calculation \item[\comp{DEBUG}] Debug option turned on \item[\comp{DENOUT}] Density matrix output (channel 10) \item[\comp{DENOUT=$n$}] Density matrix output every $n$ seconds \item[\comp{DENSITY}] Print final density matrix \item[\comp{DEP}] Generate Fortran code for parameters for new elements \item[\comp{DERIV}] Print part of working in DERIV \item[\comp{DFORCE}] Force calculation specified, also print force matrix. 
\item[\comp{DFP}] Use Davidson-Fletcher-Powell method to optimize geometries \item[\comp{DIAGG2}] Print part of working in DIAGG2 \item[\comp{DIPOLE}] Fit the ESP to the calculated dipole \item[\comp{DIPX}] X component of dipole to be fitted \item[\comp{DIPY}] Y component of dipole to be fitted \item[\comp{DIPZ}] Z component of dipole to be fitted \item[\comp{DISEX}] Distance for interactions in fine grid in COSMO \item[\comp{DMAX}] Maximum stepsize in eigenvector following \item[\comp{DMIN}] See EF code \item[\comp{DOUBLET}] Doublet state required \item[\comp{DRC}] Dynamic reaction coordinate calculation \item[\comp{DUMP=$n$}] Write restart files every $n$ seconds \item[\comp{ECHO}] Data are echoed back before calculation starts \item[\comp{EF}] Use ef routine for minimum search \item[\comp{EIGEN}] Generate canonical eigenvectors instead of LMOs \item[\comp{EIGS}] Print all eigenvalues in ITER \item[\comp{ENPART}] Partition energy into components \item[\comp{EPS}] Dielectric constant in COSMO calculation \item[\comp{ESP}] Electrostatic potential calculation \item[\comp{ESPRST}] Restart of electrostatic potential \item[\comp{ESR}] Calculate RHF spin density \item[\comp{EXCITED}] Optimize first excited singlet state \item[\comp{EXTERNAL}] Read parameters off disk \item[\comp{FIELD}] An external electric field is to be used \item[\comp{FILL=$n$}] In RHF open and closed shell, force M.O.\ $n$ to be filled \item[\comp{FLEPO}] Print details of geometry optimization \item[\comp{FMAT}] Print details of working in FMAT \item[\comp{FOCK}] Print last Fock matrix \item[\comp{FORCE}] Force calculation specified \item[\comp{GEO-OK}] override interatomic distance check \item[\comp{GEOCHK}] Print part of working in GEOCHK \item[\comp{GNMIN}] Minimize gradients in EF \item[\comp{GNORM=$n.n$}] Exit when gradient norm drops below $n.n$ \item[\comp{GRADIENTS}] Print all gradients \item[\comp{GRAPH}] Generate file for graphics \item[\comp{GREENF}] Correct I.P.s using Green's function technique \item[\comp{H2O}] Use water as the solvent in MST calculations \item[\comp{HCORE}] Print details of working in HCORE \item[\comp{HESS=$n$}] options for calculating hessian matrices in EF \item[\comp{H-PRIO}] Heat of formation takes priority in DRC \item[\comp{HYBRID}] Print details of working in HYBRID \item[\comp{HYPERFINE}] Hyperfine coupling constants to be calculated \item[\comp{IMEM=$nn$}] Integer memory increased by $nn$ \item[\comp{INT}] Make all coordinates internal coordinates \item[\comp{IRC}] Intrinsic reaction coordinate calculation \item[\comp{ISOTOPE}] Force matrix written to disk (channel 9 ) \item[\comp{ITER}] Print details of working in ITER \item[\comp{ITRY=$n$}] Set limit of number of SCF iterations to $n$ \item[\comp{IUPD}] Mode of Hessian update in eigenvector following \item[\comp{K=($n$,$n$)}] Brillouin zone structure to be calculated \item[\comp{KINETIC}] Excess kinetic energy added to DRC calculation \item[\comp{LEWIS}] Print details of working in LEWIS \item[\comp{LINMIN}] Print details of line minimization \item[\comp{LARGE}] Print expanded output \item[\comp{LET}] Override certain safety checks \item[\comp{LOCALIZE}] Print localized orbitals \item[\comp{MAX}] In GRID, use maximum grid size (23*23) \item[\comp{MECI}] Print details of MECI calculation \item[\comp{MEP=1}] Print Orozco-Luque electrostatic potentials for cross-section \item[\comp{MEP=2}] Print Orozco-Luque electrostatic potentials for Connolly surface \item[\comp{MERS=($n$)}] Keyword for BZ \item[\comp{MICROS}] Use specific 
microstates in the C.I. \item[\comp{MINDO/3}] use the MINDO/3 hamiltonian \item[\comp{MINMEP}] Minimize MEP minima in the plane defined \item[\comp{MMOK}] Use molecular mechanics correction to CONH bonds \item[\comp{MNDO}] Use the MNDO hamiltonian \item[\comp{MNDO-d}] Use the MNDO-d hamiltonian \item[\comp{MODE=$n$}] In EF, follow Hessian mode no.\ $n$ \item[\comp{MOLDAT}] Print details of working in MOLDAT \item[\comp{MS=$n$}] In MECI, magnetic component of spin \item[\comp{MULLIK}] Print the Mulliken population analysis \item[\comp{NEWGEO}] Make backbone atoms Cartesian and side-chain internal \item[\comp{NLLSQ}] Minimize gradients using NLLSQ \item[\comp{NLMO=$nnn$}] define average size of LMOs as $nnn$ \item[\comp{NOANCI}] Do not use analytical C.I. derivatives \item[\comp{NOINT}] Do not print internal coordinates \item[\comp{NOLOG}] Suppress log file trail, where possible \item[\comp{NOMM}] Do not use molecular mechanics correction to CONH bonds \item[\comp{NONET}] NONET state required \item[\comp{NONR}] Do not use Newton-Raphson method in EF \item[\comp{NOREOR}] In symmetry work, use supplied orientation \item[\comp{NOSYM}] Point-group symmetry set to C1 \item[\comp{NOTHIEL}] Do not use Thiel's FSTMIN technique \item[\comp{NSPA}] Sets number of geometric segments in COSMO \item[\comp{NSURF=$n$}] Number of surfaces in an ESP calculation \item[\comp{NOXYZ}] Do not print Cartesian coordinates \item[\comp{NSURF}] Number of layers used in electrostatic potential \item[\comp{OCTET}] Octet state required \item[\comp{OLDENS}] Read initial density matrix off disk \item[\comp{OLDFPC}] Use the old fundamental physical constants \item[\comp{OLDGEO}] Previous geometry to be used \item[\comp{OMIN}] In TS, minimum allowed overlap of eigenvectors \item[\comp{OPEN}] Open-shell RHF calculation requested \item[\comp{OPTBACK}] Set optimization flags for backbone atoms in a protein \item[\comp{OPTRES}] Set optimization flags for side-chain atoms in a protein \item[\comp{ORT}] Use Reynolds-Richards molecular electrostatic pot. \item[\comp{P=$n.nn$}] An external pressure of $n.nn$ NM$^{-2}$ or N is applied to a solid or polymer. \item[\comp{PARASOK}] In AM1 calculations some MNDO parameters are to be used \item[\comp{PDB}] Geometry is in protein data bank format \item[\comp{PDB($text$)}] User defined chemical symbols in protein data base \item[\comp{PDBOUT}] Output geometry in pdb format \item[\comp{PECI}] C.I. 
involves paired excitations only \item[\comp{PI}] Resolve density matrix into $\sigma$ and $\pi$ bonds \item[\comp{PINOUT}] Print localized M.O.s whenever PINOUT is called \item[\comp{PL}] Monitor convergence of density matrix in ITER \item[\comp{PM3}] Use the MNDO-PM3 hamiltonian \item[\comp{PMEP}] Complete semiempirical MEP calculation \item[\comp{PMEPR}] Complete semiempirical MEP in a plane to be defined \item[\comp{POINT=$n$}] Number of points in reaction path \item[\comp{POINT1=$n$}] Number of points in first direction in grid calculation \item[\comp{POINT2=$n$}] Number of points in second direction in grid calculation \item[\comp{POLAR}] Calculate first, second and third order polarizabilities \item[\comp{POPS}] Print the self consistent field atomic orbital populations \item[\comp{POTWRT}] In ESP, write out electrostatic potential to unit 21 \item[\comp{POWSQ}] Print details of working in POWSQ \item[\comp{PRECISE}] Criteria to be increased by 100 times \item[\comp{PRTMEP}] MEP contour data output to \comp{$<$filename$>$.mep} \item[\comp{PULAY}] Use Pulay's converger to obtain a SCF \item[\comp{QPMEP}] Charges derived from Wang-Ford type AM1 MEP \item[\comp{QUARTET}] Quartet state required \item[\comp{QUINTET}] Quintet state required \item[\comp{RAPID}] In SCF, use atoms being optimized only \item[\comp{RECALC=$n$}] In EF, recalculate Hessian every $n$ steps \item[\comp{RE-LOCAL}] Re-localize localized molecular orbitals \item[\comp{RELSCF=$n$}] Default SCF criterion multiplied by $n$ \item[\comp{RELTHR=$n$}] Default threshold for LMOs multiplied by $n$ \item[\comp{REORTH}] Reorthogonalize LMOs \item[\comp{RESEQ}] Resequence the atoms in a protein \item[\comp{RESIDUES}] Label all atoms in a protein \item[\comp{RESTART}] Calculation restarted \item[\comp{RMAX}] In TS, maximum allowed ratio for energy change \item[\comp{RMEM=$nn$}] Real memory increased by $nn$ \item[\comp{RMIN}] In TS, minimum allowed ratio for energy change \item[\comp{ROOT=$n$}] Root $n$ to be optimized in a C.I. 
calculation \item[\comp{RSCAL}] In EF, scale p-RFO to trust radius \item[\comp{RSOLV}] Effective radius of solvent in COSMO \item[\comp{SADDLE}] Optimize transition state \item[\comp{SAFE}] Allow memory intensive scf convergers \item[\comp{SCALE}] Scaling factor for van der Waals distance in ESP \item[\comp{SCFCRT=$n$}] Default SCF criterion replaced by the value supplied \item[\comp{SCINCR}] Increment between layers in ESP \item[\comp{SEPTET}] Septet state required \item[\comp{SETUP}] Extra keywords to be read of setup file \item[\comp{SEXTET}] Sextet state required \item[\comp{SHIFT=$n$}] a damping factor of $n$ defined to start SCF \item[\comp{SIGMA}] Minimize gradients using SIGMA \item[\comp{SINGLET}] Singlet state required \item[\comp{SIZES}] Print the sizes of all arrays created or destroyed \item[\comp{SLOG=$n.nn$}] In BFGS optimization, use fixed step of length $n.nn$ \item[\comp{SLOPE}] Multiplier used to scale MNDO charges \item[\comp{SNAP}]Increase precision of symmetry angles \item[\comp{SPIN}] Print final UHF spin matrix \item[\comp{STEP}] Step size in path \item[\comp{STEP1=$n$}] Step size $n$ for first coordinate in grid calculation \item[\comp{STEP2=$n$}] Step size $n$ for second coordinate in grid calculation \item[\comp{STO-3G}] Deorthogonalize orbitals in STO-3G basis \item[\comp{SUPER}] Print superdelocalizabilities \item[\comp{SYMAVG}] Average symmetry equivalent ESP charges \item[\comp{SYMMETRY}] Impose symmetry conditions \item[\comp{T=$n$}] A time of $n$ seconds requested \item[\comp{THERMO}] Perform a thermodynamics calculation \item[\comp{THRESH=$n.nn$}] Set the cutoff for LMO coefficients to $n.nn$ \item[\comp{TIDY}] Print details of working in TIDY \item[\comp{TIMES}] Print times of various stages \item[\comp{TOM}] Use the Miertus-Scrocco-Tomasi solvation model \item[\comp{T-PRIO}] Time takes priority in DRC \item[\comp{TRANS}] The system is a transition state (used in thermodynamics calculation) \item[\comp{TRIPLET}] Triplet state required \item[\comp{TS}] Using EF routine for TS search \item[\comp{UHF}] Unrestricted Hartree-Fock calculation \item[\comp{UNSAFE}] Use reduced memory (may affect SCF convergence) \item[\comp{VDW}] Van der Waals radius for atoms in COSMO defined by user \item[\comp{VECTORS}] Print final eigenvectors \item[\comp{VELOCITY}] Supply the initial velocity vector in a DRC calculation \item[\comp{WILLIAMS}] Use Williams surface \item[\comp{XENO}] Specify fragments which are attached to residues \item[\comp{X-PRIO}] Geometry changes take priority in DRC \item[\comp{XYZ}] Do all geometric operations in Cartesian coordinates \end{description}
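The keywords above are combined on the first line of a MOPAC data file, which is followed by two title/comment lines and then the geometry specification. The sketch below is illustrative only: the keyword combination is arbitrary, and the geometry block is left as a placeholder rather than guessing at a particular coordinate format.
\begin{verbatim}
AM1 CHARGE=0 PRECISE GNORM=0.1 T=3600 BONDS VECTORS
 Illustrative data set: AM1 calculation with tightened criteria,
 printing the final bond-order matrix and eigenvectors.
<geometry specification: one atom per line, internal or Cartesian coordinates>
\end{verbatim}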
\chapter{Getting started}

\section{Introduction}
\txs{} offers several techniques to symbolically manipulate \txs{} models. This section gives an overview of how to get started with these techniques.

\section{Installation}
\begin{enumerate}
\item Download \texttt{stack} as described at
\begin{align*}
\texttt{https://docs.haskellstack.org/en/stable/README/}
\end{align*}
and install it.
\item Download and install an SMT solver such as
\begin{align*}
\texttt{Z3 version 4.8.3 - 64 bit - build hashcode cf4bf7b591b6}
\end{align*}
Add the binaries of the SMT solver to \texttt{PATH}.
\item Clone the \texttt{develop} branch from
\begin{align*}
\texttt{https://github.com/ikbendedjurre/txs-develop}
\end{align*}
\item In a terminal, navigate to the repository directory and run
\begin{itemize}
\item \texttt{stack -v setup}
\item \texttt{stack -v --profile build}
\end{itemize}
\item The \txs{} binaries can be found in
\begin{align*}
\texttt{/.stack-work/install/23f9efff/bin}
\end{align*}
in the repository directory (the \texttt{23f9efff} hash is build-dependent). Add the binaries to \texttt{PATH} (there may be a way to do this more easily with \texttt{stack}).
\end{enumerate}

\section{Starting \txs{}}
\label{starttxs}
\begin{enumerate}
\item Ensure that \texttt{txsserver.exe} and \texttt{torxakis.exe} are in \texttt{PATH}.
\item Start two terminals, one for the \txs{} server and one for the \txs{} client -- this has the advantage that errors that occur on the server side will be visible.
\item In both terminals, navigate to the directory with a \txs{} specification file (such as \texttt{example.txs}).
\item In one terminal, run \texttt{txsserver --no-smt-log 50001}, where the number \texttt{50001} is the port on which the \txs{} server will listen for clients.
\item In the other terminal, run \texttt{torxakis 50001 example.txs}.
\end{enumerate}

\section{Process linearization}
\label{processlinearization}
The symbolic manipulation of \txs{} models requires the processes that underlie those models to be in \emph{LPE form} (see \ref{lpeform}). To make a \txs{} process linear in practice, do the following:
\begin{enumerate}
\item Start \txs{} (see \ref{starttxs}).
\item Run
\begin{align*}
\texttt{lpe Model}
\end{align*}
in the client terminal, where \texttt{Model} is the name of the model in the \txs{} specification file. \txs{} will attempt to linearize the processes that underlie that model. Note that not all \txs{} models are linearizable. For those models, the \texttt{lpe} command will fail, with an error reported in the server terminal.
\item If successful, the \texttt{lpe} command prints the name of a new \txs{} model that was created. This model can be manipulated symbolically (see \ref{modelmanipulation}).
\end{enumerate}

\section{Model manipulation}
\label{modelmanipulation}
After the steps in \ref{starttxs} and \ref{processlinearization}, symbolic manipulation of a \txs{} model can begin. To do this, use the \texttt{lpeop} command in the client terminal. The \texttt{lpeop} command has three space-separated arguments. In order, these arguments are:
\begin{itemize}
\item A chain of LPE operations. LPE operations are represented by their name, such as \texttt{cstelm} or \texttt{loop} (see \ref{lpeoperations}). These names are separated by the symbol \texttt{->}. LPE operations in the chain are executed from left to right, passing their output to the next LPE operation as input. If a problem occurs, the process ends immediately. Otherwise, the final output model is saved as a new model.
\item The name of the input model. The process that underlies the input model should be in LPE form (see \ref{lpeform}).
\item A base name \textit{base} for generated output. The name is primarily used for the output model (if any), the underlying process of which is in LPE form. However, LPE operations may adopt the name for their own purposes. The \texttt{export} operation, for example, will create a file by that name with the \texttt{.txs} extension. It is possible to use the \texttt{\%i} token in the base name. This will insert the current counter value into the name. Use the \texttt{inc} command to increase the counter (which starts at 1).
\end{itemize}
A complete example session, from starting the server to applying a chain of LPE operations, is given at the end of this chapter.

\section{Unit tests}
\begin{enumerate}
\item In a terminal, navigate to the repository directory.
\item Execute \texttt{stack -v --profile test lpeops}.
\end{enumerate}

\section{Benchmarks}
\subsection{Generation (Windows OS only)}
First, benchmark files must be generated. This is done via a script.
\begin{enumerate}
\item Navigate to \texttt{/examps} and open the file
\begin{align*}
\texttt{generateBenchmarkData.bat}
\end{align*}
\item Change \texttt{TXSDIR} to the directory where the \txs{} source files are located. Among others, it should contain the following sub-directories:
\begin{center}
\texttt{examps} \\
\texttt{sys} \\
\texttt{test}
\end{center}
\item In order to perform different benchmark measurements, change the contents of the function \texttt{:WriteCommands} (line 104, in particular, determines how to produce the model that is compared with the original model and the linearized model).
\item Run \texttt{generateBenchmarkData.bat} and wait for it to finish.
\end{enumerate}

\subsection{Execution}
The main benchmark file of \txs{},
\begin{align*}
\texttt{/test/sqatt/src/Benchmarks/All.txs}
\end{align*}
has been changed, and so the benchmark for LPE operations is started in the same way as the original benchmark:
\begin{enumerate}
\item In a terminal, navigate to \texttt{/test/sqatt} in the repository directory.
\item Execute
\begin{center}
\texttt{stack -v --profile bench --ba} \\
\texttt{"--output data.html --csv data.csv --time-limit 1"}
\end{center}
The benchmark spends a maximum of 1 second on each part of the benchmark (the default is 5 seconds). If there are no failures, the benchmark results will be exported to \texttt{data.html} and \texttt{data.csv} (note that previous results are not deleted from \texttt{data.csv}). For more benchmark parameters, see \url{http://www.serpentine.com/criterion/tutorial.html}.
\item Wait for the benchmark to finish.
\end{enumerate}
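\section{Example session}
To put the pieces of this chapter together, consider the following session. It assumes a specification file \texttt{example.txs} that defines a model named \texttt{Model}; the name \texttt{LPE\_Model} is only a stand-in for whatever name the \texttt{lpe} command actually prints, and the chosen chain of LPE operations is merely an illustration. In the first terminal, start the server with
\begin{center}
\texttt{txsserver --no-smt-log 50001}
\end{center}
and in the second terminal, start the client with
\begin{center}
\texttt{torxakis 50001 example.txs}
\end{center}
In the client terminal, linearize the model and apply a chain of LPE operations to the result:
\begin{center}
\texttt{lpe Model} \\
\texttt{lpeop cstelm->loop->export LPE\_Model Out\%i} \\
\texttt{inc}
\end{center}
With the counter at its initial value, the \texttt{export} operation should write a file named \texttt{Out1.txs}; after the \texttt{inc} command, a subsequent run with the same base name would write \texttt{Out2.txs} instead.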
\section{\module{py_compile} --- Compile Python source files} % Documentation based on module docstrings, by Fred L. Drake, Jr. % <[email protected]> \declaremodule[pycompile]{standard}{py_compile} \modulesynopsis{Compile Python source files to byte-code files.} \indexii{file}{byte-code} The \module{py_compile} module provides a function to generate a byte-code file from a source file, and another function used when the module source file is invoked as a script. Though not often needed, this function can be useful when installing modules for shared use, especially if some of the users may not have permission to write the byte-code cache files in the directory containing the source code. \begin{excdesc}{PyCompileError} Exception raised when an error occurs while attempting to compile the file. \end{excdesc} \begin{funcdesc}{compile}{file\optional{, cfile\optional{, dfile\optional{, doraise}}}} Compile a source file to byte-code and write out the byte-code cache file. The source code is loaded from the file name \var{file}. The byte-code is written to \var{cfile}, which defaults to \var{file} \code{+} \code{'c'} (\code{'o'} if optimization is enabled in the current interpreter). If \var{dfile} is specified, it is used as the name of the source file in error messages instead of \var{file}. If \var{doraise} = True, a PyCompileError is raised when an error is encountered while compiling \var{file}. If \var{doraise} = False (the default), an error string is written to sys.stderr, but no exception is raised. \end{funcdesc} \begin{funcdesc}{main}{\optional{args}} Compile several source files. The files named in \var{args} (or on the command line, if \var{args} is not specified) are compiled and the resulting bytecode is cached in the normal manner. This function does not search a directory structure to locate source files; it only compiles files named explicitly. \end{funcdesc} When this module is run as a script, the \function{main()} is used to compile all the files named on the command line. \begin{seealso} \seemodule{compileall}{Utilities to compile all Python source files in a directory tree.} \end{seealso}
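The following sketch shows typical use of the module; the file names are purely illustrative.
\begin{verbatim}
import py_compile

# Compile a single source file; with doraise=True, errors raise
# py_compile.PyCompileError instead of being printed to sys.stderr.
try:
    py_compile.compile('/shared/lib/spam.py', doraise=True)
except py_compile.PyCompileError, err:
    print 'Compilation failed:', err

# Compile several files at once, as the command-line interface would.
py_compile.main(['/shared/lib/spam.py', '/shared/lib/eggs.py'])
\end{verbatim}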
% Options: [twoside, leqno, 11pt], etc.. leqno is "number equations on the left hand side" \RequirePackage{tikz} \documentclass[12pt]{thesis} \usepackage{setspace} \usepackage{amsmath} \usepackage{graphicx} \usepackage{wrapfig} \usepackage{longtable} \usepackage{subcaption} \usepackage{setspace} \usepackage{float} \usepackage{listings} \usepackage{rotating} \usepackage{cancel} \usepackage{array} \usepackage{tikz} \usetikzlibrary{matrix, shapes} \graphicspath{ {C:/Users/Marissa/network-similarity/} } \usepackage{array} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}\vspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}\vspace{0pt}}m{#1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% DOCUMENT PROPERTIES %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \author{Marissa Graham} % Titles must be in mixed case. Style guide: https://www.grammarcheck.net/capitalization-in-titles-101/. \title{A computationally driven comparative survey of network alignment, graph matching, and network comparison in pattern recognition and systems biology.} \degree{Master of Science} \university{Brigham Young University} \department{Department of Mathematics} \committeechair{Emily Evans} %% These are fields that are stored in the PDF but are not visible in the document itself. They are optional. \memberA{Benjamin Webb} \memberB{Christopher Grant} \subject{Writing a thesis using LaTeX} % Subject of your thesis, e.g. algebraic geometry \keywords{LaTeX, PDF, BYU, Math, Thesis} \month{June} \year{2018} \pdfbookmarks \makeindex %%%%%%%%%%%%%%%%%%%%%%%%% THEOREM DEFINITIONS AND CUSTOM COMMANDS %%%%%%%%%%%%%%%%%%%%%%%%%%% %% Define the theorem styles and numbering \theoremstyle{plain} \newtheorem{theorem}{Theorem}[chapter] \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}[theorem]{Conjecture} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \theoremstyle{remark} \newtheorem*{remark}{Remark} %% Create shortcut commands for various fonts and common symbols \newcommand{\s}[1]{\mathcal{#1}} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} %% Declare custom math operators \DeclareMathOperator{\tr}{tr} \DeclareMathOperator{\diag}{diag} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator{\Span}{Span} \DeclareMathOperator{\rank}{rank} %% Sets and systems \newcommand{\br}[1]{\left\langle #1 \right\rangle} \newcommand{\paren}[1]{\left(#1\right)} \newcommand{\sq}[1]{\left[#1\right]} \newcommand{\set}[1]{\left\{\: #1 \:\right\}} \newcommand{\setp}[2]{\left\{\, #1\: \middle|\: #2 \, \right\}} \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\system}[1]{\left\{ \begin{array}{rl} #1 \end{array} \right.} %% referencing commands \newcommand{\thmref}[1]{Theorem \ref{#1}} \newcommand{\corref}[1]{Corollary \ref{#1}} \newcommand{\lemref}[1]{Lemma \ref{#1}} \newcommand{\propref}[1]{Proposition \ref{#1}} \newcommand{\defref}[1]{Definition \ref{#1}} \newcommand{\exampleref}[1]{Example \ref{#1}} \newcommand{\exerref}[1]{Exercise \ref{#1}} \renewcommand{\labelenumi}{(\roman{enumi})} \begin{document} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% FRONT MATTER 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\frontmatter
\maketitle

\begin{abstract}\index{abstract}
Comparative graph and network analysis plays an important role in both systems biology and pattern recognition, but existing surveys on the topic have historically ignored or underserved one or the other of these fields. We present an integrative introduction to the key objectives and methods of graph and network comparison in each, with the intent of remaining accessible to relative novices in order to mitigate the barrier to interdisciplinary idea crossover. To guide our investigation, and to quantitatively justify our assertions about what the key objectives and methods of each field are, we have constructed a citation network containing 5,793 vertices from the full reference lists of over two hundred relevant papers, which we collected by searching Google Scholar for ten different network comparison-related search terms. This dataset is freely available on GitHub. We have investigated its basic statistics and community structure, and framed our presentation around the papers found to have high importance according to five different standard centrality measures. We have also made the code framework used to create and analyze our dataset available as documented Python classes and template Mathematica notebooks, so it can be used for a similarly computationally driven investigation of any field.

\vskip 2.25in
\noindent Keywords: % Keywords need to be as close to the bottom of the page as possible without moving to a new page.
graph matching, graph alignment, graph comparison, graph similarity, graph isomorphism, network matching, network alignment, network comparison, network similarity, comparative analysis, local alignment, global alignment, protein network, computational biology, biological networks, protein-protein interactions, computational graph theory, pattern recognition, exact graph matching, inexact graph matching, graph edit distance, graphlets, network motifs, graph matching algorithms, bipartite graph matching, node similarity, graph similarity search, attributed relational graphs
\end{abstract}

%\begin{acknowledgments} \end{acknowledgments}

\tableofcontents
\listoftables
\listoffigures

\mainmatter

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 1 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Introduction}\label{chapter:introduction_and_background}

\section{Motivation}

\begin{wrapfigure}{L}{0.4\textwidth}
\centering
\vspace{-20pt}
\includegraphics[width=0.38\textwidth]{foodweb.png}
\scriptsize ``SimpleFoodWeb'' Mathematica sample network.
\caption{Using a network to represent relationships in a food web.}
\vspace{-20pt}
\label{fig:foodweb}
\end{wrapfigure}

Networks\footnote{The term \textit{network} is sometimes used interchangeably with the term \textit{graph}. While they both refer to the same mathematical object, we attempt to follow the heuristic throughout of using the term \textit{graph} to refer to a purely mathematical object and \textit{network} to refer to a real-world system.} are first and foremost a way to model the relationships between objects, which we do by representing objects as vertices and relationships as edges. For example, we might use a graph to represent the relationships in a food web, or between characters in a successful movie franchise.
In some cases, using this representation simply to visualize relationships is useful, but we generally would also like to computationally exploit it in order to gain further insight about the system we are modeling. \begin{wrapfigure}{r}{0.5\textwidth} \centering \vspace{-25pt} \includegraphics[width=0.48\textwidth,height=0.28\textwidth]{avengers.png} \scriptsize Diagram from the mic.com article ``Here's How the Marvel Cinematic Universe's Heroes Connect--in One Surprising Map". \caption{Using a network to represent relationships between movie characters.} \vspace{-20pt} \label{fig:avengers} \end{wrapfigure} As a first example, consider what questions we might ask about an infrastructure network such as a road network, phone lines, power grid, or the routers and fiber optic connections of the Internet itself. How do we efficiently get from here to there? How much traffic can flow through the network? What happens if an intersection is clogged, or a power plant fails? We can also ask questions about social networks representing relationships between people. Who is the most important? Who controls the flow of information? To what extent do the people you consider your friends consider themselves \textit{your} friend? How similar are you to your friends? To what extent are your friends friends with each other? How well is everybody connected to each other? To what extent do people form relationships with people who are like them? What do communities and strong friend groups look like, mathematically? One common question across all of mathematics is how similar objects are to each other. With networks, we can ask this question about individual vertices in a network, but we also frequently want to ask it about networks themselves. For example, we might ask ``How similar are you to other students, based on your friendships at school?", but we can also ask ``Which proteins, protein interactions and groups of interactions are likely to have equivalent functions across species?" \cite{Sharan_2006} For objects as combinatorially complex as networks, similarity calculation is a difficult problem, the study of which has its roots in the 60s and 70s \cite{Conte_2004} and, as illustrated in Figure \ref{fig:year_distributions}, has gained significant attention in the past twenty years as interesting network data becomes more readily available and computationally feasible. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{year_distribution.png} \caption{Distribution of papers published by year in our citation network of network similarity-related papers. Interestingly, year cutoffs for significant percentiles seem to roughly correspond to the spread of computers, personal computers, and the Internet, respectively.} \label{fig:year_distributions} \end{figure} The goal of this project is to provide a broad outline of the study of network similarity, but without the help of prior expertise in the field, it is laborious at best and impossible at worst to know which works are important and which are irrelevant. Instead, we use the tools of network theory to study the network of citations between scientific papers on a certain topic. This allows us to use standard network analysis techniques to determine which papers are the most important or influential, and has the additional advantage of bringing transparency to the process. 
That is, we can quantitatively justify our assertions using standard centrality and community detection measures, rather than relying on existing expertise in the field to give weight to our claims. The rest of this work is structured as follows: In the remainder of this chapter, we introduce the necessary mathematical background to inform our analysis of our citation network dataset. In Chapters \ref{chapter:dataset_creation_and_analysis} and \ref{chapter:partitioning}, we introduce our citation network dataset. We analyze its basic structure, find two main fields of application within the study of network comparison, and choose a reading list from the high centrality vertices in our dataset. In Chapters \ref{chapter:pattern_recognition} and \ref{chapter:systems_biology}, we discuss our findings from this reading, and then conclude by discussing potential cross-applications between our two observed fields in Chapter \ref{chapter:conclusion}. \section{Background} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{basic_properties_demo.png} \caption{Basic types of networks} \label{fig:basic_properties_demo} \end{figure} In this section we introduce the definitions and notation required to give context to our analysis of the citation network. Our presentation follows Newman's \textit{Networks: An Introduction} \cite{newman2010} closely, with the remainder of definitions not otherwise cited sourced from \textit{Algorithms and Models for Network Data and Link Analysis} \cite{fouss2016}. A \textbf{graph}\index{graph} $G(V,E)$ is formally defined as a finite, nonempty set $V$ of \textbf{nodes}\index{nodes} or \textbf{vertices}\index{vertices}, combined with a set $E\subset V\times V$ of \textbf{edges}\index{edges} representing relationships between pairs of vertices. Throughout this work, we denote the number of vertices in a graph by $n$ and the number of edges by $m$ where not otherwise specified. In this work we will deal with \textbf{simple graphs}\index{simple graph}, which are those that do not have more than one edge between any pair of vertices (that is, a \textbf{multiedge}\index{multiedge}\footnote{A multigraph without edge loops is not simple, but if we represented the number of edges between pairs of vertices as edge weights instead of multiedges, that would be a simple graph.}), and do not have any edges from a vertex to itself (a \textbf{self-edge}\index{self-edge} or \textbf{self-loop}). We are also concerned with whether a graph is \textbf{directed}\index{directed graph} or \textbf{undirected}\index{undirected graph}. In an undirected graph, we have an edge \textit{between} two vertices, whereas in a directed graph we have edges \textit{from} one vertex \textit{to} another vertex. Throughout this work, we will use the notation $v_i \leftrightarrow v_j$ for an undirected edge between vertices $v_i$ and $v_j$, and $v_i \rightarrow v_j$ for a directed edge. In either case, the edge $v_i\leftrightarrow v_j$ or $v_i\rightarrow v_j$ is \textbf{incident}\index{incident} to vertices $v_i$ and $v_j$, and vertices $v_i$ and $v_j$ are therefore considered \textbf{neighbors}\index{neighbor}. A graph can also be \textbf{weighted}\index{weighted graph}, meaning each edge is assigned some real, generally positive value $w_{ij}$ representing the ``strength" of the connection between vertices $v_i$ and $v_j$.
In an undirected graph, the \textbf{degree}\index{degree} of a vertex is the sum of the weights of the incident edges, and in a directed graph, the \textbf{indegree}\index{indegree} and \textbf{outdegree}\index{outdegree} are the total weight of a vertex's incoming and outgoing edges, respectively. If the graph is unweighted, this is simply the number of adjacent, incoming, or outgoing edges, as the weight of each edge is one. When studying real-world networks, we also make a distinction between \textbf{deterministic}\index{deterministic} and \textbf{random}\index{random} networks. This distinction is roughly the same as that between a variable and a random variable. The vertices and edges in a deterministic network are ``fixed", while in a random network, they need to be inferred from data using statistical inference methods. For example, our citation network is deterministic, but a network of protein interactions for a given species is not, as it must be inferred from experimental data on a limited number of members of that species. \subsection{Computational network properties} Whether a network is directed or weighted or simple will inform our approach to its analysis, but these properties are generally included as metadata rather than computationally determined. The remainder of the properties we consider in network analysis are determined computationally, with varying degrees of algorithmic complexity. \subsubsection{Connectivity} \begin{wrapfigure}{L}{0.3\textwidth} \centering \vspace{-20pt} \includegraphics[width=0.28\textwidth]{path_demo.png} \caption{A path of length three on a small network.} \vspace{-20pt} \label{fig:path_demo} \end{wrapfigure} One example of an interesting computational network property is the question of whether a network is \textbf{connected}\index{connected graph}, as illustrated in Figure \ref{fig:connectivity_demo}; that is, whether there is a \textbf{path}\index{path} between any pair of vertices, where a path is defined to be a sequence of vertices such that consecutive vertices are connected by an edge. The \textbf{length}\index{path length} of a path is the number of edges connecting the vertices in the sequence. In the case of a directed graph, we make the distinction between weak and strong connectivity. A \textbf{weakly connected graph}\index{weakly connected graph} is one which is connected when each edge is considered as undirected, while a \textbf{strongly connected graph}\index{strongly connected graph} requires a path from every vertex to every other vertex, even while respecting edge directions. \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{connectivity_demo.png} \caption{Simple examples of different types of network connectivity.} \label{fig:connectivity_demo} \end{figure} If an undirected network is not connected, or a directed network is not weakly connected, the network has multiple \textbf{connected components}\index{connected component} or \textbf{weakly connected components}\index{weakly connected component}. Each component is a subset of vertices such that there is a path between every pair of member vertices, and no paths between any member and a nonmember. For example, the disconnected graph in Figure \ref{fig:connectivity_demo} has two components. The weakly connected components of a directed graph are the components in the corresponding undirected network.
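To make these definitions concrete, the short sketch below builds a small directed graph and reads off its degrees and weak connectivity. It uses the networkx Python library purely for illustration; this is an assumption of convenience and not the code framework used for this project.
\begin{verbatim}
import networkx as nx

# A small directed, unweighted graph; an edge u -> v can be read as "u cites v".
G = nx.DiGraph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("E", "F")])

print(G.out_degree("A"))                   # 2: A points to two other vertices
print(G.in_degree("C"))                    # 3: three vertices point to C
print(nx.is_weakly_connected(G))           # False: {E, F} is a separate piece
components = list(nx.weakly_connected_components(G))
print(len(components))                     # 2 weakly connected components
largest = max(components, key=len)
print(len(largest) / G.number_of_nodes())  # fraction of vertices in the largest
\end{verbatim}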
In a typical real world network, there is generally a single large component or weakly connected component which contains most of the vertices, with the rest of the vertices contained in many small disconnected components. We refer to this large component as the \textbf{giant component}\index{giant component}, and its relative size gives us a measure for how ``close" a network is to being connected; the higher the percentage of vertices in the giant component, the closer the network is to being connected. \subsubsection{Assortativity} \begin{wrapfigure}{L}{0.6\textwidth} \centering \vspace{-20pt} \includegraphics[width=0.58\textwidth]{assortativity_demo.png} \scriptsize ``USPoliticsBooks" Mathematica sample network. \caption{A network of U.S. politics books. Vertices categorized as liberal, conservative, or neutral are colored blue, red, and white, respectively, and edges that run between different categories are bolded.} \label{fig:assortativity_demo} \end{wrapfigure} We can also consider whether a network is \textbf{assortative}\index{assortative}. That is, if the vertices in the network have some discrete-valued property, we ask whether the edges in the network are more likely to run between vertices of the same type. If all of the edges run between vertices of the same type, the assortativity of the network is 1; if all edges run between vertices of different types, the assortativity is $-1$. For example, the network of U.S. politics books in Figure \ref{fig:assortativity_demo} is strongly assortative with an assortativity value of $0.72$. Most of the connections are between books with the same political classification, which we can visually confirm by coloring the vertices accordingly and highlighting the few edges of the graph that run between books with different classifications. \subsubsection{Acyclic networks} \begin{figure}[h] \centering \includegraphics[width=0.58\textwidth]{acyclic_demo.png} \caption{An acyclic directed network (left) vs. one which contains a cycle (right).} \label{fig:acyclic_demo} \end{figure} We also consider whether a directed network is \textbf{acyclic}\index{directed acyclic network}, meaning that it contains no \textbf{cycles}\index{cycle} or nontrivial paths from any vertex to itself. Our most important example of a directed acyclic network is a \textbf{citation network}\index{citation network}. In a citation network, we include an edge from a paper to each reference it cites. Any cycle in this network would require edges both from a newer paper to an older paper, and from an older paper to a newer paper. If we only cite papers which have already been written, which is the case for the papers in our dataset and generally true for academia as a whole, we cannot have any cycles. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 2 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Creating the citation network}\label{chapter:dataset_creation_and_analysis} \section{Approach} Citation network creation is not a trivial task. Although some journals and databases provide a citation network of the references in their own domain, our topic is highly interdisciplinary, so relying on any single such source would discard large sections of our desired network. As a result of this, and since intellectual property restrictions preclude simply scraping an entire citation network, we constructed the dataset manually by collecting reference lists for relevant papers and then building the network accordingly.
Relevant papers were found by searching Google Scholar for ``graph" or ``network" + ``alignment", ``comparison", ``similarity", ``isomorphism", or ``matching".\footnote{Future work should probably include ``graph kernel(s)" in this list.} Topic-relevant papers were initially collected from the first five pages of results for each of the ten search terms on May 4th, 2018, after which new papers published through June 25th, 2018 were collected from a Google Scholar email alert for those same ten search terms. For each of these papers, we stored the plaintext reference list in a standardized format which could be easily split into the individual freeform citations. Any paper for which we have a reference list is referred to as a ``parent" record, and the references are referred to as the ``child" records. In total, we collected 7,790 child references from 221 parent papers. In order to create the network, we needed to parse the freeform citations for each reference list to obtain metadata and recognize records as repeatedly cited. This is a difficult problem, as the records in the database span several hundred years and represent a wide variety of citation styles and languages, as well as significant optical character recognition and Unicode-related challenges. Instead of attempting to parse a citation into component parts, we used CrossRef's REST API to search for each record in its database, which already has the metadata parsed for any record it includes. We marked results as duplicate if their metadata matched and both were known to be correct, or if both their metadata and original freeform citation matched exactly. The results given by the CrossRef API are considered correct if the title of the record can be found in the original freeform citation, and unverified otherwise. We were able to automatically verify results for about 75\% of the parent records, and about half of their children. Because we are conservative about marking records as duplicate, having so many unverified records would dramatically misrepresent the structure of the network. We therefore went through the approximately three thousand unverified records by hand. For unverified parents, we manually corrected or found title, year, author, DOI number if existent, and URL information as well as reference and citation counts. For unverified child references, we first went through and marked any correct but unverified results. We found about half of the unverified child results to be correct, despite being unable to be automatically verified due to punctuation discrepancies, misspellings, Unicode issues, or citation styles that do not include the title. Next, results were counted as correct (but noted as ``half-right") if the CrossRef API returned a review, purchase listing or similar for the correct record. For the remaining incorrect references, we manually parsed the author, title, and year from the citation, or looked them up if not included. Finally, we deleted any records which did not refer to a written work of some kind; specifically, references simply citing a website, web service, database, software package/library, programming language, or ``personal communication". We then wrote the entire citation network to a GML file which can be loaded in Mathematica. By default, the code used to generate the GML file includes the title, year, reference and citation counts for each record as vertex properties.
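The record-matching step described above can be sketched roughly as follows. This is a simplified illustration that queries the public CrossRef REST API with the requests library; the single-result query, the field handling, and the verification rule shown here are simplifying assumptions, and the actual pipeline deals with rate limiting, Unicode normalization, and many edge cases omitted from the sketch.
\begin{verbatim}
import requests

def lookup_citation(freeform):
    """Ask CrossRef for the closest match to a freeform citation string."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": freeform, "rows": 1},
    )
    items = response.json()["message"]["items"]
    return items[0] if items else None

def is_verified(freeform, record):
    """Count a result as verified if its title appears in the citation."""
    titles = record.get("title", [])
    return any(title.lower() in freeform.lower() for title in titles)

citation = ("D. Conte, P. Foggia, C. Sansone, M. Vento. Thirty years of graph "
            "matching in pattern recognition. IJPRAI 18(3), 2004.")
record = lookup_citation(citation)
if record is not None:
    print(record.get("DOI"), is_verified(citation, record))
\end{verbatim}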
Including further metadata as vertex properties is not difficult, but additional string-valued properties dramatically slow Mathematica's ability to load such a large network\footnote{The network contains 5,793 vertices and 7,491 edges, and takes almost exactly two minutes to load on a 2.6GHz 6th-gen quad core Intel Core i7 CPU with 16GB of RAM running Windows 10 using Mathematica 11.2.}, and a network as large as ours cannot be loaded at all, and so we do not include any more of them than strictly necessary. The dataset itself and the code and source files used to generate it can be found on GITHUB REPOSITORY LINK, as well as documentation and instructions for using it to generate a similar dataset for any collection of properly-formatted reference list files. \begin{figure}[p] \centering \includegraphics[width=0.9\textwidth]{full_citation_network.png} \caption{The full citation network of the dataset used for the project.} \label{fig:full_database} \end{figure} \section{Basic statistics} \subsubsection{Construction-related issues} Our full citation network contains a total of 7,491 references between 5,793 papers. This results in a fairly low mean degree, or average number of references per paper, which is due to an inherent limitation in the construction of almost any citation network. We can include all the references for a small group of papers, but including all the references for \textit{their} references and so on is an exponentially more expensive task, and we therefore generally only include children for a small fraction of the total vertices. Since the edges in our network are hand-constructed using individual reference lists, it includes an abnormally small fraction of vertices with children. A typical citation network, such as the SciMet and Zewail datasets which are displayed in Appendix Figures \ref{fig:sciMet} and \ref{fig:zewail}, respectively, and whose analysis is included in Table \ref{tab:network_table}, is constructed by scraping a single database. This results in a much higher fraction of children whose references are included, but any references which are not in the database in question are missed, so the mean degree is still quite low. \begin{table}[h] \centering \begin{tabular}{|p{0.31\linewidth}||R{0.07\linewidth}|R{0.07\linewidth}|R{0.075\linewidth}|R{0.07\linewidth}|R{0.07\linewidth}|R{0.07\linewidth}|} \hline & $G$ & $G_p$ & sciMet & zewail & $R$ & $R_d$ \\ \hline\hline% & $R_{d,p}$ \\ \hline\hline Vertices & 5793 & 1062 & 1092 & 3145 & 5793 & 5793 \\ \hline % & 1077 \\ \hline % Edges & 7491 & 2775 & 1308 & 3743 & 7491 & 7491\\ \hline % & 2775 \\ \hline Mean degree & 1.29 & 2.61 & 1.20 & 1.19 & 1.29 & 1.29 \\ \hline %& 2.58 \\ \hline Fraction with children & 0.038 & 0.193 & 0.523 & 0.599 & 0.733 & 0.038 \\ \hline %& 0.202\\ \hline Diameter & 10 & 9 & 14 & 22 & 21 & 9\\ \hline % & 7 \\ \hline Connected components & 16 & 1 & 114 & 281 & 504 & 3 \\ \hline %& 3\\ \hline Fraction in giant component & 0.960 & 1.000 & 0.784 & 0.797 & 0.900 & 0.999 \\ \hline %& 1.000 \\ \hline %Assortativity by indegree & 0.113 & 0.016 & 0.055 & 0.158 & 0.007 & -0.008\\ \hline % & -0.007 \\ \hline %Assortativity by outdegree & -0.014 & -0.014 & -0.025 & 0.056 & -0.018 & -0.007 \\ \hline %& -0.013 \\ \hline \end{tabular} \caption{Comparing statistics for our dataset to other networks. 
The Mathematica code used to load and construct all six networks can be found in Table \ref{tab:network_creation_source_code}.} \label{tab:network_table} \end{table} \subsubsection{The pruned network $G_p$} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{subnetwork.png} \caption{The pruned citation network.} \label{fig:pruned_network} \end{figure} We also consider the \textit{pruned network}, shown in Figure \ref{fig:pruned_network}, which is defined to be the giant component of the subnetwork of vertices with positive outdegree or indegree greater than one. We do so because the main purpose of our dataset is to determine which papers are important in the field of network similarity, but the vast majority of references in the database are only cited by one paper and frequently have very little relevance to network similarity itself. To reduce the influence of off-topic papers on our results, we restrict our network to both our parent vertices, which we have hand-curated to be relevant to network similarity, and all vertices which are cited by more than one parent paper. This shrinks the number of vertices by a factor of almost six, correspondingly raises the fraction of vertices with children, and approximately doubles the mean degree. \subsubsection{Comparison to other networks} In Table \ref{tab:network_table}, we calculate the mean degree, fraction of vertices with children, diameter, number of connected components, and fraction of vertices in the giant component for six different networks: our full network $G$, its pruned version $G_p$, two datasets from the Garfield citation network collection, a uniformly generated directed random graph $R$, and a random graph $R_d$ with the same degree sequences as $G$. The random network $R$ is generated from a uniform distribution. The other, $R_d$, is constructed to match the degree sequence of $G$, by assigning each vertex ``stubs" according to the desired number of incoming and outgoing edges and then matching them by uniformly sampling the available stubs. \subsubsection{Connectivity} Our full network displays a high level of connectivity; 96\% of vertices are contained in the giant component, and it has only 16 connected components, compared to 90\% containment in the giant component and 504 connected components for a randomly generated network of the same size. The diameter is also low compared to a random network and to our choices of real-world network. Since our network consists of papers collected on a specific topic, which have an outside reason to cite the same papers, this high level of connectivity is not surprising. The construction of the network also explains the high connectivity compared to the real-world datasets, which only have about 80\% of their vertices in their giant components; if the generation of the citation network is limited to a single database, as the sciMet and zewail datasets are, cocitation connections in other databases will be lost. This makes it more difficult for the giant component to fill the network, and results in longer paths between connected vertices. The only dataset tested with better connectivity than ours is the random network $R_d$, which has almost complete containment--$99.9\%$--in the giant component. This is not surprising. Approximately speaking, in order for a parent vertex to be disconnected from the giant component, its children must all have exactly one parent, and no children. 
In a real-world network, this is easier to find, since a paper's references are not randomly selected. A single work from an author or topic which is relatively disconnected from the rest of the network similarity academic community can generate a significant number of references which are not in the giant component. By contrast, since about 82\% of the vertices in $G$ have exactly one parent and no children, the probability of a parent vertex with outdegree $n$ being disconnected from the giant component is $(0.82)^n$, or $(0.82)^{26.2} \approx 0.5\%$ for the mean outdegree of the parents. This probability only shrinks with a higher number of disconnected parent vertices, as fewer single-parent vertices become available compared to the rest. \section{Centrality} The main goal in creating the citation network is to determine which papers are most important or \textbf{central}\index{centrality}. The question of which vertices are the most central to a network is widely researched in network theory, and there correspondingly exists a wide variety of centrality measures used to quantify different ideas about importance. For this project, we chose five centrality measures that we expect to coincide with an intuitive definition of which papers in the network are the most relevant. \subsubsection{Indegree} This measures the number of times each paper was cited by the parent papers in our network. Since the parent papers approximately represent everything determined to be most relevant by Google Scholar in a search for network comparison-related search terms\footnote{This is somewhat skewed by our inclusion of newly published papers collected from an email alert after the initial search, which may not be the most relevant overall.}, the top values for indegree should give us a rough idea of which papers are formative for the field and therefore more frequently cited by the parent papers. \subsubsection{Outdegree} Since the pruned network consists of the parent papers as well as the papers that appear in more than one parent's reference list, this measures the number of a parent's references which have been cited by other parents in the network. The top values for outdegree should therefore give us an idea of which papers survey the most well-known topics in the field. \subsubsection{Betweenness} Betweenness centrality measures the extent to which a vertex lies on paths between other vertices \cite{newman2010}. Intuitively, this should correspond to papers which make uncommon connections between other works; either those that are applicable to a wide variety of fields and applications, or those which build on disparate ideas in an original way. Unsurprisingly, we see some overlap between the papers with the highest outdegree and those with the highest betweenness, since the goal of a survey is to discuss a wider variety of ideas than an original paper (which only cites the specific works it builds on) can reasonably include.
\subsubsection{Closeness} \begin{wrapfigure}{r}{0.6\textwidth} \centering \vspace{-25pt} \includegraphics[width=0.55\textwidth]{closeness_demo.png} \vspace{-10pt} \caption{Two graphs with vertices labeled by their geodesic distance from the highlighted vertex.} \vspace{-20pt} \label{fig:closeness_demo} \end{wrapfigure} Recall that a path between two vertices is a sequence of vertices such that consecutive vertices are connected by an edge; a \textbf{geodesic path}\index{geodesic path} is the shortest possible path between any two vertices, and its length is the geodesic distance. We define \textbf{closeness}\index{closeness centrality} to be the inverse of the average geodesic distance between a vertex and all other vertices in the network. Closeness centrality therefore takes on the highest values for a vertex which is a short average distance from other vertices. For example, in Figure \ref{fig:closeness_demo} the highlighted vertex on the left has closeness centrality 1, since it has a distance of 1 from all other vertices. On the right, the average geodesic distance from the highlighted vertex to the other six is $\frac{1}{6}(1+1+2+2+2+2)=\frac{5}{3}$, so the closeness centrality is 0.6. In a friendship network, this corresponds to a person who seems to know just about everybody. Similarly, a paper in our citation network will have higher closeness centrality if it only takes a few steps through a paper's reference list or citations to another paper's reference list or citations to get to every other paper in the network. \subsubsection{HITS} For a directed network such as the citation networks used here, we may want to separately consider the notion that a vertex is important if it points to other important vertices, and the notion that a vertex is important if it is pointed to by other important vertices. This is the goal of the HITS or hyperlink-induced topic search algorithm. We define two different types of centrality for each vertex. We have \textbf{authority centrality}\index{authority centrality}, which measures whether a specific vertex is being pointed to by vertices with high \textbf{hub centrality}\index{hub centrality}, which in turn measures whether a specific vertex points to vertices with high authority centrality. By defining the hub and authority centralities of a vertex to be proportional to the sum of the authority and hub centralities, respectively, of its neighbors, this definition reduces to a pair of eigenvalue equations which can be easily solved numerically. 
That is, if $x_i$ is the authority centrality of the $i$-th vertex in a network, $y_i$ is the hub centrality of the $i$-th vertex, $A_{ij}$ is the weight of the edge from $j$ to $i$ if it exists, and 0 otherwise, and $\alpha, \beta$ are proportionality constants, we have \[x_i = \alpha \sum_j A_{ij}y_j\text{ and } y_i = \beta \sum_j A_{ji}x_j.\] \subsection{High centrality vertices}\label{section:high_centrality_vertices} We can observe that the pruned network, shown in Figure \ref{fig:pruned_network}, seems to contain two clusters of more tightly connected vertices.\footnote{We discuss the motivation behind and significance of this partition in detail in Chapter \ref{chapter:partitioning}.} We would like to collect the important papers for both the network as a whole and for the distinct communities it contains, so we partition the dataset in half using a modularity maximizing partition; that is, we choose two groups of vertices such that the fraction of edges running between vertices in different groups is minimized. For both the pruned network and the two halves of our partition, we collect the top ten papers according to these five different centrality measures and summarize the results in the following tables. Since the numerical values for indegree and outdegree have intuitive meaning, we report the value itself. However, the values for betweenness, closeness, and the two HITS centralities are unintuitive, context-free real numbers, so we report the rank of each paper with respect to each measure rather than the actual value. We also calculate betweenness and closeness for the undirected version of the network, to allow those rankings to be based on citing relationships in either direction. The papers in each table are sorted from maximum to minimum according to \[ f(p) = \frac{k^{in}_p}{k^{in}_{max}} + \frac{k^{out}_p}{k^{out}_{max}} + \sum_{i=1}^4 \frac{1}{r_i(p)}, \] where $k^{in}_p$ and $k^{out}_p$ are the indegree and outdegree of a paper $p$, the maximums of which are taken with respect to the pruned network or partition half in question, and $r_i(p)$ is the rank of a paper $p$ according to the $i$-th of our four remaining centrality measures (betweenness, closeness, and the two HITS centralities), which is defined to be infinity if a paper is not in the top ten for that measure.
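To make the scoring procedure concrete, the following sketch computes the centrality measures and the combined score $f(p)$ for a directed network. It uses networkx as a stand-in for the Mathematica code actually used for this project, so the specific function choices here are assumptions of convenience.
\begin{verbatim}
import networkx as nx

def combined_scores(G, top=10):
    """Score each vertex of a directed graph G by f(p) as defined above."""
    indeg, outdeg = dict(G.in_degree()), dict(G.out_degree())
    U = G.to_undirected()            # betweenness/closeness ignore edge direction
    hubs, authorities = nx.hits(G)
    rank_based = [nx.betweenness_centrality(U), nx.closeness_centrality(U),
                  authorities, hubs]

    def rank(measure, p):
        # 1-based rank within the top `top` vertices, infinity otherwise
        order = sorted(measure, key=measure.get, reverse=True)[:top]
        return order.index(p) + 1 if p in order else float("inf")

    kin_max, kout_max = max(indeg.values()), max(outdeg.values())
    return {p: indeg[p] / kin_max + outdeg[p] / kout_max
               + sum(1.0 / rank(m, p) for m in rank_based)
            for p in G}
\end{verbatim}
Sorting vertices by the returned score gives an ordering of the same form as the one used for the tables that follow.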
\begin{table}[H] \centering \vspace{-.5cm} {\setstretch{1}\fontsize{10}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Thirty Years of Graph Matching in Pattern Recognition \cite{Conte_2004} & 20* & 109* & 1 & 2 & & 1 \\ \hline $\dagger$Fifty years of graph matching, network alignment and network comparison \cite{Emmert_Streib_2016} & 6 & 71* & 2 & 1 & & 3 \\ \hline $\dagger$Networks for systems biology: conceptual connection of data and function \cite{Emmert_Streib_2011} & 2 & 102* & 3 & 3 & & 2 \\ \hline $^\Diamond$An Algorithm for Subgraph Isomorphism \cite{Ullmann_1976} & 20* & 4 & 7 & 4 & 1 & \\ \hline $\dagger$Modeling cellular machinery through biological network comparison \cite{Sharan_2006} & 9 & 41* & 8 & & & \\ \hline $^\Diamond$Computers and Intractability: A Guide to the Theory of NP-Completeness \cite{Hartmanis_1982} & 16* & 0 & 4 & 5 & & \\ \hline $^\Diamond$The graph matching problem \cite{Livi_2012} & 2 & 55* & 5 & 6 & & 7 \\ \hline $\dagger$A new graph-based method for pairwise global network alignment \cite{Klau_2009} & 9 & 13 & & 8 & & \\ \hline $\dagger$On Graph Kernels: Hardness Results and Efficient Alternatives \cite{Gartner_2003} & 11 & 10 & 6 & & & \\ \hline $^\Diamond$Error correcting graph matching: on the influence of the underlying cost function \cite{Bunke_1999} & 10 & 16 & & 7 & 7 & 8 \\ \hline $^\Diamond$A graduated assignment algorithm for graph matching \cite{Gold_1996} & 18* & 0 & & & 5 & \\ \hline $^\Diamond$The Hungarian method for the assignment problem \cite{Kuhn_1955} & 17* & 0 & & & & \\ \hline $^\Diamond$An eigendecomposition approach to weighted graph matching problems \cite{Umeyama_1988} & 15* & 5 & & & 6 & \\ \hline $^\Diamond$Recent developments in graph matching \cite{Bunke_2000} & 1 & 51* & & & & 4 \\ \hline $\dagger$MAGNA: Maximizing Accuracy in Global Network Alignment \cite{Saraph_2014} & 5 & 35* & & & & \\ \hline $^\Diamond$A distance measure between attributed relational graphs for pattern recognition \cite{Sanfeliu_1983} & 14* & 0 & & & 3 & \\ \hline $\dagger$Pairwise Global Alignment of Protein Interaction Networks by Matching Neighborhood Topology \cite{Singh_2007} & 13* & 0 & & & & \\ \hline $\dagger$Topological network alignment uncovers biological function and phylogeny \cite{Bunke_1998} & 12* & 0 & & & & \\ \hline A graph distance metric based on the maximal common subgraph \cite{Kuchaiev_2010} & 10 & 0 & & 10 & 4 & \\ \hline $^\Diamond$Efficient Graph Matching Algorithms \cite{Messmer_1995} & 0 & 43* & & & & 5 \\ \hline Local graph alignment and motif search in biological networks \cite{Berg_2004} & 8 & 10 & 10 & & & \\ \hline $\dagger$Global alignment of multiple protein interaction networks with application to functional orthology detection \cite{Singh_2008} & 11* & 0 & & & & \\ \hline On a relation between graph edit distance and maximum common subgraph \cite{Bunke_1997} & 11 & 0 & & & 2 & \\ \hline $^\Diamond$Graph matching applications in pattern recognition and image processing \cite{Conte_2003} & 0 & 40* & & & & 6 \\ \hline $^\Diamond$Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \cite{Park_2014} & 0 & 41* & 9 & & & \\ \hline $^\Diamond$Structural matching in computer vision using probabilistic relaxation 
\cite{Christmas_1995} & 9 & 0 & & & 10 & \\ \hline $^\Diamond$A new algorithm for subgraph optimal isomorphism \cite{El_Sonbaty_1998} & 2 & 21 & & & & 9 \\ \hline BIG-ALIGN: Fast Bipartite Graph Alignment \cite{Koutra_2013} & 2 & 21 & & 9 & & \\ \hline $^\Diamond$A graph distance measure for image analysis \cite{Eshera_1984} & 8 & 0 & & & 8 & \\ \hline A new algorithm for error-tolerant subgraph isomorphism detection \cite{Messmer_1998} & 8 & 0 & & & 9 & \\ \hline $^\Diamond$A (sub)graph isomorphism algorithm for matching large graphs \cite{Cordella_2004} & 3 & 16 & & & & 10 \\ \hline \end{tabular} {\singlespacing\footnotesize$\dagger$Also top for Group 1 (biology dominated); $^\Diamond$Also top for Group 2 (computer science dominated); *Top ten for indegree/outdegree}} \vspace{-.25cm} \caption{Highest centrality papers for the entire pruned network.} \label{tab:toppapers_all} \end{table} \begin{table}[H] {\setstretch{1}\fontsize{10.5}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Networks for systems biology: conceptual connection of data and function \cite{Emmert_Streib_2011} & 2 & 90* & 1 & 2 & & 1 \\ \hline $^\Diamond$Fifty years of graph matching, network alignment and network comparison \cite{Emmert_Streib_2016} & 4 & 56* & 2 & 1 & & 2 \\ \hline $^\Diamond$Modeling cellular machinery through biological network comparison \cite{Sharan_2006} & 9 & 40* & 4 & 3 & 10 & 9 \\ \hline $^\Diamond$MAGNA: Maximizing Accuracy in Global Network Alignment \cite{Saraph_2014} & 5 & 35* & 7 & 6 & & 3 \\ \hline $^\Diamond$On Graph Kernels: Hardness Results and Efficient Alternatives \cite{Gartner_2003} & 10* & 9 & 3 & 8 & & \\ \hline Biological network comparison using graphlet degree distribution \cite{Przulj_2007} & 11* & 0 & & 7 & 4 & 7 \\ \hline $^\Diamond$A new graph-based method for pairwise global network alignment \cite{Klau_2009} & 8 & 12 & 9 & 4 & 6 & \\ \hline Network Motifs: Simple Building Blocks of Complex Networks \cite{Milo_2002} & 11* & 0 & & 9 & 8 & \\ \hline $^\Diamond$Pairwise Global Alignment of Protein Interaction Networks by Matching Neighborhood Topology \cite{Singh_2007} & 12* & 0 & & & 3 & \\ \hline $^\Diamond$Topological network alignment uncovers biological function and phylogeny \cite{Kuchaiev_2010} & 12* & 0 & & & 2 & \\ \hline NETAL: a new graph-based method for global alignment of protein-protein interaction networks \cite{Neyshabur_2013} & 6 & 26* & & & & 5 \\ \hline Collective dynamics of ``small-world" networks \cite{Watts_1998} & 10* & 0 & & 10 & 5 & \\ \hline Global network alignment using multiscale spectral signatures \cite{Patro_2012} & 11* & 0 & & & 9 & \\ \hline $^\Diamond$Global alignment of multiple protein interaction networks with application to functional orthology detection \cite{Singh_2008} & 10* & 0 & & & & \\ \hline Conserved patterns of protein interaction in multiple species \cite{Sharan_2005} & 10* & 0 & & & 7 & \\ \hline Pairwise Alignment of Protein Interaction Networks \cite{Koyoturk_2006} & 10* & 0 & & & 1 & \\ \hline Alignment-free protein interaction network comparison \cite{Ali_2014} & 2 & 22 & 6 & 5 & & \\ \hline Graphlet-based measures are suitable for biological network comparison \cite{Hayes_2013} & 1 & 30* & & & & 8 \\ \hline Survey on the Graph Alignment 
Problem and a Benchmark of Suitable Algorithms \cite{Dopmann_2013} & 0 & 26 & & & & 4 \\ \hline Predicting Graph Categories from Structural Properties \cite{Canning_2018} & 0 & 30* & 5 & & & \\ \hline Fast parallel algorithms for graph similarity and matching \cite{Kollias_2014} & 1 & 23 & & & & 6 \\ \hline Complex network measures of brain connectivity: Uses and interpretations \cite{Rubinov_2010} & 0 & 28* & 8 & & & \\ \hline Graph-based methods for analysing networks in cell biology \cite{Aittokallio_2006} & 0 & 30* & & & & 10 \\ \hline Demadroid: Object Reference Graph-Based Malware Detection in Android \cite{Wang_2018} & 0 & 25 & 10 & & & \\ \hline Early Estimation Model for 3D-Discrete Indian Sign Language Recognition Using Graph Matching \cite{Kumar_2018a} & 0 & 29* & & & & \\ \hline Indian sign language recognition using graph matching on 3D motion captured signs \cite{Kumar_2018b} & 0 & 29* & & & & \\ \hline \end{tabular} \singlespacing $^\Diamond$Also a top-centrality paper for the entire network; ; *Top ten for indegree/outdegree} \caption{Highest centrality papers for Group 1 (biology dominated) in our partition of the pruned network.} \label{tab:toppapers_bio} \end{table} \begin{table}[H] {\setstretch{1}\fontsize{10}{13}\selectfont \begin{tabular}{|L{0.7\linewidth}|c|c|c|c|c|c|} \hline Title & \rotatebox[origin=c]{90}{Indegree} & \rotatebox[origin=c]{90}{Outdegree} & \rotatebox[origin=c]{90}{Betweenness} & \rotatebox[origin=c]{90}{Closeness} & \rotatebox[origin=c]{90}{HITS Auth.} & \rotatebox[origin=c]{90}{HITS Hub} \\ \hline\hline $^\Diamond$Thirty Years of Graph Matching in Pattern Recognition \cite{Conte_2004} & 17* & 107* & 1 & 1 & & 1 \\ \hline $^\Diamond$An Algorithm for Subgraph Isomorphism \cite{Ullmann_1976} & 15* & 2 & 10 & 5 & 2 & \\ \hline $^\Diamond$A graduated assignment algorithm for graph matching \cite{Gold_1996} & 18* & 0 & 7 & 4 & 3 & \\ \hline $^\Diamond$An eigendecomposition approach to weighted graph matching problems \cite{Umeyama_1988} & 15* & 5 & & 2 & 4 & \\ \hline $^\Diamond$The graph matching problem \cite{Livi_2012} & 2 & 36* & 3 & 3 & & 8 \\ \hline $^\Diamond$A distance measure between attributed relational graphs for pattern recognition \cite{Sanfeliu_1983} & 13* & 0 & & 7 & 1 & \\ \hline $^\Diamond$Recent developments in graph matching \cite{Bunke_2000} & 0 & 50* & 8 & & & 2 \\ \hline $^\Diamond$Error correcting graph matching: on the influence of the underlying cost function \cite{Bunke_1999} & 9* & 16 & & 8 & & 6 \\ \hline $^\Diamond$Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \cite{Park_2014} & 0 & 41* & 2 & & & \\ \hline $^\Diamond$Efficient Graph Matching Algorithms \cite{Messmer_1995} & 0 & 42* & 5 & & & 4 \\ \hline $^\Diamond$Computers and Intractability: A Guide to the Theory of NP-Completeness \cite{Hartmanis_1982} & 11* & 0 & 6 & & & \\ \hline $^\Diamond$The Hungarian method for the assignment problem \cite{Kuhn_1955} & 14* & 0 & & & & \\ \hline $^\Diamond$Graph matching applications in pattern recognition and image processing \cite{Conte_2003} & 0 & 40* & & & & 3 \\ \hline Efficient Graph Similarity Search Over Large Graph Databases \cite{Zheng_2015} & 0 & 28* & 4 & 6 & & \\ \hline A linear programming approach for the weighted graph matching problem \cite{Almohamad_1993} & 8 & 8 & & 9 & 9 & \\ \hline $^\Diamond$Structural matching in computer vision using probabilistic relaxation \cite{Christmas_1995} & 9* & 0 & & & 5 & \\ \hline $^\Diamond$A graph distance measure for image analysis 
\cite{Eshera_1984} & 8 & 0 & & & 6 & \\ \hline Inexact graph matching for structural pattern recognition \cite{Bunke_1983} & 10* & 0 & & & & \\ \hline $^\Diamond$A new algorithm for subgraph optimal isomorphism \cite{El_Sonbaty_1998} & 2 & 21 & & & & 5 \\ \hline Approximate graph edit distance computation by means of bipartite graph matching \cite{Riesen_2009} & 9 & 0 & & & & \\ \hline Linear time algorithm for isomorphism of planar graphs (Preliminary Report) \cite{Hopcroft_1974} & 9 & 0 & & & & \\ \hline Structural Descriptions and Inexact Matching \cite{Shapiro_1981} & 9 & 0 & & & 7 & \\ \hline $^\Diamond$A (sub)graph isomorphism algorithm for matching large graphs \cite{Cordella_2004} & 3 & 16 & & & & 7 \\ \hline A Probabilistic Approach to Spectral Graph Matching \cite{Egozi_2013} & 0 & 25* & 9 & 10 & & \\ \hline Hierarchical attributed graph representation and recognition of handwritten chinese characters \cite{Lu_1991} & 6 & 0 & & & 8 & \\ \hline Exact and approximate graph matching using random walks \cite{Gori_2005} & 1 & 14 & & & & 9 \\ \hline A shape analysis model with applications to a character recognition system \cite{Rocha_1994} & 5 & 0 & & & 10 & \\ \hline Fast computation of Bipartite graph matching \cite{Serratosa_2014} & 1 & 23* & & & & \\ \hline Graph Matching Based on Node Signatures \cite{Jouili_2009} & 0 & 17 & & & & 10 \\ \hline Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching \cite{Das_2018} & 0 & 22* & & & & \\ \hline \end{tabular} $^\Diamond$Also a top-centrality paper for the entire network; ; *Top ten for indegree/outdegree} \caption{Highest centrality papers for Group 2 (computer science dominated) in our partition of the pruned network.} \label{tab:toppapers_CS} \end{table} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 3 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Partitioning the citation network}\label{chapter:partitioning} \section{The tagging and partitioning process} \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{subnetwork_partition.png} \caption{The two halves of our partition of the pruned network.} \label{fig:partitioned_network} \end{figure} In the process of collecting relevant papers for our citation network, we noticed that network similarity applications seem to be almost exclusively found in the fields of biology and computer science\footnote{While the ``computer science" papers in our reading list all fall into the category of ``pattern recognition", this is not necessarily the case for their references, and we therefore use the broader label in our subject tagging process. Similarly, we use ``biology" instead of ``systems biology" when describing our category labels.}, and we would therefore like to investigate the structure of the network with respect to these two categories. Unfortunately, the metadata for the papers in our network does not include the subject information we would need to simply partition the network using on these categories; while the CrossRef API does sometimes include a ``subject" category, it is present in less than 1\% of items, and with almost six thousand references in our database, it is impractical to categorize their subjects by hand. Instead, we partition the network into two categories of equal size. 
If we choose our partition in a way that minimizes the fraction of edges running between its two groups, it will preserve and separate the two clusters of more densely connected vertices first observed in Figure \ref{fig:pruned_network}, as we can see in Figure \ref{fig:partitioned_network}. We then set out to determine whether these two halves of the network correspond to the two fields of study we noticed while constructing the dataset. To do this, we need some way to roughly tag papers by their subject. We chose to do so according to their journal(s) of publication, since this information is available for over 97\% of the papers in our network\footnote{Journal information was not manually corrected for the papers for which the CrossRef API returned an incorrect result. However, in the vast majority of those cases, the result returned was very similar to the correct one--i.e., written by most of the same authors, or an older paper on the same subject. The subject information should therefore still be accurate enough for our purposes.}. There were a total of 2,285 unique journal names, which we tagged as ``Computer Science", ``Biology", and ``Mathematics" according to the keywords listed in Table \ref{tab:tagging_keywords}. This strategy allowed us to quickly tag the majority of the papers as at least one of these three subjects. Our journal-based tagging is a drastic improvement over the subject information provided by CrossRef, giving us information for about 67\% of the total papers, and 53\% of those in the pruned network. We found this to be sufficient to confirm our initial suspicions that the two communities observed in the network do in fact correspond to the fields of computer science (primarily pattern recognition) and systems biology. In the remainder of this chapter, we show how these categories are reflected in the structure of the dataset, both overall and with respect to our partition, and then discuss the advantages that our dataset and analysis provide in our reading and writing process. \begin{figure}[p] \centering \begin{minipage}[c]{0.23\textwidth} \includegraphics[width=\textwidth]{color_key.png} \end{minipage} \hfill \begin{minipage}[c]{0.7\textwidth} a)\includegraphics[width=\textwidth]{color_coded_full.png} \end{minipage} \begin{minipage}[c]{0.49\textwidth} \includegraphics[width=0.95\textwidth]{color_coded_left.png} b) \vspace{-16pt} \end{minipage} \hfill \begin{minipage}[c]{0.49\textwidth} c)\includegraphics[width=0.95\textwidth]{color_coded_right.png} \end{minipage} \caption{a) The pruned network $G_p$, and b)-c) two halves of its partition $G_p^{(1)}$ and $G_p^{(2)}$, with vertices colored according to their subject label.} \label{fig:subject_color_coded} \end{figure} \section{Results} Our first step is to color code the vertices in the pruned network according to their subject, as shown in Figure \ref{fig:subject_color_coded}, so we can get a visual sense of how our tagged subjects are spread through the network. While the main two categories we are interested in are computer science and biology, we have also tagged the mathematics papers, so that we have a third category of similar size and generality to the other two. This serves as a control group, and allows us to consider and reject the hypothesis that there are three main subnetworks of similar papers instead of two.
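As an aside, the journal-based tagging described above amounts to simple keyword matching on journal names. A minimal sketch is given below, with short illustrative keyword lists standing in for the full lists of Table \ref{tab:tagging_keywords}.
\begin{verbatim}
# Illustrative keyword lists only; the full lists appear in the tagging
# keyword table referenced in the text.
SUBJECT_KEYWORDS = {
    "Computer Science": ["ieee", "acm", "siam", "comput", "pattern recognition",
                         "algorithm"],
    "Biology":          ["biol", "bioinformatic", "genom", "protein", "medic"],
    "Mathematics":      ["mathemat", "siam", "combinator", "algebra", "algorithm"],
}

def tag_journal(journal_name):
    """Return every subject whose keywords appear in the journal name."""
    name = journal_name.lower()
    return {subject for subject, keywords in SUBJECT_KEYWORDS.items()
            if any(keyword in name for keyword in keywords)}

print(tag_journal("IEEE Transactions on Pattern Analysis and "
                  "Machine Intelligence"))        # -> {'Computer Science'}
print(tag_journal("BMC Systems Biology"))         # -> {'Biology'}
\end{verbatim}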
Intuitively, we can see that the red and blue vertices are mostly clustered together on the two halves of the pruned network, confirming our suspicion that the two dense clusters of vertices we see in Figure \ref{fig:pruned_network} correspond to the two categories we observed while constructing the dataset. We also notice that the cluster of blue vertices only fills about half of its side of the partition, indicating that the biology category is significantly smaller than the computer science category. The yellow vertices seem to be about evenly spread across the two halves, meaning that we do not have three distinct meaningful subnetworks of papers. There also appear to be significantly more untagged vertices on the biology side of the partition. It is likely that this is not only because the computer science category is inherently larger, but because its papers are more likely to be tagged as such. A full half of the papers in the computer science category are published in an ACM, IEEE, or SIAM\footnote{Both ``SIAM" and ``algorithm" were used as keywords for both math and computer science, which accounts for about half of the overlap between the two categories.} journal (IEEE journals alone represent 35\% of the CS-tagged papers), all of which are easily tagged using these acronyms as a keyword. There were not any analogous dominant organizations with acronym keywords for the biology journals in our dataset, so the tagging relies on topical keywords and can therefore identify fewer of the biology papers with a reasonable number of keywords in our search. As a result of this, the computer science network is more strongly identified as such, and therefore more structurally visible to the partitioning algorithm. \begin{table}[t] \centering \begin{tabular}{|l|r|r|r|r|} \hline & $G$ & $G_p$ & $G_p^{(1)}$ & $G_p^{(2)}$ \\ \hline Total vertices & 5793 & 1062 & 531 & 531 \\ \hline Untagged & 1922 & 502 & 311 & 191 \\ \hline Tagged & 3871 & 560 & 220 & 340 \\ \hline CS & 2533 & 405 & 93 & 312 \\ \hline Biology & 984 & 122 & 108 & 14 \\ \hline Math & 787 & 97 & 44 & 53 \\ \hline Both CS and biology & 108 & 13 & 9 & 4 \\ \hline Both CS and math & 305 & 49 & 15 & 34 \\ \hline Both biology and math & 24 & 3 & 2 & 1 \\ \hline All three & 4 & 1 & 1 & 0 \\ \hline \end{tabular} \caption{Number of vertices tagged as computer science, biology, math, or some combination of these in $G$, $G_p$, and the two halves of the partition $G_p^{(1)}$ and $G_p^{(2)}$.} \label{tab:subject_counts} \end{table} We then count the number of vertices in each color-coded category, as shown in Table \ref{tab:subject_counts}. This confirms what we can intuitively see in Figure \ref{fig:subject_color_coded}. That is, almost all of the biology papers are found on one side of the partition, the majority of the computer science papers are found on the other side, and the math papers are fairly evenly spread between the two. The number of biology papers is much smaller than the number of computer science papers, which helps explain why there are more untagged papers on the biology side, and why there are significantly more computer science papers on the biology side than there are biology papers on the computer science side, both by percentage and by total number.
Color codings for the full network and the subnetwork of high centrality papers can be found in Figures \ref{fig:full_subject_color_coded} and \ref{fig:reading_list_subject_colored}, and a table of vertex counts similar to Table \ref{tab:subject_counts} for the high centrality subnetwork can be found in Table \ref{tab:reading_list_subject_counts}. \subsection{Assortativity results} \begin{table}[h] \centering \begin{tabular}{|l|r|r|} \hline & $G$ & $G_p$ \\ \hline\hline Outdegree & -0.0178 & -0.0141 \\ \hline Publication year & 0.0067 & 0.0041 \\ \hline Citation count & 0.0006 & 0.0654 \\ \hline Reference count & 0.0193 & -0.0061 \\ \hline Tagged with any subject & 0.1089 & -0.0094 \\ \hline Subject & 0.1837 & 0.0712 \\ \hline Subject is CS & 0.2624 & 0.1529 \\ \hline Subject is biology & 0.3354 & 0.1773 \\ \hline Subject is math & 0.0732 & 0.0164 \\ \hline Subject is CS or biology & 0.1500 & 0.0188 \\ \hline Subject is CS or math & 0.2458 & 0.1256 \\ \hline Subject is biology or math & 0.1713 & 0.0414 \\ \hline \end{tabular} \caption{Assortativity of the full and pruned citation networks with respect to various network properties.} \label{tab:assortativity} \end{table} We would like to calculate the assortativity of the network with respect to our subject tagging, to measure the degree to which papers on a certain topic cite other papers on the same topic. It is unclear how to do so, however, since our vertices can belong to multiple categories, while the assortativity algorithm requires categories to be exclusive. To handle this issue, we report results in Table \ref{tab:assortativity} for multiple strategies for dealing with multiple category membership. We can either define category intersections to be their own, separate category, which was the approach for the ``Subject" row in Table \ref{tab:assortativity}, or we can calculate assortativity with respect to whether a vertex is or is not tagged as a certain subject or group of subjects, which was the approach for the rest of Table \ref{tab:assortativity}. For non-subject properties, our assortativity values are all very low in absolute value, meaning that vertices are neither more nor less likely to cite vertices whose outdegree, publication year, citation count, or reference count is similar to their own. We do, however, notice nontrivial assortativity with respect to several of our subject-based properties. The values are much lower than what we observed for the example in Figure \ref{fig:assortativity_demo}, which had an assortativity of 0.72, but this is not surprising. Many of the papers on each of our topics could not be tagged as such, so the assortativity is not as high as it likely would be with perfect subject tagging. We also would not expect to see as much assortativity in the citation network of an interdisciplinary academic research area as we would in a network of non-academic political books, especially when there is significant overlap between our categories. Survey papers in particular will lower the assortativity as they draw connections between work on a similar topic, but in different disciplines. We first notice that the assortativity values in $G_p$ are lower than their corresponding values in all of $G$. That is, the papers with only one parent, which we have removed in $G_p$, are more likely to have the same subject tag as their parent than those cited by multiple papers.
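For reference, the membership-based strategy reduces to a single attribute assortativity calculation once each vertex is labeled by whether it carries one of the subjects in question; a rough sketch (again using networkx as an assumed stand-in for our Mathematica code) looks like this:
\begin{verbatim}
import networkx as nx

def subject_assortativity(G, tags, subjects):
    """Assortativity with respect to membership in any of `subjects`.

    `tags` maps each vertex to a (possibly empty) set of subject labels,
    as produced by the journal tagging described in the previous section.
    """
    labels = {v: bool(tags.get(v, set()) & set(subjects)) for v in G}
    nx.set_node_attributes(G, labels, "in_category")
    return nx.attribute_assortativity_coefficient(G, "in_category")

# e.g. the "Subject is CS or biology" style of row in the table above:
# subject_assortativity(G, tags, {"Computer Science", "Biology"})
\end{verbatim}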
We also observe that there is very little assortativity with respect to whether a paper's subject is mathematics, which supports our hypothesis that the assortativity with respect to computer science and biology is noteworthy, rather than something that any subject classification would produce. Finally, we note that the assortativity with respect to whether a paper is either computer science or biology, or neither, is much lower than with respect to either category on its own, and only somewhat higher than the assortativity with respect to whether a paper is tagged at all. Meanwhile, the assortativity with respect to whether a paper is computer science or math is only slightly lower than with respect to computer science by itself, while the assortativity with respect to whether a paper is computer science or biology is much lower than with respect to biology by itself. That is, the math category is more highly structurally linked to computer science than biology (which is unsurprising, given the relative sizes of its intersections with each category), and biology is the most structurally distinct category overall. \section{How centrality and context inform a better survey} In Section \ref{section:high_centrality_vertices}, we introduced a collection of 61 papers which were found to have high centrality either overall or within one of the sides of our partition of the pruned network. Our goal is to frame our presentation around the most important papers in the network, so they form our primary reading list. To facilitate our reading, we collected and tabulated metadata for the high centrality papers, including their author, year, title, DOI number, whether they are a parent in the network, their rank with respect to each of our centrality metrics overall and within each side of the partition, and their overall rank as discussed in Section \ref{section:high_centrality_vertices}. We also considered the subnetwork of our high centrality vertices, which we refer to as $G_R$, and visually organized them as shown in Figure \ref{fig:reading_list} and Figure \ref{fig:reading_list_neighborhood} to allow us to see at a glance the context of each paper in the wider reading list. In Table \ref{tab:neighborhood_partition_counts}, we show how many vertices in each paper's neighborhood within $G_R$ fall on either side of the partition, which we can use as a guide to which papers might be highly interdisciplinary either in the references they cite, or the papers they are cited by. Finally, we can use the Mathematica representation of the network to easily calculate how many and which of a paper's references are in the pruned network and on either side of the partition, and check the intersection of the neighborhoods of two or more papers. \begin{sidewaysfigure} \centering \includegraphics[width=0.85\textwidth]{reading_list0pt9crop.png} \caption{The subnetwork $G_R$ of high centrality papers, as listed in Tables \ref{tab:toppapers_all}, \ref{tab:toppapers_bio}, and \ref{tab:toppapers_CS}. Green vertices are in group 1 (biology dominated) of the partition of $G_p$, and blue vertices are in group 2 (CS dominated).} \vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed.
\label{fig:reading_list} \end{sidewaysfigure} \begin{sidewaysfigure} \centering \includegraphics[width=0.85\textwidth]{reading_list_neighborhood0pt9crop.png} \caption{The subnetwork $G_R$ of high centrality vertices, highlighting the neighborhood of ``Fifty years of graph matching, network alignment, and network comparison" \cite{Emmert_Streib_2016}.} \vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed. \label{fig:reading_list_neighborhood} \end{sidewaysfigure} At this point, we have a substantial amount of context to guide our reading. We know that there are two main categories of application in our dataset, we know roughly how they inform its structure, and we know which papers are important in each category as well as overall. We know that we are only reading papers which are considered important in some way, we know that they are important within our topic of interest specifically, and we know exactly \textit{how} important they are considered to be in comparison to the rest and why. As we read, we can easily check a paper's neighbors against those we have already read to place it within the context of the overall shape of the field, and to compare our computational sense of which of its references are most relevant with the author's own choices in how to frame them. We can choose to start by reading the papers with high authority centrality, in order to gain an understanding of concepts before building upon them, and follow both year and forward reference information to track the development of the field over time. We can check for connections in the form of cocitations between papers we might expect to be connected based on the ideas they discuss--which is interesting both when the connections are there, and when they are not--and thereby get a sense of what the idea transfer between the two disciplines has been up to this point, which ideas have found crossover and which have not, and why. Overall, we acquire a global sense of the shape of the field of network similarity, which we can trust to be less biased by our own or other authors' perceptions of what is important. At this point, it is easy to choose additional papers to read and refer to as needed throughout the reading and writing process, and to use their reference lists to put them in the context of our existing understanding. As a last remark, we note that our context is based largely on the reading list subnetwork alone--which was formed using an arbitrary cutoff of ``top ten papers"--and that the network mathematics involved is quite basic. Even as helpful as we have found this context to be, we have likely only scratched the surface of the possible benefits of this method of exploration. In the following chapters, we outline the motivation and approach for our two main categories of application. We begin with pattern recognition in Chapter \ref{chapter:pattern_recognition} and then discuss biology in Chapter \ref{chapter:systems_biology}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 4 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Pattern Recognition}\label{chapter:pattern_recognition} \section{Motivation} The complex, combinatorial nature of graphs makes them computationally very difficult to work with, but it also makes them an incredibly powerful data structure for the representation of various objects and concepts.
They are particularly useful in computer vision, where we would often like to recognize certain objects in an image or across images that might seem very different at the pixel level as a result of things like angles, lighting, and image scaling. Since graphs are invariant under positional changes including translations, rotations, and mirroring, they are well suited for this task. Applications in the area of computer vision include optical character recognition \cite{Lu_1991,Rocha_1994}, biometric identification \cite{isenor1986fingerprint,deng2010retinal}, medical diagnostics \cite{sharma2012determining}, and 3D object recognition \cite{Christmas_1995}. In 2018 alone, work was published in the computer vision-related areas of Indian sign language recognition \cite{Kumar_2018a,Kumar_2018b}, spotting subgraphs (e.g. certain characters) in comic book images \cite{le2018ssgci}, and stacking MRI image slices \cite{clough2018mri}. A timeline with more comprehensive counts of papers appearing in various application areas in pattern recognition through 2002 can be found in \cite{Conte_2004}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{motion_capture_demo.png} \scriptsize Source: http://ultimatefifa.com/2012/fifa-13-motion-capture-session/ \caption{A graph-based representation of a human body, with vertices corresponding to the markers on a motion capture suit.} \label{motion_capture_demo} \end{figure} In computer vision applications, as well as for pattern recognition in general, we create a graph representation for an image by decomposing it into parts and using edges to represent the relationships between these components. For example, we can describe a person using the relationships between various body parts--head, shoulders, knees, toes, and so on. This is the idea behind motion capture, as illustrated in Figure \ref{motion_capture_demo}. After we have a graph representation of the objects we would like to compare, the problem of recognition, and in particular of database search, is reduced to a graph matching problem. We must compare the input graph for an unknown object to our database of model graphs to determine which is the most similar. \section{Defining graph matching}\label{section:defining_graph_matching} In the literature, the term ``graph matching"\index{graph matching} is used far more often than it is explicitly defined. When a definition is given, it is usually tailored to the purposes of a particular author, and specific to a certain \textit{type} of graph matching; i.e. exact, inexact, error-correcting, bipartite, and so on. The distinctions between these can be subtle, and are typically only explicitly addressed in survey papers. Furthermore, we sometimes address the question of finding a matching \textit{in} a graph \cite{wikiMatchingInAGraph}, which is different from but still related to the problem of \textit{graph matching}, in which we want to find a mapping \textit{between} two graphs. And finally, our Google Scholar results return a significant number of papers about \textbf{elastic graph matching}\index{elastic graph matching}, which is widely used in pattern recognition but is not in fact a form of graph matching \cite{Conte_2003}. Clearly a good taxonomy is needed. In this section, we give an overview of graph matching-related terms and summarize their distinctions. \subsection{Preliminary definitions} Graph isomorphism is the strictest form of graph matching and a natural place to begin our discussion.
We therefore give definitions for graph isomorphism and the two main relevant generalizations of the graph isomorphism problem, as well as the time complexity-related terms necessary to compare their computational difficulty. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{isomorphism_demos.png} \caption{A visual summary of the distinctions between graph isomorphism, subgraph isomorphism, maximum common subgraph, and inexact matching.} \label{fig:isomorphism_demos} \end{figure} A decision problem (i.e. one which can be posed as a yes or no question), of which the graph isomorphism problem is an example, is in \textbf{NP}\index{NP} or \textbf{nondeterministic polynomial time} if the instances where the answer is ``yes" can be verified in polynomial time \cite{Hartmanis_1982,wikiNPComplexity}. It is \textbf{NP-hard}\index{NP-hard} if it is ``at least as difficult as every other NP problem"; that is, every problem which is in NP can be reduced to it in polynomial time. An NP-hard problem does not necessarily have to be in NP itself \cite{Hartmanis_1982,wikiNPHardness}. If a decision problem is both in NP and NP-hard, it is \textbf{NP-complete}\index{NP-complete} \cite{Hartmanis_1982,wikiNPCompleteness}. An \textbf{induced subgraph}\index{induced subgraph} of a graph is a graph formed from a subset of vertices in the larger graph, and all the edges between them \cite{wikiInducedSubgraph}. By contrast, a \textbf{subgraph}\index{subgraph} is simply a graph formed from a subset of the vertices and edges in the larger graph \cite{wikiSubgraph}. A \textbf{graph isomorphism}\index{graph isomorphism} is a bijective mapping between the vertices of two graphs of the same size, which is \textbf{edge-preserving}\index{edge-preserving}; that is, if two vertices in the first graph are connected by an edge, they are mapped to two vertices in the second graph which are also connected by an edge \cite{Conte_2004}. The decision problem of determining whether two graphs are isomorphic is neither known to be NP-complete nor known to be solvable in polynomial time \cite{wikiGraphIsomorphism}. A \textbf{subgraph isomorphism}\index{subgraph isomorphism} is an edge-preserving injective mapping from the vertices of a smaller graph to the vertices of a larger graph. That is, there is an isomorphism between the smaller graph and some induced subgraph of the larger \cite{Conte_2004}. The decision problem of determining whether a graph contains a subgraph which is isomorphic to some smaller graph is known to be NP-complete \cite{wikiSubgraphIsomorphism}. Finally, a \textbf{maximum common induced subgraph}\index{maximum common subgraph} (MCS) of two graphs is a graph which is an induced subgraph of both, and has as many vertices as possible \cite{wikiMaximumCommonSubgraph}. Formulating the MCS problem as a graph matching problem can be done by defining the metric \[d(G_1,G_2) = 1 - \frac{|MCS(G_1,G_2)|}{\max\{|G_1|,|G_2|\}},\] where $|G|$ is the number of vertices in $G$ \cite{Bunke_1998,Bunke_1997}.
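To make this metric concrete, the following sketch computes $d(G_1,G_2)$ for two very small graphs by brute force, checking every pair of equal-sized vertex subsets for an isomorphism between their induced subgraphs. It is only a minimal illustration of the definition above, not an algorithm from the surveyed literature: the exhaustive search is exponential and the function names are our own, but it relies only on the NetworkX package listed in Table \ref{tab:python_packages}.
\begin{verbatim}
import itertools
import networkx as nx

def mcs_size(G1, G2):
    # Number of vertices in a maximum common induced subgraph, found by
    # exhaustive search; only feasible for very small graphs.
    small, large = sorted((G1, G2), key=lambda G: G.number_of_nodes())
    for k in range(small.number_of_nodes(), 0, -1):
        for nodes_s in itertools.combinations(small.nodes(), k):
            for nodes_l in itertools.combinations(large.nodes(), k):
                if nx.is_isomorphic(small.subgraph(nodes_s),
                                    large.subgraph(nodes_l)):
                    return k
    return 0

def mcs_distance(G1, G2):
    # d(G1, G2) = 1 - |MCS(G1, G2)| / max(|G1|, |G2|)
    return 1 - mcs_size(G1, G2) / max(G1.number_of_nodes(),
                                      G2.number_of_nodes())

# A triangle and a 4-cycle share only a single edge as an induced
# subgraph, so d = 1 - 2/4 = 0.5.
print(mcs_distance(nx.cycle_graph(3), nx.cycle_graph(4)))
\end{verbatim}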
\begin{table}[h] \centering \begin{tabular}{|L{0.35\linewidth}|L{0.15\linewidth}|L{0.15\linewidth}|L{0.22\linewidth}|} \hline & Graph isomorphism & Subgraph isomorphism & Maximum common induced subgraph \\ \hline $G_1$ and $G_2$ must have the same number of vertices & X & & \\ \hline Mapping must include all vertices of either $G_1$ or $G_2$ & X & X & \\ \hline Mapping must be edge-preserving & X & X & X \\ \hline NP-complete & Unknown & X & X* \\ \hline \end{tabular} \flushleft\footnotesize *The associated decision problem of determining whether $G_1$ and $G_2$ have a common induced subgraph with at least $k$ vertices is NP-complete, but the problem of finding the maximum common induced subgraph (as required for graph matching) is NP-hard \cite{wikiMaximumCommonSubgraph}. \caption{A summary of exact graph matching problem formulations.} \label{NP_classifications} \end{table} \subsection{Exact and inexact matching}\label{section:exact_and_inexact_matching} \begin{table}[h] \centering \begin{tabular}{|L{0.16\linewidth}|L{0.13\linewidth}|L{0.11\linewidth}|L{0.12\linewidth}|L{0.105\linewidth}|L{0.165\linewidth}|} \hline & Edge preserving? & Result in? & Mapping seeking? & Optimal? & Complexity \\ \hline Graph isomorphism & Yes & \{0,1\} & Yes & Yes & Likely between P and NP \\ \hline Subgraph isomorphism & Yes & \{0,1\} & Yes & Yes & NP-complete \\ \hline MCS computation & Yes & [0,1] & Yes & Yes & NP-hard \\ \hline Edit distances (exact) & No & [0,1] & No & Yes & Generally exponential \\ \hline Edit distances (approximate) & No & [0,1] & No & No & Generally polynomial \\ \hline Other inexact formulations & No & [0,1] & Sometimes & No* & Generally polynomial \\ \hline \end{tabular} \caption{Summary of the distinctions between exact and inexact graph matching styles.} \footnotesize *There may be optimal methods out there, but we did not find any. If they exist, they are almost certainly not polynomial time. \label{exact_vs_inexact} \end{table} We define a graph matching method to be \textbf{exact}\index{exact matching} if it seeks to find a mapping between the vertices of two graphs which is edge preserving. Exact matching is also sometimes defined by whether a method seeks a \textit{boolean} evaluation of the similarity of two graphs \cite{Livi_2012,Emmert_Streib_2016}. For graph and subgraph isomorphism, this characterization is equivalent; either they are isomorphic/there is a subgraph in the larger which is isomorphic to the smaller, or they are not. Since the maximum common subgraph problem is edge preserving, we consider it in this work to be an exact matching problem. However, it does not seek a boolean evaluation, and it is therefore sometimes considered to be an inexact matching problem \cite{Livi_2012}. In \textbf{inexact matching}\index{inexact matching}, we allow mappings which are not edge-preserving, which allows us to compensate for the inherent variability of the data in an application, as well as the noise and randomness introduced by the process of constructing graph representations of that data. Instead of matchings between vertices being forbidden if edge-preservation requirements are unsatisfied, they are simply penalized in some way. That is, we seek to find a matching that minimizes the sum of this penalty cost. 
Instead of returning a value in $\{0,1\}$, we return a value in $[0,1]$ measuring the similarity or dissimilarity between two graphs\footnote{Returning 1 for an isomorphism is analogous to a boolean evaluation and would be considered a \textit{similarity} measure. Most algorithms for inexact matching seek to minimize some function, so they would return 0 for an isomorphism and are therefore considered \textit{dissimilarity} measures.}. Inexact matching algorithms which are based on an explicit cost function or edit distance are often called \textbf{error tolerant}\index{error tolerant} or \textbf{error correcting}\index{error correcting}. \subsubsection{Optimal and approximate algorithms} Generally, the problem formulations used for inexact matching seek to minimize some nonnegative cost function which can be defined to theoretically be zero for two graphs which are isomorphic. An \textbf{optimal}\index{optimal algorithm} algorithm, that is, one which is guaranteed to find the global minimum of this function, is guaranteed to find an isomorphism if it exists, while still handling the problem of graph variability. However, this comes at the cost of making optimal algorithms for inexact matching significantly more expensive than their exact counterparts \cite{Conte_2004}. Most inexact matching algorithms are therefore \textbf{approximate}\index{approximate algorithm} or \textbf{suboptimal}\index{suboptimal algorithm}. They only find a local minimum of the cost function, which may or may not be close to the true minimum (which may or may not be acceptable in a certain application), but in return are much less expensive to calculate, usually running in polynomial time \cite{Conte_2004}. \subsubsection{Mapping-seeking and non-mapping-seeking algorithms} Finally, we can draw a distinction based on whether the algorithm seeks primarily to find a mapping between vertices (and returns a result in $\{0,1\}$ or $[0,1]$ as a byproduct), or whether it does not. All exact formulations seek a mapping, and many inexact formulations do as well. Mapping-seeking inexact matching is more commonly referred to as \textbf{alignment}\index{alignment}, and is one of two overwhelmingly dominant comparison strategies we observed in biology applications. Alignment is discussed more fully in Chapter \ref{chapter:systems_biology}. \subsection{Graph kernels and embeddings} ``The Graph Matching Problem" \cite{Livi_2012}, published in 2012, claims that there are three main approaches to the inexact graph matching problem: edit distances, graph kernels, and graph embeddings. We did not observe this to be the case in our reading, but we still give a brief introduction to graph kernels and embeddings, and discuss our observations. Graph \textbf{embeddings}\index{graph embedding} are a general strategy of mapping a graph into some metric space or generalized metric space, and performing comparisons there. For example, we could identify a graph with a vector in $\R^n$ containing the seven statistics reported in Table \ref{tab:network_table}, or the eigenvalues of its adjacency matrix, and compare them using Euclidean distance. Mapping a graph into Euclidean space certainly makes comparison easier, but it is not obvious how to create a mapping that preserves graph properties in a sensible way.
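To make the embedding idea concrete, here is a minimal sketch using NumPy and NetworkX (both listed in Table \ref{tab:python_packages}). The particular statistics chosen are illustrative stand-ins rather than the exact seven reported in Table \ref{tab:network_table}, and the function names are our own.
\begin{verbatim}
import numpy as np
import networkx as nx

def embed(G):
    # Map a graph to a small feature vector of global statistics.
    return np.array([
        G.number_of_nodes(),
        G.number_of_edges(),
        nx.density(G),
        nx.average_clustering(G),
        nx.number_connected_components(G),
    ], dtype=float)

def embedding_distance(G1, G2):
    # Euclidean distance between the two feature vectors.
    return np.linalg.norm(embed(G1) - embed(G2))

# Two graphs that are close in this embedding need not be structurally
# similar, which is exactly the difficulty discussed above.
G1 = nx.erdos_renyi_graph(50, 0.1, seed=0)
G2 = nx.erdos_renyi_graph(50, 0.1, seed=1)
print(embedding_distance(G1, G2))
\end{verbatim}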
Graph \textbf{kernels}\index{graph kernel} are a special kind of graph embedding, in which we have a continuous map $k:\mathcal{G}\times \mathcal{G}\rightarrow \R$, where $\mathcal{G}$ is the space of all possible graphs, such that $k$ is symmetric and positive definite or semidefinite \cite{Livi_2012}. Creating a kernel for graphs would allow us to take advantage of the techniques and theory of general kernel methods, but it has been shown that computing a strictly positive definite graph kernel is at least as hard as solving the graph isomorphism problem \cite{Gartner_2003}. We suspect that, in many cases, the work involved in creating a graph kernel with enough desirable properties to take advantage of kernel methods is prohibitive enough to make this an impractical strategy. We note that the strategy of using the \textbf{graphlet degree distribution}\index{graphlet degree distribution} and other local and global graph statistics, which we discuss in Chapter \ref{chapter:systems_biology}, is a form of embedding. Furthermore, the graph kernel strategies described in the references of \cite{Livi_2012} seem to follow the assignment problem-style approach of calculating some notion of similarity between pairs of vertices in two graphs, and using the resulting matrix to create the desired alignment or kernel. We therefore consider the strategies of graph kernels and graph embeddings to be within the families of other categories which we describe in this work, rather than mainstream approaches in their own right. \section{Exact matching and graph edit distance} The field of graph matching is large and well-established, and we cannot hope to give a full overview of all existing techniques without sacrificing our focus on remaining accessible to the relative novice. If the reader is interested in a more comprehensive investigation, the definitive source on graph matching developments through 2004 is ``Thirty Years of Graph Matching In Pattern Recognition" \cite{Conte_2004}. Two of its authors collaborated with various others on a similar survey in 2014 covering the ten years since the prior survey's publication \cite{foggia2014graph}, and in June of 2018 published a large-scale performance comparison of graph matching algorithms on huge graphs \cite{carletti2018comparing} that may also be of interest. We roughly partition the field into three main approaches: \begin{enumerate} \item Exact matching methods, which are primarily based around some kind of pruning of the search space, \item Edit distance-based methods for optimal inexact matching, and \item Continuous optimization-based methods for inexact matching. \end{enumerate} We present optimization methods in their own section, as they are relevant to our upcoming discussion of systems biology network comparison methods in Chapter \ref{chapter:systems_biology}, and in this section aim to introduce the concept of search space pruning (which is the dominant approach for exact matching), and the concept of a graph edit path and its corresponding graph edit distance. Our presentation of edit distances is primarily inspired by \cite{Livi_2012} and \cite{Riesen_2009}. \subsection{Search space pruning} Most algorithms for exact graph matching are based on some form of tree search with backtracking \cite{Conte_2004}. The process is analogous to solving a grid-based logic puzzle. We represent all possible matching options in a grid format, and then rule out infeasible possibilities based on clues or heuristics about the problem.
When we get to the point where our clues can no longer rule out any further possibilities, we must arbitrarily choose from among the remaining possible options for a particular item and follow through the correspondingly ruled out possibilities until we either complete the puzzle or reach a state where there are no possible solutions remaining. In the latter case, we backtrack, rule out our initial arbitrary choice, and try the other possible options for that same item until we either find a solution or exhaust all possible choices. The seminal algorithm for exact matching is found in Ullmann's 1976 paper ``An Algorithm for Subgraph Isomorphism" \cite{Ullmann_1976}, and is applicable to both graph and subgraph isomorphism. We assume two graphs $g_1$ and $g_2$ with vertex counts $m$ and $n$, respectively, and assume without loss of generality that $m\leq n$. This allows us to represent all matching candidate possibilities in an $m\times n$ matrix of zeros and ones. The Ullmann algorithm\index{subgraph isomorphism algorithm} uses two principles to rule out matching possibilities: \begin{enumerate} \item In a subgraph isomorphism, a vertex in $g_1$ can only be mapped to a vertex in $g_2$ with the same or higher degree. This is used to rule out possibilities initially. In Example \ref{ex:ullmann}, degree comparison is able to reduce the number of possible matchings from $8^4=4096$ to $5\cdot 5\cdot 1\cdot 8=200$, a drastic improvement found at a cost of at most $mn$ operations (comparing the degree of each vertex in $g_1$ against the degree of each vertex in $g_2$). \item For any feasible matching candidate $v_2\in g_2$ for $v_1\in g_1$, the neighbors of $v_1$ must each have a feasible matching candidate among the neighbors of $v_2$. Testing this is called the \textbf{refinement}\index{refinement} procedure, and it forms the heart of the algorithm. In Example \ref{ex:ullmann}, after a single stage of the refinement process and before we begin backtracking, we have reduced the number of possible matchings down to $3\cdot 3\cdot 1\cdot 3=27$. \end{enumerate} \begin{example}\label{ex:ullmann} As Ullmann's presentation of his own algorithm is very detail-oriented and does not seek to give a broad intuition for the method, we illustrate it by following an example found on StackOverflow \cite{ullmannStackOverflow}. We have two graphs $g_1$ and $g_2$, as shown, and we want to determine whether a subgraph isomorphism exists between them. \vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} First, we use degree comparison to determine the initial candidates for mapping vertices in $g_1$ to vertices in $g_2$. Vertex $d$ has degree 1, so it can be mapped to anything in $g_2$. Vertices $a$ and $b$ have degree 2, so they cannot be mapped to vertices $3, 4$, or $8$ in $g_2$, as these vertices have degree 1. Finally, vertex $c$ has degree 3, so it can only be mapped to vertex 6. \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \end{tikzpicture} \footnotesize\singlespacing Candidate mapping pairs which satisfy degree requirements. \end{center} \vspace{-5pt} Next, we begin the refinement procedure.
We show two cases of the refinement procedure for the candidates of vertex $a$ in $g_1$: one where the candidate is valid, and another where it is ruled out. \vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} \begin{center} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \fill[gray,opacity=0.2] (m-3-2.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-2.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-3.south west) rectangle (m-2-3.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-2-7.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[green,thick,radius=6pt] (m-3-3) circle; \draw[green,thick,radius=6pt] (m-4-7) circle; \end{tikzpicture} \footnotesize\singlespacing Vertex 1 is a suitable candidate for vertex $a$. \end{minipage} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & \cancel{1} & 0 & 0 & 1 & 1 & 1 & 0 \\ b & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ }; \fill[gray,opacity=0.2] (m-3-2.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-2.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-2.south west) rectangle (m-2-2.north east); \fill[gray,opacity=0.2] (m-5-4.south west) rectangle (m-2-4.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[green,radius=6pt] (m-3-2) circle; \draw[red,radius=6pt] (m-4-2) circle; \draw[red,radius=6pt] (m-4-4) circle; \end{tikzpicture} \footnotesize\singlespacing Vertex 2 is not a suitable candidate for vertex $a$. \end{minipage} \begin{minipage}{0.1\textwidth} \hfill \end{minipage} \end{center} \bigskip On the left, we consider vertex 1 in $g_2$ as a candidate for vertex $a$ in $g_1$. We highlight the rows corresponding to $a$'s neighbors, and the columns corresponding to 1's neighbors. Each neighbor of $a$ must have a candidate among the neighbors of 1; i.e., there must be a 1 somewhere in the intersections of the highlighted columns with each highlighted row. Since this is the case on the left, 1 remains a candidate for $a$. On the right, however, we find that vertex 2 is not a valid candidate for $a$. While there is a candidate for $b$ among the neighbors of 2, there is not a candidate for $c$ among the neighbors of 2. After performing the refinement process for all candidate pairings, the remaining candidates for each vertex in $g_1$ are as shown below. At this point, it is time to begin backtracking. 
\vspace{5pt} \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ b & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \end{tikzpicture} \footnotesize\singlespacing Candidate mapping pairs after the initial refinement procedure. \end{center} \vspace{-5pt} For the backtracking procedure, we try mapping a vertex to each of its candidates in turn. At each stage, if we cannot find any viable candidates for a vertex among the neighbors of the candidate in question, we backtrack and try again. The algorithm stops when we either find a subgraph isomorphism, or eliminate all candidates for a vertex. \vspace{-30pt} \begin{center} \begin{tabular}{C{0.35\textwidth}C{0.6\textwidth}} \includegraphics[width=0.95\textwidth]{ullmann_demo_cropped.png} & \\ $g_1$ & $g_2$\\ \end{tabular} \end{center} \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[ matrix of math nodes, nodes in empty cells, inner sep=4pt, ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & \cancel{1} & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ b & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\ }; \fill[gray,opacity=0.2] (m-3-2.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-2.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-3.south west) rectangle (m-2-3.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-2-7.north east); \draw (m-5-2.south west) -- (m-1-2.north west); \draw[green,radius=6pt] (m-4-7) circle; \draw[red,radius=6pt] (m-3-3) circle; \draw[red,radius=6pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to $1$. There is no viable candidate for $b$ among the neighbors of $1$, so we backtrack and try again. \end{minipage}\hfill \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[matrix of math nodes,inner sep=4pt ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 0 & 0 & 0 & 0 & \cancel{1} & 0 & 1 & 0 \\ b & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \fill[gray,opacity=0.2] (m-3-2.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-2.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-5.south west) rectangle (m-2-5.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-2-7.north east); \draw[green,radius=6pt] (m-4-7) circle; \draw[red,radius=6pt] (m-3-5) circle; \draw[red,radius=6pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to 5. There is no viable candidate for $b$ among the neighbors of $5$, so we backtrack and try again. 
\end{minipage}\hfill \begin{minipage}{0.3\textwidth} \begin{center} \begin{tikzpicture} \matrix (m)[matrix of math nodes,inner sep=4pt ]{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline a & 0 & 0 & 0 & 0 & 0 & 0 & \cancel{1} & 0 \\ b & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ c & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ d & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ }; \draw (m-5-2.south west) -- (m-1-2.north west); \fill[gray,opacity=0.2] (m-3-2.south west) rectangle (m-3-9.north east); \fill[gray,opacity=0.2] (m-4-2.south west) rectangle (m-4-9.north east); \fill[gray,opacity=0.2] (m-5-9.south west) rectangle (m-2-9.north east); \fill[gray,opacity=0.2] (m-5-7.south west) rectangle (m-2-7.north east); \draw[green,radius=6pt] (m-4-7) circle; \draw[red,radius=6pt] (m-3-9) circle; \draw[red,radius=6pt] (m-3-7) circle; \end{tikzpicture} \end{center} \footnotesize\singlespacing Try mapping $a$ to 7. There is no viable candidate for $b$ among the neighbors of $7$, and no more candidates for $a$, so we stop. \end{minipage} \vspace{15pt} We cannot find a suitable candidate for $b$ among the neighbors of any candidate of $a$, so there is no subgraph isomorphism between $g_1$ and $g_2$. \end{example} \subsection{Graph edit distance} \begin{wrapfigure}{R}{0.35\textwidth} \begin{tabular}{|lcl|l|} \hline cat & $\rightarrow$ & ca\textit{r}t & Insertion \\ \hline \textit{c}art & $\rightarrow$ & \textit{d}art & Substitution \\ \hline \textit{d}art & $\rightarrow$ & art & Deletion \\ \hline \textit{ar}t & $\rightarrow$ & \textit{ra}t & Transposition \\ \hline \end{tabular} \caption{Edit operations for strings.} \vspace{-10pt} \label{fig:string_edit_operations} \end{wrapfigure} One way to measure the distance between two objects is to measure how much work it takes to turn the first into the second, and take the length of the \textbf{edit path}\index{edit path}. For example, in Figure \ref{fig:string_edit_operations}, we find an edit path of length 4 between ``cat" and ``rat". However, this should clearly not be the \textbf{edit distance}\index{graph edit distance} between ``cat" and ``rat", as we can transform one into the other with a single substitution. The edit distance between two objects is therefore the \textit{minimum} over the lengths of all possible edit paths between them. 
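To make the string case concrete before moving to graphs, the sketch below is a standard dynamic programming computation of the edit distance restricted to insertions, deletions, and substitutions (transpositions, as in Figure \ref{fig:string_edit_operations}, are omitted for brevity). It is a textbook illustration rather than code from any of the surveyed papers, and the function name is our own.
\begin{verbatim}
def edit_distance(s, t):
    # Classic dynamic programming table: D[i][j] is the minimum number of
    # insertions, deletions, and substitutions needed to turn s[:i] into t[:j].
    D = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        D[i][0] = i                       # delete all of s[:i]
    for j in range(len(t) + 1):
        D[0][j] = j                       # insert all of t[:j]
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,         # deletion
                          D[i][j - 1] + 1,         # insertion
                          D[i - 1][j - 1] + cost)  # substitution (or match)
    return D[len(s)][len(t)]

print(edit_distance("cat", "rat"))   # 1: a single substitution
print(edit_distance("cat", "cart"))  # 1: a single insertion
\end{verbatim}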
\begin{figure}[h] \centering \begin{tabular}{C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}C{0.03\textwidth}C{0.18\textwidth}} \multicolumn{3}{c}{Vertex insertion} & & \multicolumn{3}{c}{Edge insertion} \\ \includegraphics[width=0.18\textwidth]{vertex_insertion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_insertion_right.png} & & \includegraphics[width=0.18\textwidth]{edge_insertion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{edge_insertion_right.png} \\ \multicolumn{3}{c}{Vertex deletion} & & \multicolumn{3}{c}{Edge deletion} \\ \includegraphics[width=0.18\textwidth]{vertex_deletion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_deletion_right.png} & & \includegraphics[width=0.18\textwidth]{edge_deletion_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{edge_deletion_right.png} \\ \multicolumn{7}{c}{Vertex substitution} \\ & & \includegraphics[width=0.18\textwidth]{vertex_substitution_left.png} & $\rightarrow$ & \includegraphics[width=0.18\textwidth]{vertex_substitution_right.png} & & \\ \end{tabular} \caption{Edit operations for graphs.} \label{fig:graph_edit_operations} \end{figure} For graphs, the relevant edit operations are \textbf{vertex substitution}\index{graph edit operations}, \textbf{vertex insertion}, \textbf{vertex deletion}, \textbf{edge insertion}, and \textbf{edge deletion}, as illustrated in Figure \ref{fig:graph_edit_operations}. Instead of simply taking the length of the edit path, however, each of these operations is associated with some nonnegative cost function $c(u,v)\in \R^+$ (the ``penalty" mentioned in Section \ref{section:exact_and_inexact_matching}) which avoids rewarding unnecessary edit operations by satisfying the inequality \[c(u,w)\leq c(u,v)+c(v,w),\] where $u, v$, and $w$ are vertices or edges, or sometimes null vertices/edges in the case of insertion and deletion. We also assume that the cost of deleting a vertex with edges is equivalent to that of first deleting each of its edges and then deleting the resultant neighborless vertex.
%If two graphs are isomorphic to one another, the edit distance between them is the total cost of relabeling--i.e. substituting--all $n$ vertices.
The graph edit distance is then the minimum total cost over all edit paths between the two graphs, and it critically depends on the costs of the underlying edit operations \cite{Bunke_1998}. This can be helpful in some cases, as it allows us to easily tweak parameters in our notion of similarity. It is also sometimes desirable to avoid this dependence on the cost function. This is one motivation for the formulation of inexact graph matching as a continuous optimization problem, which we discuss in the next section. Finally, we note that it was shown by Bunke in 1999 that the graph isomorphism, subgraph isomorphism, and maximum common subgraph problems are all special cases of the problem of calculating the graph edit distance under certain cost functions \cite{Bunke_1999}. \section{Suboptimal methods for inexact matching} We noted previously that optimal methods for inexact graph matching tend to be very expensive, and therefore only suitable for graphs of small size. To address this issue, Riesen and Bunke in 2008 introduced an algorithm for approximating the graph edit distance in a substantially faster way \cite{Riesen_2009}, of which Serratosa published an improved variant in 2014 \cite{Serratosa_2014}.
This is not the only suboptimal method in existence for approximating the graph edit distance, but it provides an interesting connection between the seemingly radically different strategies of search space pruning and of casting the problem as one of continuous optimization. \subsection{The assignment problem} The key to this connection is the idea of the assignment problem. The following definition is due to Riesen and Bunke \cite{Riesen_2009}: \begin{definition} Consider two sets $A$ and $B$, each of cardinality $n$, together with an $n\times n$ cost matrix $C$ of real numbers, where the matrix elements $c_{i,j}$ correspond to the cost of assigning the $i$-th element of $A$ to the $j$-th element of $B$. The \textbf{assignment problem}\index{assignment problem} is that of finding a permutation $p=\{p_1,\dots,p_n\}$ of the integers $\{1,2,\dots,n\}$ which minimizes $\sum_{i=1}^n c_{i,p_i}$. \end{definition} A brute force algorithm for the assignment problem would require $O(n!)$ time, which is clearly impractical. Instead, we typically use the \textbf{Hungarian method}\index{Hungarian algorithm}\index{Munkres' algorithm}. This algorithm is originally due to Kuhn in 1955 \cite{Kuhn_1955} and solves the problem in at most $O(n^3)$ time by transforming the original cost matrix into an equivalent matrix with $n$ independent zero elements\footnote{Independent meaning that they are in distinct rows and columns.} which correspond to the optimal assignment pairs. The version of the algorithm described in \cite{Riesen_2009} is a refinement of the original Hungarian algorithm published by Munkres in 1957 \cite{munkres1957algorithms}. \subsubsection{Relationship to the bipartite graph matching problem} \begin{figure}[h] \centering \begin{tabular}{m{0.3\textwidth}m{0.05\textwidth}m{0.3\textwidth}} $C = $\bordermatrix{ & 1 & 2 & 3 \cr a & 3 & 2 & 1 \cr b & 1 & 3 & 4 \cr c & 2 & 5 & 2 } & $\Leftrightarrow$ & \includegraphics[width=0.3\textwidth]{bipartite_assignment_problem.png} \\ \multicolumn{3}{c}{$A = \{a,b,c\}, B=\{1,2,3\}$} \end{tabular} \caption{Reformulating the assignment problem as that of finding an optimal matching in a bipartite graph. The edges and their weight labels in the bipartite graph are colored to make it easier to see which weights belong to which edges.} \label{bipartite_reformulation} \end{figure} As noted in Section \ref{section:defining_graph_matching}, we sometimes must address the question of finding a matching \textit{in} a graph. This is defined as a set of edges without common vertices. It is straightforward to reformulate the assignment problem as one of finding an optimal matching within a \textbf{bipartite graph}\index{bipartite graph}, that is, a graph whose vertices can be divided into two disjoint independent sets such that no edges run between vertices of the same set. If $A$ and $B$ are two sets of cardinality $n$ as in the assignment problem, the elements of $A$ form one vertex group, the elements of $B$ form the other, and we define the edge weight between the $i$-th element of $A$ and the $j$-th element of $B$ to be the cost of that assignment, as shown in Figure \ref{bipartite_reformulation}. The assignment problem is therefore also referred to as the \textbf{bipartite graph matching problem}.
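As an aside, the small instance in Figure \ref{bipartite_reformulation} can be solved directly with an off-the-shelf implementation of this family of algorithms. The sketch below assumes SciPy, which is not among the packages listed in Table \ref{tab:python_packages} but provides \verb+linear_sum_assignment+ for exactly this problem; the cost matrix is the one shown in the figure.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost matrix from the figure: rows are a, b, c and columns are 1, 2, 3.
C = np.array([[3, 2, 1],
              [1, 3, 4],
              [2, 5, 2]])

rows, cols = linear_sum_assignment(C)
for r, c in zip(rows, cols):
    print("abc"[r], "->", c + 1)           # a -> 2, b -> 1, c -> 3
print("total cost:", C[rows, cols].sum())  # 5
\end{verbatim}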
\subsubsection{Relationship to graph edit distance} To connect the assignment problem to graph edit distance computation, we define a cost matrix $C$ such that each $c_{i,j}$ entry corresponds to the cost of substituting the $i$-th vertex of our source graph with the $j$-th vertex of our target graph \cite{Riesen_2009}. We can generalize this approach further to handle graphs with different numbers of vertices by considering vertex insertions and deletions as well as substitutions, and by using a modified version of the Hungarian method which applies to rectangular matrices \cite{bourgeois1971extension}. The resulting cost matrix (again, definition due to Riesen and Bunke \cite{Riesen_2009}) then becomes \[ C = \left[ \begin{array}{cccc|cccc} c_{1,1} & c_{1,2} & \dots & c_{1,m} & c_{1,-} & \infty & \dots & \infty \\ c_{2,1} & c_{2,2} & \dots & c_{2,m} & \infty & c_{2,-} & \ddots & \vdots \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \ddots & \infty \\ c_{n,1} & c_{n,2} & \dots & c_{n,m} & \infty & \dots & \infty & c_{n,-} \\ \hline c_{-,1} & \infty & \dots & \infty & 0 & 0 & \dots & 0 \\ \infty & c_{-,2} & \ddots & \vdots & 0 & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \infty & \vdots & \ddots & \ddots & 0\\ \infty & \dots & \infty & c_{-,m} & 0 & \dots & 0 & 0 \\ \end{array} \right], \] where $n$ is the number of vertices in the source graph, $m$ the number of vertices in the target, and a $-$ is used to represent null values. The upper left corner of this matrix represents the cost of vertex substitutions, and the bottom left and top right corners represent the costs of vertex insertions and deletions, respectively. Since each vertex can be inserted or deleted at most once, the off-diagonal elements of these submatrices are set to infinity. Finally, since substitutions of null values should not impose any costs, the bottom right corner of $C$ is set to zero. This is only a rough approximation of the true edit distance, as it does not consider any information about the costs of edge transformations. We can improve the approximation by adding the minimum sum of edge edit operation costs implied by a vertex substitution to that substitution's entry in the cost matrix, but we still have a suboptimal solution for the graph edit distance problem, even though the assignment problem can be solved optimally in a reasonable amount of time. \subsubsection{Other suboptimal graph matching methods using the assignment problem} Approximating the graph edit distance is far from the only graph matching strategy which is based around the assignment problem. Instead of a cost matrix based around the cost function of a graph edit distance measure, we can incorporate other measures of similarity or affinity between vertices. The advantage of this approach is that we can incorporate both topological\footnote{That is, information derived directly from the structure of a network.} and external notions of similarity into our definition. However, this comes at the cost of relying on heuristic notions of similarity, rather than directly incorporating information about edge preservation into our measure of assignment quality. Techniques which incorporate external information are much more prevalent in biology, and will be discussed further in Chapter \ref{chapter:systems_biology}. \subsection{Weighted graph matching vs. the assignment problem} Most of the suboptimal graph matching methods we observed are based around either the assignment problem, or around some formulation of the \textit{weighted graph matching problem}.
\begin{definition} The \textbf{weighted graph matching problem}\index{weighted graph matching problem} (WGMP) is typically defined as finding an optimal permutation matrix which minimizes a distance measure between two weighted graphs; generally, if $A_G$ and $A_H$ are the adjacency matrices of these, both $n\times n$, we seek a permutation matrix $P$ which minimizes $||A_G - PA_HP^T||$ with respect to some norm \cite{Umeyama_1988, Koutra_2013, Almohamad_1993}, or which minimizes some similarly formulated energy function \cite{Gold_1996}. The specific norm and definition depend on the technique being used to solve the optimization problem. \end{definition} Weighted graph matching is an inexact graph matching method, and its techniques are generally suboptimal, searching for a \textit{local} minimum of the corresponding continuous optimization problem. There is a wide variety of techniques in use, including linear programming \cite{Almohamad_1993}, eigendecomposition \cite{Umeyama_1988}, and graduated assignment \cite{Gold_1996}. Other techniques mentioned in \cite{Almohamad_1993, Umeyama_1988, Gold_1996} and \cite{Conte_2004} for which we do not include specific references include Lagrangian relaxation, symmetric polynomial transformation, replicator equations, other spectral methods, neural networks, and genetic algorithms. The weighted graph matching problem is similar to the assignment problem in that we seek a permutation between the $n$ vertices of two graphs, but unlike the assignment problem, there is no need to define a cost or similarity matrix ahead of time. Instead, we measure the quality of a permutation assignment with respect to the structure of the graphs themselves, and optimize this quantity directly to find our matching. This means we can avoid relying on the heuristics inherent in any formulation of a cost or similarity matrix, but it also means we cannot easily incorporate external information into our solution of the problem. Whether weighted graph matching techniques are preferable to assignment problem-based strategies is therefore dependent on the specific problem to be solved.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 5 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Systems Biology}\label{chapter:systems_biology} \section{Motivation} A fundamental goal in biology is to understand complex systems in terms of their interacting components. Network representations of these systems are particularly helpful at the cellular and molecular level, at which we have large-scale, experimentally determined data about the interactions between biomolecules such as proteins, genes, and metabolites. In the past twenty years, there has been an explosion in the availability of large-scale interaction data between biomolecules, paralleling the surge of DNA sequence information facilitated by the Human Genome Project\footnote{https://www.genome.gov/12011239/a-brief-history-of-the-human-genome-project/}. Sequence information comparison tools have been revolutionary in advancing our understanding of basic biological processes, including our models of evolutionary processes and disease\footnote{https://www.scq.ubc.ca/the-human-genome-project-the-impact-of-genome-sequencing-technology-on-human-health/ (will fix citation later)}, and the comparative analysis of biological networks presents a similarly powerful method for organizing large-scale interaction data into models of cellular signaling and regulatory machinery \cite{Sharan_2005}.
In particular, we can use network comparison to address fundamental biological questions such as ``Which proteins, protein interactions and groups of interactions are likely to have equivalent functions across species?", ``Can we predict new functional information about proteins and interactions that are poorly characterized, based on similarities in their interaction networks?", and ``What do these relationships tell us about the evolution of proteins, networks, and whole species?" \cite{Sharan_2006}. Comparison strategies and metrics are also key to developing mathematical models of biological networks which represent their structure in a statistically meaningful way, which is a key step towards understanding them. In particular, good comparison techniques allow us to model dynamical systems on biological networks \cite{Watts_1998} (e.g. the spread of infectious diseases), and create appropriate null hypotheses for drawing conclusions about experimental networks \cite{Hayes_2013}\footnote{Section 2.3.1 in \cite{Hayes_2013} contains a very good summary of the importance of modelling biological networks.}. % Chart from ``Modelling cellular machinery": (alignment, integration, querying) x (common application, main goals, limitations) % Our chart should be like that big full page diagram % Have a sketch of it--draw a bigger sketch, see if Amelia can help with the tikz so it doesn't take as long \section{Graphlets} In Table \ref{tab:, we introduced \subsection{Introduction to univariate statistics via degree distributions} That's a good way to introduce the idea of local vs. global statistics--reference all the GLOBAL statistics we already talked about \subsection{Applications of univariate statistics} \subsection{Motifs and graphlets are a generalization of degree distributions} \subsection{They are important both because they can be used to create comparison metrics, and because random models for networks are important} \section{Alignment} What's the relationship between alignment and the assignment problem? \subsection{Why do we want to do it?} Analogy to sequence alignment goes HERE. \subsection{What is it exactly?} What is the actual definition, compared to how we just defined it for CS? Also, external information is very important! Not just topological similarity! We need it very badly! \subsection{Local vs. global} \begin{wrapfigure}{L}{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{local_alignment.png} \caption{local alignment} \label{fig:local_alignment} \end{wrapfigure} \begin{wrapfigure}{L}{0.5\textwidth} \centering \includegraphics[width=0.5\textwidth]{global_alignment.png} \caption{global alignment} \label{fig:global_alignment} \end{wrapfigure} \subsection{Two steps of the process: similarity matrix + searching} \begin{table}[h] \centering \begin{tabular}{|l|r|r|} \hline % stuff in here \end{tabular} \caption{nice table of the named local and global algorithms, their year, their source number in the reference list, their similarity strategy, and their search strategy.} \label{tab:alignment_algorithms} \end{table} Because I already talked about this in the CS chapter, I can just do it comparatively here. 
\section{Summary of algorithms} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CHAPTER 6 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Conclusion}\label{chapter:conclusion} (probably doesn't need sections, but it's good for at-a-glance structure) \subsection{The conclusion only counts as one section in terms of time to write it} \subsection{how CS stuff might be useful for bio, and barriers/limitations to that} \subsection{how bio stuff might be useful for CS, etc.} \subsection{complain about the other surveys again and brag on my way} %%%%%%%%%%%%%%%%%%%%%%%%% GLOSSARY, APPENDIX AND END MATTER %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Changes the numbering of chapters and sections for appendices. Any chapters or sections listed here will be treated as part of the appendix \appendix \chapter{Appendices}\label{chapter:appendices} \section{Additional Figures} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{display_sciMet.png} \caption{The sciMet network dataset used in our Table \ref{tab:network_table} comparison to $G$. Note the high number of connected components, and low number of children per parent, in contrast to the ``blooming" behavior in $G$ created by the inclusion of ALL child references from each parent paper.} \label{fig:sciMet} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{display_zewail.png} \caption{The zewail citation network dataset used in our Table \ref{tab:network_table} comparison to $G$. Note the high number of connected components, and low number of children per parent, in contrast to the ``blooming" behavior in $G$ created by the inclusion of ALL child references from each parent paper.} \label{fig:zewail} \end{figure} \begin{figure}[H] \centering \includegraphics[width=\textwidth]{full_network_color_coded.png} \caption{The full network $G_p$, with vertices colored according to their subject label as in Figure \ref{fig:subject_color_coded}.} \label{fig:full_subject_color_coded} \end{figure} \begin{sidewaysfigure} \centering \includegraphics[width=0.85\textwidth]{reading_list_subject_colored_CROPPED.png} \caption{The subnetwork $S$ of high centrality papers, as listed in Tables \ref{tab:toppapers_all}, \ref{tab:toppapers_bio}, and \ref{tab:toppapers_CS}, with vertices colored according to their subject label. Grey vertices are unlabeled, pink is computer science, blue is biology, yellow is math, orange is both math and computer science, and purple is both computer science and biology.} \vspace{-12pt}\flushleft\scriptsize Note: ``Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching" is not in the connected component and is not displayed. \label{fig:reading_list_subject_colored} \end{sidewaysfigure} \section{Additional Tables} \begin{table}[t] \centering \begin{tabular}{|l|r|r|r|} \hline & $G_R$ & $G_R^{(1)}$ & $G_R^{(2)}$ \\ \hline Total vertices & 61 & 27 & 34 \\ \hline Untagged & 30 & 8 & 22 \\ \hline Tagged & 31 & 19 & 12 \\ \hline CS & 24 & 2 & 22 \\ \hline Biology & 6 & 6 & 0 \\ \hline Math & 3 & 1 & 2 \\ \hline Both CS and biology & 1 & 1 & 0 \\ \hline Both CS and math & 2 & 0 & 2 \\ \hline Both biology and math & 0 & 0 & 0 \\ \hline All three & 0 & 0 & 0 \\ \hline \end{tabular} \caption{Number of vertices tagged as computer science, biology, math, or some combination of these in the reading list subnetwork $G_R$, and its intersections $G_R^{(1)}$ and $G_R^{(2)}$ with the two halves of the partition $G_p^{(1)}$ and $G_p^{(2)}$. 
See Table \ref{tab:subject_counts}.} \label{tab:reading_list_subject_counts} \end{table} \begin{singlespace} \begin{longtable}{|L{0.095\textwidth}|C{0.04\textwidth}|C{0.045\textwidth}|C{0.045\textwidth}|L{0.7\textwidth}|} \hline Subject & $N$ & $G_R^{(1)}$ & $G_R^{(2)}$ & Title \\ \hline \hline \endhead None & 22 & 15 & 7 & Fifty years of graph matching, network alignment and network comparison \\ \hline None & 15 & 10 & 5 & Networks for systems biology: conceptual connection of data and function \\ \hline Biology & 7 & 7 & 0 & Global network alignment using multiscale spectral signatures \\ \hline None & 13 & 1 & 12 & Error correcting graph matching: on the influence of the underlying cost function \\ \hline None & 10 & 10 & 0 & MAGNA: Maximizing Accuracy in Global Network Alignment \\ \hline None & 4 & 4 & 0 & Graphlet-based measures are suitable for biological network comparison \\ \hline None & 7 & 6 & 1 & On Graph Kernels: Hardness Results and Efficient Alternatives \\ \hline Biology & 8 & 8 & 0 & Pairwise Alignment of Protein Interaction Networks \\ \hline None & 6 & 6 & 0 & Alignment-free protein interaction network comparison \\ \hline Biology & 7 & 7 & 0 & Biological network comparison using graphlet degree distribution \\ \hline None & 10 & 10 & 0 & NETAL: a new graph-based method for global alignment of protein-protein interaction networks \\ \hline CS/Math & 10 & 4 & 6 & Computers and Intractability: A Guide to the Theory of NP-Completeness \\ \hline None & 16 & 1 & 15 & Recent developments in graph matching \\ \hline None & 10 & 10 & 0 & Modeling cellular machinery through biological network comparison \\ \hline CS & 7 & 2 & 5 & A graph distance metric based on the maximal common subgraph \\ \hline None & 24 & 1 & 23 & Thirty years of graph matching in pattern recognition \\ \hline None & 6 & 6 & 0 & Collective dynamics of ``small-world" networks \\ \hline None & 13 & 11 & 2 & A new graph-based method for pairwise global network alignment \\ \hline None & 12 & 4 & 8 & An Algorithm for Subgraph Isomorphism \\ \hline CS & 8 & 2 & 6 & On a relation between graph edit distance and maximum common subgraph \\ \hline None & 7 & 7 & 0 & Topological network alignment uncovers biological function and phylogeny \\ \hline CS & 7 & 0 & 7 & A linear programming approach for the weighted graph matching problem \\ \hline None & 10 & 0 & 10 & An eigendecomposition approach to weighted graph matching problems \\ \hline CS & 8 & 0 & 8 & A graduated assignment algorithm for graph matching \\ \hline None & 9 & 0 & 9 & A new algorithm for subgraph optimal isomorphism \\ \hline CS & 9 & 0 & 9 & A distance measure between attributed relational graphs for pattern recognition \\ \hline CS & 5 & 0 & 5 & Inexact graph matching for structural pattern recognition \\ \hline CS & 6 & 2 & 4 & A new algorithm for error-tolerant subgraph isomorphism detection \\ \hline CS & 6 & 0 & 6 & Structural Descriptions and Inexact Matching \\ \hline CS & 6 & 0 & 6 & A shape analysis model with applications to a character recognition system \\ \hline CS & 9 & 0 & 9 & A graph distance measure for image analysis \\ \hline CS & 8 & 0 & 8 & Structural matching in computer vision using probabilistic relaxation \\ \hline CS & 6 & 0 & 6 & Hierarchical attributed graph representation and recognition of handwritten chinese characters \\ \hline CS & 4 & 0 & 4 & Linear time algorithm for isomorphism of planar graphs (Preliminary Report) \\ \hline CS/Bio & 6 & 5 & 1 & Pairwise Global Alignment of Protein Interaction Networks by 
Matching Neighborhood Topology \\ \hline None & 6 & 6 & 0 & Conserved patterns of protein interaction in multiple species \\ \hline None & 5 & 0 & 5 & Approximate graph edit distance computation by means of bipartite graph matching \\ \hline None & 8 & 4 & 4 & Local graph alignment and motif search in biological networks \\ \hline None & 6 & 6 & 0 & Network Motifs: Simple Building Blocks of Complex Networks \\ \hline None & 8 & 1 & 7 & The Hungarian method for the assignment problem \\ \hline CS & 9 & 0 & 9 & Graph Matching Based on Node Signatures \\ \hline None & 7 & 0 & 7 & Exact and approximate graph matching using random walks \\ \hline None & 6 & 6 & 0 & Global alignment of multiple protein interaction networks with application to functional orthology detection \\ \hline None & 10 & 9 & 1 & Fast parallel algorithms for graph similarity and matching \\ \hline CS & 9 & 0 & 9 & Fast computation of Bipartite graph matching \\ \hline CS & 4 & 0 & 4 & Fast and Scalable Approximate Spectral Matching for Higher Order Graph Matching \\ \hline CS & 4 & 0 & 4 & A Probabilistic Approach to Spectral Graph Matching \\ \hline CS & 3 & 0 & 3 & Graph matching applications in pattern recognition and image processing \\ \hline None & 10 & 1 & 9 & The graph matching problem \\ \hline Biology & 4 & 4 & 0 & Graph-based methods for analysing networks in cell biology \\ \hline CS & 10 & 4 & 6 & BIG-ALIGN: Fast Bipartite Graph Alignment \\ \hline None & 7 & 0 & 7 & A (sub)graph isomorphism algorithm for matching large graphs \\ \hline Biology & 4 & 4 & 0 & Complex network measures of brain connectivity: Uses and interpretations \\ \hline CS & 7 & 0 & 7 & Efficient Graph Similarity Search Over Large Graph Databases \\ \hline 3 & 4 & 3 & 1 & Demadroid: Object Reference Graph-Based Malware Detection in Android \\ \hline None & 2 & 2 & 0 & Indian sign language recognition using graph matching on 3D motion captured signs \\ \hline None & 6 & 5 & 1 & Predicting Graph Categories from Structural Properties \\ \hline None & 10 & 10 & 0 & Survey on the Graph Alignment Problem and a Benchmark of Suitable Algorithms \\ \hline CS/Math & 12 & 0 & 12 & Efficient Graph Matching Algorithms \\ \hline CS & 2 & 2 & 0 & Early Estimation Model for 3D-Discrete Indian Sign Language Recognition Using Graph Matching \\ \hline None & 1 & 0 & 1 & Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching \\ \hline \caption{Number of vertices in the neighborhood of each paper in $G_R$, and how many vertices in it lie on each side of the partition.} \label{tab:neighborhood_partition_counts} \end{longtable} \end{singlespace} \section{Code-related} \begin{table}[t] \centering \begin{tabular}{| l | l |} \hline Usage & Package Name(s) \\ \hline Mathematical Computation & NumPy \\ & NetworkX \\ \hline Figure creation* & Matplotlib \\ \hline File I/O Handling & csv\\ & glob \\ & re \\ \hline API Request Handling & urllib \\ & Requests \\ & time \\ \hline Interfacing with Google Sheets & gspread$^\Diamond$ \\ & oauth2client$^\Diamond$ \\ \hline The \verb+defaultdict+ datatype & collections \\ \hline Interfacing my own modules with Jupyter notebooks & importlib$^\Diamond$ \\ \hline \end{tabular} \caption{Python packages used for the project. All but those marked with a $\Diamond$ are found in either the standard library or available in Anaconda for Python 3.6 on 64-bit Windows in mid-2018, and the remainder can be installed via pip.} \footnotesize *Only Figure \ref{fig:year_distributions} was created in Python. 
The remainder were made with Mathematica. \label{tab:python_packages} \end{table} %\begin{figure}[p] %\textbf{Initial loading of $G$, sciMet, and zewail GML files} %\begin{verbatim} %G = Import["citation_network.gml"] %sciMet = Import["sciMet_dataset.gml"] %zewail = Import["zewail_dataset.gml"] %\end{verbatim} %\textbf{Creating the pruned network $G_p$.} %\begin{verbatim} %parents = Position[VertexOutDegree[G], _?(# > 0 &)] // Flatten; %popular = Position[VertexInDegree[G], _?(# > 1 &)] // Flatten; %pruned = Subgraph[G, Union[parents, popular], Options[G]]; %Gp = Subgraph[pruned, WeaklyConnectedComponents[pruned][[1]], % Options[pruned]] %\end{verbatim} %\textbf{Creating the random network $R$.} %\begin{verbatim} %R = RandomGraph[{VertexCount[G], EdgeCount[G]}, DirectedEdges -> True] %\end{verbatim} %\textbf{Creating the random network $R_d$ with the same degree sequences as $G$.} %\begin{verbatim} %outStubs = VertexOutDegree[G]; %inStubs = VertexInDegree[G]; %vertexlist = Range[VertexCount[G]]; %edgelist = {}; %For[i = 1, i <= EdgeCount[g], i++, % source = RandomSample[outStubs -> vertexlist, 1][[1]]; % target = RandomSample[inStubs -> vertexlist, 1][[1]]; % outStubs[[source]] -= 1; % inStubs[[target]] -= 1; % AppendTo[edgelist, source -> target];] %Rd = Graph[vertexlist, edgelist]; %\end{verbatim} %\caption{The Mathematica code used to create the networks analyzed in Table \ref{tab:network_table}.} %\label{fig:network_creation_source_code} %\end{figure} \section{Subject Tagging Keywords} \begin{table}[p] \centering \begin{tabular}{| l | l | l |} \hline \textbf{Computer Science} & \textbf{Biology} & \textbf{Mathematics} \\ \hline\hline ACM & Biochem- & Algebra \\ Algorithm & Biocomputing & Algorithm \\ Artificial Intelligence & Bioengineering & Chaos \\ CIVR & Bioinformatic & Combinatori- \\ Computational Intelligence & Biological & Fixed Point \\ Computational Linguistics & Biology & Fractal \\ Computer & Biomedic- & Functional Analysis \\ Computer Graphics & Biosystem & Geometr- \\ Computer Science & Biotechnology & Graph \\ Computer Vision & Brain & Kernel \\ Data & Cancer & Linear Regression \\ Data Mining & Cardiology & Markov \\ Document Analysis & Cell & Mathemati- \\ Electrical Engineering & Disease & Multivariate \\ Graphics & DNA & Network \\ IEEE & Drug & Optimization \\ Image Analysis & Endocrinology & Permutation Group \\ Image Processing & Epidemiology & Probability \\ Intelligent System & Genetic & Riemann Surface \\ Internet & Genome & SIAM \\ ITiCSE & Genomic & Statistic- \\ Language Processing & Medical & Topology \\ Learning & Medicinal & Wavelet \\ Machine Learning & Medicine & \\ Machine Vision & Metabolic & \\ Malware & Microbiology & \\ Neural Network & Molecular & \\ Pattern Recognition & Neuro- & \\ Robotic & Neurobiological & \\ Scientific Computing & Pathology & \\ SIAM & Pathogen & \\ Signal Processing & Pharma- & \\ Software & Plant & \\ World Wide Web & Protein & \\ & Proteom- & \\ & Psych- & \\ & Psychology & \\ & Virology & \\ & Virus & \\ \hline \end{tabular} \caption{Keywords used to tag journal names as various subjects.} \vspace{-6pt}\flushleft\footnotesize *Note: Both a term and its plural are considered a match, and hyphens indicate a word with several ending variations which were all considered to be associated with the tag. While the search process was case sensitive in order to avoid false positives for short words like ``ACM", case-insensitive duplicate words have been excluded from the table. 
The words ``algorithm" and ``SIAM" are considered to be both computer science and mathematics. \label{tab:tagging_keywords} \end{table} \section{Reference List Guide} \begin{table}[p] \centering \begin{tabular}{|L{0.4\textwidth}|L{0.6\textwidth}|} \end{tabular} \caption{Guide to references in the bibliography} \label{tab:bibliography_guide} \end{table} \bibliographystyle{plain} \bibliography{thesis_bibliography} % If you want to include an index, this prints the index at this location. You must have \makeindex uncommented in the preamble \printindex \end{document}
{ "alphanum_fraction": 0.760103873, "avg_line_length": 81.641025641, "ext": "tex", "hexsha": "3a4dac658ce39468cf4d9d6a552f958125db48ff", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2019-12-24T07:18:24.000Z", "max_forks_repo_forks_event_min_datetime": "2018-11-11T03:58:30.000Z", "max_forks_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "marissa-graham/network-similarity", "max_forks_repo_path": "ThesisClass/draft copy for scrolling purposes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "marissa-graham/network-similarity", "max_issues_repo_path": "ThesisClass/draft copy for scrolling purposes.tex", "max_line_length": 1239, "max_stars_count": 20, "max_stars_repo_head_hexsha": "a6e23a593e17f9c90c292d4b4742a19da81e61d0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "marissa-graham/network-similarity", "max_stars_repo_path": "ThesisClass/draft copy for scrolling purposes.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-19T10:13:17.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-17T21:57:52.000Z", "num_tokens": 34266, "size": 130544 }
\section*{Exercises}

\begin{ex}
Let $A = \begin{mymatrix}{rr} 1 & 2 \\ 2 & 1 \end{mymatrix}$. Find $A^{10}$ by diagonalization.
\begin{sol}
First we write $A = PDP^{-1}$.
\begin{equation*}
\def\arraystretch{1.3}
\begin{mymatrix}{rr} 1 & 2 \\ 2 & 1 \end{mymatrix}
=
\begin{mymatrix}{rr} -1 & 1 \\ 1 & 1 \end{mymatrix}
\begin{mymatrix}{rr} -1 & 0 \\ 0 & 3 \end{mymatrix}
\begin{mymatrix}{rr} -\frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{mymatrix}
\end{equation*}
Therefore $A^{10} = PD^{10}P^{-1}$.
\begin{equation*}
\def\arraystretch{1.3}
\begin{mymatrix}{rr} 1 & 2 \\ 2 & 1 \end{mymatrix}^{10}
~=~
\begin{mymatrix}{rr} -1 & 1 \\ 1 & 1 \end{mymatrix}
\begin{mymatrix}{rr} (-1)^{10} & 0 \\ 0 & 3^{10} \end{mymatrix}
\begin{mymatrix}{rr} -\frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{mymatrix}
\\
~=~ \frac{1}{2}\begin{mymatrix}{cc} 3^{10}+1 & 3^{10}-1 \\ 3^{10}-1 & 3^{10}+1 \end{mymatrix}.
\end{equation*}
\end{sol}
\end{ex}

\begin{ex}
Let $A = \begin{mymatrix}{rrr} 1 & 4 & 1 \\ 0 & 2 & 5 \\ 0 & 0 & 5 \end{mymatrix}$. Find $A^{50}$ by diagonalization.
% \begin{sol}
% \end{sol}
\end{ex}

\begin{ex}
Let $A = \begin{mymatrix}{rrr} 1 & -2 & -1 \\ 2 & -1 & 1 \\ -2 & 3 & 1 \end{mymatrix}$. Find $A^{100}$ by diagonalization.
% \begin{sol}
% \end{sol}
\end{ex}

\begin{ex}
Let $A = \begin{mymatrix}{rr} -5 & -6 \\ 9 & 10 \\ \end{mymatrix}$. Find a square root of $A$, i.e., find a matrix $B$ such that $B^2=A$.
\begin{sol}
$B=\begin{mymatrix}{rr} -1 & -2 \\ 3 & 4 \\ \end{mymatrix}$.
\end{sol}
\end{ex}

\begin{ex}
Let $A = \begin{mymatrix}{rrr} -2 & 0 & 6 \\ -3 & 1 & 6 \\ -3 & 0 & 7 \\ \end{mymatrix}$. Find a square root of $A$.
\begin{sol}
$B = \begin{mymatrix}{rrr} 0 & 0 & 2 \\ -1 & 1 & 2 \\ -1 & 0 & 3 \\ \end{mymatrix}$.
\end{sol}
\end{ex}
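As an added check (not part of the original exercise set), direct multiplication confirms that the matrix $B$ given in the last solution indeed satisfies $B^2 = A$:
\begin{equation*}
\begin{mymatrix}{rrr} 0 & 0 & 2 \\ -1 & 1 & 2 \\ -1 & 0 & 3 \end{mymatrix}
\begin{mymatrix}{rrr} 0 & 0 & 2 \\ -1 & 1 & 2 \\ -1 & 0 & 3 \end{mymatrix}
=
\begin{mymatrix}{rrr} -2 & 0 & 6 \\ -3 & 1 & 6 \\ -3 & 0 & 7 \end{mymatrix}.
\end{equation*}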
{ "alphanum_fraction": 0.4434470377, "avg_line_length": 20.6296296296, "ext": "tex", "hexsha": "2e68fdbf8b01a382ae7d7dc6a0201597b4dbb192", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-06-30T16:23:12.000Z", "max_forks_repo_forks_event_min_datetime": "2020-11-09T11:12:03.000Z", "max_forks_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "selinger/linear-algebra", "max_forks_repo_path": "baseText/exercises/Eigenvalues-Application-Power.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "selinger/linear-algebra", "max_issues_repo_path": "baseText/exercises/Eigenvalues-Application-Power.tex", "max_line_length": 69, "max_stars_count": 3, "max_stars_repo_head_hexsha": "37ad955fd37bdbc6a9e855c3794e92eaaa2d8c02", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "selinger/linear-algebra", "max_stars_repo_path": "baseText/exercises/Eigenvalues-Application-Power.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-30T16:23:10.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-21T06:37:13.000Z", "num_tokens": 970, "size": 2228 }
\section{education} \begin{entrylist} %------------------------------------------------ \courseentry {2012--2016} {PhD {\normalfont Communication}} {The University of Barcelona} {Integrated Digital Strategy} {Exploring the growth of the internet and the effects on employee fun during lunch breaks in high-latitude countries.} {Research Methods in Communication, Social Influence and Persuasion} %------------------------------------------------ \courseentry {2011--2012} {Bachelor {\normalfont of Commerce}} {The University of California, Berkeley} {Money Is The Root Of All Evil -- Or Is It?} {This thesis explored the idea that money has been the cause of untold anguish and suffering in the world. I found that it has, in fact, not.} {Business for Business Gurus, Money 101} %------------------------------------------------ \end{entrylist}
{ "alphanum_fraction": 0.6414201183, "avg_line_length": 40.2380952381, "ext": "tex", "hexsha": "3152bc100985073153cb17c722cebf45d818a8ba", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d9b4aa959c1c0dd4074353e6758cca56f8f83908", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ChristianHallerX/PhD_Resume_and_CoverLetter_Template", "max_forks_repo_path": "sections/education.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d9b4aa959c1c0dd4074353e6758cca56f8f83908", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ChristianHallerX/PhD_Resume_and_CoverLetter_Template", "max_issues_repo_path": "sections/education.tex", "max_line_length": 142, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d9b4aa959c1c0dd4074353e6758cca56f8f83908", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ChristianHallerX/PhD_Resume_and_CoverLetter_Template", "max_stars_repo_path": "sections/education.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:14:36.000Z", "max_stars_repo_stars_event_min_datetime": "2021-12-09T16:14:36.000Z", "num_tokens": 179, "size": 845 }
\hypertarget{modelling-reactive-behavior-notation---pulse-cast-interaction}{%
\section{Modelling Reactive Behavior Notation - Pulse-Cast-Interaction}\label{modelling-reactive-behavior-notation---pulse-cast-interaction}}

Pulse-casts are used for synchronous interaction between reactive objects.

\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{figures/pulseCastOverview.png}
\caption{Pulse Cast Overview}
\end{figure}

\hypertarget{mechanism}{%
\subsection{Mechanism}\label{mechanism}}

\begin{itemize}
\tightlist
\item
  Out-pulses may be generated on a state-transition
\item
  Pulses are transmitted at the end of a transition, i.e.~the sender is already in the new state
\item
  On transmission, out-pulses of the sender are translated into in-pulses for the receiver
\item
  The pulse-cast-connection defines the receiver and the translation of out-pulses to in-pulses
\item
  An out-pulse may be translated into many different in-pulses for different receivers - they are transmitted according to the cast-order
\item
  An in-pulse may trigger a state-transition of the receiver object
\item
  Unhandled in-pulses will be lost
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{figures/castsExample.png}
\caption{Casts Example}
\end{figure}

\hypertarget{cast-order}{%
\subsection{Cast-Order}\label{cast-order}}

\begin{itemize}
\tightlist
\item
  Defines the transmission order of pulses in a multi-cast situation
\item
  Is defined by a number at the beginning of the pulse-cast connection label
\end{itemize}

\hypertarget{three-variants}{%
\subsubsection{Three Variants:}\label{three-variants}}

P, Q, S, T, R, U (depth-first)\\
P, Q, R, S, T, U (left-first) / (right-first)\\
P, Q, S, R, U, T (irregular)

\begin{figure}[H]
\centering
\includegraphics[width=0.25\textwidth]{figures/pulse-cast-chains.png}
\caption{Pulse Cast Chains}
\end{figure}

\begin{itemize}
\tightlist
\item
  We can decide for ourselves whether we write down only the input-cast in the sequence diagram or also the output-cast (the complete translation).
\item
  \textbf{At the MEP we must draw at least 2 sequence diagrams.}
\end{itemize}

\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{figures/LED-Cluster.png}
\caption{LED Cluster}
\end{figure}

\begin{figure}[H]
\centering
\includegraphics[width=0.82\textwidth]{figures/sequenceDiagramExample.png}
\caption{Sequence Diagram Example}
\end{figure}

\textbf{Reactive-Object-Step}:
\begin{itemize}
\item Asynchronous event from outside the cluster
\item Triggered by a message or an in-pulse
\end{itemize}

\textbf{Cluster Step}:
\begin{itemize}
\item Synchronous events within the cluster
\item Triggered by a message to the cluster
\item The sequence is not interruptible $\Rightarrow$ Run to Completion
\item Every cluster-step must have an end (no infinite cluster steps)
\end{itemize}

\clearpage
{ "alphanum_fraction": 0.7660519126, "avg_line_length": 27.8857142857, "ext": "tex", "hexsha": "a100161d769ac6981463194ba51018d426210b44", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-15T07:10:24.000Z", "max_forks_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_forks_repo_licenses": [ "Beerware" ], "max_forks_repo_name": "nortismo/mse-documentations", "max_forks_repo_path": "TSM_EmbReal/08_ModellingReactiveBehavior.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Beerware" ], "max_issues_repo_name": "nortismo/mse-documentations", "max_issues_repo_path": "TSM_EmbReal/08_ModellingReactiveBehavior.tex", "max_line_length": 93, "max_stars_count": null, "max_stars_repo_head_hexsha": "cc67637785237d630f077a863edcd5f49aa52b59", "max_stars_repo_licenses": [ "Beerware" ], "max_stars_repo_name": "nortismo/mse-documentations", "max_stars_repo_path": "TSM_EmbReal/08_ModellingReactiveBehavior.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 848, "size": 2928 }
\section{Optimization of Surrogates on Sparse Grids} \label{sec:52method} \minitoc{72mm}{5} \noindent The methods presented in the last section can be combined to a ``meta-method'' for surrogate optimization. The surrogates are constructed as interpolants on spatially adaptive sparse grids, which we explain in the following. \subsection{Novak--Ritter Adaptivity Criterion} \label{sec:521novakRitter} The classic surplus-based refinement strategy for spatially adaptive sparse grids is not tailored to optimization, as this refinement strategy aims to minimize the overall $\Ltwo$ error. However, in optimization, it is reasonable to generate more points in regions where we suspect the global minimum to be to increase the interpolant's accuracy in these regions. Hence, we employ an adaptivity criterion proposed by Novak and Ritter \cite{Novak96Global} for hyperbolic cross points. The Novak--Ritter criterion has also been applied to sparse grids \multicite{Ferenczi05Globale,Valentin14Hierarchische,Valentin16Hierarchical}. \paragraph{$m$-th order children} As usual, the criterion works iteratively: Starting with an initial regular sparse grid of a very coarse level, the criterion selects a specific point $\gp{\*l,\*i}$ in each iteration and inserts all its children into the grid. This process is repeated until a given number $\ngpMax$ of grid points is reached, since we evaluate $\objfun$ at every grid point once, and we assume that function evaluations dominate the overall complexity. The difference to common refinement criteria is that a point may be selected multiple times, in which case \term{higher-order children} are inserted. The $m$-th order children $\gp{\*l',\*i'}$ of a grid point $\gp{\*l,\*i}$ satisfy \begin{equation} \label{eq:indirectChild} \*l'_{-t} = \*l^{}_{-t},\;\, \*i'_{-t} = \*i^{}_{-t},\;\, l'_t = l^{}_t + m,\;\, i'_t \in \begin{cases} \{1\},&(l_t = 0) \land (i_t = 0),\\ \{2^m - 1\},&(l_t = 0) \land (i_t = 1),\\ \{2^m i_t - 1,\, 2^m i_t + 1\},&\hphantom{(}l_t > 0, \end{cases} \end{equation} where $m \in \nat$ and $t \in \{1, \dotsc, d\}$ (cf.\ \cref{eq:directAncestor} for $m = 1$). The order is chosen individually for each child point to be inserted as the lowest number $m$ such that $\gp{\*l',\*i'}$ does not yet exist in the grid. %\todo{add figure for m-th order children?} \paragraph{Criterion} The Novak--Ritter refinement criterion \cite{Novak96Global} refines the grid point $\gp{\*l,\*i}$ that minimizes the product% \footnote{% Compared to \cite{Novak96Global}, we added one in the base of each factor to avoid ambiguities for $0^0$. In addition, we swapped $\gamma$ with $1-\gamma$ to make $\gamma$ more consistent with its name as adaptivity.% } \begin{equation} (r_{\*l,\*i} + 1)^\gamma \cdot (\normone{\*l} + d_{\*l,\*i} + 1)^{1 - \gamma}. \end{equation} Here, $r_{\*l,\*i} \ceq \setsize{ \{(\*l',\*i') \in \liset \mid \objfun(\gp{\*l',\*i'}) \le \objfun(\gp{\*l,\*i})\} } \in \{1, \dotsc, \setsize{\liset}\}$ is the \term{rank} of $\gp{\*l,\*i}$ (where $\liset$ is the current set of level-index pairs of the grid), i.e., the place of the function value at $\gp{\*l,\*i}$ in the ascending order of the function values at all points of the current grid. We denote the \term{degree} $d_{\*l,\*i} \in \natz$ of $\gp{\*l,\*i}$ as the number of previous refinements of this point. Finally, $\gamma \in \clint{0, 1}$ is the \term{adaptivity parameter.} %By choosing $\gamma = 0$, the function values become irrelevant and the %criterion produces regular-like sparse grids. 
%If we choose $\gamma = 1$, then the criterion always refines the point with %the lowest function value, which means that the criterion is easily stuck %in local minima. We have to choose a suitable compromise between exploration ($\gamma = 0$) and exploitation ($\gamma = 1$). The best choice of course depends on the objective function $\objfun$ at hand, but for our purposes, we choose a priori a value of $\gamma = 0.15$. However, it may be an option to adapt the value of $\gamma$ automatically during the grid generation phase. \subsection{Global Optimization of Sparse Grid Surrogates} \label{sec:522method} \paragraph{Global, local, and globalized optimization methods} In \cref{sec:51algorithms}, we presented various optimization methods for the unconstrained case, divided into global gradient-free methods such as differential evolution and local gradient-based methods, for example, gradient descent. A subset of these methods has been implemented in \sgpp{} \cite{Pflueger10Spatially}, see \cref{tbl:optimizationMethod}. The gradient-based methods need an initial point, and they may get stuck in local minima. Hence, we additionally implemented globalized versions of the gradient-based methods via a multi-start Monte Carlo approach with $m \ceq \min(10d, 100)$ uniformly distributed pseudo-random initial points.% \footnote{% We split the number of permitted function evaluations evenly among the $m$ parallel calls.% %of the gradient-based method% } This means there are three types of methods: \begin{enumerate}[label=T\arabic*.,ref=T\arabic*,leftmargin=2.7em] \item \label{item:globalMethods} Global gradient-free methods listed as implemented in \cref{tbl:optimizationMethod} \item \label{item:localMethods} Local gradient-based methods listed as implemented in \cref{tbl:optimizationMethod}% %(need an initial point)% \footnote{% Excluding Levenberg--Marquardt, which is only applicable to least-squares problems.% } \item \label{item:globalizedMethods} Globalized versions of the methods of type \ref{item:localMethods} %(do not need an initial point) \end{enumerate} \paragraph{Unconstrained optimization of sparse grid surrogates} Given the objective function $\objfun\colon \clint{\*0, \*1} \to \real$, the maximal number $\ngpMax \in \nat$ of evaluations of $f$, and the adaptivity parameter $\gamma \in \clint{0, 1}$, we determine an approximation $\xoptappr \in \clint{\*0, \*1}$ of the global minimum $\xopt$ of $\objfun$ as follows: \begin{enumerate} \item Generate a spatially adaptive sparse grid $\sgset$ with the Novak--Ritter refinement criterion for $\objfun$, $\ngpMax$, and $\gamma$. \item Determine the sparse grid interpolant $\sgintp$ of $\objfun$ by solving the linear system \eqref{eq:hierarchizationProblem}. \item Optimize the interpolant: First, find the best grid point $\*x^{(0)} \ceq \vecargmin_{\gp{\*l,\*i} \in \sgset} \objfun(\*x_{\*l,\*i})$. Second, apply the local methods of type \ref{item:localMethods} to the interpolant $\sgintp$ with $\*x^{(0)}$ as initial point. Let $\*x^{(1)}$ be the resulting point with minimal objective function value. Third, we apply the global and globalized methods of types \ref{item:globalMethods} and \ref{item:globalizedMethods} to the interpolant $\sgintp$. Again, let $\*x^{(2)}$ be the point with minimal $\objfun$ value. Finally, determine the point of $\{\*x^{(0)}, \*x^{(1)}, \*x^{(2)}\}$ with minimal $\objfun$ value and return it as $\xoptappr$. 
\end{enumerate} \noindent Note that the third step requires a fixed number of additional evaluations of the objective function, which can be neglected compared to $\ngpMax$. By default, we use the cubic modified hierarchical not-a-knot B-spline basis $\bspl[\nak,\modified]{\*l,\*i}{p}$ ($p = 3$) for the construction of the sparse grid surrogate. However, we could apply any of the hierarchical (B-)spline bases presented in \cref{chap:30BSplines,chap:40algorithms}. \paragraph{Comparison methods} We use two comparison methods. First, we apply the gradient-free methods (type \ref{item:globalMethods}) to the sparse grid interpolant using modified piecewise linear hierarchical basis functions (i.e., $p = 1$) on the same sparse grid as the cubic B-splines. We cannot employ gradient-based optimization as the objective function should be continuously differentiable and discontinuous derivatives are usually numerically problematic for gradient-based optimization methods (see, e.g., \cite{Huebner14Mehrdimensionale}). Second, we apply the gradient-free methods (type \ref{item:globalMethods}) directly to the objective function. We cannot use the gradient-based methods here as the gradient of the objective function is assumed to be unknown. For both of the comparison methods, we make sure that the objective function is evaluated at most $\ngpMax$ times by splitting the $\ngpMax$ evaluations evenly among all employed optimization methods. \paragraph{Constrained optimization} For optimization problems with constraints, we proceed exactly as for unconstrained optimization, except that for optimizing the interpolant, we use the constrained optimization algorithms implemented in \sgpp as listed in \cref{tbl:optimizationMethod}. We only replace the objective function $\objfun$ with a sparse grid surrogate $\sgintp$, and we assume that the constraint function $\ineqconfun$ can be evaluated fast. However, it would also be possible to replace $\ineqconfun$ with a sparse grid surrogate. In this case, it cannot be guaranteed that the resulting optimal point $\xoptappr$ is feasible, i.e., we could have $\lnot(\ineqconfun(\xoptappr) \le \*0)$.
{ "alphanum_fraction": 0.7384946237, "avg_line_length": 40.6113537118, "ext": "tex", "hexsha": "da5127add45f8c5182046a405b9ece52bf43ccd1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "valentjn/thesis", "max_forks_repo_path": "tex/document/52method.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "valentjn/thesis", "max_issues_repo_path": "tex/document/52method.tex", "max_line_length": 79, "max_stars_count": 4, "max_stars_repo_head_hexsha": "65a0eb7d5f7488aac93882959e81ac6b115a9ea8", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "valentjn/thesis", "max_stars_repo_path": "tex/document/52method.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-15T20:16:10.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-15T19:50:36.000Z", "num_tokens": 2639, "size": 9300 }
\documentclass[a4paper,11pt]{article} \usepackage{physics} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm, mathtools} %\usepackage{hyperref} \usepackage{color} \usepackage{jheppub} \usepackage[T1]{fontenc} % if needed % My Commands \newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\bes}{\begin{equation*}} \newcommand{\ees}{\end{equation*}} \newcommand{\bea}{\begin{flalign*}} \newcommand{\eet}{\end{flalign*}} %\linespread{1.0} %\setlength{\parindent}{0em} %\setlength{\parskip}{0.8em} \title{\textbf{Notes on Cosmology}} \author{Aditya Vijaykumar} \affiliation{International Centre for Theoretical Sciences, Bengaluru, India.} \emailAdd{[email protected]} \abstract{One hopes one will finally be accustomed to cosmology with this exploration, largely following the book by Prof. Mukhanov and lecture notes by Prof. Baumann. One might also, if one feels like, try to survey modern day approaches to inflation inspired by string theory. } \begin{document} \maketitle \section{Kinematics, Dynamics and Geometry} For most of the 20th century, cosmologists agreed that the universe was homogenous and isotropic ie. there is no single \textit{preferred} point or direction in the universe. This is called the \textit{Cosmological Principle} in popular terms. On the face of it, this is a very powerful assumption - it allows us to make predictions about the universe sitting at where we are on the earth. It remained an intelligent guess until the age of data took over cosmology. Data now shows that the universe is indeed homogenous and isotropic, but only on large enough scales (greater than 100 Mpc). Hence, if we coarse grain the universe on a scale smaller than 100 Mpc, we will start seeing inhomogeneities like galaxies, galaxy clusters etc. This was not entirely unexpected. So we decided to live with it. But then there is a twist. Theory suggests that the universe continues to be homogenous and isotropic for scales bigger than that of the observable universe (3000 Mpc and beyond), but inhomogeneities start to creep in at distance scales much larger than 3000 Mpc. Answering questions on these scales in indeed very tough, not only because one can't seem to pose the question in a mathematically precise fashion, but also because one can't imagine verifying these predictions empirically. Nevertheless, as travellers in the conquest for truth, it is imperative for us to learn what our forefathers thought of the cosmos, and hopefully avoid making the same mistakes again. \subsection{Hubble Law} Hubble postulated that, in a homogenous, isotropic expanding universe, the relative velocities of observers obey the Hubble Law.$$\vec{v}_{B,A}=H(t)\vec{r}_{AB}$$ Is the Hubble law in agreement with the homogenous and isotropic assumption? Let's check, $$\vec{v}_{B,A}=H(t)\vec{r}_{AB} \text{ ; } \vec{v}_{C,B}=H(t)\vec{r}_{BC}$$ $$\text{Hence, } \vec{v}_{C,A} = H(t)(\vec{r}_{AB} + \vec{r}_{BC}) = H(t)\vec{r}_{AC}$$ which is what we should have expected. In fact, one can show explicitly that Hubble Expansion Law is \textit{the only} law that is comaptible with homogeneity and isotropic expansion. One could go ahead and write the Law as a differential equation as follows $$\dot{\vec{r}}_{AB}=H(t)\vec{r}_{AB}$$ Integrating this, $$ r_{AB} = r_0 \exp\int H(t)dt = a(t) r_0 $$ where $\exp\int H(t)dt = a(t)$ is called the \textit{scale factor}. 
$r_0$ is the separation between A and B at some given instant of time (taken to be $t=0$ without loss of generality), and $r_{AB}$ denotes the distance between them after time $t$ has elapsed. It is also straightforward to write the Hubble constant in terms of the scale factor and its time derivative,
$$H =\frac{\dot{a}}{a}$$
Obviously, the Hubble Law would only hold on cosmological length scales. The relative motion of the sun and the earth, for example, is not governed by this law, but by the inhomogeneities in the gravitational field.

\subsection{Dynamics of Newtonian Dust}

Consider a universe filled with \textit{dust} particles. What dust really means is matter which exerts negligible pressure compared to its energy density $\epsilon$. Let's also assume that gravity is a weak force and that the particles are not very far away from each other, so as to avoid exceeding the speed of light.

Now consider a sphere expanding about the origin, with its radius given by $R(t) = a(t) \chi_{c}$. As the total mass is $M$, the energy density $\epsilon$ can be written as
$$\epsilon(t)= \frac{3M}{4 \pi R(t)^3} = \frac{3M}{4 \pi R_0^3} \left(\frac{a_0}{a(t)}\right)^3 = \epsilon_0 \left(\frac{a_0}{a(t)}\right)^3 $$
Taking the time derivative,
$$\dot{\epsilon}(t) = \epsilon_0 \left(\frac{a_0}{a(t)}\right)^3 \left(\frac{-3\dot{a}}{a}\right) = -3H \epsilon(t)$$
We can also write down Newton's second law for the sphere as follows,
$$\ddot{R} = -\frac{GM}{R^2} = -G \frac{4 \pi }{3} \epsilon(t) R $$
Dividing by $R_0$ we get,
$$\therefore \ \ddot{a} = - \frac{4 \pi G}{3} \epsilon a$$
We have essentially got the evolution equations for the energy density and the scale factor in a Newtonian dust setting. Substituting for $\epsilon$ in the equation for $a(t)$,
$$\ddot{a} = - \frac{4 \pi G \epsilon_0}{3} \left(\frac{a_0^3}{a^2}\right) $$
Multiplying by $\dot{a}$ and integrating we get,
$$\frac{\dot{a}^2}{2} - \frac{4 \pi G \epsilon_0}{3} \left(\frac{a_0^3}{a}\right) = \text{constant} = E $$
The form of this energy-like equation for the scale factor is similar to the energy equation for a rocket launched into orbit. If one provides the rocket too little kinetic energy, it will fall back to earth. Similarly, the fate of this Newtonian dust universe, i.e.\ whether it keeps expanding or collapses onto itself, depends on the sign of the energy $E$. Writing the equation in terms of the Hubble constant,
$$H^2 - \frac{2E}{a^2} = \frac{8 \pi G}{3}\epsilon$$
Setting $E=0$ in the above equation, one can derive the critical density,
$$\epsilon^{cr} = \frac{3H^2}{8\pi G}$$
Substituting this back in the other equation,
$$\epsilon^{cr}(1 - \Omega(t))= \frac{3E}{4 \pi G} \frac{1}{a^2}$$
where $\Omega(t) = \frac{\epsilon(t)}{\epsilon^{cr}(t)}$ is called the \textit{cosmological parameter}. $\Omega(t)$ varies with time, but because the sign of $E$ is fixed, the quantity $(1-\Omega(t))$ does not change sign. Hence, we just need to measure the current value of the cosmological parameter to determine the sign of $E$ in the universe.

As is conveyed in popular talks, the sign of $E$ is linked to the spatial geometry of the Universe in general relativity. This spatial curvature has the sign opposite to that of $E$. Let's imagine we probe the value of $\Omega_0$. The possible scenarios are:
\begin{itemize}
\item $\epsilon_0 > \epsilon^{cr}$, i.e.\ $\Omega > 1$ and $E < 0$. The spatial curvature will be positive (closed universe). The scale factor reaches some maximum value and the universe recollapses.
\item $\epsilon_0 < \epsilon^{cr}$, i.e.\ $\Omega < 1$ and $E > 0$. The spatial curvature will be negative (open universe). The universe expands hyperbolically.
\item $\epsilon_0 = \epsilon^{cr}$, i.e.\ $\Omega = 1$ and $E = 0$. The spatial curvature will be zero (flat universe).
\end{itemize}
In all three cases, extrapolating the universe backwards, we face an initial singularity. Note that we are still working with a dust-dominated universe (and also Newtonian theory, duh); the analyses we have done are only toy-model-like. Nonetheless, we hope we have gained some physical intuition from the same.

Let's just play with our toy-model one last time. In a flat universe,
$$\frac{\dot{a}^2a}{2} = \frac{2}{9} \left(\derivative{a^{3/2}}{t} \right)^2 = \frac{4 \pi G \epsilon_0 a_0^3}{3} = \text{constant} $$
Hence, $a \propto t^{2/3}$, $H = \frac{2}{3t}$. We can then write the age of universe expression as,
$$t_0 = \frac{2}{3H_0} \text{ and also } \epsilon(t) = \frac{1}{6 \pi G t^2}$$

\subsection{Einstein it, duh}

Newtonian theory is \textit{obviously} wrong. So it's better to quickly stop being stupid and start worshipping Einstein.

To start off, let's waste some time thinking about what the geometry of a homogenous, isotropic space would look like. We can think of the universe as an evolving homogenous and isotropic hypersurface. Let's first consider surfaces of this kind in 2 dimensions. The well-known surfaces are the plane and the 2-sphere. We write the embedding of a 2-sphere in 3D Euclidean space as follows,
$$x^2 + y^2 + z^2 = a^2$$
$a$ being the radius of the sphere. Taking the derivative,
$$2x dx + 2y dy +2z dz = 0$$
$$\therefore dz = -\frac{xdx + ydy}{z}$$
Substituting this in the distance metric,
$$dl^2 = dx^2 + dy^2 + \frac{(xdx + ydy)^2}{a^2 - x^2 - y^2}$$
Hence, we can represent the distance between any two points on the hypersurface by just two independent coordinates $x$ and $y$. But let's, for the sake of easing up our calculations, substitute $x = r' \cos\phi$ and $y = r' \sin \phi$. Differentiating $x^2 + y^2 = r'^2$,
$$xdx + ydy = r' dr'$$
$$\therefore dl^2 = dr'^2 + r'^2d\phi^2 + \frac{r'^2 dr'^2}{a^2 - r'^2}$$
$$\therefore dl^2 = \frac{dr'^2 }{1 - \frac{r'^2}{a^2}} + r'^2d\phi^2 $$
We note that in the $a^2 \rightarrow \infty$ limit, this is just the flat space metric. One can also in principle have $a^2 < 0$; we would not be able to visualize it as an embedding in 3D, but that's not the fault of the subject of mathematics. This space with $a^2 < 0$ has negative curvature and is called the \textit{Lobachevski space}. At this point, to simplify things further, let's rescale our coordinates as $r = r'/\sqrt{\abs{a^2}}$. Then,
$$dl^2 = \abs{a}^2\left\{\frac{dr^2}{1 - k{r^2}} + r^2d\phi^2 \right\}$$
where $k=1$ for the sphere, $k=-1$ for the pseudo-sphere, and $k=0$ for the plane. The scale doesn't have any physical meaning as such, so we redefine our coordinates to absorb it. The generalization of the above analysis to three dimensions is as follows,
$$dl^2 = a^2\left\{\frac{dr^2}{1 - k{r^2}} + r^2(d\theta^2 + \sin^2\theta \, d\phi^2) \right\}$$
We can also work with a coordinate $\chi$ defined as,
$$d\chi^2 = \frac{dr^2}{1-kr^2}$$
One can explicitly verify by integrating that $\chi = \text{arcsinh } r$ for $k = -1$, $\chi = r$ for $k=0$ and $\chi = \arcsin r$ for $k=1$.
One can now rewrite the metric as follows,
$$dl^2 = a^2(d\chi^2 + \sinh[2](\chi)d \Omega^2) \text{ ; } k = -1$$
$$dl^2 = a^2(d\chi^2 + \chi^2 d \Omega^2) \text{ ; } k = 0$$
$$dl^2 = a^2(d\chi^2 + \sin[2](\chi)d \Omega^2) \text{ ; } k = +1$$

\textbf{Consider the 2-sphere} in the positive curvature case at a given $\chi$. The surface area will be given by $S_{2d}(\chi) = 4 \pi a^2 \sin^2 \chi$. We note that as the radius $\chi$ increases, the surface area first increases and reaches a maximum, and thereafter starts decreasing, vanishing at $\chi=\pi$. The differential volume element $dV$ is given by,
$$dV = S_{2d} \ dl_r = 4 \pi a^3 \sin[2](\chi) d \chi$$
Therefore the volume of a sphere with radius $\chi_0$ is,
$$V(\chi_0) = 2 \pi a^3 (\chi_0 - \frac{1}{2} \sin 2\chi_0)$$
At low $\chi_0$, $V = 4 \pi (a \chi_0)^3/3$, and $\chi_0=\pi$ gives $V = 2 \pi^2 a^3$.

\textbf{Consider the 2-pseudosphere} in the negative curvature case at a given $\chi$,
$$dl^2 = a^2 \sinh[2](\chi)d \Omega^2 \text{ ; } S_{2d} =4 \pi a^2 \sinh^2 \chi $$
This increases exponentially as $\chi$ increases.

In the most suitable coordinate system, where the symmetries of our current construction are manifest \textcolor{red}{(Why?)}, we can write the metric of spacetime as
$$ds^2 = dt^2 - a^2(t) \left(\frac{dr^2}{1 - k{r^2}} + r^2(d\theta^2 + \sin^2\theta \, d\phi^2)\right) = g_{\alpha \beta} dx^\alpha dx^\beta$$
On large scales, matter can be approximated to be a perfect fluid \textcolor{red}{(Why?)},
$$T^\alpha_\beta = (\epsilon+p)u^\alpha u_\beta - p \delta^\alpha_\beta$$
where $\epsilon$ is the energy density and $p= p (\epsilon)$ depends on the properties of the matter. Another interesting case that one might encounter in cosmology is that of a classical scalar field. The energy-momentum tensor for a scalar field is given as,
$$T^\alpha_\beta = \phi^{,\alpha} \phi_{,\beta} - \left( \frac{1}{2}\phi^{,\gamma} \phi_{,\gamma} - V(\phi) \right)\delta^\alpha_\beta$$
\textbf{\textcolor{red}{To be explored further}}.

\subsubsection{Friedmann Equations}

How are the Newtonian evolution equations modified when one accounts for GR? In principle, one has to plug the metric and the energy-momentum tensor into Einstein's equations and derive the expressions. Expressions thus derived are called \textit{Friedmann equations}. \textcolor{red}{(do the derivation on mathematica)}. Mukhanov wishes to enlighten us in a different way.

The first law of thermodynamics tells us $dE = -pdV$. Since $V \propto a^3$, and $E = \epsilon V$, one can write,
$$d \epsilon = -3 (p+\epsilon) d\ln a \text{, i.e. } \dot{\epsilon} = -3H(\epsilon + p)$$
The other Newtonian evolution equations get modified as follows,
$$\ddot{a} = -\frac{4 \pi G}{3}(\epsilon + 3p)a \text{ and } H^2 + \frac{k}{a^2} = \frac{8 \pi G}{3}\epsilon$$
The $k$ is indeed the curvature factor that was introduced a while ago.

In Newtonian cosmology, the Universe is always flat and the scale factor has no real interpretation in terms of geometry. It is also only able to describe pressureless matter expanding with velocities much smaller than the speed of light. GR, in contrast, provides a self-consistent theory of the cosmos, and it can describe relativistic matter with any equation of state. The matter content indeed determines the geometry of the universe, and the scale factor has a geometrical interpretation as the radius of curvature.
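As a quick consistency check (an addition to these notes, not part of Mukhanov's argument), the acceleration equation follows from the other two: differentiating $\dot{a}^2 + k = \frac{8 \pi G}{3}\epsilon a^2$ with respect to time and using $\dot{\epsilon} = -3H(\epsilon + p)$ gives
$$2 \dot{a}\ddot{a} = \frac{8 \pi G}{3}\left(\dot{\epsilon}a^2 + 2 \epsilon a \dot{a}\right) = \frac{8 \pi G}{3} a \dot{a}\left(-3(\epsilon + p) + 2\epsilon\right) = -\frac{8 \pi G}{3}(\epsilon + 3p) a \dot{a}$$
so that $\ddot{a} = -\frac{4 \pi G}{3}(\epsilon + 3p)a$; only two of the three equations are therefore independent.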
\subsection{Conformal Time}

Consider defining the conformal time $\eta$ as follows,
$$\eta = \int \frac{dt}{a(t)} \text{ such that } a(t)d\eta = dt$$
The second of the above equations can be rewritten as,
$$ a'^2 + k a^2 = \frac{8\pi G}{3}\epsilon a^4$$
and the first can be written as,
$$a a'' -a'^2 = -\frac{4 \pi G}{3}(\epsilon + 3p)a^4$$
Substituting, one gets,
$$a a'' + \frac{4 \pi G}{3}(\epsilon + 3p)a^4 + k a^2 = \frac{8\pi G}{3}\epsilon a^4$$
$$a'' + k a = \frac{4 \pi G}{3}(\epsilon - 3p)a^3 $$
The last equation is just the trace of the Einstein equations \textbf{\textcolor{red}{Verify!}}. In the case of radiation, $\epsilon = 3p$, and hence $a'' + ka = 0$, which gives us the solutions,
$$a = \underbrace{a_m \sinh(\eta)}_{k=-1} \text{ , }\underbrace{a_m \eta}_{k=0} \text{ , } \underbrace{a_m \sin(\eta)}_{k=+1} $$
where the corresponding physical time $t$ is,
$$t = \underbrace{a_m(\cosh \eta - 1)}_{k=-1} \text{ , }\underbrace{a_m \frac{\eta^2}{2}}_{k=0} \text{ , } \underbrace{a_m(1 - \cos \eta)}_{k=+1} $$
If we consider the flat radiation dominated universe specifically, we see that $\eta \propto \sqrt{t}$ and $a \propto \sqrt{t}$. Hence $H = \frac{1}{2t}$. Substituting this into the Friedmann equations, we see that $\epsilon \propto 1/t^2$, \textit{i.e.} $\epsilon \propto 1/a^4$.

\textbf{\textcolor{red}{Do the conformal time exercises}}

\subsection{Milne Universe}

Consider an open universe in the limit of vanishing energy density. The Friedmann equation boils down to
$$ H^2 = \frac{1}{a^2} \implies \dot{a}^2 = 1 \implies a=t$$
The metric now becomes,
\bes ds^2 = dt^2 -t^2 (d \chi^2 + \sinh^2 \chi d\Omega^2) \ees
One can easily check that this is just the Minkowski spacetime written in expanding coordinates by making the substitutions $ \tau = t \cosh\chi $ and $ r = t \sinh\chi $,
\begin{flalign*}
ds^2 &= d\tau^2 - (dr^2 + r^2 d\Omega^2)\\
&= dt^2 -t^2 (d \chi^2 + \sinh^2 \chi d\Omega^2)
\end{flalign*}
The velocity $ v $ is just given by,
\be v = \dfrac{r}{\tau} = \tanh \chi < 1 \ee

\subsection{De Sitter Universe}

The de Sitter universe is a spacetime with constant positive homogenous and isotropic curvature. From the Friedmann equations, and using the equation of state $p = -\epsilon$, we know that,
\begin{align*}
dE &= -pdV\\
d\epsilon &= -3 (p + \epsilon) d \ln a \impliedby V \propto a^3, E = \epsilon V\\
d\epsilon &= -3 (-\epsilon + \epsilon) d \ln a = 0
\end{align*}
This means that the energy density stays constant during expansion. Using the other Friedmann equation,
\begin{align*}
\ddot{a} &= -\frac{4 \pi G}{3}(\epsilon + 3p)a\\
\ddot{a} &= +\frac{8 \pi G}{3}\epsilon a \implies \boxed{\ddot{a} - H_\Lambda^2 a = 0}
\end{align*}
where $H_\Lambda^2 = \frac{8 \pi G}{3}\epsilon$. The most general solution to this equation is,
\begin{equation*}
a = C_1 \exp (H_\Lambda t) + C_2 \exp (- H_\Lambda t)
\end{equation*}
Substituting this ansatz into the equation $ H^2 + k/a^2 = 8\pi G \epsilon/3 = H_\Lambda^2 $, we get the constraint on $ C_1 $ and $ C_2 $ as,
\begin{equation*}
4C_1 C_2 H_\Lambda^2 = k
\end{equation*}
Now we are in a position to deal with this on a case-by-case basis in $ k $.
\begin{itemize}
\item $ k= 0 $: we take $ C_2 = 0 $, and the solution represents a flat expanding universe. \textcolor{red}{why not take $ C_1 $ zero?}
\item $ k=1 $: $ {C_1}={C_2}= \frac{1}{2 H_\Lambda} $.
\item $ k=-1 $: $ {C_1}=-{C_2}= \frac{1}{2 H_\Lambda} $.
\end{itemize}
One can write the metric as,
\begin{equation*}
ds^2 = dt^2 -\dfrac{f(t)}{H_\Lambda^2}(d \chi^2 + g(\chi) d\Omega^2), \qq{where}
\end{equation*}
\begin{equation*}
f(t) = \underbrace{\sinh^2 H_\Lambda t}_{k=-1}, \underbrace{\exp 2 H_\Lambda t}_{k=0}, \underbrace{\cosh^2 H_\Lambda t}_{k=1} \qq{and} g(\chi) = \underbrace{\sinh^2 \chi }_{k=-1}, \underbrace{\chi^2 }_{k=0}, \underbrace{\sin^2 \chi}_{k=1}
\end{equation*}

\end{document}
{ "alphanum_fraction": 0.7031160499, "avg_line_length": 77.2477876106, "ext": "tex", "hexsha": "6e9910501465daa637700700cde397ecd9f76e70", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c0aebb67332ccf0b116a3348923ab2631b586dac", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adivijaykumar/courses", "max_forks_repo_path": "selfstudy/cosmology/notes/cosmology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c0aebb67332ccf0b116a3348923ab2631b586dac", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adivijaykumar/courses", "max_issues_repo_path": "selfstudy/cosmology/notes/cosmology.tex", "max_line_length": 538, "max_stars_count": null, "max_stars_repo_head_hexsha": "c0aebb67332ccf0b116a3348923ab2631b586dac", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adivijaykumar/courses", "max_stars_repo_path": "selfstudy/cosmology/notes/cosmology.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5587, "size": 17458 }
\section{Methodology}
\subsection{Strategy}
It's easy to notice that the nested $i$ and $j$ for-loops in \textbf{Algorithm \ref*{alg:fw1}} can be parallelized, but one of the \emph{intermediate edges} for a process could be the \emph{edge under analysis} for another process. For example, process $P^1$ could be writing in a cell while process $P^2$ is reading from the same cell, leading to an unpredictable final result depending on whether $P^1$'s write comes before or after $P^2$'s read.

It seems that enabling parallelization requires some sort of blocking mechanism between processes, like a semaphore, but we prove that no data race such as the one in the previous example can occur, as long as the value of $k$ is shared between processes.

Let's assume we have 2 processes, namely $P^1$ and $P^2$, and that they have \emph{under analysis} $M_{i,j}$ and $M_{x,y}$ respectively, with $x \neq i$, since each process is assigned a different horizontal partition of the matrix. At any point the following system of preconditions exists (the two inequalities are taken from \textbf{Algorithm \ref*{alg:fw1}} at line 10):
\begin{flalign}\label{eq:sys1}
&& \left\{\begin{matrix} P^{1}_{i,j} & > & P^{1}_{i,k} & + & P^{1}_{k,j} \\ \\ P^{2}_{x,y} & > & P^{2}_{x,k} & + & P^{2}_{k,y} \end{matrix}\right. &&
\end{flalign}
\emph{i.e.}\ $P^1$ is reading the values of $M_{i,k}$ and $M_{k,j}$ so it can decide if $M_{i,j}$ should be overwritten, and $P^2$ does the same with $M_{x,k}$, $M_{k,y}$ and $M_{x,y}$ respectively. \\
In order to have a data race, the following statement must be true as well
\[(x = i \wedge k=j) \vee (k = i \wedge y = j)\]
\emph{i.e.}\ $P^2$ is using as one of its \emph{intermediate edges} the \emph{edge under analysis} of $P^1$. Without loss of generality we do not analyze the opposite case, because the proof develops symmetrically.\\
Because $x \neq i$, only the following must be verified
\begin{flalign}\label{eq:cond1}
&& k = i \wedge y = j &&
\end{flalign}
and by applying (\ref*{eq:cond1}) to (\ref*{eq:sys1}) we have
\begin{flalign}\label{eq:sys2}
&& P^{1}_{i,j} > P^{1}_{k,k} + P^{1}_{i,j} &&
\end{flalign}
but, because $M$ is a hollow matrix, \emph{i.e.}\ its diagonal elements are all equal to $0$, (\ref*{eq:sys2}) reduces to the following inequality:
\begin{flalign}\label{eq:sys3}
&& P^{1}_{i,j} > P^{1}_{i,j} &&
\end{flalign}
Clearly no number can be greater than itself, which means that in this situation the comparison made by $P^1$ is irrelevant and always evaluates to false. Whether $P^2$ writes before or after $P^1$'s evaluation does not really matter, and thus there is no data race as long as $k$ is the same for all the running processes. This is really important because it ensures that there is no need for a locking mechanism on $M$ when parallelizing the nested $i$ and $j$ for-loops, and having no locks guarantees better performance.

The strategy therefore relies on dividing the matrix by rows and assigning an equal number of rows to each process. Once $k$ is set and shared among all processes, each process puts \emph{under analysis} every cell of its sub-matrix and selects its \emph{intermediate edges} depending on $k$.

In this work we propose 3 parallel architectures implemented with 3 different parallel programming environments: \emph{Distributed computing} with MPI, \emph{Shared-memory Multithreading} with OpenMP and \emph{GPGPU} with CUDA. The document analyzes each implementation and gives an overview of timings, implementation details, and pros and cons for each case.
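Before moving to the parallel versions, the following minimal C sketch of the reference triple loop of \textbf{Algorithm \ref*{alg:fw1}} fixes the notation used in the rest of this section; the flat row-major \texttt{int} matrix and the function name \texttt{fw\_sequential} are illustrative choices made for this sketch, not taken from the actual implementation:
\begin{verbatim}
/* Sequential Floyd-Warshall on a flat row-major n x n distance matrix.
 * M[i*n + j] holds the current shortest distance from vertex i to j;
 * the diagonal is 0 (hollow matrix) and "no edge" is encoded with a
 * value large enough not to overflow when two entries are added. */
void fw_sequential(int *M, int n)
{
    for (int k = 0; k < n; k++)        /* intermediate vertex */
        for (int i = 0; i < n; i++)    /* row of the edge under analysis */
            for (int j = 0; j < n; j++)
                if (M[i * n + j] > M[i * n + k] + M[k * n + j])
                    M[i * n + j] = M[i * n + k] + M[k * n + j];
}
\end{verbatim}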
\subsection{Distributed with MPI}
Message Passing Interface (MPI) is a standardized and portable message-passing standard designed by a group of researchers from academia and industry to function on a wide variety of parallel computing architectures. The standard defines the syntax and semantics of a core of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several well-tested and efficient implementations of MPI, many of which are open-source or in the public domain. These fostered the development of a parallel software industry, and encouraged the development of portable and scalable large-scale parallel applications.
\par
The main strategy is based on scattering the whole matrix horizontally among all the processes, so that each process can read a portion of the matrix of size $\frac{n^2}{p}$; then a \emph{process of competence} is chosen: as $k$ is in common (but never transmitted) to all the processes, there is always a cell in the $k^{th}$ row representing one of the two intermediate vertices for any process, and there is always one process to which this row was assigned. The value of $k$ is always ``up to date'' among processes because each for-loop involving $k$ starts with a collective communication, which implies a synchronization point among processes.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/mpi-scatter}
\captionsetup{justification=centering}
\caption{Horizontal scattering of $M$ among the MPI processes}
\label{fig:mpi-scatter}
\end{figure}
Every time $k$ changes, the $k^{th}$ row is broadcast by the \emph{process of competence} (the process that ``owns'' the row) to the other processes; a total of $n$ \texttt{MPI\_Bcast} operations are required.
\par
Once the $k^{th}$ row has been received, each process acts like the original FW in the $i$ and $j$ for-loops; obviously each process writes values in its own local matrix.
\par
At the end of the $k$ for-loop, all the local matrices are gathered to the root process. \textbf{Algorithm \ref*{alg:mpi}} shows a high-level pseudo-code of the strategy exposed above. For the concrete implementation see \textbf{section \ref{mpiimpl}}.\par
\begin{algorithm}[h!]
\SetAlgoLined
broadcast($n$, $ROOT$) \\
scatter($M$, $p$) \\
\For{$k = 1 \rightarrow n$}{
  \If{$rank =$ \textnormal{findOwnerOfK$^{th}$Row}($k$, $rank$)}{
    $K \leftarrow $ getRow($M$, $k$) \\
  }
  broadcast($K$) \\
  \For{$i = 1 \rightarrow \frac{n}{p}$}{
    \For{$j = 1 \rightarrow n$}{
      \If{$M_{i,j} > M_{i,k} + K_j$}{
        $M_{i,j} \leftarrow M_{i,k} + K_j$
      }
    }
  }
}
gather($M$, $p$);
\caption{Distributed version of FW}\label{alg:mpi}
\end{algorithm}
We can list all the communications required:
\begin{itemize}
\item{1 \texttt{MPI\_Bcast} for communicating the value of $n$}
\item{1 \texttt{MPI\_Scatter} for the assignment of the local sub-matrix}
\item{$n$ \texttt{MPI\_Bcast} for communicating the $k^{th}$ row}
\item{1 \texttt{MPI\_Gather} for the collection of the local sub-matrix}
\end{itemize}
\textbf{Table \ref*{tab:comm}} approximates how many bytes are involved in the communications, assuming that the implementation uses 4 bytes to store one \texttt{int} and omitting the overhead of the communication protocol.
\begin{table}[h!]
\centering
\begin{tabular}{|l|l|l|}
\hline
\rowcolor[HTML]{F56B00}
{\color[HTML]{FFFFFF} \textbf{Count}} & {\color[HTML]{FFFFFF} \textbf{Type}} & {\color[HTML]{FFFFFF} \textbf{Size}} \\ \hline
1 & \texttt{MPI\_Bcast} & $4(p-1)$ bytes \\ \hline
1 & \texttt{MPI\_Scatter} & $4\frac{n^2(p-1)}{p}$ bytes \\ \hline
$n$ & \texttt{MPI\_Bcast} & $4n^2(p-1)$ bytes \\ \hline
1 & \texttt{MPI\_Gather} & $4\frac{n^2(p-1)}{p}$ bytes \\ \hline
\end{tabular}
\caption{Approximation of the size of each communication for \emph{MPI} FW}
\label{tab:comm}
\end{table}
\par
The total communication volume can be expressed with the following formula, which can help to calculate the network bandwidth required by this implementation:
\[W_{comm} = 4(p-1)\left(1 + n^2\left(1 + \frac{2}{p}\right)\right) \text{ bytes}\]
This formula is important because the time spent on communications is time taken away from the calculation, and a network with an inadequate bandwidth is the main source of bottlenecks. For example, having 8 processes consuming a $12500 \times 12500$ matrix implies a total of $\textasciitilde 5.46$GB transferred over the entire network; this means that each processor would theoretically spend $5468$ms (43 seconds in total) on communication over a 1Gbps network.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/mpi-time}
\captionsetup{justification=centering}
\caption{Calculation and MPI instantiation + communication time of \emph{MPI} FW consuming a $5040\times5040$ matrix over a 1Gbps network depending on the number of processes}
\label{fig:mpi-time}
\end{figure}
\textbf{Figure \ref*{fig:mpi-time}} shows the amount of time taken to initialize the MPI cluster and to communicate during the execution of the \emph{MPI} FW. In this case a dense $5040\times5040$ matrix is used as input and the MPI cluster was located in a completely-connected network with 1Gbps bandwidth. The bandwidth required grows quadratically with the number of vertices and linearly with the number of processes; so with a medium-small matrix the time used in communication can exceed 10\% of the total computational time. By increasing the size of the matrix to $12600\times12600$, however, the time used in communication drops to 4\%, with a total of 50 seconds. Thus the implementation is better suited to huge matrices than to medium-sized ones.

We can also easily estimate the time required by the algorithm with this formula:
\[T = t_c \frac{n^3}{p} + \log(p) \cdot\left(t_s(3+n) + t_w\left(1 + 2 \frac{n^2}{p} + n^2\right)\right)\]
where $t_c$ is the time of a single compare-and-update, $t_s$ the time required to start a communication and $t_w$ the time to communicate a single word (in this case an \texttt{int}); the formula assumes that the collective communications use a tree-based hierarchical pattern that takes $\log(p)$ steps rather than $p$ steps for $p$ processes. For this reason the speedup is considerable but far from ideal for a $5040\times5040$ matrix. \textbf{Figure \ref*{fig:mpi-speedup}} shows the trend of the speedup in relation to the sequential version.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/mpi-speedup}
\captionsetup{justification=centering}
\caption{Speedup of \emph{MPI} FW}
\label{fig:mpi-speedup}
\end{figure}
This comes with a consequent decrease in efficiency, as shown in \textbf{Figure \ref*{fig:mpi-efficiency}}.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/mpi-efficiency}
\captionsetup{justification=centering}
\caption{Efficiency of \emph{MPI} FW consuming a medium matrix}
\label{fig:mpi-efficiency}
\end{figure}

\subsection{Shared-memory multiprocessing with OpenMP}
OpenMP (Open Multi-Processing) is an application programming interface (API) for parallel programming intended to work on shared-memory architectures. More specifically, it is a set of compiler directives, library routines and environment variables which influence runtime behavior. OpenMP enables parallel programming in various languages, such as C, C++ and FORTRAN, and runs on most operating systems.
\par
The OpenMP API uses the fork-join model of parallel execution. Multiple threads perform tasks defined implicitly or explicitly by OpenMP directives. All OpenMP applications begin as a single thread of execution, called the initial thread. The initial thread executes sequentially until it encounters a parallel construct. At that point, this thread creates a group of itself and zero or more additional threads and becomes the master thread of the new group. Each thread executes the commands included in the parallel region, and their execution may be differentiated according to additional directives provided by the programmer. At the end of the parallel region, all threads are synchronized.
\par
The runtime environment is responsible for effectively scheduling threads. Each thread receives a unique id, which differentiates it during execution. Scheduling is performed according to memory usage, machine load and other factors, and may be adjusted by altering environment variables. In terms of memory usage, most variables in OpenMP code are visible to all threads by default. However, OpenMP provides a variety of options for data management, such as thread-private memory and private variables, as well as multiple ways of passing values between sequential and parallel regions. Additionally, recent OpenMP implementations introduced the concept of tasks as a solution for parallelizing applications that produce dynamic workloads. Thus, OpenMP is enriched with a flexible model for irregular parallelism, providing parallel while loops and recursive data structures.
\par
The main advantage of using OpenMP is the ease of developing parallelism with simple constructs that (often) do not differ too much from the original implementation. The snippet in \textbf{Algorithm \ref*{alg:omp}} shows the implementation used in this work: the matrix containing the distances between vertices is shared among all the threads, while the two nested for-loops are executed by each thread independently. The $k^{th}$ row is extracted by a single thread and shared among the other threads: this guarantees sequential access to the $k^{th}$ row that is faster than accessing the same row directly on $M$, resulting in a speedup of up to $\textasciitilde31\%$.
\begin{algorithm}[h!]
\SetAlgoLined
\texttt{\#pragma omp parallel num\_threads(t) shared(M, K) private(k)} \\
\For{$k = 1 \rightarrow n$}{
  \texttt{\#pragma omp master} \\
  $K \leftarrow k^{th}Row(M, k)$ \\
  \texttt{\#pragma omp for private(i,j) schedule(dynamic)} \\
  \For{$i = 1 \rightarrow n$}{
    \For{$j = 1 \rightarrow n$}{
      \If{$M_{i,j} > M_{i,k} + K_j$}{
        $M_{i,j} \leftarrow M_{i,k} + K_j$
      }
    }
  }
}
\caption{Multithreaded FW with \emph{OpenMP}}\label{alg:omp}
\end{algorithm}
Despite the overhead that the dynamic scheduler entails, in this case it works better than a static one, because the content of the sub-matrices varies from thread to thread, so a group of threads may do more write operations on $M$ and a static scheduler would make the others wait until they finish. We benchmarked a loss in performance from $\textasciitilde7\%$ to $\textasciitilde48\%$ when choosing a static scheduler over a dynamic one, depending on the size of the matrix. \\
Also note that the solution does not use a \texttt{collapse} directive, because when we collapse multiple loops OpenMP turns them into a single loop: there is a single index that is constructed from \texttt{i} and \texttt{j} using division and modulo operations. This overhead can have a huge impact on performance, especially if the matrix is really wide. For the concrete implementation see \textbf{section \ref{ompimpl}}.

\textbf{Figure \ref*{fig:threads}} shows how 2 threads interact inside the matrix: the red and orange zones highlight the cells where Thread 1 and Thread 2 can write respectively; the blue and turquoise cells represent the intermediate vertices that are compared with the vertex under analysis for Thread 1 and Thread 2 respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/openmp-threads}
\captionsetup{justification=centering,margin=2cm}
\caption{View of the data each thread can reach}
\label{fig:threads}
\end{figure}
We already proved that no race condition can occur if $k$ is the same among the threads; therefore there is no need to enforce atomicity of the write operation, and the absence of OpenMP directives like \texttt{atomic} or \texttt{critical} plays in favor of performance.
\par
The speedup of the solution is slightly worse than the ideal speedup (see \textbf{Figure \ref*{fig:omp-speedup}}).
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/openmp-speedup}
\captionsetup{justification=centering,margin=2cm}
\caption{Speedup of \emph{OpenMP} FW on an octacore CPU}
\label{fig:omp-speedup}
\end{figure}
When scaling from 7 to 8 threads, we notice a slight deviation from the previous (almost) linear trend. That's because the measurement was taken on an 8-core/8-thread CPU, namely an Intel Core i7-9700K, and because no other cores were free to manage the OS and its subprocesses, the scheduler divided this task among all the threads. So we interpolated the speedup, not counting the fluctuations due to the management of the OS.
\par
The efficiency, which always stays above $90\%$, is shown in \textbf{Figure \ref*{fig:omp-efficiency}} alongside its theoretical counterpart.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/openmp-efficiency}
\captionsetup{justification=centering,margin=2cm}
\caption{Efficiency of \emph{OpenMP} FW on an octacore CPU}
\label{fig:omp-efficiency}
\end{figure}
An overview of the timings collected can be found in \textbf{Table \ref*{tab:omp-time}}.
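As a concrete illustration of \textbf{Algorithm \ref*{alg:omp}}, a minimal C sketch is given below; the explicit \texttt{\#pragma omp barrier} after the master copy and the function name \texttt{fw\_omp} are choices made for this sketch and may differ from the concrete implementation of \textbf{section \ref{ompimpl}}:
\begin{verbatim}
#include <string.h>

/* Minimal OpenMP Floyd-Warshall: M is a flat row-major n x n int matrix,
 * K is a scratch buffer of n ints that holds a copy of the k-th row. */
void fw_omp(int *M, int *K, int n, int t)
{
    int k;
    #pragma omp parallel num_threads(t) shared(M, K) private(k)
    for (k = 0; k < n; k++) {
        #pragma omp master
        memcpy(K, &M[k * n], n * sizeof(int)); /* one thread copies row k */
        #pragma omp barrier  /* K must be filled before any thread reads it */
        #pragma omp for schedule(dynamic)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (M[i * n + j] > M[i * n + k] + K[j])
                    M[i * n + j] = M[i * n + k] + K[j];
        /* the implicit barrier at the end of the omp for keeps k in sync */
    }
}
\end{verbatim}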
\subsection{GPGPU with CUDA}
In recent years, many scientific and numeric GPGPU applications have found success thanks to the streaming, data-parallel organization of graphics hardware. The GPU serves, to an extent, as a coprocessor to the CPU, programmed through the CUDA API. A single program, known as a kernel, is compiled to run on the GPU device in order to exploit the massive data parallelism inherent in the Single Instruction, Multiple Data (SIMD) architecture. Groups of threads then execute the kernel on the GPU. Threads are organized into blocks, which allow efficient sharing of data through a high-speed shared-memory region directly accessible to the programmer through CUDA. Shared memory is shared among the threads of a block, providing higher bandwidth and overall performance gains. Algorithms must therefore manage this fast shared-memory cache effectively in order to fully utilize the data-parallel capabilities of graphics hardware and to alleviate the memory latency that data-intensive algorithms suffer from on the GPU.
Unlike the other two implementations, \emph{CUDA} FW can rely on its \emph{grid-of-blocks} system and thus the kernel needs no loops: $k$ is still shared among threads so that no data race occurs, but each thread, instead of covering one or more rows, is assigned to a single cell.
\begin{figure}[h!]
\centering
\includegraphics[width=2.5in]{diagrams/cuda-threads}
\captionsetup{justification=centering}
\caption{Mapping of CUDA threads inside $M$}
\label{fig:cuda-threads}
\end{figure}
Depending on the number of threads per block, each row of matrix $M$ is covered by one or more blocks, which always have a height of one thread. This mapping allows threads of the same block to share data that can be reused. \\
In this case the shared variable stores the value $M_{i,k}$, which is common to the entire row. Because the shared memory offers lower latency and higher bandwidth than the global memory, it is worth storing this value for the other threads.
\begin{algorithm}[h!]
\SetAlgoLined
\DontPrintSemicolon
\For{$k = 0 \rightarrow n$}{
\textnormal{kernel<{}<{}<$\frac{n+b-1}{b} \times n$, $b$>{}>{}>($M$, $n$, $k$)}
}
\;
\SetKwFunction{FMain}{kernel}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$M$, $n$, $k$}}{
$j \leftarrow \left|B\right|_x \times B_x + T_x$ \\
$i \leftarrow B_y$ \\
\If{$j < n$}{
\If{$T_x = 0 $}{
$K_{\textnormal{shared}} \leftarrow M_{i,k};$
}
$\textnormal{syncthreads()}$
\If{$M_{i,j} > K_{\textnormal{shared}} + M_{k,j}$}{
$M_{i,j} \leftarrow K_{\textnormal{shared}} + M_{k,j}$
}
}
}
\caption{Kernel of the \emph{CUDA-FW} on a pre-Volta architecture}\label{alg:cuda}
\end{algorithm}
\textbf{Algorithm \ref*{alg:cuda}} shows the kernel function for this implementation. \\
$B_x$ is the $x$ coordinate of the block, $\left|B\right|_x$ is the horizontal size of the block, $T_x$ is the $x$ coordinate of the thread, $B_y$ is the $y$ coordinate of the block and $K_{\textnormal{shared}}$ is the variable shared among all the threads of the same block. One thread (namely $T_{0,0}$) reads $M_{i,k}$ and stores the value in shared memory. The threads then reach a synchronization point, which ensures that $T_{0,0}$ has written to shared memory and that the block actually uses the correct value. The state-of-the-art L1 cache in Volta and Turing offers lower latency, higher bandwidth, and higher capacity compared to the earlier architectures. Like Volta, Turing's L1 can cache write operations (write-through).
The result is that for many applications Volta and Turing narrow the performance gap between explicitly managed shared memory and direct access to device memory \cite{nvidia}.
\par For this reason we benchmarked \textbf{Algorithm \ref*{alg:cuda2}}, which relies only on global memory, and we did not experience any performance degradation on the Turing architecture. This mechanism is present in the newer Ampere architecture as well. For the concrete implementation see \textbf{section \ref{cudaimpl}}.
\begin{algorithm}[h!]
\SetAlgoLined
\DontPrintSemicolon
\SetKwFunction{FMain}{kernel}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$M$, $n$, $k$}}{
$j \leftarrow \left|B\right|_x \times B_x + T_x$ \\
$i \leftarrow B_y$ \\
\If{$j < n$}{
\If{$M_{i,j} > M_{i,k} + M_{k,j}$}{
$M_{i,j} \leftarrow M_{i,k} + M_{k,j}$
}
}
}
\caption{Kernel of the \emph{CUDA-FW} on Volta and post-Volta architectures}\label{alg:cuda2}
\end{algorithm}
Because each thread does little computation and uses high memory bandwidth, timings are very low compared to \emph{serial} FW and \emph{OMP} FW with 8 cores. \\
In fact, from \textbf{Figure \ref*{fig:cuda-time}} we see a speedup of up to $87.8$ times compared to \emph{serial} FW and $12.34$ times compared to \emph{OMP} FW with 8 cores. We also notice that the maximum speedup is reached when the number of threads per block is equal to 128; this is because the occupancy of the Streaming Multiprocessors is at its peak and there are fewer wasted threads.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/cuda-speedup}
\captionsetup{justification=centering}
\caption{Speedup of \emph{CUDA} FW (128 threads per block) consuming a medium matrix compared to \emph{sequential} and \emph{OMP} FW}
\label{fig:cuda-speedup}
\end{figure}
Because the CUDA APIs don't allow the user to control the number of Streaming Multiprocessors or how many CUDA cores can be involved in the computation, the speedup is calculated based on the number of threads per block.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{diagrams/cuda-time}
\captionsetup{justification=centering}
\caption{Timings of \emph{CUDA} FW consuming a medium matrix}
\label{fig:cuda-time}
\end{figure}
\subsection{Hybrid: MPI + OpenMP}
In order to make the most of \emph{MPI} FW, it can be combined with \emph{OpenMP}; this makes sense because otherwise each process would run the $i$ and $j$ loops on a single thread, and nowadays it is very rare to find a cluster composed of single-threaded CPUs. \textbf{Algorithm \ref*{alg:mpi+omp}} shows how easily \emph{MPI} FW can be modified so that each process can benefit from multithreading.
\begin{algorithm}[h!]
\SetAlgoLined
broadcast($n$, $ROOT$) \\
scatter($M$, $processes$) \\
\texttt{\#pragma omp parallel num\_threads(t) private(k) shared(M, K, p)} \\
\For{$k = 1 \rightarrow n$}{
\texttt{\#pragma omp master} \\
\If{$rank =$ \textnormal{findOwnerOfKthRow}($k$, $rank$)}{
$K \leftarrow $ getRow($M$, $k$) \\
}
broadcast($K$) \\
\texttt{\#pragma omp for private(i,j) schedule(dynamic)} \\
\For{$i = 1 \rightarrow \frac{n}{p}$}{
\For{$j = 1 \rightarrow n$}{
\If{$M_{i,j} > M_{i,k} + K_j$}{
$M_{i,j} \leftarrow M_{i,k} + K_j$
}
}
}
}
gather($M$, $processes$);
\caption{\emph{MPI+OpenMP} FW}\label{alg:mpi+omp}
\end{algorithm}
For the concrete implementation see \textbf{section \ref{hybimpl}}.
\subsection{Hybrid: MPI + CUDA}
Another way to improve the efficiency of the cluster is to use GPGPU on each node.
\textbf{Algorithm \ref*{alg:mpi+cuda}} shows how this solution can be implemented.
\begin{algorithm}[h!]
\SetAlgoLined
\DontPrintSemicolon
broadcast($n$, $ROOT$) \\
scatter($M$, $p$) \\
\For{$k = 1 \rightarrow n$}{
\If{$rank =$ \textnormal{findOwnerOfKthRow}($k$, $rank$)}{
$K \leftarrow $ getRow($M$, $k$) \\
}
broadcast($K$) \\
\textnormal{kernel<{}<{}<$\frac{n+b-1}{b\cdot p}\times n$, $b$>{}>{}>($M$, $n$, $k$)}
}
gather($M$, $p$);
\;
\;
\SetKwFunction{FMain}{kernel}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$M$, $n$, $k$}}{
$j \leftarrow \left|B\right|_x \times B_x + T_x$ \\
$i \leftarrow B_y$ \\
\If{$j < n$}{
\If{$M_{i,j} > M_{i,k} + M_{k,j}$}{
$M_{i,j} \leftarrow M_{i,k} + M_{k,j}$
}
}
}
\caption{\emph{MPI+CUDA} FW}\label{alg:mpi+cuda}
\end{algorithm}
Despite the increase in performance, this solution implies high costs because each node must be equipped with a GPU; moreover, the GPUs (which must be produced by NVIDIA) must be compatible with the version of the CUDA Toolkit the program was compiled for.
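To complement the pseudocode, the following CUDA C sketch shows the structure of the per-cell kernel of \textbf{Algorithm \ref*{alg:cuda2}} (the same body is reused by \textbf{Algorithm \ref*{alg:mpi+cuda}}). It is an illustration written for this report rather than the project's source (see \textbf{section \ref{cudaimpl}}); the launch configuration mirrors the $\frac{n+b-1}{b} \times n$ grid described above, and the names are placeholders.
\begin{verbatim}
/* Illustrative sketch, not the project's source code. */
__global__ void fw_kernel(int *M, int n, int k)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;  /* column */
    int i = blockIdx.y;                             /* row    */
    if (j < n && M[i * n + j] > M[i * n + k] + M[k * n + j])
        M[i * n + j] = M[i * n + k] + M[k * n + j];
}

/* Host side, for each k:
 *   dim3 grid((n + b - 1) / b, n);
 *   fw_kernel<<<grid, b>>>(d_M, n, k);
 */
\end{verbatim}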
{ "alphanum_fraction": 0.6117962112, "avg_line_length": 42.6014084507, "ext": "tex", "hexsha": "60521d4206a4ce97009e0a080abc476706f53229", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "firaja/Parallel-FloydWarshall", "max_forks_repo_path": "report/methodology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "firaja/Parallel-FloydWarshall", "max_issues_repo_path": "report/methodology.tex", "max_line_length": 669, "max_stars_count": 2, "max_stars_repo_head_hexsha": "97b99291cf2eb8bf12b1775358f6c5179f5a03b8", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "firaja/Parallel-FloydWarshall", "max_stars_repo_path": "report/methodology.tex", "max_stars_repo_stars_event_max_datetime": "2021-06-12T11:25:06.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-19T21:42:24.000Z", "num_tokens": 7049, "size": 30247 }
\section{The SkipBlock Model}
\label{sec:skipbock}
The Skip List model works at the granularity of a record, i.e., a synchronization point has a probability $p$ of appearing in an inverted list of $n$ records. The SkipBlock model operates instead on \emph{blocks} of records of a fixed size, in place of the records themselves. Consequently, the probabilistic parameter $p$ is defined with respect to a block unit. A synchronization point is created every $\frac{1}{p^i}$ blocks on a level $i$, thus every $\frac{\vert B \vert}{p^i}$ records, where $\vert B \vert$ denotes the block size. A synchronization point links to the first record of a block interval. Compared to Figure~\ref{fig:skiplists}, a SkipBlock structure with $p=\frac{1}{8}$ and $\vert B \vert=2$ also has an interval of $\frac{\vert B \vert}{p^1}=16$ records. However, on level 2, the synchronization points are separated by $\frac{\vert B \vert}{p^2}=128$ records instead of 256. We note that with $\vert B \vert= 1$, the SkipBlock model is equivalent to the original Skip Lists model, and is therefore a generalization of it. The two properties of the Skip Lists are then translated to this model:
\begin{enumerate}
\item the number of levels is defined by
\begin{equation}
L_B(n)=\left\lfloor \ln_{\frac{1}{p}}\left(\frac{n}{\vert B \vert}\right) \right\rfloor
\label{eq:skipblock-maxlevels}
\end{equation}
\item the structure's size
\begin{equation}
S_B(n)=\sum_{i=1}^{ L_B\left(n\right)}{\left\lfloor \frac{n\times p^i}{\vert B \vert} \right\rfloor}
\label{eq:skipblock-size}
\end{equation}
\end{enumerate}
\subsection{SkipBlock Search Algorithm}
\label{sec:search-skipblock}
With the block-based Skip Lists model, the search algorithm returns an interval of blocks containing the target record. We discuss the search within that interval in Section~\ref{sec:search-interval}. The search walk is identical to the one presented in Section~\ref{sec:search-skiplists}: we walk from the top level to the bottom level, and at each step compare the current synchronization point with the target. The walk stops on the same termination criteria as in Skip Lists. The search complexity in the worst case becomes
\begin{equation}
C_{SB}=\frac{L_B(n)}{p}
\label{eq:skipblock-complexity}
\end{equation}
\subsection{Impact of the Self-Indexing Parameters}
\label{sec:block-probas-impact}
In this section, we discuss the consequences of the probabilistic parameter on the Skip Lists data structure. Table~\ref{tab:skiplists-proba} reports for low (i.e., $\frac{1}{1024}$) and high (i.e., $\frac{1}{2}$) probabilities
\begin{inparaenum}[(1)]
\item the complexity (\ref{eq:skiplists-complexity}) to find the interval containing the target record, and
\item the size (\ref{eq:skiplists-size}) of the Skip Lists structure.
\end{inparaenum}
For example, with an interval $\vert I \vert = 1024$ and an inverted list of size $10^6$ (i.e., Skip Lists with one level), reaching a record located at the end of the inverted list requires reading a large number of synchronization points on level 1. With a smaller interval, e.g., 16, the number of synchronization points read decreases because the $L(10^6)=4$ levels provide more access points into the inverted list. Large intervals make it possible to skip over a large number of records while reading only a few synchronization points. However, this leaves fewer opportunities to use the Skip Lists structure when the searched records are close to each other. In the latter case, small intervals are better suited, but at the cost of a bigger structure.
Moreover, the more levels there are, the more walking up and down between levels is required. There is a trade-off to be made when selecting $p$: a high probability provides a low search complexity but at a larger space cost, while a low probability considerably reduces the required space at the cost of a higher search complexity. The SkipBlock model provides two parameters to control its Skip Lists structure: the probabilistic parameter $p$ and the block size $\vert B \vert$. The block size parameter enables more control over the Skip Lists structure. For example, to build a structure with an interval of length 64, the original Skip Lists model proposes only one configuration, given by $p=\frac{1}{64}$. For this same interval length, SkipBlock proposes all the configurations that satisfy the equation $\frac{\vert B \vert}{p}=64$. Table~\ref{tab:skiplists-proba-block} reports statistics of some SkipBlock configurations for the same interval lengths as in Table~\ref{tab:skiplists-proba}. Compared to Skip Lists with the same interval length, SkipBlock shows a lower search complexity in exchange for a larger structure.
\begin{table}
\begin{center}
\subfloat[Skip Lists with $\vert I\vert=\frac{1}{p}$.]{
\centering
\ra{1.1}
\resizebox{0.39\linewidth}{!}{%
\begin{tabular}{@{}lcc@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}}
\toprule
$\vert I \vert$ & \phantom{a} & 2 & 16 & 64 & 128 & 1024\\
\cmidrule{2-7}
$S(n)$ & \phantom{a} & \numprint{99999988}&\numprint{6666664}&\numprint{1587300}&\numprint{787400}&\numprint{97751}\\
$C$ & \phantom{a} & 54 & 112 & 320 & 512 & 3072 \\
\bottomrule
\end{tabular}
\label{tab:skiplists-proba}
}}\quad%
\subfloat[SkipBlock with $\vert I\vert=\frac{\vert B \vert}{p}$.]{
\ra{1.1}
\resizebox{0.54\linewidth}{!}{%
\begin{tabular}{@{}lcc@{\hs}c@{\hs}cc@{\hs}c@{\hs}cc@{\hs}c@{\hs}cc@{\hs}c@{\hs}}
\toprule
$\vert I \vert$ & \phantom{a} & \multicolumn{2}{c}{16} & \phantom{a} & \multicolumn{2}{c}{64} & \phantom{a} & \multicolumn{2}{c}{128} & \phantom{a} & \multicolumn{2}{c}{1024} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13}
$p$:$\vert B \vert$ & \phantom{a} & $\frac{1}{4}$:4 & $\frac{1}{8}$:2 & \phantom{a} & $\frac{1}{4}$:16 & $\frac{1}{8}$:8 & \phantom{a} & $\frac{1}{4}$:32 & $\frac{1}{8}$:16 & \phantom{a} & $\frac{1}{4}$:256 & $\frac{1}{8}$:128 \\
$S_B(n)$ & \phantom{a} & \numprint{8333328} & \numprint{7142853} & \phantom{a} & \numprint{2083328} & \numprint{1785710} & \phantom{a} & \numprint{1041660} & \numprint{892853} & \phantom{a} & \numprint{130203} & \numprint{111603} \\
$C$ & \phantom{a} & 48 & 64 & \phantom{a} & 44 & 56 & \phantom{a} & 40 & 56 & \phantom{a} & 36 & 48 \\
\bottomrule
\end{tabular}
\label{tab:skiplists-proba-block}
}}%
\end{center}
\caption{Search and size costs of Skip Lists and SkipBlock with $n=10^8$. $\vert I \vert$ stands for an interval length. $C$ reports the search complexity to find an interval (Sections~\ref{sec:search-skiplists} and~\ref{sec:search-skipblock}).}
\end{table}
\section{Searching within an Interval}
\label{sec:search-interval}
The Skip Lists and SkipBlock techniques enable the retrieval of an interval given a target record. The next step is to find the target record within that interval. A first strategy (S1) is to linearly scan all the records within that interval until the target is found. Its complexity is therefore $O(\vert I \vert)$. SkipBlock takes advantage of the block-based structure of the interval to perform more efficient search strategies.
We define four additional strategies for searching a block-based interval, parameterized by a probability $p$. The second strategy (S2) performs
\begin{enumerate}[(a)]
\item a linear scan over the blocks of the interval to find the block holding the target and \label{s2-linear-search-blocks}
\item a linear scan of the records of that block to find the target. \label{s2-target-block}
\end{enumerate}
The search complexity is $\frac{1}{p}+\vert B \vert$, with $\frac{1}{p}$ denoting the linear scan over the blocks and $\vert B \vert$ the linear scan over the records of one block. Similarly to S2, the third strategy (S3) performs step (\ref{s2-linear-search-blocks}). Then, to find the target, it uses an inner-block Skip Lists structure restricted to a single level. The complexity is $\frac{1}{p}+\frac{1}{q}+\left\lfloor\vert B\vert\times q\right\rfloor$, with $q$ the probability of the inner Skip Lists. In contrast to S3, the fourth strategy (S4) uses a non-restricted inner-block Skip Lists structure. The complexity is $\frac{1}{p}+\frac{L(\vert B \vert)+1}{q}$, with $q$ the inner Skip Lists probability. The fifth strategy (S5) builds a Skip Lists structure on the whole interval instead of on a single block. Its complexity is then $\frac{L\left(\frac{\vert B\vert}{p}\right)+1}{q}$, with $q$ the inner Skip Lists probability. The strategies S3, S4 and S5 are equivalent to S2 when the block size is too small to create synchronization points.
\section{Cost-Based Comparison}
\label{sec:cost-based-cmp}
In this section, we define a cost model that we use to compare five SkipBlock implementations and the original Skip Lists implementation.
\paragraph{Cost Model}
\label{sec:skipblock-cost-model}
For both the Skip Lists and the SkipBlock, we define a cost model by
\begin{inparaenum}[(a)]
\item the cost to search for the target, and \label{cost-search}
\item the cost of the data structure's size. \label{cost-size}
\end{inparaenum}
The search cost consists of the number of synchronization points traversed to reach the interval containing the target, plus the number of records scanned in that interval to find the target. The size cost consists of the total number of synchronization points in the data structure, including the additional ones for S3, S4 and S5 in the intervals.
\paragraph{Implementations}
\label{sec:impl}
We define the baseline implementation, denoted $I_1$, as the Skip Lists model using strategy \emph{S1}. We define five implementations of the SkipBlock model, denoted by $I_2$, $I_3$, $I_4$, $I_5$ and $I_6$, based on the five interval search strategies, i.e., \emph{S1}, \emph{S2}, \emph{S3}, \emph{S4} and \emph{S5} respectively. The inner Skip Lists in implementations $I_4$, $I_5$ and $I_6$ are configured with probability $q=\frac{1}{16}$. The inner Skip Lists in $I_5$ and $I_6$ have at least 2 levels. The size costs of the six implementations are
\begin{itemize}
\item[$I_1$:] $S(n)$
\item[$I_2$:] $S_B(n)$
\item[$I_3$:] $S_B(n)+\frac{n}{\vert B \vert}$
\item[$I_4$:] $S_B(n)+\left\lfloor n\times q\right\rfloor$
\item[$I_5$:] $S_B(n)+\frac{S(\vert B \vert)\times n}{\vert B \vert}$
\item[$I_6$:] $S_B(n)+\frac{S(p\times\vert B \vert)\times n}{p\times\vert B \vert}$
\end{itemize}
\paragraph{Comparison}
With respect to the SkipBlock model, we tested all the possible configurations for a given interval length. We report that all of them provided a better search cost than the baseline.
In Table~\ref{tab:cmp-costs} we only report, for each interval length, the configurations providing the best search cost, together with the associated size cost. We observe that $I_2$ already provides a better search cost than the baseline $I_1$ using the same search strategy \emph{S1}, in exchange for a larger size cost. The other implementations, i.e., $I_3$, $I_4$, $I_5$ and $I_6$, which use more efficient interval search strategies, further decrease the search cost. In addition, their size cost decreases significantly with the size of the interval. On a large interval ($512$), $I_4$ is able to provide a low search cost ($64$) while sustaining a small size cost ($6.5\times 10^{6}$). The inner Skip Lists of $I_5$ and $I_6$ are built on lists too small to provide benefits. To conclude, $I_4$ seems to provide a good compromise between search cost and size cost with large intervals.
\begin{table}
\centering
\ra{1.1}
\resizebox{\linewidth}{!}{%
\begin{tabular}{@{}lcc@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}cc@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}cc@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}cc@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}c@{\hs}}
\toprule
$\vert I\vert$&\phantom{a}&\multicolumn{6}{c}{8} &\phantom{a}&\multicolumn{6}{c}{16} &\phantom{a}&\multicolumn{6}{c}{512} &\phantom{a}&\multicolumn{6}{c}{1152}\\
&\phantom{a}& $I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ &\phantom{a}& $I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ &\phantom{a}& $I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ &\phantom{a}& $I_1$ & $I_2$ & $I_3$ & $I_4$ & $I_5$ & $I_6$ \\
\cmidrule{3-8}\cmidrule{10-15}\cmidrule{17-22}\cmidrule{24-29}
SC&\phantom{a}& $72$ & $56$ & \multicolumn{4}{c}{$54$} &\phantom{a}& $112$ & $62$ & \multicolumn{4}{c}{$56$} &\phantom{a}& $1536$ & $548$ & $120$ & $64$ & $86$ & $84$ &\phantom{a}& $3456$ & $1186$ & $154$ & $74$ & $84$ & $81$ \\
\cmidrule{3-8}\cmidrule{10-15}\cmidrule{17-22}\cmidrule{24-29}
ZC$\times e6$&\phantom{a}& $14.3$ & $16.7$ & \multicolumn{4}{c}{$50.0$} &\phantom{a}& $6.7$ & $12.5$ & \multicolumn{4}{c}{$25.0$} &\phantom{a}& $0.2$ & $0.3$ &$1.8$ & $6.5$ & $7.0$ & $6.9$ &\phantom{a}& $0.09$ &$0.17$ & $1.7$&$6.4$ & $6.6$&$6.7$\\
\bottomrule
\end{tabular}
}%
\caption{Search (i.e., SC) and size (i.e., ZC, in millions) costs with $n=10^8$. SkipBlock implementations report the best search cost with the associated size cost.}
\label{tab:cmp-costs}
\end{table}
\subsection{Skipping Analysis}
The SkipBlock structure, as a generalization of the Skip List, possesses the same hierarchical layout. However, the difference between the two is concrete: SkipBlock jumps over groups of blocks at a time, whereas the Skip List jumps over groups of records. Table~\ref{tab:skip-levels} reports the interval at each level for both structures, with a constant interval of 256 records between towers. For the same interval length (e.g., 256 records in the table), the SkipBlock model yields more configurations than the Skip List model. We remark that in both SkipBlock configurations, intermediate levels have been added compared to the original Skip List. These additional levels give more skipping possibilities while searching. Moreover, this implies that more data will be read from a tower in a SkipBlock structure than in the Skip List, for the same interval length.
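As a worked instance of the model's definition, the level intervals in Table~\ref{tab:skip-levels} follow directly from the $\frac{\vert B \vert}{p^i}$ spacing: for the SkipBlock configuration $\vert B \vert=16$, $p=\frac{1}{16}$ we get
\begin{displaymath}
\frac{\vert B \vert}{p^i} = 16 \times 16^{i} = 256,\ 4096,\ 65536,\ 1048576,\ 16777216 \qquad (i = 1,\ldots,5),
\end{displaymath}
whereas the original Skip List with $p=\frac{1}{256}$ only offers $256^{i} = 256,\ 65536,\ 16777216$. The two additional levels ($4096$ and $1048576$) are exactly the in-between skipping opportunities mentioned above.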
\begin{table}
\ra{1.1}
\centering
\resizebox{0.45\linewidth}{!}{%
\begin{tabular}{lc@{\hs}rc@{\hs}rr}
\toprule
Level & \phantom{a} & Skip Lists & \phantom{a} & \multicolumn{2}{c}{SkipBlock}\\
\cmidrule{5-6}
& & $p=\frac{1}{256}$ & \phantom{a} & $\vert B\vert=16$, $p=\frac{1}{16}$ & $\vert B\vert=64$, $p=\frac{1}{4}$ \\
\cmidrule{5-6}
1 & \phantom{a} & 256 & \phantom{a} & 256 & 256 \\
2 & \phantom{a} & \uwave{\numprint{65536}} & \phantom{a} & \numprint{4096} & \numprint{1024} \\
3 & \phantom{a} & \uwave{\numprint{16777216}} & \phantom{a} & \uwave{\numprint{65536}} & \numprint{4096} \\
4 & \phantom{a} & \ldots & \phantom{a} & \numprint{1048576} & \numprint{16384}\\
5 & \phantom{a} & \ldots & \phantom{a} & \uwave{\numprint{16777216}} & \uwave{\numprint{65536}}\\
\bottomrule
\end{tabular}
}
\caption{Interval length at each skip level.}
\label{tab:skip-levels}
\end{table}
The SkipBlock structure behaves differently from the Skip List depending on the parameter configuration, since the two models yield different skip levels. To better understand the differences between them, we discuss here the cost of reading skip data into memory and the cases in which it outweighs the CPU savings, using an adaptation of the method from \cite{moffat:96}. Let $t_r$ be the cost of reading one record or one synchronization point, both as part of a bulk read. Let also $t_d$ be the cost of decoding a record/synchronization point. Let further $k$ be the number of \emph{candidates}, i.e., the records that will be looked up from the inverted list thanks to the self-indexing structure. Then the total time $T$ required to search one inverted list of size $n$ in the worst-case scenario (i.e., all levels up to the maximum $L(n)$ are read) is given by the formula
\begin{displaymath}
T=T_d+T_r=kt_d\left( C + \vert I \vert \right)+t_r\left(n+2\times S\right)
\end{displaymath}
where $T_d$ is the total time to decode pointers and $T_r$ the total time to read the inverted list and the synchronization points into memory. $SC$ and $ZC$ denote respectively the search and size costs reported in the tables. The numerical values of $t_d$ and $t_r$ are respectively $2.5\times 10^{-6}$ and $0.5\times 10^{-6}$, taken from \cite{moffat:96}. Table~\ref{tab:predicted-times} reports a numerical comparison between the two self-indexing models, with $n=60,000$ and $k=\{60,\;6000\}$. With a larger number of candidates we can expect a longer processing time, since the self-indexing structure will be accessed more often. The SkipBlock values are reported with strategy S1, for the configurations that yield the best processing time $T$. On small intervals (e.g., 16) the SkipBlock model does not provide significantly better performance even though its complexity is better than the Skip List's. This is explained by the difference in size: the amount of data to be read is greater, which is reflected in the larger read time $T_r$. As the interval increases, the processing times increase for both models. However, the expected processing time with SkipBlock is about half that of the original Skip List. The reason is that, even though the strategy for searching within intervals is the same (i.e., linear), the additional levels that the SkipBlock model provides give more skipping possibilities. This is consistent with the fact that the SkipBlock complexities are lower than the Skip List's.
With the interval $\vert I \vert = 1024$, the structure's size increases by 30\%; however, both the search complexity and the estimated processing time are half those of the original model.
\begin{table}
\ra{1.1}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{lc@{\hs}rrrc@{\hs}rrrc@{\hs}rrc@{\whs}rrrc@{\hs}rrrc@{\hs}rr}
\toprule
$\vert I \vert$ & \phantom{a} & \multicolumn{10}{c}{Skip List} & \phantom{a} & \multicolumn{10}{c}{SkipBlock} \\
\cmidrule{3-12} \cmidrule{14-23}
k &\phantom{a}& \multicolumn{3}{c}{60} &\phantom{a}& \multicolumn{3}{c}{6000} &\phantom{a}& & & \phantom{a} & \multicolumn{3}{c}{60} &\phantom{a}& \multicolumn{3}{c}{6000} & & \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{14-16} \cmidrule{18-20}
& \phantom{a} & $T_d$ & $T_r$ & T & \phantom{a} & $T_d$ & $T_r$ & T & \phantom{a} & SC & ZC & \phantom{a} & $T_d$ & $T_r$ & T & \phantom{a} & $T_d$ & $T_r$ & T & \phantom{a} & SC & ZC \\
16&\phantom{a}&0.010&0.034&0.044 &\phantom{a}& 0.960 & 0.034 & 0.994 & \phantom{a}& 64 & 3998 &\phantom{a}&0.006&0.035&0.041 &\phantom{a}& 0.600 & 0.035 & 0.635 & \phantom{a}&40 & 4996 \\
32&\phantom{a}&0.019&0.032&0.051 &\phantom{a}& 1.92 & 0.032 & 1.952& \phantom{a} & 128 & 1934 &\phantom{a}&0.008&0.034&0.042 &\phantom{a}& 0.81 & 0.034 & 0.844 & \phantom{a}& 54 & 3743 \\
256&\phantom{a}&0.077&0.030&0.107 &\phantom{a}& 7.68 & 0.030 & 7.710& \phantom{a} & 512 & 234 &\phantom{a}&0.041&0.030&0.071 &\phantom{a}& 4.08 & 0.030 & 4.110& \phantom{a} & 272& 309 \\
512&\phantom{a}&0.154&0.030&0.184 &\phantom{a}& 15.36 & 0.030 & 15.390& \phantom{a} & 1024 & 117 &\phantom{a}&0.079&0.030&0.109 &\phantom{a}& 7.89 & 0.030 & 7.920 & \phantom{a}& 526 & 229 \\
1024&\phantom{a}&0.307&0.030&0.337 &\phantom{a}&30.72 & 0.030 & 30.750& \phantom{a} & 2048 & 58 &\phantom{a}&0.155&0.030&0.185 &\phantom{a}& 15.54 & 0.030 & 15.570 & \phantom{a}& 1036 & 75 \\
\bottomrule
\end{tabular}}
\caption{Predicted processing times in seconds ($n=60,000$), with the corresponding structure's search (i.e., SC) and size (i.e., ZC) costs. $T_d$ stands for the time to decode records, $T_r$ for the time to read a record/synchronization point into memory, and $T$ for the sum of these two times. $k$ indicates the number of candidates.}
\label{tab:predicted-times}
\end{table}
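For reproducibility, the small C helper below (written for this report; it is not taken from any existing implementation) evaluates $L_B(n)$, $S_B(n)$ and $C_{SB}$ exactly as defined in Section~\ref{sec:skipbock}. With $n=10^8$, $p=\frac{1}{8}$ and $\vert B \vert=2$ it prints $L_B=8$, $S_B=7142853$ and $C_{SB}=64$, matching the corresponding column of Table~\ref{tab:skiplists-proba-block}.
\begin{verbatim}
/* Illustrative helper, not from any published implementation. */
#include <stdio.h>

static void skipblock_costs(long long n, long long inv_p, long long B)
{
    long long levels = 0, size = 0;
    /* count levels while B * (1/p)^i <= n, i.e. i <= log_{1/p}(n/B) */
    for (long long span = B * inv_p; span <= n; span *= inv_p) {
        levels += 1;                 /* L_B(n)                   */
        size   += n / span;          /* adds floor(n * p^i / B)  */
    }
    printf("L_B=%lld  S_B=%lld  C_SB=%lld\n",
           levels, size, levels * inv_p);
}

int main(void)
{
    /* n = 10^8, p = 1/8, |B| = 2 */
    skipblock_costs(100000000LL, 8, 2);
    return 0;
}
\end{verbatim}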
{ "alphanum_fraction": 0.7159571178, "avg_line_length": 48.7487437186, "ext": "tex", "hexsha": "b6b639bc10c799d9c4accd25362b8155c1c99a27", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ec36c77177642434a6ac4890d394ac2afea2c43", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "scampi/report-compression-skiplists", "max_forks_repo_path": "Methods/self-indexing.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ec36c77177642434a6ac4890d394ac2afea2c43", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "scampi/report-compression-skiplists", "max_issues_repo_path": "Methods/self-indexing.tex", "max_line_length": 193, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ec36c77177642434a6ac4890d394ac2afea2c43", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "scampi/report-compression-skiplists", "max_stars_repo_path": "Methods/self-indexing.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6486, "size": 19402 }
\section{Application Evaluation} \label{sec:apps} Legion\cite{Legion12} is a high-level runtime system that has been implemented on top of our interface. %The Legion runtime is distributed so %that there is an independent scheduler for each processor in the machine. The Legion programming model is built around the abstraction of {\em logical regions} which express locality and independence properties of data. Computation in Legion is organized into a tree of tasks where each task must specify which logical regions will be accessed. When executing a Legion task, the Legion runtime receives requests to execute sub-tasks along with their region requirements. This stream of tasks with region requirements is analogous to a stream of instructions with register requirements that are executed by a hardware processor. Hardware out-of-order processors are designed to run ahead of the actual execution of a stream of instructions to ensure that the processor's pipeline is fully utilized. Similarly, in a Legion application, for every processor there is an instance of the Legion runtime that takes a stream of tasks with region requirements and asynchronously runs ahead of the actual execution using a scheduler called a {\em Software Out-of-Order Processor} (SOOP). The SOOP leverages our interface by asynchronously issuing task launches, copy operations, and synchronization operations with the dependencies between them expressed as events. For details the interested reader is referred to the work of \cite{Legion12}. \begin{figure} \begin{center} \includegraphics[scale=0.33]{figs/reduce_simple.pdf} \end{center} \vspace{-2mm} \caption{Reduction Benchmark.\label{fig:reducbench}} \vspace{-4mm} \end{figure} The composable design of the interface proposed in this paper is crucial to Legion's SOOP implementation. If all operations in the interface could not be composed via events, the SOOP would have to periodically invoke blocking operations that would cause processors to stall and severely limit Legion's ability to run ahead and keep the machine fully occupied. With a fully asynchronous, composable interface the SOOP can hide latencies by running far ahead of the actual execution, limited only by the physical resources available in the machine, exactly like a hardware out-of-order processor. In this section we demonstrate the performance properties of three real-world applications that use the Legion runtime and the heterogeneous implementation of our interface to run on the Keeneland supercomputer. The three applications that we characterize are all multi-phase applications that require parallel computation, data exchange, and synchronization between phases. {\em Fluid} is an incompressible fluid flow simulation from the PARSEC benchmark suite\cite{bienia11benchmarking}. The reference implementation only runs on shared memory machines, but when written in Legion, Fluid runs on large distributed machines as well. The Fluid simulation models particles that flow through cells. Each time step requires multiple phases that update different properties of the particles contingent upon neighboring particles in the same cell. The space of cells is partitioned and neighboring cells in different partitions must exchange data between phases. Legion ensures that these exchanges are done point-to-point by chaining copies and tasks using events rather than employing a global bulk-synchronous approach. 
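As a toy illustration of this chaining (the function below and its numeric ``events'' are invented solely for this example and are not part of the interface or of Legion), consider modeling each operation's completion step as one more than the latest event it waits on. Chaining the copies and tasks point-to-point lets each partition's update start as soon as its own inputs are ready, whereas a global barrier would delay every task until the last copy completes.
\begin{verbatim}
/* Toy model, invented for illustration only. */
#include <stdio.h>

typedef int event_t;   /* an event = a logical completion step */

static event_t op(const char *name, event_t w1, event_t w2)
{
    event_t done = (w1 > w2 ? w1 : w2) + 1;
    printf("%-20s done at step %d\n", name, done);
    return done;
}

int main(void)
{
    event_t prev_a = 0, prev_b = 3;   /* partition b is still busy */

    /* point-to-point chaining via events */
    event_t c_ab = op("copy a->b ghosts", prev_a, 0);
    event_t c_ba = op("copy b->a ghosts", prev_b, 0);
    event_t t_a  = op("update part. a", c_ba, prev_a);
    event_t t_b  = op("update part. b", c_ab, prev_b);

    /* bulk-synchronous alternative: wait for everything */
    event_t bar  = op("global barrier", c_ab, c_ba);
    printf("chained: a=%d b=%d, barrier-based: >=%d\n",
           t_a, t_b, bar + 1);
    return 0;
}
\end{verbatim}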
{\em Circuit} is a simulation of an integrated circuit that is described as an irregular graph of components connected by wires. The graph is partitioned and distributed across the machine. The computation for each time step consists of three phases: calculate currents on the wires, distribute charge between components, and update the voltages of all components. In the distribute charge phase, a single component may be updated by wires from multiple partitions. Reduction instances are used to allow each partition to perform reductions in parallel and to do so locally in GPU framebuffer memory before merging all the reductions to a global instance residing in GASNet memory. Without reduction instances this phase couldn't be performed using GPUs. {\em AMR} is an adaptive mesh refinement benchmark based on the third heat equation example from the Berkeley Labs BoxLib project\cite{BoxLib}. AMR simulates the two dimensional heat diffusion equation using three different levels of refinement. Each level is partitioned and distributed across the machine. Time steps require both intra- and inter-level communication and synchronization. Legion again expresses dependences between tasks from the same and different levels through events. In Section~\ref{subsec:eventlife} we study the lifetime of events in the Fluid application. Section~\ref{subsec:lockmig} examines the performance of deferred locks in the Circuit and AMR applications. %We describe how the Circuit application leverages reduction instances %to operate on GPUs in Section~\ref{subsec:reduccase}. Finally, we demonstrate that the asynchronous nature of our interface is essential to the performance of Legion by comparing to bulk-synchronous implementations of these applications in Section~\ref{subsec:bulkcomp}. \subsection{Event Lifetimes} \label{subsec:eventlife} We instrumented the heterogeneous implementation of our interface to capture event usage information. Figure~\ref{fig:eventlife} shows a timeline of the execution of the Fluid application on 16 nodes. The dynamic events line measures the total number of event creations. A large number of events are created - over 260,000 in less than 15 seconds - and allocating separate storage for every event would clearly be difficult for long-running applications. Dynamic events are considered {\em live events} until their last operation (i.e. wait, trigger) has been performed. The live events line measures the number of events that are live at any given time. After the last operation on a dynamic event, a reference counting implementation would be able to recover the storage associated with the event. The live events line is therefore equivalent to the number of needed events in a reference counting scheme. In this example a reference counting algorithm would reduce the storage needed for dynamic events by over 10X, but only at the cost of computation and communication for performing the reference counting. As discussed in Section~\ref{subsec:eventimpl}, our implementation requires storage that grows with the maximum number of {\em untriggered events}, a number that is 10X smaller than even the maximal live event count. The actual storage requirements of our implementation are shown by the generational events line, which shows the total number of generational events (defined in Section~\ref{subsec:eventimpl}) in the system. The maximum number of generational events needed is slightly larger than the peak number of untriggered events. 
The reason for this difference is because nodes create events locally if they have no free generational events, even if there are unused generational events on remote nodes. Despite this inefficiency, our implementation uses 5X less storage than a reference counting implementation and avoids any overhead related to reference counting. These savings would likely be even more dramatic for longer runs of the application, as the number of live events is steadily growing as the application runs, while the peak number of generational events needed appears to occur during the start-up of the application. %% much storage would be required for events using %% about different kinds of events. Figure~\ref{fig:eventlife} shows %% the lifetimes of different types of events from a single run of the Fluid %% application on 16 nodes using 8 processors per node. Dynamic events %% is a monotonically increasing line that %% corresponds to the total number of events created in the system. By %% the end of the run, the simulation created over 260,000 dynamic %% events. In contrast, the physical events line corresponds to the total number %% of generational events required to represent the dynamic events %% across all the nodes in the system. Less than 5,000 %% generational events were required to represent all the dynamic events %% illustrating that the technique of mapping from dynamic events to generational %% events described in Section~\ref{subsec:eventimpl} was effective. %% The live event line shows the total number of {\em live events} at a point in time. Dynamic events %% are considered {\em live events} until their last query has been performed. The number of live %% events is equivalent to the number of physical events that would be required %% in a reference counted scheme. In this case we see that by reusing %% generational events, we require up to 4X fewer generational events than would be %% required by a reference counted scheme. %% In an ideal world the total number of generational events required would be %% equivalent to the maximum number of generational events in the untriggered %% state at any point. The untriggered event line shows the number of generational %% events in the untriggered state. We see that the total number of generational %% events is actually slightly higher than the maximum number of generational %% events in the untriggered state at any point. The reason for this is that unused %% generational events cannot be shared across nodes, so in some cases nodes must %% make new generational events eventhough there are unused generational events %% on other nodes. The small difference in these lines shows that in practice %% this is only a minor inefficiency. \begin{figure} \begin{center} \includegraphics[scale=0.33]{figs/event_lifetimes.pdf} \end{center} \vspace{-2mm} \caption{Event Lifetimes - Fluid Application.\label{fig:eventlife}} \vspace{-4mm} \end{figure} %Figure~\ref{fig:eventlife} shows the event lifetimes in a run of the fluid application. The total number %of events created during the application is over 260,000. %\subsection{Deferred Lock Migration} \subsection{Deferred Lock Performance} \label{subsec:lockmig} We also instrumented our heterogeneous implementation to profile the usage of deferred locks in real applications. The Circuit and AMR applications both made use of deferred locks, creating 3336 and 1393 deferred locks respectively. Of all created locks in both applications, 14\% were migrated at least once during their lifetime. 
We measured the lock grant rates for both applications and found them to be many orders of magnitude smaller than the maximum lock grant rates achieved by our locking micro-benchmarks in Section~\ref{subsec:lockmicro}. This demonstrates that real applications need deferred locks to compose synchronization in a non-blocking environment, and that the ability of deferred locks to migrate is necessary for implementing real applications on distributed machines. %We also instrumented our heterogeneous implementation to profile the usage of %deferred locks in real applications. We measured how many locks were used in the application. For %each lock we measured the number of lock requests and the number of times that the lock %had to be transferred from one node to another. Figure~\ref{fig:lockcount} shows the results %for the Circuit and AMR application, both running on 16 nodes. The Circuit application uses %a larger number of locks (3336 vs. 1393), but most of them (86\%) are only used on the node on %which they were created. The AMR application exhibits more varied locking behavior, and in %particular has locks that are requested more often than they are transferred. In addition %to allowing synchronization in an asynchronous environment, deferred locks must be able to %migrate to implement real applications on distributed machines. %Without the %ability for locks to migrate, these applications couldn't have made use of asynchronous %locks. \texcomment{ \begin{figure} \centering \subfigure[Circuit application]{ \label{fig:lockcount:ckt} { \renewcommand{\arraystretch}{1.2} \small \begin{tabular}{c|rrrrrrr} \multicolumn{1}{l}{\multirow{2}{*}{{\renewcommand{\arraystretch}{1}\begin{tabular}{@{}l}\bf Lock\\\bf Xfers\end{tabular}}}} & \multicolumn{7}{c}{\bf Total Lock Requests} \\ & \multicolumn{1}{c}{\bf 0} & \multicolumn{1}{c}{\bf 1} & \multicolumn{1}{c}{\bf 2-8} & \multicolumn{1}{c}{\bf 9-16} & \multicolumn{1}{c}{\bf 17-32} & \multicolumn{1}{c}{\bf 33-64} & \multicolumn{1}{c}{\bf 65-128} \\ {\bf 0 } & 45 & 2611 & 9 & 26 & 79 & 78 & 30 \\ {\bf 1 } & - & 450 & - & - & - & - & - \\ {\bf 2-8 } & - & - & - & - & - & - & -\\ {\bf 9-16 } & - & - & - & 8 & - & - & -\\ \end{tabular} }} \vspace{-2mm} \subfigure[AMR application]{ { \renewcommand{\arraystretch}{1.2} \small \begin{tabular}{c|rrrrrrr} \multicolumn{1}{l}{\multirow{2}{*}{{\renewcommand{\arraystretch}{1}\begin{tabular}{@{}l}\bf Lock\\\bf Xfers\end{tabular}}}} & \multicolumn{7}{c}{\bf Total Lock Requests} \\ & \multicolumn{1}{c}{\bf 0} & \multicolumn{1}{c}{\bf 1} & \multicolumn{1}{c}{\bf 2} & \multicolumn{1}{c}{\bf 3-4} & \multicolumn{1}{c}{\bf 5-8} & \multicolumn{1}{c}{\bf 9-16} & \multicolumn{1}{c}{\bf 17-32} \\ {\bf 0 } & - & 94 & 592 & 414 & 94 & 3 & 41 \\ {\bf 1 } & - & 122 & - & - & - & - & - \\ {\bf 2 } & - & - & 6 & 7 & - & - & - \\ {\bf 3-4 } & - & - & - & - & 9 & 3 & - \\ {\bf 5-8 } & - & - & - & - & - & - & - \\ {\bf 9-16} & - & - & - & - & - & 7 & 1 \end{tabular} }} %\vspace{-2mm} \caption{Locks by Request and Transfer Counts.\label{fig:lockcount}} \vspace{-6mm} \end{figure} } %\subsection{Reduction Instances} %\label{subsec:reduccase} %In this section we give a brief case study of how reductions make it possible %to achieve high performance on a real application. 
%One of the three phases
%of the Circuit application involves reducing the current flowing in the wires
\subsection{Bulk-Synchronous Comparison}
\label{subsec:bulkcomp}
Our last set of experiments demonstrates that the composable, asynchronous nature of our interface is essential to the high performance of the Legion runtime system. The work of \cite{Legion12} has already shown that Legion implementations perform well compared to hand-coded reference implementations. Rather than compare directly to hand-coded reference implementations, we control for any performance advantages gained by using the Legion runtime by re-implementing the applications in a bulk-synchronous style within the Legion framework. These bulk-synchronous versions still run on top of the Legion runtime, but invoke blocking operations between computational phases, communication phases, and synchronization. By controlling for any advantages derived from the Legion programming system, we can directly quantify the benefits reaped by the asynchronous and composable nature of our interface.
Figure~\ref{fig:bulksync} compares the performance of the bulk-synchronous implementations versus the original implementations for Fluid, Circuit, and AMR. Each plot contains performance curves for both implementations on two different problem sizes. Circuit is a compute-bound application and the bulk-synchronous implementation performs reasonably well. By 16 nodes, however, the overhead grows to 19\% and 22\% on the small and large inputs respectively. The Fluid application has a more evenly balanced computation-to-communication ratio. As a result, Fluid performance suffers significantly more from switching to a bulk-synchronous model. At 16 nodes, performance is 135\% and 52\% worse than the original Legion implementations on the small and large problem sizes respectively. Finally, the AMR application is memory-bound, and on 16 nodes the bulk-synchronous version is 82\% and 102\% slower on the small and large problem sizes.
These results show that the asynchronous nature of our interface allows the Legion runtime system to run ahead of the actual execution, which provides significant latency-hiding capability. The benefit of this latency hiding grows as the number of nodes increases.
% and as the computation-to-communication ratio decreases.
In all cases, the overhead of the bulk-synchronous implementations grew with the node count. As we continue to scale applications to larger and larger machines, the ability of runtimes like Legion to run ahead will be essential to hiding the large latencies inherent in such machine architectures. The composable and asynchronous nature of the interface presented in this paper will be crucial for supporting this ability in future high-level runtimes.
\begin{figure}[!ht]
\centering
\subfigure[Fluid Application]{
\label{fig:fluidbulk}
\includegraphics[scale=0.33]{figs/fluid_bulk_sync.pdf}
\vspace{-3mm}
}
\subfigure[Circuit Application]{
\label{fig:cktbulk}
\includegraphics[scale=0.33]{figs/circuit_bulk_sync.pdf}
\vspace{-3mm}
}
\subfigure[AMR Application]{
\label{fig:amrbulk}
\includegraphics[scale=0.33]{figs/amr_bulk_sync.pdf}
}
\vspace{-2mm}
\caption{Bulk-Synchronous Performance.\label{fig:bulksync}}
\vspace{-6mm}
\end{figure}
{ "alphanum_fraction": 0.7778550322, "avg_line_length": 51.8288288288, "ext": "tex", "hexsha": "b3dc80b52a6ae2bc137b685ee52852f291bdaba2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lightsighter/LegionOrigins", "max_forks_repo_path": "doc/llr-pldi2013/results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lightsighter/LegionOrigins", "max_issues_repo_path": "doc/llr-pldi2013/results.tex", "max_line_length": 123, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0180bb3a8ee6efd0d2efdb743f75d3fba86f18f7", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lightsighter/LegionOrigins", "max_stars_repo_path": "doc/llr-pldi2013/results.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-14T20:56:13.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-10T06:29:39.000Z", "num_tokens": 4123, "size": 17259 }
\chapter[Conclusion and Future Scope]{Conclusion and \\Future Scope} \lipsum[29-30] \paragraph{Future Works} \blinditemize
{ "alphanum_fraction": 0.7293233083, "avg_line_length": 16.625, "ext": "tex", "hexsha": "07092596e2f83643094d4949a555b2aa9d217035", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-05-08T16:08:30.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-04T08:43:16.000Z", "max_forks_repo_head_hexsha": "5a280aacef92ab7c939e7980145c9bc202b77d9c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "c-i-p-h-e-r/PhinisheD", "max_forks_repo_path": "thesisInParts/8_conclusion/conclu_future.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5a280aacef92ab7c939e7980145c9bc202b77d9c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "c-i-p-h-e-r/PhinisheD", "max_issues_repo_path": "thesisInParts/8_conclusion/conclu_future.tex", "max_line_length": 69, "max_stars_count": 6, "max_stars_repo_head_hexsha": "5a280aacef92ab7c939e7980145c9bc202b77d9c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "c-i-p-h-e-r/PhinisheD", "max_stars_repo_path": "thesisInParts/8_conclusion/conclu_future.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T06:09:01.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-04T07:21:18.000Z", "num_tokens": 33, "size": 133 }
\chapter{The "Aime" methods for Tie Points computation} % Conclusion, on peut sans doute limiter le nombre de point avec ScaleStab % pour filtrage a priori => genre les 500 les plus stable %--------------------------------------------- %--------------------------------------------- %--------------------------------------------- \section{Fast recognitation} %--------------------------------------------- \subsection{Motivation} For each image, we have computed tie points. A tie points is made of a vector $V \in \RR^n)$ . Typically $V$ is invariant to the main geometric deformation . Le $V_1$ and $V_2$ be two tie points, we note : \begin{itemize} \item $H_{om}(V_1,V_2) $ the fact that two tie points are homologous; \end{itemize} Given $V_1,V_2$, there is of course no no oracle that can indicate if $H_{om}(V_1,V_2)$, and classically we want to compute a fast mathematicall function $\Psi $ that indicate if two vector $V_1$ and $V_2$ correspond to the same tie points . The ideal function would be such : \begin{itemize} \item $\Psi(V_1,V_2) \iff H_{om}(V_1,V_2)$ \end{itemize} Of course this impossible, and we introduce the miss rate and fall out: \begin{itemize} \item miss rate , probability of $\Psi=0$ knowing $H_{om}=1$ , we note $p_m$; \item fall out , probability of $\Psi=1$ knowing $H_{om}=0$, we note $p_f$; \end{itemize} As we cannot have the ideal function $\Psi $ such as $p_m=0$ and $p_f=0$, we have to compromise, and depending on the circunstances, the price of the two error, will not be the same. Typically, in indexation step, we are especially interested to have a low $p_f$; converselly in recognition step we are especially interested to have a low $p_m$. \subsection{Bits vector} %---------------------------------------- % A conserver : equations TIPE Lolo %----------------------------------------- \COM{ \begin{equation} \delta R = \frac{1}{\sigma} \frac{L}{S} = \frac{1}{\sigma} \frac{2 \pi a }{ e \; dz} \end{equation} \begin{equation} \phi = \iint \overrightarrow{B} \overrightarrow{dS} = B_M \cos(\omega t) \pi a^2 \end{equation} \begin{equation} e = -\frac{d\phi}{dt}= B_M \omega \sin(\omega t) \pi a^2 \end{equation} \begin{equation} d P_J = \frac{e^2}{\delta R} = \frac{ (B_M \omega \pi a^2 \sin(\omega t))^2 \sigma_e dz }{2 \pi a} \end{equation} \begin{equation} <d P_J> = \frac{B_M^2 \; \omega ^2 \; \pi \; a^3 \; e \; \sigma \; dz}{4} \end{equation} \begin{equation} <P_J> = \int_{z=0}^H <d P_J> = \frac{B_M^2 \; \omega ^2 \; \pi \; a^3 \; e \; \sigma \; H}{4} \end{equation} \begin{equation} dU = \delta ^2 Q_{creee} + \delta ^2 Q_{recue de l'air} \end{equation} \begin{equation} C dT = <P_J> dt - g 2\pi aH(T-T_a) \end{equation} \begin{equation} C = \mu c 2 \pi a e H \end{equation} \begin{equation} \mu c 2 \pi a e H \frac{dT}{dt} = \frac{B_M \omega^2 \pi a^3 e \sigma H}{4} - g 2 \pi a H (T-T_a) \end{equation} \begin{equation} \tau \frac{dT}{dt} + T = T_{\infty} \end{equation} \begin{equation} \tau = \frac{\mu c e}{g} \end{equation} \begin{equation} T_{\infty} - T_a = \frac{(B_M \omega a)^2 e \sigma}{8g} \end{equation} }
{ "alphanum_fraction": 0.5916018663, "avg_line_length": 26.1382113821, "ext": "tex", "hexsha": "cb8b7a58877c2aa9a5f7e3140ed3f8ec9f7c2157", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2022-02-04T10:24:54.000Z", "max_forks_repo_forks_event_min_datetime": "2020-08-01T00:27:16.000Z", "max_forks_repo_head_hexsha": "d15d33de48a76575b6555399bc3bf9d89e31baa2", "max_forks_repo_licenses": [ "CECILL-B" ], "max_forks_repo_name": "dronemapper-io/micmac", "max_forks_repo_path": "MMVII/Doc/Methods/TiePoints.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "d15d33de48a76575b6555399bc3bf9d89e31baa2", "max_issues_repo_issues_event_max_datetime": "2019-06-30T17:55:28.000Z", "max_issues_repo_issues_event_min_datetime": "2019-05-24T17:11:33.000Z", "max_issues_repo_licenses": [ "CECILL-B" ], "max_issues_repo_name": "dronemapper-io/micmac", "max_issues_repo_path": "MMVII/Doc/Methods/TiePoints.tex", "max_line_length": 117, "max_stars_count": 8, "max_stars_repo_head_hexsha": "d15d33de48a76575b6555399bc3bf9d89e31baa2", "max_stars_repo_licenses": [ "CECILL-B" ], "max_stars_repo_name": "dronemapper-io/micmac", "max_stars_repo_path": "MMVII/Doc/Methods/TiePoints.tex", "max_stars_repo_stars_event_max_datetime": "2021-07-27T20:01:46.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-08T21:43:53.000Z", "num_tokens": 1119, "size": 3215 }
\documentclass[letterpaper,10pt]{article}
\usepackage{graphicx}
\usepackage{longtable}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{alltt}
\usepackage{float}
\usepackage{color}
\usepackage{url}
\usepackage{balance}
\usepackage[TABBOTCAP, tight]{subfigure}
\usepackage{enumitem}
\usepackage{pstricks, pst-node}
\usepackage{geometry}
\geometry{textheight=10in, textwidth=7.5in}
\usepackage{hyperref}
\newcommand{\longtableendfoot}{Continued on next page}
\def\name{Chance Zibolski}
%% The following metadata will show up in the PDF properties
\hypersetup{
colorlinks = true,
urlcolor = black,
pdfauthor = {\name},
pdfkeywords = {cs311 ``operating systems'' assignment week1},
pdftitle = {CS 311 Assignment 1},
pdfsubject = {CS 311 Assignment 1},
pdfpagemode = UseNone
}
\parindent = 0.0 in
\parskip = 0.2 in
\title{CS311 Assignment 2}
\author{\name}
\date{October 2013}
\begin{document}
\maketitle
\section{Design}
For the ar program I plan on separating each option key into its own function, so that all the logic for creating an archive is just a composition of re-usable functions. There are two types of functions needed for ar. The first is a function which accepts the list of file names passed to the program; it will be used for the options -q, -x, -d, and -t, which are all used with a list of file names on the command line. The other options, -A and -v, will act slightly differently: -A will be passed a list of files, but it won't be a list from the command line, and -v will work with an existing flag, which will likely deal with disabling the suppression of command output. With this in mind, almost all of my ar options will be acting on a list of file names, so I will design my functions to work this way. I'll begin with -q so I can create an archive and test its output. The rest should simply deal with reading the ar data structure and parsing the ar header data. That should also be trivial. I will be able to read text from the archive until reaching the ARMAG constants, which will let me iteratively go through the file and either print the -t contents, delete files using -d, or extract files using -x. All of those will be very similar, so I will likely have utility functions for reading the file until the ARMAG constants and parsing the ar header data.
\subsection{Deviations from Original Design}
I pretty much followed my plan from beginning to end. The only part where I really deviated was when I found I could not factor out the logic for looping through the archive file. It was mostly because I wouldn't be able to simply store all of that data in memory and had to deal with things as I iterated through the archive file, and this was simply a limitation of the design. Since extraction, deletion, and printing the table of contents all required iterating through the archive file, and we could only pass through it at most one time, I ended up having to use a case statement inside the loop. I could have used callbacks which each returned how far they had iterated through the file; however, I disliked that idea and went with the case statement instead.
\section{Work Log}
% To be filled in when finished
\input{work_log.tex}
\section{Challenges}
The assignment itself was fairly straightforward, and the implementation itself wasn't too difficult either. My biggest challenge was getting stuck on small off-by-one errors. In particular, when trying to get extraction working, I wasn't checking whether read() returned 0 for end of file.
Along with that, I was accounting for even byte boundaries in most places, but when working on the extraction piece I forgot about that as well, and things were not working due to that. So overall the biggest challenge was debugging. Tools like GDB helped a little, but most of my problems were due to a lack of familiarity with particular functions and system calls.

\section{Questions}
\begin{description}
\item[Q1:] What do you think the main point of this assignment is?
\item[A1:] I think the point of this assignment is to introduce the class to a slightly complex C application with a couple of different components to worry about. Since we will likely be using command line args for most of our programs, learning to use getopt in C is one of the points. I think its main purpose was to become familiar with the system call API.
\item[Q2:] How did you ensure your solution was correct? Testing details, for instance.
\item[A2:] When testing I built archives using the built-in Unix ar program, with various sizes (even and odd), different owners, groups, etc. I then tested replicating these archive files with my archiver and compared them using diff to ensure consistency. Next I would use my archiver to extract, delete, and print items from the real archive file created with the built-in ar, and I would use the built-in ar to do the same on the archive file created with my archiver. I would then use diff to compare all results with the original files, as well as the ones extracted from the various archiver programs. To test the -A option, I created test directories with various file types inside of them, and tested that it didn't archive itself or the executable. When testing verbose output, I would redirect the output from the real ar into a file, then do the same with mine, and compare the files using diff again.
\item[Q3:] What did you learn?
\item[A3:] I learned that in order to do well on these assignments you need to work very iteratively and make sure you understand the underlying problem. You also need to become very familiar with the tools you're using for the given task. As I mentioned in my challenges, reading the man pages a little more carefully would have saved me a lot of time running into issues. I also found that you can't sit too long staring at a single problem, and that if you're stuck, you should try finding something else to work on until you've got a fresh mind to take a look at the old problem you were having. In one case, I could not get utime to work for some reason and was stopped on it for a while. After finishing the rest of another command, I went back to fix utime and it was working because of another bug I had fixed along the way. Finally, I learned that asking for help isn't a terrible thing. Sometimes it's easy to forget small things that you understand but aren't perceiving correctly, and someone from the outside might be able to immediately pick up on what you're doing wrong. For example, when trying to set file permissions back onto the file, I was getting the wrong permissions set. With help from a friend I realized my issue was that I wasn't setting the mode back to octal.
\end{description}
\end{document}
{ "alphanum_fraction": 0.7824929765, "avg_line_length": 43.076433121, "ext": "tex", "hexsha": "8a9d81f34b9b210b4cacf8ba25306cba15383af2", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-10-20T18:21:27.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-19T21:12:20.000Z", "max_forks_repo_head_hexsha": "a062bed34c1db2a710f71061a97b18b2479964d3", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "chancez/ar", "max_forks_repo_path": "writeup.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a062bed34c1db2a710f71061a97b18b2479964d3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "chancez/ar", "max_issues_repo_path": "writeup.tex", "max_line_length": 87, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a062bed34c1db2a710f71061a97b18b2479964d3", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "chancez/ar", "max_stars_repo_path": "writeup.tex", "max_stars_repo_stars_event_max_datetime": "2016-06-19T21:12:25.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-19T21:12:25.000Z", "num_tokens": 1618, "size": 6763 }
\documentclass[letterpaper]{article} \usepackage[utf8]{inputenc} \usepackage{fullpage} \usepackage{nopageno} \usepackage{amsmath} \usepackage{amssymb} \allowdisplaybreaks \newcommand{\abs}[1]{\left\lvert #1 \right\rvert}
\begin{document}
\title{Notes} \date{January 23, 2015} \maketitle

$E$ is measurable if $\dots$

The irrational numbers are measurable because the rationals have measure 0 and the complement of a measurable set is measurable.

\section*{today}
explain why any open interval is measurable

if $E_1$ and $E_2$ are disjoint and measurable then $m*(A\cap(E_1\cup E_2))=m*(A\cap E_1)+m*(A\cap E_2)$ for any $A$

$E_1,E_2,E_3,\dots,E_n$ disjoint and measurable then $m*(A\cap(\bigcup\limits_{i=1}^nE_i))=\sum\limits_{i=1}^n{m*(A\cap E_i)}$

\subsection*{theorem}
if $\{E_i\}_{i=1}^\infty$ are measurable, countable then
\begin{enumerate}
\item $\bigcup\limits_{i=1}^\infty E_i$ measurable
\item $\bigcap\limits_{i=1}^\infty E_i$ measurable
\item $m*(\bigcup\limits_{i=1}^\infty E_i)\le\sum\limits_{i=1}^\infty{m*(E_i)}$
\item if $E_i\cap E_j=\emptyset$ for all $i,j$ $m*(\bigcup\limits_{i=1}^\infty E_i)=\sum\limits_{i=1}^\infty{m*(E_i)}$
\end{enumerate}
\subsubsection*{proof}
{\bfseries case 1} $E_i\cap E_j=\emptyset, i\ne j$

$F_n=\bigcup\limits_{i=1}^n E_i$

we know that $F_n$ is measurable so $m*(A\cap F_n)=\sum\limits_{i=1}^n{m*(A\cap E_i)}$

$m*(A)=m*(A\cap F_n)+m*(A\cap F_n^C)$ for all $n$
\begin{align*}
m*(A)&=m*(A\cap F_n)+m*(A\cap F_n^C)\\
&\ge m*(A\cap F_n)+m*(A\cap(\bigcup_{i=1}^\infty E_i)^C)\\
\intertext{since $A\cap F_n^C$ contains $A\cap(\bigcup_{i=1}^\infty E_i)^C$}
&= \sum\limits_{i=1}^n{m*(A\cap E_i)}+m*(A\cap(\bigcup_{i=1}^\infty E_i)^C)\\
m*(A)&\ge \sum\limits_{i=1}^n{m*(A\cap E_i)}+m*(A\cap(\bigcup_{i=1}^\infty E_i)^C)\\
&\ge m*(A\cap \bigcup_{i=1}^\infty E_i)+m*(A\cap(\bigcup_{i=1}^\infty E_i)^C)
\end{align*}
{\bfseries case 2} not disjoint

$E_1=\Omega_1$ and $E_2\setminus\Omega_1=\Omega_2$ and $E_3\setminus(\Omega_1\cup\Omega_2)=\Omega_3$

$\{E_i\}_{i=1}^\infty\to \{\Omega_i\}_{i=1}^\infty$ and $\Omega_i\cap\Omega_j=\emptyset\ \forall i\ne j$
\begin{enumerate}
\item $\bigcup\limits_{i=1}^\infty E_i=\bigcup\limits_{i=1}^\infty \Omega_i$
\item $\Omega_i$ is measurable ($A\setminus B=A\cap B^C$)
\item use case one to get $\bigcup\limits_{i=1}^\infty \Omega_i$ is measurable and imply that $\bigcup\limits_{i=1}^\infty E_i$ is measurable
\end{enumerate}
\begin{enumerate}
\item union of countable measurable sets is measurable
\item $\{E_i\}_{i=1}^\infty$ with $E_i$ measurable $\bigcap_{i=1}^\infty E_i=\left(\bigcup\limits_{i=1}^\infty(E_i)^C\right)^C$
\item we already knew that $m*(\bigcup\limits_{i=1}^\infty E_i)\le \sum\limits_{i=1}^\infty{m*(E_i)}$
\item $\{E_i\}_{i=1}^\infty$ disjoint and measurable $m*(\bigcup_{i=1}^\infty E_i)$
\begin{enumerate}
\item $m*(\bigcup\limits_{i=1}^\infty E_i)\le \sum\limits_{i=1}^\infty{m*(E_i)}$

$m*(\bigcup\limits_{i=1}^n E_i)= \sum\limits_{i=1}^n{m*(E_i)}$

$m*(\bigcup_{i=1}^\infty E_i)\ge m*(\bigcup\limits_{i=1}^n E_i)= \sum\limits_{i=1}^n{m*(E_i)}$ and then $m*(\bigcup_{i=1}^\infty E_i)\ge \sum\limits_{i=1}^\infty{m*(E_i)}$, so equality holds
\end{enumerate}
\end{enumerate}
\subsection*{theorem}
any open set is measurable
\subsubsection*{proof}
based on homework we know that $(a,\infty)$ is measurable
\begin{enumerate}
\item need to show that $[b,\infty)$ is measurable for any $b$

$[b,\infty)=\bigcap_{n=1}^\infty(b-\frac{1}{n},\infty)$
\item $(-\infty,c)$ is measurable because it's the complement of $[c,\infty)$
\item $(a,d)$ is measurable for any $a<d$

$(a,d)=(a,\infty)\cap(-\infty,d)$
\item any
open set $\mathbb{O}$ is measurable

$\mathbb{O}=\bigcup_{i=1}^\infty(a_i,b_i)$ by Lindelöf's theorem, so any closed set is also measurable
\end{enumerate}

The Cantor set is the intersection of a countable number of closed sets.

\subsection*{proposition}
if $E_i\supseteq E_{i+1}$ in $\{E_i\}_{i=1}^\infty$ and $m*(E_1)$ is finite then $m*(\bigcap_{i=1}^\infty E_i)=\lim_{n\to\infty}m*(E_n)$

$\{[n,n+1)\}_{n=1}^\infty$ will not work, why?

$\bigcap_{i=1}^\infty E_i\subseteq E_n$ for all $n$. $m*(\bigcap_{i=1}^\infty E_i)\le m*(E_n)$ for all $n$
\end{document}
{ "alphanum_fraction": 0.6716453867, "avg_line_length": 29.027972028, "ext": "tex", "hexsha": "4e0809b55ddccec8ecbd1fe9631e4d7c906830de", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "ylixir/school", "max_forks_repo_path": "ra2/ra2-notes-2015-01-23.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "ylixir/school", "max_issues_repo_path": "ra2/ra2-notes-2015-01-23.tex", "max_line_length": 137, "max_stars_count": null, "max_stars_repo_head_hexsha": "66d433f2090b6396c8dd2a53a733c25dbe7bc90f", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "ylixir/school", "max_stars_repo_path": "ra2/ra2-notes-2015-01-23.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1757, "size": 4151 }
\section{\module{warnings} --- Warning control} \declaremodule{standard}{warnings} \modulesynopsis{Issue warning messages and control their disposition.} \index{warnings} \versionadded{2.1} Warning messages are typically issued in situations where it is useful to alert the user of some condition in a program, where that condition (normally) doesn't warrant raising an exception and terminating the program. For example, one might want to issue a warning when a program uses an obsolete module. Python programmers issue warnings by calling the \function{warn()} function defined in this module. (C programmers use \cfunction{PyErr_Warn()}; see the \citetitle[../api/exceptionHandling.html]{Python/C API Reference Manual} for details). Warning messages are normally written to \code{sys.stderr}, but their disposition can be changed flexibly, from ignoring all warnings to turning them into exceptions. The disposition of warnings can vary based on the warning category (see below), the text of the warning message, and the source location where it is issued. Repetitions of a particular warning for the same source location are typically suppressed. There are two stages in warning control: first, each time a warning is issued, a determination is made whether a message should be issued or not; next, if a message is to be issued, it is formatted and printed using a user-settable hook. The determination whether to issue a warning message is controlled by the warning filter, which is a sequence of matching rules and actions. Rules can be added to the filter by calling \function{filterwarnings()} and reset to its default state by calling \function{resetwarnings()}. The printing of warning messages is done by calling \function{showwarning()}, which may be overridden; the default implementation of this function formats the message by calling \function{formatwarning()}, which is also available for use by custom implementations. \subsection{Warning Categories \label{warning-categories}} There are a number of built-in exceptions that represent warning categories. This categorization is useful to be able to filter out groups of warnings. The following warnings category classes are currently defined: \begin{tableii}{l|l}{exception}{Class}{Description} \lineii{Warning}{This is the base class of all warning category classes. It is a subclass of \exception{Exception}.} \lineii{UserWarning}{The default category for \function{warn()}.} \lineii{DeprecationWarning}{Base category for warnings about deprecated features.} \lineii{SyntaxWarning}{Base category for warnings about dubious syntactic features.} \lineii{RuntimeWarning}{Base category for warnings about dubious runtime features.} \lineii{FutureWarning}{Base category for warnings about constructs that will change semantically in the future.} \end{tableii} While these are technically built-in exceptions, they are documented here, because conceptually they belong to the warnings mechanism. User code can define additional warning categories by subclassing one of the standard warning categories. A warning category must always be a subclass of the \exception{Warning} class. \subsection{The Warnings Filter \label{warning-filter}} The warnings filter controls whether warnings are ignored, displayed, or turned into errors (raising an exception). 
Conceptually, the warnings filter maintains an ordered list of filter specifications; any specific warning is matched against each filter specification in the list in turn until a match is found; the match determines the disposition of the match. Each entry is a tuple of the form (\var{action}, \var{message}, \var{category}, \var{module}, \var{lineno}), where: \begin{itemize} \item \var{action} is one of the following strings: \begin{tableii}{l|l}{code}{Value}{Disposition} \lineii{"error"}{turn matching warnings into exceptions} \lineii{"ignore"}{never print matching warnings} \lineii{"always"}{always print matching warnings} \lineii{"default"}{print the first occurrence of matching warnings for each location where the warning is issued} \lineii{"module"}{print the first occurrence of matching warnings for each module where the warning is issued} \lineii{"once"}{print only the first occurrence of matching warnings, regardless of location} \end{tableii} \item \var{message} is a string containing a regular expression that the warning message must match (the match is compiled to always be case-insensitive) \item \var{category} is a class (a subclass of \exception{Warning}) of which the warning category must be a subclass in order to match \item \var{module} is a string containing a regular expression that the module name must match (the match is compiled to be case-sensitive) \item \var{lineno} is an integer that the line number where the warning occurred must match, or \code{0} to match all line numbers \end{itemize} Since the \exception{Warning} class is derived from the built-in \exception{Exception} class, to turn a warning into an error we simply raise \code{category(message)}. The warnings filter is initialized by \programopt{-W} options passed to the Python interpreter command line. The interpreter saves the arguments for all \programopt{-W} options without interpretation in \code{sys.warnoptions}; the \module{warnings} module parses these when it is first imported (invalid options are ignored, after printing a message to \code{sys.stderr}). \subsection{Available Functions \label{warning-functions}} \begin{funcdesc}{warn}{message\optional{, category\optional{, stacklevel}}} Issue a warning, or maybe ignore it or raise an exception. The \var{category} argument, if given, must be a warning category class (see above); it defaults to \exception{UserWarning}. Alternatively \var{message} can be a \exception{Warning} instance, in which case \var{category} will be ignored and \code{message.__class__} will be used. In this case the message text will be \code{str(message)}. This function raises an exception if the particular warning issued is changed into an error by the warnings filter see above. The \var{stacklevel} argument can be used by wrapper functions written in Python, like this: \begin{verbatim} def deprecation(message): warnings.warn(message, DeprecationWarning, stacklevel=2) \end{verbatim} This makes the warning refer to \function{deprecation()}'s caller, rather than to the source of \function{deprecation()} itself (since the latter would defeat the purpose of the warning message). \end{funcdesc} \begin{funcdesc}{warn_explicit}{message, category, filename, lineno\optional{, module\optional{, registry}}} This is a low-level interface to the functionality of \function{warn()}, passing in explicitly the message, category, filename and line number, and optionally the module name and the registry (which should be the \code{__warningregistry__} dictionary of the module). 
The module name defaults to the filename with \code{.py} stripped; if no registry is passed, the warning is never suppressed. \var{message} must be a string and \var{category} a subclass of \exception{Warning} or \var{message} may be a \exception{Warning} instance, in which case \var{category} will be ignored. \end{funcdesc} \begin{funcdesc}{showwarning}{message, category, filename, lineno\optional{, file}} Write a warning to a file. The default implementation calls \code{formatwarning(\var{message}, \var{category}, \var{filename}, \var{lineno})} and writes the resulting string to \var{file}, which defaults to \code{sys.stderr}. You may replace this function with an alternative implementation by assigning to \code{warnings.showwarning}. \end{funcdesc} \begin{funcdesc}{formatwarning}{message, category, filename, lineno} Format a warning the standard way. This returns a string which may contain embedded newlines and ends in a newline. \end{funcdesc} \begin{funcdesc}{filterwarnings}{action\optional{, message\optional{, category\optional{, module\optional{, lineno\optional{, append}}}}}} Insert an entry into the list of warnings filters. The entry is inserted at the front by default; if \var{append} is true, it is inserted at the end. This checks the types of the arguments, compiles the message and module regular expressions, and inserts them as a tuple in front of the warnings filter. Entries inserted later override entries inserted earlier, if both match a particular warning. Omitted arguments default to a value that matches everything. \end{funcdesc} \begin{funcdesc}{resetwarnings}{} Reset the warnings filter. This discards the effect of all previous calls to \function{filterwarnings()}, including that of the \programopt{-W} command line options. \end{funcdesc}
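The following short example (added here for illustration; it is not part of the original module documentation) shows the filter and \function{warn()} working together:

\begin{verbatim}
import warnings

def deprecated_api():
    warnings.warn("deprecated_api() is obsolete", DeprecationWarning)

# Report every occurrence instead of only the first one per location:
warnings.filterwarnings("always", category=DeprecationWarning)
deprecated_api()          # the message is written to sys.stderr

# Turn matching warnings into errors; this entry is inserted at the
# front of the filter, so it overrides the previous one:
warnings.filterwarnings("error", message=".*obsolete.*")
try:
    deprecated_api()
except DeprecationWarning:
    pass                  # the warning was raised as an exception
\end{verbatim}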
{ "alphanum_fraction": 0.7782677521, "avg_line_length": 40.7557603687, "ext": "tex", "hexsha": "8655451c13a5ebde0285318565ff4e244f8d31f5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f493d5415b662e99a73d017bcafe2148c5bc8fb5", "max_forks_repo_licenses": [ "PSF-2.0" ], "max_forks_repo_name": "deadsnakes/python2.4", "max_forks_repo_path": "Doc/lib/libwarnings.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f493d5415b662e99a73d017bcafe2148c5bc8fb5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "PSF-2.0" ], "max_issues_repo_name": "deadsnakes/python2.4", "max_issues_repo_path": "Doc/lib/libwarnings.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "f493d5415b662e99a73d017bcafe2148c5bc8fb5", "max_stars_repo_licenses": [ "PSF-2.0" ], "max_stars_repo_name": "deadsnakes/python2.4", "max_stars_repo_path": "Doc/lib/libwarnings.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2034, "size": 8844 }
\section{Verified Parallelization of Monoid Morphisms}\label{sec:parallelization} A monoid morphism is a function between two monoids which preserves the monoidal structure; \ie a function on the underlying sets which preserves identity and associativity. We formally specify this definition using a refinement type @Morphism@. % \begin{definition}[Monoid Morphism]\label{definition:morphism} A function @f :: n -> m@ is a morphism between the monoids (@m@, @$\epsilon$@, @<>@) and (@n@, @$\eta$@, @<+>@), if @Morphism n m f@ has an inhabitant. \begin{code} type Morphism n m F = x:n -> y:n -> {F eta = epsilon && F (x <+> y) = F x <> F y} \end{code} \end{definition} A monoid morphism can be parallelized when its domain can be cut into chunks and put back together again, a property we refer to as chunkable and expand upon in \S~\ref{subsec:chunkable}. A chunkable monoid morphism is then parallelized by: \begin{itemize} \item chunking up the input, \item applying the morphism in parallel to all chunks, and \item recombining the chunks, also in parallel, back to a single value. \end{itemize} In the rest of this section we implement and verify to be correct the above transformation. \subsection{Chunkable Monoids}\label{subsec:chunkable} \begin{definition}[Chunkable Monoids]\label{definition:chunkable} A monoid (@m@, @epsilon@, @<>@) is chunkable if the following four functions are defined on @m@. \begin{code} length_m :: m -> Nat drop_m :: i:Nat -> x:MGEq m i -> MEq m (length_m x - i) take_m :: i:Nat -> x:MGEq m i -> MEq m i takeDropProp_m :: i:Nat -> x:m -> {x = take_m i x <> drop_m i x} \end{code} Where the type aliases @MLeq m I@ (and @MEq m I@) constrain the monoid @m@ to have @length_m@ greater than (resp. equal) to @I@. \begin{code} type MGEq m I = {x:m | I <= length_m x} type MEq m I = {x:m | I = length_m x} \end{code} \end{definition} Note that the ``important'' methods of chunkable monoids are the @take@ and @drop@, while the @length@ method is required to give pre- and post-condition on the other operations. % Finally, @takeDropProp@ provides a proof that for each @i@ and monoid @x@, appending @take i x@ to @drop i x@ will reconstruct @x@. Using @take_m@ and @drop_m@ we define for each chunkable monoid (@m@, @epsilon@, @<>@) a function @chunk_m i x@ that splits @x@ in chunks of size @i@. \begin{code} chunk_m :: i:Pos -> x:m -> {v:L m | chunkRes_m i x v} chunk_m i x | length_m x <= i = C x N | otherwise = take_m i x `C` chunk_m i (drop_m i x) chunkRes_m i x v | length_m x <= i = length_m v == 1 | i == 1 = length_m v == length_m xs | otherwise = length_m v < length_m xs \end{code} % The function @chunk_m@ provably terminates as @drop_m i x@ will return a monoid smaller than @x@, by the Definition of @drop_m@. % The definitions of both @take_m@ and @drop_m@ are also used from Liquid Haskell to verify the @length_m@ constraints in the result of @chunk_m@. \ignore{ As a concrete example, to define list chunking, we first define the @take@ and @drop@ methods on the list monoid of section~\ref{sec:haskell-proofs}. % \begin{code} take i N = N take i (C x xs) | i == 0 = N | otherwise = C x (take (i-1) xs) drop i N = N drop i (C x xs) | i == 0 = C x xs | otherwise = drop (i-1) xs \end{code} We can prove that the above definitions combined with the @length@ of section~\ref{sec:haskell-proofs} satisfy the specifications of the Chunkable Monoid Definition~\ref{definition:chunkable}. 
% Thus, we can prove that the aforementioned list data type, extended with the appropriate implementation for @takeDropProp@ is a chunkable monoid. } \subsection{Parallel Map} We define a parallelized map function @pmap@ using Haskell's library @parallel@. % Concretely, we use the function @Control.Parallel.Strategies.withStrategy@ that computes its argument in parallel given a parallel strategy. \begin{code} pmap :: (a -> b) -> L a -> L b pmap f xs = withStrategy parStrategy (map f xs) \end{code} % The strategy @parStrategy@ does not affect verification. % In our codebase we choose the traversable strategy. \begin{code} parStrategy :: Strategy (L a) parStrategy = parTraversable rseq \end{code} \mypara{Parallelism in the Logic} The function @withStrategy@ is an imported Haskell library function, whose implementation is not available during verification. % To use it in our verified code, we make the \textit{assumption} that it always returns its second argument. \begin{code} assume withStrategy :: Strategy a -> x:a -> {v:a | v = x} \end{code} % Moreover, we need to reflect the function @pmap@ and represent its implementation in the logic. % Thus, we also need to represent the function @withStrategy@ in the logic. % LiquidHaskell represents @withStrategy@ in the logic as a logical function that merely returns its second argument, @withStrategy _ x = x@, and does not reason about parallelism. \subsection{Monoidal Concatenation}\label{subsec:mconcat} The function @chunk_m@ allows chunking a monoidal value into several pieces. Dually, for any monoid @m@, there is a standard way of turning @L m@ back into a single @m@~\footnote{\texttt{mconcat} is usually defined as \texttt{foldr mappend mempty}}. \begin{code} mconcat :: L m -> m mconcat N = mempty mconcat (C x xs) = x <> mconcat xs \end{code} % For any chunkable monoid @n@, % monoid morphism @f :: n -> m@, % and natural number @i > 0@ % we can write a chunked version of @f@ as \begin{code} mconcat . pmap f . chunk_n i :: n -> m. \end{code} Before parallelizing @mconcat@, we will prove that the previous function is equivalent to @f@. \begin{theorem}[Morphism Distribution]\label{theorem:monoid:distribution} Let (@m@, @$\epsilon$@, @<>@) be a monoid and (@n@, @$\eta$@, @<+>@) be a chunkable monoid. % Then, for every morphism @f :: n -> m@, every positive number @i@ and input @x@, @f x = mconcat (pmap f (chunk_n i x))@ holds. % \begin{code} morphismDistribution :: f:(n -> m) -> Morphism n m f -> x:n -> i:Pos -> {f x = mconcat (pmap f (chunk_n i x))} \end{code} \end{theorem} \begin{proof} We prove the theorem by implementing @morphismDistribution@ in a way that satisfies its type. % The proof proceeds by induction on the length of the input. % \begin{code} morphismDistribution f thm x i | length_n x <= i = mconcat (pmap f (chunk_n i x)) =. mconcat (map f (chunk_n i x)) =. mconcat (map f (C x N)) =. mconcat (f x `C` map f N) =. f is <> mconcat N =. f is <> epsilon =. f is ? idRight_m (f is) ** QED morphismDistribution f thm x i = mconcat (pmap f (chunk_n i x)) =. mconcat (map f (chunk_n i x)) =. mconcat (map f (C takeX) (chunk_n i dropX))) =. mconcat (f takeX `C` map f (chunk_n n dropX)) =. f takeX <> f dropX ? morphismDistribution f thm dropX i =. f (takeX <+> dropX) ? thm takeX dropX =. f x ? takeDropProp_n i x ** QED where dropX = drop_n i x takeX = take_n i x \end{code} % In the base case we use rewriting and right identity on the monoid @f x@. 
%
In the inductive case, we use the inductive hypothesis on the input @dropX = drop_n i x@, which is provably smaller than @x@ as @1 < i@.
%
Then, since @f@ is a monoid morphism, as encoded by our assumption argument @thm takeX dropX@, we get the basic distribution of @f@, that is @f takeX <> f dropX = f (takeX <+> dropX)@.
%
Finally, we merge @takeX <+> dropX@ into @x@ using the property @takeDropProp_n@ of the chunkable monoid @n@.
\cqed\end{proof}

\subsection{Parallel Monoidal Concatenation}\label{subsec:pmconcat}
%
We now parallelize the monoid concatenation by defining a @pmconcat i x@ function that chunks the input list of monoids and concatenates each chunk in parallel.
We use the @chunk@ function of \S~\ref{subsec:chunkable} instantiated to @L m@ to define a parallelized version of monoid concatenation, @pmconcat@.
\begin{code}
pmconcat :: Int -> L m -> m
pmconcat i x | i <= 1 || length x <= i = mconcat x
pmconcat i x = pmconcat i (pmap mconcat (chunk i x))
\end{code}
The function @pmconcat i x@ calls @mconcat x@ in the base case; otherwise it (1) chunks the list @x@ into lists of size @i@, (2) runs @mconcat@ on each chunk in parallel, and (3) recursively runs itself on the resulting list.
%
Termination of @pmconcat@ holds, as the length of @chunk i x@ is smaller than the length of @x@ when @1 < i@.

Next, we prove equivalence of the parallelized monoid concatenation.
%
\begin{theorem}[Correctness of Parallelization]\label{theorem:equivalence:concat}
Let (@m@, @$\epsilon$@, @<>@) be a monoid.
Then, the parallel and sequential concatenations are equivalent.
\begin{code}
pmconcatEq :: i:Int -> x:L m -> {pmconcat i x = mconcat x}
\end{code}
\end{theorem}
\begin{proof}
We prove the theorem by providing a Haskell implementation of @pmconcatEq@ that satisfies its type.
%
The details of the proof can be found in~\cite{implementation}; here we provide a sketch of the proof.

First, we prove that @mconcat@ distributes over list splitting
\begin{code}
mconcatSplit :: i:Nat -> xs:{L m | i <= length xs}
  -> {mconcat xs = mconcat (take i xs) <> mconcat (drop i xs)}
\end{code}
%
The proof proceeds by structural induction, using monoid left identity in the base case, and monoid associativity and unfolding of the @take@ and @drop@ methods in the inductive step.

We generalize the above to prove that @mconcat@ distributes over list chunking.
\begin{code}
mconcatChunk :: i:Pos -> xs:L m
  -> {mconcat xs = mconcat (map mconcat (chunk i xs))}
\end{code}
%
The proof proceeds by structural induction, using monoid left identity in the base case and lemma @mconcatSplit@ in the inductive step.

Lemma @mconcatChunk@ is sufficient to prove @pmconcatEq@ by structural induction, using monoid left identity in the base case.
\cqed\end{proof}

\subsection{Parallel Monoid Morphism}\label{subsec:both-levels}

We can now replace the @mconcat@ in our chunked monoid morphism in \S~\ref{subsec:mconcat} with @pmconcat@ from \S~\ref{subsec:pmconcat} to provide an implementation that uses parallelism both to map the monoid morphism and to concatenate the results.
%\paragraph{Correctness} of our parallel monoid morphism follows from Theorems~\ref{theorem:monoid:distribution} and~\ref{theorem:equivalence:concat}.
%
\begin{theorem}[Correctness of Parallelization]\label{theorem:two-level}
Let (@m@, @$\epsilon$@, @<>@) be a monoid and (@n@, @$\eta$@, @<+>@) be a chunkable monoid.
%
Then, for every morphism @f :: n -> m@, all positive numbers @i@ and @j@, and every input @x@, @f x = pmconcat i (pmap f (chunk_n j x))@ holds.
%
\begin{code}
parallelismEq :: f:(n -> m) -> Morphism n m f
  -> x:n -> i:Pos -> j:Pos
  -> {f x = pmconcat i (pmap f (chunk_n j x))}
\end{code}
\end{theorem}
\begin{proof}
We prove the theorem by providing an implementation of @parallelismEq@ that satisfies its type.
%
\begin{code}
parallelismEq f thm x i j
  =   pmconcat i (pmap f (chunk_n j x))
  =.  mconcat (pmap f (chunk_n j x))
      ? pmconcatEq i (pmap f (chunk_n j x))
  =.  f x ? morphismDistribution f thm x j
  **  QED
\end{code}
The proof follows merely by application of the two previous Theorems~\ref{theorem:monoid:distribution} and~\ref{theorem:equivalence:concat}.
\cqed\end{proof}

\mypara{A Basic Time Complexity} analysis of the algorithm reveals that parallelization of morphisms leads to runtime speedups on monoids with a fast (constant-time) append operator.
We want to compare the complexities of the sequential @f x@ and the two-level parallel @pmconcat i (pmap f (chunk_n j x))@.
%
Let $n$ be the size of the input @x@. Then, the sequential version runs in time $T_f(n) = O(n)$, that is, the time complexity of the morphism @f@ on the input @x@.
%
The parallel version runs @f@ on inputs of size $n' = \frac{n}{j}$.
%
Assuming the complexity of @x <> y@ to be $T_\mappend(\text{max}(|\tx|, |\ty|))$, the complexity of @mconcat xs@ is $O((\texttt{length \txs}-1) T_\mappend(\text{max}_{\tx_i \in \txs}(|\tx_i|)))$.
%
Now, the parallel concatenation @pmconcat i xs@ at each iteration runs @mappend@ on lists of size @i@. Moreover, at each iteration it divides the input list into chunks of size @i@, leading to $\frac{\log|xs|}{\log i}$ iterations, and time complexity $(i-1)(\frac{\log|xs|}{\log i})(T_\mappend(m))$ for some $m$ that bounds the size of the monoids.

The time complexity of the parallel algorithm consists of the base cost of running @f@ on each chunk and then concatenating the $\frac{n}{j}$ chunks in parallel.
\begin{equation}
O((i-1)(\frac{\log n - \log j}{\log i})T_\mappend(m) + T_f(\frac{n}{j}))
\label{eq:complexity}
\end{equation}
%
Since the overall time complexity depends on the time complexity of @<>@, for the parallel algorithm to be efficient the time complexity of @<>@ should be constant.
%
Otherwise, if it depends on the size of the input, the size of the monoids can grow at each iteration of @mconcat@.
%
Moreover, from the complexity analysis we observe that the runtime grows with bigger @i@ and smaller @j@.
%
Thus, chunking the input into small chunks while splitting the monoid list in half leads to more parallelism, and thus (assuming infinite processors and no caching) the greatest speedup.
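For a concrete feel of the bound, consider the following illustrative specialization (added here; it is not part of the original analysis): if @<>@ runs in constant time, so $T_\mappend(m) = O(1)$, the morphism is linear, $T_f(n) = O(n)$, and we fix $i = 2$, then the bound (\ref{eq:complexity}) reduces to
\[
O\left((\log n - \log j) + \frac{n}{j}\right),
\]
i.e.\ a logarithmic number of parallel concatenation rounds plus the per-chunk cost of @f@.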
{ "alphanum_fraction": 0.6972230517, "avg_line_length": 34.7948051948, "ext": "tex", "hexsha": "7b82af1815ebf7357416ba8bd6a30385da4c3900", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_forks_event_min_datetime": "2016-12-02T00:46:51.000Z", "max_forks_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "nikivazou/thesis", "max_forks_repo_path": "text/stringmatcher/morphisms.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "nikivazou/thesis", "max_issues_repo_path": "text/stringmatcher/morphisms.tex", "max_line_length": 150, "max_stars_count": 11, "max_stars_repo_head_hexsha": "a12f2e857a358e3cc08b657bb6b029ac2d500c3b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "nikivazou/thesis", "max_stars_repo_path": "text/stringmatcher/morphisms.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-20T07:04:01.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-02T00:46:41.000Z", "num_tokens": 4027, "size": 13396 }
\documentclass[a4paper]{scrartcl}
\usepackage[utf8]{inputenc} \usepackage[english]{babel} \usepackage{pgfplots} \usepackage{amsmath, enumerate, amssymb, multirow, fancyhdr, color, graphicx, lastpage, listings, tikz, pdflscape, subfigure, float, polynom, hyperref, tabularx, forloop, geometry, listings, fancybox, tikz, forest, tabstackengine, cancel, bbm}
\input kvmacros
\geometry{a4paper,left=3cm, right=3cm, top=3cm, bottom=3cm}
\pagestyle {fancy}
\begin{document}

We assume $\sigma$ is the adjacency matrix of an undirected graph, so $\sigma_{ij} = \sigma_{ji}$, with $\sigma_{ij} = 1$ if there is an edge between $i$ and $j$ and $\sigma_{ij} = 0$ if there is no edge between $i$ and $j$. For simplicity we assume that nodes are not self-connected, that is $\sigma_{ii} = 0$.

\section*{(a)}
{ \color{green} TODO: Note the $\leq$ instead of the $=$ in the following. This propagates through all formulas below! }

For $M$ to be a matching, it needs to hold by definition that
$$\forall i: \sum_j \sigma_{ij} \leq 1$$
This is equivalent to saying
$$ \prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]} = 1 $$
where $\mathbbm{1}$ is the indicator function. So we find the probability distribution
\begin{align}
P(M) = \frac{1}{Z}\prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]}
\end{align}
where $Z$ is the normalization, which is the count of all matchings
\begin{align}
Z = \sum_{M \in \mathcal{G}} \mathbbm{1}_{\left[ \prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]} = 1 \right]}
\end{align}
Here $\mathcal{G}$ is the set of all subsets of the graph.

\section*{(b)}
The size of a matching $\lvert M \rvert$ is the count of its edges. With the adjacency matrix $\sigma$ that is the sum of all its elements divided by two:
$$ \lvert M \rvert = \frac{\sum_{i} \sum_{j} \sigma_{ij}}{2}$$
We simply introduce the size of a matching $\lvert M \rvert$ as a scaling factor in the probability distribution, to give larger weight to larger matchings:
\begin{align}
P(M) = \frac{1}{Z} \lvert M \rvert \prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]}
\end{align}
which has the new normalization factor
\begin{align}
Z = \sum_{M \in \mathcal{G}} \lvert M \rvert \cdot \mathbbm{1}_{\left[ \prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]} = 1 \right]}
\end{align}
% We have the size $S = M=\sum_{ij} \sigma^{ij}$ and the normalized version $S^*=\frac{M}{\lvert E \rvert}$.
{\color{red} TODO I feel like there would be a way to integrate the scalar $\lvert M \rvert$ into the factor (in the product) but I can't find it.
(Nevermind i don't think so anymore, with clarification 2 we don't need it anyways) } \section*{(c)} The Graph is given as: \begin{tikzpicture} [scale=1,auto=left,every node/.style={circle,draw}] \node (n1) at (1,3) {1}; \node (n2) at (3,3) {2}; \node (n3) at (4,2) {3}; \node (n4) at (3,1) {4}; \node (n5) at (1,1) {5}; \node (n6) at (0,2) {6}; \foreach \from/\to in {n1/n2,n1/n3,n2/n3,n1/n4,n2/n5,n3/n6} \draw (\from) -- (\to); \end{tikzpicture} An ordered drawing of this graph is: \begin{tikzpicture} [scale=1,auto=left,every node/.style={circle,draw}] \node (n1) at (1,4) {1}; \node (n2) at (3,3) {2}; \node (n3) at (5,4) {3}; \node (n4) at (1,1) {4}; \node (n5) at (3,1) {5}; \node (n6) at (5,1) {6}; \foreach \from/\to in {n1/n2,n1/n3,n2/n3,n1/n4,n2/n5,n3/n6} \draw (\from) -- (\to); \end{tikzpicture} In general the probability distribution found above is given as \begin{align} P(M) = \frac{1}{Z}\prod_i \mathbbm{1}_{\left[\sum_j \sigma_{ij} \leq 1\right]} \end{align} Which we can write as \begin{align} P(M) = \frac{1}{Z} \prod_i \psi_i(\partial M) \end{align} with factors \begin{align} \psi_i(\partial M) = \mathbbm{1}_{\left[ \sum_j \sigma_{ij} \leq 1 \right]} \end{align} By $\partial M$ we denote, that each factor $\psi_i$ is only connected to all nodes $\sigma_{ij}$ for which there is a connection from $i$ to $j$, so for all nodes for which $\sigma_{ij} = 1$ in the initial graph $G$. In general to draw a factor graph we would start with a graph with all $\sigma_{ij}$ connected to all factors $\psi_i$. Then we can omit all $\sigma_{ij}$ where $\sigma_{ij} = 0$ in the initial graph $G$. Furthermore we can omit all connections from factors $\psi_i$ to $\sigma_{ij}$ if $i$ and $j$ are not connected. For the graph given here this yields the factor graph: \begin{tikzpicture} [scale=1,auto=left,every node/.style={circle,draw}] \node (s12) at (4,6) {$\sigma_{12}$}; \node (s13) at (4,5) {$\sigma_{13}$}; \node (s23) at (4,4) {$\sigma_{23}$}; \node (s14) at (4,3) {$\sigma_{14}$}; \node (s25) at (4,2) {$\sigma_{25}$}; \node (s36) at (4,1) {$\sigma_{36}$}; \node[rectangle] (f1) at (1,6) {$\psi_{1}$}; \node[rectangle] (f2) at (1,5) {$\psi_{2}$}; \node[rectangle] (f3) at (1,4) {$\psi_{3}$}; \node[rectangle] (f4) at (1,3) {$\psi_{4}$}; \node[rectangle] (f5) at (1,2) {$\psi_{5}$}; \node[rectangle] (f6) at (1,1) {$\psi_{6}$}; \foreach \from/\to in {f1/s12,f1/s13,f1/s14,f2/s12,f2/s23,f2/s25,f3/s13,f3/s23,f3/s36,f4/s14,f5/s25,f6/s36} \draw (\from) -- (\to); \end{tikzpicture} {\color{red} TODO Is this also so simple for the probability distribution scaled by matching size $\lvert M \rvert$? (Nevermind, is saw Clarification 2) } \section*{(d)} The general definitions are given as: \begin{align} \nu_{i \rightarrow a}^{(t + 1)}(x_i) &\cong \prod_{b \in \partial i \backslash a} \hat{\nu}_{b \rightarrow i}^{(t)}(x_i) \\ \hat{\nu}_{a \rightarrow i}^{(t)}(x_i) &\cong \sum_{\textbf{x}_{\partial a} \backslash i} \psi_a (\textbf{x}_{\partial a}) \prod_{j \in \partial a \backslash i} \nu_{j \rightarrow a}^{(t)}(x_j) \\ P(x_i) &= \nu_i^{(t)}(x_i) \cong \prod_{a \in \partial i} \hat{\nu}_{a \rightarrow i}^{(t-1)}(x_i) \end{align} In our case we have messages from variable to factor: \begin{align} \nu_{\sigma_{ij}}^{(ij) \rightarrow i} &\cong \prod_{b \in \partial (ij) \backslash i} \hat{\nu}_{\sigma_{ij}}^{b \rightarrow (ij)} \\ &= \hat{\nu}_{\sigma_{ij}}^{j \rightarrow (ij)} \end{align} This simplification comes, since variable $(ij)$ only has one other neighbouring factor except $i$, which is $j$. 
\\
And messages from factor to variable:
\begin{align}
\hat{\nu}_{a \rightarrow i}^{(t)}(x_i) &\cong \sum_{\textbf{x}_{\partial a} \backslash i} \psi_a (\textbf{x}_{\partial a}) \prod_{j \in \partial a \backslash i} \nu_{j \rightarrow a}^{(t)}(x_j)
\end{align}
For our problem this leads to the messages from factor to variable:
\begin{align}
\hat{\nu}_{\sigma_{ij}}^{i \rightarrow (ij)} &\cong \sum_{\boldsymbol{\sigma}_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{il} \leq 1 \right]} \prod_{(ik) \in \partial i \backslash (ij)} \nu_{\sigma_{ik}}^{(ik) \rightarrow i}
\end{align}
{ \color{green} Note here that the sum is over all the values that the $\sigma$ variables neighbouring $i$ could assume. }

The marginal is:
\begin{align}
P(\sigma_{ij}) &\cong \prod_{b \in \partial (ij)} \hat{\nu}_{\sigma_{ij}}^{b \rightarrow (ij)} \\
&= \hat{\nu}_{\sigma_{ij}}^{i \rightarrow (ij)} \cdot \hat{\nu}_{\sigma_{ij}}^{j \rightarrow (ij)}
\end{align}
{ \color{green} TODO: I don't know if we have to write something for the second hint, but this would be the following? }

In the following we find some derivations for the adjacent edges if $\sigma_{ij} = 1$ and $\sigma_{ij} = 0$:

For $\sigma_{ij} = 1$ this leads to
\begin{align}
P(\sigma_{ij} = 1) &\cong \hat{\nu}_{\sigma_{ij}}^{i \rightarrow (ij)} \cdot \hat{\nu}_{\sigma_{ij}}^{j \rightarrow (ij)} \\
&= \left[ \sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{il} \leq 1 \right]} \prod_{(ik) \in \partial i \backslash (ij)} \nu_{\sigma_{ik}}^{(ik) \rightarrow i} \right] \cdot \left[ \sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{jl} \leq 1 \right]} \prod_{(jk) \in \partial j \backslash (ij)} \nu_{\sigma_{jk}}^{(jk) \rightarrow j} \right]
\end{align}
which can only be positive if
\begin{align}
\sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{il} \leq 1 \right]} &= 1 \\
\sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_{l \backslash (l = j)} \sigma_{il} + 1 \leq 1 \right]} &= 1
\end{align}
and
\begin{align}
\sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{jl} \leq 1 \right]} &= 1 \\
\sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_{l \backslash (l = i)} \sigma_{jl} + 1 \leq 1 \right]} &= 1
\end{align}
This is saying that all edges adjacent to $i$ and $j$ except the edge $(ij)$ have to be zero, meaning non-existent.
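As a quick numerical sanity check of these conditions (a brute-force enumeration added for illustration only; it is not part of the assignment solution and uses the part (a) distribution with the $\leq$ convention), one can list all matchings of the example graph from part (c):

\begin{verbatim}
from itertools import combinations

# Edge list of the example graph from part (c); vertices are 1..6.
edges = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 5), (3, 6)]

def is_matching(subset):
    # Every vertex may be covered by at most one chosen edge.
    covered = [v for e in subset for v in e]
    return len(covered) == len(set(covered))

matchings = [m for r in range(len(edges) + 1)
             for m in combinations(edges, r) if is_matching(m)]

Z = len(matchings)                              # normalisation from part (a)
p_12 = sum((1, 2) in m for m in matchings) / Z  # marginal P(sigma_12 = 1)
print(Z, p_12)
\end{verbatim}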
For $\sigma_{ij} = 0$ we find with similar reasoning:
\begin{align}
P(\sigma_{ij} = 0) &\cong \hat{\nu}_{\sigma_{ij}}^{i \rightarrow (ij)} \cdot \hat{\nu}_{\sigma_{ij}}^{j \rightarrow (ij)} \\
&= \left[ \sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{il} \leq 1 \right]} \prod_{(ik) \in \partial i \backslash (ij)} \nu_{\sigma_{ik}}^{(ik) \rightarrow i} \right] \cdot \left[ \sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{jl} \leq 1 \right]} \prod_{(jk) \in \partial j \backslash (ij)} \nu_{\sigma_{jk}}^{(jk) \rightarrow j} \right]
\end{align}
which can only be positive if
\begin{align}
\sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{il} \leq 1 \right]} &= 1 \\
\sum_{\sigma_{\partial i} \backslash (ij)} \mathbbm{1}_{\left[\sum_{l \backslash (l = j)} \sigma_{il} + 0 \leq 1 \right]} &= 1
\end{align}
and
\begin{align}
\sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_l \sigma_{jl} \leq 1 \right]} &= 1 \\
\sum_{\sigma_{\partial j} \backslash (ij)} \mathbbm{1}_{\left[\sum_{l \backslash (l = i)} \sigma_{jl} + 0 \leq 1 \right]} &= 1
\end{align}
This is saying that at most one of the edges adjacent to $i$ other than $(ij)$ may be 1, and likewise at most one of the edges adjacent to $j$ other than $(ij)$ may be 1.
If we use $c_i$ and $c_j$ as the counts of edges adjacent to $i$ and $j$ other than the edge $(ij)$ respectively, we can say that this allows $c_i+1$ configurations for the edges adjacent to $i$ and $c_j + 1$ configurations for the edges adjacent to $j$ (any single one may be 1, or all of them are 0).

These derivations are intuitively consistent with the definition of a matching.

{ \color{green} TODO: I don't quite get the following; from the reasoning it makes sense, but I don't see how it follows from the formulas.

For $\sigma_{ij}=1$, this leads to:
\begin{align}
P(\sigma_{ij}=1) = \prod_{\sigma_{ik} \in \partial a \backslash i} v_{\sigma_{ik} \rightarrow i}(0)
\end{align}
which intuitively makes sense, since all the others have to be 0 in order for $\sigma_{ij}$ to be 1.\\
For $\sigma_{ij}=0$, this leads to:
\begin{align}
P(\sigma_{ij}=0) = \sum_{\sigma_{ik} \in \partial i \backslash \sigma_{ij}} (\mathbbm{1}^{(\sum_{l \backslash i} \sigma_{il} = 1)} \prod_{\sigma_{ik} \in \partial a \backslash i} v_{\sigma_{ik} \rightarrow i}(\sigma_{ik}))
\end{align}
Here the main difference is that the factor $\Psi_{i}$ doesn't sum over $\sigma_{ij}$ anymore, since it is set to 0.
}

{ \color{red} We have the messages from variable to factor:
\begin{align}
\hat{v}_{i \rightarrow \alpha}(x_i) = \prod_{\beta \in \partial \alpha \backslash i} \hat{v}_{b \rightarrow i}(x_i)
\end{align}
and factor to variable:
\begin{align}
\hat{v}_{\alpha \rightarrow i}(x_i) = \sum_{x \in \partial \alpha \backslash i} (\Psi_a(X_{\partial a}) \prod_{j \in \partial a \backslash i} v_{j \rightarrow a}(x_j))
\end{align}
The general BP marginal is:
\begin{align}
P(x_i) = v_i(x_i) = \prod_{a \in \partial i} \hat{v}_{\alpha \rightarrow i}(x_i)
\end{align}
For our problem, this leads to the message from factor to variable:
\begin{align}
\hat{v}_{i \rightarrow \sigma_{ij}}(\sigma_{ij}) &= \sum_{\sigma_{ik} \in \partial i \backslash \sigma_{ij}} (\mathbbm{1}^{(\sum_l \sigma_{il} = 1)} \prod_{\sigma_{ik} \in \partial a \backslash i} v_{\sigma_{ik} \rightarrow i}(\sigma_{ik}))
\end{align}
This means the marginal is:
\begin{align}
P(\sigma_{ij}) = \hat{v}_{i \rightarrow \sigma_{ij}}(\sigma_{ij}) \cdot \hat{v}_{j \rightarrow \sigma_{ij}}(\sigma_{ij})
\end{align}
For $\sigma_{ij}=1$, this leads to:
\begin{align}
P(\sigma_{ij}=1) = \prod_{\sigma_{ik} \in \partial a \backslash i} v_{\sigma_{ik} \rightarrow i}(0)
\end{align}
which intuitively makes sense, since all the others have to be 0 in order for $\sigma_{ij}$ to be 1.\\
For $\sigma_{ij}=0$, this leads to:
\begin{align}
P(\sigma_{ij}=0) = \sum_{\sigma_{ik} \in \partial i \backslash \sigma_{ij}} (\mathbbm{1}^{(\sum_{l \backslash i} \sigma_{il} = 1)} \prod_{\sigma_{ik} \in \partial a \backslash i} v_{\sigma_{ik} \rightarrow i}(\sigma_{ik}))
\end{align}
Here the main difference is that the factor $\Psi_{i}$ doesn't sum over $\sigma_{ij}$ anymore, since it is set to 0.
}

\section*{(e)}
{ \color{green} TODO: To get the full marginal we would need to do the full message passing. For the equations the following suffices: }

For the one-point marginal, see (c):
\begin{align}
P(\sigma_{ij}) &\cong \hat{\nu}_{\sigma_{ij}}^{i \rightarrow (ij)} \cdot \hat{\nu}_{\sigma_{ij}}^{j \rightarrow (ij)}
\end{align}
For the two-point marginal, we have:
\begin{align}
P(\sigma_{ij}, \sigma_{kl}) = P(\sigma_{kl} \vert \sigma_{ij}) P(\sigma_{ij})
\end{align}
In addition to the one-point marginal we need to find $P(\sigma_{kl} \vert \sigma_{ij})$, which is defined by:
\begin{align}
P(\sigma_{kl} \vert \sigma_{ij} = x) &\cong \prod_{a} \psi_a(\boldsymbol{\sigma}_{\partial a}) \mathbbm{1}_{\left[ \sigma_{ij} = x \right]} \\
&= \prod_{a} \mathbbm{1}_{\left[ \sum_{l} \sigma_{al} \leq 1 \right]} \mathbbm{1}_{\left[ \sigma_{ij} = x \right]}
\end{align}
That is, we add to the original factor graph a node of degree 1 connected to the variable node $\sigma_{ij}$ which fixes $\sigma_{ij} = x$, and run BP on this modified factor graph.
\end{document}
{ "alphanum_fraction": 0.6450759159, "avg_line_length": 40.3314606742, "ext": "tex", "hexsha": "4f67579514acdb98f3a2dc820670e44d47a433f9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e2e3ccfe393f47b1e78ea481e59d98cb784b1f1e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "philippvb/APMLA-WS_21-22_material", "max_forks_repo_path": "Assignment/assignment.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e2e3ccfe393f47b1e78ea481e59d98cb784b1f1e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "philippvb/APMLA-WS_21-22_material", "max_issues_repo_path": "Assignment/assignment.tex", "max_line_length": 318, "max_stars_count": null, "max_stars_repo_head_hexsha": "e2e3ccfe393f47b1e78ea481e59d98cb784b1f1e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "philippvb/APMLA-WS_21-22_material", "max_stars_repo_path": "Assignment/assignment.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5176, "size": 14358 }
\documentclass{article} \usepackage{algpseudocode} \title{The Cavity Operator} \author{Dan Ibanez} \begin{document} \date{May 15, 2014} \maketitle \section{Overview} The Cavity Operator system, or CavityOp, is a framework for many of the operations carried out in field computation and mesh adaptation. The fundamental characteristic of a cavity operator is that it queries and modifies a local cavity around a given mesh entity. The CavityOp system exists to allow parallel applications to write their cavity-based logic in a serial mindset. The CavityOp runtime system then uses migration to provide local cavities to the user's operator for all entities. \section{User Operator} The operator provided by the user should define routines for the following: \begin{enumerate} \item Determining whether the operator should apply to an entity \item Building the cavity based on adjacencies while marking which entities need to be local \item Applying the operator to a cavity \end{enumerate} It should also follow these rules: \begin{enumerate} \item Building the cavity should not modify anything \item Only local entities should be modified, and only cavity entities should be queried. \item If upward adjacencies of an entity are added to the cavity, that entity must first be local \end{enumerate} A mesh entity is local when it has no remote copies. Given other assumptions about the mesh, this also means that all its upward adjacencies are fully retrievable. The user then gives this operator object containing the definitions to a routine that will apply the operator to every mesh entity of a given dimension. So far all operators target a particular dimension, but any iteration over the mesh fits this algorithm. The job of the CavityOp application routine is to use migration to deal with cavities that would otherwise be split by a partition boundary. \section{User Example} To illustrate what a cavity operator would do, we can use the simple example of averaging per-element values into per-vertex values. In an error estimator, each element receives a scalar target size, and the size field value at a vertex is the average of its adjacent elements' target sizes. A user operator for this would define the three functions described earlier. To determine whether it should apply to a vertex, it would check whether the size value has already been computed and attached to the vertex. To build the cavity, it must first insist that the vertex be local, then gather the upward adjacent elements of the vertex. To apply the operator, it queries the element values and attaches their average to the vertex. Notice that the vertex's locality is enforced before querying upward adjacencies or modifying its field data. \section{Pull Requests} The first concept towards understanding CavityOp is the pull request. This is a part requesting that a mesh entity be made local to it. In order to make an entity local, all its upward adjacent elements must be on the same part. So a pull request is really asking for all adjacent elements to move to the requesting part. Loosely, the cavity operator system is just trying to satisfy pull requests until there are no more. 
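To make the three user-supplied routines concrete before describing the application loop, here is a schematic sketch of the size-averaging operator from the User Example section (Python-style pseudocode; the class and mesh methods are invented for illustration and are not the actual CavityOp or PUMI interface):

\begin{verbatim}
class SizeAveragingOperator:
    # Illustrative cavity operator: averages per-element target sizes
    # onto a vertex.  The mesh methods used here are invented for the
    # sake of the sketch.

    def should_apply(self, mesh, vertex):
        # Apply only where no size value has been attached yet.
        return not mesh.has_field(vertex, "size")

    def build_cavity(self, mesh, vertex):
        # The vertex must be local before its upward adjacencies are used.
        if not mesh.is_local(vertex):
            return None              # locality requirement not met
        return mesh.upward_elements(vertex)

    def apply(self, mesh, vertex, cavity):
        sizes = [mesh.get_field(e, "target_size") for e in cavity]
        mesh.set_field(vertex, "size", sum(sizes) / len(sizes))
\end{verbatim}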
Given an algorithm that is guaranteed to make some progress on those pull requests, the application function will repeat these steps until completion:
\begin{enumerate}
\item apply the operator to all valid local cavities
\item identify partitioned cavities and submit pull requests
\item try to satisfy pull requests
\end{enumerate}
More specifically, here is pseudo-code for the application of a cavity operator:
\begin{algorithmic}
\Function{apply}{$d,o$}
\State let $d$ be the dimension of target entities
\State let $o$ be the operator
\Loop
\For {$e \in M\{M^d\}$ of the local part}
\If {$e$ is owned and $o$ should apply to $e$}
\State try to build cavity around $e$
\If {locality requirements were met}
\State apply $o$ to $e$
\EndIf
\EndIf
\EndFor
\For {$e \in M\{M^d\}$ of the local part}
\If {$e$ is owned and $o$ should apply to $e$}
\State try to build cavity around $e$
\If {some entities that need to be local are not}
\State submit pull requests for them
\EndIf
\EndIf
\EndFor
\If {no pull requests were submitted}
\Return
\EndIf
\State try to satisfy pull requests
\EndLoop
\EndFunction
\end{algorithmic}

\section{Satisfying Pull Requests}

A pull request for an entity is satisfied if all the entity's adjacent elements are migrated to one part.
This naturally creates conflicts when two or more pull requests try to move an element to different parts.
\emph{When this happens, the element is migrated to the part with the largest integer ID.}
As long as this conflict resolution rule is a strict ordering of the parts, it guarantees that at least one pull request is satisfied during migration.
In fact, the part with the highest ID that submits pull requests will have them all satisfied, since they win all conflicts.

For an operator that will be applied to a fixed number of mesh entities, as is so far the case, this is guaranteed to terminate since targets are always eliminated after each migration.

\section{Performance}

The performance of this algorithm depends on the ability to satisfy pull requests.
In the end, the problem can be reduced to finding the maximal independent set of a graph where the nodes are cavities and the edges are pull request conflicts.
In practice, the resolution heuristic should give a decent independent set.
This is helped by the fact that operator application is based on ownership, and the PUMI mesh database selects ownership over partition model entities, so there are rarely conflicts in the middle of a partition model face.

For varying cavities such as SPR or mesh-modifying operators, it is even harder to prove anything.
The expectation is that the number of migrations should be a small constant not dependent on mesh size or part count.
Early testing on a 10K region SLAC mesh used 2 migrations to completely perform SPR for 2 parallel parts and 3 migrations for 4 parallel parts.
More performance data is now being gathered and should be available within a few days.
\end{document}
{ "alphanum_fraction": 0.785656987, "avg_line_length": 36.0289017341, "ext": "tex", "hexsha": "57a8815239203c03b4d89edbe3595566804c0e74", "lang": "TeX", "max_forks_count": 70, "max_forks_repo_forks_event_max_datetime": "2022-02-13T04:58:20.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-17T00:58:41.000Z", "max_forks_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "cwsmith/core", "max_forks_repo_path": "apf/cavity.tex", "max_issues_count": 337, "max_issues_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_issues_repo_issues_event_max_datetime": "2022-03-31T14:39:03.000Z", "max_issues_repo_issues_event_min_datetime": "2015-08-07T18:24:58.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "cwsmith/core", "max_issues_repo_path": "apf/cavity.tex", "max_line_length": 83, "max_stars_count": 138, "max_stars_repo_head_hexsha": "840fbf6ec49a63aeaa3945f11ddb224f6055ac9f", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "cwsmith/core", "max_stars_repo_path": "apf/cavity.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-25T01:09:58.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-05T15:50:20.000Z", "num_tokens": 1410, "size": 6233 }
\graphicspath{{Pics/}}
\newpage\subsection{Projective Constructions}

\begin{minipage}{.6\linewidth}
\begin{construction}[Second Intersection of Line with Conic]
Given four points $ A, B, C, D $, no three collinear, and a point $ P $ on a line $ l $ passing through at most one of the four points, construct the point $ P'\in l $ such that $ A, B, C, D, P, P' $ lie on the same conic.
\solu{
Let $ AP\cap BC = X,\ l\cap CD = Y,\ XY\cap AD=Z $. Then by Pascal's Hexagrammum Mysticum Theorem we have $\ \ P' = BZ\cap l $
}
\end{construction}
\end{minipage}\hfill%
\begin{minipage}{.35\linewidth}
\figdf{}{constructing_second_intersection_with_conic}{}
\end{minipage}

\begin{minipage}{.6\linewidth}
\begin{construction}[Conic touching conic]
Given a conic $ \mathcal{C} $, two points $ A, B $ on it, and a point $ C $ inside it, construct the conic $ \mathcal{H} $ that is tangent to $ \mathcal{C} $ at $ A, B $ and passes through $ C $.
\end{construction}
\solu{
Draw the two tangents at $ A,B $, which meet at $ X $. Take an arbitrary line passing through $ X $ that intersects $ AC, BC $ at $ Y, Z $. Take $ D=BY\cap AZ $. Then $ D $ lies on $ \mathcal{H} $ by Pascal. Construct another point $ E $ similarly and draw the conic.
}
\end{minipage}\hfill%
\begin{minipage}{.35\linewidth}
\figdf{.9}{conic_touching_conic}{}
\end{minipage}

\begin{construction}[Inconic of a quadrilateral]
Given a convex quadrilateral $ ABCD $, let $ P=AC\cap BD $, and let $ S\in AD, T\in BC $ be such that $ S, P, T $ are collinear. Construct the conic that touches $ AB, CD $, and also touches $ AD, BC $ at $ S, T $ respectively.
\end{construction}
\solu{[the\textunderscore Construction]
Draw the polar line $ l $ of $ P $ wrt the quadrilateral. Let $ Z=BC\cap l $. Let $ ZS\cap AB = U $, $ ZT\cap CD = V $. Then $ SSUUTTVV $ is our desired conic.
}

\begin{minipage}{.5\linewidth}
\proof{
If $ U, V\in CD, AB $ such that $ UV $ passes through $ P $, and if the conic passing through $ U, V $ and tangent to $ AD, BC $ at $ S, T $ intersects $ CD $ at $ U' $ again, then $ SV, U'T, DB $ are concurrent. So to show our construction works, we just need to prove that $ U, V, P $ are collinear.\\
Since Pascal's theorem works on $ SVBTUD $, we know $ S, V, B, T, U, D $ lie on a conic $ \mathcal{H} $ and $ l $ is the polar of $ P $ wrt $ \mathcal{H} $. Now, applying Pascal's theorem on $ TDVUBS $, and the quadrilateral theorem on $ BTUD $ and $ BVSD $, we have $ ST\cap UV\in AC $, which is $ P $. So we are done.
}
\end{minipage}\hfill%
\begin{minipage}{.45\linewidth}
\figdf{}{conic_touching_quad}{}
\end{minipage}

\begin{minipage}{.5\linewidth}
\begin{construction}[\href{http://www.geometry.ru/persons/beluhov/selected.pdf}{Sharygin Olympiad 2010}]
A conic $ \mathcal{C} $ passing through the vertices of $ \triangle ABC $ is drawn, and three points $ A', B', C' $ on its sides $ BC, CA, AB $ are chosen. Then the original triangle is erased. Prove that the original triangle can be constructed iff $ AA', BB', CC' $ are concurrent.
\end{construction}
\solu{[the\textunderscore Construction]
Draw $ B'C' $. It intersects the conic at $ X_1, X_2 $. Draw the conic $ \mathcal{H} $ that is tangent to $ \mathcal{C} $ at $ X_1, X_2 $ and passes through $ A' $. Then $ BC $ is tangent to $ \mathcal{H} $ at $ A' $.
}
\end{minipage}\hfill%
\begin{minipage}{.45\linewidth}
\figdf{1}{reconstruct_ABC}{}
\end{minipage}

\proof{
The only if part is easy to prove, because if $ AA', BB', CC' $ aren't concurrent, then we can get multiple triangles $ ABC $. So suppose that they are concurrent.
\\
Now we define some intersection points.
\begin{center}
\begin{tabular}{ccccc}
$W_1$ &= &$BB'$ &$\cap$ &$\mathcal{C}$\\
$S$ &= &$X_1X_1$ &$\cap$ &$AW_1$\\
$T$ &= &$X_1B$ &$\cap$ &$AX_2$\\
$U$ &= &$X_1X_1$ &$\cap$ &$BC$\\
$V$ &= &$X_2X_2$ &$\cap$ &$BC$\\
$R$ &= &$X_2X_2$ &$\cap$ &$AW_1$\\
$Y_1$ &= &$A'B'$ &$\cap$ &$SR$
\end{tabular}
\end{center}
$T, S, B'$ are collinear by Pascal's theorem on $ BX_1X_1X_2AW_1 $. $ T, B', V $ are similarly collinear for $ AX_2X_2X_1BC $. And similarly $ R, B', U $ are collinear.\\
We will prove that $ \mathcal{H} $ is an inconic of $ SRVU $ that goes through $ A', X_1, X_2 $. \\
For a point $ X $ on $ UV $, define $ f:UV\to UV $ such that $ f(X) $ is the second intersection of the conic $ X_1X_1X_2X_2X $ ($ X_1X_1 = SU, X_2X_2=RV $) with $ UV $. $ f $ is an involution by \autoref{theorem:Three Conic Law}. \\
Suppose $ A_1 $ is the intersection of $ UV $ with the inconic of $ SRUV $ through $ X_1, X_2 $. Let $ A_2 = X_1X_2\cap UV $. Then $ f(A_1) =A_1, f(A_2)=A_2, f(B)=C $. \\
This means that $A(B, C; A_1, A_2) = -1$, which in turn means $ A_1=A' $. So $ X_1X_1A'X_2X_2 $ is an inconic of $ SRVU $, just as we wanted.
\figdf{.8}{reconstruct_ABC_1}{}
}
\newpage
\begin{construction}[Focus and Directrix of a Parabola]
First draw two parallel chords of the parabola and join their midpoints; this gives a line parallel to the axis. Then draw the axis itself and locate the vertex $P$ of the parabola. Finally, draw the line $f(x) = \frac{x}{2}$ through $P$ (in coordinates where $P$ is the origin and the tangent at $P$ is the $x$-axis), and take the foot of the perpendicular dropped from its second intersection with the parabola onto the axis. That foot is the focus.
\figdf{.5}{parabola_foci_construction}{}
\end{construction}
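As a quick sanity check of the last construction (a worked example of our own, in normalized coordinates, and not part of the original solution): place the vertex $P$ at the origin and write the parabola as $y = ax^2$ with $a > 0$, so that its focus is $\left(0, \tfrac{1}{4a}\right)$. The line $y = \frac{x}{2}$ through $P$ meets the parabola again where
$$ ax^2 = \frac{x}{2} \quad\Longrightarrow\quad x = \frac{1}{2a},\qquad y = \frac{1}{4a}, $$
and the foot of the perpendicular from $\left(\tfrac{1}{2a},\tfrac{1}{4a}\right)$ to the axis (the $y$-axis) is $\left(0,\tfrac{1}{4a}\right)$, which is exactly the focus. In particular, the construction does not depend on the scaling of the parabola.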
{ "alphanum_fraction": 0.5916149068, "avg_line_length": 39.4285714286, "ext": "tex", "hexsha": "2f1bfb8dcdc5d6550feb922d14c95df9b1bfac68", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-09-27T15:19:26.000Z", "max_forks_repo_forks_event_min_datetime": "2020-10-15T08:59:33.000Z", "max_forks_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AnglyPascal/BCS_Question_Bank", "max_forks_repo_path": "geo/sec8_5_projective_construction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AnglyPascal/BCS_Question_Bank", "max_issues_repo_path": "geo/sec8_5_projective_construction.tex", "max_line_length": 92, "max_stars_count": 48, "max_stars_repo_head_hexsha": "83ff9b542999386ea182863e4f25f0b488d3984f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "M-Ahsan-Al-Mahir/BCS_Question_Bank", "max_stars_repo_path": "geo/sec8_5_projective_construction.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-13T19:47:04.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-14T17:15:00.000Z", "num_tokens": 1995, "size": 5796 }
%\section{Entropy and approximation rates}
The presentation in this section follows the paper \cite{siegel2021optimal}.

The recent success of deep learning \cite{lecun2015deep} has spurred a large amount of research into the mathematical foundations of neural networks. In addition, there is a rapidly growing interest in using neural networks as a function class for solving partial differential equations \cite{han2018solving,CiCP-28-1707} and simulating physical systems \cite{raissi2018hidden}. Of particular importance in both of these research directions is the approximation theory of neural networks, specifically the determination of how effectively neural networks can approximate high dimensional functions. Many recent theoretical results indicate that a wide class of functions, especially in high dimensions, can be efficiently approximated by both shallow \cite{wojtowytsch2020representation,ma2019priori,siegel2020approximation} and deep neural networks \cite{yarotsky2017error,lu2020deep,opschoor2020deep,daubechies2019nonlinear,devore2020neural,li2019better}, and that solving PDEs in high dimensions using neural networks is a viable approach \cite{lu2021priori,li2020multipole,luo2020two}.

An important consideration when studying the approximation properties of neural networks, and non-linear approximation in general, is the existence of a stable numerical algorithm which can realize a given approximation rate. This is intimately connected with the metric entropy of the class of functions under consideration, as observed in \cite{cohen2020optimal}. Consequently, calculating the metric entropy of neural network function classes is important for determining the theoretical limitations of using neural networks. In this work, we calculate the metric entropy of the class of functions which can be efficiently approximated by shallow ReLU$^k$ neural networks. This class of functions has been extensively studied in the statistics and machine learning literature \cite{barron1993universal,jones1992simple,klusowski2018approximation}.

We begin by considering a somewhat more general class of functions arising in the study of non-linear approximation by a dictionary of functions $\mathbb{D}\subset H$ in a Hilbert space $H$ \cite{devore1998nonlinear,barron2008approximation}. Let $H$ be a Hilbert space and $\mathbb{D}\subset H$ a dictionary with $\sup_{d\in \mathbb{D}} \|d\|_H = K_\mathbb{D} < \infty$ (note that here a dictionary is simply another name for a subset). We introduce the set
\begin{equation}\label{unit-ball-definition}
B_1(\mathbb{D}) = \overline{\left\{\sum_{j=1}^n a_jh_j:~n\in \mathbb{N},~h_j\in \mathbb{D},~\sum_{i=1}^n|a_i|\leq 1\right\}},
\end{equation}
which is the closure of the convex, symmetric hull of $\mathbb{D}$. Further, we define a norm, $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$, on $H$ given by the gauge (see for instance \cite{rockafellar1970convex}) of $B_1(\mathbb{D})$,
\begin{equation}\label{norm-definition}
\|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\{c > 0:~f\in cB_1(\mathbb{D})\},
\end{equation}
which is defined so that $B_1(\mathbb{D})$ is the unit ball of $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$. We also define the space $\mathcal{K}_1(\mathbb{D})$ by
\begin{equation}\label{space-definition}
\mathcal{K}_1(\mathbb{D}) := \{f\in H:~\|f\|_{\mathcal{K}_1(\mathbb{D})} < \infty\}.
\end{equation}
This norm has been introduced in different forms in the literature \cite{devore1998nonlinear,kurkova2001bounds,kurkova2002comparison,barron2008approximation}.
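To unpack these definitions, it may help to record a simple example of our own (it is not taken from \cite{siegel2021optimal} and is included only for orientation). If $\mathbb{D} = \{e_i\}_{i\in \mathbb{N}}$ is an orthonormal basis of $H$, then the convex, symmetric hull of $\mathbb{D}$ consists of the finitely supported elements of the $\ell^1$ unit ball of coefficients, and taking the closure in $H$ gives
\begin{equation*}
B_1(\mathbb{D}) = \Big\{f\in H:~\sum_{i=1}^\infty |\langle f, e_i\rangle| \leq 1\Big\},\qquad \|f\|_{\mathcal{K}_1(\mathbb{D})} = \sum_{i=1}^\infty |\langle f, e_i\rangle|.
\end{equation*}
In this case $\mathcal{K}_1(\mathbb{D})$ is the (dense, non-closed) subspace of $H$ consisting of elements with absolutely summable coefficients. The dictionaries considered below are highly redundant rather than orthonormal, but the example already illustrates how much smaller than $H$ the space $\mathcal{K}_1(\mathbb{D})$ can be.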
The notation and definition we use here was introduced in \cite{devore1998nonlinear}, where a more general $\mathcal{K}_\tau(\mathbb{D})$ space is considered for $0<\tau\leq \infty$. We restrict ourselves to the case $\tau = 1$, which is the most important space for general dictionaries. In Section \ref{spectral-barron-section}, we discuss the properties of the $\mathcal{K}_1(\mathbb{D})$ space in more detail and compare with previously introduced notions, such as the Barron space introduced in \cite{ma2019barron}.

The significance of the space $\mathcal{K}_1(\mathbb{D})$ is in connection with approximation from the set of $\ell^1$-bounded $n$-term linear combinations of dictionary elements,
\begin{equation}
\Sigma_{n,M}(\mathbb{D}) = \left\{\sum_{j=1}^n a_jh_j:~h_j\in \mathbb{D},~\sum_{i=1}^n|a_i|\leq M\right\},
\end{equation}
where the coefficients $a_i$ are taken as either real or complex depending upon whether $H$ is a real or complex Hilbert space. A classical result by Maurey \cite{pisier1981remarques} (see also \cite{jones1992simple,barron1993universal,devore1998nonlinear}) is the following approximation rate for functions $f\in \mathcal{K}_1(\mathbb{D})$,
\begin{equation}\label{fundamental-bound}
\inf_{f_n\in \Sigma_{n,M}(\mathbb{D})} \|f - f_n\|_H \leq K_\mathbb{D}\|f\|_{\mathcal{K}_1(\mathbb{D})}n^{-\frac{1}{2}},
\end{equation}
where the bound $M$ can be taken as $M = \|f\|_{\mathcal{K}_1(\mathbb{D})}$. An equivalent formulation of this result, which is sometimes more convenient, is that for $f\in B_1(\mathbb{D})$ we have
\begin{equation}
\inf_{f_n\in \Sigma_{n,1}(\mathbb{D})} \|f - f_n\|_H \leq K_\mathbb{D}n^{-\frac{1}{2}}.
\end{equation}

In this work, we are primarily interested in the following two types of dictionaries, which are related to approximation by neural networks. Throughout the paper, we will consider the unit ball $B_1^d := \{x\in \mathbb{R}^d:~|x| \leq 1\}$ of $\mathbb{R}^d$. We remark, however, that the results we obtain generalize in a straightforward manner to any bounded domain $\Omega\subset \mathbb{R}^d$. In particular, the upper bounds transfer to $\Omega$ since $\Omega$ is contained in a ball of sufficiently large radius, and the lower bounds transfer to $\Omega$ since $\Omega$ contains a ball of some sufficiently small positive radius. Thus, in passing to $\Omega$ only the implied constants will change.

The first type of dictionary we will be interested in arises when studying networks with ReLU$^k$ activation function $\sigma_k(x) = \text{ReLU}^k(x) := [\max(0,x)]^k$ (here when $k=0$, we interpret $\sigma_k(x)$ to be the Heaviside function). Consider the dictionary
\begin{equation}\label{relu-k-space-definition}
\mathbb{P}^d_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-2,2]\}\subset L^2(B_1^d),
\end{equation}
where $S^{d-1} = \{\omega\in \mathbb{R}^d:~|\omega| = 1\}$ is the unit sphere. We remark that the constant $2$ above can be replaced by any $c > 1$ to obtain an equivalent norm. In addition, when $k=1$ this norm is equivalent to the Barron norm studied in \cite{ma2019barron,ma2019priori}. We discuss the definition \eqref{relu-k-space-definition} and its relationship with the Barron norm in more detail in Section \ref{spectral-barron-section}.
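For a concrete instance of these definitions in the neural network setting (again a toy example of our own, not taken from the literature cited above), take $d = 1$ and $k = 1$, choose knots $b_1, b_2, \ldots \in [-1,1]$, and consider
\begin{equation*}
f(x) = \sum_{j=1}^\infty 2^{-j}\sigma_1(x - b_j),\qquad x\in B_1^1 = [-1,1].
\end{equation*}
Each summand uses $\omega = 1\in S^0$ and a bias $-b_j\in [-2,2]$, and the partial sums lie in the convex hull of $\mathbb{P}^1_1$, so $f\in B_1(\mathbb{P}^1_1)$ and $\|f\|_{\mathcal{K}_1(\mathbb{P}^1_1)}\leq 1$ even though $f$ may have infinitely many kinks. Maurey's bound \eqref{fundamental-bound} then guarantees that $f$ can be approximated in $L^2([-1,1])$ by a shallow ReLU network with $n$ hidden neurons and $\ell^1$-bounded coefficients to accuracy at most $K_{\mathbb{P}^1_1}\, n^{-\frac{1}{2}}$.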
The relationship with ReLU$^k$ networks arises because
\begin{equation}
\Sigma_{n,M}(\mathbb{P}^d_k) = \left\{\sum_{j=1}^n a_j\sigma_k(\omega_j \cdot x + b_j):~\omega_j\in S^{d-1},~|b_j| \leq 2,~\sum_{i=1}^n|a_i|\leq M\right\}
\end{equation}
is the set of shallow ReLU$^k$ neural networks with bounded coefficients and $n$ hidden neurons.

The second type of dictionary is the spectral dictionary of order $s \geq 0$, given by
\begin{equation}
\mathbb{F}^d_s = \{(1+|\omega|)^{-s}e^{2\pi {\mathrm{i}\mkern1mu} \omega\cdot x}:~\omega\in \mathbb{R}^d\}\subset L^2(B_1^d).
\end{equation}
For this dictionary the space $\mathcal{K}_1(\mathbb{F}^d_s)$ can be completely characterized in terms of the Fourier transform. In particular,
\begin{equation}\label{barron-integral-condition}
\|f\|_{\mathcal{K}_1(\mathbb{F}^d_s)} = \inf_{f_e|_{B_1^d}= f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi,
\end{equation}
where the infimum is taken over all extensions $f_e\in L^1(\mathbb{R}^d)$. For reference, we provide a detailed proof of this result in Section \ref{spectral-barron-section}.

The connection with ReLU$^k$ neural networks is due to the fact that $\mathcal{K}_1(\mathbb{F}^d_{k+1})\subset \mathcal{K}_1(\mathbb{P}^d_k)$, which was first observed in the case $k=0$ in \cite{barron1993universal}, in the case $k=1,2$ in \cite{klusowski2018approximation}, and extended to $k > 2$ in \cite{CiCP-28-1707}. Thus the integral condition \eqref{barron-integral-condition} defines a subspace of $\mathcal{K}_1(\mathbb{P}^d_k)$ which can be characterized via the Fourier transform. However, we remark that the inclusion here is strict \cite{wojtowytsch2020representation}, a point which we will come back to later.

Next, we recall the notion of metric entropy first introduced by Kolmogorov \cite{kolmogorov1958linear}. The (dyadic) entropy numbers $\epsilon_n(A)_H$ of a set $A\subset H$ are defined by
\begin{equation}
\epsilon_n(A)_H = \inf\{\epsilon > 0:~\text{$A$ is covered by $2^n$ balls of radius $\epsilon$}\}.
\end{equation}
Roughly speaking, the entropy numbers indicate how precisely we can specify elements of $A$ given $n$ bits of information. It is not necessary for the space $H$ to be a Hilbert space, although that is the case we will be interested in here; see for instance \cite{lorentz1996constructive}, Chapter 15, for the general theory.

Our main contribution is to calculate the entropy numbers of the unit balls $B_1(\mathbb{P}_k^d)$ and $B_1(\mathbb{F}_s^d)$ in $H = L^2(B_1^d)$. These are given in the following theorem.
\begin{theorem}
Let $k \geq 0$, $s > 0$ and $H = L^2(B_1^d)$. Then
\begin{equation}\label{metric-entropy-rates}
\epsilon_n(B_1(\mathbb{P}_k^d))_H \eqsim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~\epsilon_n(B_1(\mathbb{F}_s^d))_H \eqsim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}}.
\end{equation}
\end{theorem}
The estimates given here are weak equivalences, i.e. we have
\begin{equation}
C_1n^{-\frac{1}{2} - \frac{2k+1}{2d}} \leq \epsilon_n(B_1(\mathbb{P}_k^d))_H \leq C_2n^{-\frac{1}{2} - \frac{2k+1}{2d}},
\end{equation}
for some constants $C_1 = C_1(k,d)$ and $C_2 = C_2(k,d)$, and an equivalent statement holds for $\epsilon_n(B_1(\mathbb{F}_s^d))$. (Generally, throughout this manuscript, we will use the notation $X\lesssim Y$ to mean that $X\leq CY$ for some constant $C$, $X\gtrsim Y$ to mean that $X \geq cY$ for some constant $c$, and $X\eqsim Y$ to mean that $X\gtrsim Y$ and $X\lesssim Y$.
Moreover, if the constants may depend on a small number of parameters, these will be indicated as subscripts of the corresponding symbol. For dependence upon many parameters, the dependence (or independence) will be indicated in the text.)

Let us discuss some consequences of these metric entropy rates. The first consequence concerns approximation rates from $\Sigma_{n,M}(\mathbb{P}^d_k)$, with sufficiently large, but fixed, $M$ (i.e. the $\ell^1$-norm of the coefficients $a_j$ is kept bounded). An important result, first observed by Makovoz \cite{makovoz1996random}, is that for certain dictionaries the rate in \eqref{fundamental-bound} can be improved. In particular, for the dictionary $\mathbb{P}^d_0$ corresponding to neural networks with Heaviside activation function, Makovoz showed that for $f\in B_1(\mathbb{P}^d_0)$
\begin{equation}\label{makovoz-original}
\inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_0)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_d n^{-\frac{1}{2}-\frac{1}{2d}}.
\end{equation}
(Note that here and in what follows the implied constant is independent of $n$, and the bound $M$ is fixed and independent of $n$.) Furthermore, improved rates have been obtained for other dictionaries. In particular, in \cite{klusowski2018approximation}, the dictionaries $\mathbb{P}^d_k$ corresponding to neural networks with activation function $\sigma = [\max(0,x)]^k$ are studied for $k=1,2$ and it is shown that for $f\in B_1(\mathbb{P}^d_k)$
\begin{equation}
\inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\frac{1}{d}}.
\end{equation}
This analysis is extended to $k\geq 3$ in \cite{CiCP-28-1707}, where the same approximation rate is attained. This raises the natural question of what the optimal approximation rates for $\Sigma_{n,M}(\mathbb{P}^d_k)$ are. Specifically, for each $k=0,1,2,...$ and dimension $d=2,...$ (the case $d=1$ is comparatively trivial), what is the largest possible value of $\alpha := \alpha(k,d)$ such that for $f\in B_1(\mathbb{P}^d_k)$ we have
\begin{equation}
\inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\alpha(k,d)}.
\end{equation}
The results above imply that $\alpha(k,d) \geq \frac{1}{2d}$ for $k=0$ and $\alpha(k,d) \geq \frac{1}{d}$ for $k > 0$. When $d > 1$, the best available upper bounds on $\alpha(k,d)$ are $\alpha(k,d) \leq \frac{k+1}{d}$ (see \cite{makovoz1996random,klusowski2018approximation}), except in the case $k=0$, $d=2$, where Makovoz obtains the sharp bound $\alpha(0,2) = \frac{1}{4}$ \cite{makovoz1996random}. A consequence of the entropy calculation \eqref{metric-entropy-rates}, specifically the lower bound in Theorem \ref{relu-k-lower-bound-corollary} and the approximation rate in Theorem \ref{relu-k-rate-corollary}, is that $\alpha(k,d) = \frac{2k+1}{2d}$, i.e. that for $f\in B_1(\mathbb{P}_k^d)$, we have the rate
\begin{equation}\label{reluk-approximation-rate}
\inf_{f_n\in \Sigma_{n,M}(\mathbb{P}_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\frac{2k+1}{2d}},
\end{equation}
and that this exponent cannot be improved. This solves the problems posed in \cite{makovoz1996random} and \cite{klusowski2018approximation} for approximation rates in $L^2(B_1^d)$.
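To put the sharp exponent $\alpha(k,d) = \frac{2k+1}{2d}$ into perspective, consider the Barron case $k=1$ in dimension $d=4$ (the following numbers are obtained simply by substituting into the rates quoted above and are included only for illustration): the previously achieved rate $n^{-\frac{1}{2}-\frac{1}{d}}$ equals $n^{-\frac{3}{4}}$, the previously known barrier $n^{-\frac{1}{2}-\frac{k+1}{d}}$ equals $n^{-1}$, and the sharp rate \eqref{reluk-approximation-rate} is
\begin{equation*}
n^{-\frac{1}{2}-\frac{2k+1}{2d}} = n^{-\frac{7}{8}},
\end{equation*}
which lies strictly between the two.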
In particular, it shows that the rate \eqref{makovoz-original} obtained by Makovoz \cite{makovoz1996random} is optimal for all $d \geq 3$, and closes the gap between the best upper and lower bounds obtained in \cite{klusowski2018approximation} for approximation in $L^2(B_1^d)$ by neural networks with ReLU$^k$ activation function.

The second important consequence concerns the more general stable non-linear approximation studied in \cite{cohen2020optimal}, instead of approximation by $\Sigma_{n,M}(\mathbb{P}^d_k)$. In \cite{cohen2020optimal}, for a subset $A\subset H$ and a fixed $\gamma > 0$, approximation schemes are considered which consist of a pair of $\gamma$-Lipschitz functions $a:H\rightarrow \mathbb{R}^n$, $M:\mathbb{R}^n\rightarrow H$. Here, one can think of $a$ as an encoding map and $M$ as a decoding map, which are both required to be Lipschitz. Then the stable manifold $n$-widths are defined as the reconstruction error of the best encoding scheme $a,M$,
\begin{equation}
\delta^*_{n,\gamma}(A)_H = \inf_{a,M} \sup_{x\in A} \|x - M(a(x))\|_H.
\end{equation}
Note that in general we must choose a norm on $\mathbb{R}^n$ as well, but since $H$ is a Hilbert space we may take the Euclidean norm (this follows from the results in \cite{cohen2020optimal}). The main results of \cite{cohen2020optimal} relate the stable manifold $n$-widths $\delta^*_{n,\gamma}(A)_H$ to the entropy numbers $\epsilon_n(A)_H$. Combining this with our calculation of the entropies of $B_1(\mathbb{P}_k^d)$ and $B_1(\mathbb{F}_s^d)$, we are able to calculate the stable manifold $n$-widths of these sets as well. In particular, combining the entropy rates \eqref{metric-entropy-rates} with Theorems 3.3 and 4.1 of \cite{cohen2020optimal}, we get the weak equivalences
\begin{equation}
\delta^*_{n,2}(B_1(\mathbb{P}_k^d))_{H} \eqsim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~ \delta^*_{n,2}(B_1(\mathbb{F}_s^d))_{H} \eqsim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}},
\end{equation}
where $H = L^2(B_1^d)$. Thus, the entropy rates \eqref{metric-entropy-rates} combined with the results of \cite{cohen2020optimal} give the theoretically best possible approximation rate that can be attained for the unit ball $B_1(\mathbb{P}_k^d)$, and thus for the Barron space when $k=1$, using any stable approximation scheme. Combined with the approximation rate \eqref{reluk-approximation-rate}, this shows that no stable approximation scheme can approximate functions $f\in B_1(\mathbb{P}_k^d)$ more efficiently than shallow neural networks.

We note that Carl's inequality \cite{carl1981entropy} can also be used in combination with \eqref{metric-entropy-rates} to derive lower bounds on the Kolmogorov $n$-widths of $B_1(\mathbb{P}_k^d)$ and $B_1(\mathbb{F}_s^d)$. Recall that the Kolmogorov $n$-width of a set $A\subset H$ is given by
\begin{equation}
d_n(A)_H = \inf_{Y_n}\sup_{x\in A}\inf_{y\in Y_n}\|x - y\|_H,
\end{equation}
where the first infimum is over the collection of subspaces $Y_n$ of dimension $n$. Using Carl's inequality, the entropy rates \eqref{metric-entropy-rates} imply the lower bounds
\begin{equation}
d_n(B_1(\mathbb{P}_k^d))_{H} \gtrsim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~ d_n(B_1(\mathbb{F}_s^d))_{H} \gtrsim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}},
\end{equation}
with $H = L^2(B_1^d)$. These results give a lower bound on how effectively the unit balls in these spaces can be approximated by linear methods.
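As a simple illustration of how to read these lower bounds (ours, not from \cite{cohen2020optimal} or \cite{carl1981entropy}): in the Barron space case $k=1$ the first bound becomes
\begin{equation*}
d_n(B_1(\mathbb{P}_1^d))_{H} \gtrsim_{d} n^{-\frac{1}{2} - \frac{3}{2d}},
\end{equation*}
so, for instance, in dimension $d=3$ no sequence of $n$-dimensional linear spaces (such as truncations of a fixed basis expansion) can approximate every element of $B_1(\mathbb{P}_1^3)$ with error decaying faster than order $n^{-1}$, which is exactly the order attained by nonlinear approximation with $n$ ReLU neurons in \eqref{reluk-approximation-rate}.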
Further, the entropy rates \eqref{metric-entropy-rates} allow the comparison of the spaces $\mathcal{K}_1(\mathbb{P}_k^d)$ and $\mathcal{K}_1(\mathbb{F}_s^d)$ with each other and with more traditional function spaces. For instance, it is known that the entropy of the Sobolev unit ball $B(H^r) = \{f\in L^2(B_1^d):~\|f\|_{H^r(B_1^d)} \leq 1\}$ in the space $H = L^2(B_1^d)$ is given by (see \cite{lorentz1996constructive}, Chapter 15)
\begin{equation}
\epsilon_n(B(H^r)) \eqsim_{r,d} n^{-\frac{r}{d}}.
\end{equation}
We observe that for fixed smoothness $r$, the entropy numbers of $B(H^r)$ decay very slowly in high dimensions. This phenomenon is known as the curse of dimensionality, and has the consequence that general high dimensional functions are difficult to approximate accurately. Comparing with the entropy rates \eqref{metric-entropy-rates}, we see that the entropies of $B_1(\mathbb{P}_k^d)$ and $B_1(\mathbb{F}_s^d)$ exhibit a decay rate of at least $O(n^{-\frac{1}{2}})$ regardless of the dimension. In general, to overcome the curse of dimensionality, it is necessary to find low entropy sets of functions which capture the phenomenon to be modelled.

Finally, it is observed in \cite{wojtowytsch2020representation} that the inclusion $\mathcal{K}_1(\mathbb{F}_{k+1})\subset \mathcal{K}_1(\mathbb{P}_k)$ is strict. This leaves open the question of how much larger the Barron space $\mathcal{K}_1(\mathbb{P}_k)$ is, i.e. how much is lost by considering Barron's integral condition \eqref{barron-integral-condition} on the Fourier transform instead of the more natural space $\mathcal{K}_1(\mathbb{P}_k)$. The entropy rates \eqref{metric-entropy-rates} give an answer to this question. In particular, we see that the entropy numbers $\epsilon_n(B_1(\mathbb{F}_{k+1}))$ decay faster by a factor of $n^{-\frac{1}{2d}}$, which, comparing with the entropy of Sobolev balls, is analogous to about half a derivative.

The paper is organized as follows. In Section \ref{spectral-barron-section} we discuss some of the technical subtleties in defining the $\mathcal{K}_1(\mathbb{D})$ spaces and the dictionaries $\mathbb{P}^d_k$. We also give a characterization of $\mathcal{K}_1(\mathbb{F}^d_s)$ in terms of the Fourier transform. Then in Section \ref{main-result-1-section} we give our first main result, which gives approximation rates from $\Sigma_{n,M}(\mathbb{D})$ and upper bounds on the entropy of $B_1(\mathbb{D})$ for dictionaries $\mathbb{D}$ which are parameterized by a compact manifold. We apply this to obtain an upper bound on the entropy numbers of $B_1(\mathbb{P}^d_k)$ and $B_1(\mathbb{F}^d_s)$. In Section \ref{main-result-2-section} we give our second main result, which gives a lower bound on the metric entropy numbers of the convex hull of ridge functions. We use this to obtain matching lower bounds on the entropy numbers of $B_1(\mathbb{P}^d_k)$ and $B_1(\mathbb{F}^d_s)$. Finally, we give some concluding remarks and further research directions.

\section{Properties of the spaces $\mathcal{K}_1(\mathbb{D})$, $\mathcal{K}_1(\mathbb{P}^d_k)$, and $\mathcal{K}_1(\mathbb{F}^d_s)$}\label{spectral-barron-section}
In this section, we derive some fundamental properties of the spaces $\mathcal{K}_1(\mathbb{D})$ for a general dictionary $\mathbb{D}$. Further, we derive fundamental properties of the specific spaces $\mathcal{K}_1(\mathbb{P}_k)$ and their relationship with the Barron space considered in \cite{ma2019barron, wojtowytsch2020representation}.
Finally, we characterize the $\mathcal{K}_1(\mathbb{F}_s)$ spaces in terms of the Fourier transform.
\subsection{Basic Properties of $\mathcal{K}_1(\mathbb{D})$}
We begin with an elementary and well-known lemma concerning the unit ball $B_1(\mathbb{D})$, the norm $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$, and the space $\mathcal{K}_1(\mathbb{D})$. The most important point here is that the space $\mathcal{K}_1(\mathbb{D})$ is a Banach space.
\begin{lemma}\label{fundamental-norm-lemma}
Suppose that $\sup_{d\in \mathbb{D}} \|d\|_H = K_\mathbb{D} < \infty$. Then the $\mathcal{K}_1(\mathbb{D})$ norm satisfies the following properties.
\begin{itemize}
\item $B_1(\mathbb{D}) = \{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\}$
\item $\|f\|_H\leq K_\mathbb{D}\|f\|_{\mathcal{K}_1(\mathbb{D})}$
\item $\mathcal{K}_1(\mathbb{D}) := \{f\in H:~\|f\|_{\mathcal{K}_1(\mathbb{D})} < \infty\}$ is a Banach space with the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm
\end{itemize}
\end{lemma}
\begin{proof}
From definition \eqref{norm-definition} we see that $B_1(\mathbb{D}) \subset \{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\}$ since $c=1$ is admissible in the infimum in \eqref{norm-definition}. For the reverse inclusion, let $\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1$. By \eqref{norm-definition} this means that for every $n$ we must have $f\in (1 + \frac{1}{n})B_1(\mathbb{D})$, or in other words that $f_n = \frac{n}{n+1}f\in B_1(\mathbb{D})$. However, it is clear that $f_n\rightarrow f$ in $H$ and thus since $B_1(\mathbb{D})$ is closed, we have $f\in B_1(\mathbb{D})$. Thus $\{f\in H:\|f\|_{\mathcal{K}_1(\mathbb{D})}\leq 1\} = B_1(\mathbb{D})$, proving the first statement.

For the second statement, note that $\|d\|_H\leq K_\mathbb{D}$ for all $d\in \mathbb{D}$. This immediately implies that $\|f\|_H\leq K_\mathbb{D}$ for all $f\in B_1(\mathbb{D})$, which proves the result by an elementary scaling argument.

Finally, for the third statement we must show that the set $\mathcal{K}_1(\mathbb{D})$ is complete with respect to the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm. So let $\{f_n\}_{n=1}^\infty$ be a Cauchy sequence with respect to the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm. By the second statement, we have $\|f_n - f_m\|_H\leq K_\mathbb{D}\|f_n - f_m\|_{\mathcal{K}_1(\mathbb{D})}$, so that the sequence is Cauchy with respect to the $H$-norm as well. Thus, there exists an $f\in H$ such that $f_n\rightarrow f$ in $H$. We will show that in fact $f_n\rightarrow f$ in the $\mathcal{K}_1(\mathbb{D})$-norm as well (note that this automatically implies that $\|f\|_{\mathcal{K}_1(\mathbb{D})}<\infty$).

To this end, let $\epsilon > 0$ and choose $N$ such that $\|f_n - f_m\|_{\mathcal{K}_1(\mathbb{D})} < \epsilon / 2$ for $n,m \geq N$ ($\{f_n\}$ is Cauchy, so this is possible). In particular, this means that $\|f_N - f_m\|_{\mathcal{K}_1(\mathbb{D})}\leq \epsilon / 2$ for all $m > N$. Now the first statement implies that $f_m - f_N \in (\epsilon / 2)B_1(\mathbb{D})$, or in other words that $f_m \in f_N + (\epsilon / 2)B_1(\mathbb{D})$. Since $f_m\rightarrow f$ in $H$, and $B_1(\mathbb{D})$ is closed in $H$, we get $f\in f_N + (\epsilon / 2)B_1(\mathbb{D})$. Hence $\|f - f_N\|_{\mathcal{K}_1(\mathbb{D})} \leq \epsilon / 2$ and the triangle inequality finally implies that $\|f - f_m\|_{\mathcal{K}_1(\mathbb{D})} \leq \epsilon$ for all $m \geq N$. Thus $f_n\rightarrow f$ in the $\mathcal{K}_1(\mathbb{D})$-norm and $\mathcal{K}_1(\mathbb{D})$ is complete.
\end{proof} % Note that the properties proved in Lemma \ref{fundamental-norm-lemma} depend upon the fact that the unit ball $B_1(\mathbb{D})$ is closed in $H$. If the norm is instead defined differently, for instance using integral representations, we argue that it is important to ensure that the unit ball is closed in $H$ to ensure that the resulting space is well-behaved. For instance, let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $\sigma:\mathbb{R}\rightarrow \mathbb{R}$ an activation function. Following \cite{ma2019barron}, which inspired the present work, define % \begin{equation}\label{E-barron-norm} % \|f\|_{\mathcal{B}^\sigma(\Omega)} = \inf\left\{\int_{S_\sigma} d|\mu|(\omega,b):~f(x)=\int_{S_\sigma} \sigma(\omega\cdot x + b)d\mu(\omega,b)~\text{for}~x\in \Omega\right\}, % \end{equation} % where $S_\sigma\subset \mathbb{R}^d\times\mathbb{R}$ is a subset of parameters defending upon $\sigma$. % % We compare this space to the space $\mathcal{K}_1(\mathbb{D})$ for the dictionary % \begin{equation} % \mathbb{D}^\sigma = \{\sigma(\omega\cdot x + b):~(\omega,b)\in S_\sigma\}\subset L^2(\Omega). % \end{equation} % It is clear that $\{f:\|f\|_{\mathcal{B}^\sigma(\Omega)} \leq 1\}\subset B_1(\mathbb{D}^\sigma)$ which implies that $\|\cdot\|_{\mathcal{K}_1(\mathbb{D}^\sigma)}\leq \|\cdot\|_{\mathcal{B}^\sigma(\Omega)}$ and $\mathcal{B}^\sigma(\Omega)\subset \mathbb{K}_1(\mathbb{D}^\sigma)$. However, if there are elements in the closure $B_1(\mathbb{D})$ which are not given by integral representations of the form in \eqref{E-barron-norm}, the reverse inclusion may not hold, as the following result shows. % % \begin{proposition} % Suppose $\Omega = [-1,1]^d$ and $\sigma$ is a smooth sigmoidal function. Let $S_\sigma = \mathbb{R}^d\times \mathbb{R}$. Then $\mathcal{B}^\sigma(\Omega)\subsetneq \mathbb{K}_1(\mathbb{D}^\sigma)$. % % \end{proposition} % \begin{proof} % Consider the Heaviside function % \begin{equation} % H(\tau\cdot x) = \begin{cases} % 0 & \tau\cdot x\leq 0 \\ % 1 & \tau\cdot x\geq 0. % \end{cases} % \end{equation} % for $\tau\in S^{d-1}:=\{x\in \mathbb{R}^d:~|x| = 1\}$. Since $\sigma$ is sigmoidal, we have % \begin{equation} % \lim_{r\rightarrow \infty}\|H(\tau\cdot x) - \sigma(r\tau\cdot x)\|_{L^2(\Omega)} = 0. % \end{equation} % Thus $H(\tau\cdot x)\in B_1(\mathbb{D})$. However, since $\sigma$ is smooth, the discontinuous function $H(\tau\cdot x)$ cannot have an integral representation of the form \eqref{E-barron-norm} and so $H(\tau\cdot x)\notin \mathcal{B}^\sigma(\Omega)$. % % \end{proof} % It is also not clear whether the space $\mathcal{B}^\sigma$ satisfies the conclusions of Lemma \ref{fundamental-norm-lemma} in general. Despite these issues for general activation functions $\sigma$, for specific activation functions such as the rectified linear unit, which are of primary interest in \cite{ma2019barron}, the space $\mathcal{B}^\sigma$ may in fact be bettter behaved. % We begin by noting that the definition \eqref{unit-ball-definition} of $B_1(\mathbb{D})$ contains a closure in $H$ instead of being written in terms of an infemum over integral representations \cite{ma2019barron} or representations by finite sums \cite{barron2008approximation}. This follows the approach taking previously in the literature \cite{devore1998nonlinear,kurkova2001bounds,kurkova2002comparison} and can result in a larger space for some dictionaries. As an example, consider the following situation. 
%
% Suppose that $\Omega\subset \mathbb{R}^d$ is bounded and $\sigma$ is a smooth sigmoidal function. Consider the dictionary
% \begin{equation}
% \mathbb{D}_{\sigma} = \{\sigma(\omega\cdot x + b):~\omega\in \mathbb{R}^d,~b\in \mathbb{R}\}.
% \end{equation}
%
% By taking $\omega\rightarrow \infty$, we easily see that $B_1(\mathbb{P}_0)\subset B_1(\mathbb{D}_{\sigma})$ (this is the essence of the argument by Barron \cite{barron1993universal}). However, the discontinuous Heaviside function cannot be written as an integral representation of the smooth dictionary elements in $\mathbb{D}_\sigma$. Consequently the definition in terms of integral representations given in \cite{ma2019barron} would fail to capture such functions.

Let us remark that for some dictionaries $\mathbb{D}$ the $\mathcal{K}_1(\mathbb{D})$ space can sometimes be substantially smaller than $H$. In fact, if the dictionary $\mathbb{D}$ is contained in a closed subspace of $H$, then we have the following elementary result.
\begin{lemma}\label{subspace-lemma}
Let $K\subset H$ be a closed subspace of $H$. Then $\mathbb{D}\subset K$ iff $\mathcal{K}_1(\mathbb{D})\subset K$.
\end{lemma}
\begin{proof}
We have $\mathbb{D}\subset\mathcal{K}_1(\mathbb{D})$ so that the reverse implication is trivial. For the forward implication, since $\mathbb{D}\subset K$ and $K$ is closed, it follows that $B_1(\mathbb{D})\subset K$. Then, from the definition \eqref{norm-definition}, it follows that
\begin{equation}
\mathcal{K}_1(\mathbb{D}) = \bigcup_{r > 0} rB_1(\mathbb{D})\subset K.
\end{equation}
\end{proof}
A simple example of this occurs when considering a shallow neural network with activation function $\sigma$ which is a polynomial of degree $k$. In this case the space $\mathcal{K}_1(\mathbb{D})$ is contained in the finite-dimensional space of polynomials of degree at most $k$, and the $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ norm is infinite on non-polynomial functions. This is related to the well-known result that neural network functions are dense iff the activation function is not a polynomial \cite{leshno1993multilayer}.
\begin{proposition}
Let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $\mathbb{D} = \{\sigma(\omega\cdot x + b):(\omega,b)\in \mathbb{R}^d\times \mathbb{R}\}\subset L^2(\Omega)$, where the activation function $\sigma\in L^\infty_{loc}(\mathbb{R})$. Suppose further that the set of discontinuities of $\sigma$ has Lebesgue measure $0$. Then $\mathcal{K}_1(\mathbb{D})$ is finite dimensional iff $\sigma$ is a polynomial (a.e.).
\end{proposition}
\begin{proof}
If $\sigma$ is a polynomial, $\mathbb{D}$ is contained in the space of polynomials of degree at most $\text{deg}(\sigma)$, which is finite dimensional. This implies the result by Lemma \ref{subspace-lemma}.

For the reverse implication, we use Theorem 1 of \cite{leshno1993multilayer}, which states that if $\sigma$ is not a polynomial, then
$$
C(\Omega) \subset \overline{\left\{\sum_{i=1}^na_i\sigma(\omega_i\cdot x + b_i)\right\}},
$$
where the closure is taken in $L^\infty(\Omega)$ (note that this cumbersome statement is necessary since $\sigma$ may not be continuous). This immediately implies that $\mathcal{K}_1(\mathbb{D})$ is dense in $L^2(\Omega)$ (since $C(\Omega)$ is dense in $L^2(\Omega)$), and thus obviously not finite dimensional.
\end{proof}
Next, we note that Maurey's approximation rate has a converse.
In particular, if a function can be approximated by elements from $\Sigma_{n,M}(\mathbb{D})$ with fixed $M$, then it must be in the space $\mathcal{K}_1(\mathbb{D})$. More precisely, we have
\begin{theorem}
$\quad$
\begin{enumerate}
\item Let $f\in H$ and suppose that $f_n\rightarrow f$ with $f_n\in \Sigma_{n,M}(\mathbb{D})$ for a fixed $M < \infty$. Then $f\in \mathcal{K}_1(\mathbb{D})$ and
\begin{equation}
\|f\|_{\mathcal{K}_1(\mathbb{D})} \leq M.
\end{equation}
\item If $f\in \mathcal{K}_1(\mathbb{D})$, then with $M = \|f\|_{\mathcal{K}_1(\mathbb{D})}$,
$$
\inf_{f_n\in \Sigma_{n,M}(\mathbb{D})} \|f-f_n\|_H\le K_\mathbb{D}\|f\|_{\mathcal{K}_1(\mathbb{D})}n^{-\frac12}.
$$
\end{enumerate}
\end{theorem}
\begin{proof}
It is clear from the definitions that $\Sigma_{n,M}(\mathbb{D}) \subset MB_1(\mathbb{D})$ for every $n$. Thus $f_n\in MB_1(\mathbb{D})$ and since $MB_1(\mathbb{D})$ is closed, we get $f\in MB_1(\mathbb{D})$, so that $\|f\|_{\mathcal{K}_1(\mathbb{D})} \leq M$.

We follow the argument of \cite{barron1993universal}, see also \cite{jones1992simple,pisier1981remarques}, to prove the second statement. The result is trivial if $M = 0$, so suppose that $M > 0$. By normalizing both $f$ and the coefficients $a_i$ by $M$, we reduce to the case where $M = 1$. In this case $f\in B_1(\mathbb{D})$ by Lemma \ref{fundamental-norm-lemma}. Let $\epsilon > 0$. Then since $f\in B_1(\mathbb{D})$, i.e. $f$ is in the closure of the convex, symmetric hull of $\mathbb{D}$, there exist coefficients $a_i$ and dictionary elements $h_i\in \mathbb{D}$ with $i=1,...,N$, such that
\begin{equation}\label{eq_129}
\left\|f - \sum_{i=1}^Na_ih_i\right\|_H \leq \epsilon,
\end{equation}
and $\sum_{i=1}^N |a_i| = 1$ (note here that $N$ may depend upon $\epsilon$ and in particular may be very large). Next, draw $n$ samples $(i_1,...,i_n)$ from the discrete distribution on $\{1,...,N\}$ with the probability of index $i$ given by $|a_i|$, and form the random variable
\begin{equation}
f_n = \frac{1}{n}\sum_{j=1}^n \frac{a_{i_j}}{|a_{i_j}|}h_{i_j} \in \Sigma_{n,M}(\mathbb{D}).
\end{equation}
We evidently have $\mathbb{E}(f_n) = \mathbb{E}(f_1) = \sum_{i=1}^Na_ih_i$ and
$$\mathbb{V}(f_n) = \mathbb{E}(\|f_n-\mathbb{E}(f_n)\|_H^2) = \frac{1}{n}\mathbb{V}(f_1) = \frac{1}{n}(\mathbb{E}(\|f_1\|_H^2) - \|\mathbb{E}(f_n)\|_H^2)\leq \frac{\sup_{h\in \mathbb{D}} \|h\|^2_H}{n} = \frac{K_\mathbb{D}^2}{n}.$$
This means that there must exist a realization $\tilde{f}_n\in\Sigma_{n,M}(\mathbb{D})$ such that
\begin{equation}
\|\tilde{f}_n - \sum_{i=1}^Na_ih_i\|_H^2 \leq \frac{K_\mathbb{D}^2}{n}.
\end{equation}
Combining this with \eqref{eq_129}, we see that
\begin{equation}
\inf_{f_n\in\Sigma_{n,M}(\mathbb{D})} \|f-f_n\|_H \leq K_\mathbb{D}n^{-\frac{1}{2}} + \epsilon.
\end{equation}
Since $\epsilon > 0$ was arbitrary, we obtain the desired result (for general $M$, undoing the normalization multiplies the bound by $M = \|f\|_{\mathcal{K}_1(\mathbb{D})}$).
\end{proof}
Finally, we give a lemma which relates the space $\mathcal{K}_1(\mathbb{D})$ to the set of functions which have integral representations by elements of $\mathbb{D}$.
\begin{lemma}\label{prokhorov-lemma}
Suppose that $\mathbb{D}\subset H$ is compact. Then $f\in \mathcal{K}_1(\mathbb{D})$ iff there exists a Borel measure $\mu$ on $\mathbb{D}$ such that
\begin{equation}
f = \int_\mathbb{D} hd\mu(h).
\end{equation}
Moreover,
\begin{equation}
\|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\left\{\int_\mathbb{D} d|\mu|(h):~f = \int_\mathbb{D} hd\mu(h)\right\}.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to show that
$$B_1(\mathbb{D}) = M(\mathbb{D}):=\left\{\int_\mathbb{D} hd\mu(h):~\int_\mathbb{D} d|\mu|(h) \leq 1\right\}.$$
By approximating the integral using simple functions we immediately see that $M(\mathbb{D})\subset B_1(\mathbb{D})$. To prove the inverse inclusion, we must show that $M(\mathbb{D})$ is closed. This follows immediately from Prokhorov's theorem \cite{prokhorov1956convergence} (see also \cite{dudley2018real}, Theorem 11.5.4, for instance). Indeed, let $f_n\rightarrow f$ with $f_n\in M(\mathbb{D})$ and let $\mu_n$ be a corresponding sequence of Borel measures on $\mathbb{D}$. By the compactness of $\mathbb{D}$ and Prokhorov's theorem, by taking a subsequence if necessary we may assume that $\mu_n\rightarrow \mu$ weakly for some Borel measure $\mu$ on $\mathbb{D}$. This implies $f = \int_\mathbb{D} hd\mu(h)$ and $\int_\mathbb{D} d|\mu|(h) \leq 1$, so that $f\in M(\mathbb{D})$.
\end{proof}

\subsection{Properties of $\mathcal{K}_1(\mathbb{P}^d_k)$ and relationship with the Barron space}
Next, we explain the precise definition \eqref{relu-k-space-definition}, i.e. how we define an appropriate dictionary corresponding to the ReLU$^k$ activation function. The problem with letting $\sigma_k(x) = [\max(0,x)]^k$ and setting
\begin{equation}
\mathbb{D} = \{\sigma_k(\omega\cdot x + b):~\omega\in \mathbb{R}^d,~b\in \mathbb{R}\},
\end{equation}
is that unless $k=0$ the dictionary elements are not bounded in $L^2(B_1^d)$, since $\sigma_k$ is not bounded and we can shift $b$ arbitrarily. This manifests itself in the fact that $\|\cdot\|_{\mathcal{K}_1(\mathbb{D})}$ is a semi-norm which contains the set of polynomials of degree at most $k-1$ in its kernel (this occurs since the elements of $\mathbb{D}$ with arbitrarily large norm are polynomials on $B_1^d$). We rectify this issue by considering the dictionary
\begin{equation}
\mathbb{P}^d_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-2,2]\}.
\end{equation}
We remark that the constant $2$ above can be replaced by any $c > 1$, which results in an equivalent norm. This follows since the elements of $\mathbb{P}^d_k$ for which $|b| > 1$ are polynomials, and we only need finitely many of them to span the space of polynomials.
% \begin{proposition}\label{constant-independence-proposition}
% Let $c > 1$ and consider the dictionary
% \begin{equation}
% \mathbb{P}^c_k = \{\sigma_k(\omega\cdot x + b):~\omega\in S^{d-1},~b\in [-cR_\Omega,cR_\Omega]\}
% \end{equation}
% The we have
% \begin{equation}
% \|f\|_{\mathcal{K}_1(\mathbb{P}_k^c)} \eqsim \|f\|_{\mathcal{K}_1(\mathbb{P}_k)},
% \end{equation}
% where the implied constant depends only upon $c$ and $k$.
% \end{proposition}
% For the proof of this proposition, we will need the following well-known lemma which we include for completeness.
% \begin{lemma}\label{polynomial-basis-lemma}
% Let $b_1,...,b_{k+1}\subset \mathbb{R}$ be distinct points. Then $(x+b_1)^k,...,(x+b_{k+1})^k$ is a basis for the space of polynomials of degree at most $k$.
% \end{lemma}
% \begin{proof}
% We expand out the polynomials to get
% \begin{equation}
% (x+b_i)^k = \sum_{j=0}^k \binom{k}{j}b_i^jx^{k-j}.
% \end{equation}
% Thus the change of basis matrix from the monomials $1,x,...,x^k$ to $(x+b_1)^k,...,(x+b_{k+1})^k$ has entries $M_{ij} = \binom{k}{j}b_i^j$. This is a Vandermode matrix whose $j$-th column has been scaled by $\binom{k}{j}\neq 0$. Since the $b_i$ are distinct, its determinant is non-zero and thus $(x+b_1)^k,...,(x+b_{k+1})^k$ is a basis as claimed.
%
% \end{proof}
%
% \begin{proof}[Proof of Proposition \ref{constant-independence-proposition}]
% We will show that $\mathcal{K}_1(\mathbb{P}_k^{c_1})$ and $\mathcal{K}_1(\mathbb{P}_k^{c_2})$ are equivalent for any $c_1 > c_2 > 1$. We clearly have $\mathbb{P}_k^{c_1}\supset \mathbb{P}_k^{c_2}$, so that $\|f\|_{\mathcal{K}_1(\mathbb{P}_k^{c_1})} \leq \|f\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)}$. We must prove the reverse inequality.
%
% This will follow if for some constant $K$ we can show that $\|g\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)} \leq K$, i.e. $g\in KB_1(\mathbb{P}^{c_2}_k)$, for every $g\in \mathbb{P}_k^{c_1}$. To this end, let $g\in \mathbb{P}_k^{c_1}$. Then $g(x) = \sigma_k(\omega\cdot x + b)$ with $\omega\in S^{d-1}$ and $b\in [-c_1R_\Omega, c_1R_\Omega]$. If $b\in [-c_2R_\Omega, c_2R_\Omega]$, then clearly $g(x)\in \mathbb{P}_k^{c_2}$ so that $\|g\|_{\mathcal{K}_1(\mathbb{P}^{c_2}_k)} \leq 1$. So suppose that $b\notin [-c_2R_\Omega, c_2R_\Omega]$. Then since $c_2 > 1$ and $|\omega| = 1$, the quantity $\omega\cdot x + b$ does not change sign on $\Omega$ and so either
% $$g(x) = \sigma_k(\omega\cdot x + b) = (\omega\cdot x + b)^k,$$
% or $g = 0$. In the latter case the conclusion is clear, so consider the case where $d = (\omega\cdot x + b)^k$.
%
% Choose $k+1$ distinct numbers $b_1,...,b_{k+1}\in [R_\Omega, c_2R_\Omega]$. By Lemma \ref{polynomial-basis-lemma} the $(x+b_i)^k$ span the space of polynomials of degree $k$ and thus we can write $(\omega\cdot x + b)^k$ as a linear combination of $(\omega\cdot x+b_i)^k = \sigma_k(\omega\cdot x+b_i)\in \mathbb{P}_k^{c_2}$ for $i=1,...,k+1$. Moreover, the coefficients are continuous as a function of $b$ and thus can be uniformly bounded for $b\in [-c_1R_\Omega,c_1R_\Omega]$. This proves that there is a constant $K$ independent of $b$ such that $g(x) = \sigma_k(\omega\cdot x + b)\in KB_1(\mathbb{P}_k^{c_2})$, which completes the proof.
% \end{proof}

Next, we consider the relationship between $\mathcal{K}_1(\mathbb{P}_k^d)$ and the Barron norm introduced in \cite{ma2019barron}, which is given by
\begin{equation}\label{barron-norm}
\|f\|_{\mathcal{B}} = \inf\left\{\mathbb{E}_\rho(|a|(|\omega|_1 + |b|)):~f(x) = \int_{\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}} a\sigma_1(\omega\cdot x + b)\rho(da,d\omega,db)\right\},
\end{equation}
where we recall that $\sigma_1$ is the rectified linear unit and the infimum is taken over all integral representations of $f$. Here $\rho$ is a probability distribution on $\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}$, and the expectation is taken with respect to $\rho$. It turns out that the $\mathcal{K}_1(\mathbb{P}^d_1)$ space is equivalent to the Barron space.
\begin{proposition}
We have
\begin{equation}
\|f\|_{\mathcal{K}_1(\mathbb{P}^d_1)} \eqsim_d \|f\|_{\mathcal{B}}.
\end{equation}
\end{proposition}
\begin{proof}
Consider the dictionary
\begin{equation}
\mathbb{B} = \{(|\omega|_1 + |b|)^{-1}\sigma_1(\omega\cdot x + b):~\omega\in \mathbb{R}^d,~b\in \mathbb{R}\}\subset L^2(B_1^d).
\end{equation}
From Lemma \ref{prokhorov-lemma}, it is easy to see that $\|f\|_{\mathcal{K}_1(\mathbb{B})} = \|f\|_{\mathcal{B}}$, so it suffices to show that $\|f\|_{\mathcal{K}_1(\mathbb{P}^d_1)} \eqsim_d \|f\|_{\mathcal{K}_1(\mathbb{B})}$. For this, it suffices to show that $\mathbb{P}^d_1\subset CB_1(\mathbb{B})$ and $\mathbb{B}\subset CB_1(\mathbb{P}^d_1)$ for some constant $C$.

So let $g\in \mathbb{P}^d_1$. This means that $g(x) = \sigma_1(\omega \cdot x + b)$ for some $\omega\in S^{d-1}$ and $b\in [-2,2]$.
Thus
$$(|\omega|_1 + |b|) \leq (\sqrt{d} + 2) \leq C:=C(d),$$
and since $(|\omega|_1 + |b|)^{-1}\sigma_1(\omega \cdot x + b)\in \mathbb{B}$, we see that $g\in CB_1(\mathbb{B})$. Thus $\mathbb{P}^d_1\subset CB_1(\mathbb{B})$.

Now, let $g\in \mathbb{B}$. Then $g(x) = (|\omega|_1 + |b|)^{-1}\sigma_1(\omega \cdot x + b)$ for some $\omega\in \mathbb{R}^d$ and $b\in \mathbb{R}$. Consider first the case when $\omega \neq 0$. Note that by the positive homogeneity of $\sigma_1$ we can assume that $|\omega| = 1$, i.e. that $\omega\in S^{d-1}$. Further, we have that $(|\omega|_1 + |b|)^{-1} \leq (1+|b|)^{-1}$. Thus, we must show that
\begin{equation}
\tilde g(x) := (1+|b|)^{-1}\sigma_1(\omega \cdot x + b)\in CB_1(\mathbb{P}^d_1)
\end{equation}
for $\omega\in S^{d-1}$ and $b\in \mathbb{R}$. For $b\in [-2,2]$ this clearly holds with $C=1$ since $(1 + |b|)^{-1} \leq 1$ and for such values of $b$, we have $\sigma_1(\omega\cdot x + b)\in \mathbb{P}^d_1$. If $b < -2$, then $\tilde g(x) = 0$, so we trivially have $\tilde g\in B_1(\mathbb{P}^d_1)$. Finally, if $b > 2$, then $\omega\cdot x + b$ is positive on $B_1^d$, so that
$$
\tilde g(x) = (1+|b|)^{-1}(\omega\cdot x + b) = (1+|b|)^{-1}\omega\cdot x + b(1+|b|)^{-1}.
$$
Now $\omega\cdot x\in B_1(\mathbb{P}^d_1)$ and $1 = [\sigma_1(\omega\cdot x + 2) - \sigma_1(\omega\cdot x + 1)]\in 2 B_1(\mathbb{P}^d_1)$. Combined with the above and the fact that $(1+|b|)^{-1},|b|(1+|b|)^{-1}\leq 1$, we get $\tilde g\in CB_1(\mathbb{P}^d_1)$.

Finally, if $\omega = 0$, then $g(x) = 1$ and by the above paragraph we clearly also have $g\in CB_1(\mathbb{P}^d_1)$. This completes the proof.
\end{proof}
Note that it follows from this result that the Barron space $\mathcal{B}$ is a Banach space, which was first proven in \cite{wojtowytsch2020representation}.
\subsection{Characterization of $\mathcal{K}_1(\mathbb{P}^d_k)$}
In one dimension, the space $\mathcal{K}_1(\mathbb{P}^1_k)$ has a relatively simple characterization in terms of the space of bounded variation (see \cite{wojtowytsch2020representation}, Section 4, for a proof in the case $k=1$ in the context of the Barron space).
\begin{theorem}\label{barron-space-1-d-characterization-theorem}
We have
\begin{equation}
\mathcal{K}_1(\mathbb{P}^1_k) = \{f\in L^2([-1,1]):~\text{$f$ is $k$-times differentiable a.e. and }f^{(k)}\in BV([-1,1])\}.
\end{equation}
In particular, it holds that
\begin{equation}
\|f\|_{\mathcal{K}_1(\mathbb{P}^1_k)} \eqsim_k \sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])}.
\end{equation}
\end{theorem}
\begin{proof}
We first prove that
\begin{equation}\label{upper-bound-barron-1-d}
\sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])} \lesssim_k \|f\|_{\mathcal{K}_1(\mathbb{P}^1_k)}.
\end{equation}
Note that the left hand side is uniformly bounded for all $f = \sigma_k(\pm x + b)\in \mathbb{P}^1_k$, since $\sigma_k^{(k)}$ is a multiple of the Heaviside function and $b$ is bounded by $2$. By taking convex combinations, this means that for some constant $C$, we have
\begin{equation}
\left\{\sum_{j=1}^n a_jh_j:~h_j\in \mathbb{P}^1_k,~\sum_{i=1}^n|a_i|\leq 1\right\} \subset CB^1_{BV,k},
\end{equation}
where
\begin{equation}
B^1_{BV,k}:=\left\{f\in L^2([-1,1]):~\sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])} \leq 1\right\}.
\end{equation}
It is well-known that $B^1_{BV,k}$ is compact in $L^1([-1,1])$ (see, for instance, Theorem 4 of Chapter 5 in \cite{evans2015measure}).
This implies that $B^1_{BV,k}$ is closed in $L^2([-1,1])$, since if $f_n\rightarrow_{L^2} f$ with $f_n\in B^1_{BV,k}$, then there must exist a subsequence $f_{k_n}\rightarrow_{L^1} f'\in B^1_{BV,k}$. Clearly $f=f'$ and so $B^1_{BV,k}$ is closed in $L^2([-1,1])$. From this it follows that $B_1(\mathbb{P}^1_k) \subset CB^1_{BV,k}$ and we obtain \eqref{upper-bound-barron-1-d}.

Next, we prove the reverse inequality. So let $f\in B^1_{BV,k}$. By Theorem 2 in Chapter 5 of \cite{evans2015measure}, there exist $f_n\in C^\infty\cap B^1_{BV,k}$ such that $f_n\rightarrow f$ in $L^1([-1,1])$. Further, since $f_n,f\in B^1_{BV,k}$, we have that $\|f - f_n\|_{L^\infty([-1,1])}$ is uniformly bounded. Thus
$$\|f - f_n\|^2_{L^2([-1,1])} \leq \|f - f_n\|_{L^1([-1,1])}\|f - f_n\|_{L^\infty([-1,1])} \rightarrow 0$$
and so $f_n\rightarrow f$ in $L^2([-1,1])$ as well. Using the Peano kernel formula, we see that
\begin{equation}
f_n(x) = \sum_{j=0}^{k} \frac{f_n^{(j)}(-1)}{j!}(x+1)^j + \int_{-1}^1 \frac{f_n^{(k+1)}(b)}{k!}\sigma_k(x-b)db.
\end{equation}
From the definition of the $BV$-norm and the fact that $f_n\in B^1_{BV,k}$, we see that
\begin{equation}
\sum_{j=0}^{k} \frac{|f_n^{(j)}(-1)|}{j!}+ \int_{-1}^1 \frac{|f_n^{(k+1)}(b)|}{k!}db \leq C_1
\end{equation}
for a fixed constant $C_1$. Choose $k+1$ distinct $b_1,...,b_{k+1}\in [1, 2]$. Then by construction $\sigma_k(x+b_i) = (x+b_i)^k$ is a polynomial on $[-1,1]$. Moreover, it is well-known that the polynomials $(x+b_i)^k$ span the space of polynomials of degree at most $k$. Combined with the coefficient bound
\begin{equation}
\sum_{j=0}^{k} \frac{|f_n^{(j)}(-1)|}{j!} \leq C_1,
\end{equation}
we see that
\begin{equation}
\sum_{j=0}^{k} \frac{f_n^{(j)}(-1)}{j!}(x+1)^j \in C_2B_1(\mathbb{P}^1_k)
\end{equation}
for a fixed constant $C_2$ (independent of $f_n$). Furthermore, since also
\begin{equation}
\int_{-1}^1 \frac{|f_n^{(k+1)}(b)|}{k!}db \leq C_1,
\end{equation}
we obtain
\begin{equation}
\int_{-1}^1 \frac{f_n^{(k+1)}(b)}{k!}\sigma_k(x-b)db\in C_1B_1(\mathbb{P}^1_k).
\end{equation}
This implies that $f_n\in CB_1(\mathbb{P}^1_k)$ for $C = C_1 + C_2$ and since $f_n\rightarrow f$ and $B_1(\mathbb{P}^1_k)$ is closed in $L^2([-1,1])$, we get $f\in CB_1(\mathbb{P}^1_k)$, which completes the proof.
\end{proof}
Theorem \ref{barron-space-1-d-characterization-theorem} only serves to characterize the space $\mathcal{K}_1(\mathbb{P}_k^1)$, but this result can be used to bound the $\|\cdot\|_{\mathcal{K}_1(\mathbb{P}^d_k)}$-norm of ridge functions which only vary in one direction in higher dimensions as well.
\begin{corollary}\label{ridge-corollary}
If $f\in L^2([-1,1])$ is $k$-times differentiable a.e. and satisfies
\begin{equation}\label{bound-530}
\sum_{j=0}^{k-1} |f^{(j)}(-1)| + \|f^{(k)}\|_{BV([-1,1])} \leq 1,
\end{equation}
then for any $\omega\in S^{d-1}$, $\|f(\omega\cdot x)\|_{\mathcal{K}_1(\mathbb{P}^d_k)} \lesssim_k 1$.
\end{corollary}
\begin{proof}
This follows immediately from the one dimensional result, Theorem \ref{barron-space-1-d-characterization-theorem}, by considering the dictionary $\mathbb{P}_k^\omega = \{\sigma_k(\omega\cdot x + b):~b\in [-2,2]\}$.
\end{proof}
An important application of Corollary \ref{ridge-corollary} is to note that the functions $f_\omega(x) = e^{2\pi i \omega\cdot x}$ satisfy
\begin{equation}\label{eq-550}
\sum_{j=0}^{k-1} |f_\omega^{(j)}(-1)| + \|f_\omega^{(k)}\|_{BV([-1,1])} \lesssim_k (1 + |\omega|)^{k+1},
\end{equation}
which leads immediately to the following result.
\begin{theorem}
For $k \geq 0$ and $d \geq 1$, we have
\begin{equation}
\mathcal{K}_1(\mathbb{F}^d_{k+1}) \subset \mathcal{K}_1(\mathbb{P}^d_k).
\end{equation}
\end{theorem}
Using different language, an essentially equivalent result first appears for $k=0$ in \cite{barron1993universal}, for $k=1,2$ in \cite{klusowski2018approximation} and for $k \geq 3$ in \cite{CiCP-28-1707}. It is the basis of the Fourier integral condition introduced by Barron \cite{barron1993universal}. In \cite{wojtowytsch2020representation} it is remarked that $\mathcal{K}_1(\mathbb{P}_k)$ is actually significantly larger than $\mathcal{K}_1(\mathbb{F}_{k+1})$ when $k=1$. In later sections we quantify this observation by calculating the entropy of the unit balls of both spaces.

In general, a function $f\in \mathcal{K}_1(\mathbb{P}^d_k)$ can be written as a superposition of one-dimensional ridge functions which satisfy \eqref{bound-530}. This leads to the following bound on $\mathcal{K}_1(\mathbb{P}^d_k)$ in higher dimensions.
\begin{theorem}\label{bs-theorem}
We have
\begin{equation}\label{bs-label}
\|f\|_{\mathcal{K}_1(\mathbb{P}^d_k)} \lesssim_{k,d} \inf_{f_e|_{B_1^d} = f}\left\{\int_{S^{d-1}}\sum_{j=0}^{k-1} |g_\omega^{(j)}(-1)| + \|g_\omega^{(k)}\|_{BV([-1,1])} d\omega,~g_\omega(t) = \int_{-\infty}^\infty e^{2\pi i ts}\hat{f}_e(\omega s)s^{d-1}ds\right\}.
\end{equation}
where the infimum is over all extensions $f_e$ which satisfy $f_e, \hat{f}_e\in L^1(\mathbb{R}^d)$.
\end{theorem}
\begin{proof}
By the Fourier inversion formula, we have
\begin{equation}
f_e(x) = \int_{\mathbb{R}^d} e^{2\pi i \xi\cdot x}\hat{f}_e(\xi)d\xi = C_d\int_{S^{d-1}}\int_{0}^\infty e^{2\pi i s(\omega\cdot x)}\hat{f}_e(\omega s) s^{d-1}ds d\omega.
\end{equation}
This means that
\begin{equation}
f_e(x) = C_d\int_{\omega\in S^{d-1}} g_\omega(\omega\cdot x)d\omega.
\end{equation}
Combined with Corollary \ref{ridge-corollary}, this completes the proof.
\end{proof}
We conjecture that the bound in Theorem \ref{bs-theorem} in fact characterizes the space $\mathcal{K}_1(\mathbb{P}^d_k)$.
\subsection{Characterization of $\mathcal{K}_1(\mathbb{F}^d_s)$}
Here we characterize the space $\mathcal{K}_1(\mathbb{F}^d_s)$. In particular, we have the following theorem.
\begin{theorem}\label{spectral-barron-theorem}
We have
\begin{equation}\label{fourier-integral-condition}
\|f\|_{\mathcal{K}_1(\mathbb{F}^d_s)} = \inf_{f_e|_{B_1^d} = f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi,
\end{equation}
where the infimum is taken over all extensions $f_e\in L^1(\mathbb{R}^d)$.
\end{theorem}
The proof is a bit more involved due to the failure of Lemma \ref{prokhorov-lemma} when $s=0$, and requires the technical fact that the unit ball
\begin{equation}
B_1^s(\Omega) = \left\{f:\Omega\rightarrow \mathbb{R}:~\inf_{f_e|_\Omega = f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi\leq 1\right\}
\end{equation}
is closed in $L^2(\Omega)$ (it is shown in \cite{siegel2020approximation} that $B_1^s\subset L^2(\Omega)$). Throughout the proof we will use the notation
\begin{equation}\label{spectral-barron-integral-condition}
\|f\|_{\mathcal{B}^s(\Omega)} = \inf_{f_e|_\Omega = f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi
\end{equation}
for this infimum. We need the following simple lemmas.
\begin{lemma}\label{fourier-cutoff-lemma}
Suppose that $\Omega\subset \mathbb{R}^d$ is bounded. Let $\epsilon > 0$ and $s\geq 0$.
Then there exists a function $\phi\in L^1(\mathbb{R}^d)$, such that $\phi(x) = 1$ for $x\in \Omega$ and
\begin{equation}
\int_{\mathbb{R}^d}(1+|\xi|)^s|\hat{\phi}(\xi)|d\xi \leq 1 + \epsilon.
\end{equation}
\end{lemma}
\begin{proof}
Since $\Omega$ is bounded, it suffices to consider the case where $\Omega = [-L,L]^d$ for a sufficiently large $L$. We consider separable $\phi = \phi_1(x_1)\cdots\phi_d(x_d)$, and note that
\begin{equation}
\int_{\mathbb{R}^d}(1+|\xi|)^s|\hat{\phi}(\xi)|d\xi \leq \int_{\mathbb{R}^d}\prod_{i=1}^d(1+|\xi_i|)^s|\hat{\phi}_i(\xi_i)|d\xi \leq \prod_{i=1}^d \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\phi}_i(\xi)|d\xi,
\end{equation}
and this reduces us to the one-dimensional case where $\Omega = [-L,L]$.

For the one-dimensional case, consider a Gaussian $g_R(x) = e^{-\frac{x^2}{2R}}$. A simple calculation shows that, with the convention $\hat{g}_R(\xi) = \int_{\mathbb{R}}g_R(x)e^{-2\pi {\mathrm{i}\mkern1mu} x\xi}dx$ used throughout, the Fourier transform of the Gaussian is $\hat{g}_R(\xi) = \sqrt{2\pi R}\,e^{-2\pi^2R\xi^2}$. This implies that
\begin{equation}
\lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{g}_R(\xi)|d\xi = 1,
\end{equation}
and thus by choosing $R$ large enough, we can make this arbitrarily close to $1$. Now consider $\tau_R\in C^{k}(\mathbb{R})$ for $k > s+2$ such that $\tau_R(x) = 1 - g_R(x)$ for $x\in [-L,L]$. Then we have
$$\|\tau_R\|_{L^\infty([-L,L])}, \|\tau_R^\prime\|_{L^\infty([-L,L])}, \cdots, \|\tau_R^{(k)}\|_{L^\infty([-L,L])} \rightarrow 0$$
as $R\rightarrow \infty$. Consequently, it is possible to extend $\tau_R$ to $\mathbb{R}$ so that
\begin{equation}
\|\tau_R\|_{L^1(\mathbb{R})}, \|\tau_R^{(k)}\|_{L^1(\mathbb{R})} \rightarrow 0
\end{equation}
as $R\rightarrow \infty$. For instance, for $x > L$ we can take $\tau_R$ to be a polynomial which matches the first $k$ derivatives at $L$ times a fixed smooth cutoff function which is identically $1$ in some neighborhood of $L$ (and similarly at $-L$). This implies that $\|\hat{\tau}_R(\xi)\|_{L^\infty(\mathbb{R})},\|\xi^{k}\hat{\tau}_R(\xi)\|_{L^\infty(\mathbb{R})}\rightarrow 0$ as $R\rightarrow \infty$. Together, these imply that
\begin{equation}
\lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\tau}_R(\xi)|d\xi \rightarrow 0,
\end{equation}
since $k-2 > s$. Finally, set $\phi_R = g_R(x) + \tau_R(x)$. Then clearly $\phi_R = 1$ on $[-L,L]$ and also
\begin{equation}
\lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\phi}_R(\xi)|d\xi \leq \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{\tau}_R(\xi)|d\xi + \lim_{R\rightarrow \infty} \int_{\mathbb{R}}(1+|\xi|)^s|\hat{g}_R(\xi)|d\xi = 1.
\end{equation}
Choosing $R$ large enough, we obtain the desired result.
\end{proof}
Using this lemma, we can show that the infimum in \eqref{spectral-barron-integral-condition} can alternatively be given by an infimum over integral representations by Borel measures.
\begin{lemma}
Let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $s \geq 0$. Then
\begin{equation}\label{barron-norm-form-2}
\|f\|_{\mathcal{B}^s(\Omega)} = \inf\left\{\int_{\mathbb{R}^d} (1+|\xi|)^sd|\mu|(\xi):~f(x)=\int_{\mathbb{R}^d} e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi)~\text{for}~x\in\Omega\right\}.
\end{equation}
\end{lemma}
\begin{proof}
By choosing $\mu = \hat{f}(\xi)d\xi$ it is clear that
\begin{equation}
\|f\|_{\mathcal{B}^s(\Omega)} \geq \inf\left\{\int_{\mathbb{R}^d} (1+|\xi|)^sd|\mu|(\xi):~f(x)=\int_{\mathbb{R}^d} e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi)~\text{for}~x\in\Omega\right\}.
\end{equation}
The content of the lemma is the reverse inequality.
Let $\mu$ be a regular Borel measure such that the integral in \eqref{barron-norm-form-2} is finite (note this must mean that $\mu$ has finite mass) and \begin{equation} f(x)=\int_{\mathbb{R}^d} e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi) \end{equation} for $x\in \Omega$. Choose $\epsilon > 0$. By Lemma \ref{fourier-cutoff-lemma} we can find a $\phi\in L^1(\mathbb{R}^d)$ such that $\phi|_\Omega = 1$ and $$\int_{\mathbb{R}^d}(1+|\xi|)^s|\hat{\phi}(\xi)|d\xi \leq 1 + \epsilon.$$ We now set \begin{equation} f_e(x) = \phi(x)\left[\int_{\mathbb{R}^d} e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi)\right]\in L^1(\mathbb{R}^d), \end{equation} since $\phi\in L^1(\mathbb{R}^d)$ and $\mu$ has finite mass, so that the second factor is bounded. Then we have that for $x\in \Omega$, \begin{equation} f(x) = f(x)\phi(x) = f_e(x), \end{equation} and $\hat{f}_e = \hat{\phi} * \mu$, where the function $\hat{\phi} * \mu$ is given by \begin{equation} (\hat{\phi} * \mu)(\xi) = \int_{\mathbb{R}^d} \hat{\phi}(\xi - \nu)d\mu(\nu). \end{equation} We now calculate \begin{equation} \int_{\mathbb{R}^d}(1+|\xi|)^s|(\hat{\phi} * \mu)(\xi)|d\xi \leq \int_{\mathbb{R}^d}\int_{\mathbb{R}^d}(1+|\xi|)^s |\hat{\phi}(\xi - \nu)|d|\mu|(\nu)d\xi. \end{equation} Finally, we use the simple inequality $(1+|\xi|)^s \leq (1+|\nu|)^s(1+|\xi - \nu|)^s$ combined with a change of variables, to get \begin{equation} \begin{split} \int_{\mathbb{R}^d}(1+|\xi|)^s|(\hat{\phi} * \mu)(\xi)|d\xi &\leq \left(\int_{\mathbb{R}^d}(1+|\xi|)^s|\hat{\phi}(\xi)|d\xi\right)\left(\int_{\mathbb{R}^d}(1+|\nu|)^s d|\mu|(\nu)\right)\\ &\leq (1+\epsilon)\left(\int_{\mathbb{R}^d}(1+|\nu|)^s d|\mu|(\nu)\right). \end{split} \end{equation} This shows that \begin{equation} \|f\|_{\mathcal{B}^s(\Omega)} \leq (1+\epsilon)\inf\left\{\int_{\mathbb{R}^d} (1+|\xi|)^sd|\mu|(\xi):~f(x)=\int_{\mathbb{R}^d} e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi)~\text{for}~x\in\Omega\right\}. \end{equation} Since $\epsilon > 0$ was arbitrary, we get the desired result. \end{proof} Finally, we prove that the unit ball $B_1^s(\Omega)$ is closed in $L^2(\Omega)$, from which Theorem \ref{spectral-barron-theorem} follows easily. \begin{proposition} Let $\Omega\subset \mathbb{R}^d$ be a bounded domain and $s \geq 0$. Then \begin{equation} B_1^s(\Omega) = \{f\in L^2(\Omega):~\|f\|_{\mathcal{B}^s(\Omega)}\leq 1\} = \left\{f:\Omega\rightarrow \mathbb{R}:~\inf_{f_e|_\Omega = f} \int_{\mathbb{R}^d} (1+|\xi|)^s|\hat{f}_e(\xi)|d\xi\leq 1\right\} \end{equation} is closed in $L^2(\Omega)$. \end{proposition} \begin{proof} Let $f_n\rightarrow f$ in $L^2(\Omega)$ with $\|f_n\|_{\mathcal{B}^s(\Omega)} \leq 1$. Choose $\epsilon > 0$ and consider a corresponding sequence $h_n = \hat{f}_{n,e}$ of Fourier transforms of extensions as in \eqref{spectral-barron-integral-condition} which satisfy \begin{equation} \int_{\mathbb{R}^d}(1+|\xi|)^s|h_n(\xi)|d\xi \leq 1 + \epsilon,~f_n(x)=\hat{h}_n(x) = \int_{\mathbb{R}^d} h_n(\xi)e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\xi. \end{equation} By assumption $f_n\rightarrow f$ in $L^2(\Omega)$ so that for any $g\in L^2(\Omega)$, we have \begin{equation} \langle f_n, g\rangle_{L^2(\Omega)} \rightarrow \langle f, g\rangle_{L^2(\Omega)}. \end{equation} Choose $g$ to be any element in the dense subset $C^\infty_c(\Omega)\subset L^2(\Omega)$ and note that in this case we have by Plancherel's theorem \begin{equation} \langle f_n, g\rangle_{L^2(\Omega)} = \langle h_n, \hat{g}\rangle_{L^2(\mathbb{R}^d)}.
\end{equation} Note that $\hat{g}$ is a Schwartz function and so is in the space $C_{s,0}(\mathbb{R}^d)$, defined to be the following space of continuous, decaying functions \begin{equation} C_{s,0}(\mathbb{R}^d) = \{\phi\in C(\mathbb{R}^d):\lim_{|\xi|\rightarrow \infty} |(1+|\xi|)^s\phi(\xi)| = 0\} \end{equation} with norm \begin{equation} \|\phi\|_{C_{s,0}(\mathbb{R}^d)} = \sup_{\xi\in \mathbb{R}^d} |(1+|\xi|)^s\phi(\xi)|. \end{equation} This implies that the map \begin{equation} h:\phi \mapsto \lim_{n\rightarrow \infty}\langle h_n, \phi\rangle_{L^2(\mathbb{R}^d)} \end{equation} defines a bounded linear functional, of norm at most $1 + \epsilon$, on the subspace of $C_{s,0}(\mathbb{R}^d)$ spanned by $\{\hat{g}:g\in C^\infty_c(\Omega)\}$. By the Hahn-Banach theorem, we can extend $h$ to an element $\mu\in C^*_{s,0}(\mathbb{R}^d)$, such that $\|\mu\|_{C^*_{s,0}(\mathbb{R}^d)}\leq 1 + \epsilon$. By the Riesz-Markov theorem (Theorem 22 in \cite{markoff1938mean}), the dual space $C^*_{s,0}(\mathbb{R}^d)$ is exactly the space of Borel measures with norm given by \begin{equation} \|\mu\|_{C^*_{s,0}(\mathbb{R}^d)} = \int_{\mathbb{R}^d} (1+|\xi|)^s d|\mu|(\xi) \leq 1 + \epsilon. \end{equation} But we also have that for every $g\in C^\infty_c(\Omega)$, $\langle \mu, \hat{g}\rangle = \langle f,g\rangle$. Taking the Fourier transform, we see that the function \begin{equation} f_\mu = \int_{\mathbb{R}^d}e^{2\pi {\mathrm{i}\mkern1mu} \xi\cdot x}d\mu(\xi) \end{equation} satisfies $\langle f_\mu, g\rangle = \langle f,g\rangle$ for all $g\in C^\infty_c(\Omega)$. Thus $f = f_\mu$ in $L^2(\Omega)$ and so by \eqref{barron-norm-form-2}, we have $\|f\|_{\mathcal{B}^{s}(\Omega)} \leq 1 + \epsilon$. Since $\epsilon$ was arbitrary, this completes the proof. \end{proof} \subsection{Relationship Between Entropy Numbers and Approximation Rates}\label{entropy-lemma-section} We end this section by proving a lemma which specifies the relationship between approximation rates from $\Sigma_{n,M}(\mathbb{D})$ and the entropy numbers of $B_1(\mathbb{D})$. Note also that this lemma implicitly appears in \cite{makovoz1996random,klusowski2018approximation} for the dictionaries $\mathbb{P}^d_k$ with $k=0,1,2$. See also the very similar Theorem 3.6 in \cite{cohen2020optimal}, which draws the same conclusion under different assumptions, and note that this lemma can be thought of as a variant of Carl's inequality for approximation from $\Sigma_{n,M}(\mathbb{D})$. The use of this lemma is in proving lower bounds on approximation rates from $\Sigma_{n,M}(\mathbb{D})$. \begin{lemma}\label{entropy-lemma} Let $H$ be a Hilbert space and $\mathbb{D}\subset H$ be a dictionary with $K_\mathbb{D}:=\sup_{h\in \mathbb{D}} \|h\|_H < \infty$. Suppose that for some constants $0 < l < \infty$, $C < \infty$, the dictionary $\mathbb{D}$ can be covered by $C\epsilon^{-l}$ sets of diameter $\epsilon$ for any $\epsilon > 0$. If there exist $M,K < \infty$ and $\alpha > 0$ such that for all $f\in B_1(\mathbb{D})$ \begin{equation}\label{approx-bound-estimate} \inf_{f_n\in \Sigma_{n,M}(\mathbb{D})} \|f - f_n\|_H \leq Kn^{-\alpha}, \end{equation} then the entropy numbers of $B_1(\mathbb{D})$ are bounded by \begin{equation} \epsilon_{n\log{n}}(B_1(\mathbb{D})) \lesssim n^{-\alpha}, \end{equation} where the implied constant is independent of $n$. \end{lemma} \begin{proof} The proof essentially follows the argument in the proof of Theorem 4 in \cite{makovoz1996random}, but we provide it here for completeness.
In what follows, all implied constants will be independent of $n$. Using our assumption on $\mathbb{D}$ and setting $\epsilon = n^{-\alpha}$, we see that there is a subset $\mathcal{D}_n\subset \mathbb{D}$ such that $|\mathcal{D}_n| \leq Cn^{\alpha l}$ and \begin{equation} \sup_{d\in \mathbb{D}} \inf_{s\in \mathcal{D}_n} \|d - s\|_H \leq n^{-\alpha}. \end{equation} Furthermore, we can cover the unit ball in $\ell^1$ by $(1+\frac{2}{\epsilon})^n$ $\ell^1$-balls of radius $\epsilon$ (see \cite{pisier1999volume}, page $63$). Thus, setting $\epsilon = M^{-1}n^{-\alpha}$, we can find a subset $\mathcal{L}_n$ of the $n$-dimensional $\ell^1$-ball of radius $M$, $B_M(\ell_1^n) = \{x\in \mathbb{R}^n:~|x|_1\leq M\}$, such that $| \mathcal{L}_n| \leq (1 + 2Mn^\alpha)^n \lesssim n^{2\alpha n}$, and \begin{equation} \sup_{x\in B_M(\ell_1^n)} \inf_{s\in \mathcal{L}_n} |x - s|_1 \leq n^{-\alpha}. \end{equation} Let $\mathcal{S}_n$ consist of all linear combinations of $n$ elements of $\mathcal{D}_{n}$ with coefficients in $\mathcal{L}_{n}$. Then clearly \begin{equation}\label{eq-707} |\mathcal{S}_n| \leq |\mathcal{D}_{n}|^n|\mathcal{L}_{n}| \lesssim n^{\alpha ln + 2\alpha n} = n^{(l + 2)\alpha n}. \end{equation} By \eqref{approx-bound-estimate}, we have for every $f\in B_1(\mathbb{D})$ an $f_n\in \Sigma_{n,M}(\mathbb{D})$ such that \begin{equation} f_n = \sum_{j=1}^n a_jh_j \end{equation} and $\|f - f_n\|_H \lesssim n^{-\alpha}$, $h_j\in \mathbb{D}$ and $\sum_{j=1}^n|a_j| \leq M$. We now replace the $h_j$ by their closest elements in $\mathcal{D}_{n}$ and the coefficient vector $(a_1,...,a_n)$ by its closest point in $\mathcal{L}_{n}$. Since $\|h_j\|_H\leq K_\mathbb{D}$ and $\sum_{j=1}^n|a_j| \leq M$, this results in a point $\tilde{f}_n\in \mathcal{S}_n$ with $$\|f_n - \tilde{f}_n\|_H \leq Mn^{-\alpha} + K_\mathbb{D}n^{-\alpha} \lesssim n^{-\alpha}.$$ Thus $\|f - \tilde{f}_n\|_H\lesssim n^{-\alpha}$ and so \begin{equation} \epsilon_{\log{|\mathcal{S}_n|}}(B_1(\mathbb{D})) \lesssim n^{-\alpha}. \end{equation} By equation \eqref{eq-707}, we see that $\log{|\mathcal{S}_n|} \lesssim n\log{n}$, which completes the proof. \end{proof} We note that the assumptions of Lemma \ref{entropy-lemma} hold for both of the dictionaries $\mathbb{P}^d_k$ and $\mathbb{F}^d_s$ for $s > 0$. \section{Upper bounds for smoothly parameterized dictionaries}\label{main-result-1-section} Let $H$ be a Hilbert space and consider a dictionary $\mathbb{D}\subset H$ which is parameterized by a smooth manifold $\mathcal{M}$, i.e. we have a surjection \begin{equation} \mathcal{P}:\mathcal{M}\rightarrow \mathbb{D}. \end{equation} In this section, we consider dictionaries $\mathbb{D}$ which are parameterized in this way by a smooth compact manifold. For this class of dictionaries, we give upper bounds on the entropy of $B_1(\mathbb{D})$ and on the approximation rates for $B_1(\mathbb{D})$ from sparse convex combinations $\Sigma_{n,M}(\mathbb{D})$, which depend on the degree of smoothness of the parameterization map $\mathcal{P}$. We begin by discussing the relevant notion of smoothness. These notions are well-studied for functions whose range is $\mathbb{R}$ (see, for instance \cite{lorentz1996constructive}), but we need to extend them to functions with range contained in the Hilbert space $H$. \begin{definition} Let $H$ be a Hilbert space, $U\subset \mathbb{R}^d$ an open set, $k \geq 0$ an integer and $\alpha\in (0,1]$.
A function $\mathcal{F}:U\rightarrow H$ is of smoothness class $k+\alpha$, written $\mathcal{F}\in {\rm Lip}(k+\alpha, L^\infty(U\rightarrow H))$, if \begin{itemize} \item The derivatives $D^j\mathcal{F}:(\mathbb{R}^d)^{\otimes j}\rightarrow H$ exist for $j\leq k$. \item The $k$-th derivative $D^k\mathcal{F}$ is $\alpha$-H\"older continuous on $U$, i.e. $$ \|D^k\mathcal{F}(x) - D^k\mathcal{F}(y)\|_{(\mathbb{R}^d)^{\otimes k}\rightarrow H} \lesssim_{\mathcal{F},\alpha} |x-y|^\alpha, $$ where the norm on the left is the operator norm. \end{itemize} \end{definition} \begin{definition} Let $H$ be a Hilbert space and $\mathcal{M}$ a smooth $d$-dimensional manifold, $k \geq 0$ an integer and $\alpha\in (0,1]$. A map $\mathcal{P}:\mathcal{M}\rightarrow H$ is of smoothness class $k+\alpha$, written $\mathcal{P}\in {\rm Lip}(k+\alpha, L^\infty(M\rightarrow H))$ if for each coordinate chart $(U,\phi)$ we have $\mathcal{P}\circ \phi\in {\rm Lip}(k+\alpha, L^\infty(U\rightarrow H))$. \end{definition} To illustrate this definition, we consider the two examples which arise in the study of neural networks, $\mathbb{P}^d_k$ and $\mathbb{F}^d_s$. First, note that the dictionary $\mathbb{P}^d_k$ is parameterized by the manifold $S^{d-1}\times [-2,2]$ via the map \begin{equation}\label{p-k-parameterization-definition} \mathcal{P}^d_k(\omega,b) = \sigma_k(\omega\cdot x + b)\in L^2(B_1^d). \end{equation} We claim that the map $\mathcal{P}^d_k$ is of smoothness class $k + \frac{1}{2}$. Indeed, differentiating $k$ times we obtain \begin{equation} \begin{split} D^k\mathcal{P}^d_k(\omega,b)\cdot[(\omega_1,b_1)\otimes\cdots\otimes(\omega_k,b_k)] &= \left[\prod_{i=1}^k (\omega_i\cdot x + b_i)\right]\sigma_k^{(k)}(\omega\cdot x + b) \\ & = k!\left[\prod_{i=1}^k (\omega_i\cdot x + b_i)\right]\sigma_0(\omega\cdot x + b)\in L^2(B_1^d), \end{split} \end{equation} and so \begin{equation} \|D^k\mathcal{P}^d_k(\omega,b) - D^k\mathcal{P}^d_k(\omega',b')\|_{(\mathbb{R}^d)^{\otimes k}\rightarrow H} \lesssim_{k,d} \|\sigma_0(\omega\cdot x + b) - \sigma_0(\omega'\cdot x + b')\|_{L^2(B_1^d)}. \end{equation} Since $\sigma_0$ is the Heaviside function and $B_1^d$ is the unit ball of $\mathbb{R}^d$, it is easy to see that $\sigma_0(\omega\cdot x + b) - \sigma_0(\omega'\cdot x + b')$ is non-zero only on a strip of width $\lesssim_d |\omega - \omega'| + |b - b'|$ (see for instance the argument in \cite{makovoz1996random}, Section 4). This means that \begin{equation} \|\sigma_0(\omega\cdot x + b) - \sigma_0(\omega'\cdot x + b')\|_{L^2(B_1^d)} \lesssim_d (|\omega - \omega'| + |b - b'|)^{\frac{1}{2}}, \end{equation} and so $\mathcal{P}^d_k$ is of smoothness class $k+\frac{1}{2}$. Next, we observe that the dictionary $\mathbb{F}_s^d$ is parameterized by $S^d$ via the stereographic projection map $\mathcal{F}_s^d:S^d\rightarrow L^2(B_1^d)$ given by \begin{equation}\label{f-s-parameterization-definition} \mathcal{F}^d_s(\nu) = \begin{cases} \left(1 + |\omega|\right)^{-s}e^{2\pi i \omega\cdot x} & \nu_{d+1} \neq -1 \\ 0 & \nu_{d+1} = -1, \end{cases} \end{equation} where $$\omega = (1+\nu_{d+1})^{-1}(\nu_1,...,\nu_d),~|\omega| = \sqrt{\frac{1-\nu_{d+1}}{1+\nu_{d+1}}}.$$ It is easy to check that this map is infinitely smooth at every point except the south pole where $\nu_{d+1} = -1$. Moreover, the factor $\left(1 + |\omega|\right)^{-s}$ implies that $\mathcal{F}^d_s$ decays to order $s$ at the south pole. Taken together, this means that the map $\mathcal{F}^d_s$ is of smoothness class $s$.
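The exponent $\frac{1}{2}$ in the smoothness class of $\mathcal{P}^d_k$ is exactly what produces the extra $\frac{1}{2d}$ in the rates below, so it is worth seeing it concretely. The following short Python sketch is purely illustrative and is not used anywhere in the proofs; the sample sizes and parameter perturbations are arbitrary choices. It estimates $\|\sigma_0(\omega\cdot x+b)-\sigma_0(\omega'\cdot x+b')\|_{L^2(B_1^2)}$ by Monte Carlo and compares it with $(|\omega-\omega'|+|b-b'|)^{1/2}$; the ratio remains bounded as the perturbation shrinks, consistent with the H\"older-$\frac{1}{2}$ estimate above.
\begin{verbatim}
# Monte Carlo check (illustration only) of the Hoelder-1/2 modulus of the
# Heaviside ridge parameterization (omega, b) -> sigma_0(omega . x + b)
# in L^2 of the unit disk (d = 2).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(400000, 2))
x = x[np.linalg.norm(x, axis=1) <= 1.0]   # uniform samples in the unit disk
vol = np.pi                               # area of the unit disk

omega, b = np.array([1.0, 0.0]), 0.1
for eps in [1e-1, 3e-2, 1e-2, 3e-3]:
    omega_p = np.array([1.0, eps])
    omega_p /= np.linalg.norm(omega_p)    # perturbed direction on S^1
    b_p = b + eps                         # perturbed offset
    h = (x @ omega + b > 0).astype(float)
    h_p = (x @ omega_p + b_p > 0).astype(float)
    l2_diff = np.sqrt(vol * np.mean((h - h_p) ** 2))
    param = np.linalg.norm(omega - omega_p) + abs(b - b_p)
    print(f"eps={eps:.0e}  L2 diff={l2_diff:.4f}  ratio={l2_diff/np.sqrt(param):.3f}")
\end{verbatim}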
We proceed to bound the approximation rates from sparse convex combinations $\Sigma_{n,M}(\mathbb{D})$ and the dyadic entropy numbers of $B_1(\mathbb{D})$ for dictionaries which are smoothly parameterized by a compact manifold. We will need the following simple lemma in what follows. \begin{lemma}\label{image-of-union-of-cubes-lemma} Suppose that $\mathcal{M}$ is a $d$-dimensional compact smooth manifold and we are given a parameterization $\mathcal{P}\in {\rm Lip}(k+\alpha, L^\infty(M\rightarrow H))$. Then there exist finitely many maps $\mathcal{P}_j:[-1,1]^d\rightarrow H$, $j=1,...,T$, such that $\mathcal{P}_j\in {\rm Lip}(k+\alpha, L^\infty([-1,1]^d\rightarrow H))$ for each $j$ and \begin{equation} \mathcal{P}(\mathcal{M})\subset \bigcup_{j=1}^T \mathcal{P}_j([-1,1]^d). \end{equation} \end{lemma} \begin{proof} Let $\phi_j: U_j\rightarrow \mathcal{M}$ for $j=1,...,T:=T_{\mathcal{M}}$ be a coordinate atlas for $\mathcal{M}$, which can be taken finite since $\mathcal{M}$ is compact. Further, we can assume that the $U_j$ are bounded. By assumption the composition $\mathcal{P} \circ \phi_j$ is of smoothness class $k+\alpha$. We translate and dilate each $U_j$ such that they are contained in the unit cube $C = [-1,1]^d$ and apply Whitney's extension theorem \cite{whitney1934analytic} to obtain maps $\mathcal{P}_j:C\rightarrow H$ such that $\mathcal{P}_j|_{U_j} = \mathcal{P} \circ \phi_j$ which are still of smoothness class $k+\alpha$. Then the maps $\mathcal{P}_j$ satisfy the conclusion of the lemma. \end{proof} \subsection{Approximation rates for smoothly parameterized dictionaries} Here we give upper bounds on the approximation rates of $B_1(\mathbb{D})$ from sparse convex combinations $\Sigma_{n,M}(\mathbb{D})$ for smoothly parameterized dictionaries. In particular, we have the following theorem. \begin{theorem}\label{upper-bound-theorem} Let $k \geq 0$ be an integer and $\alpha\in (0,1]$. Suppose that $\mathcal{M}$ is a compact $d$-dimensional smooth manifold, $\mathcal{P}\in {\rm Lip}(k+\alpha, L^\infty(M\rightarrow H))$, and the dictionary $\mathbb{D}\subset \mathcal{P}(\mathcal{M})$. Then there exists an $M > 0$ such that for $f\in B_1(\mathbb{D})$ we have \begin{equation} \inf_{f_n\in \Sigma_{n,M}(\mathbb{D})} \|f - f_n\|_{H} \lesssim n^{-\frac{1}{2} - \frac{k+\alpha}{d}}, \end{equation} where both $M$ and the implied constant are independent of $n$. \end{theorem} We note that although the implied constants here are independent of $n$, they may indeed be very large. The proof of this theorem is a higher-order generalization of the stratified sampling argument \cite{makovoz1996random,klusowski2018approximation}. Before proving this theorem, we note a corollary obtained when applying it to the dictionary $\mathbb{P}^d_k$. \begin{theorem}\label{relu-k-rate-corollary} Let $k\geq 0$. Then there exists an $M = M(k,d) > 0$ such that for all $f\in B_1(\mathbb{P}^d_k)$ we have \begin{equation} \inf_{f_n\in \Sigma_{n,M}(\mathbb{P}^d_k)} \|f - f_n\|_{L^2(B_1^d)} \lesssim_{k,d} n^{-\frac{1}{2}-\frac{2k+1}{2d}}. \end{equation} \end{theorem} \begin{proof} This follows immediately from Theorem \ref{upper-bound-theorem} given the smoothness condition of the map $\mathcal{P}^d_k$ defined in \eqref{p-k-parameterization-definition} and the fact that $S^{d-1}\times [-2,2]$ is a compact $d$-dimensional manifold.
\end{proof} Note also that the approximation rates for cosine networks obtained in \cite{siegel2020high} follow from Theorem \ref{upper-bound-theorem} by considering the parameterization \eqref{f-s-parameterization-definition}. \begin{proof}[Proof of Theorem \ref{upper-bound-theorem}] We apply Lemma \ref{image-of-union-of-cubes-lemma} to $\mathcal{P}$ and $\mathcal{M}$ to obtain a collection of maps $\mathcal{P}_j:C:=[-1,1]^d\rightarrow H$ such that $\mathbb{D}\subset \cup_{j=1}^T\mathcal{P}_j(C)$ and $\mathcal{P}_j\in {\rm Lip}(k+\alpha, L^\infty(C\rightarrow H))$. It suffices to prove the result for $\mathbb{D} = \mathbb{D}_j := \mathcal{P}_j(C)$, since $B_1(\mathbb{D}) \subset \text{conv}(\cup_{j=1}^T B_1(\mathbb{D}_j))$ and if $f = \alpha_1f_1 +\cdots + \alpha_Tf_T$ with $f_j\in B_1(\mathbb{D}_j)$, $\alpha_j \geq 0$ and $\sum_{j=1}^T \alpha_j= 1$, then \begin{equation} \inf_{f_n\in \Sigma_{Tn,M}(\mathbb{D})} \|f - f_n\|_{H} \leq \sum_{j=1}^T \alpha_j\inf_{f_{n,j}\in \Sigma_{n,M}(\mathbb{D}_j)} \|f_j - f_{n,j}\|_{H}, \end{equation} which easily follows by setting $f_n = \sum_{j=1}^T\alpha_jf_{n,j}$. So in what follows we consider $\mathbb{D} = \mathbb{D}_j$, $\mathcal{P} = \mathcal{P}_j$ and $\mathcal{M} = C$. In other words, we assume without loss of generality that $T=1$ (at the cost of introducing a constant which depends upon $T$ and thus upon $\mathcal{P}$ and $\mathcal{M}$). Now let $f\in B_1(\mathbb{D})$ and $\delta > 0$. Then there exists a convex combination (with potentially very large $N:=N_\delta$) \begin{equation}\label{eq-176} f_\delta = \sum_{i=1}^N a_id_i, \end{equation} with $d_i\in \mathbb{D}$, $\sum|a_i| \leq 1$, and $\|f - f_\delta\|_H < \delta$. Since $\mathbb{D} = \mathcal{P}(C)$, each $d_i = \mathcal{P}(x_i)$ for some $x_i\in C$, so we get \begin{equation} f_\delta = \sum_{i=1}^{N} a_i\mathcal{P}(x_i). \end{equation} We remark that in what follows all implied constants will be independent of $n$ and $\delta$. Let $n \geq 1$ be given and subdivide the cube $C$ into $n$ sub-cubes $C_1,...,C_n$ such that each $C_l$ has diameter $O(n^{-\frac{1}{d}})$. This can easily be done by considering a uniform subdivision in each direction. We proceed to approximate the map $\mathcal{P}$ by a piecewise polynomial on the subcubes $C_1,...,C_n$. To this end, let $z_1,...,z_{(k+1)^d}\in C$ be the $d$-fold tensor product of the roots of the Chebyshev polynomial of degree $k+1$ (any other interpolation points will do just as well). Further, let $p_1,...,p_{(k+1)^d}$ be the corresponding Lagrange polynomials satisfying $p_i(z_j) = \delta_{ij}$. We note that these polynomials span the space of polynomials whose monomials $z^\alpha$ satisfy $|\alpha|_\infty \leq k$, which in particular contains the space of polynomials of degree at most $k$. Considering the images of these points and polynomials on the sub-cube $C_l$, which we write $z_1^l,...,z_{(k+1)^d}^l$ and $p^l_1,...,p^l_{(k+1)^d}$, we rewrite $f_\delta$ as \begin{equation}\label{eq-194} f_\delta = \sum_{l=1}^{n}\sum_{x_i\in C_l}a_iP_l(x_i) + \sum_{l=1}^{n}\sum_{x_i\in C_l}a_iE_l(x_i), \end{equation} where the polynomial approximation is given by \begin{equation}\label{eq-198} P_l(x_i) = \sum_{m=1}^{(k+1)^d} \mathcal{P}(z_m^l)p_m^l(x_i), \end{equation} and the error in the approximation is given by \begin{equation} E_l(x_i) = \mathcal{P}(x_i) - P_l(x_i).
\end{equation} We now utilize the fact that in \eqref{eq-198} we only evaluate $\mathcal{P}$ at the fixed interpolation points $z_m^l$ and that the Lagrange polynomials are bounded to see that the first term of \eqref{eq-194} satisfies \begin{equation} g_{\delta,1} = \sum_{l=1}^{n}\sum_{x_i\in C_l}a_iP_l(x_i) \in \Sigma_{(k+1)^dn,M}(\mathbb{D}), \end{equation} for some constant $M = M(k,d) = \sup_{x\in C}\sum_{m=1}^{(k+1)^d} |p_m(x)|$. The next step is to bound the error $E_l(x_i)$ uniformly and to apply a sampling argument to the second term in \eqref{eq-194}. To bound $E_l(x_i)$ we use the smoothness of the parameterization $\mathcal{P}_j$ and a standard Bramble-Hilbert lemma \cite{bramble1970estimation} type argument from numerical analysis (see also \cite{xu1982error} for instance). There are two important points concerning the bound here. First, we are dealing with Hilbert space valued functions, and second, it is important that we are bounding the error relative to the interpolation polynomial instead of to averaged Taylor polynomials, as is commonly done when proving the Bramble-Hilbert lemma. We proceed as follows, assuming $k \geq 1$ (when $k=0$ the error bound below follows directly from the H\"older continuity of $\mathcal{P}$): consider any point $x\in C_l$ and let $x_m(t) = x(1-t) + tz^l_m$ be the line segment from $x$ to the Lagrange point $z^l_m$. Using the differentiability of the parameterization map $\mathcal{P}$, we get \begin{equation} \mathcal{P}(z^l_m) - \mathcal{P}(x) = \mathcal{P}(x_m(1)) - \mathcal{P}(x_m(0)) = \sum_{s=1}^k \frac{1}{s!}r_{m}^{(s)}(0) + \frac{1}{(k-1)!}\int_0^1 (1-t)^{k-1}[r_{m}^{(k)}(t) - r_{m}^{(k)}(0)]dt, \end{equation} where $r_{m}(t) = \mathcal{P}(x_m(t))$. We now calculate that \begin{equation} r_{m}^{(s)}(t) = D^s\mathcal{P}(x_m(t))\cdot(z_m^l - x)^{\otimes s}. \end{equation} This gives us that \begin{equation} \mathcal{P}(z^l_m) - \mathcal{P}(x) = \sum_{s=1}^k \frac{1}{s!}D^s\mathcal{P}(x)\cdot(z_m^l - x)^{\otimes s} + \frac{1}{(k-1)!}\left(\int_0^1 (1-t)^{k-1}[D^k\mathcal{P}(x_m(t)) - D^k\mathcal{P}(x)]dt\right)\cdot(z_m^l - x)^{\otimes k}. \end{equation} We now multiply this equation by the interpolation weights $p_m^l(x)$, sum over $m$, and use the following standard facts. First, for every $x$ \begin{equation} \sum_{m=1}^{(k+1)^d} p_m^l(x) = 1, \end{equation} since this is just the interpolation of the constant function $1$. Second, for $s=1,...,k$ and every $x$ we have \begin{equation} \sum_{m=1}^{(k+1)^d} p_m^l(x)(z_m^l - x)^{\otimes s} = 0, \end{equation} since this is the interpolation of the function $g(z) = (z-x)^{\otimes s}$ evaluated at $x$. Note that $g$ is a polynomial of degree at most $k$ (hence is reproduced exactly) and vanishes at $x$. This gives the bound \begin{equation} \|E_l(x)\|_H = \|\mathcal{P}(x) - P_l(x)\|_H \leq \frac{1}{(k-1)!}\sum_{m=1}^{(k+1)^d} |p_m^l(x)||z_m^l - x|^k\int_0^1(1-t)^{k-1}\|D^k\mathcal{P}(x_m(t)) - D^k\mathcal{P}(x)\|dt. \end{equation} Finally, we use the fact that the Lagrange polynomials are bounded, combined with the smoothness condition on the parameterization $\mathcal{P}$ and the fact that $|z_m^l - x|$ is at most the diameter of $C_l$ (which is $O(n^{-\frac{1}{d}})$ by construction) to conclude that \begin{equation} \|E_l(x)\|_H \lesssim n^{-\frac{k+\alpha}{d}}.
\end{equation} We use this bound, combined with the sampling argument of Lemma 1 in \cite{barron1993universal} (essentially the approximation rate \eqref{fundamental-bound}), to conclude that there exists an $n$-term convex combination \begin{equation} g_{\delta,2} = \frac{1}{n}\sum_{s=1}^n E_{l_s}(x_{i_s}), \end{equation} such that \begin{equation} \left\|g_{\delta,2} - \sum_{l=1}^{n}\sum_{x_{i}\in C_l}a_{i}E_l(x_{i})\right\|_H \lesssim n^{-\frac{1}{2}-\frac{k+\alpha}{d}}. \end{equation} Adding this to $g_{\delta,1}$, we get \begin{equation} f_n = g_{\delta,1} + g_{\delta,2}\in \Sigma_{[(k+1)^d+1]n,2M}(\mathbb{D}), \end{equation} which satisfies \begin{equation} \|f_\delta - f_n\|_H \lesssim n^{-\frac{1}{2}-\frac{k+\alpha}{d}}. \end{equation} Since $\delta > 0$ was arbitrary, this yields the desired result. \end{proof} \subsection{Entropy bounds for smoothly parameterized dictionaries} Next, we bound the entropy of $B_1(\mathbb{D})$ for smoothly parameterized dictionaries $\mathbb{D}$. We have the following theorem. \begin{theorem}\label{entropy-upper-bound-theorem} Let $k \geq 0$ be an integer and $\alpha\in (0,1]$. Suppose that $\mathcal{M}$ is a compact $d$-dimensional smooth manifold, $\mathcal{P}\in {\rm Lip}(k+\alpha, L^\infty(M\rightarrow H))$, and the dictionary $\mathbb{D}\subset \mathcal{P}(\mathcal{M})$. Then \begin{equation}\label{entropy-bound-equation} \epsilon_n(B_1(\mathbb{D})) \lesssim n^{-\frac{1}{2} - \frac{k+\alpha}{d}}, \end{equation} where the implied constant is independent of $n$. \end{theorem} Combining Theorem \ref{upper-bound-theorem} with Lemma \ref{entropy-lemma}, we obtain a bound of $\epsilon_{n\log{n}}(B_1(\mathbb{D})) \lesssim n^{-\frac{1}{2} - \frac{k+\alpha}{d}}$. The content of Theorem \ref{entropy-upper-bound-theorem} is to show that the logarithmic factor can be removed with a much more careful analysis. Before proving this theorem, we note that by using the smoothness of the parameterizations \eqref{p-k-parameterization-definition} and \eqref{f-s-parameterization-definition}, we obtain \begin{equation} \epsilon_n(B_1(\mathbb{P}_k^d)) \lesssim_{k,d} n^{-\frac{1}{2} - \frac{2k+1}{2d}},~\epsilon_n(B_1(\mathbb{F}_s^d)) \lesssim_{s,d} n^{-\frac{1}{2} - \frac{s}{d}}. \end{equation} Further, Theorem 4.1 in \cite{cohen2020optimal} implies that these rates can be attained with a stable (i.e. Lipschitz) non-linear approximation scheme. In the proof of Theorem \ref{entropy-upper-bound-theorem}, which draws heavily on the ideas in \cite{ball1990entropy}, it will be convenient to use the notion of entropy numbers of an operator $T:X\rightarrow Y$ between two Banach spaces $X$ and $Y$, which we briefly recall. For such an operator $T$, we simply define $\epsilon_n(T) = \epsilon_n(T(B_X))$ where $B_X = \{x\in X:~\|x\|_X\leq 1\}$ is the unit ball in $X$. This notion has been studied extensively and measures the degree of compactness of the operator $T$. We will make use of the following two lemmas in the proof of Theorem \ref{entropy-upper-bound-theorem}. The first is well-known and is simply due to the triangle inequality and the definition of the entropy. \begin{lemma}\label{triangle-inequality-entropy-lemma} Let $S,T:X\rightarrow Y$. Then for any $0 \leq m\leq n$ \begin{equation} \epsilon_n(S+T) \leq \epsilon_m(S) + \epsilon_{n-m}(T). \end{equation} \end{lemma} The second, due to Carl (Proposition 1 in \cite{carl1985inequalities}), is the following bound on the entropy of operators whose domain is a finite-dimensional $\ell^1$-space.
\begin{lemma}\label{carls-lemma} Let $T:\ell_1^n\rightarrow H$, where $\ell_1^n$ is the $n$-dimensional $\ell^1$ space (i.e. $\mathbb{R}^n$ with the $\ell^1$-norm) and $H$ is a Hilbert space. Then \begin{equation} \epsilon_m(T) \lesssim \begin{cases} \|T\| & m=0 \\ \sqrt{1+\log{\frac{n}{m}}}m^{-\frac{1}{2}}\|T\| & 1\leq m\leq n \\ 2^{-\frac{m}{n}}n^{-\frac{1}{2}}\|T\| & m\geq n. \end{cases} \end{equation} (Note here the implied constant is absolute.) \end{lemma} \begin{proof}[Proof of Theorem \ref{entropy-upper-bound-theorem}] As in the proof of Theorem \ref{upper-bound-theorem}, we apply Lemma \ref{image-of-union-of-cubes-lemma} to $\mathcal{P}$ and $\mathcal{M}$ to obtain a collection of maps $\mathcal{P}_j:C:=[-1,1]^d\rightarrow H$ such that $\mathbb{D}\subset \cup_{j=1}^T\mathcal{P}_j(C)$ and $\mathcal{P}_j\in {\rm Lip}(k+\alpha, L^\infty(C\rightarrow H))$. It suffices to prove the result for $\mathbb{D} = \mathbb{D}_j := \mathcal{P}_j(C)$, since $B_1(\mathbb{D}) \subset \sum_{j=1}^T B_1(\mathbb{D}_j)$ and so by Lemma \ref{triangle-inequality-entropy-lemma} \begin{equation} \epsilon_{Tn}(B_1(\mathbb{D})) \leq \sum_{j=1}^T \epsilon_n(B_1(\mathbb{D}_j)). \end{equation} So in what follows, we assume without loss of generality that $T=1$, i.e. that $\mathcal{M} = C$ (doing so introduces at most a constant independent of $n$). Consider the $\ell_1$ space on the set $C = [-1,1]^d$, i.e. \begin{equation} \ell_1(C) = \left\{f:C\rightarrow \mathbb{R}:~\|f\|_{1,C} := \sup \left\{\sum_{i=1}^N |f(x_i)|:~\text{$x_1,...,x_N\in C$ are distinct}\right\} < \infty\right\}, \end{equation} and its unit ball $B_1(\ell_1(C)) = \{f\in \ell_1(C):~\|f\|_{1,C} \leq 1\}$. We observe that $B_1(\mathbb{D}) = \overline{\mathcal{S}(B_1(\ell_1(C)))}$, where the operator $\mathcal{S}:\ell_1(C)\rightarrow H$ is given by $\mathcal{S}(f) = \sum_{x\in C} f(x)\mathcal{P}(x)$. Since the entropy numbers do not change when taking the closure, the problem is reduced to bounding the entropy numbers of the operator $\mathcal{S}$. We do this by decomposing $\mathcal{S} = \sum_{i=1}^\infty \mathcal{S}_i$ as follows. For each $i$, consider a decomposition of the cube $C$ into $N_i = 2^{id}$ subcubes $C_1,...,C_{N_i}$ with side length $2^{1-i}$. As in the proof of Theorem \ref{upper-bound-theorem}, we introduce the interpolation points $z_1^l,...,z_{(k+1)^d}^l$ and Lagrange polynomials $p_1^l,...,p_{(k+1)^d}^l$ on each subcube $C_l$. Given a function $\mathcal{F}:C\rightarrow H$, consider the piecewise polynomial interpolation of $\mathcal{F}$ on the cubes $C_1,...,C_{N_i}$, which we denote by \begin{equation} \pi_i(\mathcal{F})(x) = \sum_{m=1}^{(k+1)^d}p_m^l(x)\mathcal{F}(z_m^l)~\text{for $x\in C_l$}. \end{equation} Further, we let $\pi_0(\mathcal{F}) = 0$. We now set \begin{equation} \mathcal{S}_i(f) = \sum_{x\in C} f(x)(\pi_i(\mathcal{P}) - \pi_{i-1}(\mathcal{P}))(x). \end{equation} It is evident that $\sum_{i=1}^\infty \mathcal{S}_i(f) = \lim_{i\rightarrow \infty} \sum_{x\in C} f(x)\pi_i(\mathcal{P})(x) = \sum_{x\in C} f(x)\mathcal{P}(x) = \mathcal{S}(f)$. Since $\pi_j(\mathcal{P})$ is a piecewise polynomial on $C_1,...,C_{N_j}$ and the cubes $C_1,...,C_{N_i}$ for $i > j$ refine the cubes $C_1,...,C_{N_j}$, we clearly have $\pi_i(\pi_j(\mathcal{F})) = \pi_j(\mathcal{F})$ whenever $i > j$. This allows us to write \begin{equation} \mathcal{S}_i(f) = \sum_{x\in C} f(x)(\pi_i(\mathcal{P} - \pi_{i-1}(\mathcal{P})))(x).
\end{equation} From this, we see that each $\mathcal{S}_i$ factors through an $\ell^1$ subspace of dimension $n_i := (k+1)^d2^{id}$. Namely, $\mathcal{S}_i = \mathcal{U}_i\circ \mathcal{V}_i$, where $\mathcal{V}_i:\ell_1(C)\rightarrow \ell_1^{n_i}$ is given by \begin{equation} \mathcal{V}_i(f) = \left(\sum_{x\in C_l}f(x)p_m^l(x)\right)_{(m,l)}, \end{equation} and $\mathcal{U}_i: \ell_1^{n_i}\rightarrow H$ is given by \begin{equation} \mathcal{U}_i((y_{(m,l)})) = \sum_{l=1}^{N_i} \sum_{m=1}^{(k+1)^d} y_{(m,l)} \left[\mathcal{P}(z_m^l) - \pi_{i-1}(\mathcal{P})(z_{m}^l)\right]. \end{equation} Here the indexing set $(m,l)$ runs over $\{1,...,(k+1)^d\}\times \{1,...,N_i\}$. From this it is evident that $\|\mathcal{V}_i\| \leq M = M(k,d) = \sup_{x\in C}\sum_{m=1}^{(k+1)^d} |p_m(x)|$. Furthermore, via a Bramble-Hilbert lemma \cite{bramble1970estimation} type argument analogous to that in the proof of Theorem \ref{upper-bound-theorem}, we get that \begin{equation} \|\mathcal{P}(z_m^l) - \pi_{i-1}(\mathcal{P})(z_{m}^l)\|_H \lesssim 2^{-i(k+\alpha)}, \end{equation} since $\mathcal{P}$ is of smoothness class $k+\alpha$ and the diameter of each of the cubes $C_1,...,C_{N_i}$ is $O(2^{-i})$. This means that $\|\mathcal{U}_i\| \lesssim 2^{-i(k+\alpha)}$. Note that here and in what follows all implied constants are independent of $i$. We now bound, using Lemma \ref{triangle-inequality-entropy-lemma}, \begin{equation}\label{eq-912} \epsilon_n(\mathcal{S}) \leq \sum_{i=1}^\infty \epsilon_{m_i}(\mathcal{S}_i)\leq \sum_{i=1}^\infty \|\mathcal{V}_i\|\epsilon_{m_i}(\mathcal{U}_i) \leq M\sum_{i=1}^\infty\epsilon_{m_i}(\mathcal{U}_i), \end{equation} for any $m_i$ which satisfy $\sum_{i=1}^\infty m_i \leq n$ (the $\leq$ can be taken by monotonicity of the entropy). Now let $K = (k+1)^d$ and $c > 0$ be a fixed integer to be specified later. Note that by the monotonicity of the entropy numbers it suffices to prove \eqref{entropy-bound-equation} for $n = K2^{rd}$ for integers $r \geq 2c$. This corresponds to showing that \begin{equation} \epsilon_n(\mathcal{S}) \lesssim 2^{-r\left(k+\alpha+\frac{d}{2}\right)} \end{equation} for $n = K2^{rd}$, where here and in what follows the implied constant is independent of $r$ (and also $i$, if applicable). To simplify the notation in what follows, we set $\beta = k+\alpha+\frac{d}{2}$. Further, we introduce two indices $i_1 = r-c$ and $i_2 = \lfloor r\beta(k+\alpha)^{-1}\rfloor$. In equation \eqref{eq-912} we set \begin{equation} m_i = \begin{cases} K\lceil(r-i)(\beta + 1)2^{id}\rceil & 1\leq i\leq i_1 \\ K\lfloor2^{i_1d - (i - i_1)\delta}\rfloor & i_1 < i\leq i_2 \\ 0 & i > i_2, \end{cases} \end{equation} where $\delta = \frac{d}{2}\left(1 + \frac{d}{2(k+\alpha)}\right)^{-1} > 0$. This choice of $\delta$ ensures that, since $r \geq 2c$, \begin{equation} \begin{split} i_1d - (i_2 - i_1)\delta \geq i_1d - i_2\delta &\geq (r-c)d - r\beta(k+\alpha)^{-1}\delta \\ & = (r - c)d - r\delta\left(1 + \frac{d}{2(k+\alpha)}\right) \\ & \geq r\left(\frac{d}{2} - \delta\left(1 + \frac{d}{2(k+\alpha)}\right)\right) \geq 0. \end{split} \end{equation} This means that for $i_1 < i\leq i_2$, we have $2^{i_1d - (i - i_1)\delta} \geq 1$ so that \begin{equation}\label{delta-choice-bound} \lfloor2^{i_1d - (i - i_1)\delta}\rfloor \geq 2^{i_1d - (i - i_1)\delta - 1}. \end{equation} In addition, it is easy to verify that $\delta < k+\alpha$. These facts will be important later when we bound the sum in \eqref{eq-912}.
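Before carrying out the estimates, we record a quick numerical sanity check of this bookkeeping. The following short Python sketch is purely illustrative and does not enter the proof; it chooses the smallest $c$ satisfying the two smallness conditions on $c$ used below, and verifies that $\sum_i m_i \leq n = K2^{rd}$ for a few sample values of $k,\alpha,d$ and $r$ (all concrete parameter values are arbitrary).
\begin{verbatim}
# Numerical sanity check (illustration only) that sum_i m_i <= n = K 2^{rd}
# for the choice of m_i above, with c chosen via the two smallness
# conditions used below.  Parameter values are arbitrary.
import math

def check(k, alpha, d, r_values):
    K = (k + 1) ** d
    beta = k + alpha + d / 2
    delta = (d / 2) / (1 + d / (2 * (k + alpha)))
    c = 1
    while ((2 * (beta + 1) * (c + 1) + 1) * 2 ** (-c * d) > 0.5
           or 2 ** (-delta) / (1 - 2 ** (-delta)) * 2 ** (-c * d) > 0.5):
        c += 1
    for r in r_values:
        assert r >= 2 * c
        i1, i2 = r - c, math.floor(r * beta / (k + alpha))
        total = 0
        for i in range(1, i2 + 1):
            if i <= i1:
                total += K * math.ceil((r - i) * (beta + 1) * 2 ** (i * d))
            else:
                total += K * math.floor(2 ** (i1 * d - (i - i1) * delta))
        n = K * 2 ** (r * d)
        print(f"k={k}, alpha={alpha}, d={d}, c={c}, r={r}: "
              f"sum(m_i)={total} <= n={n}: {total <= n}")

check(k=1, alpha=0.5, d=2, r_values=[10, 12])
check(k=0, alpha=0.5, d=3, r_values=[8, 10])
\end{verbatim}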
We proceed to check that for $c$ sufficiently large (independently of $r$) we can ensure that $\sum_{i=1}^\infty m_i \leq n$. To this end, we calculate \begin{equation}\label{eq-932} \sum_{i=1}^\infty m_i = K\sum_{i=1}^{i_1} \lceil(r-i)(\beta + 1)2^{id}\rceil + K\sum_{i=i_1+1}^{i_2} \lfloor2^{i_1d - (i - i_1)\delta}\rfloor. \end{equation} The first sum above is bounded by (recall that $i_1 = r-c$) \begin{equation} \sum_{i=1}^{i_1} K\lceil(r-i)(\beta + 1)2^{id}\rceil \leq K\left[(r-c) + (\beta + 1)\sum_{i=1}^{r-c} (r-i)2^{id}\right] \leq K[2(\beta + 1)(c+1) + 1]2^{(r-c)d}, \end{equation} by noting that $(r-c) \leq 2^{(r-c)d}$ and by writing $$\sum_{i=1}^{r-c} (r-i)2^{id} = c\sum_{i=1}^{r-c} 2^{id} + \sum_{i=1}^{r-c-1}\sum_{j=1}^{i}2^{jd},$$ and bounding the geometric series. The second sum in \eqref{eq-932} is bounded by (again recall that $i_1 = r-c$) \begin{equation} \sum_{i=i_1+1}^{i_2} \lfloor2^{i_1d - (i - i_1)\delta}\rfloor \leq \sum_{i=i_1+1}^{\infty} 2^{i_1d - (i - i_1)\delta} = 2^{-\delta}(1 - 2^{-\delta})^{-1}2^{(r-c)d}. \end{equation} Thus, if we choose $c$ large enough so that $$ [2(\beta + 1)(c+1) + 1]2^{-cd} \leq \frac{1}{2}~\text{and}~2^{-\delta}(1 - 2^{-\delta})^{-1}2^{-cd} \leq \frac{1}{2}, $$ then we will have that $\sum_{i=1}^\infty m_i \leq K2^{rd} = n$. (Note that such a $c$ can be chosen independently of $r$.) Finally, we bound the sum in equation \eqref{eq-912} using Lemma \ref{carls-lemma}. We note that for $i \leq i_1 = r-c$, we have $$m_i = K\lceil(r-i)(\beta + 1)2^{id}\rceil \geq K2^{id} = n_i.$$ Thus Lemma \ref{carls-lemma} gives the bound (recall that $\|\mathcal{U}_i\|\lesssim 2^{-i(k+\alpha)}$) \begin{equation} \begin{split} \epsilon_{m_i}(\mathcal{U}_i) \lesssim 2^{-\frac{m_i}{n_i}}n_i^{-\frac{1}{2}}\|\mathcal{U}_i\| \lesssim 2^{-(r-i)(\beta + 1)}\sqrt{2^{-id}}2^{-i(k+\alpha)} &= 2^{-(r-i)(\beta + 1)}2^{-i\beta} \\ & = 2^{-r\beta}2^{-(r-i)}, \end{split} \end{equation} since $\frac{m_i}{n_i} \geq (r-i)(\beta + 1)$. Similarly, for $i_1 < i\leq i_2$, we note that $$m_i = K\lfloor2^{i_1d - (i - i_1)\delta}\rfloor \leq K2^{id} = n_i,$$ and thus Lemma \ref{carls-lemma}, using \eqref{delta-choice-bound}, gives the bound \begin{equation} \begin{split} \epsilon_{m_i}(\mathcal{U}_i) \lesssim \sqrt{1+\log{\frac{n_i}{m_i}}}m_i^{-\frac{1}{2}}\|\mathcal{U}_i\| &\lesssim \sqrt{2 + (i-i_1)(d + \delta)}2^{-i_1\frac{d}{2}+(i-i_1)\frac{\delta}{2}}2^{-i(k+\alpha)} \\ & = \sqrt{2 + (i-i_1)(d + \delta)}2^{-i_1\left(\frac{d}{2}+k+\alpha\right)}2^{-(i-i_1)\left(k+\alpha - \frac{\delta}{2}\right)} \\ & \lesssim 2^{-r\beta}\sqrt{2 + (i-i_1)(d + \delta)}2^{-(i-i_1)\left(k+\alpha - \frac{\delta}{2}\right)}. \end{split} \end{equation} Here we have used that $\log{\frac{n_i}{m_i}} \leq 1 + (i-i_1)(d + \delta)$, which follows from \eqref{delta-choice-bound}, and that $i_1 = r-c$ and $\beta = \frac{d}{2}+k+\alpha$, so that $2^{-i_1\left(\frac{d}{2}+k+\alpha\right)} \lesssim 2^{-r\beta}$. Finally, if $i > i_2$, then $m_i = 0$ so that Lemma \ref{carls-lemma} implies that \begin{equation} \epsilon_{m_i}(\mathcal{U}_i) \leq \|\mathcal{U}_i\| \lesssim 2^{-i(k+\alpha)}.
\end{equation} Plugging these bounds into equation \eqref{eq-912}, we get \begin{equation}\label{eq-998} \begin{split} \epsilon_n(\mathcal{S}) & \lesssim 2^{-r\beta}\sum_{i=1}^{i_1}2^{-(r-i)} + 2^{-r\beta}\sum_{i=i_1+1}^{i_2}\sqrt{2 + (i-i_1)(d + \delta)}2^{-(i-i_1)\left(k+\alpha - \frac{\delta}{2}\right)} + \sum_{i=i_2+1}^\infty2^{-i(k+\alpha)} \\ & \leq 2^{-r\beta}\sum_{i=c}^{\infty}2^{-i} + 2^{-r\beta}\sum_{i=1}^{\infty}\sqrt{2 + i(d + \delta)}2^{-i\left(k+\alpha - \frac{\delta}{2}\right)} + \sum_{i=i_2+1}^\infty2^{-i(k+\alpha)}. \end{split} \end{equation} Finally, we have the following bounds: \begin{equation} \begin{split} &\sum_{i=c}^{\infty}2^{-i} \leq 2^{1-c} \leq 1,\\ &\sum_{i=1}^{\infty}\sqrt{2 + i(d + \delta)}2^{-i\left(k+\alpha - \frac{\delta}{2}\right)} \lesssim 1, \\ &\sum_{i=i_2+1}^\infty2^{-i(k+\alpha)} \lesssim 2^{-i_2(k+\alpha)} \lesssim 2^{-r\beta(k+\alpha)^{-1}(k+\alpha)} = 2^{-r\beta}, \end{split} \end{equation} by summing geometric series and using that $k+\alpha - \frac{\delta}{2} > 0$ since $\delta < k+\alpha$. Plugging these bounds into \eqref{eq-998}, we finally get for $n=K2^{rd}$ \begin{equation} \epsilon_n(\mathcal{S}) \lesssim 2^{-r\beta} = 2^{-r\left(k+\alpha+\frac{d}{2}\right)}, \end{equation} where, importantly, the implied constant is independent of $r$ (and thus $n$). This completes the proof. \end{proof} \section{Lower bounds for ridge function dictionaries}\label{main-result-2-section} In this section, we consider lower bounds on the entropy of convex subsets $A$ of $L^2(B_1^d)$. We show that if $A$ contains a certain class of ridge functions, then its entropy must be bounded below. This result is useful for analyzing the entropy of $B_1(\mathbb{D})$ when $\mathbb{D}$ is a dictionary of ridge functions. We begin with a general lemma which is useful for lower bounding the entropy numbers of convex subsets of a Hilbert space. This lemma is a modest generalization of Lemma 3 in \cite{makovoz1996random}. A slightly different version has also appeared in \cite{klusowski2018approximation} in the context of lower bounding approximation rates of ReLU networks. The proofs given in these references rely on a combinatorial lemma which concerns covering numbers of the cube by Hamming balls (see Lemma 8 in \cite{lorentz1966metric} or Lemma 2.2 of Chapter 15 in \cite{lorentz1996constructive}, for instance). For completeness, we provide here a simpler proof which we found more enlightening. \begin{lemma}\label{lower-eigenvalue-lemma} Let $H$ be a Hilbert space and $A\subset H$ a convex and symmetric set. Suppose that $g_1,...,g_n\in A$. Then \begin{equation}\label{lemma-lower-bound} \epsilon_{n}(A)\geq \frac{1}{2}\sqrt{\frac{\lambda_{min}}{n}}, \end{equation} where $\lambda_{min}$ is the smallest eigenvalue of the Gram matrix $G$ defined by $G_{ij} = \langle g_i,g_j\rangle_H$. \end{lemma} \begin{proof} Consider a maximal set of points $x_1,...,x_N\in b_1^n(0,1):=\{x\in \mathbb{R}^n:~|x|_1\leq 1\}$ in the $\ell^1$-unit ball satisfying $|x_i - x_j|_1 \geq \frac{1}{2}$ for each $i\neq j$. We claim that $N \geq 2^n$. Indeed, if the set $\{x_i\}_{i=1}^N$ is maximal, then the balls $$b^n_1(x_i,1/2) = \left\{x\in \mathbb{R}^n:~|x-x_i|_1\leq \frac{1}{2}\right\}$$ must cover the ball $b_1^n(0,1)$. This implies that \begin{equation} \sum_{i=1}^N |b^n_1(x_i,1/2)| \geq |b_1^n(0,1)|. \end{equation} Since we obviously have $|b^n_1(x_i,1/2)| = (1/2)^n|b_1^n(0,1)|$ for each $i$, it follows that $N \geq 2^n$.
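(For example, when $n=2$ the four points $(\pm 1,0)$ and $(0,\pm 1)$ lie in $b_1^2(0,1)$ and are pairwise separated in the $\ell^1$-distance by $2 \geq \frac{1}{2}$, which already gives the required $2^n = 4$ separated points in this simple case.)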
Consider the collection of elements $f_1,...,f_N\in H$ defined by \begin{equation} f_i = \sum_{k=1}^nx^k_ig_k. \end{equation} Since $A$ is symmetric and convex, we have $f_i\in A$ for each $i=1,...,N$. Moreover, if $i\neq j$, then \begin{equation} \|f_i-f_j\|^2_H = v^T_{ij}Gv_{ij}, \end{equation} where $v_{ij} = x_i - x_j$. Since $|x_i - x_j|_1 \geq \frac{1}{2}$, it follows from H\"older's inequality that $|v_{ij}|^2_2 \geq \frac{1}{4n}$. From the lower bound $v_{ij}^TGv_{ij} \geq \lambda_{min}|v_{ij}|^2_2$ we then see that $\|f_i-f_j\|^2_H \geq \frac{\lambda_{min}}{4n}$ for all $i\neq j$. This gives the lower bound \eqref{lemma-lower-bound}. \end{proof} This lemma can be applied to sequences of almost orthogonal vectors to obtain Lemma 3 from \cite{makovoz1996random}, which we state here as a corollary for completeness. \begin{corollary}\label{entropy-lower-bound-corollary} Let $H$ be a Hilbert space and $A\subset H$ a convex and symmetric set. Suppose that $g_1,...,g_n\in A$ and that the $g_i$ are almost orthogonal in the sense that for all $i = 1,...,n$, \begin{equation}\label{diagonal-dominant} \sum_{j\neq i}|\langle g_i,g_j\rangle_H| \leq \frac{1}{2}\|g_i\|_H^2. \end{equation} Then \begin{equation} \epsilon_{n}(A)\geq \frac{\min_i \|g_i\|_H}{\sqrt{8n}}. \end{equation} \end{corollary} \begin{proof} This follows from Lemma \ref{lower-eigenvalue-lemma} if we can show that the Gram matrix $G$ satisfies \begin{equation} \lambda_{min}(G) \geq \frac{1}{2}\min_i \|g_i\|^2_H. \end{equation} This follows immediately from the diagonal dominance condition \eqref{diagonal-dominant} and the Gerschgorin circle theorem (see the proof in \cite{makovoz1996random} for details). \end{proof} Using these results, it is a relatively simple matter to obtain lower bounds on the entropy of $B_1(\mathbb{F}_s^d)$. \begin{proposition} Let $d \geq 1$. Then \begin{equation} \epsilon_n(B_1(\mathbb{F}_s^d)) \gtrsim_{s,d} n^{-\frac{1}{2}-\frac{s}{d}}. \end{equation} \end{proposition} \begin{proof} Consider the cube $C = [-d^{-\frac{1}{2}},d^{-\frac{1}{2}}]^d\subset B_1^d$. It suffices to lower bound $\epsilon_n(B_1(\mathbb{F}_s^d))$ with respect to $H = L^2(C)$. For this, we consider the collection of functions $g_\xi(x) = (1 + \sqrt{d}|\xi|)^{-s}e^{2\pi i \sqrt{d}\xi \cdot x}\in B_1(\mathbb{F}_s^d)$ for $\xi\in \mathbb{Z}^d$ with $|\xi|_\infty \leq N$. This collection of functions is clearly orthogonal, and so satisfies the condition of Corollary \ref{entropy-lower-bound-corollary}. Thus we get \begin{equation} \epsilon_n(B_1(\mathbb{F}_s^d)) \gtrsim n^{-\frac{1}{2}}\min_{\xi}\|g_{\xi}\|_H = n^{-\frac{1}{2}}(1+dN)^{-s} \gtrsim_{s,d} n^{-\frac{1}{2}}N^{-s}, \end{equation} where $n = (2N+1)^d$ is the total number of functions $g_\xi$. Thus $N \lesssim_d n^{\frac{1}{d}}$ and so \begin{equation} \epsilon_n(B_1(\mathbb{F}_s^d))\gtrsim_{s,d} n^{-\frac{1}{2}-\frac{s}{d}}. \end{equation} This only holds a priori for $n$ of the form $n = (2N+1)^d$, but the monotonicity of the entropy extends the bound to all $n$. \end{proof} Since $\mathcal{K}_1(\mathbb{F}_{k+1}^d)\subset \mathcal{K}_1(\mathbb{P}_k^d)$, this result immediately implies that $\epsilon_n(B_1(\mathbb{P}_k^d)) \gtrsim_{k,d} n^{-\frac{1}{2}-\frac{k+1}{d}}$, as observed in \cite{makovoz1996random,klusowski2018approximation}. However, it is known that this inclusion is strict \cite{wojtowytsch2020representation}, and so it may be possible to get a better lower bound on $\epsilon_n(B_1(\mathbb{P}_k^d))$ which does not come from $\mathcal{K}_1(\mathbb{F}_{k+1}^d)$.
This would separate the two spaces in a quantifiable way, showing precisely how much larger $\mathcal{K}_1(\mathbb{P}_k^d)$ is. The first such improved lower bound on $\epsilon_n(B_1(\mathbb{P}_k^d))$ was obtained by Makovoz \cite{makovoz1996random} in the case $k=0$, $d=2$, and it has been conjectured that an improved lower bound holds more generally. We settle this conjecture by deriving an improved lower bound for all $k \geq 0$ and $d\geq 2$, which requires a much more careful analysis. \begin{theorem}\label{lower-bound-theorem} Let $d \geq 2$, $k\geq 0$, and $A\subset L^2(B_1^d)$ be a convex and symmetric set. Suppose that for every profile $\phi\in C_c^\infty([-2,2])$ such that $\|\phi^{(k+1)}\|_{L^1(\mathbb{R})}\leq 1$, and any direction $\omega\in S^{d-1}$, the ridge function $\phi(\omega\cdot x)\in L^2(B_1^d)$ satisfies \begin{equation} \phi(\omega\cdot x)\in A. \end{equation} Then \begin{equation} \epsilon_n(A) \gtrsim_{k,d} n^{-\frac{1}{2}-\frac{2k+1}{2d}}. \end{equation} \end{theorem} The argument we give here adapts the argument in the proof of Theorem 4 in \cite{makovoz1996random}. A careful analysis allows us to extend the result to higher dimensions and remove a logarithmic factor. The key is to consider profiles $\phi$ whose higher order moments vanish. Before we give the proof, we observe that the Peano kernel formula \begin{equation} \phi(x) = \frac{1}{k!}\int_{-2}^2 \phi^{(k+1)}(t)[\max(0,x-t)]^kdt = \frac{1}{k!}\int_{-2}^2 \phi^{(k+1)}(t)\sigma_k(x-t)dt, \end{equation} which holds for all $\phi\in C_c^\infty([-2,2])$, implies that for a constant $C = C(k,d)$, the unit ball $CB_1(\mathbb{P}^d_k)$ satisfies the conditions of Theorem \ref{lower-bound-theorem}. This yields the following result. \begin{theorem}\label{relu-k-lower-bound-corollary} Let $d \geq 2$. Then \begin{equation} \epsilon_n(B_1(\mathbb{P}^d_k)) \gtrsim_{k,d} n^{-\frac{1}{2}-\frac{2k+1}{2d}}. \end{equation} \end{theorem} \begin{proof}[Proof of Theorem \ref{lower-bound-theorem}] We introduce the weight $$d\mu = (1-|x|^2)_+^{\frac{d}{2}}dx$$ of Bochner-Riesz type on $B_1^d$ and consider the space $H = L^2(B_1^d,d\mu)$. Since $1-|x|^2 \leq 1$, it follows that $\|f\|_H \leq \|f\|_{L^2(B_1^d)}$, and so it suffices to lower bound the entropy of $A$ with respect to the weighted space $H$. Choose $0\neq \psi\in C^\infty_c([-1,1])$ such that its first $2d-1$ moments vanish, i.e. such that \begin{equation} \int_{-1}^1 x^r\psi(x)dx = 0, \end{equation} for $r=0,...,2d-2$. Such a function $\psi$ can easily be obtained by convolving a compactly supported function whose first $2d-1$ moments vanish (such as the Legendre polynomial of degree $2d-1$, viewed as supported on $[-1,1]$) with a $C^\infty$ bump function, and rescaling the support. Our assumptions on the set $A$ imply that by scaling $\psi$ appropriately, we can ensure that for $0 < \delta < 1$ \begin{equation} \delta^{k}\psi(\delta^{-1}\omega\cdot x + b)\in A, \end{equation} for any $\omega\in S^{d-1}$ and $b\in[-\delta^{-1},\delta^{-1}]$. Note that $\psi$, which will be fixed in what follows, depends upon both $d$ and $k$. Let $N \geq 1$ be an integer and fix $n = N^{d-1}$ directions $\omega_1,...,\omega_n\in S^{d-1}$ with $\min(|\omega_i - \omega_j|_2, |\omega_i + \omega_j|_2) \gtrsim_d N^{-1}$. This can certainly be done since projective space $P^{d-1} = S^{d-1}/\{\pm 1\}$ has dimension $d-1$. In particular, if $\omega_1,...,\omega_n$ is a maximal set satisfying $\min(|\omega_i - \omega_j|_2, |\omega_i + \omega_j|_2) \geq cN^{-1}$, then balls of radius $cN^{-1}$ centered at the $\omega_i$ must cover $P^{d-1}$.
So we must have $n = \Omega(N^{d-1})$, and by choosing $c$ appropriately we can arrange $n = N^{d-1}$. Further, let $a \leq \frac{1}{4}$ be a sufficiently small constant to be specified later and consider for $\delta = aN^{-1}$ the collection of functions \begin{equation} g_{p,l}(x) = \delta^{k}\psi(\delta^{-1}\omega_p \cdot x + 2l)\in A, \end{equation} for $p=1,...,n$ and $l = -\frac{N}{2},...,\frac{N}{2}$. The intuition here is that $g_{p,l}$ is a ridge function which varies in the direction $\omega_p$ and has the compactly supported profile $\psi$ dilated to have width $\delta$ (and scaled appropriately to remain in $A$). The different values of $l$ give different non-overlapping shifts of these functions. The proof proceeds by checking that the $g_{p,l}$ can be made `nearly orthogonal' by choosing $a$ sufficiently small. Indeed, we claim that if $a$ is chosen small enough, then the $g_{p,l}$ satisfy the conditions of Corollary \ref{entropy-lower-bound-corollary}, i.e. for each $(p,l)$ \begin{equation} \sum_{(p',l')\neq (p,l)} |\langle g_{p,l}, g_{p',l'}\rangle_H| \leq \frac{1}{2}\|g_{p,l}\|^2_H. \end{equation} To see this, we begin by estimating $\|g_{p,l}\|^2_H$, as follows \begin{equation} \|g_{p,l}\|^2_H = \delta^{2k}\int_{B_1^d} |\psi(\delta^{-1}\omega_p \cdot x + 2l)|^2(1-|x|^2)^{\frac{d}{2}}dx. \end{equation} We proceed to complete $\omega_p$ to an orthonormal basis of $\mathbb{R}^d$, $b_1 = \omega_p, b_2,...,b_d$ and denote the coordinates of $x$ with respect to this basis by $y_i = x\cdot b_i$. Rewriting the above integral in this new orthonormal basis, we get \begin{equation} \begin{split} \|g_{p,l}\|^2_H &= \delta^{2k}\int_{B_1^d}|\psi(\delta^{-1}y_1 + 2l)|^2\left(1-\sum_{i=1}^d y_i^2\right)^{\frac{d}{2}}dy_1\cdots dy_d \\ &= \delta^{2k}\int_{-1}^1|\psi(\delta^{-1}y_1 + 2l)|^2 \rho_d(y_1)dy_1, \end{split} \end{equation} where \begin{equation} \begin{split} \rho_d(y) &= \int_0^{\sqrt{1-y^2}} (1-y^2-r^2)^{\frac{d}{2}}r^{d-2}dr\\ &= (1-y^2)^{d-\frac{1}{2}}\int_0^{1} (1-r^2)^{\frac{d}{2}}r^{d-2}dr = K_d(1-y^2)^{d-\frac{1}{2}}, \end{split} \end{equation} for a dimension dependent constant $K_d$. Further, we change variables, setting $y = \delta^{-1}y_1 + 2l$ and use the fact that $\psi$ is supported in $[-1,1]$, to get \begin{equation} \|g_{p,l}\|^2_H = K_d\delta^{2k+1}\int_{-1}^1 |\psi(y)|^2 (1-[\delta(y-2l)]^2)^{d-\frac{1}{2}} dy. \end{equation} Since $|y| \leq 1$ and $|2l| \leq N$, as long as $\delta(N+1) \leq 1/2$, which is guaranteed by $a \leq \frac{1}{4}$, the coordinate $y_1 = \delta(y-2l)$ will satisfy $|y_1| \leq 1/2$. This means that $$(1-[\delta(y-2l)]^2)^{d-\frac{1}{2}} = (1-y_1^2)^{d-\frac{1}{2}} \geq (3/4)^{d-\frac{1}{2}}$$ uniformly in $p,l,N$ and $\delta$, and thus \begin{equation}\label{lower-bound} \|g_{p,l}\|^2_H \geq K_d(3/4)^{d-\frac{1}{2}}\delta^{2k+1}\int_{-1}^1 |\psi(y)|^2dy \gtrsim_{k,d} \delta^{2k+1}. \end{equation} Next consider $|\langle g_{p,l}, g_{p',l'}\rangle_H|$ for $(p,l)\neq (p',l')$. If $p=p'$, then $\omega_p = \omega_{p'}$, but $l\neq l'$. In this case, we easily see that the supports of $g_{p,l}$ and $g_{p,l'}$ are disjoint and so the inner product $\langle g_{p,l}, g_{p',l'}\rangle_H = 0$. On the other hand, if $p\neq p'$ we get \begin{equation} \langle g_{p,l}, g_{p',l'}\rangle_{H} = \delta^{2k}\int_{B_1^d} \psi(\delta^{-1}\omega_p\cdot x + 2l)\psi(\delta^{-1}\omega_{p'}\cdot x + 2l')(1-|x|^2)^{\frac{d}{2}}dx.
\end{equation} Since $p\neq p'$, the vectors $\omega_p$ and $\omega_{p'}$ are linearly independent and we complete them to a basis $b_1 = \omega_p, b_2 = \omega_{p'}, b_3,...,b_d$, where $b_3,...,b_d$ is an orthonormal basis for the subspace orthogonal to $\omega_p$ and $\omega_{p'}$. Letting $b_1',b_2',b_3'=b_3,...,b_d'=b_d$ be a dual basis (i.e. satisfying $b_i'\cdot b_j = \delta_{ij}$) and making the change of variables $x = y_1b_1' + \cdots + y_db_d'$ in the above integral, we get \begin{equation}\label{inner-product-equation} \langle g_{p,l}, g_{p',l'}\rangle_{H} = \delta^{2k}\det(D_{p,p'})^{-\frac{1}{2}} \int_{-\infty}^\infty \int_{-\infty}^\infty \psi(\delta^{-1}y_1+2l)\psi(\delta^{-1}y_2+2l') \gamma_d(|y_1b_1' + y_2b_2'|) dy_1dy_2, \end{equation} where $D_{p,p'}$ is the Gram matrix of $\omega_p$ and $\omega_{p'}$ (notice that then $D_{p,p'}^{-1}$ is the Gram matrix of $b_1'$ and $b_2'$) and \begin{equation} \begin{split} \gamma_d(y) &= \int_0^{\sqrt{1-y^2}} (1-y^2-r^2)^{\frac{d}{2}}r^{d-3}dr\\ &= (1-y^2)_+^{d-1}\int_0^{1} (1-r^2)^{\frac{d}{2}}r^{d-3}dr = K'_d(1-y^2)_+^{d-1}, \end{split} \end{equation} for a second dimension dependent constant $K'_d$. (Note that if $d=2$, then the above calculation is not correct, but we still have $\gamma_d(y) = (1-y^2)_+^{\frac{d}{2}} = (1-y^2)_+^{d-1}$.) We remark that the choice of Bochner-Riesz weight $d\mu = (1 - |x|^2)_+^{\frac{d}{2}}$ was made precisely so that $\gamma_d$ is a piecewise polynomial with continuous derivatives up to order $d-2$, which will be important in what follows. Next, we fix $y_1$ and analyze, as a function of $z$, $$\tau_{p,p'}(y_1,z) = \gamma_d(|y_1b_1' + zb_2'|) = K'_d(1-q_{p,p'}(y_1,z))_+^{d-1},$$ where $q_{p,p'}$ is the quadratic \begin{equation}\label{definition-of-q} q_{p,p'}(y_1,z) = (b_1'\cdot b_1')y_1^2+2(b_1'\cdot b_2')y_1z+(b_2'\cdot b_2')z^2. \end{equation} We observe that, depending upon the value of $y_1$, $\tau_{p,p'}(y_1,z)$ is either identically $0$ or is a piecewise polynomial function of degree $2d-2$ with exactly two break points at the roots $z_1,z_2$ of $q_{p,p'}(y_1,z) = 1$. Furthermore, utilizing Fa\`a di Bruno's formula \cite{di1857note} and the fact that $q_{p,p'}(y_1,\cdot)$ is quadratic, we see that \begin{equation}\label{derivative-of-tau} \left.\frac{d^k}{dz^k} \tau_{p,p'}(y_1,z)\right|_{z_i} = K'_d\sum_{m_1+2m_2=k} \frac{k!}{m_1!m_2!2^{m_2}}f_d^{(m_1+m_2)}(1)\left[\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\right]^{m_1}\left[\frac{d^2}{dz^2}q_{p,p'}(y_1,z)|_{z_i}\right]^{m_2}, \end{equation} where $f_d(x) = (1-x)^{d-1}$. Since $f^{(m)}_d(1) = 0$ for all $m \leq d-2$, we see that the derivative in \eqref{derivative-of-tau} is equal to $0$ for $0 \leq k\leq d-2$. Thus the function $\tau_{p,p'}(y_1,\cdot)$ has continuous derivatives up to order $d-2$ at the breakpoints $z_1$ and $z_2$. Moreover, if we consider the derivative of order $k=d-1$, then only the term with $m_2 = 0$ in \eqref{derivative-of-tau} survives and we get \begin{equation} \left.\frac{d^{d-1}}{dz^{d-1}} \tau_{p,p'}(y_1,z)\right|_{z_i} = K'_df_d^{(d-1)}(1)\left[\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\right]^{d-1} = (-1)^{d-1}(d-1)!K'_d\left[\frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i}\right]^{d-1}.
\end{equation} Utilizing the fact that the derivative of a quadratic $q(x) = ax^2 + bx + c$ at its roots is given by $\pm\sqrt{b^2 - 4ac}$, applied to the quadratic $q_{p,p'}(y_1,\cdot) - 1$ with the coefficients from \eqref{definition-of-q}, we get \begin{equation} \frac{d}{dz}q_{p,p'}(y_1,z)|_{z_i} = \pm 2\sqrt{(b_2'\cdot b_2') - \left[(b_1'\cdot b_1')(b_2'\cdot b_2')-(b_1'\cdot b_2')^2\right]y_1^2} = \pm 2\det(D_{p,p'})^{-\frac{1}{2}}\sqrt{1-y_1^2}, \end{equation} where in the last step we used that the Gram matrix of $b_1',b_2'$ is $D_{p,p'}^{-1}$, so that $(b_1'\cdot b_1')(b_2'\cdot b_2')-(b_1'\cdot b_2')^2 = \det(D_{p,p'})^{-1}$ and $b_2'\cdot b_2' = \det(D_{p,p'})^{-1}$ (since $|\omega_p| = 1$). Taken together, this shows that the jump in the $d-1$-st derivative of $\tau_{p,p'}(y_1,z)$ at the breakpoints $z_1$ and $z_2$ has magnitude \begin{equation}\label{derivative-bound} \left|\left.\frac{d^{d-1}}{dz^{d-1}} \tau_{p,p'}(y_1,z)\right|_{z_i}\right| \lesssim_d \det(D_{p,p'})^{-\frac{d-1}{2}}. \end{equation} Going back to equation \eqref{inner-product-equation}, we see that due to the compact support of $\psi$, the integral in \eqref{inner-product-equation} is supported on a square with side length $2\delta$ in $y_1$ and $y_2$. To clarify this, we make the change of variables $s = \delta^{-1}y_1+2l$, $t = \delta^{-1}y_2+2l'$, and use that $\psi$ is supported on $[-1,1]$, to get (for notational convenience we let $y(s,l) = \delta (s-2l)$) \begin{equation} \langle g_{p,l}, g_{p',l'}\rangle_{H} = \delta^{2k+2}\det(D_{p,p'})^{-\frac{1}{2}} \int_{-1}^1 \int_{-1}^1 \psi(s)\psi(t) \tau_{p,p'}(y(s,l), y(t,l'))ds dt. \end{equation} We now estimate the sum over $l'$ as \begin{equation}\label{big-equation} \begin{split} \sum_{l'=-\frac{N}{2}}^{\frac{N}{2}} |\langle g_{p,l}, g_{p',l'}\rangle_{H}| &= \delta^{2k+2}\det(D_{p,p'})^{-\frac{1}{2}}\sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\left|\int_{-1}^1\int_{-1}^1 \psi(s)\psi(t)\tau_{p,p'}(y(s,l), y(t,l'))dsdt\right| \\ &\leq \delta^{2k+2}\det(D_{p,p'})^{-\frac{1}{2}}\sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\int_{-1}^1\left|\int_{-1}^1 \psi(s)\psi(t)\tau_{p,p'}(y(s,l), y(t,l'))dt\right|ds \\ & = \delta^{2k+2}\det(D_{p,p'})^{-\frac{1}{2}}\int_{-1}^1|\psi(s)|\sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\left|\int_{-1}^1 \psi(t)\tau_{p,p'}(y(s,l), y(t,l'))dt\right|ds. \end{split} \end{equation} For fixed $s$ and $l$, consider the inner sum \begin{equation}\label{sum-to-bound} \sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\left|\int_{-1}^1 \psi(t)\tau_{p,p'}(y(s,l), y(t,l'))dt\right| = \sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\left|\int_{-1}^1 \psi(t)\tau_{p,p'}(y(s,l), \delta (t - 2l'))dt\right|. \end{equation} In the integrals appearing in this sum, the variable $z = \delta (t - 2l')$ runs over the line segment $[-\delta(2l'+1),-\delta(2l'-1)]$. These segments are disjoint for distinct $l'$ and are each of length $2\delta$. Further, recall that for fixed $y_1 = y(s,l)$, the function $\tau_{p,p'}(y_1, z)$ is a piecewise polynomial of degree $2d-2$ with at most two breakpoints $z_1$ and $z_2$. Combined with the fact that the first $2d-1$ moments of $\psi$ vanish, this implies that at most two terms in the above sum are non-zero, namely those where the corresponding integral contains a breakpoint. Furthermore, the bound on the jump in the $d-1$-st order derivatives at the breakpoints \eqref{derivative-bound} implies that in the intervals (of length $2\delta$) which contain a breakpoint, there exists a polynomial $q_i$ of degree $d-2$ and a constant $M_d$ depending only upon $d$ for which \begin{equation} |\tau_{p,p'}(y_1,z) - q_i(z)| \leq \frac{(2\delta)^{d-1}}{(d-1)!}M_d \det(D_{p,p'})^{-\frac{d-1}{2}} \lesssim_d \delta^{d-1} \det(D_{p,p'})^{-\frac{d-1}{2}} \end{equation} on the given interval.
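Since the argument hinges on the vanishing moments of $\psi$, we note that the recipe given above (a Legendre polynomial convolved with a $C^\infty$ bump) is easy to realize concretely. The following short Python sketch is purely illustrative and does not enter the proof; the grid parameters are arbitrary. It builds such a profile numerically, halving the endpoint samples so that the discrete convolution is a trapezoid-rule approximation, and checks that the moments of order $0,\dots,2d-2$ vanish up to quadrature error while the moment of order $2d-1$ does not.
\begin{verbatim}
# Illustrative construction of a compactly supported profile whose first
# 2d-1 moments vanish: convolve the Legendre polynomial of degree 2d-1
# (supported on [-1,1]) with a C-infinity bump, then check the moments.
import numpy as np
from numpy.polynomial import legendre

d = 3                                     # ambient dimension in the argument
m = 2 * d - 1                             # Legendre degree
h = 1e-3
t = np.arange(-1.0, 1.0 + h, h)

leg = legendre.Legendre.basis(m)(t)       # P_m sampled on [-1,1]
leg[0] *= 0.5
leg[-1] *= 0.5                            # trapezoid-rule endpoint weights
bump = np.where(np.abs(t) < 1,
                np.exp(-1.0 / np.maximum(1.0 - t**2, 1e-300)), 0.0)

psi = np.convolve(leg, bump) * h          # numerical convolution on [-2,2]
x = -2.0 + h * np.arange(len(psi))

for r in range(2 * d):
    moment = h * np.sum(x**r * psi)
    print(f"moment of order {r}: {moment:+.2e}")
# orders 0,...,2d-2 are ~0 (quadrature error); order 2d-1 is not
\end{verbatim}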
Using again the vanishing moments of $\psi$, we see that the nonzero integrals in the sum \eqref{sum-to-bound} (of which there are at most $2$) satisfy $$ \left|\int_{-1}^1 \psi(t)\tau_{p,p'}(y(s,l), \delta (t - 2l'))dt\right| \lesssim_{k,d} \delta^{d-1}\det(D_{p,p'})^{-\frac{d-1}{2}}. $$ So for each fixed $s$ and $l$, we get the bound \begin{equation} \sum_{l'=-\frac{N}{2}}^{\frac{N}{2}}\left|\int_{-1}^1 \psi(t)\tau_{p,p'}(y(s,l), y(t,l'))dt\right| \lesssim_{k,d} \delta^{d-1} \det(D_{p,p'})^{-\frac{d-1}{2}}. \end{equation} Plugging this into equation \eqref{big-equation}, we get \begin{equation} \sum_{l'=-\frac{N}{2}}^{\frac{N}{2}} |\langle g_{p,l}, g_{p',l'}\rangle_{H}| \lesssim_{k,d} \delta^{2k+d+1}\det(D_{p,p'})^{-\frac{d}{2}}\int_{-1}^1|\psi(s)| ds \lesssim_{k,d} \delta^{2k+d+1}\det(D_{p,p'})^{-\frac{d}{2}}. \end{equation} We analyze the $\det(D_{p,p'})^{-\frac{d}{2}}$ term using that $\omega_p$ and $\omega_{p'}$ are on the sphere to get \begin{equation} \det(D_{p,p'})^{-\frac{d}{2}} = (1-\langle \omega_p,\omega_{p'}\rangle^2)^{-\frac{d}{2}} = \frac{1}{\sin(\theta_{p,p'})^d}, \end{equation} where $\theta_{p,p'}$ represents the angle between $\omega_p$ and $\omega_{p'}$. Summing over $p'\neq p$, we get \begin{equation}\label{eq-1357} \sum_{(p',l')\neq (p,l)} |\langle g_{p,l}, g_{p',l'}\rangle_H| \lesssim_{k,d} \delta^{2k+d+1}\sum_{p'\neq p}\frac{1}{\sin(\theta_{p,p'})^d}. \end{equation} The final step is to bound the above sum. This is done in a relatively straightforward manner by noting that this sum is comparable to the following integral \begin{equation} \sum_{p'\neq p}\frac{1}{\sin(\theta_{p,p'})^d} \eqsim_d N^{d-1}\int_{P^{d-1}\setminus B(p,r)} |x-p|^{-d}dx, \end{equation} where we are integrating over projective space minus a ball of radius $r \gtrsim_d N^{-1}$ around $p$. Integrating around this pole of order $d$ in the $(d-1)$-dimensional space $P^{d-1}$, this gives \begin{equation} \sum_{p'\neq p}\frac{1}{\sin(\theta_{p,p'})^d} \eqsim_d N^d. \end{equation} To be more precise, we present the detailed estimates in what follows. We bound the sum over one hemisphere \begin{equation} \sum_{0 < \theta_{p,p'}\leq \frac{\pi}{2}}\frac{1}{\sin(\theta_{p,p'})^d}, \end{equation} and note that the sum over the other hemisphere can be handled in an analogous manner. To this end, we decompose this sum as \begin{equation}\label{eq-1365} \sum_{0 < \theta_{p,p'}\leq \frac{\pi}{2}}\frac{1}{\sin(\theta_{p,p'})^d} = \sum_{0 < \theta_{p,p'}\leq \frac{\pi}{4}}\frac{1}{\sin(\theta_{p,p'})^d} + \sum_{\frac{\pi}{4} < \theta_{p,p'}\leq \frac{\pi}{2}}\frac{1}{\sin(\theta_{p,p'})^d}. \end{equation} For the second sum, we note that $\sin(\theta_{p,p'}) \geq \frac{1}{\sqrt{2}}$, and the number of terms is at most $N^{d-1}$, so that the second sum is $\lesssim N^{d-1}$. To bound the first sum in \eqref{eq-1365}, we rotate the sphere so that $\omega_p = (0,\dots,0,1)$ is the north pole. We then take the $\omega_{p'}$ for which $\theta_{p,p'}\leq \frac{\pi}{4}$ and project them onto the tangent plane at $\omega_p$. Specifically, this corresponds to the map $\omega_{p'} = (x_1,\dots,x_{d-1},x_d)\rightarrow x_{p'} = (x_1,\dots,x_{d-1})$, which removes the last coordinate. It is now elementary to check that this map distorts distances by at most a constant (since the $\omega_{p'}$ are all contained in a spherical cap of radius $\frac{\pi}{4}$), i.e.
that for $p'_1\neq p'_2$, we have \begin{equation} |x_{p'_1} - x_{p'_2}| \leq |\omega_{p'_1} - \omega_{p'_2}| \lesssim |x_{p'_1} - x_{p'_2}|, \end{equation} and also that $\sin(\theta_{p,p'}) = |x_{p'}|$. This allows us to write the first sum in \eqref{eq-1365} as \begin{equation} \sum_{0 < \theta_{p,p'}\leq \frac{\pi}{4}}\frac{1}{\sin(\theta_{p,p'})^d} = \sum_{0<|x_{p'}|\leq \frac{1}{\sqrt{2}}}\frac{1}{|x_{p'}|^d}, \end{equation} where by construction we have $|\omega_{p'_1} - \omega_{p'_2}| \gtrsim_d N^{-1}$ for $p'_1\neq p'_2$, and thus $|x_{p'_1} - x_{p'_2}|\gtrsim_d N^{-1}$ as well. In addition, $|\omega_p - \omega_{p'}| \gtrsim_d N^{-1}$ and thus also $|x_{p'}| \gtrsim_d N^{-1}$. Now let $r\gtrsim_d N^{-1}$ be such that the balls of radius $r$ around each of the $x_{p'}$, and around $0$, are disjoint. Notice that since $|x|^{-d}$ is a subharmonic function on $\mathbb{R}^{d-1} \setminus \{0\}$, we have \begin{equation} \frac{1}{|x_{p'}|^d} \leq \frac{1}{|B(x_{p'},r)|}\int_{B(x_{p'},r)}|y|^{-d}dy \lesssim_d N^{d-1}\int_{B(x_{p'},r)}|y|^{-d}dy. \end{equation} Since all of the balls $B(x_{p'},r)$ are disjoint and are disjoint from $B(0,r)$, we get (note that these integrals are in $\mathbb{R}^{d-1}$) \begin{equation} \sum_{0<|x_{p'}|\leq \frac{1}{\sqrt{2}}}\frac{1}{|x_{p'}|^d} \lesssim_d N^{d-1}\int_{r \leq |y| \leq \frac{\pi}{2} + r} |y|^{-d}dy \leq N^{d-1}\int_{r \leq |y|} |y|^{-d}dy \lesssim_d N^{d-1}r^{-1} \lesssim_d N^d. \end{equation} Plugging this into equation \eqref{eq-1365} and bounding the sum over the other hemisphere in a similar manner, we get \begin{equation} \sum_{p'\neq p}\frac{1}{\sin(\theta_{p,p'})^d} \lesssim_d N^d. \end{equation} Using equation \eqref{eq-1357}, we finally obtain \begin{equation} \sum_{(p',l')\neq (p,l)} |\langle g_{p,l}, g_{p',l'}\rangle_H| \lesssim_{k,d} \delta^{2k+d+1}N^d. \end{equation} Combined with the lower bound \eqref{lower-bound}, which gives $\|g_{p,l}\|_H^2 \gtrsim_{k,d} \delta^{2k+1}$, we see that by choosing the factor $a$ in $\delta = aN^{-1}$ small enough (independently of $N$, of course), we can in fact guarantee that the conditions of Corollary \ref{entropy-lower-bound-corollary} are satisfied. Applying the Corollary, we see that \begin{equation} \epsilon_{n}(A) \geq \frac{\min_{(p,l)} \|g_{p,l}\|_H}{\sqrt{8n}} \gtrsim_{k,d} n^{-\frac{1}{2}} \delta^{\frac{2k+1}{2}} \gtrsim_{k,d,a} n^{-\frac{1}{2}}N^{-\frac{2k+1}{2}}, \end{equation} where $n = N^d$ is the total number of functions $g_{p,l}$. This finally gives (since $a$ is fixed depending upon $k$ and $d$) \begin{equation} \epsilon_{n}(A)\gtrsim_{k,d} n^{-\frac{1}{2}-\frac{2k+1}{2d}}. \end{equation} As before, the monotonicity of the entropy extends this bound to all $n$. This completes the proof. \end{proof} \section{Conclusion} We introduce the natural approximation spaces for shallow neural networks with ReLU$^k$ and cosine activation functions and show that these spaces are equivalent to the Barron space when $\sigma = \text{ReLU}$ and to the spectral Barron space when $\sigma = \cos$. Further, we calculate the precise asymptotics of the metric entropy of these spaces with respect to the $L^2$-norm. This has allowed us to calculate optimal approximation rates for shallow ReLU$^k$ neural networks, closing the gap between upper and lower bounds previously attained. There are a few further questions we would like to propose.
First, it is unclear how to extend these bounds to Banach spaces which are not Hilbert spaces, such as $L^1$ or $L^\infty$, which have been studied, for example, in \cite{klusowski2018approximation,makovoz1998uniform}. Second, we would like to extend this theory to approximation by deeper neural networks. Finally, although it is known that greedy algorithms can attain an $O(n^{-\frac{1}{2}})$ approximation rate \cite{barron2008approximation} when approximating functions from $\mathcal{K}_1(\mathbb{D})$, to the best of our knowledge it remains open whether the higher order approximation rates derived here can be attained algorithmically.
{ "alphanum_fraction": 0.6778820817, "avg_line_length": 85.2210526316, "ext": "tex", "hexsha": "41961325304da3f54cce65192d23a8920691c709", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "liuzhengqi1996/math452", "max_forks_repo_path": "6DL/Entropy.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "liuzhengqi1996/math452", "max_issues_repo_path": "6DL/Entropy.tex", "max_line_length": 1090, "max_stars_count": null, "max_stars_repo_head_hexsha": "635b6ce53cb792e316abf4f47396f2e4f0686815", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "liuzhengqi1996/math452", "max_stars_repo_path": "6DL/Entropy.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 44875, "size": 121440 }
\section{Software Architecture Views} This chapter describes our software architecture. First the subsystems are identified and explained. Then the software-to-hardware mapping is described, followed by how we store persistent data. Lastly, we explain how concurrency is handled. \subsection{Design Principle: Separation of Concerns} Following the design principle of Separation of Concerns, which divides a computer program into distinct sections, we decided to implement the three models in such a way that they can be used as separate executables. They are connected with each other in a pipelined manner in the main tab of the program, making them part of the program's workflow. However, these models can also be used as standalone applications: by specifying a dataset and the desired output, the models can be used outside of our program as well. \subsection{Hardware Software Mapping} Our program is designed to run on a single computer, as a single process. However, in future extensions we would like to make the program multi-threaded. This would enable the user to run the optimization and visualization models in separate threads. This way the program will be able to optimize the entered flight trajectory while already visualizing the noise contours of the original (unoptimized) trajectory. Reading in the files could also be moved to a separate thread, so that all the identifiers that are read can be shown in the select tab the moment the thread reads them. \subsection{Persistent data management} For this program it is not required to store information persistently. The output of the program is stored as a KML file for visualization. The format of the output is structured in such a way that it can be entered directly into Google Earth, where the different visualization objects can be turned on or off. The state of the program can also be stored easily by saving the intermediate results of the pipeline. For example, the output of the noise model or the optimization model can be stored as a .csv file in a user-specified directory and read in again at a later point in time. After such a file is loaded, the program will be able to continue where the user left off. This also means that changes to intermediate results, which will be entered as input into the next model in the pipeline, do not require changes to the input file used at the beginning. More on this can be found in section 2j. External Major Technologies. Lastly, the possibility to store the KML files for visualization also enables the user to visualize multiple flight routes in Google Earth at the same time. \subsection{Concurrency} This subsection describes how concurrency issues are handled. Because the program cannot be used by multiple people at the same time and the data analysis is not divided over different processes, there are currently no concurrency issues. However, when multiple threads are introduced in future projects, it becomes very important to think about concurrency. This could be useful when an ATO researcher wants to read in the next flight trajectory while the previous trajectory is still being optimized or visualized in the program. Our client let us know that this is not one of his requirements.
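As a concrete illustration of the persistent data management described above, the sketch below shows how an intermediate pipeline result could be written to and read back from a user-specified .csv file, so that the program can continue where the user left off. It is only a minimal sketch in Python: the function names, column names and file layout are assumptions made for illustration and do not correspond to the actual implementation.
\begin{verbatim}
# Hypothetical sketch: persist an intermediate result of the pipeline
# as a CSV file and load it back in a later session.
import csv

def save_intermediate(rows, path):
    # rows: list of dicts, e.g. [{"x": 1.0, "y": 2.0, "noise_db": 54.3}, ...]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

def load_intermediate(path):
    # Returns the stored rows, so the next model in the pipeline can be
    # run without recomputing the previous steps.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
\end{verbatim}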
\newpage \subsection{Diagram of the architecture} Here is a diagram of our layered architecture and the connection between the front-end and the back-end: \\ \begin{center} \includegraphics[width= 0.65\textwidth]{images/EmergentArchitecture} \\ \end{center} As depicted in the diagram above, the models represent the subsystems described in the Subsystem Decomposition section of the first chapter. They exchange their outputs with the main presenter, which uses the output of one subsystem as input for the next one. The main presenter relays the output of the noise model to the visualization model. Based on the actual noise values, the visualization model constructs the noise contours. The noise contours are transformed into KML files and entered into Google Earth for visualization. Changes to the KML files are automatically transferred to Google Earth by a 'KML load' script, which periodically loads the files into Google Earth. The noise contours are also passed back to the main presenter. The main presenter then hands the noise contours together with the current trajectory to the optimization model. The optimization model updates the trajectory to reduce the produced noise and passes the updated trajectory on to the main controller. Now the workflow of the program can start all over again, with the main presenter entering the updated trajectory into the noise model. Following this method, the models are executed in a pipelined manner. Given the right input files, however, the subsystems can also be run separately from each other and function as standalone applications. %The diagram also shows the connection between the front-end and back-end of the system. Each subsystem contains an import component that collaborates with an import controller which controls the view of the noise, optimization and visualization tabs in the GUI. This is where the user specifies which data files he or she wants to import.
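To make the pipelined workflow described above more tangible, the following sketch outlines how the main presenter could chain the three models. This is a hypothetical outline in Python only: the class interfaces, method names, file paths and the fixed number of iterations are assumptions for illustration, not the actual code of the system.
\begin{verbatim}
# Hypothetical sketch of the pipelined workflow coordinated by the main
# presenter: noise model -> visualization model -> optimization model.
def run_pipeline(trajectory, noise_model, visualization_model,
                 optimization_model, iterations=3):
    for _ in range(iterations):
        # 1. Compute the noise values produced by the current trajectory.
        noise_values = noise_model.run(trajectory)
        # 2. Construct noise contours and export them as KML, which the
        #    'KML load' script periodically pushes into Google Earth.
        contours = visualization_model.run(noise_values)
        visualization_model.write_kml(contours, "output/contours.kml")
        # 3. Update the trajectory to reduce the produced noise, then
        #    feed it back into the noise model on the next iteration.
        trajectory = optimization_model.run(trajectory, contours)
    return trajectory
\end{verbatim}
Given the right input files, each of the three models could also be invoked on its own in this way, which is what allows them to double as standalone applications.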
{ "alphanum_fraction": 0.8141643059, "avg_line_length": 132.375, "ext": "tex", "hexsha": "57b2d44e9bce94809295a28e9bafdbd1e86edaf2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5c301b806b5377813c140308a2b363ce2fb68304", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "HansSchouten/Aircraft-Trajectories", "max_forks_repo_path": "SE deliverables/emergent architecture/report/sections/software_architecture.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5c301b806b5377813c140308a2b363ce2fb68304", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "HansSchouten/Aircraft-Trajectories", "max_issues_repo_path": "SE deliverables/emergent architecture/report/sections/software_architecture.tex", "max_line_length": 693, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5c301b806b5377813c140308a2b363ce2fb68304", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "HansSchouten/Aircraft-Trajectories", "max_stars_repo_path": "SE deliverables/emergent architecture/report/sections/software_architecture.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-04T11:34:37.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-04T11:34:37.000Z", "num_tokens": 1014, "size": 5295 }
\chapter{JONATHAN HARKER'S JOURNAL} 1 October, 5 a. m.--I went with the party to the search with an easy mind, for I think I never saw Mina so absolutely strong and well. I am so glad that she consented to hold back and let us men do the work. Somehow, it was a dread to me that she was in this fearful business at all, but now that her work is done, and that it is due to her energy and brains and foresight that the whole story is put together in such a way that every point tells, she may well feel that her part is finished, and that she can henceforth leave the rest to us. We were, I think, all a little upset by the scene with Mr. Renfield. When we came away from his room we were silent till we got back to the study. Then Mr. Morris said to Dr. Seward, "Say, Jack, if that man wasn't attempting a bluff, he is about the sanest lunatic I ever saw. I'm not sure, but I believe that he had some serious purpose, and if he had, it was pretty rough on him not to get a chance." Lord Godalming and I were silent, but Dr. Van Helsing added, "Friend John, you know more lunatics than I do, and I'm glad of it, for I fear that if it had been to me to decide I would before that last hysterical outburst have given him free. But we live and learn, and in our present task we must take no chance, as my friend Quincey would say. All is best as they are." Dr. Seward seemed to answer them both in a dreamy kind of way, "I don't know but that I agree with you. If that man had been an ordinary lunatic I would have taken my chance of trusting him, but he seems so mixed up with the Count in an indexy kind of way that I am afraid of doing anything wrong by helping his fads. I can't forget how he prayed with almost equal fervor for a cat, and then tried to tear my throat out with his teeth. Besides, he called the Count `lord and master', and he may want to get out to help him in some diabolical way. That horrid thing has the wolves and the rats and his own kind to help him, so I suppose he isn't above trying to use a respectable lunatic. He certainly did seem earnest, though. I only hope we have done what is best. These things, in conjunction with the wild work we have in hand, help to unnerve a man." The Professor stepped over, and laying his hand on his shoulder, said in his grave, kindly way, "Friend John, have no fear. We are trying to do our duty in a very sad and terrible case, we can only do as we deem best. What else have we to hope for, except the pity of the good God?" Lord Godalming had slipped away for a few minutes, but now he returned. He held up a little silver whistle, as he remarked, "That old place may be full of rats, and if so, I've got an antidote on call." Having passed the wall, we took our way to the house, taking care to keep in the shadows of the trees on the lawn when the moonlight shone out. When we got to the porch the Professor opened his bag and took out a lot of things, which he laid on the step, sorting them into four little groups, evidently one for each. Then he spoke. "My friends, we are going into a terrible danger, and we need arms of many kinds. Our enemy is not merely spiritual. Remember that he has the strength of twenty men, and that, though our necks or our windpipes are of the common kind, and therefore breakable or crushable, his are not amenable to mere strength. A stronger man, or a body of men more strong in all than him, can at certain times hold him, but they cannot hurt him as we can be hurt by him. We must, therefore, guard ourselves from his touch. 
Keep this near your heart." As he spoke he lifted a little silver crucifix and held it out to me, I being nearest to him, "put these flowers round your neck," here he handed to me a wreath of withered garlic blossoms, "for other enemies more mundane, this revolver and this knife, and for aid in all, these so small electric lamps, which you can fasten to your breast, and for all, and above all at the last, this, which we must not desecrate needless." This was a portion of Sacred Wafer, which he put in an envelope and handed to me. Each of the others was similarly equipped. "Now,"he said,"friend John, where are the skeleton keys? If so that we can open the door, we need not break house by the window, as before at Miss Lucy's." Dr. Seward tried one or two skeleton keys, his mechanical dexterity as a surgeon standing him in good stead. Presently he got one to suit, after a little play back and forward the bolt yielded, and with a rusty clang, shot back. We pressed on the door, the rusty hinges creaked, and it slowly opened. It was startlingly like the image conveyed to me in Dr. Seward's diary of the opening of Miss Westenra's tomb, I fancy that the same idea seemed to strike the others, for with one accord they shrank back. The Professor was the first to move forward, and stepped into the open door. "In manus tuas, Domine!"he said, crossing himself as he passed over the threshold. We closed the door behind us, lest when we should have lit our lamps we should possibly attract attention from the road. The Professor carefully tried the lock, lest we might not be able to open it from within should we be in a hurry making our exit. Then we all lit our lamps and proceeded on our search. The light from the tiny lamps fell in all sorts of odd forms, as the rays crossed each other, or the opacity of our bodies threw great shadows. I could not for my life get away from the feeling that there was someone else amongst us. I suppose it was the recollection, so powerfully brought home to me by the grim surroundings, of that terrible experience in Transylvania. I think the feeling was common to us all, for I noticed that the others kept looking over their shoulders at every sound and every new shadow, just as I felt myself doing. The whole place was thick with dust. The floor was seemingly inches deep, except where there were recent footsteps, in which on holding down my lamp I could see marks of hobnails where the dust was cracked. The walls were fluffy and heavy with dust, and in the corners were masses of spider's webs, whereon the dust had gathered till they looked like old tattered rags as the weight had torn them partly down. On a table in the hall was a great bunch of keys, with a timeyellowed label on each. They had been used several times, for on the table were several similar rents in the blanket of dust, similar to that exposed when the Professor lifted them. He turned to me and said,"You know this place, Jonathan. You have copied maps of it, and you know it at least more than we do. Which is the way to the chapel?" I had an idea of its direction, though on my former visit I had not been able to get admission to it, so I led the way, and after a few wrong turnings found myself opposite a low, arched oaken door, ribbed with iron bands. "This is the spot," said the Professor as he turned his lamp on a small map of the house, copied from the file of my original correspondence regarding the purchase. With a little trouble we found the key on the bunch and opened the door. 
We were prepared for some unpleasantness, for as we were opening the door a faint, malodorous air seemed to exhale through the gaps, but none of us ever expected such an odor as we encountered. None of the others had met the Count at all at close quarters, and when I had seen him he was either in the fasting stage of his existence in his rooms or, when he was bloated with fresh blood, in a ruined building open to the air, but here the place was small and close, and the long disuse had made the air stagnant and foul. There was an earthy smell, as of some dry miasma, which came through the fouler air. But as to the odor itself, how shall I describe it? It was not alone that it was composed of all the ills of mortality and with the pungent, acrid smell of blood, but it seemed as though corruption had become itself corrupt. Faugh! It sickens me to think of it. Every breath exhaled by that monster seemed to have clung to the place and intensified its loathsomeness. Under ordinary circumstances such a stench would have brought our enterprise to an end, but this was no ordinary case, and the high and terrible purpose in which we were involved gave us a strength which rose above merely physical considerations. After the involuntary shrinking consequent on the first nauseous whiff, we one and all set about our work as though that loathsome place were a garden of roses. We made an accurate examination of the place, the Professor saying as we began, "The first thing is to see how many of the boxes are left, we must then examine every hole and corner and cranny and see if we cannot get some clue as to what has become of the rest." A glance was sufficient to show how many remained, for the great earth chests were bulky, and there was no mistaking them. There were only twenty-nine left out of the fifty! Once I got a fright, for, seeing Lord Godalming suddenly turn and look out of the vaulted door into the dark passage beyond, I looked too, and for an instant my heart stood still. Somewhere, looking out from the shadow, I seemed to see the high lights of the Count's evil face, the ridge of the nose, the red eyes, the red lips, the awful pallor. It was only for a moment, for, as Lord Godalming said,"I thought I saw a face, but it was only the shadows," and resumed his inquiry, I turned my lamp in the direction, and stepped into the passage. There was no sign of anyone, and as there were no corners, no doors, no aperture of any kind, but only the solid walls of the passage, there could be no hiding place even for him. I took it that fear had helped imagination, and said nothing. A few minutes later I saw Morris step suddenly back from a corner, which he was examining. We all followed his movements with our eyes, for undoubtedly some nervousness was growing on us, and we saw a whole mass of phosphorescence, which twinkled like stars. We all instinctively drew back. The whole place was becoming alive with rats. For a moment or two we stood appalled, all save Lord Godalming, who was seemingly prepared for such an emergency. Rushing over to the great iron-bound oaken door, which Dr. Seward had described from the outside, and which I had seen myself, he turned the key in the lock, drew the huge bolts, and swung the door open. Then, taking his little silver whistle from his pocket, he blew a low, shrill call. It was answered from behind Dr. Seward's house by the yelping of dogs, and after about a minute three terriers came dashing round the corner of the house. 
Unconsciously we had all moved towards the door, and as we moved I noticed that the dust had been much disturbed. The boxes which had been taken out had been brought this way. But even in the minute that had elapsed the number of the rats had vastly increased. They seemed to swarm over the place all at once, till the lamplight, shining on their moving dark bodies and glittering, baleful eyes, made the place look like a bank of earth set with fireflies. The dogs dashed on, but at the threshold suddenly stopped and snarled, and then,simultaneously lifting their noses, began to howl in most lugubrious fashion. The rats were multiplying in thousands, and we moved out. Lord Godalming lifted one of the dogs, and carrying him in, placed him on the floor. The instant his feet touched the ground he seemed to recover his courage, and rushed at his natural enemies. They fled before him so fast that before he had shaken the life out of a score, the other dogs, who had by now been lifted in the same manner, had but small prey ere the whole mass had vanished. With their going it seemed as if some evil presence had departed, for the dogs frisked about and barked merrily as they made sudden darts at their prostrate foes, and turned them over and over and tossed them in the air with vicious shakes. We all seemed to find our spirits rise. Whether it was the purifying of the deadly atmosphere by the opening of the chapel door, or the relief which we experienced by finding ourselves in the open I know not, but most certainly the shadow of dread seemed to slip from us like a robe, and the occasion of our coming lost something of its grim significance, though we did not slacken a whit in our resolution. We closed the outer door and barred and locked it, and bringing the dogs with us, began our search of the house. We found nothing throughout except dust in extraordinary proportions, and all untouched save for my own footsteps when I had made my first visit. Never once did the dogs exhibit any symptom of uneasiness, and even when we returned to the chapel they frisked about as though they had been rabbit hunting in a summer wood. The morning was quickening in the east when we emerged from the front. Dr. Van Helsing had taken the key of the hall door from the bunch, and locked the door in orthodox fashion, putting the key into his pocket when he had done. "So far," he said, "our night has been eminently successful. No harm has come to us such as I feared might be and yet we have ascertained how many boxes are missing. More than all do I rejoice that this, our first, and perhaps our most difficult and dangerous, step has been accomplished without the bringing thereinto our most sweet Madam Mina or troubling her waking or sleeping thoughts with sights and sounds and smells of horror which she might never forget. One lesson, too, we have learned, if it be allowable to argue a particulari, that the brute beasts which are to the Count's command are yet themselves not amenable to his spiritual power, for look, these rats that would come to his call, just as from his castle top he summon the wolves to your going and to that poor mother's cry, though they come to him, they run pell-mell from the so little dogs of my friend Arthur. We have other matters before us, other dangers, other fears, and that monster . . . He has not used his power over the brute world for the only or the last time tonight. So be it that he has gone elsewhere. Good! 
It has given us opportunity to cry `check'in some ways in this chess game, which we play for the stake of human souls. And now let us go home. The dawn is close at hand, and we have reason to be content with our first night's work. It may be ordained that we have many nights and days to follow, if full of peril, but we must go on, and from no danger shall we shrink." The house was silent when we got back, save for some poor creature who was screaming away in one of the distant wards, and a low, moaning sound from Renfield's room. The poor wretch was doubtless torturing himself, after the manner of the insane, with needless thoughts of pain. I came tiptoe into our own room, and found Mina asleep, breathing so softly that I had to put my ear down to hear it. She looks paler than usual. I hope the meeting tonight has not upset her. I am truly thankful that she is to be left out of our future work, and even of our deliberations. It is too great a strain for a woman to bear. I did not think so at first, but I know better now. Therefore I am glad that it is settled. There may be things which would frighten her to hear, and yet to conceal them from her might be worse than to tell her if once she suspected that there was any concealment. Henceforth our work is to be a sealed book to her, till at least such time as we can tell her that all is finished, and the earth free from a monster of the nether world. I daresay it will be difficult to begin to keep silence after such confidence as ours, but I must be resolute, and tomorrow I shall keep dark over tonight's doings, and shall refuse to speak of anything that has happened. I rest on the sofa, so as not to disturb her. 1 October, later.--I suppose it was natural that we should have all overslept ourselves, for the day was a busy one, and the night had no rest at all. Even Mina must have felt its exhaustion, for though I slept till the sun was high, I was awake before her, and had to call two or three times before she awoke. Indeed, she was so sound asleep that for a few seconds she did not recognize me, but looked at me with a sort of blank terror, as one looks who has been waked out of a bad dream. She complained a little of being tired, and I let her rest till later in the day. We now know of twenty-one boxes having been removed, and if it be that several were taken in any of these removals we may be able to trace them all. Such will, of course, immensely simplify our labor, and the sooner the matter is attended to the better. I shall look up Thomas Snelling today. DR. SEWARD'S DIARY 1 October.--It was towards noon when I was awakened by the Professor walking into my room. He was more jolly and cheerful than usual, and it is quite evident that last night's work has helped to take some of the brooding weight off his mind. After going over the adventure of the night he suddenly said, "Your patient interests me much. May it be that with you I visit him this morning? Or if that you are too occupy, I can go alone if it may be. It is a new experience to me to find a lunatic who talk philosophy, and reason so sound." I had some work to do which pressed, so I told him that if he would go alone I would be glad, as then I should not have to keep him waiting, so I called an attendant and gave him the necessary instructions. Before the Professor left the room I cautioned him against getting any false impression from my patient. "But," he answered, "I want him to talk of himself and of his delusion as to consuming live things. 
He said to Madam Mina, as I see in your diary of yesterday, that he had once had such a belief. Why do you smile, friend John?" "Excuse me," I said, "but the answer is here." I laid my hand on the typewritten matter."When our sane and learned lunatic made that very statement of how he used to consume life, his mouth was actually nauseous with the flies and spiders which he had eaten just before Mrs. Harker entered the room." Van Helsing smiled in turn. "Good!" he said. "Your memory is true, friend John. I should have remembered. And yet it is this very obliquity of thought and memory which makes mental disease such a fascinating study. Perhaps I may gain more knowledge out of the folly of this madman than I shall from the teaching of the most wise. Who knows?" I went on with my work, and before long was through that in hand. It seemed that the time had been very short indeed, but there was Van Helsing back in the study. "Do I interrupt?" he asked politely as he stood at the door. "Not at all,"I answered. "Come in. My work is finished, and I am free. I can go with you now, if you like." "It is needless, I have seen him!" "Well?" "I fear that he does not appraise me at much. Our interview was short. When I entered his room he was sitting on a stool in the center, with his elbows on his knees, and his face was the picture of sullen discontent. I spoke to him as cheerfully as I could, and with such a measure of respect as I could assume. He made no reply whatever. 'Don't you know me?' I asked. His answer was not reassuring. "I know you well enough, you are the old fool Van Helsing. I wish you would take yourself and your idiotic brain theories somewhere else. Damn all thick-headed Dutchmen!' Not a word more would he say, but sat in his implacable sullenness as indifferent to me as though I had not been in the room at all. Thus departed for this time my chance of much learning from this so clever lunatic, so I shall go, if I may, and cheer myself with a few happy words with that sweet soul Madam Mina. Friend John, it does rejoice me unspeakable that she is no more to be pained, no more to be worried with our terrible things. Though we shall much miss her help, it is better so." "I agree with you with all my heart," I answered earnestly, for I did not want him to weaken in this matter. "Mrs. Harker is better out of it. Things are quite bad enough for us, all men of the world, and who have been in many tight places in our time, but it is no place for a woman, and if she had remained in touch with the affair, it would in time infallibly have wrecked her." So Van Helsing has gone to confer with Mrs. Harker and Harker, Quincey and Art are all out following up the clues as to the earth boxes. I shall finish my round of work and we shall meet tonight. MINA HARKER'S JOURNAL 1 October.--It is strange to me to be kept in the dark as I am today, after Jonathan's full confidence for so many years, to see him manifestly avoid certain matters, and those the most vital of all. This morning I slept late after the fatigues of yesterday, and though Jonathan was late too, he was the earlier. He spoke to me before he went out, never more sweetly or tenderly, but he never mentioned a word of what had happened in the visit to the Count's house. And yet he must have known how terribly anxious I was. Poor dear fellow! I suppose it must have distressed him even more than it did me. They all agreed that it was best that I should not be drawn further into this awful work, and I acquiesced. 
But to think that he keeps anything from me! And now I am crying like a silly fool, when I know it comes from my husband's great love and from the good, good wishes of those other strong men. That has done me good. Well, some day Jonathan will tell me all. And lest it should ever be that he should think for a moment that I kept anything from him, I still keep my journal as usual. Then if he has feared of my trust I shall show it to him, with every thought of my heart put down for his dear eyes to read. I feel strangely sad and low-spirited today. I suppose it is the reaction from the terrible excitement. Last night I went to bed when the men had gone, simply because they told me to. I didn't feel sleepy, and I did feel full of devouring anxiety. I kept thinking over everything that has been ever since Jonathan came to see me in London, and it all seems like a horrible tragedy, with fate pressing on relentlessly to some destined end. Everything that one does seems, no matter how right it me be, to bring on the very thing which is most to be deplored. If I hadn't gone to Whitby, perhaps poor dear Lucy would be with us now. She hadn't taken to visiting the churchyard till I came, and if she hadn't come there in the day time with me she wouldn't have walked in her sleep. And if she hadn't gone there at night and asleep, that monster couldn't have destroyed her as he did. Oh, why did I ever go to Whitby? There now, crying again! I wonder what has come over me today. I must hide it from Jonathan, for if he knew that I had been crying twice in one morning . . . I, who never cried on my own account, and whom he has never caused to shed a tear, the dear fellow would fret his heart out. I shall put a bold face on, and if I do feel weepy, he shall never see it. I suppose it is just one of the lessons that we poor women have to learn . . . I can't quite remember how I fell asleep last night. I remember hearing the sudden barking of the dogs and a lot of queer sounds, like praying on a very tumultuous scale, from Mr. Renfield's room, which is somewhere under this. And then there was silence over everything, silence so profound that it startled me, and I got up and looked out of the window. All was dark and silent, the black shadows thrown by the moonlight seeming full of a silent mystery of their own. Not a thing seemed to be stirring, but all to be grim and fixed as death or fate, so that a thin streak of white mist,that crept with almost imperceptible slowness across the grass towards the house, seemed to have a sentience and a vitality of its own. I think that the digression of my thoughts must have done me good, for when I got back to bed I found a lethargy creeping over me. I lay a while, but could not quite sleep, so I got out and looked out of the window again. The mist was spreading, and was now close up to the house, so that I could see it lying thick against the wall, as though it were stealing up to the windows. The poor man was more loud than ever, and though I could not distinguish a word he said, I could in some way recognize in his tones some passionate entreaty on his part. Then there was the sound of a struggle, and I knew that the attendants were dealing with him. I was so frightened that I crept into bed, and pulled the clothes over my head, putting my fingers in my ears. I was not then a bit sleepy, at least so I thought, but I must have fallen asleep, for except dreams, I do not remember anything until the morning, when Jonathan woke me. 
I think that it took me an effort and a little time to realize where I was, and that it was Jonathan who was bending over me. My dream was very peculiar, and was almost typical of the way that waking thoughts become merged in, or continued in, dreams. I thought that I was asleep, and waiting for Jonathan to come back. I was very anxious about him, and I was powerless to act, my feet, and my hands, and my brain were weighted, so that nothing could proceed at the usual pace. And so I slept uneasily and thought. Then it began to dawn upon me that the air was heavy, and dank, and cold. I put back the clothes from my face, and found, to my surprise, that all was dim around. The gaslight which I had left lit for Jonathan, but turned down, came only like a tiny red spark through the fog, which had evidently grown thicker and poured into the room. Then it occurred to me that I had shut the window before I had come to bed. I would have got out to make certain on the point, but some leaden lethargy seemed to chain my limbs and even my will. I lay still and endured, that was all. I closed my eyes, but could still see through my eyelids. (It is wonderful what tricks our dreams play us, and how conveniently we can imagine.) The mist grew thicker and thicker and I could see now how it came in, for I could see it like smoke, or with the white energy of boiling water, pouring in, not through the window, but through the joinings of the door. It got thicker and thicker, till it seemed as if it became concentrated into a sort of pillar of cloud in the room, through the top of which I could see the light of the gas shining like a red eye. Things began to whirl through my brain just as the cloudy column was now whirling in the room, and through it all came the scriptural words "a pillar of cloud by day and of fire by night." Was it indeed such spiritual guidance that was coming to me in my sleep? But the pillar was composed of both the day and the night guiding, for the fire was in the red eye, which at the thought gat a new fascination for me, till, as I looked, the fire divided, and seemed to shine on me through the fog like two red eyes, such as Lucy told me of in her momentary mental wandering when, on the cliff, the dying sunlight struck the windows of St. Mary's Church. Suddenly the horror burst upon me that it was thus that Jonathan had seen those awful women growing into reality through the whirling mist in the moonlight, and in my dream I must have fainted, for all became black darkness. The last conscious effort which imagination made was to show me a livid white face bending over me out of the mist. I must be careful of such dreams, for they would unseat one's reason if there were too much of them. I would get Dr. Van Helsing or Dr. Seward to prescribe something for me which would make me sleep, only that I fear to alarm them. Such a dream at the present time would become woven into their fears for me. Tonight I shall strive hard to sleep naturally. If I do not, I shall tomorrow night get them to give me a dose of chloral, that cannot hurt me for once, and it will give me a good night's sleep. Last night tired me more than if I had not slept at all. 2 October 10 p. m.--Last night I slept, but did not dream. I must have slept soundly, for I was not waked by Jonathan coming to bed, but the sleep has not refreshed me, for today I feel terribly weak and spiritless. I spent all yesterday trying to read, or lying down dozing. In the afternon, Mr. 
Renfield asked if he might see me.Poor man, he was very gentle, and when I came away he kissed my hand and bade God bless me. Some way it affected me much. I am crying when I think of him. This is a new weakness, of which I must be careful. Jonathan would be miserable if he knew I had been crying. He and the others were out till dinner time, and they all came in tired. I did what I could to brighten them up, and I suppose that the effort did me good, for I forgot how tired I was. After dinner they sent me to bed, and all went off to smoke together, as they said, but I knew that they wanted to tell each other of what had occurred to each during the day. I could see from Jonathan's manner that he had something important to communicate. I was not so sleepy as I should have been, so before they went I asked Dr. Seward to give me a little opiate of some kind, as I had not slept well the night before. He very kindly made me up a sleeping draught, which he gave to me, telling me that it would do me no harm, as it was very mild . . . I have taken it, and am waiting for sleep, which still keeps aloof. I hope I have not done wrong, for as sleep begins to flirt with me, a new fear comes, that I may have been foolish in thus depriving myself of the power of waking. I might want it. Here comes sleep. Goodnight.
{ "alphanum_fraction": 0.7743137919, "avg_line_length": 273.2407407407, "ext": "tex", "hexsha": "4a0c1951851030263cb390d1aa6f8058c4b28bc4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f02030a4f3faf6cfb861c7c660091107b4da3fc6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ggbioing/DraculaDossier", "max_forks_repo_path": "Hopkins_annotations/Chapters/Ch19.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f02030a4f3faf6cfb861c7c660091107b4da3fc6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ggbioing/DraculaDossier", "max_issues_repo_path": "Hopkins_annotations/Chapters/Ch19.tex", "max_line_length": 2385, "max_stars_count": null, "max_stars_repo_head_hexsha": "f02030a4f3faf6cfb861c7c660091107b4da3fc6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ggbioing/DraculaDossier", "max_stars_repo_path": "Hopkins_annotations/Chapters/Ch19.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6863, "size": 29510 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % CS622: Theory of Formal Languages % Copyright 2014 Pejman Ghorbanzade <[email protected]> % Creative Commons Attribution-ShareAlike 4.0 International License % More info: https://github.com/ghorbanzade/beacon %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section*{Question 3} Consider the \textit{dfa} $\mathcal{M} = \left( \{a,b\}, \{q_0,q_1,q_2\}, \delta, q_0, \{q_1\} \right) $ whose graph is given in Figure \ref{fig:DR4}. Determine the language accepted by the automaton $\mathcal{M}$. \begin{figure}[H]\centering \begin{tikzpicture}[->, >=stealth', shorten >=1pt, auto, node distance=3cm, semithick] \tikzstyle{final}=[circle,thick,draw=black,fill=gray!40,text=black] \node[state, initial] (0) {$q_0$}; \node[state] (2) [above right of=0] {$q_2$}; \node[state, final] (1) [below right of=2] {$q_1$}; \path (0) edge [bend left] node {b} (2) edge [bend right] node {a} (1) (1) edge [loop right] node {b} (1) edge [bend left] node {a} (2) (2) edge [bend left] node {b} (1) edge [loop above] node {a} (2); \end{tikzpicture} \caption{Graph of the Deterministic Finite Automaton $\mathcal{M}$} \label{fig:DR4} \end{figure} \subsection*{Solution} By definition, the language accepted by the given Deterministic Finite Automaton (\textit{dfa}) is the set $ L \left( \mathcal{M} \right) = \{ x \in \{a,b\}^* \mid \delta ^* \left( q_0,x \right) \in \{q_1\} \}$. The final state $q_1$ can be reached from the initial state $q_0$ either directly or through $q_2$. The final state can be reached directly from $q_0$ using the symbol $a$, or indirectly through state $q_2$ using words of the form $b\{a\}^*b$. Once $q_1$ is reached, we can stay in $q_1$ by reading any number of symbols $b$ and/or any number of loops of the form $a\{a\}^*b$, interleaved in any order. Therefore, the language $L(\mathcal{M})$ accepted by the \textit{dfa} $\mathcal{M}$ is \begin{equation} L(\mathcal{M}) = \{a \cup b\{a\}^*b\}\{b \cup a\{a\}^*b\}^* \end{equation}
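As a quick sanity check of this expression, consider the word $aabbab$ and apply the transition function step by step along the graph in Figure \ref{fig:DR4}: \begin{equation} \delta(q_0,a) = q_1, \quad \delta(q_1,a) = q_2, \quad \delta(q_2,b) = q_1, \quad \delta(q_1,b) = q_1, \quad \delta(q_1,a) = q_2, \quad \delta(q_2,b) = q_1 \in \{q_1\}, \end{equation} so $aabbab$ is accepted by $\mathcal{M}$, consistent with the prefix $a$ followed by the loop factors $ab$, $b$ and $ab$ in the expression above.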
{ "alphanum_fraction": 0.6109785203, "avg_line_length": 51.0975609756, "ext": "tex", "hexsha": "1084bdae0d563e02ad7ae1860fe17b0ef7fd301a", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-12-06T17:18:05.000Z", "max_forks_repo_forks_event_min_datetime": "2019-09-20T05:58:32.000Z", "max_forks_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/beacon", "max_forks_repo_path": "umb-cs622-2015f/src/tex/hw02/hw02q03.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/beacon", "max_issues_repo_path": "umb-cs622-2015f/src/tex/hw02/hw02q03.tex", "max_line_length": 214, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c36e3d1909b9e1e47b1ad3cda81f7f33b713adc4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/beacon", "max_stars_repo_path": "umb-cs622-2015f/src/tex/hw02/hw02q03.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-01T11:16:51.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-13T20:00:10.000Z", "num_tokens": 762, "size": 2095 }
\documentclass[main.tex]{subfiles} \begin{document} \section{Sheet 11} \subsection{Wave equation} \subsubsection{General travelling impulse} We want to show that any twice-differentiable function \(F(z \pm v t) = f(z, t)\) solves the wave equation: % \boxalign{ \begin{align} \label{eq:wave-equation} \qty(-\partial_{t}^2 + v^2 \partial_{z}^2) f(t, z) = 0 \,. \end{align}} We denote \(\alpha (t,z) = z \pm vt\), so that \(f = F \circ \alpha \) and: % \begin{subequations} \begin{align} \pdv[2]{f}{t} = \pdv{}{t }\qty( \dv{F}{\alpha } \pdv{\alpha }{t}) &= \dv{F}{\alpha } \pdv[2]{\alpha }{t} + \pdv{\alpha }{t} \pdv{}{t} \qty(\dv{F}{\alpha }) \\ &= \dv{F}{\alpha } \pdv[2]{\alpha }{t} + \pdv{\alpha }{t}\dv{}{\alpha } \qty(\dv{F}{\alpha } \pdv[]{\alpha }{t}) \marginnote{Commuted the derivatives}\\ &= \dv{F}{\alpha } \pdv[2]{\alpha }{t} + \pdv{\alpha }{t} \qty( \dv[2]{F}{\alpha } \pdv[]{\alpha }{t} + \dv{F}{\alpha } \cancelto{}{\pdv{}{t} \dv{\alpha }{\alpha }}) \marginnote{Derivative of a constant} \\ &= \dv{F}{\alpha } \pdv[2]{\alpha }{t} + \dv[2]{F}{\alpha } \qty(\pdv[]{\alpha }{t})^2 \label{eq:faa-di-bruno}\\ &= v^2 \dv[2]{F}{\alpha } \,, \end{align} \end{subequations} % where the expression we derived is fully general up to equation \eqref{eq:faa-di-bruno}, while the last passage comes from the fact that \(\pdv*{\alpha }{t} = \pm v\), so its square is \(v^2\) while the second derivative is zero since \(v\) is a constant. By the same reasoning, % \begin{subequations} \begin{align} \pdv[2]{f}{z} &= \dv{F}{\alpha } \pdv[2]{\alpha }{z} + \dv[2]{F}{\alpha } \qty(\pdv{\alpha }{z})^2 \\ &= \dv[2]{F}{\alpha } \,. \end{align} \end{subequations} Therefore, % \begin{align} \qty(-\partial_{t}^2 + v^2 \partial_{z}^2) f(t, z) = (-v^2 + v^2) \dv[2]{F}{\alpha } \equiv 0 \,. \end{align} \subsubsection{Gaussian pulse} At any fixed time \(t\), the function % \begin{align} F(\alpha ) = \exp(- \alpha^2) = \exp(- \qty(z - v t)^2) \, \end{align} % looks like a rescaled Gaussian, centered around \(z = vt\). As \(t\) increases it travels rightward with speed \(v\). A plot of this, with \(v = 6\) and \(t = 0, 1, 2\) is shown in figure \ref{fig:moving-gaussian-pulse}. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{figures/gauss.pdf} \caption{Pulse for \(t = 0, 1, 2\).} \label{fig:moving-gaussian-pulse} \end{figure} \subsubsection{Harmonic wave solution} We consider the harmonic solution % \begin{align} f(t,z) = A \cos(2 \pi \qty(\frac{z}{\lambda } - \frac{t}{T})) = A \cos(\frac{2\pi}{\lambda } \alpha (t,z)) \,, \end{align} % where \(\alpha = z - \lambda t /T\). Writing the harmonic solution this way allows us to see the properties easily: we have \(v = \lambda / T\) by comparison with the formula from before (\(\alpha = z - vt\)), while the periodicity follows from that of the cosine: we start from the fact that \(\cos(x) = \cos(x \pm 2 \pi )\). If we map \(z \rightarrow z + \lambda \) we get \(\alpha \rightarrow \alpha +\lambda \), and % \begin{align} \cos(\frac{2 \pi }{\lambda } \alpha ) \rightarrow \cos(\frac{2 \pi }{\lambda }\qty(\alpha + \lambda )) = \cos(\frac{2 \pi }{\lambda } \alpha + 2 \pi ) = \cos(\frac{2 \pi }{\lambda } \alpha ) \,. \end{align} Similarly, if we map \(t \rightarrow t + T\) we get \(\alpha \rightarrow \alpha - \lambda T/T = \alpha - \lambda \), so we can apply the same reasoning. This means that the wave has spatial period \(\lambda \) and temporal period \(T\). 
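As a direct cross-check, we can also plug the harmonic solution into the wave equation \eqref{eq:wave-equation}: the two second derivatives are
%
\begin{align}
\pdv[2]{f}{t} = - A \qty(\frac{2 \pi }{T})^2 \cos(2 \pi \qty(\frac{z}{\lambda } - \frac{t}{T}))
\qquad \text{and} \qquad
\pdv[2]{f}{z} = - A \qty(\frac{2 \pi }{\lambda })^2 \cos(2 \pi \qty(\frac{z}{\lambda } - \frac{t}{T})) \,,
\end{align}
%
so that
%
\begin{align}
\qty(- \partial_{t}^2 + v^2 \partial_{z}^2) f = A \qty[\qty(\frac{2 \pi }{T})^2 - v^2 \qty(\frac{2 \pi }{\lambda })^2] \cos(2 \pi \qty(\frac{z}{\lambda } - \frac{t}{T})) \,,
\end{align}
%
which vanishes identically precisely when \(v = \lambda / T\), consistently with what we found above.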
\subsubsection{Lightspeed waves} The equation \(\square f = 0\) is analogous to \eqref{eq:wave-equation} for \(v = 1 (= c)\). The difference is the three-dimensionality of the spatial derivatives; however, this does not present an issue. We still consider solutions of the form \(f (x^{\mu }) = F(\alpha (x^{\mu }) )\), with \(\alpha (x^{\mu }) = k_{\mu } x^{ \mu }\) for a constant covector \(k_{\mu }\). We will have the following derivatives % \begin{align} \pdv[2]{f}{(x^{\mu })} = \dv[2]{F}{\alpha } \qty( \pdv{\alpha }{(x^{\mu})} )^2 = \dv[2]{F}{\alpha } \qty(k_{\mu })^2 \,, \end{align} % where the index \(\mu \) can take the conventional values from \(0\) to \(3\). So, the equation \(\square f = 0\) reads % \begin{align} \dv[2]{F}{\alpha } \qty(-\qty(k_{0})^2 + \sum_{i=1 }^{3} \qty(k_{i })^2) = 0 \,, \end{align} % which is always satisfied if we take for \(k_{\mu }\) a null vector with respect to the Minkowski metric, \(k_{\mu } \eta^{\mu \nu } k_{\nu } = 0 \). In order for the wave to be propagating forwards in time we need to set \(k_{0} < 0\), or equivalently \(k^{0}>0\). So, the wave equation does indeed have travelling waves as solutions, which propagate with speed 1 (which can be gathered from the factor multiplying the Laplacian). \subsection{Gravitational waves} \subsubsection{Weak-field wave equation} We consider a perturbed metric \(g_{\mu \nu } = \eta_{\mu \nu } + h_{\mu \nu }\), to first order in \(h_{\mu \nu }\). First of all we need to compute the Christoffel symbols: they are given by % \begin{subequations} \begin{align} \Gamma^{\mu }_{\nu \rho } &= \frac{1}{2} g^{\mu \alpha } \qty(g_{\alpha \nu , \rho } + g_{\alpha \rho , \nu } - g_{\nu \rho , \alpha }) \\ &= \frac{1}{2} g^{\mu \alpha } \qty(h_{\alpha \nu , \rho } + h_{\alpha \rho , \nu } - h_{\nu \rho , \alpha }) \marginnote{\(\eta_{\mu \nu } \) is constant} \\ &= \frac{1}{2} \eta^{\mu \alpha } \qty(h_{\alpha \nu , \rho } + h_{\alpha \rho , \nu } - h_{\nu \rho , \alpha }) + \mathcal{O}(h^2) \marginnote{We only consider the first order in \(h\)} \\ &= \frac{1}{2} \qty(h^{\mu }_{\nu , \rho } + h^{\mu }_{\rho , \nu } - \tensor{h}{_{\nu \rho , }^{\mu }}) + \mathcal{O}(h^2) \,.
\end{align} \end{subequations} Now, let us look at the Ricci tensor: it is given % \begin{subequations} \begin{align} R_{\mu \nu } = R^{\alpha }_{\mu \alpha \nu } &= -2 \qty(\Gamma^{\alpha }_{\mu [\alpha , \nu ]} + \Gamma^{\beta }_{\mu [\alpha } \Gamma^{\alpha }_{\nu ], \beta }) \\ &= \Gamma^{\alpha }_{\mu \nu , \alpha } - \Gamma^{\alpha }_{\mu \alpha , \nu } + \mathcal{O}(h^2) \\ &= \frac{1}{2} \qty(\partial_{\alpha } \qty(h^{\alpha }_{\mu , \nu } + h^{\alpha }_{\nu , \mu } - \tensor{h}{_{\mu \nu , }^{\alpha }}) - \partial_{\nu } \qty(h^{\alpha }_{\mu , \alpha } + h^{\alpha }_{\alpha , \mu } - \tensor{h}{_{\mu \alpha ,}^{\alpha }})) \\ &= \frac{1}{2} \qty(\cancelto{}{h^{\alpha }_{\mu , \nu \alpha }} + h^{\alpha }_{\nu , \mu \alpha } - \tensor{h}{_{\mu \nu , }^{\alpha }_{\alpha }} - \cancelto{}{h^{\alpha }_{\mu , \alpha \nu }} - h^{\alpha }_{\alpha , \mu \nu } + \tensor{h}{_{\mu \alpha ,}^{\alpha }_{\nu }}) \marginnote{Commuting derivatives} \\ &= - \frac{1}{2} \square h_{\mu \nu } + \frac{1}{2} \qty(h^{\alpha }_{\nu , \mu \alpha } + \tensor{h}{_{\mu \alpha, }^{\alpha }_{\nu }} - h_{,\mu \nu } ) \marginnote{Introduced \(\square = \partial^{\alpha } \partial_{\alpha }\) and \(h = h^{\alpha }_{\alpha }\)} \\ &= - \frac{1}{2} \square h_{\mu \nu } + \frac{1}{2} \partial_{\mu } \qty(h_{\nu, \alpha }^{\alpha } - \frac{1}{2} h_{, \nu } ) + \frac{1}{2 } \partial_{\nu } \qty(h_{\mu, \alpha }^{\alpha } - \frac{1}{2} h_{, \mu }) \marginnote{Split \(h_{, \mu \nu } \) in two, swapped the positions of two contracted indices} \,, \end{align} \end{subequations} % which is our final expression. For brevity\footnote{Which is the formal way to say that I do not feel like writing it all out.} I omit the proof of the fact that setting \(2 h^{\alpha }_{\nu , \alpha } - h_{, \nu } = 0\) (the harmonic gauge condition) is actually a valid gauge choice, it can be found in the lecture notes. Making this gauge choice, the two terms in parentheses cancel so we find % \boxalign{ \begin{align} R_{\mu \nu } = - \frac{1}{2} \square h_{\mu \nu } \,, \end{align}} % which means that, if \(T_{\mu \nu } = 0\), the Einstein Field Equations read \(\square h_{\mu \nu } = 0\), which is precisely the wave equation for a wave propagating with speed \(c\). \subsubsection{Gravitational waves polarizations} We are considering a wave in the form % \begin{align} h_{\mu \nu } = \exp(i k_{\beta } x^{\beta }) \epsilon_{\mu \nu } = \exp(i \qty(- k t + kz)) \epsilon_{\mu \nu } \,, \end{align} % for a constant matrix \(\epsilon_{\mu \nu }\) and for \(k_{\mu } = (-k, 0,0,k)\). We have already shown in the exercise before that this satisfies the wave equation. We want to impose the conditions: \begin{enumerate} \item \(\epsilon_{0 \nu } = 0\); \item \(2\partial_{\alpha } h^{\alpha }_{\nu } - \partial_{\nu } h = 0\). \end{enumerate} The first of these also implies \(\epsilon_{\nu 0} = 0\) by symmetry (the symmetry of \(g_{\mu \nu } \) implies the symmetry of \(h_{\mu \nu }\), which in turn implies the symmetry of \(\epsilon_{\mu \nu }\), since the exponential is a scalar). Can we actually impose these conditions? Recall: our gauge transformations are \(h_{\mu \nu } \rightarrow h_{\mu \nu } - \partial_{(\mu } \xi_{\nu )}\) for some covector \(\xi_{ \nu }\). First of all, we impose the harmonic gauge condition. We need to show that we can use a gauge transformation to transform a generic perturbed metric \(\widetilde{h}_{\mu \nu }\) into a perturbed metric \(h_{\mu \nu }\) such that \(2 \partial^{\alpha } h_{\alpha\nu } - \partial_{\nu } h = 0\). 
Writing out the condition, we have % \begin{subequations} \begin{align} 2 \partial^{\alpha } \qty(\widetilde{h}_{ \alpha \nu } - \partial_{(\alpha } \xi_{\nu )}) - \partial_{\nu } \qty(\eta^{\rho \sigma } \qty(\widetilde{h}_{\rho \sigma } - \partial_{ (\rho } \xi_{\sigma )})) &\overset{?}{=} 0 \marginnote{The symmetrization in the second term is redundant since \(\eta_{\mu \nu }\) is already symmetric}\\ 2 \partial^{\alpha } \widetilde{h}_{\alpha \nu } - \partial_{\nu } \widetilde{h} &\overset{?}{=} \square \xi_{\nu } + \partial_{\nu } \qty(\partial^{\alpha } \xi_{\alpha }) - \partial_{\nu }\qty( \partial \cdot \xi ) \\ 2 \partial^{\alpha } \widetilde{h}_{\alpha \nu } - \partial_{\nu } \widetilde{h} &\overset{?}{=} \square \xi_{\nu } \,, \end{align} \end{subequations} % so we can use the value of \(2 \partial^{\alpha } \widetilde{h}_{\alpha \nu } - \partial_{\nu } \widetilde{h}\) as a source term \(\rho_{\nu } \) for the four PDEs \(\rho_{\nu } = \square \xi_{\nu }\). These always have a solution, which can be written explicitly using the Green function \(G(x)\), which is defined by % \begin{align} \square G(x - x') = \delta (x - x') \,, \end{align} % where \(x\) and \(x'\) are generic points in spacetime and \(\delta \) is a 4-dimensional Dirac delta function. Then, the solution can be written as % \begin{align} \xi_{\nu } (x) = \int_{ \mathbb{R}^{4}} G(x - y) \rho_{\nu }(y) \dd[4]{y} \,. \end{align} Therefore, we know that we can fix the harmonic gauge: for an arbitrary \(\widetilde{h}_{\mu \nu }\) we can choose an appropriate \(\xi_{\nu }\) which gives the appropriate vector field \(\square \xi_{\nu }\). Note that if we change \(\xi_{\nu } \rightarrow \xi_{\nu } + r_{\nu }\), where \(\square r_{\nu } =0\), then the d'Alembertian of the gauge vector field does not change; therefore the vector field we have \emph{still} allows us to set the harmonic gauge condition for our starting metric \(\widetilde{h}_{\mu \nu }\). We then say that \(r_{\nu }\) is our residual gauge freedom. So, we assume that the primary gauge has already been fixed, and we restrict ourselves to the residual one. We can use this residual gauge freedom to impose the conditions \(h_{0 \mu } =0\): let us start with \(h_{00} =0\). We start from % \begin{align} h_{00} = \widetilde{h}_{00} - \partial_{0} r_{0} \overset{!}{=} 0 \,, \end{align} % so if we set \(\partial_{0} r_{0} = \widetilde{h}_{00}\) we can set this gauge condition. This is compatible with \(\square r_0 = 0\): we will also need to have \(\nabla^2 r_0 = \partial_0 \partial_0 r_0 = \partial_0 \widetilde{h}_{00}\), and we note that this fixes \(r_0 \) to be the integral in \(\dd{x^{0}}\) of \(\widetilde{h}_{00}\) at each spatial point. Now, we show that we can also set \(h_{0i}=0\): it amounts to % \begin{align} h_{0i} = \widetilde{h}_{0i} - \frac{1}{2} \qty(\partial_{i} r_0 + \partial_0 r_{i}) \overset{!}{=} 0 \,, \end{align} % so if we set each of the spatial components \(r_{i}\) to % \begin{align} r_{i} = \int \dd{x^{0}} \qty(2 \widetilde{h}_{0i} - \partial_{i} r_0 ) \, \end{align} % we will have the desired gauge. Since we also need to set \(\square r_{i}=0\), we will also need to have \(\nabla^2 r_{i} = \partial_0 \partial_0 r_{i} = \partial_0 \qty(2 \widetilde{h}_{0i} - \partial_{i} r_0 )\). % We can show this for plane wave solutions: we do not actually lose generality, since they form a basis for any solution, the so-called Fourier basis. What do these conditions actually mean for the matrix \(\epsilon_{\mu \nu }\)?
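Before answering that analytically, a quick symbolic check (not part of the original notes) confirms that the plane-wave ansatz with the null wavevector \(k_{\mu } = (-k, 0, 0, k)\) really does satisfy \(\square h_{\mu \nu } = 0\), so the remaining freedom genuinely sits in the constant matrix \(\epsilon_{\mu \nu }\). The sketch below assumes \texttt{sympy} is available and uses the mostly-plus signature with \(c = 1\); the symbol names are illustrative.
\begin{verbatim}
# Symbolic sanity check: a plane wave with a null wavevector solves the
# flat-space wave equation (signature -+++, units with c = 1).
import sympy as sp

t, x, y, z, k = sp.symbols('t x y z k', real=True)

# Phase k_mu x^mu with k_mu = (-k, 0, 0, k): k_mu x^mu = -k*t + k*z
h = sp.exp(sp.I * (-k*t + k*z))   # scalar factor of h_{mu nu}; epsilon is constant

# d'Alembertian: box = -d_t^2 + d_x^2 + d_y^2 + d_z^2
box_h = -sp.diff(h, t, 2) + sp.diff(h, x, 2) + sp.diff(h, y, 2) + sp.diff(h, z, 2)
print(sp.simplify(box_h))         # -> 0, because k_mu is null

# The same check fails for a non-null wavevector, e.g. k_mu = (-2k, 0, 0, k):
h_bad = sp.exp(sp.I * (-2*k*t + k*z))
print(sp.simplify(-sp.diff(h_bad, t, 2) + sp.diff(h_bad, z, 2)))   # -> nonzero
\end{verbatim}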
We can write the trace of \(h_{\mu \nu }\) as % \begin{subequations} \begin{align} h= \eta^{\mu \nu } h_{\mu \nu } &= \exp(i k_{\beta } x^{\beta } ) \epsilon_{\mu \nu } \eta^{\mu \nu } \\ &= \exp(i k_{\beta } x^{\beta } ) \epsilon_{ij} \delta^{ij} \marginnote{The components with a 0 vanish, and \(\eta_{ij} = \delta_{ij}\)} \,, \end{align} \end{subequations} % so \(h\) depends on the sum of the diagonal entries of \(\epsilon_{ij}\), which is a constant; we will denote it as \(\epsilon = \epsilon_{ij} \delta^{ij}\). Let us now think through the possible values of \(\nu \) in the second condition. If \(\nu =0\) we find: % \begin{align} 2 \partial_{\alpha } h^{\alpha }_{0} - \partial_{0} h = 0 \,, \end{align} % but the first term vanishes since all the components of \(\epsilon_{\mu \nu }\) with an index \(0\) vanish, so the same holds for \(h_{\mu \nu }\). Therefore we get \(\partial_{0} h = 0\), which can be expanded into % \begin{align} - i k \exp(ik_{\beta } x^{\beta } ) \epsilon = 0 \,, \end{align} % but \(k \) cannot be 0 for a nontrivial solution, and the exponential always has norm 1, so we must conclude that \(\epsilon = 0\); this implies \(h = 0\). Moving on to \(\nu = j \): we get % \begin{align} 0 = \partial_{\alpha } h^{\alpha }_{j} = \partial_{i } h_{ij} = \partial_{3} h_{3 j} = ik \exp(ik_{\beta } x^{\beta } ) \epsilon_{3j} \,, \end{align} % but, similarly to before, neither \(k\) nor the exponential can be zero, therefore \(h_{3j}=0\), and by symmetry also \(h_{j3}=0\). This means that the matrix \(\epsilon_{\mu \nu }\) looks like % \begin{subequations} \begin{align} \epsilon_{\mu \nu } = \left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & \epsilon_{11} & \epsilon_{12} & 0 \\ 0 & \epsilon_{21} & \epsilon_{22} & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] \,, \end{align} \end{subequations} % so at this point we are left with two degrees of freedom: \(\epsilon_{12} = \epsilon_{21}\) by symmetry, and, since we found before that \(\epsilon_{\mu \nu }\) must be traceless, \(\epsilon_{11} = - \epsilon_{22}\). For simplicity I will now write only the smaller \(2 \times 2\) matrix \(\widetilde{\epsilon}_{mn}\), where \(m\) and \(n\) range from 1 to 2. Two possible real matrices which satisfy our conditions are the Pauli matrices \(\widetilde{\epsilon}= \sigma_{x} \) and \( \widetilde{\epsilon} = \sigma_{z}\): % \begin{subequations} \begin{align} \sigma_{x} = \left[\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right] \qquad \text{and} \qquad \sigma_{z} = \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right] \,. \end{align} \end{subequations} These two basis matrices represent the two basis \emph{linear} polarizations of a gravitational wave, and we can get any linearly polarized GW along the \(z\) direction by considering linear combinations of these with real coefficients. It is interesting to note, however, that we can also consider \emph{complex} combinations of these basis matrices: the matrices \((\sigma_{z} \pm i \sigma_{x} ) / \sqrt{2}\) represent circularly polarized waves, and they form a basis over \(\mathbb{C}\) for all of the polarizations. \end{document}
\documentclass[12pt letterpaper]{article} \usepackage{fullpage} \usepackage{graphicx} \usepackage{amsmath} \providecommand{\e}[1]{\ensuremath{\times 10^{#1}}} \usepackage{gensymb} \usepackage{float} \title{Gamma Ray Spectroscopy} \author{Johnny Minor \\ Partner: Kayla Mitchell} \date{\today} \begin{document} \maketitle %abstract should have a very brief overview of the goals and main results of the experiment. \begin{abstract} In this experiment we used given data to study the gamma ray interactions of various radioactive sources. The radiation comes from the decay of a nucleus; the emitted gamma rays then interact with a NaI crystal scintillator that registers the data. Unfortunately the equipment was broken, so we used data from previous classes. We used the software Origin to make plots of the channel number versus counts. We then created a calibration plot which allowed us to create a map from channel number to energy. We were then able to analyze each element for its various mechanisms of interaction such as the photoelectric effect, the Compton effect, back scattering, and the beta spectrum. We also attempted to calculate the rest mass of the electron and to analyze an unknown sample. \end{abstract} \newpage \section*{Description of Experiment} We were not able to actually conduct an experiment due to the equipment being broken. We instead used data from students of previous years. However, it might still be good to explain the apparatus for gamma ray spectroscopy. The apparatus we would have used would be very similar to the one found in figure \ref{fig:scintillator}. We would have had our seven radioactive sources, which we would have swapped out while recording the gamma ray spectrum of each. \begin{figure}[H] \caption{Schematic of a scintillation counter.} \centering \includegraphics[width=.5\textwidth]{scint_spec.jpg} \label{fig:scintillator} \end{figure} When the radioactive source is placed in front of the NaI(Tl) scintillator, the gamma ray will enter the scintillator and will interact with the NaI crystal. The crystal will emit light that corresponds to how much energy the gamma ray had when it interacted with the crystal. This light will then go through a photomultiplier and will finally be recorded with its channel number. This can then be made into a histogram. In our scenario this bin versus number of counts data was given to us. This is all the physics that was involved, and the rest of the experiment was in the analysis of the gamma ray spectra of the elements that we were given. \section*{Data and Analysis} Once the bin versus number of counts histogram had been created for all of the elements, we would then go on to create a calibration plot. We would fit a local Gaussian distribution on each element's photo peak because this is what we were told by Dr. Engels. It can be motivated because the photo peak looks to be Gaussian by inspection. This fit allowed us to find what bin the peak was in. We then found the energy that this corresponded to in the literature. Our calculations and findings can be seen in table \ref{calibration_table}.
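As a cross-check of the calibration described below, the linear channel-to-energy fit can be reproduced from the photo peak values in table \ref{calibration_table} with a short script. This is an illustrative numpy sketch rather than the Origin workflow that was actually used for the report.
\begin{verbatim}
# Sketch of the channel-to-energy calibration: fit a straight line to the
# photo-peak channels and their literature energies (values from the table).
import numpy as np

channel = np.array([27.767, 140.60, 40.025, 54.005, 330.99, 451.60,
                    262.26, 327.76, 482.97])
energy_mev = np.array([0.05936, 0.356, 0.08804, 0.12206, 1.17321, 1.33247,
                       0.66164, 0.83403, 1.2745])

# Least-squares line E = intercept + slope * channel
slope, intercept = np.polyfit(channel, energy_mev, 1)
print(f"E(ch) = {intercept:.5f} + {slope:.5f} * ch")  # ~ -0.0284 + 0.00291 * ch

# Map every channel of the spectrum onto an energy axis
channels = np.arange(0, 3000)
energies = intercept + slope * channels
\end{verbatim}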
\begin{table}[H] \centering \caption{The photo peak channel for each source and the corresponding energy found in the literature.} \label{calibration_table} \begin{tabular}{|l|l|l|} \hline Element & Channel & \begin{tabular}[c]{@{}l@{}}Photo Peak \\ (MeV)\end{tabular} \\ \hline Am241 & 27.767 & 0.05936 \\ \hline Ba133 & 140.60 & 0.356 \\ \hline Cd109 & 40.025 & 0.08804 \\ \hline Co57 & 54.005 & 0.12206 \\ \hline Co60 & 330.99 & 1.17321 \\ \hline Co60 & 451.60 & 1.33247 \\ \hline Cs137 & 262.26 & 0.66164 \\ \hline Mn54 & 327.76 & 0.83403 \\ \hline Na22 & 482.97 & 1.2745 \\ \hline \end{tabular} \end{table} We then plotted these values as a scatter plot and fit a linear equation to it. This is motivated by inspection of the scatter plot (figure \ref{fig:calibration}) clearly showing a linear trend. We found the linear equation to be \begin{equation} y = -0.02838 + 0.00291 x. \end{equation} This equation then allowed us to create a linear map from our channel numbers to actual energies. So we were able to create our new values of energy with this equation using $x$ values between 0 and 3000 with a spacing of one. \begin{figure}[H] \caption{The calibration plot of photo peak channel versus energy, with the fitted line.} \centering \includegraphics[width=.60\textwidth]{calibration_plot.png} \label{fig:calibration} \end{figure} Another very important step was to subtract the given background spectrum from our data before plotting it. As we found in lecture, the background spectrum had been taken for twice as long as the other elements' spectra. Therefore, we had to divide the given background spectrum in half. We then subtracted these values from our given data. This effectively removed the background counts in our data and left only the real data. We then created our plots of the energy versus counts. \begin{figure}[H] \caption{The energy vs counts spectrum for americium-241.} \centering \includegraphics[width=.60\textwidth]{am241.JPG} \label{fig:am241} \end{figure} In this spectrum we don't see counts for any noteworthy energy. It would seem that this is more background radiation that wasn't subtracted off. The literature states that the most common decay of Am-241 is an alpha decay which releases 5.388 MeV. Although the graph's scale doesn't go that high, we did not see this behavior before scaling down for this report. Clearly this must be a rarer alpha decay that didn't happen in the time the data was being collected. Perhaps poor data collection is also a suspect for the lack of counts. \begin{figure}[H] \caption{The energy vs counts spectrum for barium-133.} \centering \includegraphics[width=.60\textwidth]{Ba133.JPG} \label{fig:ba133} \end{figure} In barium-133 we see a slight photo peak at about 0.356 MeV, which is exactly what we would expect. We also see a very slight Compton edge that trails off right before the photo peak, which is what we would expect. \begin{figure}[H] \caption{The energy vs counts spectrum for cadmium-109.} \centering \includegraphics[width=.60\textwidth]{cd109.JPG} \label{fig:cd109} \end{figure} For cadmium-109 we see a photo peak at around 0.09 MeV, which is near what we would expect. We also see some high count values near zero MeV. We believe that this is background radiation that didn't get subtracted off, or it could be radiation from the apparatus itself. Both of these features are expected. \begin{figure}[H] \caption{The energy vs counts spectrum for cobalt-57.} \centering \includegraphics[width=.60\textwidth]{co57.JPG} \label{fig:co57} \end{figure} In cobalt-57 we can clearly see a photo peak that occurs near 0.122 MeV.
This is what we would expect. We can also see a small amount of background radiation that didn't get subtracted off, or it could be radiation from the apparatus itself. \begin{figure}[H] \caption{The energy vs counts spectrum for cobalt-60.} \centering \includegraphics[width=.60\textwidth]{co60.JPG} \label{fig:co60} \end{figure} In cobalt-60 we can actually see two photo peaks. One is around 1.17 MeV, and the second is near 1.33 MeV. These two seem to almost morph together into a single peak. Each photo peak should have its own corresponding Compton edge; however, it appears that they merge together and form a single Compton edge near 0.9 MeV. Each photo peak should also have its own back scattered peak, but they once again seem to merge together into a single back scatter peak near 0.25 MeV. We also see some slight counts that could be from background radiation, or it could be radiation from the apparatus itself. It should be noted that the features would be more prominent, and we would be able to distinguish between the two photo peaks and their corresponding features, if we had greater fidelity in our data. \begin{figure}[H] \caption{The energy vs counts spectrum for cesium-137.} \centering \includegraphics[width=.60\textwidth]{cs137.JPG} \label{fig:cs137} \end{figure} In cesium-137 we see a photo peak near 0.66 MeV, which is about what we would expect. The photo peak is due to the photoelectric effect. When a gamma ray hits the NaI crystal an electron is ripped off because the gamma ray energy is greater than the binding energy. The energy from the gamma ray is transferred to the electron and then we see the prominent peak at that energy. We can also see a Compton edge. The Compton edge is created by Compton scattering. The equation for this phenomenon is \begin{equation} \Delta \lambda = \frac{h}{m_e c} ( 1 - \cos \theta) \end{equation} where $\Delta\lambda$ is the change in wavelength from the incoming gamma ray to the outgoing gamma ray after it has interacted with the electron, $h$ is Planck's constant, $m_e$ is the mass of an electron, $c$ is the speed of light, and $\theta$ is the scattering angle between the incoming and outgoing gamma rays. When the gamma ray comes in and interacts with the electron it can transfer energy to it, which depends on the angle that it came in from. From this equation we can see that the energy which is imparted to the electron can vary from $\theta=0$ to $\theta=180$. When $\theta$ is 180 degrees the electron gets the maximum contribution of energy from the gamma ray (i.e. a ``head on'' collision), and this is called the Compton edge. We can also see the back scatter peak near 0.4 MeV. This phenomenon happens when the gamma ray scatters off the electron and the scattered gamma ray is detected instead of the electron. This peak will be at the difference between where the Compton edge ends and the central peak. The mathematical expression is $$ E_{\gamma'} = E_{\gamma} - E_{e^-} $$ where $E_{\gamma}$ is the incoming gamma ray energy, $E_{e^-}$ is the electron energy, and $E_{\gamma'}$ is the scattered gamma ray energy. What we detect in back scattering is the $E_{\gamma'}$. In our data the $E_{\gamma'}$ is 0.20 MeV, so the back scattered peak should be at about 0.20 MeV, which is what we find in our experimental spectra. We also see some counts near zero energy. We believe that these could be left over background radiation that didn't get subtracted off, or radiation from the machine itself.
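The Compton edge and back scatter energies quoted above for the 0.66 MeV cesium-137 line follow directly from the Compton relation; the short sketch below evaluates them using the accepted electron rest energy of 0.511 MeV. It is an illustrative check and not part of the original analysis; the value assigned to the measured edge is the textbook one rather than a number read off our spectrum.
\begin{verbatim}
# Compton kinematics for the Cs-137 photo peak (energies in MeV).
import math

E_gamma = 0.662          # incident gamma-ray energy (Cs-137 photo peak)
mec2 = 0.511             # electron rest energy, m_e c^2

def scattered_energy(E, theta_deg):
    """Scattered photon energy E' after Compton scattering through theta."""
    theta = math.radians(theta_deg)
    return E / (1.0 + (E / mec2) * (1.0 - math.cos(theta)))

E_back = scattered_energy(E_gamma, 180.0)  # back scatter peak (photon escapes)
T_edge = E_gamma - E_back                  # Compton edge (max electron energy)
print(f"back scatter ~ {E_back:.3f} MeV, Compton edge ~ {T_edge:.3f} MeV")
# ~0.184 MeV and ~0.478 MeV, consistent with the features described above.

# The same relation can be inverted to estimate m_e c^2 from a measured photo
# peak and Compton edge, as attempted later in the Results section:
T_meas = 0.478                             # placeholder for a measured edge
mec2_est = 2 * E_gamma * (E_gamma - T_meas) / T_meas
print(f"m_e c^2 estimate ~ {mec2_est:.3f} MeV")   # ~0.51 MeV
\end{verbatim}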
\begin{figure}[H] \caption{The energy vs counts spectrum for manganese-54.} \centering \includegraphics[width=.60\textwidth]{mn54.JPG} \label{fig:mn54} \end{figure} In manganese-54 we can see a photo peak near 0.83 MeV, which is what we would expect, and also the Compton edge that ends near 0.60 MeV. Both of these we would expect when we compare with the literature. \begin{figure}[H] \caption{The energy vs counts spectrum for sodium-22.} \centering \includegraphics[width=.60\textwidth]{na22.JPG} \label{fig:na22} \end{figure} In sodium-22 we can see the photo peak at 1.27 MeV and also the Compton edge. We can also note the peak at 0.5 MeV, which is due to the annihilation of positrons; sodium-22 is special because it is a positron emitter. We also see a beta peak at around 0.2 MeV. All of these are as expected from our given theory. \section*{Results and Conclusions} In this experiment we looked at the gamma spectrum for numerous radioactive elements. We were able to look at their spectra and determine the various mechanisms that theory predicts. We were also then able to compare with other experimental data available to see how good our data was. To find the rest mass of an electron we can use the equation for the Compton effect and the energies $E_{\gamma}$ and $E_{\gamma'}$. We can use our modified Compton effect equation \begin{equation} E_{\gamma'} = \frac{E_{\gamma}}{1 + \frac{E_{\gamma}}{m_e c^2}(1 - \cos(\theta))} \end{equation} If we set $\theta = 180$ then we get \begin{equation} E_{\gamma'} = \frac{E_{\gamma}}{1 + \frac{2E_{\gamma}}{m_e c^2}} \end{equation} We can then solve this for the mass of the electron $m_e$ and plug in $E_{\gamma}$ as the photo peak of a sample and $E_{\gamma'}$ as the energy at the Compton edge. We find these values for cesium-137 to be 0.66 MeV and 0.45 MeV. When we plug these values in we find the rest mass of the electron to be 7.33\e{-24} kg. This is nowhere near the value of 9.109\e{-31} kg that we would expect. So, either we have bad data, or we did the calculation wrong. \begin{figure}[H] \caption{The energy vs counts spectrum for an unknown sample.} \centering \includegraphics[width=.60\textwidth]{unknown_sample.JPG} \label{fig:unknown_sample} \end{figure} In our unknown sample we notice two prominent photo peaks. We were told that the sample had zinc in it. We found that zinc-60 has a prominent decay at 0.67 MeV, so we believe that is what that photo peak is from. We also believe the other peak higher in energy is due to cobalt-60, whose two peaks have been pushed together. If you look closely you can make out two very faint peaks. Since cobalt-60 has two peaks near this energy of 1.2 MeV, we are led to believe that those peaks are from it. So, we believe it is a zinc-60 and cobalt-60 sample. In order to determine the resolution of the detector we used a measure of the full width at half maximum on two different photo peaks. Since the photo peaks are approximately Gaussian we can use the equation \begin{equation} \mathrm{FWHM} = 2\sqrt{2 \ln 2 } \; \sigma \approx 2.355 \; \sigma. \end{equation} So we took our two full width at half maximum values and took an average, and we found it to be 0.15 MeV. \end{document}
% data.tex % % computational data processing section % % Samuel Rohrer \subsection{Prerequisites} \subsubsection{Programming experience in a few languages} \subsubsection{Data structures and algorithms experience} \subsubsection{Understanding of computer organization} \subsubsection{Understanding of discrete math and complexity measures} \subsection{Types of Databases: relational, key value, document} \subsection{SQL vs. NoSQL} \subsection{SQL: Redis, MariaDB, MySQL, PostgreSQL, SQLite} \subsection{SQL Queries} \subsection{Database indexes} \subsection{NoSQL: CouchDB, Cassandra, MongoDB, Elasticsearch, PouchDB} \subsection{Linux kernel and drivers, Operating System Threads, Mutexes \& Locks} \subsection{Hardware Parallel Processing} \subsection{Hyperthreading} \subsection{Multicore Systems} \subsection{Programming Multicore Systems \& Compilers} \subsection{Filesystems \& Server Architectures} \subsection{Networking and the Seven Layer Internet} \subsection{Application Specific Hardware: FPGA, GPU, GPGPU, TPU, ASIC} \subsection{Software Parallel Processing Libraries: OpenMP, CUDA, cuDNN} \subsection{Distributed Computing Paradigms} \subsection{Software Libraries for Distributed Computing} \subsection{Orchestration: Virtual Machine, Docker, Kubernetes, Mesos} \subsection{Cold Storage: Object Storage, NFS, HDFS} \subsection{Data Pipeline: Kafka, RabbitMQ, Pub/Sub, Flume (logs)} \subsection{Stream Processing: Hadoop, Flink, Storm} \subsection{Data Analysis: Hive, Spark, SparkML, Pig, Sawzall (logs)} \subsection{Workflow Automation Tools: cron, AirFlow, MLFlow} \subsection{Developer Operations, Continuous Integration, Continuous Deployment} \subsection{Cloud Infrastructure as a Service: GCP, AWS, Azure} \subsection{Deploying a simple application to a cloud infrastructure provider} \subsection{Asymptotic Performance, Performance Tuning (IO, memory, CPU)}
\section{Experiments}\label{sec:expts} \begin{table}[h] \centering \small \setlength{\tabcolsep}{4pt} \begin{tabular}{l l l l l l l l l } \toprule Dataset & \#folds/ & \#users & total & training set & \multicolumn{2}{c}{test set} & \multicolumn{2}{c}{\#features} \\ & samples & & \#items & \#pairs & \#pairs & \#items & items & users \\ \midrule Simulation a and b & 25 & 25 & 100 & 900 & 0 & 100 & 2 & 2 \\ Simulation c & 25 & 25 & 100 & 36--2304 & 0 & 100 & 2 & 2\\ \midrule Sushi A-small & 25 & 100 & 10 & 500 & 2500 & 10 & 18 & 123 \\ Sushi A & 25 & 100 & 10 & 2000 & 2500 & 10 & 18 & 123 \\ Sushi B & 25 & 5000 & 100 & 50000 & 5000 & 100 & 18 & 123 \\ \midrule UKPConvArgCrowdSample & 32 & 1442 & 1052 & 16398 & 529 & 33 & 32310 & 0 \\ \bottomrule \end{tabular} \caption{Summary of datasets showing average counts for the training and test sets used in each fold/subsample. %For simulations, we generate the subsamples of data independently, %for Sushi we select subsamples independently from the dataset. %Values for UKPConvArgCrowdSample are means per fold, %where the test data in each fold corresponds to a single topic and stance. The test sets all contain gold-standard rankings over items as well as pairwise labels, except the simulations, which are not generated as we evaluate using the rankings only. Numbers of features are given after categorical labels have been converted to one-hot encoding, counting each category as a separate feature. } \label{tab:datasets} \end{table} Our experiments test key aspects of crowdGPPL: predicting consensus utilities and personal preferences from pairwise labels %predicting consensus utilities from noisy crowdsourced preferences % in a subjective task; and the scalability of our proposed SVI method. In Section \ref{sec:exp_synth}, we use simulated data to test the robustness of crowdGPPL to noise and unknown numbers of latent components. Section \ref{sec:sushi} compares different configurations of the model against alternative methods using the \emph{Sushi} datasets\footnote{\url{http://www.kamishima.net/sushi/}}~\citep{kamishima2003nantonac}. Section \ref{sec:exp_scale} evaluates prediction performance and scalability of crowdGPPL %to predict both personal and consensus utilities in a high-dimensional NLP task with sparse, noisy crowdsourced preferences (\emph{UKPConvArgCrowdSample}\footnote{\url{https://github.com/ukplab/tacl2018-preference-convincing}}, ~\citet{simpson2018finding}). %Using this latter dataset, we then analyse %the scalability of our SVI approach. Finally, Section \ref{sec:components} evaluates whether crowdGPPL ignores redundant components. %when the number of components, $C$, %is larger than necessary. The datasets are summarised in Table \ref{tab:datasets}. %\paragraph{Method Comparison. } % Ranking-SVM baseline -- only easy to compare in the single user case. My GPPL paper perhaps needs % this adding in any follow up works. % Houlsby tests: with/without user features (without is better with few users). + a hierarchical % model, BI (multi task preference learning, Birlitiu et al), and the GPPL-joint model. None of % these are done at scale, which we can do with our inference method --> *this is a new claim i.e. % new empirical results*. They also test a per-user model. As baselines, we compare crowdGPPL against \emph{GPPL}, which we train on all users' preference labels to learn a single utility function, %and a Gaussian process over the joint feature space of users and items %(\emph{joint-GPPL}), as proposed by \citet{guo2010gaussian}. 
and \emph{GPPL-per-user}, in which a separate GPPL instance is learned for each user with no collaborative learning. We also compare against the \emph{GPVU} model~\citep{khan2014scalable} and \emph{collabGP} ~\citep{houlsby2012collaborative}. CollabGP contains parameters for each pairwise label and each user, so has a larger memory footprint than our SVI scheme, which stores only the moments at the inducing points. %CollabGP does not learn a consensus or scale parameters $s_c^{(v)}$ and $s_c^{(w)}$, %and predicts pairwise labels directly rather than utilities that can be used for ranking. %combines %matrix factorisation without input features and a separate GP per user. %Inference over the latent %components does not benefit from any item or user features, %and maximum likelihood steps are used to learn the parameters, rather than applying a fully-Bayesian %treatment. % While the original %implementation used some maximum likelihood steps, we implement GPVU using SVI method %to directly compare against crowdGPPL and GPPL-per-user. % --not quite, as we emulate the ML steps without implementing them exactly. We test \emph{crowdBT}~\citep{chen2013pairwise} as part of a method for predicting consensus utilities from crowdsourced pairwise preferences. CrowdBT models each worker's accuracy, assuming that the differences between workers' labels are due to random errors rather than subjective preferences. Since crowdBT does not account for the item features, it cannot predict utilities for items that were not part of the training set. %and is hence purely an aggregation method. We therefore treat the posterior mean utilities produced by crowdBT as training labels for Gaussian process regression using SVI. We set the observation noise variance of the GP equal to the crowdBT posterior variance of the utilities to propagate uncertainty from crowdBT to the GP. This pipeline method, \emph{crowdBT--GP}, tests whether it is sufficient to treat annotator differences as noise, in contrast to the crowdGPPL approach of modelling individual preferences. %Uncertainty is propagated from crowdBT to the GP through the observation noise variance, %hence labels that crowdBT has not learned confidently will have less effect on the GP. We evaluate the methods using the following metrics: \emph{accuracy (acc)}, which is the fraction of correct pairwise labels; \emph{cross entropy error (CEE)} between the posterior probabilities over pairwise labels and the true labels, which captures the quality of the pairwise posterior; \emph{Kendall's $\tau$}, which evaluates the ranking obtained by sorting items by predicted utility. \subsection{Simulated Noisy Data}\label{sec:exp_synth} \begin{figure}[t] %\subfloat[Inferring preferences for a single user]{ %\label{fig:simA} %\includegraphics[width=.35\columnwidth]{../../results/synth_3/single_user/tau_test} %} \subfloat[Consensus]{ \label{fig:simB} \includegraphics[width=.322\columnwidth,clip=true,trim=15 4 13 0]{tau_test} } \subfloat[Personal preferences]{ \label{fig:simC} \includegraphics[width=.306\columnwidth,clip=true,trim=30 5 14 0]{tau_test_personal} } \subfloat[Latent factors]{ \label{fig:simD} \includegraphics[width=.324\columnwidth,clip=true,trim=8 0 13 0]{num_pairs_r} } \caption{Simulations: rank correlation between true and inferred utilities. (a) \& (b) vary the level of noise in pairwise training labels, (c) varies the number of pairwise training labels. 
} \end{figure} First, we evaluate whether crowdGPPL is able to model individual preferences with varying amounts of labelling noise. We set the number of latent components to $C=20$ and all Gamma hyperparameters for crowdGPPL, GPPL and GPPL-per-user to $\alpha_0 = 1$, $\beta_0 = 100$. We use Mat\'ern 3/2 kernels with the length-scale for each dimension of the feature vector, $d$, chosen by a median heuristic: \begin{flalign} l_{d,\mathrm{MH}} = \mathrm{median}( \{ ||x_{i,d} - x_{j,d}||, \forall i=1,..,N, \forall j=1,...,N\} ). \end{flalign} This is a computationally frugal way to choose the length-scales, %which effectively normalizes the features and that has been extensively used in various kernel methods (e.g., ~\citet{bors1996median,gretton2012optimal}). The SVI hyperparameters were set to $\rho=0.9$, $P_i=1000$ and $\epsilon=1$. \citet{hoffman2013stochastic} found that higher values of $\rho$ gave better final results but slightly slower convergence, recommending $0.9$ as a good balance across several datasets, and did not find any effect from changing $\epsilon$. We follow their recommendations and do not find it necessary to perform further tuning in our experiments. % While we do not find it necessary in our experiments, % it is possible to tune $\rho$ and $\epsilon$ using, for example, % grid search to trade-off between convergence time and maximising the lower bound of the log marginal % likelihood, $\mathcal{L}$. Both $M$ and $P_i$ are constrained %y also found that increasing improves performance, %however this is likely to be constrained in practice by the computational resources available -- we investigate these further in Section \ref{sec:exp_scale}. In simulation (a), to test consensus prediction, we generate a $20\times 20$ grid of points and split them into 50\% training and test sets. For each gridpoint, we generate pairwise labels by drawing from the generative model of crowdGPPL with $U=20$ users, $C=5$, each $s^{(v)}_c$ set to random values between 0.1 and 10, and $s^{(w)}_c = 1, \forall c$. We vary %the inverse scale of the consensus function, $s^{(t)}$ to control the noise in the consensus function. We train and test crowdGPPL with $C=U$ and repeat the complete experiment $25$ times, including generating new data. %for each value of $s^{(t)}$. Figure \ref{fig:simB} shows that crowdGPPL better recovers the consensus ranking than the baselines, even as noise increases, as GPPL's predictions are worsened by biased users who deviate consistently from the consensus. For GPPL-per-user, the consensus is simply the mean of all users' predicted utilities, so does not benefit from sharing information between users when training. For simulation (b), we modify the previous setup by fixing $s^{(t)} = 5$ and varying $s^{(v)}_c,\forall c$ to evaluate the methods' ability to recover the personal preferences of simulated users. The results in Figure \ref{fig:simC} show that crowdGPPL is able to make better predictions when noise is below $0.3$. % but its benefit disappears when %the noise level increases further. %In the final simulation, %we evaluate the effect of the quantity of training data %with different numbers of latent factors in the generating model. We hypothesise that crowdGPPL can recover latent components given sufficient training data. %scenario that would require more training data. 
In simulation (c), we generate data using the same setup as before, but fix $s^{(t)} = s^{(v)}_c = s^{(w)} = 1,\forall c$ and vary the number of pairwise training labels and the number of true components through $C_{\mathrm{true}} \in \{ 1, 3, 10, 20\}$. %We evaluate the correlation between inferred and true user components, %we match inferred factors to true factors, We match inferred components to the true components as follows: compute Pearson correlations between each unmatched true component and each unmatched inferred component; select the pair with the highest correlation as a match; repeat until all true components are matched. In Figure \ref{fig:simD} we plot the mean correlation between matched pairs of components. For all values of $C_{\mathrm{true}}$, increasing the number of training labels beyond $700$ brings little improvement. Performance is highest when $C_{\mathrm{true}} = 20$, possibly because the predictive model has $C = 20$, so is a closer match to the generating model. However, % crowdGPPL is able to recover latent components reasonably well for all values of $C_{\mathrm{true}}$ given $>500$ labels, despite mismatches between $C$ and $C_{\mathrm{true}}$. % Does this pose a question: does the mismatch affect the performance? If the correlations % decrease, it may mean that the model is decomposing a factor into a sum of multiple factors, % or it may just be unable to learn it. This would need a new experiment: for 700 training pairs, % how does the accuracy of personalised predictions vary with the number of latent factors? % I think this needs us to vary C and keep C_true=3, otherwise we don't know whether the % performance differences are due to the mismatched no. factors or due to different underlying dataset, i.e. we need to keep the data the same to compare! \subsection{Sushi Preferences}\label{sec:sushi} \begin{table} \centering \small \setlength{\tabcolsep}{4pt} \begin{tabular}{l l l l@{\hskip 0.5cm} l l l@{\hskip 0.5cm} l l l} \toprule & \multicolumn{3}{c}{\textbf{Sushi-A-small}} & \multicolumn{3}{c}{\textbf{Sushi-A}} & \multicolumn{3}{c}{\textbf{Sushi-B}} \\ Method & Acc & CEE & $\tau$ & Acc & CEE & $\tau$ & Acc & CEE & $\tau$ \\ \midrule crowdGPPL & \textbf{.71} & \textbf{.56} & .48 & .84 & .33 & .79 & .76 & .50 & . 54 \\ crowdGPPL $\backslash $inducing & .70 & .60 & .45 & .84 & .34 & .78 & - & - & - \\ crowdGPPL $\backslash \bs u$ & .70 & .58 & .46 & \textbf{.85} & \textbf{.31} & \textbf{.80} & \textbf{.78} & .50 & .57 \\ crowdGPPL $\backslash \bs u \backslash \bs x$ & \textbf{.71} & .57 & \textbf{.49} & \textbf{.85} & .33 & .80 & .77 & \textbf{.49} & .56 \\ crowdGPPL $\backslash \bs u,\backslash \bs t$ & .68 & .60 & .43 & .84 & .33 & .80 & .76 & .51 & .58 \\ \midrule GPPL & .65 & .62 & .31 & .65 & .62 & .31 & .65 & .62 & .31 \\ GPPL-per-user & .67 & .64 & .42 & .83 & .40 & .79 & .75 & .60 & \textbf{.60} \\ collabGP & .69 & .58 & n/a & .83 & .35 & n/a & .76 & \textbf{.49} & n/a \\ collabGP$\backslash \bs u$ & .69 & .59 & n/a & .84 & .33 & n/a & .76 & .50 & n/a \\ GPVU & .70 & .67 & .43 & .72 & .67 & .42 & .73 & .59 & .52 %collabGP$\backslash u, C=50$ & .70 & .58 & n/a & .85 & .33 & n/a & \\ \\ \bottomrule \end{tabular} \caption{Predicting personal preferences on \emph{Sushi} datasets, means over $25$ repeats. %CrowdGPPL uses $C=20$ unless otherwise specified. The standard deviations are $\leq 0.02$ for all accuracies, $\leq 0.08$ for all CEE, and $\leq 0.03$ for all $\tau$. 
For Sushi-B, crowdGPPL, GPPL-per-user and collabGP had runtimes of $~30$ minutes on a 12 core, 2.6GHz CPU server; GPPL required only 1 minute. } \label{tab:sushi} \end{table} %We use the \emph{Sushi} datasets %to compare different variants of crowdGPPL against the baselines and rival methods. The sushi datasets contain, for each user, a gold standard preference ranking of $10$ types of sushi, from which we generate gold-standard pairwise labels. %These labels can be considered noise-free, since %they are derived directly from the gold standard ranking. To test performance with very few training pairs, we obtain \emph{Sushi-A-small} by selecting $100$ users at random from the complete \emph{Sushi-A} dataset, then selecting $5$ pairs for training and $25$ for testing per user. For \emph{Sushi-A}, we select $100$ users at random from the complete dataset, then split the data into training and test sets by randomly selecting $20$ training $25$ test pairs per user. For \emph{Sushi-B}, we use all $5000$ workers, and subsample $10$ training and $1$ test pair per user. We compare standard crowdGPPL with four other variants: \begin{itemize} \item \emph{crowdGPPL$\backslash$inducing}: does not use the sparse inducing point approximation and instead uses all the original points in the training set; \item \emph{crowdGPPL$\mathbf{\backslash \bs u}$}: ignores the user features; \item \emph{crowdGPPL$\mathbf{\backslash \bs u \backslash \bs x}$}: ignores both user and item features; \item \emph{crowdGPPL$\mathbf{\backslash \bs u \backslash \bs t}$}: excludes the consensus function $\bs t$ from the model as well as the user features. \end{itemize} For methods with $\backslash\bs u$, the user covariance matrix, $\bs L$, is replaced by the identity matrix, and for crowdGPPL$\mathbf{\backslash \bs u \backslash \bs x}$, $\bs K$ is also replaced by the identity matrix. As the user features do not contain detailed, personal information (only region, age group, gender, etc.), they are not expected to be sufficiently informative to predict personal preferences on their own. Therefore, for crowdGPPL and crowdGPPL$\backslash$inducing, we compute $\bs L$ for 10 latent components using the Mat\'ern 3/2 kernel function and use the identity matrix for the remaining 10. CollabGP is also tested with and without user features. We set hyperparameters $C=20$, $\epsilon=1$, $\rho=0.9$, $P_i=200$ for \emph{Sushi-A-small} and \emph{Sushi-A}, and $P_i=2000$ for \emph{Sushi-B}, without optimisation. For the gamma hyperparameters, a grid search over $\{10^{-1},...,10^3\}$ on withheld user data from \emph{Sushi-A} resulted in $\alpha_0=1, \beta_0=100$ for GPPL variants, and $\alpha_0^{(t)}=1,\beta_0^{(t)}=100$, $\alpha_0^{(v)}=1,\beta_0^{(v)}=10$ and $\alpha_0^{(w)}=1,\beta_0^{(w)}=10$ for crowdGPPL variants. %All other hyperparameters are the same %as for Section \ref{sec:exp_synth}. The complete process of subsampling, training and testing, was repeated $25$ times for each dataset. The results in Table \ref{tab:sushi} illustrate the benefit of personalised models over single-user GPPL. The inducing point approximation does not appear to harm performance of crowdGPPL, but including the user features tends to decrease its performance compared to crowdGPPL$\backslash\bs u$ and crowdGPPL$\backslash\bs u\backslash\bs x$, except on Sushi-A-small, where they may help with the small amount of training data. 
Comparing crowdGPPL$\backslash\bs u$ with crowdGPPL$\backslash\bs u\backslash\bs t$, including the consensus function improves performance modestly. The strong performance of GPPL-per-user suggests that even 10 pairs per person were enough to learn a reasonable model for \emph{Sushi-B}. As expected, the more memory-intensive collabGP performs comparably well to crowdGPPL on accuracy and CEE but does not provide a ranking function for computing Kendall's $\tau$. GPVU does not perform as well as other personalised methods on Sushi-A and Sushi-B, potentially due to its maximum likelihood inference steps. % The inference method of collabGP did not present practical problems % on the Sushi dataset on computer with 16GB RAM, % and in fact produces relatively fast run times due to its more efficient implementation using a C library in place of % Python. The results show that crowdGPPL is competitive despite the approximate SVI method, so in the next experiment, we test the approach on a larger crowdsourced dataset where low memory consumption is required. \subsection{Argument Convincingness}\label{sec:exp_scale} % TODO: split the personalised results by no. training examples per worker % and by model confidence. Does filtering predictions with low confidence estimates % help? Do workers with more data get better predictions? We evaluate consensus learning, personal preference learning and scalability on an NLP task, namely, ranking arguments by \emph{convincingness}. The task requires learning from crowdsourced data, but is not simply an aggregation task as it requires learning a predictor for test documents that were not compared by the crowd. The dataset, \emph{UKPConvArgCrowdSample}, was subsampled by \citet{simpson2018finding} from raw data provided by \citet{habernal2016argument}, and contains arguments written by users of online debating forums, with crowdsourced judgements of pairs of arguments indicating the most convincing argument. %The task is to quantify how convincing each argument is %by learning a model from pairwise preference labels obtained from crowdworkers %on Amazon Mechanical Turk. %Each argument is represented by $32,310$ numerical features and the The data is divided into $32$ folds ($16$ topics, each with 2 opposing stances). For each fold, we train on $31$ folds and test on the remaining fold. We extend the task %described in \citet{simpson2018finding} to predicting both the consensus and personal preferences of individual crowd workers. %thereby performing a cross-topic evaluation. %test the ability of the preference learning methods to predict the consensus %by training on raw crowdsourced pairwise labels %for $31$ topics, and testing against the gold pairwise labels and rankings for the %remaining topic. This` process is repeated for all $32$ topics. GPPL previously outperformed SVM and Bi-LSTM methods at consensus prediction for \emph{UKPConvArgCrowdSample}~\citep{simpson2018finding}. %We compare GPPL with crowdGPPL and also test each method's %ability to predict the raw crowdsourced labels, i.e. the individual preference labels %supplied by each worker. We hypothesise that a worker's view of convincingness depends on their personal view of the subject discussed, so crowdGPPL may outperform GPPL and crowdBT-GP on both consensus and personal preference prediction. %We expect crowdGPPL to improve consensus prediction over crowdBT-GP, %as it accounts for personal biases as well as noisy labels. 
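For concreteness, the length-scale choices used throughout the experiments (the median heuristic of Section \ref{sec:exp_synth} and the scaled variant introduced below for this high-dimensional dataset) amount to only a few lines of code. The following numpy sketch is illustrative and is not taken from the released implementation.
\begin{verbatim}
# Median-heuristic length-scales, one per feature dimension (illustrative
# sketch; not the authors' released code).
import numpy as np

def median_heuristic_lengthscales(X, scale=1.0):
    """l_d = scale * median over item pairs (i, j) of |x_{i,d} - x_{j,d}|."""
    N, D = X.shape
    ls = np.empty(D)
    for d in range(D):
        diffs = np.abs(X[:, d, None] - X[None, :, d])  # N x N pairwise distances
        ls[d] = scale * np.median(diffs)
    return ls

# Plain heuristic, as used for the simulations and Sushi experiments:
#   ls = median_heuristic_lengthscales(X)
# Scaled variant for the 32,310-dimensional argument features:
#   ls = median_heuristic_lengthscales(X, scale=20 * np.sqrt(X.shape[1]))
\end{verbatim}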
%For consensus prediction, crowdBT handles disagreements between workers by treating them %as noisy labellers, but we expect that by accounting for %for personal biases as well as noise, crowdGPPL may further improve consensus prediction. The dataset contains $32,310$ linguistic and embedding features for each document (we use mean GloVe embeddings for the words in each document, see \citet{simpson2018finding}). The high-dimensionality of the input feature vectors requires us to modify the length-scale heuristic for all GP methods, %of \emph{UKPConvArgCrowdSample}. as the distance between items grows with the number of dimensions, which causes the covariance to shrink to very small values. We therefore use $l_{d,\mathrm{scaledMH}} = 20\sqrt{D} \times l_{d,\mathrm{MH}}$, where $D$ is the dimension of the input feature vectors, and the scale was chosen by comparing the training set accuracy with scales in $\{\sqrt{D}, 10\sqrt{D}, 20\sqrt{D}, 100\sqrt{D}\}$. The hyperparameters are the same as Section \ref{sec:exp_synth} except GPPL uses $\alpha_0 = 2$, $\beta_0 = 200$ and crowdGPPL uses $\alpha^{(t)}_0=\alpha^{(v)}_0=2$, $\beta^{(t)}_0=\beta^{(t)}_0=200$, $\alpha^{(w)}_0=1$, $\beta^{(w)}_0=10$. We do not optimise $\alpha_0$, but choose $\beta_0$ by comparing training set accuracy for GPPL with $\beta_0 \in \left\{2,200,20000\right\}$. The best value of $\beta_0$ is also used for $\beta^{(t)}_0$ and $\beta^{(v)}_0$, then training set accuracy of crowdGPPL is used to select $\beta^{(w)}_0 \in \left\{1, 10, 100 \right\}$. We set $C=50$, $M=500$, $P_i=200$, $\epsilon=10$, and $\rho=0.9$ without optimisation. \begin{table} \centering \small \setlength{\tabcolsep}{4pt} \begin{tabular}{ l l l l@{\hskip 1.0cm} l l l@{\hskip 1.0cm} l l l} \hline & \multicolumn{3}{l}{Consensus} & \multicolumn{3}{l}{Personal: all workers} &\multicolumn{3}{l}{$>$50 training pairs} \\ Method & Acc & CEE & $\tau$ & Acc & CEE & $\tau$ & Acc & CEE & $\tau$ \\ \midrule %SVM & .70 & .58 & .31 & .63 & .66 & .31 \\ %Bi-LSTM & .73 & .55 & .21 & .64 & .64 & .21 \\ GPPL & .77 & .51 & .50 & .71 & \textbf{.56} & .31 & .72 & \textbf{.55} & .25 \\ % .71 & .56 & .32 \\ % crowdGPPL & \textbf{.79} & \textbf{.52} & \textbf{.53} & \textbf{.72} & .58 & \textbf{.33} & \textbf{.74} & \textbf{.55} & \textbf{.27} \\ %.71 & .59 & .32 \\ % now showing workers with > 40 training examples crowdGPPL$\backslash \bs t$ & - & - & - & .68 & .63 & .23 & \textbf{.74} & .57 & \textbf{.27} \\ crowdBT-GP & .75 & .53 & .45 & .69 & .58 & .30 & .71 & .56 & .23 \\ \bottomrule \end{tabular} %} \caption{UKPConvArgCrowdSample: predicting consensus, personal preferences for all workers, and personal preferences for workers with $>$50 pairs in the training set. %Runtimes on a 12 core, 2.6GHz CPU server were approximately 3 minutes for GPPL and crowdBT-GP, 20 minutes for crowdGPPL. } % wilcoxon signed-rank test: crowdGPPL vs. GPPL --> medi. p = \label{tab:convarg} \end{table} Table \ref{tab:convarg} shows that crowdGPPL outperforms both GPPL and crowdBT--GP at predicting both the consensus and personal preferences %pairwise labels %shown by classification accuracy and cross entropy error, %and the consensus ranking (significant for Kendall's $\tau$ with $p<0.05$, Wilcoxon signed-rank test), %shown by Kendall's $\tau$ rank correlation. suggesting that there is a benefit to modelling individual workers in subjective, crowdsourced tasks. %For the personal preference predictions, crowdGPPL also outperforms %GPPL. 
We also compare against crowdGPPL without the consensus (crowdGPPL$\backslash \bs t$) and find that including $\bs t$ in the model improves personalised predictions. This is likely because many workers have few training pairs, so the consensus helps to identify arguments that are commonly considered very poor or very convincing. Table \ref{tab:convarg} also shows that for workers with more than 50 pairs in the training set, accuracy and CEE improve for all methods but $\tau$ decreases, suggesting that some items may be ranked further away from their correct ranking for these workers. It is possible that workers who were willing to complete more annotations (on average 31 per fold) deviate further from the consensus, and crowdGPPL does not fully capture their preferences given the data available. \begin{figure}[ht] \centering \subfloat[Varying $M$]{ \label{fig:M} \includegraphics[clip=true,trim=9 0 55 0,width=.28\columnwidth]{num_inducing_32310_features} } \subfloat[Varying $P_i$ \hspace{0.5cm}]{ \label{fig:P_i} \includegraphics[clip=true,trim=60 2 9 0,width=.275\columnwidth]{P_i_32310_features} }\\ \subfloat[Varying $N$]{ \label{fig:Ntr} \includegraphics[clip=true,trim=9 0 10 0,width=.33\columnwidth]{num_arguments} } \subfloat[Varying $P$ ]{ \label{fig:Npairs} \includegraphics[clip=true,trim=26 0 10 0,width=.30\columnwidth]{num_pairs} } \caption{ Wall-clock times for training+prediction of consensus utilities for arguments in the training folds of UKPConvArgCrowdSample. CrowdGPPL was run with $C=5$. In (b), (c) and (d), $M=100$. Lines show means over 32 runs, bands indicate 1 standard deviation (mostly very little variation between folds). } \end{figure} We examine the scalability of our SVI method by evaluating GPPL and crowd-GPPL with different numbers of inducing points, $M$, and different mini-batch sizes, $P_i$. %Here, we fix $C=5$ and keep other model hyperparameters %and experimental setup the same as for Table \ref{tab:convarg}. Figure \ref{fig:M} shows the trade-off between runtime and training set accuracy as an effect of choosing $M$. Accuracy levels off as $M$ increases, while runtime continues to increase rapidly in a polynomial fashion. Using inducing points can therefore give a large improvement in runtimes with a fairly small performance hit. Figure \ref{fig:P_i} demonstrates that smaller batch sizes do not negatively affect the accuracy, although they increase runtimes as more iterations are required for convergence. The runtimes flatten out as $P_i$ increases, so we recommend choosing $P_i\geq 200$ but small enough to complete an iteration rapidly with the computational resources available. Figures \ref{fig:Ntr} and \ref{fig:Npairs} show runtimes as a function of the number of items in the training set, $N$, and the number of pairwise training labels, $P$, respectively (all other settings remain as in Figure \ref{fig:M}). In both cases, the increases to runtime are small, despite the growing dataset size. \subsection{Posterior Variance of Item Components} \label{sec:components} \begin{figure}[t] \centering \subfloat[\emph{UKPConvArgCrowdSample}]{ \includegraphics[trim=0 5 0 20,clip=true,width=.343\textwidth]{conv_factor_scales2} \hspace{1cm} } \subfloat[\emph{Sushi-A}]{ \hspace{1cm} \includegraphics[trim=10 0 0 17,clip=true,width=.33\textwidth]{sushi_factor_scales2} } \caption{ Latent component variances, $1/(s^{(v)}_c * s^{(w)}_c)$ in crowdGPPL, means over all runs. 
} \label{fig:latent_factor_variance} \end{figure} We investigate how many latent components were actively used by crowdGPPL on the \emph{UKPConvArgCrowdSample} and \emph{Sushi-A} datasets. Figure \ref{fig:latent_factor_variance} plots the posterior expectations of the inferred scales, $1/(s^{(v)}_c * s^{(w)}_c)$, for the latent item components. The plots show that many factors have a relatively small variance and therefore do not contribute to many of the model's predictions. This indicates that our Bayesian approach will only make use of components that are supported by the data, even if $C$ is larger than required.
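In practice this inspection can be scripted: given the posterior expectations of the inverse scales, the per-component variances and a count of effectively active components follow in a few lines. The sketch below is illustrative; the relative threshold and the variable names are our own choices rather than part of the model.
\begin{verbatim}
# Illustrative check of how many latent components are actively used, given
# posterior expectations of the inverse scales s^(v)_c and s^(w)_c.
import numpy as np

def active_components(s_v, s_w, rel_threshold=0.05):
    """Return the component variances 1/(s_v * s_w) and the indices of the
    components whose variance exceeds rel_threshold * max variance."""
    s_v, s_w = np.asarray(s_v, float), np.asarray(s_w, float)
    variances = 1.0 / (s_v * s_w)
    active = np.where(variances >= rel_threshold * variances.max())[0]
    return variances, active

# Example with made-up posterior scale estimates for C = 5 components:
var, idx = active_components([0.5, 2.0, 10.0, 40.0, 90.0],
                             [1.0, 1.0, 1.2, 1.1, 1.0])
print(var, idx)   # small variances mark components the model barely uses
\end{verbatim}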
% !TEX root = ../sethomas_thesis_main.tex \section{Validation using a Case Study: A Multi-Output SMA Mandrel}\label{sec:smacm-mandrel} \subsection{Motivation and Background} As previously stated, Shape Memory Alloys exhibit a relatively high volumetric work density. Thus, they are ideal candidates for creating lightweight actuators for applications where reducing the total weight of the system drastically improves the efficiency, such as drone deliveries. Here, any reduction in weight increases the total flight time of the drone, which makes this an ideal use case for lightweight SMA actuators. In the context of drone deliveries, grippers generally consist of an actuator, such as a motor, and a kinematic stage that converts the motion of the actuator into a gripping motion. In certain scenarios, the required gripping motion can be complex, consisting of multiple outputs and radial movements, such as the grippers implemented in the work by \cite{leeClosedStructureCompliantGripper2021} and in the work by \cite{singhCompactCompliantExternal2013}. In this case study, the goal is to apply the design methodology presented in this chapter to fabricate a drone-ready gripper with an advanced gripping mechanism. Using compliant mechanisms generated by topology optimization, an advanced kinematic stage can be designed to convert the motion of the SMA actuator into a multi-output gripping motion. Based on the design approach, the multi-output SMA gripper is sized and fabricated so as to validate the methodology. \subsection{Working Principle of the Gripper} The working principle of the proposed SMA gripper is based on the traditional SMA actuator presented in \cref{chap:sma-model} as adapted by the design methodology described in \cref{sec:bias-compliant-methodology}. Here, the gripper consists of a simple SMA coil that acts as the active element and an accompanying compliant structure that behaves as the kinematic stage and as the biasing element. The active SMA coil is pre-stretched by the biasing element at low temperatures and contracts when heated above its transition temperature. The biasing element, which also acts as a conversion mechanism to transform the linear actuation of the SMA coil into a gripper movement, exhibits an inherent stiffness because it is composed of flexure-based hinges. As the SMA cools down, a spring force acts on the coil due to the stiffness of the compliant mechanism. When heating the SMA, the contraction of the coil deforms the compliant biasing element, creating the desired gripping motion at the output. By controlling the temperature of the SMA coil, the entire system can be made to grip and release objects. In this application, the SMA coil is heated using Joule heating by applying a constant voltage across it. This simple solution makes use of the internal resistance of the SMA coil to exploit the Joule losses when passing a current through it, raising the temperature of the material. The coil is cooled passively by simple convection with the surrounding air. This standard solution allows for a simple control system that does not require any external heating system, which would add weight to the system. Furthermore, an H-bridge is used to supply the current and control the heating of the SMA coil. Finally, as precise control of the system is required, a Hall-effect sensor or any other low-profile position sensor can be used to detect the state of the gripper. \begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{0.9\textwidth}{!}{\input{images/chap3/gripper-working-principle.pgf}}
	% \resizebox{0.6\textwidth}{!}{\input{images/chap7/gripper-working-principle.pgf}}
	\caption{The sizing diagram of the biased-compliant SMA gripper, based on the proposed design methodology.}
	\label{fig:mandrel-gripperwp}
\end{figure}

The working principle can be further visualised using the simplified sizing graphs described in the work by \cite{spaggiariDesignEquationsBinary2013} and adapted to compliant systems in \cref{subsec:passive-biasing-compliant-mech}. The SMA coil within the gripper operates between the ambient temperature and a temperature above its transition temperature. This behaviour, as mentioned previously, can be simplified into two straight lines depending on its temperature. It is important to note that there exists a locus of lines representing the SMA at every temperature in between. Based on the methodology, the operating points of the gripper can be deduced from the intersection of the SMA curves with the characteristic of the designed biasing element. This characteristic will be described and validated in the following sections. The operating points are labelled in the figure as \circled{1} and \circled{2}. The maximum possible stroke of the gripper can be estimated by taking the difference between the x-values of the two operating points, $x_1-x_2$. The variable $x_\mathrm{off}$ represents the distance between the SMA and the biasing element before they are pre-stretched and attached to each other.

The operating points described above represent the behaviour of the gripper without a payload. When a payload is present, the behaviour of the gripper and the gripping force depend on the size of the payload, $x_\mathrm{Obj}$. The payload can be modelled as a perfectly rigid object with infinite stiffness. Thus, the intersection between the force-displacement curve of the payload and that of the high-temperature SMA determines the last operating point, \circled{3}. The gripping force, $F_\mathrm{Grip}$, experienced by the payload can be estimated from this last operating point, as shown in the figure. Thus, as established by the design methodology presented in \cref{chap:design-methodology}, the curves of the SMA and the compliant mechanism can be used to estimate the total stroke and gripping force of the system, represented by $x_1-x_2$ and $F_\mathrm{Grip}$ respectively.

\subsection{Designing the Integrated Biased Kinematic Stage}
The main goal of the design presented in this section is to create a multi-output gripper actuated by a single SMA coil. When designing grippers with complex outputs, such as the one shown in \cite{zhakypovOrigamiInspiredReconfigurableSuction2018}, multiple active elements are used. The goal of this design is to improve on these grippers by implementing an integrated system that produces a multi-output gripping motion while using only a single SMA coil as input. Here, the design is based on the 4-Point Mandrel topology presented in \cref{chap:compliant-sma}. However, here the structure is used as the passive element, as opposed to the active element proposed in the following chapter. Thus, in this gripper design, the 4-prong radial compliant mechanism is used as the passive biasing element and as the kinematic stage.
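To make the sizing construction of \cref{fig:mandrel-gripperwp} concrete, the short numerical sketch below finds the two operating points as intersections of assumed, linearised SMA characteristics with an assumed biasing-element characteristic, and estimates a gripping force for a perfectly rigid payload. It only illustrates the procedure described above: all curves, units and numerical values are placeholders and do not correspond to the prototype characterised later in this section.

\begin{verbatim}
# Illustrative sketch of the operating-point construction (cf. the sizing
# diagram above). All curves and numbers are assumed placeholders.
import numpy as np

x = np.linspace(0.0, 10.0, 2001)        # displacement coordinate [mm]

def f_bias(x):
    # assumed biasing-element characteristic (stiffening spring) [N]
    return 0.4 * x + 0.02 * x**2

def f_sma(x, hot):
    # assumed linearised SMA curves on the same axis; hot lies above cold
    return (6.0 if hot else 2.5) - 0.5 * x

def intersection(x, f1, f2):
    # first sign change of f1 - f2 locates the operating point
    d = f1 - f2
    i = np.argmax(np.sign(d[:-1]) != np.sign(d[1:]))
    return x[i]

x1 = intersection(x, f_sma(x, hot=False), f_bias(x))   # point 1 (cold)
x2 = intersection(x, f_sma(x, hot=True), f_bias(x))    # point 2 (hot)
stroke = abs(x1 - x2)                                  # free stroke |x1 - x2|

x_obj = 0.5 * (x1 + x2)                 # assumed rigid-payload position
# one possible reading of point 3: net force pressed into the rigid object,
# i.e. hot SMA force minus the biasing reaction at x_obj (assumption)
f_grip = f_sma(x_obj, hot=True) - f_bias(x_obj)

print(f"x1 = {x1:.2f} mm, x2 = {x2:.2f} mm, stroke = {stroke:.2f} mm")
print(f"estimated F_grip at x_obj = {x_obj:.2f} mm: {f_grip:.2f} N")
\end{verbatim}

In the actual sizing, the measured SMA curves and the validated analytical characteristic of the compliant mechanism derived below take the place of these placeholder functions.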
Furthermore, the compliant mechanism is redesigned using flexure-based hinges so as to improve the stroke of the gripper system. Here, as the topology is limited to the 2D plane, the flexural mechanism is further improved by extending the design space to a 2.5D design, which involves stacking 2D structures along the third dimension.

\subsubsection{Articulated Parallelogram Core and Analytical Geometry Approach}
The algorithmically generated mandrel topology consists of an articulated symmetrical parallelogram (rhombus). The topology can be reduced to four identical triangles joined at one vertex by a pivot, as shown in \cref{fig:mandrel-kinematic}(a). The kinematics of the design are adapted from the generated topology by using hinges instead of distributed deformation. In the figure, the blue arrows represent the input while the black arrows represent the output. The design has been generated such that the SMA coils can be placed at the inputs so as to generate a radial output towards the centre when actuated.

\begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{\textwidth}{!}{\input{images/chap7/mandrel-kinematic-stage.tex}}
	\caption[Kinematic diagram of proposed design]{Kinematic diagram of the proposed design, where the black dots represent ideal pivots, and the blue and black arrows represent the input and output displacements, respectively. On the left: the outward-triangle configuration with a) the initial position and b) the displaced one. On the right-hand side: the inward-triangle configuration achieving a stroke amplification, with c) the initial position and d) the displaced one. e) represents the mandrel topology generated with 2D topology optimization, which behaves similarly to the outward version. f) shows the 2.5D adaptation of the inward version into a flexure-based mechanism distributed over multiple layers, and with a reversed actuation direction.}
	\label{fig:mandrel-kinematic}
\end{figure}

When examining the concept behind the mandrel mechanism, one can conclude that the driving structure consists of four right-angled isosceles triangles, which represent the four claws of the mandrel. Furthermore, the pivots, being constrained along the horizontal and vertical axes due to symmetry, force the outputs to move along a $45^{\circ}$ path. However, upon careful examination, these constraints can be satisfied in two distinct configurations, as shown in \cref{fig:mandrel-kinematic}(a) and (c). The four triangles can be positioned inwardly or outwardly. Because the triangles of the inward-facing configuration overlap, this topology cannot be generated from a 2D design space. The main advantage of the inward-facing configuration compared to the outward-facing one is the amplification of the output-vertex stroke with respect to the contraction stroke of the SMA coil. It is also important to note that the direction of the radial output depends on the direction of the input displacement. Thus, the gripper can be made to open or to close when the SMA coil is heated. By attaching the SMA coil to the vertical pivots rather than the horizontal ones, the gripper can be made either normally open or normally closed.

\subsubsection{Determining the Relationship between the Hinges and the SMA Stroke}
The stroke amplification, $\gamma$, of the system is not uniform and depends on the position of the mechanism, $x$.
This amplification factor can be described by deriving the output vertex position, $R$, with respect to the input vertex position, $x$, and can be expressed analytically as:
\begin{equation}
	\gamma(x) = \frac{\partial R(x)}{\partial x} = \frac{1}{2\sqrt{2}}\left(1-\text{sign}(\alpha)\frac{\frac{x}{2}}{\sqrt{L_{h}^2-\left(\frac{x}{2}\right)^2}} \right)
	\label{eq:1}
\end{equation}
where,
\begin{equation}
	\begin{split}
		R(x) &= L_{h}\cos\left(\theta(x) - \alpha\right)\\
		&= \frac{1}{\sqrt{2}} \left(\frac{x}{2} +\text{sign}(\alpha) \sqrt{L_{h}^2-\left(\frac{x}{2}\right)^2}\right),
		\label{eq:7}
	\end{split}
\end{equation}
with,
\begin{equation}
	\theta(x) = \arccos{\left(\frac{x}{2L_{h}}\right)} \quad \text{and} \quad \alpha=\pm\frac{\pi}{4}.
	\label{eq:theta}
\end{equation}
Here, $L_h$ represents the length of the hypotenuse of each triangle, $\theta$ represents the angle between the horizontal and the hypotenuse, and $\alpha$ represents the angle between the hypotenuse and the side of the triangle, taking a value of $\pm\frac{\pi}{4}$. This angle is defined as positive for the outward-facing configuration and as negative for the inward-facing one. Here, the output stroke depends strongly on the sign of the angle $\alpha$. The outward configuration has a stroke amplification of less than one for almost all possible inputs, implying a stroke reduction. The inward configuration, on the other hand, has a stroke amplification larger than one. In the context of a gripper, the inward version was chosen for its larger output stroke.

\subsubsection{Simple Kinematics to a Compliant Mechanism}
The kinematic schematic is composed of only simple hinges and rigid links. This makes it an ideal candidate for implementation using flexure-based hinges. In this case, as shown in \cref{fig:mech_stages}, truncated semi-circular flexure hinges were selected for their ability to avoid high localised stress concentrations and to allow for an acceptable angular stroke, as described in the work by \cite{torresmelgarejoModelingBendingStiffness2018}.

\begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{0.9\textwidth}{!}{\input{images/chap7/mandrel-mech-stages.tex}}
	\caption{Different views of the flexure-based compliant mechanism spread over multiple stages, and the parametrized flexure pivot.}
	\label{fig:mech_stages}
\end{figure}

As stated earlier, the inward-facing triangle configuration was chosen; because its triangles overlap during actuation, it cannot be implemented in a 2D design space. Thus, as shown in \cref{fig:mech_stages}, a 2.5D design approach was implemented, where each overlapping triangle is stacked in the 3\textsuperscript{rd} dimension. The mechanism is distributed among different superimposed layers linked at the vertices of the triangles to replicate the kinematic schematic. This adaptation to a 2.5D design does not impact the functionality of the mechanism as long as the hinges are considered infinitely rigid when bending in any direction other than the desired one. Here, two layers are implemented to accommodate the four triangles: the $L_1$ stage comprising the green and orange triangles, and the $L_2$ stage comprising the pink and blue triangles. In the case of the ideal kinematic schematic, the triangles are attached at a single point to form a parallelogram. However, this is difficult to implement with a flexure-based solution because the rigid links have a non-zero width.
This adds unwanted links and hinges to the kinematic chain, which results in undesired Degrees of Freedom (DoF). This parasitic DoF is overcome by adding a third, DoF-inhibiting stage, $L_3$, to the 2.5D design, as shown in \cref{fig:mech_stages}. As this gripper is intended for drone delivery purposes, another stage, which behaves as the frame of the drone, can be added to the design. For the mechanism to behave as intended in the kinematic schematic, the left and right input vertices must be constrained to move along a single line. In this case, for simplicity, this constraint has been implemented using a rail. For a future drone-mounted setup, an additional stage can be added to enforce this constraint, showing the advantage of the 2.5D design approach.

\subsubsection{Sizing of the Biasing Compliant Mechanism}
Based on the proposed design methodology, the goal of the mechanism is to create a kinematic stage that also behaves as the passive biasing element for the SMA coil. The inherent stiffness of the overall compliant mechanism, due to the flexural hinges, is harnessed to pre-stretch the SMA for activation. Thus, in order to size the actuator and the corresponding biasing element, an analytical model of the stiffness must be developed. Using the pseudo-rigid model, as presented in the work by \cite{heneinConceptionStructuresArticulees2005}, the flexural hinges can be considered as torsional springs with a constant angular stiffness, $K_\theta$. As detailed in \cref{subsec:passive-biasing-compliant-mech}, the stiffness model of the compliant structure requires an expression for the relationship between the angular position of the flexural hinges and the contraction of the SMA, $\theta(\Delta x)$. In this case, this relationship can be expressed as:
\begin{equation}\label{eq:mandrel-theta-model}
	\theta(\Delta x) = \arccos{\left(\frac{\Delta x}{2L_{h}} + \cos(\theta_0)\right)},
\end{equation}
with $\theta_0$ being the resting angle at which the mechanism is printed/fabricated, as shown in \cref{fig:mandrel-kinematic}(f).

As detailed in the methodology, the next step is to calculate the potential energy stored in the system during deformation. Due to the symmetry of the mechanism, all the hinges store the same potential energy. Here, the $L_3$ stage has 8 hinges while the stages $L_1$ and $L_2$ have 4 hinges each. Thus, the total potential energy of the whole mechanism is:
\begin{equation}\label{eq:mandrel-potential-energy-model}
	U_\mathrm{tot}(\Delta x) = 16\cdot U(\Delta x)
\end{equation}
The force-displacement characteristic of the mechanism is then given by:
\begin{equation}\label{eq:mandrel-force-model}
	\begin{split}
		F(\Delta x) &= \lVert -\nabla{U_\text{tot}(\Delta x)}\rVert = \frac{\partial U_\text{tot}(\Delta x)}{\partial \Delta x}\\
		&= \frac{8K_{\theta}}{L_{h}} \frac{(\theta(\Delta x)-\theta_{0})}{\sin(\theta(\Delta x))}.
	\end{split}
\end{equation}
The resulting characteristic is plotted in \cref{fig:mandrel-am-expt-compare}. Furthermore, the model is validated against experimental results obtained with a pull-tester. As the experimental results follow the model closely, they validate the working hypotheses and approximations used in the definition of the analytical model.

\begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{0.9\textwidth}{!}{\input{images/chap3/am-expt-compare.pgf}}
	\caption[Validation of the analytical model of the biasing compliant mechanism using experimental results obtained using a pull-tester]{Validation of the analytical model of the biasing compliant mechanism using experimental results obtained with a pull-tester. Here, the small discrepancy arises from the creep present in the plastic structure as the mechanism returns to its original shape.}
	\label{fig:mandrel-am-expt-compare}
\end{figure}

As the force characteristic flattens, unlike that of a traditional linear spring, the operating points of the actuator offer higher strokes for a given SMA coil. Furthermore, as the curve also decreases in value near the low-temperature operating point \circled{1}, this implies that the gripper can exert higher gripping forces on objects with a larger diameter. This shows that the force-displacement characteristic of the proposed compliant mechanism is well suited as a biasing element in an SMA actuator. Finally, some plastic deformation can be observed in the figure, as the mechanism was deformed beyond its admissible range during the test. The flexural hinges exhibit a limited angular stroke, which is one of the limitations of such a design. However, the maximal admissible angle before permanent plastic deformation can be estimated using the work by \cite{heneinConceptionStructuresArticulees2005} as:
\begin{equation}\label{eq:adm-theta-theory}
	\Delta\theta_\text{adm} \cong \frac{3\pi\sigma_\text{adm}\sqrt{r}}{4E\sqrt{e}},
\end{equation}
which in this context can be expressed as:
\begin{equation}
	\begin{split}
		\Delta x_{\text{adm},+} = 2L_{h}\left(\cos(\theta_0+\Delta\theta_\text{adm})-\cos(\theta_0)\right)\\
		\Delta x_{\text{adm},-} = 2L_{h}\left(\cos(\theta_0-\Delta\theta_\text{adm}) - \cos(\theta_0)\right)
		\label{eq:adm}
	\end{split}
\end{equation}
Let $\mathbb{X}_\text{eq} = \{ x \;|\; x_{2}\leqslant x \leqslant x_{1}\}$ be the set of possible operating points (see \cref{fig:mandrel-gripperwp}) and $\mathbb{X}_\text{el} = \{ x \; | \; x_\text{off} + \Delta x_{\text{adm},+} \leqslant x \leqslant x_\text{off} + \Delta x_{\text{adm},-} \}$ be the set of admissible positions for elastic deformation, as portrayed in \cref{eq:adm} and \cref{fig:mandrel-gripperwp}. Then, based on the design methodology presented in \cref{chap:design-methodology}, the set of feasible operating points for the actuator, $\mathbb{X}_\text{f}$, is given by their intersection:
\begin{equation}
	\mathbb{X}_\text{f} = \mathbb{X}_\text{eq} \cap \mathbb{X}_\text{el}.
\end{equation}
One of the goals of the sizing methodology for the compliant mechanism is to maximise the output stroke while avoiding permanent deformation during operation. Thus, the compliant mechanism must be sized such that $\mathbb{X}_\text{eq} \subseteq \mathbb{X}_\text{el}$. This eliminates the need for precise temperature and position control of the SMA coil during activation to protect the flexural hinges. In this case, the output stroke of the gripper can be maximised by maximising the size of $\mathbb{X}_\text{f}$ or by locating the operating points at positions where the stroke amplification of \cref{eq:1} is high. A trade-off exists, as displacing $\mathbb{X}_\text{f}$ towards higher stroke amplification tends to reduce its overall size.
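As a quick numerical sanity check on these expressions, the sketch below evaluates the inward-configuration amplification of \cref{eq:1} and the force model of \cref{eq:mandrel-force-model}. The hinge stiffness $K_\theta$ and the displacement range are placeholder values chosen purely for illustration; they are not identified properties of the fabricated hinges.

\begin{verbatim}
# Illustrative evaluation of the analytical expressions above; K_theta and
# the displacement range are assumed placeholders, not identified hinge data.
import numpy as np

L_h     = 42.4          # hypotenuse length [mm] (prototype value quoted later)
theta_0 = np.pi / 8     # as-printed resting angle [rad]
K_theta = 5.0           # assumed hinge angular stiffness [N*mm/rad]

def gamma_inward(x):
    # stroke amplification (eq. 1) with sign(alpha) = -1 (inward triangles)
    return (1.0 / (2.0 * np.sqrt(2.0))) * (1.0 + (x / 2.0) / np.sqrt(L_h**2 - (x / 2.0)**2))

def theta(dx):
    # hinge angle as a function of the input displacement
    return np.arccos(dx / (2.0 * L_h) + np.cos(theta_0))

def force(dx):
    # force-displacement characteristic; sign follows the thesis convention
    th = theta(dx)
    return (8.0 * K_theta / L_h) * (th - theta_0) / np.sin(th)

x_rest = 2.0 * L_h * np.cos(theta_0)     # input-vertex position at rest
print(f"gamma at rest: {gamma_inward(x_rest):.2f}  (> 1, i.e. amplification)")

for dx in np.linspace(-6.0, 6.0, 7):     # stays inside the arccos domain
    print(f"dx = {dx:+5.1f} mm   F = {force(dx):+7.3f} N")
\end{verbatim}

The evaluation reproduces the two qualitative features used in the sizing argument: an amplification larger than one around the as-printed angle, and a markedly non-linear force characteristic as the hinges rotate.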
Based on \cref{tab:sma-pt-tradeoffs}, various additional trade-offs for the proposed mandrel can be observed, as summarised in \cref{tab:mandrel-tradeoffs}.

\begin{table}[hbt!]
	\centering
	\caption[Additional trade-offs observed when sizing the flexural compliant mechanism actuated by a given SMA wire or coil]{Additional trade-offs observed when sizing the flexural compliant mechanism actuated by a given SMA wire or coil. Here, $S_{\text{mech}}$ and $t_{\text{mech}}$ are the surface area and the thickness of the entire compliant mechanism, respectively.}
	\label{tab:mandrel-tradeoffs}
	\input{images/chap7/mandrel-tradeoffs.tex}
\end{table}

As seen in the table, an optimal solution can be estimated based on the design parameters. However, in the case of $L_h$ and $b$, increasing them always results in an increased stroke, but it also increases the overall size of the mechanism. This shows that there exists a trade-off between the weight/size and the output stroke, which must be settled based on the gripper specifications.

\subsection{Implementation and Experimental Results}
One of the main advantages of this integrated design is the simplicity of fabrication and assembly. The entire compliant structure is printed from Nylon (PA 2200 Polyamide 12) using selective laser sintering (SLS), as shown in \cref{fig:mandrel-finalproto}. As the mechanism consists of stacked layers, the entire structure was printed as a single piece, validating the simplicity of the novel design approach.

\begin{figure}[hbt!]
	\centering
	\resizebox{0.9\textwidth}{!}{\input{images/chap7/mandrel-final-proto.tex}}
	\caption[The working prototype of the biased-compliant SMA gripper]{The working prototype of the biased-compliant SMA gripper: (a) opened configuration with a $0.2$~mm wire diameter SMA coil (framed in red), (b) closed configuration grasping an object.}
	\label{fig:mandrel-finalproto}
\end{figure}

In \cref{fig:mandrel-stroke-estimation}, the validated analytical models are plotted against the force-displacement curves of the SMA coil. This SMA model was estimated using an experimental setup in which the SMA coil was maintained at a constant temperature using PID control and a thermal camera, and was then tested with the pull-tester. From the experimental data, simplified linear models of the cold and hot SMA were determined using linear regression. Using these analytical models and the sizing methodology detailed in \cref{subsec:passive-biasing-compliant-mech}, the stroke of the final mandrel gripper can be estimated. In this prototype, the design parameters used are $e=0.5$~mm, $r=15$~mm, $b=4$~mm, $L_h=42.4$~mm, $x_\text{off}=27.5$~mm and $\theta_0 = \frac{\pi}{8}$. Here, an estimated linear output stroke of up to $4.5$~mm is obtained for each claw.

\begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{0.7\textwidth}{!}{\input{images/chap7/mandrel-stroke-estimation.pgf}}
	\caption[The sizing diagram of the SMA mandrel based on the developed analytical model of the compliant mechanism and the models of the SMA coils obtained from experimental results]{The sizing diagram of the SMA mandrel based on the developed analytical model of the compliant mechanism and the models of the SMA coils obtained from experimental results. Here, based on the estimations, a maximum stroke of $4.5$~mm is observed.}
	\label{fig:mandrel-stroke-estimation}
\end{figure}

The gripping force, however, depends on the size of the gripped object.
For an object of diameter close to $x_1$, the gripping force will be maximal. Using a pair of load cells attached to two opposing claws, the gripping force was measured as shown in \cref{fig:mandrel-forcesetup}. The load cells were placed at different distances from the claws to simulate objects of varying sizes. The results of the gripping force measurements can be seen in \cref{fig:mandrel-force-temp}. The gripper shows a near-constant force over a large span of SMA temperatures above its transition temperature of 80\degreeC. This constant-force behaviour is ideal for a gripper and greatly simplifies the control, preventing any unintended damage to the gripped object. A maximum steady-state force of $1.78$~N was measured for the biggest payload size while using the smallest available SMA coil, whose wire diameter is $0.2$~mm. While this result is promising, it should be noted that the fabricated prototype is sub-optimal and can be further optimised for greater forces, either by optimising the compliant mechanism or by using thicker SMA coils. Increasing the wire diameter of the SMA coil yields higher gripping forces, but it comes at the cost of a slower cooling time, i.e., an increased delay between the opening and closing sequences of the gripper.

\begin{figure}[hbt!]
	\centering
	\resizebox{\textwidth}{!}{\input{images/chap3/mandrel-forcesetup.tex}}
	\caption{The experimental setup, using a pair of force sensors, to measure the gripping force of two opposing jaws.}
	\label{fig:mandrel-forcesetup}
\end{figure}

\begin{figure}[hbt!]
	% t for top of the page, H could be put to impose the position of the float
	\centering
	\resizebox{0.9\textwidth}{!}{\input{images/chap3/force-temp-filt.pgf}}
	\caption[Results of the gripping force performed at different object diameters]{Results of the gripping-force measurements performed at different object diameters. The temperature and force output were recorded in real time using a thermal imaging camera and a force sensor, respectively.}
	\label{fig:mandrel-force-temp}
\end{figure}

In the end, the final prototype, as seen in \cref{fig:mandrel-finalproto}, was measured to weigh only $17$~g, which implies that the gripper has a maximum force density of around $105$~N/kg. This shows that the proposed mandrel gripper is well suited for lightweight applications such as drone deliveries. Furthermore, this demonstrates the highly integrated nature of the gripper, thus validating the design methodology presented in \cref{chap:design-methodology}.
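For reference, the quoted force density follows directly from the measured maximum gripping force and the mass of the prototype:
\[
	\frac{F_\mathrm{Grip,max}}{m} = \frac{1.78~\text{N}}{0.017~\text{kg}} \approx 105~\text{N/kg}.
\]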
{ "alphanum_fraction": 0.7913934426, "avg_line_length": 163.6585365854, "ext": "tex", "hexsha": "f0252b6b272a324d240ac35ff448625ca12992ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5cc1b082be09da01e7545b7da93d1b113edc77b6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "seanthomas0409/sethomas_EPFL_thesis", "max_forks_repo_path": "main/ch3_1_case_study_mandrel.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5cc1b082be09da01e7545b7da93d1b113edc77b6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "seanthomas0409/sethomas_EPFL_thesis", "max_issues_repo_path": "main/ch3_1_case_study_mandrel.tex", "max_line_length": 2140, "max_stars_count": null, "max_stars_repo_head_hexsha": "5cc1b082be09da01e7545b7da93d1b113edc77b6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "seanthomas0409/sethomas_EPFL_thesis", "max_stars_repo_path": "main/ch3_1_case_study_mandrel.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6342, "size": 26840 }
\section{Search for EW-$ZZjj$}
\subsection{MD discriminant}
To further separate the EW-$ZZjj$ component from QCD-$ZZjj$, an MD based on the Gradient Boosted Decision Tree (BDT) algorithm~\cite{Coadou_BDT} is trained with simulated events via the TMVA framework~\cite{Speckmayer_2010}. The training is performed to separate the EW (signal) from the QCD (background) process. Twelve event kinematic variables sensitive to the characteristics of the EW signal are used as input features in the training. Table~\ref{tab:bdt_features} lists these input variables in the order of their importance in the BDT response, as provided by the TMVA tool. One can see that the jet-related information provides the largest sensitivity. The MD distributions in both the SR and the QCD CR are then used in the statistical fit.

\begin{table}[h]
\begin{center}
\renewcommand\arraystretch{1.8}
\begin{spacing}{0.8}
\begin{tabular}{p{1cm}|p{2cm}|p{8cm}}
\hline
\hline
Rank & Variable & Description \\
\hline
1 & \mjj & Dijet invariant mass \\
\hline
2 & $p_{T}^{j1}$ & \pT of the leading jet \\
\hline
3 & $p_{T}^{j2}$ & \pT of the sub-leading jet \\
\hline
4 & $\frac{p_{T}(ZZjj)}{H_{T}(ZZjj)}$ & \pT of the $ZZjj$ system divided by the scalar \pT sum of the Z bosons and the two jets \\
\hline
5 & $y_{j1} \times y_{j2}$ & Product of the jet rapidities \\
\hline
6 & \dyjj & Rapidity difference between the two jets \\
\hline
7 & $Y_{Z2}^{*}$ & Rapidity of the second Z boson \\
\hline
8 & $Y_{Z1}^{*}$ & Rapidity of the Z boson reconstructed from the lepton pair with the mass closer to the Z boson mass \\
\hline
9 & $p_{T}^{ZZ}$ & \pT of the 4l system \\
\hline
10 & $m_{ZZ}$ & Invariant mass of the 4l system \\
\hline
11 & $p_{T}^{Z1}$ & \pT of the Z boson reconstructed from the lepton pair with the mass closer to the Z boson mass \\
\hline
12 & $p_{T}^{\ell 3}$ & \pT of the third lepton \\
\hline
\hline
\end{tabular}
\end{spacing}
\caption{Input features for the training of the MD.}
\label{tab:bdt_features}
\end{center}
\end{table}

\iffalse
\subsection{Statistical procedure}
To examine the compatibility between the data and the signal-plus-background hypothesis, a test statistic based on the profile likelihood ratio method is used.
%The likelihood function is the product of all the Poisson probability density functions built in individual MD bins given as:
The binned likelihood function is given as
\begin{equation}
\mathcal{L}(\mu,\theta) = \prod_{i}^\mathrm{bins} \mathcal{L}_{\mathrm{poiss}}(N_{\mathrm{data}}\,|\,\mu s(\theta)+b(\theta))_{i} \times \mathcal{L}_{\text{gauss}}(\theta)_{i}
\end{equation}
where the Poisson term represents the statistical fluctuations of the data and the Gaussian term models the pdf of the auxiliary measurements used to constrain the systematics. $\mu$ denotes the signal strength of the EW-$ZZjj$ process, computed as the ratio of the measured (expected) cross section to the SM prediction. $\theta$ denotes the nuisance parameters, i.e. the set of parameters that parameterize the effects of the systematic uncertainties described in section~\ref{sec:systematics}, following Gaussian distributions. $N_{\mathrm{data}}$ is the number of selected data events, while $s(\theta)$ and $b(\theta)$ are the expected signal and background yields as functions of the nuisance parameters.
The test statistic $q_{\mu}$ is defined as:
\begin{equation}
q_\mu = -2 \ln \left( \dfrac{\mathcal{L}(\mu,\hat{\hat{\theta}}_{\mu})}{\mathcal{L}(\hat{\mu},\hat{\theta})} \right)
\end{equation}
in which $\mathcal{L}(\hat{\mu},\hat{\theta})$ is the unconditional likelihood, maximised with respect to both $\mu$ and $\theta$, and $\mathcal{L}(\mu,\hat{\hat{\theta}}_{\mu})$ is the conditional likelihood, maximised for a fixed $\mu$. Signal-like data distributions are more likely to have a low test statistic ($q_\mu$ close to 0), while background-like data yield a larger $q_\mu$. Under the background-only hypothesis, the compatibility of the observed (Asimov) data with the prediction is calculated to obtain the observed (expected) significance, respectively.
\fi

\subsection{Fitting procedure}
A profile likelihood fit, as described in chapter~\ref{sec:statisticalfit}, is performed on the MD discriminant to extract the EW-$ZZjj$ signal from the backgrounds.
%The fit combined the observed and expected measurements from both \llll and \llvv channel from EW-ZZ production to gain more statistic.
The binning of the MD distributions in the SR is optimized to maximize the sensitivity to the EW signal. The normalization of the QCD-$ZZjj$ production ($\mu_{QCD}^{llll}$) in the \llll channel is determined from data via a simultaneous fit in the SR and the QCD CR, as described in section~\ref{sec:background}. The signal strength of the EW-$ZZjj$ production ($\mu_{EW}$) is taken as the parameter of interest and floated in the fit. The effects of the normalization and shape uncertainties of the background processes on the MD distribution, described previously in section~\ref{sec:systematics}, are all taken into account. In most cases, a common nuisance parameter is used for each source of systematic uncertainty in all bins and all categories. The statistical uncertainties of the simulated samples are uncorrelated among the bins, and the background uncertainties are applied only to their corresponding backgrounds.
%For combination between two channels, the theoretical uncertainties between \llll and \llvv are uncorrelated due to different fiducial volumes definition.
Furthermore, to be more conservative, the generator modelling uncertainty for the QCD-$ZZjj$ production mentioned in section~\ref{sec:systematics} is split into two nuisance parameters, for the low and high MD regions.

\subsection{Result of fit}
The statistical fit is performed both in the individual \llll channel and in the combination of the \llll and \llvv channels to gain more statistics. The results of the statistical fit in the \llll final state and in the combined channel are presented in table~\ref{tab:fit_result}. The \llvv analysis is not described in this dissertation; more details can be found in~\cite{ATLAS:2019vrv}. To derive the expected results, the observed data are used in the QCD CR to extract the normalization factor of the QCD component ($\mu_{QCD}^{\llll}$), while in the SR, Asimov data built from the background prediction and the signal model with the SM cross section are used.

\begin{table}[!htbp]
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
 & $\mu_{\mathrm{EW}}$ & $\mu^{\llll}_{\mathrm{QCD}}$ & Significance Obs. (Exp.) \\
\hline
\llll & $1.54 \pm 0.42$ & $0.95 \pm 0.22$ & 5.48 (3.90) $\sigma$ \\
\hline
Combination of \llll and \llvv & $1.35 \pm 0.34$ & $0.96 \pm 0.22$ & 5.52 (4.30) $\sigma$ \\
\hline
\end{tabular}
\end{center}
\caption{Observed \muEW and \muQCD, as well as the observed and expected significance, from the individual \llll channel and the combined fit.
The full set of systematic uncertainties is included.}
\label{tab:fit_result}
\end{table}

In conclusion, in the \llll channel the background-only hypothesis is rejected at 5.5$\sigma$ (3.9$\sigma$) for the observed (expected) data, which leads to the observation of EW-$ZZjj$ production. Figure~\ref{fig:fit_MD} shows the post-fit MD distributions for \llll events after performing the combined fit, in the SR (left) and the QCD CR (right). The EW-$ZZjj$ cross section measured in the \llll channel is extracted to be $0.94 \pm 0.26$~fb.

\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.42\textwidth]{figures/VBSZZ/fit/BDT_4l_SR_postFit.pdf}
\includegraphics[width=0.42\textwidth]{figures/VBSZZ/fit/BDT_4l_QCD_CR_postFit.pdf}
\end{center}
\caption{Observed and post-fit expected multivariate discriminant distributions after the statistical fit in the \llll SR (left) and QCD CR (right). The error bands include the experimental and theoretical uncertainties, as well as the uncertainties in \muEW and \muQCD. The error bars on the data points show the statistical uncertainty of the data.}
\label{fig:fit_MD}
\end{figure}

Figure~\ref{fig:scaled_mjj} shows the $\mjj$ distributions in the SR (left) and the QCD CR (right), where the normalizations of the EW and QCD processes are scaled according to their observed values in table~\ref{tab:fit_result}. This figure shows that the high-$\mjj$ region is more sensitive to the detection of EW-$ZZjj$ events. Figure~\ref{fig:scaled_mzz} shows the invariant mass spectrum of the \llll system ($\mzz$) in the SR, also with the normalizations of the EW and QCD processes scaled.

\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.32\textwidth]{figures/VBSZZ/fit/MJJ_4l_SR.pdf}
\includegraphics[width=0.32\textwidth]{figures/VBSZZ/fit/MJJ_4l_QCD_CR.pdf}
\end{center}
\caption{Observed and post-fit expected \mjj distributions in the SR (left) and QCD CR (right). The error bands include the expected experimental and theoretical uncertainties. The error bars on the data points show the statistical uncertainty. The contributions from the QCD and EW production of $ZZjj$ events are scaled by 0.96 and 1.35, respectively, corresponding to the observed normalization factors in the statistical fit. The last bin includes the overflow events.}
\label{fig:scaled_mjj}
\end{figure}

\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{figures/VBSZZ/fit/MZZ_4l_SR.pdf}
\end{center}
\caption{Observed and post-fit expected $\mzz$ spectrum in the SR. The error bands include the expected experimental and theoretical uncertainties. The error bars on the data points show the statistical uncertainty. The contributions from the QCD and EW production of $ZZjj$ events are scaled by 0.96 and 1.35, respectively, corresponding to the observed normalization factors in the statistical fit. The last bin includes the overflow events.}
\label{fig:scaled_mzz}
\end{figure}

Figure~\ref{fig:event_display} shows the display of an EW-$ZZjj$ candidate event in the $2e2\mu$ final state, with two jets in the forward and backward regions.

\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/VBSZZ/fit/resize_340368_454611985_v3.pdf}
\end{center}
\caption{Display of an EW-$ZZjj$ candidate event in the $2e2\mu$ channel in the last MD bin ($0.875 < \mathrm{MD} < 1.0$). The invariant mass of the di-jet (four-lepton) system is 2228 (605)~\gev.}
\label{fig:event_display}
\end{figure}
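To illustrate the fit machinery in a self-contained way, the sketch below sets up a binned Poisson likelihood over an MD-like distribution, fits the signal strength, and converts the profile-likelihood ratio into an asymptotic significance. All yields are invented placeholders and nuisance parameters are omitted, so this is only a schematic of the procedure used in the analysis, not a reproduction of it.

\begin{verbatim}
# Schematic of the binned profile-likelihood fit and asymptotic significance.
# Yields are made-up placeholders; systematic (nuisance) parameters omitted.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

s = np.array([0.5, 1.0, 2.0, 4.0, 6.0])   # assumed EW-ZZjj yields per MD bin
b = np.array([9.0, 7.0, 5.0, 3.0, 1.5])   # assumed background yields per bin
n = np.array([10, 9, 8, 8, 9])            # assumed observed counts

def nll(mu):
    lam = mu * s + b                       # expected yield per bin
    return np.sum(lam - n * np.log(lam))   # -log L up to a constant

res = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
mu_hat = res.x                             # fitted signal strength

q0 = 2.0 * (nll(0.0) - nll(mu_hat))        # discovery test statistic
z = np.sqrt(max(q0, 0.0))                  # asymptotic significance
p = norm.sf(z)                             # one-sided p-value

print(f"fitted signal strength mu = {mu_hat:.2f}")
print(f"q0 = {q0:.2f}, significance Z = {z:.2f} sigma (p = {p:.2e})")
\end{verbatim}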
{ "alphanum_fraction": 0.7287642978, "avg_line_length": 60.2598870056, "ext": "tex", "hexsha": "305c7900b9416faa961a5d97a4f8c0e956ba056a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "zhuhel/PhDthesis", "max_forks_repo_path": "chapters/VBSZZ/fit.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "zhuhel/PhDthesis", "max_issues_repo_path": "chapters/VBSZZ/fit.tex", "max_line_length": 209, "max_stars_count": null, "max_stars_repo_head_hexsha": "55ec32affb5c105143798989d78043467c88da8e", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "zhuhel/PhDthesis", "max_stars_repo_path": "chapters/VBSZZ/fit.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2879, "size": 10666 }
\chapter{Parametric and Line Spectra} \input{report/parametric-and-line-spectra/correlation-estimation/index.tex} \input{report/parametric-and-line-spectra/spectrum-of-autoregressive-processes/index.tex} \input{report/parametric-and-line-spectra/principal-component-analysis/index.tex} \input{report/parametric-and-line-spectra/real-world-signals_respiratory-sinus-arrhythmia-from-RR-Intervals/index.tex}
{ "alphanum_fraction": 0.8357843137, "avg_line_length": 45.3333333333, "ext": "tex", "hexsha": "e859f5bd020de67f258d6769ab00ac30b635d45d", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2021-02-12T18:26:18.000Z", "max_forks_repo_forks_event_min_datetime": "2018-07-17T08:32:24.000Z", "max_forks_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "AmjadHisham/ASPMI", "max_forks_repo_path": "tex/report/parametric-and-line-spectra/index.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "AmjadHisham/ASPMI", "max_issues_repo_path": "tex/report/parametric-and-line-spectra/index.tex", "max_line_length": 118, "max_stars_count": 8, "max_stars_repo_head_hexsha": "9d985f50787f0b9a3ccf1c6537c0cb6b0d9d8cce", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "filangel/ASPMI", "max_stars_repo_path": "tex/report/parametric-and-line-spectra/index.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-13T21:13:02.000Z", "max_stars_repo_stars_event_min_datetime": "2019-02-20T14:43:34.000Z", "num_tokens": 108, "size": 408 }
\chapter{ODE group}
{ "alphanum_fraction": 0.7894736842, "avg_line_length": 19, "ext": "tex", "hexsha": "b738f4b236852fb47a7bfb4504d9d2783b7fc17e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "58a0cb36bbf3dc4333860be666fcfe18f8aada14", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wangkuner/mechanics", "max_forks_repo_path": "OrdinaryDifferentialEquation/Group.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "58a0cb36bbf3dc4333860be666fcfe18f8aada14", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wangkuner/mechanics", "max_issues_repo_path": "OrdinaryDifferentialEquation/Group.tex", "max_line_length": 19, "max_stars_count": null, "max_stars_repo_head_hexsha": "58a0cb36bbf3dc4333860be666fcfe18f8aada14", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wangkuner/mechanics", "max_stars_repo_path": "OrdinaryDifferentialEquation/Group.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6, "size": 19 }
\documentclass[../thesis.tex]{subfiles}
\begin{document}

\chapter{Properties of Natural Numbers and Equational Reasoning}\label{props}

Properties of natural numbers play a major role in the development of proofs in this thesis. With propositional equality at our disposal, we will demonstrate how to prove properties such as the commutative property of addition. As proofs get more complicated, we will make proving easier by introducing a powerful tool: \textit{equational reasoning}.

\paragraph{Remark} Readers who are familiar with equational/preorder reasoning may skip this chapter.

\section{Proving Equational Propositions}

\paragraph{Right identity of addition}

Recap the definition of addition on {\lstinline|ℕ|}.

\begin{lstlisting}
_+_ : ℕ → ℕ → ℕ
zero + y = y
suc x + y = suc (x + y)
\end{lstlisting}

{\lstinline|_+_|} is defined by induction on the first argument. That means we get the \textit{left identity} of addition for free, as {\lstinline|zero + y|} and {\lstinline|y|} are \textit{computationally equal}. However, this is not the case for the \textit{right identity}, which has to be proven explicitly.

\begin{lstlisting}
+-right-identity : (n : ℕ) → n + 0 ≡ n
+-right-identity zero    = ?0
+-right-identity (suc n) = ?1
\end{lstlisting}

By induction on the only argument, we get two sub-goals:

\begin{lstlisting}
?0 : 0 ≡ 0
?1 : suc (n + 0) ≡ suc n
\end{lstlisting}

{\lstinline|?0|} can be trivially proven with {\lstinline|refl|}. As for {\lstinline|?1|}, we see that its type looks a lot like the proposition we are proving, except that both sides of the equation are ``coated'' with a {\lstinline|suc|}. With {\lstinline|cong suc : ∀ {x y} → x ≡ y → suc x ≡ suc y|}, we can substitute a term under {\lstinline|suc|} with another one if they are equal, and finish the proof by calling the function recursively with a \textit{smaller} argument.

\begin{lstlisting}
+-right-identity : ∀ n → n + 0 ≡ n
+-right-identity zero    = refl
+-right-identity (suc n) = cong suc (+-right-identity n)
\end{lstlisting}

\paragraph{Moving suc to the other side}

This is an essential lemma for proving more advanced theorems. The proof follows a pattern similar to that of {\lstinline|+-right-identity|}.\footnote{In fact, all of these proofs (hence programs) can be generalized to a \textit{fold}, but that is not the point here.}

\begin{lstlisting}
+-suc : ∀ m n → m + suc n ≡ suc (m + n)
+-suc zero    n = refl
+-suc (suc m) n = cong suc (+-suc m n)
\end{lstlisting}

\paragraph{Commutative property of addition}

Similarly, by induction on the first argument, we get two sub-goals:

\begin{lstlisting}
+-comm : ∀ m n → m + n ≡ n + m
+-comm zero    n = ?0
+-comm (suc m) n = ?1

?0 : n ≡ n + zero
?1 : suc (m + n) ≡ n + suc m
\end{lstlisting}

{\lstinline|?0|} can be solved with {\lstinline|+-right-identity|} with a ``twist''. The symmetry of equality, {\lstinline|sym|}, enables us to swap both sides of an equation.

\begin{lstlisting}
+-comm zero n = sym (+-right-identity n)
\end{lstlisting}

However, it is not obvious how to solve {\lstinline|?1|} straight out. The proof has to be ``broken'', or ``split'', into two steps:

\begin{enumerate}
    \item Apply {\lstinline|+-suc|} with {\lstinline|sym|} to the right-hand side of the equation to get {\lstinline|suc (m + n) ≡ suc (n + m)|}.
    \item Apply the induction hypothesis to {\lstinline|cong suc|}.
\end{enumerate}

These small pieces of proof are glued back together with the transitivity of equality, {\lstinline|trans|}.
\begin{lstlisting}
+-comm (suc m) n = trans (cong suc (+-comm m n)) (sym (+-suc n m))
\end{lstlisting}

\section{Equational Reasoning}

We can see that proofs are composable just like programs.

\begin{lstlisting}
trans (cong suc (+-comm m n)) (sym (+-suc n m))
\end{lstlisting}

However, it is difficult to see what is going on in between these clauses, and it only gets worse as propositions get more complicated. Imagine having dozens of {\lstinline|trans|}, {\lstinline|sym|} and {\lstinline|cong|} scattered everywhere. Fortunately, these complex proofs can be written in a concise and modular manner with a simple yet powerful technique called \textit{equational reasoning}. Agda's flexible mixfix syntax allows the technique to be implemented with just a few combinators~\cite{erikhesselinkpaulvisschers2008}. This is best illustrated by an example:

\begin{lstlisting}
+-comm : ∀ m n → m + n ≡ n + m
+-comm zero    n = sym (+-right-identity n)
+-comm (suc m) n =
    begin
        suc m + n
    ≡⟨ refl ⟩
        suc (m + n)
    ≡⟨ cong suc (+-comm m n) ⟩
        suc (n + m)
    ≡⟨ sym (+-suc n m) ⟩
        n + suc m
    ∎
\end{lstlisting}

With equational reasoning, we can see how an expression equates to another, step by step, justified by theorems. The first and the last steps correspond to the two sides of the equation of the proposition. {\lstinline|begin_|} marks the beginning of a reasoning; {\lstinline|_≡⟨_⟩_|} chains two expressions with the justification placed in between; {\lstinline|_∎|} marks the end of a reasoning (\textit{QED}).

\subsection{Anatomy of Equational Reasoning}

A typical equational reasoning proof can often be broken down into \textbf{three} parts.

\begin{enumerate}
    \item\label{eq-part1} Starting from the left-hand side of the equation, through a series of steps, the expression is ``arranged'' into a form that allows the induction hypothesis to be applied. In the following step of {\lstinline|+-comm|}, nothing needs to be arranged, because the two expressions are computationally equal (the {\lstinline|refl|} can be omitted).

\begin{lstlisting}
    begin
        suc m + n
    ≡⟨ refl ⟩
        suc (m + n)
\end{lstlisting}

    \item\label{eq-part2} {\lstinline|m + n|} emerges as part of the expression, which enables us to apply the induction hypothesis.

\begin{lstlisting}
        suc (m + n)
    ≡⟨ cong suc (+-comm m n) ⟩
        suc (n + m)
\end{lstlisting}

    \item\label{eq-part3} After applying the induction hypothesis, the expression is then ``rearranged'' into the right-hand side of the equation, hence completing the proof.

\begin{lstlisting}
        suc (n + m)
    ≡⟨ sym (+-suc n m) ⟩
        n + suc m
    ∎
\end{lstlisting}

\end{enumerate}

\paragraph{arranging expressions}

To arrange an expression into the shape that we desire, as in part \ref{eq-part1} and part \ref{eq-part3}, while keeping it equal, we need properties such as the commutativity or associativity of some operator, or distributive properties when there is more than one operator. The operators we will be dealing with often come with these properties. Take addition and multiplication, for example: together they form a nice semiring structure.

\paragraph{substituting equals for equals}

As we have seen in part \ref{eq-part2}, sometimes only a part of an expression needs to be substituted. Say we have a proof {\lstinline|eq : X ≡ Y|}, and we want to substitute {\lstinline|X|} with {\lstinline|Y|} in a more complex expression {\lstinline|a b (c X) d|}.
We could ask {\lstinline|cong|} to ``target'' the part to substitute by supplying a function like this:
%
\begin{lstlisting}
λ w → a b (c w) d
\end{lstlisting}
%
which abstracts the part we want to substitute away, such that:

\begin{lstlisting}
cong (λ w → a b (c w) d) eq : a b (c X) d ≡ a b (c Y) d
\end{lstlisting}

\section{Decidable Equality on Natural Numbers}

Equality is decidable on natural numbers, which means that we can always tell whether two numbers are equal, and explain the reason with a proof.

\begin{lstlisting}
_≟_ : Decidable {A = ℕ} _≡_
zero  ≟ zero   = yes refl
suc m ≟ suc n  with m ≟ n
suc m ≟ suc .m | yes refl = yes refl
suc m ≟ suc n  | no prf   = no (prf ∘ (λ p → subst (λ x → m ≡ pred x) p refl))
zero  ≟ suc n  = no λ()
suc m ≟ zero   = no λ()
\end{lstlisting}

Decidable functions are often used together with \textit{with-abstractions}.

\begin{lstlisting}
answer : ℕ → Bool
answer n with n ≟ 42
answer n | yes p = true
answer n | no ¬p = false
\end{lstlisting}
%
where {\lstinline|p : n ≡ 42|} and {\lstinline|¬p : n ≢ 42|}.

\section{Preorder}

Aside from stating that two expressions are equal, a proposition can also assert that one expression is ``less than or equal to'' another, given a preorder. A preorder is a binary relation that is \textit{reflexive} and \textit{transitive}. One of the instances of a preorder on natural numbers is the binary relation {\lstinline|_≤_|} (less than or equal to).

\begin{lstlisting}
data _≤_ : ℕ → ℕ → Set where
    z≤n : ∀ {n} → zero ≤ n
    s≤s : ∀ {m n} (m≤n : m ≤ n) → suc m ≤ suc n
\end{lstlisting}

The following is a proof of $ 3 \leq 5 $:

\begin{lstlisting}
3≤5 : 3 ≤ 5
3≤5 = s≤s (s≤s (s≤s z≤n))
\end{lstlisting}

To prove $ 3 \leq 5 $, we need a proof of $ 2 \leq 4 $ for {\lstinline|s≤s|}, and so on, until the left-hand side reaches zero, where the proof ends with a {\lstinline|z≤n|}. Here are some other binary relations that can be defined with {\lstinline|_≤_|}.

\begin{lstlisting}
_<_ : Rel ℕ Level.zero
m < n = suc m ≤ n

_≰_ : Rel ℕ Level.zero
a ≰ b = ¬ a ≤ b

_≥_ : Rel ℕ Level.zero
m ≥ n = n ≤ m
\end{lstlisting}

\section{Preorder reasoning}

Combinators for equational reasoning can be further generalized to support \textit{preorder reasoning}. Preorders are \textit{reflexive} and \textit{transitive}, which means that expressions can be chained with a series of relations, just as in equational reasoning. Suppose we already have {\lstinline|m≤m+n : ∀ m n → m ≤ m + n|} and we want to prove a slightly different theorem.

\begin{lstlisting}
m≤n+m : ∀ m n → m ≤ n + m
m≤n+m m n =
    start
        m
    ≤⟨ m≤m+n m n ⟩
        m + n
    ≈⟨ +-comm m n ⟩
        n + m
    □
\end{lstlisting}
%
where {\lstinline|_≤⟨_⟩_|} and {\lstinline|_≈⟨_⟩_|} are the transitive and reflexive combinators, respectively.\footnote{Combinators for preorder reasoning are renamed to prevent conflicts with equational reasoning.} Step by step, starting from the left-hand side of the relation, the expressions get greater and greater as they approach the right-hand side of the relation.

\paragraph{monotonicity of operators}

In equational reasoning, we could substitute part of an expression with something equal, using {\lstinline|cong|}, because {\lstinline|_≡_|} is congruent. However, we cannot, in general, substitute part of an expression with something \textit{greater}. Take the following function {\lstinline|f|} as an example.

\begin{lstlisting}
f : ℕ → ℕ
f 0 = 1
f 1 = 0
f _ = 1
\end{lstlisting}

{\lstinline|f|} returns $ 1 $ on all inputs except for $ 1 $. {\lstinline|0 ≤ 1|} holds, but it does not imply that {\lstinline|f 0 ≤ f 1|} also holds.
As a result, a generic mechanism like {\lstinline|cong|} does not exist in preorder reasoning. Given \lstinline|x ≤ y|, we can substitute \lstinline|f x| with \lstinline|f y| using a proof that \lstinline|f| is monotonic.\footnote{Such a proof or theorem often goes by the name of \lstinline|f-mono|.}

\section{Decidable Preorder on Natural Numbers}

The preorder is also decidable on natural numbers, which means that we can always tell whether one number is less than or equal to another.

\begin{lstlisting}
_≤?_ : Decidable _≤_
zero  ≤? _    = yes z≤n
suc m ≤? zero = no λ()
suc m ≤? suc n with m ≤? n
... | yes m≤n = yes (s≤s m≤n)
... | no  m≰n = no (m≰n ∘ ≤-pred)
\end{lstlisting}

With with-abstractions, we can define a function like this:

\begin{lstlisting}
threshold : ℕ → ℕ
threshold n with n ≤? 87
threshold n | yes p = n
threshold n | no ¬p = 87
\end{lstlisting}
%
where {\lstinline|p : n ≤ 87|} and {\lstinline|¬p : n ≰ 87|}.

% Suppose we want to prove that {\lstinline|_+_|} is monotonic in both arguments.
%
% \begin{lstlisting}
% +-mono : ∀ m₁ m₂ n₁ n₂ → m₁ ≤ m₂ → n₁ ≤ n₂ → m₁ + n₁ ≤ m₂ + n₂
% \end{lstlisting}
%
% By induction on the argument of type {\lstinline|m₁ ≤ m₂|}, we get two goals.
% The type and context of the first goal is as follows:
%
% \begin{lstlisting}
% +-mono 0 m₂ n₁ n₂ z≤n n₁≤n₂ = ?0
%
% Goal of ?0 : n₁ ≤ m₂ + n₂
% n₁≤n₂ : n₁ ≤ n₂
% n₁, n₂, m₂ : ℕ
% \end{lstlisting}
%
% \begin{lstlisting}
% +-mono (suc m₁) (suc m₂) n₁ n₂ (s≤s p) n₁≤n₂ = ?1
% \end{lstlisting}
%
% The induction forces {\lstinline|m₁ m₂|} to change as well.
%
% \begin{lstlisting}
% +-mono : ∀ m₁ m₂ n₁ n₂ → m₁ ≤ m₂ → n₁ ≤ n₂ → m₁ + n₁ ≤ m₂ + n₂
% +-mono 0 m₂ n₁ n₂ z≤n n₁≤n₂ = ?0
% +-mono (suc m₁) (suc m₂) n₁ n₂ (s≤s p) n₁≤n₂ = ?1
% \end{lstlisting}
% \begin{lstlisting}
% +-mono : ∀ m₁ m₂ n₁ n₂ → m₁ ≤ m₂ → n₁ ≤ n₂ → m₁ + n₁ ≤ m₂ + n₂
% +-mono 0 m₂ n₁ n₂ z≤n n₁≤n₂ = ?0
% +-mono (suc m₁) (suc m₂) n₁ n₂ (s≤s p) n₁≤n₂ = ?1
% \end{lstlisting}
% Take \textit{monus} \footnote{Monus, or \textit{truncated subtraction}, is a
% kind of subtraction that never goes negative when the subtrahend is greater then
% the minued.} {\lstinline|_∸_|} for example:
%
% \begin{lstlisting}
% f : ∀ m n
%   → (prop : m ≤ n)
%   → m ∸ m ≤ m ∸ n
% f m n prop =
%   start
%     m ∸ m
%   ≤⟨ ? ⟩
%     m ∸ n
%   □
% \end{lstlisting}
%
% We cannot prove {\lstinline|f|} because {\lstinline|m ≤ n|} does not imply
% {\lstinline|m ∸ m ≤ m ∸ n|}\footnote{the second argument of {\lstinline|_∸_|}
% is \textit{contravariant} in the sense that the result of {\lstinline|_∸_|} would
% increase when the second argument decreases.}
% The proposition {\lstinline|f|} as seen above can only be disapproved, because
% the second argument of {\lstinline|_∸_|} is \textit{contravariant} in the sense
% that the result of {\lstinline|_∸_|} would increase when the second argument decreases.
% Even worse, the function that takes the substitute as an argument may not
% even be \textit{monotonic} like monus. As a result, a generic mechanism like
% {\lstinline|cong|} does not exist in preorder reasoning. We can only substitute
% part of an expression when the function is \textit{monotonic}. The monotonicity
% of each function are different and have to be proved
%
% Similar to that of equation reasoning, sometimes we need to rewrite expressions with
% {\lstinline|_≈⟨_⟩_|} when they are equal to each other (An equivalence relation is also a
% preorder). We would also skip these steps.
%
% Some of the obvious transitive steps (that is written in {\lstinline|_≤⟨_⟩_|})
% will also be skipped.
\section{Skipping trivial proofs}

From now on, we will dispense with most of the steps and justifications in equational and preorder reasoning, because it is often obvious what happens in each step. In fact, there is no formal distinction between the proofs we disregard and those we feel are important. They are all equally indispensable to Agda.

\section{Relevant Properties of Natural Numbers}

Relevant properties of {\lstinline|ℕ|} used in the remainder of the thesis are introduced in this section. Aside from some basic properties taken from the standard library, we have also added some useful theorems, lemmata, and corollaries.\footnote{\textit{Theorem}, \textit{lemma}, \textit{corollary} and \textit{property} are all synonyms for \textit{established proposition}. There is no formal distinction between these terms, and they are used interchangeably in the thesis.}

\subsection{Equational Propositions}

\subsubsection{natural number}

\begin{lstlisting}
data ℕ : Set where
    zero : ℕ
    suc  : ℕ → ℕ
\end{lstlisting}

\begin{itemize}
    \item {\lstinline|cancel-suc : ∀ {x y} → suc x ≡ suc y → x ≡ y|} \\
        {\lstinline|suc|} is injective.
\end{itemize}

\subsubsection{addition}

\begin{lstlisting}
_+_ : ℕ → ℕ → ℕ
zero  + y = y
suc x + y = suc (x + y)
\end{lstlisting}

\begin{itemize}
    \item {\lstinline|+-right-identity : ∀ n → n + 0 ≡ n|} \\
        the right identity of addition.
    \item {\lstinline|+-suc : ∀ m n → m + suc n ≡ suc (m + n)|} \\
        moving {\lstinline|suc|} from one term to another.
    \item {\lstinline|+-assoc : ∀ m n o → (m + n) + o ≡ m + (n + o)|} \\
        the associative property of addition.
    \item {\lstinline|+-comm : ∀ m n → m + n ≡ n + m|} \\
        the commutative property of addition.
    \item {\lstinline|[a+b]+c≡[a+c]+b : ∀ a b c → a + b + c ≡ a + c + b|} \\
        a convenient corollary for swapping terms.
    \item {\lstinline|a+[b+c]≡b+[a+c] : ∀ a b c → a + (b + c) ≡ b + (a + c)|} \\
        a convenient corollary for swapping terms.
    \item {\lstinline|cancel-+-left : ∀ i {j k} → i + j ≡ i + k → j ≡ k|} \\
        the left cancellation property of addition.
    \item {\lstinline|cancel-+-right : ∀ k {i j} → i + k ≡ j + k → i ≡ j|} \\
        the right cancellation property of addition.
\end{itemize}

\subsubsection{multiplication}

\begin{lstlisting}
_*_ : ℕ → ℕ → ℕ
zero  * y = zero
suc x * y = y + (x * y)
\end{lstlisting}

\begin{itemize}
    \item {\lstinline|*-right-zero : ∀ n → n * 0 ≡ 0|} \\
        the right absorbing element of multiplication.
    \item {\lstinline|*-left-identity : ∀ n → 1 * n ≡ n|} \\
        the left identity of multiplication.
    \item {\lstinline|*-right-identity : ∀ n → n * 1 ≡ n|} \\
        the right identity of multiplication.
    \item {\lstinline|+-*-suc : ∀ m n → m * suc n ≡ m + m * n|} \\
        multiplication over {\lstinline|suc|}.
    \item {\lstinline|*-assoc : ∀ m n o → (m * n) * o ≡ m * (n * o)|} \\
        the associative property of multiplication.
    \item {\lstinline|*-comm : ∀ m n → m * n ≡ n * m|} \\
        the commutative property of multiplication.
    \item {\lstinline|distribʳ-*-+ : ∀ m n o → (n + o) * m ≡ n * m + o * m|} \\
        the right distributive property of multiplication over addition.
    \item {\lstinline|distrib-left-*-+ : ∀ m n o → m * (n + o) ≡ m * n + m * o|} \\
        the left distributive property of multiplication over addition.
\end{itemize}

\subsubsection{monus}

Monus, or \textit{truncated subtraction}, is a kind of subtraction that never goes negative when the subtrahend is greater than the minuend.
\begin{lstlisting}
_∸_ : ℕ → ℕ → ℕ
n     ∸ zero  = n
zero  ∸ suc m = zero
suc n ∸ suc m = n ∸ m
\end{lstlisting}

\begin{itemize}
    \item {\lstinline|0∸n≡0 : ∀ n → 0 ∸ n ≡ 0|}
    \item {\lstinline|n∸n≡0 : ∀ n → n ∸ n ≡ 0|}
    \item {\lstinline|m+n∸n≡m : ∀ m n → (m + n) ∸ n ≡ m|}
    \item {\lstinline|m+n∸m≡n : ∀ {m n} → m ≤ n → m + (n ∸ m) ≡ n|}
    \item {\lstinline|m∸n+n≡m : ∀ {m n} → n ≤ m → m ∸ n + n ≡ m|}
    \item {\lstinline|∸-+-assoc : ∀ m n o → (m ∸ n) ∸ o ≡ m ∸ (n + o)|} \\
        the associative property of monus and addition.
    \item {\lstinline|+-∸-assoc : ∀ m {n o} → o ≤ n → (m + n) ∸ o ≡ m + (n ∸ o)|} \\
        the associative property of monus and addition.
    \item {\lstinline|*-distrib-∸ʳ : ∀ m n o → (n ∸ o) * m ≡ n * m ∸ o * m|} \\
        the right distributive property of monus over multiplication.
\end{itemize}

\subsubsection{min and max}

So-called {\lstinline|min|} and {\lstinline|max|} in Haskell. Min, {\lstinline|_⊓_|}, computes the lesser of two numbers.

\begin{lstlisting}
_⊓_ : ℕ → ℕ → ℕ
zero  ⊓ n     = zero
suc m ⊓ zero  = zero
suc m ⊓ suc n = suc (m ⊓ n)
\end{lstlisting}

Max, {\lstinline|_⊔_|}, computes the greater of two numbers.

\begin{lstlisting}
_⊔_ : ℕ → ℕ → ℕ
zero  ⊔ n     = n
suc m ⊔ zero  = suc m
suc m ⊔ suc n = suc (m ⊔ n)
\end{lstlisting}

\begin{itemize}
    \item {\lstinline|⊓-comm : ∀ m n → m ⊓ n ≡ n ⊓ m|} \\
        the commutative property of min.
    \item {\lstinline|⊔-comm : ∀ m n → m ⊔ n ≡ n ⊔ m|} \\
        the commutative property of max.
\end{itemize}

\subsection{Relational Propositions}

\subsubsection{natural number}

\begin{itemize}
    \item {\lstinline|≤-pred : ∀ {m n} → suc m ≤ suc n → m ≤ n|} \\
        the inverse of {\lstinline|s≤s|}.
    \item {\lstinline|<⇒≤ : _<_ ⇒ _≤_|}
    \item {\lstinline|>⇒≰ : _>_ ⇒ _≰_|}
    \item {\lstinline|≤⇒≯ : _≤_ ⇒ _≯_|}
    \item {\lstinline|<⇒≱ : _<_ ⇒ _≱_|}
    \item {\lstinline|>⇒≢ : _>_ ⇒ _≢_|}
    \item {\lstinline|≥⇒≮ : _≥_ ⇒ _≮_|}
    \item {\lstinline|<⇒≢ : _<_ ⇒ _≢_|}
    \item {\lstinline|≤∧≢⇒< : ∀ {m n} → m ≤ n → m ≢ n → m < n|}
    \item {\lstinline|≥∧≢⇒> : ∀ {m n} → m ≥ n → m ≢ n → m > n|}
\end{itemize}

\subsubsection{addition}

\begin{itemize}
    \item {\lstinline|≤-step : ∀ {m n} → m ≤ n → m ≤ 1 + n|}
    \item {\lstinline|≤-steps : ∀ {m n} k → m ≤ n → m ≤ k + n|}
    \item {\lstinline|m≤m+n : ∀ m n → m ≤ m + n|}
    \item {\lstinline|n≤m+n : ∀ m n → n ≤ m + n|}
    \item {\lstinline|_+-mono_ : ∀ {m₁ m₂ n₁ n₂} → m₁ ≤ m₂ → n₁ ≤ n₂ → m₁ + n₁ ≤ m₂ + n₂|} \\
        the monotonicity of addition.
    \item {\lstinline|n+-mono : ∀ {i j} n → i ≤ j → n + i ≤ n + j|} \\
        {\lstinline|_+-mono_|} with the first argument fixed.
    \item {\lstinline|+n-mono : ∀ {i j} n → i ≤ j → i + n ≤ j + n|} \\
        {\lstinline|_+-mono_|} with the second argument fixed.
    \item {\lstinline|n+-mono-inverse : ∀ n → ∀ {a b} → n + a ≤ n + b → a ≤ b|} \\
        the inverse of {\lstinline|n+-mono|}.
    \item {\lstinline|+n-mono-inverse : ∀ n → ∀ {a b} → a + n ≤ b + n → a ≤ b|} \\
        the inverse of {\lstinline|+n-mono|}.
    \item {\lstinline|+-mono-contra : ∀ {a b c d} → a ≥ b → a + c < b + d → c < d|}
\end{itemize}

\subsubsection{monus}

\begin{itemize}
    \item {\lstinline|n∸m≤n : ∀ m n → n ∸ m ≤ n|}
    \item {\lstinline|n≤m+n∸m : ∀ m n → n ≤ m + (n ∸ m)|}
    \item {\lstinline|∸-mono : ∀ x y u v → x ≤ y → u ≥ v → x ∸ u ≤ y ∸ v|} \\
        the monotonicity of monus.
\end{itemize}

\end{document}
{ "alphanum_fraction": 0.6428505407, "avg_line_length": 34.6762820513, "ext": "tex", "hexsha": "ef9e928ecd866d7d34e1bb083c5ed40a6afd6415", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_forks_event_min_datetime": "2015-05-30T05:50:50.000Z", "max_forks_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "banacorn/numeral", "max_forks_repo_path": "Thesis/tex/props.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "banacorn/numeral", "max_issues_repo_path": "Thesis/tex/props.tex", "max_line_length": 158, "max_stars_count": 1, "max_stars_repo_head_hexsha": "aae093cc9bf21f11064e7f7b12049448cd6449f1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "banacorn/numeral", "max_stars_repo_path": "Thesis/tex/props.tex", "max_stars_repo_stars_event_max_datetime": "2015-04-23T15:58:28.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-23T15:58:28.000Z", "num_tokens": 7128, "size": 21638 }
\label{sec:introduction}
\pagenumbering{arabic}

% Introduction to the thesis
\section{History and Motivation}
This is where you set the context for your thesis, by reviewing the historic development of the field and arguing why the problem you want to solve is relevant. Typically a short review of references.

Example: The book in ref. \cite{altland2010condensed} is great, and the work of \cite{Thouless1982,Tomonaga1950} is very nice.

\section{Structure of Thesis}
The introduction is in \cref{sec:introduction}. In \cref{ch:chapter1} you can see what a normal chapter looks like. At the end you can find \cref{ch:appendixA,ch:appendixB}.
{ "alphanum_fraction": 0.7862714509, "avg_line_length": 42.7333333333, "ext": "tex", "hexsha": "d11ffb9d202e8f29a8521d577a33f30a4e7ea4ea", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ca9b9f9c29f3b6d90a4b383ba90b52fd94803780", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "FagKom/oppgavemal_1", "max_forks_repo_path": "tex/04-introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ca9b9f9c29f3b6d90a4b383ba90b52fd94803780", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "FagKom/oppgavemal_1", "max_issues_repo_path": "tex/04-introduction.tex", "max_line_length": 166, "max_stars_count": null, "max_stars_repo_head_hexsha": "ca9b9f9c29f3b6d90a4b383ba90b52fd94803780", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "FagKom/oppgavemal_1", "max_stars_repo_path": "tex/04-introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 171, "size": 641 }
\SetAPI{C} \section{ICommandExtendable} \label{extendable:ICommandExtendable} \ClearAPI \javadoc{De.Osthus.Minerva.Command.ICommandExtendable}{ICommandExtendable} \javadoc{System.Windows.Input.ICommand}{ICommand} \TODO %% GENERATED LISTINGS - DO NOT EDIT \inputcsharp{Extension point for instances of \type{ICommand}} {Minerva.Core/minerva/command/ICommandExtendable.cs} \begin{lstlisting}[style=Csharp,caption={Example to register to the extension point (C\#)}] IBeanContextFactory bcf = ... IBeanConfiguration myExtension = bcf.RegisterAnonymousBean... bcf.Link(myExtension).To<ICommandExtendable>().With(...); \end{lstlisting} %% GENERATED LISTINGS END
{ "alphanum_fraction": 0.799086758, "avg_line_length": 36.5, "ext": "tex", "hexsha": "689c6792e986a41f6ca8c65746febcda9ed2028c", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/extendable/ICommandExtendable.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/extendable/ICommandExtendable.tex", "max_line_length": 91, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/extendable/ICommandExtendable.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 175, "size": 657 }
\chapter{Basic concepts}\label{sec:basic_concepts} A UAVCAN network is a decentralized peer network, where each peer (node) has a unique numeric identifier --- \emph{node ID} --- ranging from 1 to 127, inclusively. Nodes of a UAVCAN network can communicate using the following communication methods: \begin{description} \item[Message broadcasting] --- The primary method of data exchange with one-to-all publish/subscribe semantics. \item[Service invocation] --- The communication method for peer-to-peer request/response interactions\footnote{Like remote procedure call (RPC).}. \end{description} For each type of communication, a predefined set of data structures is used, where each data structure has a unique identifier --- the \emph{data type ID} (DTID). Additionally, every data structure definition has a pair of major and minor semantic version numbers, which enable data type definitions to evolve in arbitrary ways while ensuring a well-defined migration path if backward-incompatible changes are introduced. Some data structures are standard and defined by the protocol specification (of which only a small subset are required); others may be specific to a particular application or vendor. Since every message or service data type has its own unique data type ID, and each node in the network has its own unique node ID, a pair of data type ID and node ID can be used to support redundant nodes with identical functionality inside the same network. Message and service data structures are defined using the Data Structure Description Language (DSDL) (chapter \ref{sec:dsdl}). A DSDL description can be used to automatically generate the serialization/deserialization code for every defined data structure in a particular programming language. DSDL ensures that the worst case memory footprint and computational complexity per data type are constant and easily predictable. On top of the standard data types, UAVCAN defines a set of standard high-level functions including: node health monitoring, node discovery, time synchronization, firmware update, plug-and-play node support, and more. For more information see chapter \ref{sec:application_layer}. Serialized message and service data structures are exchanged by means of the transport layer (chapter \ref{sec:transport_layer}), which implements automatic decomposition of long transfers into several transport frames\footnote{Here and elsewhere, a \emph{transport frame} means a block of data that can be atomically exchanged over the network, e.g., a CAN frame.} and reassembly from these transport frames back into a single transfer buffer, allowing nodes to exchange data structures of arbitrary size. \begin{figure}[hbt] \centering \begin{tabular}{|c|c|l|c|l|c|} \hline \multicolumn{6}{|c|}{Applications} \\ \hline \multicolumn{1}{|l|}{} & Required functions & & Standard functions & & Custom functions \\ \cline{2-2} \cline{4-4} \cline{6-6} \multicolumn{2}{|c|}{Required data types} & \multicolumn{2}{c|}{Standard data types} & \multicolumn{2}{c|}{Custom data types} \\ \hline \multicolumn{6}{|c|}{Serialization} \\ \hline \multicolumn{6}{|c|}{Transport} \\ \hline \end{tabular} \caption{UAVCAN architectural diagram.\label{fig:architecture}} \end{figure} \section{Message broadcasting} Message broadcasting refers to the transmission of a serialized data structure over the network to other nodes. 
This is the primary data exchange mechanism used in UAVCAN; it is functionally similar to raw data exchange with minimal overhead, additional communication integrity guarantees, and automatic decomposition and reassembly of long payloads across multiple transport frames. Typical use cases may include transfer of the following kinds of data (either cyclically or on an ad-hoc basis): sensor measurements, actuator commands, equipment status information, and more. Information contained in a broadcast message is summarized in the table \ref{table:broadcast_message_info}. \begin{UAVCANSimpleTable}{Broadcast message properties}{|l X|}\label{table:broadcast_message_info} Property & Description \\ Payload & The serialized message data structure. \\ Data type ID & Numerical identifier that indicates how the data structure should be interpreted. \\ Data type major version number & Semantic major version number of the data type description. \\ Source node ID & The node ID of the transmitting node (excepting anonymous messages). \\ Transfer ID & A small overflowing integer that increments with every transfer of this message type from a given node. Used for message sequence monitoring, multi-frame transfer reassembly, and elimination of transport frame duplication errors for single-frame transfers. Additionally, Transfer ID is crucial for automatic management of redundant transport interfaces. The properties of this field are explained in detail in the chapter \ref{sec:transport_layer}. \\ \end{UAVCANSimpleTable} \subsection{Anonymous message broadcasting} Nodes that don't have a unique node ID can publish \emph{anonymous messages}. An anonymous message is different from a regular message in that it doesn't contain a source node ID. UAVCAN nodes will not have an identifier initially until they are assigned one, either statically (which is generally the preferred option for applications where a high degree of determinism and high safety assurances are required) or dynamically. Anonymous messages are particularly useful for the dynamic node ID allocation feature, which is explored in detail in the chapter~\ref{sec:application_layer}. Anonymous messages cannot be decomposed into multiple transport frames, meaning that their payload capacity is limited to that of a single transport frame. More info is provided in the chapter~\ref{sec:transport_layer}. \section{Service invocation} Service invocation is a two-step data exchange operation between exactly two nodes: a client and a server. The steps are\footnote{The request/response semantic is facilitated by means of hardware (if available) or software acceptance filtering and higher-layer logic. No additional support or non-standard transport layer features are required.}: \begin{enumerate} \item The client sends a service request to the server. \item The server takes appropriate actions and sends a response to the client. \end{enumerate} Typical use cases for this type of communication include: node configuration parameter update, firmware update, an ad-hoc action request, file transfer, and similar service tasks. Information contained in service requests and responses is summarized in the table \ref{table:service_req_resp_info}. \begin{UAVCANSimpleTable}{Service request/response properties}{|l X|}\label{table:service_req_resp_info} Property & Description \\ Payload & The serialized request/response data structure. \\ Data type ID & Numerical identifier that indicates how the data structure should be interpreted. 
\\ Data type major version number & Semantic major version number of the data type definition. \\ Client node ID & Source node ID during request transfer, destination node ID during response transfer. \\ Server node ID & Destination node ID during request transfer, source node ID during response transfer. \\ Transfer ID & A small overflowing integer that increments with every call of this service type from a given node. Used for request/response matching, multi-frame transfer reassembly, and elimination of transport frame duplication errors for single-frame transfers. Additionally, Transfer ID is crucial for automatic management of redundant transport interfaces. The properties of this field are explained in detail in the chapter \ref{sec:transport_layer}. \\ \end{UAVCANSimpleTable} Both request and response contain same values for all listed fields except payload, where the content is application-defined. Clients match responses with corresponding requests using the following fields: data type ID, data type major version number, client node ID, server node ID, and transfer ID.
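To illustrate the bookkeeping implied by this matching rule, the sketch below shows one possible client-side implementation.
It is editorial and non-normative: the function names and the dictionary-based table of pending requests are invented purely for illustration.

\begin{verbatim}
# Non-normative sketch: matching responses to outstanding requests
# by the five fields listed above.
pending = {}  # (dtid, major_version, client_id, server_id, transfer_id) -> handler

def request_sent(dtid, major_version, client_id, server_id, transfer_id, handler):
    pending[(dtid, major_version, client_id, server_id, transfer_id)] = handler

def response_received(dtid, major_version, client_id, server_id,
                      transfer_id, payload):
    key = (dtid, major_version, client_id, server_id, transfer_id)
    handler = pending.pop(key, None)
    if handler is not None:
        handler(payload)   # response matched to its request
    # else: unexpected or duplicated response; discard it
\end{verbatim}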
{ "alphanum_fraction": 0.7379997709, "avg_line_length": 62.7985611511, "ext": "tex", "hexsha": "4964f85e06182755b47acc817d194653be063c47", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "de515cb0c38c89770f9d5c0106fffc405c0501aa", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "kjetilkjeka/specification", "max_forks_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "de515cb0c38c89770f9d5c0106fffc405c0501aa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "kjetilkjeka/specification", "max_issues_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_line_length": 175, "max_stars_count": null, "max_stars_repo_head_hexsha": "de515cb0c38c89770f9d5c0106fffc405c0501aa", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "kjetilkjeka/specification", "max_stars_repo_path": "specification/basic_concepts/basic_concepts.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1721, "size": 8729 }
% gjilguid2e.tex % V2.0 released 1998 December 18 % V2.1 released 2003 October 7 -- Gregor Hutton, updated the web address for the style files. \documentclass[extra,mreferee]{gji} \usepackage{timet} \usepackage{gensymb} \usepackage{graphicx} \usepackage{float} \usepackage{amsmath} \usepackage[utf8]{inputenc} \usepackage{indentfirst} \title[Geophys.\ J.\ Int.: Magnetic radial inversion] {Magnetic radial inversion for 3-D source geometry estimation} \author[L.B. Vital, V.C. Oliveira Jr. and V.C.F. Barbosa] {L.B. Vital$^1$\thanks{[email protected]}, V.C. Oliveira Jr.$^1$ and V.C.F. Barbosa$^1$\\ $^1$ Observat{\'o}rio Nacional, Gal. Jos{\'e} Cristino, 77, São Crist{\'o}v{\~a}o, Rio de Janeiro, 20921-400, Brazil } \date{Received 1998 December 18; in original form 1998 November 22} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \volume{200} \pubyear{1998} %\def\LaTeX{L\kern-.36em\raise.3ex\hbox{{\small A}}\kern-.15em % T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX} %\def\LATeX{L\kern-.36em\raise.3ex\hbox{{\Large A}}\kern-.15em % T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX} % Authors with AMS fonts and mssymb.tex can comment out the following % line to get the correct symbol for Geophysical Journal International. \let\leqslant=\leq \newtheorem{theorem}{Theorem}[section] \begin{document} \label{firstpage} \maketitle \include{abstract} \begin{keywords} Magnetic anomalies: modeling and interpretation; Inverse theory; Numerical solutions. \end{keywords} \include{introduction} \include{methodology} \include{synthetic_data} \include{field_data} \include{conclusions} \begin{acknowledgments} The authors thank associate editor Jorg Renner, reviewers Horst Holstein, and Alexander Minakov for their questions and suggestions that greatly improved the original manuscript. We thank the Brazilian geological service CPRM for providing the field data. Leonardo Vital thanks the PhD. scholarship from CAPES (Finance Code 001). Vanderlei Oliveira Jr. thanks the fellowships from CNPQ (grant 315768/2020-7) and FAPERJ (grant E-26/202.729/2018). Valeria Barbosa thanks the fellowships from CNPQ (grant 307135/2014-4) and FAPERJ (grant E-26/202.582/2019). \end{acknowledgments} \section*{DATA AVAILABILITY} The synthetic data is available with the source code at http://doi.org/10.5281/zenodo.4755459. Additionally, the survey provided by the Brazilian Geological Survey (CPRM – www.cprm.gov.br) are available at http://geosgb.cprm.gov.br/. \bibliographystyle{gji} \bibliography{ref} \include{figures} \appendix \label{lastpage} \end{document}
{ "alphanum_fraction": 0.7559478127, "avg_line_length": 31.0238095238, "ext": "tex", "hexsha": "fea50ad57861e2594762b1dc915d424ea144d54a", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-01T02:14:31.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-01T02:14:31.000Z", "max_forks_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "pinga-lab/magnetic-radial-inversion", "max_forks_repo_path": "manuscript/radial_tfa_paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "pinga-lab/magnetic-radial-inversion", "max_issues_repo_path": "manuscript/radial_tfa_paper.tex", "max_line_length": 233, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ac7e04a143ddc29eb4ded78671a5382a2869d5d8", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "pinga-lab/magnetic-radial-inversion", "max_stars_repo_path": "manuscript/radial_tfa_paper.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-15T11:35:41.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-15T11:35:41.000Z", "num_tokens": 832, "size": 2606 }
\documentclass[main.tex]{subfiles}
\begin{document}

\section{Symmetry breaking and phase transitions}

\marginpar{Sunday\\ 2020-7-5, \\ compiled \\ \today}

Let us consider the time, in the early universe, before BBN.
The relevant temperatures and times are:
\begin{figure}
\centering
\begin{tabular}{ccc}
Temperature & Symmetry group & time \\
\hline
\SI{e19}{GeV} & \(G _{\text{GUT}}\) & \SI{e-43}{s} \\
\num{e15} \(\divisionsymbol\) \SI{e16}{GeV} & \(G _{\text{GUT}}\), barely & \SI{e-38}{s} \\
% \num{e2} \(\divisionsymbol\) \SI{e3}{GeV} & \(SU(3)_c \times SU(2)_L \times U(1)_Y\) & \num{e-10} \(\divisionsymbol\) \SI{e-12}{s} \\
\num{e2} \(\divisionsymbol\) \SI{e3}{GeV} & \(SU(3)_c \times SU(2)_L \times U(1)_Y\), barely & \num{e-10} \(\divisionsymbol\) \SI{e-12}{s} \\
\num{200} \(\divisionsymbol\) \SI{300}{MeV} & \(SU(3)_c \times U(1) _{\text{em}}\): quark-gluon confinement & \num{e-4} \(\divisionsymbol\) \SI{e-5}{s} \\
\SI{1}{MeV} & \(SU(3)_c \times U(1) _{\text{em}}\): nucleosynthesis & \SI{1}{s}
\end{tabular}
\caption{Phase transitions in the early universe.}
\label{tab:phase-transitions}
\end{figure}

The time shortly thereafter, from 1 to \SI{100}{s}, is nucleosynthesis, the earliest time we can see; we do so by exploring the abundances of light nuclei.
Everything before comes out of our model for particle physics.

After BBN, we have an epoch of radiation dominance, followed by matter dominance, then recombination and accelerated expansion.
We will not consider these, but instead keep to the first second.

The properties of the cosmic medium can either change in a \textbf{smooth crossover}, or abruptly in a \textbf{phase transition}.
Formally, the latter corresponds to an order parameter being zero in one phase and nonzero in another.
The order of a phase transition is the lowest order of derivative (of the relevant thermodynamic potential) which is discontinuous at the transition: first-order transitions are the sharpest and have latent heat, while second-order transitions are smoother.

We get phase transitions when there is a mismatch between the ground state at zero temperature and at some finite temperature.
Let us denote by \(\expval{\phi }_{T}\) the absolute minimum of the effective potential \(V _{\text{eff}} (T, \phi )\).
Sometimes, the ground state being \(\phi = 0\) is restored at a finite temperature.

First-order phase transitions have a discontinuity in \(\expval{\phi }_{T}\) at a certain \(T = T_C\); for second-order phase transitions it is continuous but not differentiable.
An example of a first-order transition is the boiling of a liquid; examples of second-order ones are the transitions of a ferromagnet, of a superconductor, of a superfluid.

For second-order and smooth-crossover transitions the medium may always be at equilibrium; on the other hand, for first-order transitions switching to the other minimum becomes thermodynamically favorable when the temperature crosses the critical point, so we can get \textbf{bubble nucleation}.
The expectation value of the field becomes inhomogeneous in space.
We get bubbles of the new vacuum, for which it is energetically favorable to expand.

The transition being energetically favorable is not enough: the nucleation rate must be higher than the expansion rate of the universe, otherwise the transition does not complete.

\subsection{The electroweak phase transition}

The potential for the Higgs field is given by
%
\begin{align}
V(\phi ) = \mu^2 \abs{\phi }^2 + \lambda \abs{\phi }^{4}
\,,
\end{align}
%
with \(\mu^2<0\) and \(\lambda >0\).
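Explicitly, minimising with respect to \(\abs{\phi }\) gives
%
\begin{align}
\frac{ \partial V}{ \partial \abs{\phi }} = 2 \mu^2 \abs{\phi } + 4 \lambda \abs{\phi }^{3} = 0
\implies
\abs{\phi }^2 = - \frac{\mu^2}{2 \lambda }
\,,
\end{align}
%
which is positive precisely because \(\mu^2 < 0\).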
The VEV is therefore
%
\begin{align}
\expval{\phi }_0 = \sqrt{\frac{- \mu^2}{2 \lambda }} = \frac{v}{\sqrt{2}}
\,,
\end{align}
%
so at zero temperature the potential can be written as
%
\begin{align}
V(\phi) = \lambda \qty(\phi^2 - \frac{v^2}{2})^2
\,.
\end{align}

What do we write for a nonzero temperature? The term to add is:
%
\begin{align}
V &= V(T = 0) + \frac{T^2}{24} \qty[ \sum _{\text{bosons}} g_i m_i^2 (\phi )
+ \frac{1}{2} \sum _{\text{fermions}} g_i m_i^2 (\phi )]  \\
&= \qty(- \lambda v^2 + \frac{\alpha}{24} T^2) \phi^2 + \lambda \phi^{4}
\,,
\end{align}
%
up to a \(\phi \)-independent constant, where \(m_i (\phi ) = h_i \phi \), \(h_i\) being the coupling constant of that specific fermion or boson to \(\phi \).
The sum over the bosons also runs over the Higgs itself, so it includes a term \(g_h h_h^2 H^2\).
Finally, we defined
%
\begin{align}
\alpha = \sum _{\text{bosons}} g_i h_i^2 + \frac{1}{2} \sum _{\text{fermions}} g_i h_i^2
\,.
\end{align}

So, we can see that the transition takes place when the coefficient of \(\phi^2\) changes sign: this happens at \(\alpha T^2 / 24 = \lambda v^2\), which means
%
\begin{align}
T_C = 2 v \sqrt{ \frac{6\lambda}{\alpha }}
= 2 \sqrt{ \frac{M_H^2}{4 M_W^2 + 2 M_Z^2 + 4 m_t^2 + M_H^2}} v \approx \SI{146}{GeV}
\,.
\end{align}

This is a tree-level perturbative result: if we account for nonperturbative effects we find \(T_C \approx \SI{160}{GeV}\).

If we were to use the perturbative approach (at one loop) we would find that the electroweak phase transition is of first order; however, using a nonperturbative one we find that it is smoother.
Perturbation theory would work for small \( M_H / M_W \), but actually this ratio is larger than 1.
Working nonperturbatively, one finds that the transition is of second order or even smoother.

\subsubsection{Phase transitions and inflation}

\todo[inline]{Graph of \(V(\phi )\) with a bump, a slow decrease region, and then an absolute minimum: where does this shape come from?}

At the GUT temperature, the average energy of the Higgs field is around the GUT energy scale.
After the temperature goes below this, the GUT symmetry is broken.

The expansion rate scales like \(H \sim T^2 / M_P^{*}\), where \(M_P^{*} \approx M_P / \sqrt{g_{*}} \approx \SI{e18}{GeV}\).
Then, if we take \(M _{\text{GUT}} \sim \SI{e16}{GeV}\) we get
%
\begin{align}
t = H^{-1} = \frac{\num{e2}}{M _{\text{GUT}}}
\,.
\end{align}

The vacuum energy density will be given by the potential at \(\phi = 0\).
In this dark-energy-dominated period, the scale factor will scale like \(a \sim e^{Ht}\).
The energy of this false vacuum is much larger than that of the true vacuum.

If the time it takes for \(\phi \) to evolve from 0 to \(v\) is of the order \(t \sim 100 H^{-1}\), then initial smooth patches whose size is comparable to \(H^{-1} \sim \SI{e-28}{cm}\) will be stretched by a factor \(e^{H t} = e^{100}\): they will become of a size \SI{3e15}{cm}, or about \SI{200}{AU}.

The entropy increase is enormous: we can estimate it as \(S = T_C^3 / H^3\).
This is because the entropy density \(s\) scales like \(T^3\) \cite[eq.\ 8.46]{bergstromCosmologyParticleAstrophysics2003}, and we retrieve the total entropy by multiplying by \(V \sim a^{3}\).

\todo[inline]{Hold on, the volume is \(a^3\), not \(H^{-3}\)! In a steady-state solution \(H\) is constant.
The reasoning works out, since it is \(t\) which scales by 100, but what he wrote is incorrect\dots} So, even if the transition does not correspond to a temperature change, the variation in temperature is \(\qty( e^{Ht})^3 \sim \num{e130}\), since \(t\) is multiplied by 100. Some early models of slow-roll inflation are based on a transition between \(SU(5)\) to the SM symmetries. The idea is to get a GUT potential which is very close to flat near the origin, so that the quadratic term \(m^2\phi^2\) vanishes. This way, we have no SSB in the tree-level GUT Higgs potential (since the self-interaction terms are only relevant at higher powers of \(\phi \)). The only corrections are \emph{loop} corrections, possibly with loops constituted by \(SU(5) \) gauge bosons. \subsubsection{Coleman-Weinberg mechanism} We take % \begin{align} V(\phi ) = \const \times \phi^{4} \qty[\log \qty( \frac{\phi^2}{v^2}) - \frac{1}{2}] + \const \times v^4 \,, \end{align} % where \(v = \expval{\phi } \sim \SI{e16}{GeV}\). If \(\phi \ll v\), the potential looks like \(V \sim v^{4} - \lambda \phi^{4} / 4\), where \(\lambda = \const \log (\phi^2 / v^2) - 1/2 \sim \num{e-1} \divisionsymbol \num{e-2}\). If \(\phi \) is close to 0, then, the potential is approximated up to third order as % \begin{align} V(\phi ) \sim \alpha^2 _{\text{GUT}} \frac{v^{4}}{2} \,, \end{align} % where \(\alpha^2 _{\text{GUT}} \sim \num{e-3}\). \todo[inline]{Should \(\alpha^2 \) not be related to \(\lambda \)? Also, I do not understand the characterization of the situations: how is \(\phi \ll v\) different from \(\phi \sim 0\)? } In this model, the expansion rate is given by % \begin{align} H^2 \approx \frac{4 \pi }{3} \frac{\const v^{4}}{M_P^2} \approx \qty( \SI{e11}{GeV})^2 \,. \end{align} With this model we get a phase transition which is first order at \(T_C \sim \SI{e16}{GeV}\). There are, however, issues with this model. \todo[inline]{What can we say about this model?} \section{Cosmic matter-antimatter asymmetry} The matter-antimatter asymmetry problem is two-fold: \begin{enumerate} \item we want to find a way to generate a nonzero baryon number \(\Delta B = n_B - n_{\overline{B}}\) starting from a symmetric \(\Delta B = 0\) universe; \item we want to account for the smallness of the baryon-to-photon ratio: % \begin{align} \eta_{B} = \frac{n_{B, 0}}{n_{\gamma , 0}} \approx \num{6e-10} \,. \end{align} \end{enumerate} We can express the latter using % \begin{align} \Delta B = \frac{n_B - n_{\overline{B}}}{s} \approx \num{9e-11} \,, \end{align} % the baryon-to-entropy ratio. If we had \(\Delta B = 0\), the fraction of the matter density in the universe which is represented by baryons would be very much smaller than \SI{5}{\percent}. So far, there is no experimental evidence for large antimatter regions. If it existed, it would have to be separated from regular matter by extremely large distances, since otherwise we would detect a gamma-ray background from the common annihilation. \subsection{Sakharov's conditions} They are necessary conditions for the creation of baryon asymmetry starting from a baryon-symmetric situation. They are: \begin{enumerate} \item Baryon number non-conservation; \item \(C\) and \(CP\) violation; \item out-of-equilibrium processes. \end{enumerate} The first condition arises because if all processes conserved baryon number then there could be no formation of asymmetry. 
The second condition arises because if there was charge-conjugation symmetry then processes generating matter and antimatter would have the exact same properties, and would therefore statistically occur in the same amounts. Out-of-equilibrium processes are needed since otherwise, because of \(CPT\) symmetry, \(\Delta B\) increasing and decreasing processes would occur in the same amounts. \subsubsection{Baryogenesis in a GUT} Let us suppose that there is a GUT scalar particle \(X\) which has two decay channels, either \(qq\) or \(\overline{q} \overline{\ell}\), where \(q\) is a quark and \(\ell\) is a lepton. These are both \(B\)-violating, the first introduces \(B = +2/3\) and the second introduces \(B = - 1/3\). Let us suppose that the branching ratios are \(\tau \) and \(1 - \tau \) respectively. The antiparticle \(\overline{X}\) can decay into \(\overline{q} \overline{q}\) or \(q \ell\), which have \(B = - 2/3\) and \(B = + 1/3\) and branching ratios \(\overline{\tau}\) and \(1 - \overline{\tau}\) respectively. If there is \(C\) violation, we can have \(\tau \neq \overline{\tau}\). So, if we have a \(X \overline{X}\) pair, and both of the particles decay, then the changes of baryon number are \( \tau - 1/3\) and \(- \overline{\tau} + 1/3\) respectively, so in total we find \(\Delta B_{X \overline{X}} = \tau - \overline{\tau}\). This yields baryon number violation by violating the first two Sakharov conditions: where does the third one come in? Let us again consider the boson \(X\), a ``leptoquark'' gauge boson which is coupled to all fermions. Its coupling constant will be \(\alpha = \alpha _{\text{GUT}}\). Its SM charges can either be \((3, 2, +4/3)\) or \((3,2, + 1/3)\). The decay rate of \(X\) into fermions will be given by \(\Gamma^{X}_{D} \approx \alpha M_X\). The decay of \(X\) can occur as long as \(\Gamma^{X}_{D} \geq H \sim T^2 / M_P^{*}\), so that on average a particle has good odds to have actually managed to decay since the beginning of the universe. This condition can be written as \(T^2< \alpha M_X M^{*}_{P}\), and if we consider the time at which \(T = M_X\) it becomes \(M_X < \alpha M_P^{*}\). The condition of this \(X\) being able to decay means that it can reach equilibrium: so, at \(T = M_X\) equilibrium can be achieved as long as \(X\) is less massive than \(\alpha \times \SI{e18}{GeV}\). If this condition is fulfilled, then as \(T\) drops below \(M_X\) the particle \(X\) is still in equilibrium and the \(B\) violating processes are still happening. So, as this happens, all the \(B\) violation is erased. \todo[inline]{Wait, how?} If, on the other hand, \(M_X \geq \alpha \SI{e18}{GeV}\), the lifetime of \(X\) is longer than the age of the universe when the temperature reaches its mass. Therefore, it only starts decaying \emph{later}, when \(T < M_X\), and these decays are occurring out of equilibrium. The ratio \(n_X / n_\gamma \) is approximately 1, but the density of \(X\) is far from the equilibrium one, which would satisfy % \begin{align} n_X^{\text{eq}} \sim (M_X T)^{3/2} e^{-M_X / T} \,, \end{align} % and since \(M_X > T\) this would be suppressed, and we'd have \(n_X^{\text{eq}} \ll n_\gamma \). Intuitively, if we \emph{did} have equilibrium almost all the \(X\)s would have decayed earlier, many more of them are left than this. So, the net \(\Delta B \neq 0\) cannot be erased. 
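To get a feel for the numbers (a rough, purely illustrative estimate, taking \(\alpha = \alpha _{\text{GUT}} \sim \num{e-2}\)): for \(M_X \sim \SI{e15}{GeV}\) we get \(\Gamma^{X}_{D} \approx \alpha M_X \sim \SI{e13}{GeV}\), while \(H \sim M_X^2 / M_P^{*} \sim \SI{e12}{GeV}\) at \(T = M_X\); so \(\Gamma^{X}_{D} > H\), such an \(X\) equilibrates, and its asymmetry is washed out.
For \(M_X \sim \SI{e17}{GeV}\), instead, \(\Gamma^{X}_{D} \sim \SI{e15}{GeV}\) while \(H \sim \SI{e16}{GeV}\), so the decays necessarily happen out of equilibrium.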
If we define \(\Delta B = \tau - \overline{\tau}\), then we will be left with a number density of baryons % \begin{align} n_B = \Delta B n_X \, \end{align} % at the time when \(T = M_X\). Since \(n_X \sim n_\gamma \), we also have \(n_B \approx \Delta B n_\gamma \). At this point, \(\Delta B\) is frozen \todo[inline]{Is it? Many \(X\)s still have to decay, and when they do they violate \(B\)!} however \(n_\gamma \) is not conserved in later stages of the evolution --- for example, when electron-positron annihilation happens ---, so we cannot rely on the ratio \(n_B / n_\gamma \) to be constant. Instead, we can use the specific entropy ratio \(n_B / s\), where \cite[eqs.\ 8.46 and 8.49]{bergstromCosmologyParticleAstrophysics2003} % \begin{align} s = \frac{2 \pi^2}{45} g_s (T) T^3 = \frac{\pi^{4}}{45 \zeta (3)} g_s (T) n_\gamma \,, \end{align} % where, at \(T \sim M_X\), we have \(g_s (T) \sim 100\), so this means \(s \sim 100 n_\gamma \). Note that this \(g_s(T)\) is not the same as the \(g(T)\) we get when computing the density, although they are close in value \cite[fig.\ 1]{husdalEffectiveDegreesFreedom2016}. For an in-depth review of how they are defined, refer to Husdal \cite[sec.\ 4.5]{husdalEffectiveDegreesFreedom2016}; the gist is that thermodynamic quantities have different dependencies on the phase space density and the temperature, so when we weigh the various particles we must account for these differences. The \(g_s\) describes the effective degrees of freedom associated with the computation of the entropy, while \(g\) refers to the degrees of freedom associated with the energy density. Therefore, % \begin{align} \frac{n_B}{s} \sim \frac{\Delta B}{100} \,. \end{align} From Big Bang Nucleosynthesis we can place a bound on \(\eta = n_B / n_\gamma \), namely \(\num{5e-10} < \eta < \num{7e-10}\). The same reasoning can be applied today: now we have \(g_s(T) \approx 4\), so we find \(s \approx 7 n_\gamma \). Then, using the same bound from BBN we get % \begin{align} \num{7e-11} < \frac{n_B}{s} < \num{e-10} \,. \end{align} \todo[inline]{And this is incompatible with the earlier one\dots which should we use?} Now, since \(n_B / s \approx \num{e-2} \Delta B\) we get that we can have \(\tau - \overline{\tau} \sim \num{e-8}\) to explain observations. In order to have \(\tau \neq \overline{\tau}\) we need \(B\), \(C\) and \(CP\) violation. However, even if these symmetries are violated, at tree level the decay rates for \(\Gamma (X \to qq)\) and \(\Gamma (\overline{X} \to \overline{q} \overline{q})\) are equal: all the kinematics is the same, and we will get % \begin{align} \Gamma (X \to qq) \propto \abs{g}^2 M_X \qquad \text{and} \qquad \Gamma (\overline{X} \to \overline{q} \overline{q}) \propto \abs{g^{*}}^2 M_X \,, \end{align} % so they will be equal! This can change if we consider loops. \begin{claim} Say \(I\) is the complex-valued loop integral, and we assume that a particle \(Y\) propagates between the quarks with a coupling \(g'\). Then, the result reads: % \begin{align} \tau - \overline{\tau} = \frac{2 \Im I \Im \qty(g_2 g_1^{*} g_1' g^{\prime *}_2)}{\abs{g_1 }^2 + \abs{g_2 }^2} \approx \frac{g^2}{4 \pi } f \qty( \frac{m_X}{m_Y}) \,, \end{align} % where \(f\) is a function such that \(f(1) = 0\), and which is of order 1 for large arguments. \end{claim} Since we want \(\tau - \overline{\tau} \lesssim g^2 / 4 \pi \), we must have \(g^2 / 4 \pi \gtrsim \num{e-8}\). 
This allows us to specify our out-of-equilibrium condition better: it was \(M_X \gtrsim \alpha M_P^{*}\), but with the new bound on \(\alpha \) we can write it as % \begin{align} M_X > \SI{e10}{GeV} \,. \end{align} So, \(X\) must be a superheavy particle. In order to implement this baryogenesis process the reheating temperature must be quite high, \(T _{\text{RH}} > \SI{e10}{GeV}\). Is \(\Delta B\) preserved after the breaking of \(G _{\text{GUT}}\)? If, under the SM symmetries, \(B\) is conserved we should have no issue. This, as we shall see, might not be the case. \todo[inline]{Hold up: if the decay channel of the superheavy \(X\) are \(B\)-breaking, what does the temperature have to do with it? If there are \(X\)s leftover they still need to decay\dots} \end{document}
{ "alphanum_fraction": 0.689468997, "avg_line_length": 51.3314285714, "ext": "tex", "hexsha": "636ffd2b4ba9dd5195c430cf383e81bd35f491c8", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jacopok/notes", "max_forks_repo_path": "ap_second_semester/astroparticle_physics/may27.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jacopok/notes", "max_issues_repo_path": "ap_second_semester/astroparticle_physics/may27.tex", "max_line_length": 497, "max_stars_count": 6, "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jacopok/notes", "max_stars_repo_path": "ap_second_semester/astroparticle_physics/may27.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "num_tokens": 5592, "size": 17966 }
\documentclass{beamer} \input{../util/beamerpreamble.tex} \usepackage{tikz} \usetikzlibrary{arrows,shapes} \title[GB13604]{GB13604 - Maths for Computer Science} \subtitle[]{Lecture 6 -- Counting Part I} \author[Claus Aranha]{Claus Aranha\\{\footnotesize [email protected]}} \institute[COINS]{College of Information Science} \date[2018-11-14]{2018-11-14\\{\tiny Last updated \today}} \tikzstyle{vertex}=[circle,fill=black!25,minimum size=10pt,inner sep=0pt] \tikzstyle{blue vertex}=[circle,fill=blue!100,minimum size=10pt,inner sep=0pt] \tikzstyle{red vertex}=[circle,fill=red!100,minimum size=10pt,inner sep=0pt] \tikzstyle{yellow vertex}=[circle,fill=yellow!100,minimum size=10pt,inner sep=0pt] \tikzstyle{edge} = [draw,thick,-] \tikzstyle{pedge} = [draw,thick,.] \tikzstyle{red edge} = [draw, thick,-,red!50] \tikzstyle{black edge} = [draw, line width=2pt,-,black!20] \tikzstyle{weight} = [font=\smaller] \begin{document} \begin{frame} \maketitle \begin{center} {\smaller This course is based on Mathematics for Computer Science, Spring 2015, by Albert Meyer and Adam Chlipala, Massachusetts Institute of Technology OpenCourseWare.} \includegraphics[width=0.2\textwidth]{../img/by-nc-sa} \end{center} \end{frame} \section{Introduction} \begin{frame} \frametitle{Week 6 and 7 summary} {\larger {\bf Counting} \bigskip \begin{itemize} \item \structure{Sums and Products} \item \structure{Asymptotics} \item \alert{Counting with Bijections} \item \alert{Repetitions and Binomial Theorem} \item \alert{Pigeonhole Principle} \end{itemize} } \end{frame} \section{Sums and Products} \begin{frame} \frametitle{Sums for Children} {\larger Next two lectures: Talking about \structure{sums} and \structure{counting}. \bigskip \begin{block}{Middle School Classwork:} \begin{equation*} 89 + 102 + 115 + 128 + \ldots + 440 + 453 + 466 \end{equation*} \end{block} \bigskip It is said that \alert{Gauss} found a quick solution for this problem at \alert{9 years old}: \begin{itemize} \item \structure{30 numbers, each 13 greater than the previous one.} \end{itemize} } \end{frame} \begin{frame} \frametitle{Sums for Children} {\larger \begin{itemize} \item $89 + (89+13) + \ldots + (89+28\times13) +(89+29\times13)$ \item $F + (F+d) + \ldots + (L-d) + L = A$ \item<2-> $L + (L-d) + \ldots + (F+d) + F = A$ \item<3-> \structure{$(F+L) + (F+L) + \ldots + (F+L) + (F+L) = 2A$} \end{itemize} \bigskip \begin{onlyenv}<3-> \begin{equation} A = \frac{(F+L)}{2} \times (\text{\# of terms}) \end{equation} \end{onlyenv} \begin{onlyenv}<4-> \begin{equation} A = \frac{(1+n)}{2} \times (n) \end{equation} \end{onlyenv} } \end{frame} \begin{frame} \frametitle{Why Counting?} {\larger \begin{itemize} \item \structure{Counting} techniques help understand the structure of numbers: \item We will study three different sums: \begin{itemize} \item Arithmetic Sums \item Geometric Sums \item Harmonic Sums \end{itemize} \end{itemize} \bigskip \begin{block}{Motivation for this unit} Counting techniques can be used to estimate \structure{upper bounds} and \structure{lower bounds} of computer complexity in algorithms. 
\end{block} } \end{frame} \subsection{Geometric Sums} \begin{frame} \frametitle{Arithmetic Sums and Geometric Sums} {\larger \begin{itemize} \item \structure{Arithmetic Sums}: Add a fixed value to each element\\ $A = (a+0d) + (a+d) + (a+2d) + \ldots + (a+nd)$ \bigskip \item \structure{Geometric Sums}: Multiply a fixed value to each element\\ $G = (kx^0) + (kx^1) + (kx^2) + \ldots + (kx^n)$ \end{itemize} } \end{frame} \begin{frame} \frametitle{Closed Form for the Geometric Sum} {\larger \begin{itemize} \item $A = 1 + 2 + 3 + \ldots + n = \frac{n(n+1)}{2}$, $G = 1 + x + x^2 + \ldots + x^n = ?$ \bigskip \item<2-> $G = 1 + x + x^2 + \ldots + x^n$ \item<2-> $xG = x + x^2 + x^3 + \ldots + x^{n+1}$ \bigskip \item<3-> $G = 1 + x + x^2 + \ldots + x^n$ \item<3-> $-xG = - x - x^2 - \ldots - x^{n+1}$ \item<3-> $G-xG = 1 $\hspace{2cm}$-x^{n+1}$ \item<3-> $G(1-x) = 1 - x^{n+1}$, $G = \frac{1-x^{n+1}}{1-x}$ \end{itemize} } \end{frame} \begin{frame} \frametitle{Closed Form for the Geometric Sum} {\larger \begin{itemize} \item The {\bf Proof By Induction} can show if a formula is correct, but does not show \alert{where the formula comes from} \bigskip \item The {\bf Perturbation Method} can be used to find \structure{closed forms} for sums. \end{itemize} } \end{frame} \begin{frame} \frametitle{Infinite Geometric Sums} {\larger \begin{itemize} \item \structure{Geometric Sum}: \begin{equation} G_n = \frac{1-x^{n+1}}{1-x} \end{equation} \item \structure{Geometric {\bf Series}}: \begin{equation} \lim_{n\rightarrow\inf}G_n = \frac{1-\lim_{n\rightarrow\inf}x^{n+1}}{1-x} = \frac{1}{1-x} \end{equation} \hfill(Provided that $|x| < 1$) \end{itemize} } \end{frame} \begin{frame} \frametitle{Application: Calculating Loans} {\larger {\bf Problem:} I pay you \structure{100\$ in 1 year} if you \alert{pay me X\$ now} \bigskip How do we calculate a fair amount for \alert{X}? \begin{itemize} \item <2-> My bank will pay me 3\% \structure{interest}\\ \hfill{\emph{bankrate} b ::= 1.03} \item <3-> If I deposit \alert{X\$} now, I will have \alert{$b\times X$\$} in 1 year. \item <3-> So I will not lose money if $b\times X \geq 100$\\ \hfill $X \geq 100/1.03 =$ \structure{97.09} \end{itemize} } \end{frame} \begin{frame} \frametitle{The future worth of money} {\larger \begin{itemize} \item 1\$ next year is worth \$0.9709 now. \bigskip \item \structure{r\$} last year is worth 1 today, if $r ::= 1/b$ \bigskip \item $n$ in two years is worth \item $nr$ in one year, and it is worth \item $nr^2$ today. \end{itemize} \vfill \begin{center} \structure{$n$} paid in $k$ years is \alert{worth $nr^k$ today}, where \structure{$r ::= 1/\text{bankrate}$} \end{center} } \end{frame} \begin{frame} \frametitle{Annuity} {\larger {\bf Insurance Company}: I will pay you 100\$ for 10 years, if you pay me \$Y now. \structure{How much should Y be?} \bigskip \begin{itemize} \item The Insurance company needs: \item $100r + 100r^2 + 100r^3 + \ldots + 100r^{10}$ \item<2-> Insurance = $100r(1+r+r^2+\ldots+r^9)$ \item<2-> $I = 100r \times \frac{1-r^{10}}{1-r} = 853.02$ \end{itemize} } \end{frame} \subsection{Book Stacking} \begin{frame} \frametitle{Problem: Book Stacking} \begin{center} \includegraphics[width=0.8\textwidth]{../img/bookstack1} \end{center} \end{frame} \begin{frame} \frametitle{Book Stacking Problem} {\large \begin{itemize} \item All books have size 1. \item For 1 book: \structure{Center of Mass} is 0.5\\ \hfill 1-book overhang is 0.5. \bigskip \item What about $n$ books? 
\end{itemize}

\begin{center}
\includegraphics[width=0.6\textwidth]{../img/bookstack3}
\end{center}

{\larger
$\Delta$-overhang ::= distance between the CoM of $n$ books and the CoM of the $(n+1)$-th book.
}
\end{frame}

\begin{frame}
\frametitle{$\Delta$-overhang}
{\larger
From physics, we know that:
\begin{equation}
\Delta = \frac{1}{2(n+1)}
\end{equation}

\begin{itemize}
\item $B_n ::=$ overhang of $n$ books
\item $B_1 = 1/2$
\item $B_{n+1} = B_n + \frac{1}{2(n+1)}$
\bigskip
\item $B_n = 1/2(1+1/2+1/3+\ldots+1/n)$\\
\hfill(This is the harmonic sum!)
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{How big can the overhang get?}
{\larger
It turns out that by increasing the number $n$ of books, we can make $1/2H_n$ as big as necessary.

\vfill
So there is no upper limit for the size of the overhang, if you position the books \alert{very carefully} and know integrals.
}
\end{frame}

%% TODO: Include slides about Integral Theorem (for upper and lower limits)
%% TODO: Include slides about calculating bounds for Factorials

\section{Asymptotics}
\begin{frame}
\begin{center}
{\huge Asymptotics }
\end{center}
\end{frame}

\begin{frame}
\frametitle{Study of Asymptotics}
{\larger
\begin{itemize}
\item How fast do expressions grow?
\bigskip
\item What is the maximum/minimum size of an expression?
\bigskip
\item How can we compare two expressions as they get more complex?
\end{itemize}

\bigskip
We will look at \structure{four notations} that describe the relationship between the {\bf growth of functions}.
}
\end{frame}

\begin{frame}
\frametitle{Asymptotic Equivalence}
{\larger
Def: \structure{$f(n) \sim g(n)$}: \hfill(f(n) is asymptotically equal to g(n))
\bigskip
\begin{equation}
\lim_{n\to\infty}\frac{f(n)}{g(n)} = 1
\end{equation}

{\bf Example:}
\begin{itemize}
\item $n^2 \sim n^2+n$ because...
\item $\lim_{n\to\infty}\frac{n^2+n}{n^2} = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right) = 1$
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{Asymptotic Equivalence}
{\larger
{\bf Lemma:} $\sim$ is symmetric \hfill($f \sim g \implies g\sim f$)

\begin{itemize}
\item Let $f \sim g$ be true, is $g \sim f$ true too?
\item $\lim \frac{g}{f} = \lim \frac{1}{(f/g)}$
\item $\lim \frac{1}{(f/g)} = \frac{1}{\lim(f/g)} = \frac{1}{1} = 1$
\item $f \sim g \implies g\sim f$\hfill$\blacksquare$
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{Asymptotic Equivalence}
{\larger
{\bf Transitivity:} Suppose $f \sim g$ and $g \sim h$. Prove $f \sim h$
\bigskip
\begin{equation}
1 = \lim\frac{f}{g} = \lim\frac{f/h}{g/h} = \frac{\lim(f/h)}{\lim(g/h)} = \lim\frac{f}{h}
\end{equation}
since $\lim(g/h) = 1$.

\vfill
{\bf Corollary:} $\sim$ is an \structure{equivalence relation}
\begin{itemize}
\item \alert{important} $\sim$ is a relation \alert{on functions}
\item $f(n) \sim g(n)$ does not care about \structure{particular values} of f(n) or g(n).
($f \sim g$) \end{itemize} } \end{frame} \begin{frame} \frametitle{Little Oh: $o(\cdot)$ -- Asymptotically Smaller} {\larger {\bf Definition:} $f(n) = o(g(n))\iff \lim_{n\to\infty}\frac{f(n)}{g(n)} = 0$ \bigskip {\bf Example:} \begin{itemize} \item $n^2 = o(n^3)$ \item $\lim_{n\to\infty}\frac{n^2}{n^3} = \lim_{n\to\infty}\frac{1}{n} = 0$ \end{itemize} \bigskip {\bf Lemma:} $o(\cdot)$ defines a \structure{strict partial order} } \end{frame} \begin{frame} \frametitle{Big Oh: $O(\cdot)$ -- Asymptotic Order of Growth} {\larger {\bf Definition:} $f(n) = O(g(n))$ \begin{equation} \limsup_{n\to\infty}\frac{f(n)}{g(n)} < \infty \end{equation} \vfill \begin{itemize} \item The limit of f(n)/g(n) is \structure{finite}. \item Could be 0, could be 1, could be something else. \item Why \structure{limsup} not \structure{lim}? Ignore this for now. \bigskip \item {\bf Example}: $3n^2 = O(n^2)$ because $\lim\frac{3n^2}{n^2} = \lim3 = 3$ \end{itemize} } \end{frame} \begin{frame} \frametitle{Why do we like $O(\cdot)$ so much?} {\larger \begin{itemize} \item What O() means is that \structure{constant factors} don't matter. \item Only \structure{rate of growth} matters. \bigskip \item When we talk about {\bf execution time}, if the hardware changes, only a \structure{constant factor} changes. \bigskip \item Slow algorithms will still be slow (for bigger data!) even if the hardware changes. \end{itemize} } \end{frame} \begin{frame} \frametitle{Theta: $\Theta(\cdot)$ -- Same Order of Growth} {\larger {\bf Definition}: \begin{equation} f = \Theta(g) \iff f = O(g) \land g = O(f) \end{equation} \bigskip {\bf Lemma}: $\Theta$ is an equivalence relation. } \end{frame} \begin{frame} \frametitle{Asymptotics: Intuitive Summary} {\larger \begin{itemize} \item $f\sim g$ \hfill f and g nearly equal; \bigskip \item $f = o(g)$ \hfill f much less than g; \bigskip \item $f = O(g)$ \hfill f is about $\leq$ g; \bigskip \item $f = \Theta(g)$ \hfill f is about equal to g; \end{itemize} } \end{frame} \subsection{Asymptotic Properties} \begin{frame} \frametitle{Asymptotic Properties} {\larger \begin{itemize} \item If $f \sim g$ or $f = o(g)$ then $f = O(g)$ \bigskip \item If $f = o(g)$ then $g \neq O(f)$\\ \hfill ($\lim f/g = 0 \implies \lim g/f = \infty$) \bigskip \end{itemize} } \end{frame} \begin{frame} \frametitle{Big Oh $O(\cdot)$ and limsup} {\larger {\bf Alternate Definition:} $f(n) = O(g(n))$ \begin{equation} \exists c,n_0, \forall n \geq n_0, f(n) \leq c\cdot g(n) \end{equation} \begin{center} \includegraphics[width=0.6\textwidth]{../img/bigOh} \end{center} } \end{frame} \begin{frame} \frametitle{Big Oh $O(\cdot)$ and limsup} {\larger \begin{itemize} \item Why is \structure{limsup} necessary in the first definition? \item Suppose $f \leq 2g$ then $f = O(g)$ but $f/g$ has no limit. \bigskip \item {\bf Example:} $f(n) = (1+\sin^2(\frac{n\pi}{2}))\times g(n)$ \bigskip \item The \structure{limit} of $f/g$ in this expression alternates between 1 and 2. \item On the other hand, the \structure{limsup} of $f/g$ is 2. 
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{A few more facts about asymptotics}
{\larger
\begin{itemize}
\item {\bf Lemma:} $x^a = o(x^b)$ if a < b\\
\hfill because $\lim\frac{x^a}{x^b} = \lim\frac{1}{x^{b-a}}$ and $b-a > 0$
\bigskip
\item {\bf Lemma:} $\ln(x) = o(x^\epsilon)$ for $\epsilon > 0$\\
\hfill (logarithms grow slower than roots)
\end{itemize}

{\bf Proof:}
\begin{itemize}
\item $1/y \leq y$ for $y \geq 1$, so $\int_1^z \frac{1}{y}dy \leq \int_1^z ydy$
\item $\ln(z) \leq \frac{z^2}{2}$ for $z \geq 1$, so let $z = \sqrt{x^\delta}$ (for some $\delta > 0$)
\item $\frac{\delta\ln(x)}{2} \leq \frac{x^\delta}{2}$, but $x^\delta = o(x^\epsilon)$ for $\delta < \epsilon$
\item So $\delta\ln(x) = o(x^\epsilon)$, and hence $\ln(x) = o(x^\epsilon)$ since $\delta$ is a constant\hfill $\blacksquare$
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{Asymptotic Blunders -- Be careful!}
{\larger
\begin{itemize}
\item ``$\cdot = O(\cdot)$'' defines a \structure{binary relation}.\\
\hfill \alert{Do not write O(x) = x!!}
\begin{itemize}
\item If $x = O(x)$ and $O(x) = x$...
\item But $2x = O(x)$ and $O(x) = x$... so $2x = x$?????
\end{itemize}
\bigskip
\item Big Oh is \structure{not a lower bound}.\\
\hfill \alert{Do not write: ``f is at least $O(n^2)$''}
\begin{itemize}
\item If you want to say that $n^2$ is a \structure{lower bound} of f...
\item $n^2 = O(f)$
\end{itemize}
\end{itemize}
}
\end{frame}

\begin{frame}
\frametitle{Asymptotic Blunders -- Be careful!}
{\larger
\begin{itemize}
\item {\bf False Proof:} $\sum^n_{i=1}i = O(n)$\\
\hfill(We know that $\sum^n_{i=1}i = n(n+1)/2$)
\bigskip
\item Any constant is O(1): $0 = O(1), 1 = O(1), 2 = O(1) \ldots$
\item So, $i = O(1)$
\item So, $\sum^n_{i=1}i = O(1) + O(1) + O(1) + O(1) \ldots$
\item So, $\sum^n_{i=1}i = nO(1) = O(n)$\hfill (???)
\end{itemize}

\vfill
\alert{$O(\cdot)$ is not a quantity! Do not do arithmetic with it!}
}
\end{frame}

\section{Conclusion}
\begin{frame}
\frametitle{End of Class -- Class Summary}
{\larger
\begin{itemize}
\item Sums: Arithmetic, Geometric, Harmonic, and Closed Forms
\bigskip
\item Asymptotic Notation: $\sim$, o(), O(), $\Theta$()
\end{itemize}
}
\end{frame}

\end{document}
{ "alphanum_fraction": 0.5928759264, "avg_line_length": 23.3687150838, "ext": "tex", "hexsha": "8fd121c20e4308b26efea438ce8ffae0429f23a8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "caranha/MathCS", "max_forks_repo_path": "week06/MathForCSW6.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "caranha/MathCS", "max_issues_repo_path": "week06/MathForCSW6.tex", "max_line_length": 100, "max_stars_count": 2, "max_stars_repo_head_hexsha": "f3ce6705d09c55541f629cd542191bfd3e9adf34", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "caranha/MathCS", "max_stars_repo_path": "week06/MathForCSW6.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-16T02:14:56.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-13T18:59:55.000Z", "num_tokens": 6041, "size": 16732 }
\section{Idealised numerical tests} Three numerical tests are performed to validate the well-balanced stochastic Galerkin model. The first test simulates a lake-at-rest over idealised, uncertain topography to verify that the stochastic Galerkin model preserves well-balancing numerically. The second test simulates flow over a hump with an uncertain elevation. This test is designed to challenge the stochastic Galerkin model at representing discontinuous, non-Gaussian probability distributions by generating a steady-state solution that may be subcritical or transcritical depending on the hump elevation. The third test verifies the robustness of the stochastic Galerkin model for a steady flow over a highly irregular and uncertain bed that is more representative of real-world river hydraulics. \rev{The configuration of these three tests is summarised in table~\ref{tab:results}.} \begin{table} \centering \begin{tabular}{llll} \toprule & 1: Lake-at-rest & 2: Steady-state critical flow & 3: Flow over an irregular bed \\ % Test case & Mesh & $\Delta x$ & $\Delta t$ & End time & Topography & Models compared & Key findings \\ \midrule Purpose & Verify well-balancedness & Nonlinear, non-Gaussian flow & Realistic topography \\ Mesh & $[ \SI{-50}{\meter}, \SI{50}{\meter} ]$ & $[ \SI{-50}{\meter}, \SI{50}{\meter} ]$ & $[ \SI{0}{\meter}, \SI{1500}{\meter} ]$ \\ $\Delta x$ & \SI{1}{\meter} & \SI{1}{\meter} & \SI{7.5}{\meter} \\ $\Delta t$ & \SI{0.15}{\second} & \SI{0.15}{\second} & \SI{0.5}{\second} \\ End time & \SI{100}{\second} & \SI{500}{\second} & \SI{100000}{\second} \\ Topography & Equation~\eqref{eqn:lakeAtRest:z} & Equation~\eqref{eqn:bed} & \citet{goutal-maurel1997} \\ % Models & Centred difference; Well-balanced & Well-balanced; Monte Carlo & Well-balanced only \\ \bottomrule \end{tabular} \caption{Summary of the three idealised numerical test cases.} \label{tab:results} \end{table} \subsection{Specification of Tests 1 and 2} For the first two tests, the 1D domain is [\SI{-50}{\meter}, \SI{50}{\meter}], tessellated by $M = 100$ elements with no overlaps or gaps such that the mesh spacing is $\Delta x = \SI{1}{\meter}$. The timestep is $\Delta t = \SI{0.15}{\second}$ resulting in a maximum Courant number of about $0.8$. By choosing a fixed timestep, simulations of a given test complete in the same number of timesteps irrespective of the model configuration, and error accumulation due to timestepping errors will be the same across all models. Both tests include a topographic hump centred at $x = \SI{0}{\meter}$ with a region of Gaussian uncertainty. Following a similar approach to \citet{ge2008}, there are two representations for the same uncertain topography. The first representation enables smooth topography profiles to be randomly generated in Monte Carlo iterations. The topography $z$ is defined as \begin{align} z(x, \hump) = \hump \sech^2 \left( \frac{\pi x}{\lambda} \right) \label{eqn:bed} \end{align} where the hump amplitude $\hump$ is a random variable with mean $\humpmean = \SI{0.6}{\meter}$ and standard deviation $\sigma_\hump = \SI{0.3}{\meter}$, and the half-width is $\lambda = \SI{10}{\meter}$. This topography profile is seen in figure~\ref{fig:criticalSteadyState-flow}. The second representation is used by the stochastic Galerkin model, with topography represented by expansion coefficients $z_{i,0}, \ldots, z_{i,P}$. 
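Before turning to the expansion coefficients, the first representation can be made concrete with the following minimal sketch of a single Monte Carlo draw of the uncertain topography (illustrative only; the code and names below are not part of the model implementation described in this paper):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def sample_topography(x, a_mean=0.6, a_std=0.3, lam=10.0):
    # draw one random hump amplitude a ~ N(0.6 m, (0.3 m)^2) and
    # evaluate z(x, a) = a * sech^2(pi * x / lambda)
    a = rng.normal(a_mean, a_std)
    return a / np.cosh(np.pi * x / lam) ** 2

x_centres = np.linspace(-49.5, 49.5, 100)  # element centres, dx = 1 m
z_sample = sample_topography(x_centres)
\end{verbatim}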
To be able to calculate the topography expansion coefficients, equation~\eqref{eqn:bed} must be expressed in terms of mean topography $\zmean(x)$ and topographic variance $\sigma_z^2(x)$ without involving the random variable $\hump$. The mean topography $\zmean(x)$ is simply \begin{align} \zmean(x) = z(x, \humpmean) \label{eqn:bed-mean} \end{align} The topographic variance is \begin{align} \sigma^2_z(x) = \E[z^2(x, \hump)] - \E^2[z(x, \hump)] \label{eqn:bed-variance} \end{align} Equation~\eqref{eqn:bed-variance} can be rewritten using Taylor series expansions of the two terms $\E\left[z^2(x, \hump)\right]$ and $\E^2\left[z(x, \hump)\right]$. To illustrate the approach, a Taylor series expansion of $\E\left[ z(x, \hump) \right]$ about $\humpmean$ is \begin{align} \E\left[ z(x, \hump) \right] &= \E\left[ z(x, \humpmean + (\hump - \humpmean)) \right] \nonumber \\ &= \E\left[ z + \frac{\partial z}{\partial \hump} (\hump - \humpmean) + \frac{1}{2} \frac{\partial^2 z}{\partial \hump^2} \left(\hump - \humpmean\right)^2 + \mathcal{O}(\hump^3) \right] \label{eqn:mean-taylor} % \intertext{where $z$ is shorthand for $z(x, \humpmean)$ and $\mathcal{O}(\hump^3)$ is the error term involving high-order derivatives $\partial^m z/\partial \hump^m$ with $m \geq 3$. Since $\E\left[ \hump - \humpmean \right] = 0$ and $\E\left[ \left(\hump-\humpmean\right)^2\right] = \sigma_\hump^2$ then equation~\eqref{eqn:mean-taylor} simplifies to} % \E\left[ z(x, \hump) \right] &= z + \frac{1}{2}\frac{\partial^2 z}{\partial \hump^2} \sigma_\hump^2 + \mathcal{O}(\hump^3) \end{align} Applying this approach to equation~\eqref{eqn:bed-variance} gives: \begin{align} \sigma_z^2(x) &= z^2 + \left[ \left(\frac{\partial z}{\partial \hump}\right)^2 + z \frac{\partial^2 z}{\partial \hump^2} \right] \sigma^2_\hump - \left[ z + \frac{1}{2} \frac{\partial^2 z}{\partial \hump^2} \sigma_\hump^2 \right]^2 + \mathcal{O}(\hump^3) \label{eqn:z-taylor} % \intertext{For the topographic profile given by equation~\eqref{eqn:bed}, it holds that $\partial^m z/\partial \hump^m = 0$ where $m \geq 2$, so the Taylor series approximation introduces no spurious oscillations in stochastic space. The topographic variance in equation~\eqref{eqn:z-taylor} then simplifies to} % \sigma_z^2(x) &= \left( \frac{\partial z(x, \humpmean)}{\partial \hump} \sigma_\hump \right)^2 \label{eqn:z-variance-continuous} \end{align} Equipped with analytic expressions for the mean topography $\zmean(x)$ (equation~\ref{eqn:bed-mean}) and topographic variance $\sigma_z^2(x)$ (equation~\ref{eqn:z-variance-continuous}), now the topography expansion coefficients $z_{i,0}, \ldots, z_{i,P}$ can be calculated. Since the topographic bump has a Gaussian probability distribution with $\mu_1[z(x)] = \zmean(x)$, $\mu_2[z(x)] = \sigma_z^2(x)$ and high-order moments $\mu_m[z(x)] = 0$ for $m \geq 3$ then, using equations~\eqref{eqn:moment}, \eqref{eqn:mean} and \eqref{eqn:variance}, the topography expansion coefficients are \begin{align} z_{i,p} = \begin{cases} \zmean(x_i) & \text{if $p=0$} \\ \sigma_z(x_i) & \text{if $p=1$} \\ 0 & \text{otherwise} \end{cases} \label{eqn:z-pc-coeffs} \end{align} where values are calculated at the centre point $x_i$ for all elements $i=1,\ldots, M$. %and $z_{i,1}$ is calculated using equations~\eqref{eqn:z-variance-continuous} and \eqref{eqn:variance} with $\mu_2[z(x, \xi)] = \sigma_z^2(x)$. 
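Because $\partial z / \partial \hump = \sech^2 \left( \pi x / \lambda \right)$ for this profile, the expansion coefficients of equations~\eqref{eqn:bed-mean}, \eqref{eqn:z-variance-continuous} and \eqref{eqn:z-pc-coeffs} can be evaluated directly. The short Python sketch below, which continues the illustrative example above and again assumes NumPy and hypothetical variable names, makes this explicit.

\begin{verbatim}
import numpy as np

a_mean, a_std = 0.6, 0.3
lam = 10.0
P = 3                                # polynomial chaos order (illustrative)
x = np.linspace(-49.5, 49.5, 100)    # element centres

def topography(x, a):
    return a / np.cosh(np.pi * x / lam)**2

# Equation (bed-mean): mean topography, evaluated at the mean amplitude.
z_mean = topography(x, a_mean)

# Equation (z-variance-continuous): sigma_z = |dz/da| * sigma_a,
# with dz/da = sech^2(pi * x / lambda) for this profile.
sigma_z = topography(x, 1.0) * a_std

# Equation (z-pc-coeffs): only the first two coefficients are non-zero.
z_coeffs = np.zeros((x.size, P + 1))
z_coeffs[:, 0] = z_mean
z_coeffs[:, 1] = sigma_z
\end{verbatim}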
The initial water depth expansion coefficients $h_{i,0}^{(0)}, \ldots, h_{i,P}^{(0)}$ can be calculated in terms of free-surface elevation and topography using equation~\eqref{eqn:h-eta-z}. For both tests, the initial, spatially-uniform mean free-surface elevation is \SI{1.5}{\meter} with no initial uncertainty such that $\eta_{i,0}^{(0)} = \SI{1.5}{\meter}$ and $\eta_{i,p}^{(0)} = 0$ with $p = 1, \ldots, P$ and $i = 1, \ldots, M$. \input{lakeAtRest} \input{criticalSteadyState} \input{tsengSteadyState}
{ "alphanum_fraction": 0.7013879531, "avg_line_length": 70.0094339623, "ext": "tex", "hexsha": "c4d255458337d5d025f750cb6b6a4705d4c8bb9e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "10a9b2e18d11cf3f4e711a90523f85758e5fb531", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "hertzsprung/seamless-wave-uq", "max_forks_repo_path": "overleaf/experiments.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "10a9b2e18d11cf3f4e711a90523f85758e5fb531", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "hertzsprung/seamless-wave-uq", "max_issues_repo_path": "overleaf/experiments.tex", "max_line_length": 311, "max_stars_count": null, "max_stars_repo_head_hexsha": "10a9b2e18d11cf3f4e711a90523f85758e5fb531", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "hertzsprung/seamless-wave-uq", "max_stars_repo_path": "overleaf/experiments.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2412, "size": 7421 }
% arara: pdflatex: {action: nonstopmode} \documentclass{scrreprt} \usepackage{listings} \usepackage{underscore} \usepackage[bookmarks=true]{hyperref} \usepackage{tabu} \usepackage{multirow} \hypersetup{ bookmarks=false, % show bookmarks bar? pdftitle={Requirement Specification}, % title pdfauthor={Pascal Grosch}, % author pdfsubject={Requirements: Recording of Scenarios for Autonomous Vehicles}, % subject of the document pdfkeywords={testing, autonomous vehicle, bachelor thesis, requirements, recording}, % list of keywords colorlinks=true, % false: boxed links; true: colored links linkcolor=black, % color of internal links citecolor=black, % color of links to bibliography filecolor=black, % color of file links urlcolor=purple, % color of external links linktoc=page % only page is linked }% \title{% \flushright \rule{16cm}{5pt}\vskip1cm \Huge{SOFTWARE REQUIREMENTS\\ SPECIFICATION}\\ \vspace{2cm} for\\ \vspace{2cm} Recording System for Scenarios for Autonomous Vehicles\\ \vfill \rule{16cm}{5pt} } \date{} \usepackage{hyperref} \tabulinesep=1.5mm \newcommand{\goal}[4]{ \label{#1} \begin{tabu}{|X[-1,m]|X[l,m]|} \hline \textbf{ID} & #1\\ \hline \textbf{Name} & #2\\ \hline \textbf{Description} & #3\\ \hline \textbf{Success Criterion} & #4\\ \hline \end{tabu}~\\ } \newcommand{\usecase}[9]{ \label{#1} \begin{tabu}{|X[-1,m]|X[l,m]|} \hline \textbf{ID} & #1\\ \hline \textbf{Name} & #2\\ \hline \textbf{Precondition} & #3\\ \hline \textbf{Description} & #4\\ \hline \textbf{Steps} & #5\\ \hline \textbf{Postcondition} & #6\\ \hline \textbf{Exceptions} & #7\\ \hline \textbf{Quality Requirements} & #8\\ \hline \textbf{Links} & #9\\ \hline \end{tabu} } \newcommand{\systemfunction}[9]{ \label{#1} \begin{tabu}{|X[-1,m]|X[l,m]|} \hline \textbf{ID} & #1\\ \hline \textbf{Name} & #2\\ \hline \textbf{Input} & #3\\ \hline \textbf{Precondition} & #4\\ \hline \textbf{Description} & #5\\ \hline \textbf{Postcondition} & #6\\ \hline \textbf{Exceptions} & #7\\ \hline \textbf{Quality Requirements} & #8\\ \hline \textbf{Links} & #9\\ \hline \end{tabu}~\\ } \newcommand{\nfr}[3]{ \label{#1} \begin{tabu}{|X[-1,m]|X[l,m]|} \hline \textbf{ID} & #1\\ \hline \textbf{Name} & #2\\ \hline \textbf{Description} & #3\\ \hline \end{tabu}~\\ } \newcommand{\ir}[4]{ \label{#1} \begin{tabu}{|X[-1,m]|X[l,m]|} \hline \textbf{ID} & #1\\ \hline \textbf{Name} & #2\\ \hline \textbf{Description} & #3\\ \hline \textbf{Priority} & #4\\ \hline \end{tabu}~\\ } \newcommand{\myref}[1]{ \hyperref[#1]{#1} } \begin{document} \maketitle \tableofcontents \chapter{Goals} \goal{G10} {Record Traffic Situations} {The system creates an entry in the database that contains information about a sequence of traffic situations in which the VUT is unstable.} {The data is entered in the database and can be used to recreate the situations.} \\ \goal{G20} {Display Recorded Data} {The system retrieves the stored data and displays it in a graphical format.} {The contents of the data storage are displayed correctly and readable by humans} \chapter{Use Cases} \usecase{UC10} {Recording of Traffic Situations} {Exactly one VUT exists and is in a stable state.} {The simulation is started and the data is delivered to the recording framework where it is processed and stored. 
After the simulation run has ended, the database contains the information about all unstable situations and transitions between them.}
{\begin{enumerate}
\item Execute the simulation
\item Receive data for each step
\item Translate the raw data to predicates
\item Detect situation changes
\item Store data
\end{enumerate}}
{Every change of situations is stored in the database.}
{-}
{The recording should not slow down the simulation below a speed of 1 simulated second per real second.}
{\myref{G10}}
~\\
\\
\usecase{UC20}
{Display Recorded Situations}
{Valid data is stored in the database.}
{The data is retrieved from the database and exported as an image representing the situation graph.}
{\begin{enumerate}
\item Retrieve Data
\item Display situation graph
\item Store Data
\end{enumerate}}
{Every situation and transition stored in the database is displayed in a file.}
{There is no data in the database.}
{-}
{\myref{G20}}

\chapter{Requirements}
\section{System Functions}
\systemfunction{SF10}
{Execute Simulation}
{-}
{A VUT exists in the simulation.}
{The simulation is run step by step.}
{-}
{-}
{-}
{\myref{UC10}}
\\
\systemfunction{SF20}
{Receive Data}
{raw data of a simulated vehicle}
{data for all observed variables is present}
{The raw data of a simulated vehicle is received by the data processor.}
{The data is temporarily stored.}
{-}
{-}
{\myref{UC10}}
\\
\systemfunction{SF30}
{Process Data}
{raw data of a simulated vehicle}
{data for all observed variables is present}
{The raw data of a simulated vehicle is converted into predicate logic. Hysteresis is used to prevent jumping between two states.}
{The data is formatted into predicate logic.}
{-}
{-}
{\myref{UC10}}
\\
\systemfunction{SF40}
{Detect Situation Changes}
{data formatted into predicate logic, current situation}
{data for all observed variables is present}
{The observed data is compared to the data in the current situation. If a difference is detected, a transition between the situations is created.}
{If the situations differ, a transition is temporarily stored in the processing unit and the new situation is stored as the current situation.}
{If no current situation is available, no comparison is done. If both situations are stable, no comparison is done.}
{-}
{\myref{UC10}}
\\
\systemfunction{SF50}
{Store Data}
{data formatted into predicate logic}
{a difference to the current situation was found or no current situation is present}
{The situations and transitions are translated to the database format and stored in the database.}
{The data is stored in the database.}
{-}
{-}
{\myref{UC10}, \myref{UC20}}
\\
\systemfunction{SF60}
{Display Data}
{set of situations and transitions}
{for each transition the preceding and succeeding situation is given}
{The data is transformed into a visual directed graph with situations as nodes and transitions as edges.}
{A human-readable graph is created.}
{The set of situations and transitions is empty.}
{-}
{\myref{UC20}}
\\
\systemfunction{SF70}
{Retrieve Data}
{-}
{data is present}
{The data corresponding to the ID is retrieved from the database and converted to the system format.}
{A situation tree is restored in the work format.}
{-}
{-}
{\myref{UC20}}

\section{Nonfunctional Requirements}
\nfr{NFR10}
{Simulation Speed}
{The simulated speed should never fall below 1 simulated second per real second.}

\section{Inverse Requirements}
There are no inverse requirements for the system.
\section{Design Decisions}
\begin{tabu}{|X[-1,m]|X[l,m]|}
\hline
\rowfont[l]{\bfseries}
ID & Description\\
\hline
DD10 & An SQLite database should be integrated. The solution should foresee a change to another database, e.g. Neo4j instead of SQLite.\\
\hline
DD20 & Since a framework to run simulations and retrieve data is already available for PTV Vissim, PTV Vissim is used as the simulator.\\
\hline
DD30 & Since the COM interface and the existing framework are written in Visual C++, the system will also be written in Visual C++, using the C++14 standard.\\
\hline
DD40 & For the visualization of the graph, Graphviz is used, since it provides a C/C++ library that already handles most of the structure, layout and memory management. Additionally, the form of the nodes and edges can easily be adapted to different use cases.\\
\hline
\end{tabu}

\chapter{Traceability}
\begin{tabu}{|X[-1,m]|X[-1,m]|X[-1,m]|}
\hline
\rowfont[l]{\bfseries}
Goal & Use Case & System Function\\
\hline
\multirow{5}{*}{\myref{G10}} & \multirow{5}{*}{\myref{UC10}} & \myref{SF10}\\
& & \myref{SF20}\\
& & \myref{SF30}\\
& & \myref{SF40}\\
& & \myref{SF50}\\
\hline
\multirow{3}{*}{\myref{G20}} & \multirow{3}{*}{\myref{UC20}} & \myref{SF50}\\
& & \myref{SF60}\\
& & \myref{SF70}\\
\hline
\end{tabu}
% add other chapters and sections to suit
\end{document}
{ "alphanum_fraction": 0.6823214078, "avg_line_length": 31.6642066421, "ext": "tex", "hexsha": "064b676e2c8fba0e6aa79dd8e0f89d349f925615", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3b581f557830526e3f0e51f40df04a5be24b7e62", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Tronso/test_recording", "max_forks_repo_path": "Specifications/requirements.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3b581f557830526e3f0e51f40df04a5be24b7e62", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Tronso/test_recording", "max_issues_repo_path": "Specifications/requirements.tex", "max_line_length": 263, "max_stars_count": null, "max_stars_repo_head_hexsha": "3b581f557830526e3f0e51f40df04a5be24b7e62", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Tronso/test_recording", "max_stars_repo_path": "Specifications/requirements.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2743, "size": 8581 }
\chapter{Conclusion} \label{chapter:conclusion} Programming modern supercomputers is currently a daunting task for even the most expert programmers. For application scientists and engineers without considerable programming experience, the task will soon become completely intractable as machines become increasingly complex and heterogeneous. If left unaddressed, this trend could potentially stunt the advancement and progress of many areas of science as users are no longer capable of fully leveraging the potential of new machines. Unfortunately, combating this problem cannot be achieved by a simple evolution of existing software tools at the same level of abstraction. Instead, a new approach to supercomputing is necessary to raise the level of abstraction and allow software systems such as runtimes and compilers to manage the complexity of targeting code to new architectures. In this thesis we have presented Legion as one possible candidate for achieving this end. Legion raises the level of abstraction in two ways. First, Legion provides a way of decoupling policy decisions from how they are implemented, allowing the Legion runtime to manage all of the complexity associated with carrying out an implementation. Second, Legion provides a programming model that allows applications to be specified in a machine-independent manner which decouples how applications are written from how they are mapped onto a target machine. Ultimately, this is essential for allowing codes to be written once and re-targeted to new architectures as hardware changes, thereby ensuring that application developers never have to re-write their code from scratch when a new machine is released. By exposing all performance decisions through the Legion mapping interface, codes can be easily ported without application developers needing to fight the Legion runtime in order to control performance. As we have shown throughout this thesis, the crucial innovation that permits Legion to raise the level of abstraction is the introduction of logical regions. Unlike most current programming systems, logical regions and operations on them provide a data model for the application to communicate information about the structure of program data to the programming system in a machine independent fashion. Specifically logical regions support many of the same operations as relations, a fundamental data abstraction for many areas of computer science. The relational data model supported by logical regions provides a familiar way to describe both how data is partitioned and how it is used by computations. Furthermore, Legion can leverage its knowledge of the structure of program data and how computations use it to automatically version it to support transparent resiliency and speculation. The primary result of this thesis is a demonstration of the power that logical regions confer on a programming system when used in executing real scientific computing applications. Our implementation of Legion illustrates that a significant amount of complexity that currently burdens application developers can be cleanly encapsulated within a reasonable abstraction. While the Legion implementation is non-trivial, with proper software engineering techniques it can be maintained by several expert programmers, thereby insulating application developers from the complexity and allowing them to focus on actual science and research. 
To prove that Legion is a real programming system, we ported S3D, a full production combustion simulation to use Legion and compared it to a highly optimized version of S3D tuned independently by experts. Our Legion version is between 2-3X faster when run at full scale compared to the baseline on the world's number two supercomputer. The relational data model supported by logical regions allowed us to express computations as tasks that operate over thousands of fields. Furthermore, the Legion mapping interface allowed us to experiment with many different mapping strategies for different supercomputers to discover higher performance versions of the code while requiring no changes to the machine-independent specification of the application. Our version of Legion S3D is now being used for production runs by real combustion scientists and may in the immediate future become the canonical version of S3D. In the future we expect to see the evolution of many other programming systems that mirror the characteristics of Legion. We contend that in order to be successful, these programming systems will need to have many of the same characteristics of Legion. First, they must be capable of expressing codes in a machine-independent manner, and then later mapping the code onto the target machine. Second, they should decouple policy from mechanism, thereby allowing applications to specify the best way of performing important operations such as data partitioning and communication. In order to achieve these goals, these programming systems will need to be able to abstract the structure of data in a way that is at least as powerful as the relational data model supported by logical regions. In order to continue to scale applications for modern and future supercomputers, a disruptive shift in the way applications are developed is necessary. Raising the level of abstraction so that advanced programming systems can hide much of the complexity associated with writing codes for target machines will be the only way forward. We believe that Legion provides a crucial first look at some of the techniques necessary for making this change a reality. By already demonstrating considerable improvements over current programming systems when running production codes at scale on modern machines, we have shown that the Legion design is a viable way forward. In the future, we anticipate Legion continuing to grow and evolve to support additional applications on new architectures, thereby improving the productivity of the scientists and researchers that use Legion.
{ "alphanum_fraction": 0.8319216077, "avg_line_length": 49.7603305785, "ext": "tex", "hexsha": "ab75f45c1c3e550ef38c71294cfd111c23afc00f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "979412714037035d39c03d441c904796f8a7b495", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "lightsighter/Thesis", "max_forks_repo_path": "conclusion.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "979412714037035d39c03d441c904796f8a7b495", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "lightsighter/Thesis", "max_issues_repo_path": "conclusion.tex", "max_line_length": 63, "max_stars_count": 1, "max_stars_repo_head_hexsha": "979412714037035d39c03d441c904796f8a7b495", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "lightsighter/Thesis", "max_stars_repo_path": "conclusion.tex", "max_stars_repo_stars_event_max_datetime": "2018-05-22T23:07:31.000Z", "max_stars_repo_stars_event_min_datetime": "2018-05-22T23:07:31.000Z", "num_tokens": 1140, "size": 6021 }
\section{Setting integration period and time step} \label{sec:timeintiv} %------------------------------------------------------ The integration period and time step are configured appropriately according to experimental design. The time step depends on the spatial resolution of the model. A shorter time step is sometimes required to avoid numerical instability. The period of integration and the time step are configured in \namelist{PARAM_TIME} in \runconf. \editboxtwo{ \verb|&PARAM_TIME| & \\ \verb| TIME_STARTDATE = 2014, 8, 10, 0, 0, 0,| & Start date of integration: it is required for the calculation of the radiation process.\\ \verb| TIME_STARTMS = 0.D0, | & Start date [mili sec]\\ \verb| TIME_DURATION = 12.0D0, | & Integration time [init is defined by \verb|TIME_DURATION_UNIT|]\\ \verb| TIME_DURATION_UNIT = "HOUR", | & Unit for \verb|TIME_DURATION|\\ \verb| TIME_DT = 60.0D0, | & Time step for time integration\\ \verb| TIME_DT_UNIT = "SEC", | & Unit for \verb|TIME_DT|\\ \verb| TIME_DT_ATMOS_DYN = 30.0D0, | & Time step for calculation of dynamical process\\ \verb| TIME_DT_ATMOS_DYN_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_DYN|\\ \verb| TIME_DT_ATMOS_PHY_CP = 600.0D0, | & Time step for calculation of cumulus parameterization process\\ \verb| TIME_DT_ATMOS_PHY_CP_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_CP|\\ \verb| TIME_DT_ATMOS_PHY_MP = 60.0D0, | & Time step for calculation of microphysics process\\ \verb| TIME_DT_ATMOS_PHY_MP_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_MP|\\ \verb| TIME_DT_ATMOS_PHY_RD = 600.0D0, | & Time step for calculation of radiation process\\ \verb| TIME_DT_ATMOS_PHY_RD_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_RD|\\ \verb| TIME_DT_ATMOS_PHY_SF = 60.0D0, | & Time step for calculation of bottom boundary condition (surface process) for atmosphere\\ \verb| TIME_DT_ATMOS_PHY_SF_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_SF|\\ \verb| TIME_DT_ATMOS_PHY_TB = 60.0D0, | & Time step for calculation of turbulence process\\ \verb| TIME_DT_ATMOS_PHY_TB_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_TB|\\ \verb| TIME_DT_ATMOS_PHY_BL = 60.0D0, | & Time step for calculation of boundary layer process\\ \verb| TIME_DT_ATMOS_PHY_BL_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_PHY_BL|\\ \verb| TIME_DT_OCEAN = 300.0D0, | & Time step for calculation of ocean process\\ \verb| TIME_DT_OCEAN_UNIT = "SEC", | & Unit for \verb|TIME_DT_OCEAN|\\ \verb| TIME_DT_LAND = 300.0D0, | & Time step for calculation of land process\\ \verb| TIME_DT_LAND_UNIT = "SEC", | & Unit for \verb|TIME_DT_LAND|\\ \verb| TIME_DT_URBAN = 300.0D0, | & Time step for calculation of urban process\\ \verb| TIME_DT_URBAN_UNIT = "SEC", | & Unit for \verb|TIME_DT_URBAN|\\ \verb| TIME_DT_ATMOS_RESTART = 21600.D0, | & Output interval of restart files for atmospheric variables\\ \verb| TIME_DT_ATMOS_RESTART_UNIT = "SEC", | & Unit for \verb|TIME_DT_ATMOS_RESTART|\\ \verb| TIME_DT_OCEAN_RESTART = 21600.D0, | & Output interval of restart files for ocean variables\\ \verb| TIME_DT_OCEAN_RESTART_UNIT = "SEC", | & Unit for \verb|TIME_DT_OCEAN_RESTART|\\ \verb| TIME_DT_LAND_RESTART = 21600.D0, | & Output interval of restart files for land variables\\ \verb| TIME_DT_LAND_RESTART_UNIT = "SEC", | & Unit for \verb|TIME_DT_LAND_RESTART|\\ \verb| TIME_DT_URBAN_RESTART = 21600.D0, | & Output interval of restart files for urban variables\\ \verb| TIME_DT_URBAN_RESTART_UNIT = "SEC", | & Unit for \verb|TIME_DT_URBAN_RESTART|\\ \verb| TIME_DT_WALLCLOCK_CHECK = 21600.D0, | & Interval of checking 
the wall clock\\
\verb| TIME_DT_WALLCLOCK_CHECK_UNIT = "SEC", | & Unit for \verb|TIME_DT_WALLCLOCK_CHECK|\\
\verb| TIME_WALLCLOCK_LIMIT = 86400.D0, | & Limit of the elapsed wall-clock time [sec]\\
\verb| TIME_WALLCLOCK_SAFE = 0.95D0, | & Safety coefficient for the elapsed-time limit\\
\verb|/|\\
}

\subsection{Time Step for Dynamical Processes}

\nmitem{TIME_DT} is the time step for time integration, usually denoted by $\Delta t$.
It is used as the time step for tracer advection and as the basic unit for all physical processes.
To avoid numerical instability, \nmitem{TIME_DT} must be smaller than the grid spacing divided by the maximum advection velocity expected in the simulation.
The time step for the dynamical process, \nmitem{TIME_DT_ATMOS_DYN}, should be shorter than $\Delta t$ because the time integration of the dynamical variables is constrained not by the advection velocity but by the speed of acoustic waves.
The stable value of \nmitem{TIME_DT_ATMOS_DYN} depends on the time integration scheme.
As a guideline, the standard value of \nmitem{TIME_DT_ATMOS_DYN} is obtained by dividing the minimum grid interval by 420 m/s in the case of \nmitem{ATMOS_DYN_TINTEG_SHORT_TYPE="RK4"}, and by 840 m/s in the case of \nmitem{ATMOS_DYN_TINTEG_SHORT_TYPE="RK3"}.
Note that \nmitem{TIME_DT_ATMOS_DYN} needs to be a divisor of \nmitem{TIME_DT}.
When the ratio of \nmitem{TIME_DT} to \nmitem{TIME_DT_ATMOS_DYN} is too large, numerical instability sometimes occurs; setting the ratio \nmitem{TIME_DT}/\nmitem{TIME_DT_ATMOS_DYN} to two or three is recommended.
See also Section \ref{subsec:cfl_check}.
Instead of setting \nmitem{TIME_DT_ATMOS_DYN} and \nmitem{TIME_DT_ATMOS_DYN_UNIT}, the ratio \nmitem{TIME_DT}/\nmitem{TIME_DT_ATMOS_DYN} can be specified by \nmitem{TIME_NSTEP_ATMOS_DYN}, which accepts an integer value.

\subsection{Check of CFL Condition}
\label{subsec:cfl_check}

The time step for advection, \nmitem{TIME_DT}, must satisfy the Courant-Friedrichs-Lewy (CFL) condition, i.e., it must be smaller than the grid spacing divided by the velocity.
The non-dimensional number $U \Delta t/\Delta x$ is called the Courant number, where $U$, $\Delta x$ and $\Delta t$ are the velocity, the spatial grid spacing, and the time step, respectively.
The CFL condition requires that the Courant number be smaller than 1.

\scalerm provides a function to check whether the Courant number exceeds a limit.
To enable it, set \nmitem{ATMOS_VARS_CHECKCFL_SOFT} and/or \nmitem{ATMOS_VARS_CHECKCFL_HARD} in \namelist{PARAM_ATMOS_VARS}; their default values are 1.0 and 2.0, respectively.
If the Courant number in a simulation exceeds \nmitem{ATMOS_VARS_CHECKCFL_SOFT}, the following message is output to the LOG file.
\msgbox{
\verb|INFO [ATMOS_vars_monitor] Courant number = xxx exceeded the soft limit = yyy|
}
If it exceeds \nmitem{ATMOS_VARS_CHECKCFL_HARD}, the following message is output to the standard output, and the simulation aborts.
\msgbox{
\verb|ERROR [ATMOS_vars_monitor] Courant number = xxx exceeded the hard limit = yyy|
}

\subsection{Time Step for Physical Processes}

The time step for each physical process determines how often the tendency from that process is updated.
Once the model starts, every physical process is called during the setup of the model to obtain its initial tendency.
Each tendency is then updated at the time step specified for the respective process.
All time steps for the physical processes must be multiples of \nmitem{TIME_DT}.
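The divisor and multiple constraints above can be checked before a run is submitted. The following Python sketch, which is not part of \scalerm and uses illustrative values from the sample namelist, verifies that \nmitem{TIME_DT_ATMOS_DYN} divides \nmitem{TIME_DT} and that the physical-process time steps are multiples of \nmitem{TIME_DT}.

\begin{verbatim}
# Illustrative consistency check of PARAM_TIME settings (not part of SCALE).
time_dt = 60.0         # TIME_DT [s]
time_dt_dyn = 30.0     # TIME_DT_ATMOS_DYN [s]
physics_dt = {"MP": 60.0, "RD": 600.0, "SF": 60.0, "TB": 60.0, "CP": 600.0}

ratio = time_dt / time_dt_dyn
assert ratio.is_integer(), "TIME_DT_ATMOS_DYN must be a divisor of TIME_DT"
if not 2 <= ratio <= 3:
    print("warning: a ratio TIME_DT/TIME_DT_ATMOS_DYN of two or three is recommended")

for name, dt in physics_dt.items():
    assert dt % time_dt == 0, "TIME_DT_ATMOS_PHY_" + name + " must be a multiple of TIME_DT"
\end{verbatim}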
The surface fluxes for the atmosphere are calculated by the surface process.
If, however, a model grid contains several types of land use, such as ocean, land, and urban areas, the ocean, land, and urban models are used and the fluxes are calculated by these models.
The grid-mean flux is then obtained as the average of the fluxes over the land-use types, weighted by their fractions.

As described above, the initial tendencies of all processes are calculated during the setup of the model.
Therefore, the output intervals of the restart files must be multiples of the time steps of all processes; otherwise, a restarted calculation does not reproduce the continuous calculation.
When \nmitem{TIME_DT_ATMOS_RESTART}, \nmitem{TIME_DT_OCEAN_RESTART}, \nmitem{TIME_DT_LAND_RESTART}, and\\
\nmitem{TIME_DT_URBAN_RESTART} are not specified, the restart files are created at the end of the simulation, i.e. at \nmitem{TIME_DURATION}.
The details of restarted simulations are described in Section \ref{sec:restart}.

\subsection{Finalization by wall-clock timer}
\label{subsec:wallclock_check}

Batch job systems usually impose a limit on the execution time.
However, it is difficult to estimate the elapsed time of a long-term simulation, and a job sometimes exceeds its time limit.
To solve this problem, \scalerm provides an option to finalize the simulation using a self-timer.
When the elapsed time reaches the time specified by \nmitem{TIME_WALLCLOCK_LIMIT} (in seconds), the simulation outputs the restart files and finalizes the time loop, even if the time integration has not been completed.
A safety factor is applied to \nmitem{TIME_WALLCLOCK_LIMIT}; it is specified by \nmitem{TIME_WALLCLOCK_SAFE} and its default value is 0.9.

As described above, the interval of the restart output should be a multiple of the time steps of all physical processes and surface submodels.
However, the self-timer stops the simulation abruptly.
To avoid restart output at an unexpected time, you can specify when the wall-clock time is checked.
The wall clock is checked at the interval specified by \nmitem{TIME_DT_WALLCLOCK_CHECK} and \nmitem{TIME_DT_WALLCLOCK_CHECK_UNIT}.
When these parameters are not specified, the maximum of the time intervals of the physical processes and the surface submodels is used.
Note that if you set a very long checking interval, the finalization may be delayed.

In the sample above, \nmitem{TIME_WALLCLOCK_LIMIT} and \nmitem{TIME_WALLCLOCK_SAFE} are set to 24 hours and 0.95, respectively.
The wall clock is checked every 6 hours of simulation time.
When the elapsed time exceeds 22.8 hours, the restart files are generated and the simulation stops.
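The interplay of \nmitem{TIME_WALLCLOCK_LIMIT}, \nmitem{TIME_WALLCLOCK_SAFE} and the checking interval can be summarised by the following illustrative Python sketch, which is not part of \scalerm: the run is finalized at the first wall-clock check after the elapsed time exceeds the safety-scaled limit.

\begin{verbatim}
# Illustrative sketch of the self-timer logic (not part of SCALE).
wallclock_limit = 24.0 * 3600.0   # TIME_WALLCLOCK_LIMIT [s]
wallclock_safe = 0.95             # TIME_WALLCLOCK_SAFE
threshold = wallclock_limit * wallclock_safe   # 22.8 hours in the sample above

def should_finalize(elapsed_seconds):
    """Evaluated at every TIME_DT_WALLCLOCK_CHECK interval."""
    return elapsed_seconds >= threshold
\end{verbatim}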
{ "alphanum_fraction": 0.7176057674, "avg_line_length": 83.6666666667, "ext": "tex", "hexsha": "7ea1160400ece06d7c026678922c3486640a23f7", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-07-28T22:20:41.000Z", "max_forks_repo_forks_event_min_datetime": "2020-07-10T10:39:45.000Z", "max_forks_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "slayoo/scale", "max_forks_repo_path": "doc/users-guide/en/55_setting_time.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_issues_repo_issues_event_max_datetime": "2021-07-30T05:08:47.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-29T03:38:05.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "slayoo/scale", "max_issues_repo_path": "doc/users-guide/en/55_setting_time.tex", "max_line_length": 297, "max_stars_count": 7, "max_stars_repo_head_hexsha": "ca4b476ad55cb728b2009f0427ce3f7161ecfcf7", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "slayoo/scale", "max_stars_repo_path": "doc/users-guide/en/55_setting_time.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-13T05:29:55.000Z", "max_stars_repo_stars_event_min_datetime": "2020-06-14T11:12:31.000Z", "num_tokens": 2756, "size": 10542 }
\filetitle{yearly}{Display tseries object one full year per row}{tseries/yearly} \paragraph{Syntax}\label{syntax} \begin{verbatim} yearly(X) \end{verbatim} \paragraph{Input arguments}\label{input-arguments} \begin{itemize} \itemsep1pt\parskip0pt\parsep0pt \item \texttt{X} {[} tseries {]} - Tseries object that will be displayed one full year of observations per row. \end{itemize} \paragraph{Description}\label{description} \paragraph{Example}\label{example}
{ "alphanum_fraction": 0.751565762, "avg_line_length": 19.16, "ext": "tex", "hexsha": "65a6a743578343f0e983d9a24935253e7bde563e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_forks_event_min_datetime": "2022-01-17T07:06:39.000Z", "max_forks_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_forks_repo_path": "-help/tseries/yearly.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_issues_repo_issues_event_max_datetime": "2020-09-02T10:40:25.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-28T08:13:20.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_issues_repo_path": "-help/tseries/yearly.tex", "max_line_length": 84, "max_stars_count": 1, "max_stars_repo_head_hexsha": "682ea1960229dc701e446137623b120688953cef", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "OGResearch/IRIS-Toolbox-For-Octave", "max_stars_repo_path": "-help/tseries/yearly.tex", "max_stars_repo_stars_event_max_datetime": "2017-12-06T13:38:38.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-06T13:38:38.000Z", "num_tokens": 140, "size": 479 }
% This is samplepaper.tex, a sample chapter demonstrating the
% LLNCS macro package for Springer Computer Science proceedings;
% Version 2.20 of 2017/10/04
%
\documentclass[runningheads]{llncs}
%
\usepackage{graphicx}
\usepackage{tabularx}
% Use titling to display the title centered on the first page (https://tex.stackexchange.com/questions/290432/vertically-center-title-page-article)
\usepackage{titling}
% Use biber and bibLaTeX
\usepackage[sorting=none, backend=biber]{biblatex}
\addbibresource{bibliography.bib}
% Used for displaying a sample figure. If possible, figure files should
% be included in EPS format.
%
% If you use the hyperref package, please uncomment the following line
% to display URLs in blue roman font according to Springer's eBook style:
\usepackage{hyperref}
\renewcommand\UrlFont{\color{blue}\rmfamily}
\setcounter{tocdepth}{3}
\renewcommand\maketitlehooka{\null\mbox{}\vfill}
\renewcommand\maketitlehookd{\vfill\null}
\begin{document}
%
\title{Introduction to Membership Inference Attacks}
%
%\titlerunning{Abbreviated paper title}
% If the paper title is too long for the running head, you can set
% an abbreviated paper title here
%
\author{Lukas Gehrke \\
\email{[email protected]} \\
Leipzig University \\
}
%
\authorrunning{Lukas Gehrke}
% First names are abbreviated in the running head.
% If there are more than two authors, 'et al.' is used.
%
%
\maketitle              % typeset the header of the contribution
%
\tableofcontents
\newpage
\begin{abstract}
With more and more companies offering Machine Learning as a Service (MLaaS), a novel threat of data breaches has emerged: Membership Inference Attacks aim at identifying whether given data instances were among the training data of a machine learning model available to an adversary. Such knowledge can have severe consequences if membership exposes sensitive information about a person, such as a disease or financial debt. This paper gives a general introduction to membership inference attacks. After discussing enabling factors and the underlying theory, core studies as well as mitigation strategies are surveyed, followed by a discussion.
\keywords{Membership Inference Attacks \and Machine Learning Security \and Machine Learning as a Service}
\end{abstract}
%
%
%
\section{Introduction}
\subsection{Machine Learning}
Automation is one of the simplest yet most impactful goals that can be achieved with computers. However, teaching computers to solve complex tasks automatically is hard, as it would require huge numbers of instructions and decision rules. To avoid this, computers are programmed to "learn" the rules on their own using machine learning models. A model's learning process is based on experience \cite{mitchell1997machinel}, which again comes from \textbf{data}, called \textit{training data}. If, with experience, that is to say appropriate data, a model gets better at its task, it is said to have successfully "learned".
\subsection{Machine Learning as a Service}
Thanks to vast scientific efforts, machine learning is applicable to many use cases nowadays. These range from simple classification of objects to complicated recognition of signals, such as visual or audio detection. Consequently, numerous possibilities for companies have emerged to improve their products, services or internal processes. However, infrastructure for machine learning is expensive, as huge amounts of data have to be stored and transferred and many calculations have to be conducted.
To address these problems, the market of \textbf{Machine Learning as a Service} (MLaaS) has emerged \cite{ribeiro2015mlaas}. The idea behind MLaaS is similar to Software as a Service: a customer has access to remote hardware and to software running on this hardware, so that the customer has no need to own them. Examples of companies offering MLaaS are Amazon, Google, IBM and Microsoft. These companies already need the infrastructure and knowledge to process huge amounts of data for their own services. With MLaaS, they gain additional profit by renting out hardware that is not fully occupied.

\subsection{Membership Inference}
With ready-to-use machine learning models being used and often being accessible online, a new kind of privacy risk has evolved: the \textit{Membership Inference Attack}. \textit{Membership Inference} asks the simple question whether a given data record was used to train the machine learning model in question \cite{shokri2017membership}. This leads to critical privacy breaches if membership in the training dataset allows certain facts to be inferred. For example, when considering a model that uses cellular structure to recommend the ideal medicine dosage for a disease, membership of a person's record in the training data reveals that the person is suffering from this very disease.

This report first explains the theoretical foundation of membership inference attacks. Afterwards, studies about membership inference attacks that have already been conducted are surveyed. Finally, mitigation strategies are presented, followed by a discussion.

The explanation of the theoretical foundations of membership inference attacks is mainly based on the publication by Shokri et al. \cite{shokri2017membership}, who first defined the attack concept. They introduced an architecture based on an attack machine learning model and shadow machine learning models which imitate the behavior of the target model and create training data for the attack model.

\section{Membership Inference Attacks}
The idea behind membership inference attacks is to make use of the fact that machine learning models often exhibit different behavior on previously unseen data in comparison to the data they were trained on \cite{shokri2017membership}. The main reason for this phenomenon is \textit{overfitting}. Overfitting describes the tendency of machine learning models or statistical classification models to perform significantly better on their training data in comparison to data they have not been trained on \cite{weigend1994overfitting}. That is to say, the model fails to generalize to data records apart from its training data. Often, this effect is caused by training data with a low degree of diversity. The model then does not learn enough information about the underlying population of the data, as its learning sample is not representative. Model selection also influences the success of membership inference: some models \textit{remember} more details about their training data, leading to more leakage of information.

\subsection{Formal Definition}
\subsubsection{Prerequisites}
In this report, \textbf{Membership Inference} is defined as the (theoretical) \textit{question} whether a given data record was part of the training data of a given, ready-to-use machine learning model.
A \textbf{Membership Inference Attack} describes the (theoretical) \textit{act} of performing membership inference against a machine learning model whose training data the adversary does not know; if successful, it leads to potentially harmful privacy breaches.

\subsubsection{Machine Learning Models}
As introduced by Shokri et al. \cite{shokri2017membership}, the basic membership inference attack setup consists of three machine learning models:
\begin{enumerate}
\item The \textbf{target model} is the model under attack. The attacker has access to the target model but does not know its exact training dataset.
\item The \textbf{attack model} performs membership inference: given the target model's output for a data record, it decides whether that record was part of the training dataset of the target model.
\item The \textbf{shadow models} imitate the target model as closely as possible. They are used to teach the attack model the different reactions of the target model to seen and unseen data.
\end{enumerate}

\subsubsection{Attack Definition}
Let $f_{target}()$ be the target model and $f_{attack}()$ be the attack model. $D_{train}^{target}$ is the training dataset of $f_{target}()$, consisting of data records $(\textbf{x}^{i},y^{i})$, with $|D_{train}^{target}| = n$ and $i \in (1, \dots , n)$. Here, $\textbf{x}^{i}$ is a vector describing an entity and $y^{i}$ refers to the class of the entity with respect to the task of $f_{target}()$. Assume the number of possible classes for the target model is $c_{target}$. The output of the target model is a vector of size $c_{target}$ with each value being in $[0, 1]$ and indicating the probability that the entity in question belongs to the respective class. This vector is denoted by $\textbf{y}^{i}$.
\par The attack model $f_{attack}$ uses labeled data entries and the respective outputs generated by the target model for these entries as its input $x_{attack}$. Its purpose is to recognize whether a data entry was part of the training dataset based on these inputs. This task leads to a binary output scheme: "\textit{in}" for data records of the training dataset and "\textit{out}" for non-members. The attack procedure is as follows: for a labeled data record $(\textbf{x}, y)$ the output of $f_{target}()$ is generated: $f_{target}(\textbf{x}) = \textbf{y}$. Afterwards, the tuple $(y, \textbf{y})$ is queried to the attack model. Ideally, the attack model then recognizes patterns in the distribution of $\textbf{y}$ around the actual $y$ \cite{shokri2017membership}. The attack model then calculates the probability $P((\textbf{x}, y) \in D_{train}^{target})$ that the record $(\textbf{x}, y)$ was part of the training dataset of the target model, i.e. that it belongs to the "\textit{in}" class. Figure \ref{fig:attack} gives an overview of the attack procedure. Note that it would also be possible to use only $\textbf{y}$, or to use $(\textbf{x}, \textbf{y})$, as input for the attack model. In the latter case the model could identify patterns in the relationship between $\textbf{x}$ and $\textbf{y}$.

\begin{figure}[ht]
\centering
\includegraphics[scale=0.1]{figures/mia-black-box-attack.png}
\caption{The black-box Membership Inference Attack scenario. A data record $(\textbf{x}, y)$ is first queried to the target model. Afterwards, the output of the target model and the true class of the data record are used as input for the attack model. The attack model decides whether the data record was in the training dataset of the target model or not.}
\label{fig:attack}
\end{figure}
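The query step of Figure \ref{fig:attack} can be summarised in a few lines of code. The following Python sketch is purely illustrative: \texttt{target\_model} and \texttt{attack\_model} are placeholders for trained classifiers with a scikit-learn-like \texttt{predict\_proba} interface, which is an assumption for this example and not part of the attack definition itself.

\begin{verbatim}
import numpy as np

def membership_inference(x, y, target_model, attack_model):
    """Black-box membership inference for one labelled record (x, y)."""
    # Query the target model: y_vec is its prediction (confidence) vector.
    y_vec = target_model.predict_proba([x])[0]

    # Feed the true class y together with the prediction vector to the
    # attack model, whose positive class is interpreted as "in".
    features = np.concatenate(([y], y_vec))
    p_in = attack_model.predict_proba([features])[0][1]
    return p_in   # probability that (x, y) was in the target's training set
\end{verbatim}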
\subsubsection{Attack Model Training}
One fundamental part of the aforementioned process remains open: the training of an attack model. For this purpose, Shokri et al. \cite{shokri2017membership} propose a technique called "\textit{shadow training}". Multiple shadow models $f_{shadow_{i}}()$ are generated, with the main aim being to make their behavior as similar to the target model's behavior as possible. The training datasets $D_{shadow_i}^{train}$ are all known and may partially overlap. Training data for the shadow models may originate from existing sources. Otherwise, data records have to be generated as "\textit{synthetic}" training data. The use of synthetic training data is based on the empirical observation that similar machine learning models provided by the same service and trained on similar data behave similarly \cite{shokri2017membership}. Moreover, Shokri et al. claim that a higher number of shadow models provides more training experience for the attack model and thus enhances the attack accuracy. To generate training data for the shadow models, Shokri et al. \cite{shokri2017membership} propose different methods:
\begin{itemize}
\item \textbf{Model-based Synthesis} uses the target model for the generation of synthetic training data. It is based on the idea that if the target model classifies a randomly generated data record with high confidence, that is to say with a value close to $1$ for the predicted class and values close to zero for all other classes, the data record is similar to a real entry from the target model's training data.
\item \textbf{Statistics-based Synthesis} bases the generation of synthetic data on statistical knowledge about the underlying population of the training dataset of the target model.
\item \textbf{Noisy real data}: If real data records of the training dataset of the target model are accessible, these can be used to generate new records by changing a selected number of random properties of the records at hand.
\end{itemize}
Model-based synthesis as introduced by Shokri et al. \cite{shokri2017membership} uses a two-phase process: first, records that are classified by the target model with high confidence are searched for. This procedure is iterative and involves randomly initializing data records. Afterwards, features are changed until the target model's prediction confidence for a fixed class surpasses the confidence for all other classes as well as a threshold. Additionally, already accepted synthetic records are used as a base for new ones: in each iteration, a new record is created by randomly changing features of the last accepted record. The second phase consists of sampling new records for the synthetic datasets if the acceptance criteria from phase one are met.
\par To train the attack model, each shadow model $f_{shadow_i}()$ has to classify all records of its respective training dataset $D_{shadow_i}^{train}$ and test dataset $D_{shadow_i}^{test}$. Given a record $(\textbf{x}, y)$ from a training set, the output $f_{shadow_i}(\textbf{x}) = \textbf{y}$ is extended with the label "\textit{in}". Thus, the tuple $(y, \textbf{y}, \textit{in})$ is added to the attack model training dataset $D_{attack}^{train}$. Note that $y$, the true label, is added to enable the attack model to learn the distribution of $\textbf{y}$ around $y$, as mentioned earlier. For all entries of $D_{shadow_i}^{test}$, tuples of the form $(y, \textbf{y}, \textit{out})$ are added to $D_{attack}^{train}$.
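A minimal sketch of this dataset construction is given below, assuming scikit-learn-style shadow models with a \texttt{predict\_proba} method and in-memory datasets; these are simplifying assumptions for illustration rather than part of the original shadow-training procedure.

\begin{verbatim}
import numpy as np

def build_attack_training_set(shadow_models, shadow_splits):
    """Build D_attack^train from shadow models.

    shadow_models: list of trained shadow classifiers.
    shadow_splits: list of ((X_train, y_train), (X_test, y_test)) tuples,
                   one per shadow model.
    Returns a list of (true_class, prediction_vector, label) records,
    where label 1 stands for "in" and 0 for "out".
    """
    records = []
    for model, ((X_tr, y_tr), (X_te, y_te)) in zip(shadow_models, shadow_splits):
        for X, y, label in ((X_tr, y_tr, 1), (X_te, y_te, 0)):
            y_vecs = model.predict_proba(X)  # one prediction vector per record
            for true_class, y_vec in zip(y, y_vecs):
                records.append((true_class, y_vec, label))
    return records
\end{verbatim}

The resulting records can then be partitioned by true class, as described next, to train one attack model per class.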
Afterwards, $D_{attack}^{train}$ is split into $c_{target}$ partitions, one for each value of $y$, the true class label. Then, according to Shokri et al. \cite{shokri2017membership}, for every class $y_i$ a separate attack model $f_{attack_i}$ should be trained. Figure \ref{fig:attack-training} illustrates the attack model training process. Ultimately, the resulting task is a binary classification task. This is a standard task in machine learning, allowing the attack model to be built with any machine learning framework. Within this binary classification task, the attack model learns how to distinguish records that belong to the "\textit{in}" group from records that belong to the "\textit{out}" group where \textit{both are classified with high confidence by the target model}, as Shokri et al. \cite{shokri2017membership} highlight. This task is much subtler than merely distinguishing between records classified with high or low confidence, which is not what the attack model learns \cite{shokri2017membership}.

\begin{figure}[ht]
\centering
\includegraphics[scale=0.093]{figures/mia-attack-model-training.png}
\caption{The attack model training process involves $n$ shadow models with partially overlapping training and testing datasets. For every shadow model, all prediction results are obtained and each result is flagged with "\textit{in}" if the respective record was in the training dataset of the shadow model and with "\textit{out}" otherwise. The resulting tuples are extended with a record's true class $y$ and then used as training data for the attack model.}
\label{fig:attack-training}
\end{figure}

\subsection{Scientific Studies}
In this chapter, studies about membership inference attacks are presented. The chapter begins with an overview of the most notable scientific publications. Afterwards, key studies are reviewed in detail. As many publications include a chapter about mitigation strategies, these are also mentioned in the overview. Mitigation strategies are discussed later on.

\subsubsection{Overview of Studies}
Several studies about membership inference attacks have been conducted. Shokri et al. \cite{shokri2017membership} use six different datasets and four different target models to explore the possible threats of membership inference attacks. They find that machine learning models indeed leak information about their training data and show the success of their shadow model training technique. They also discuss multiple mitigation techniques such as regularization or restriction of the prediction vector. Truex et al. \cite{truex2018towards} build directly on this work by developing a framework for membership inference attacks. They survey attacks against different types of machine learning models and explore the success factors of membership inference attacks. A study of mitigation strategies such as model hardening and differential privacy is also conducted, revealing a need for more studies on this matter. Hayes et al. \cite{hayes2017logan} focus on attacks against generative machine learning models, a class of neural networks that is very popular in many applications nowadays. Generative adversarial networks use a combination of a discriminative and a generative model and are mostly used to generate images or videos \cite{goodfellow2014generative}. Hayes et al. show that membership inference attacks work better against discriminative models than against generative models.
Moreover, they experiment with regularization techniques as mitigation strategies and report mixed success as well as need for more research on that topic. Salem et al. \cite{salem2018ml} study applicability of membership inference attacks by gradually reducing the assumptions on the attack model. They experiment with a lower amount of shadow models and attack training data generation through so-called data transferring attacks. They also explore means of reduction of overfitting to oppose membership inference attacks. Jia et al. \cite{jia2019memguard} develop a defense framework against membership inference attacks. The framework enables adding noise to output vectors of potential target models in order to confuse attack models. \subsubsection{Original Studies} Shokri et al. \cite{shokri2017membership} use six datasets to explore the possibilities of membership inference attacks against different target models. Table \ref{tab:datasets-shokri} gives an overview of the datasets. As target models, they use three MLaaS models and one model that is implemented locally. The models are listed in table \ref{tab:target-models-shokri}. \begin{table}[ht] \centering \begin{tabularx}{\textwidth}{|l|l|X|c|} \hline \textbf{Dataset} & \textbf{Task} & \textbf{Features} & \textbf{Entities} \\ \hline \textbf{\href{https://archive.ics.uci.edu/ml/datasets/adult}{UCI Adult}{}} & Binary classification & 14 features (binary and numerical) & 48,842 \\ \textbf{\href{https://www.kaggle.com/c/acquire-valued-shoppers-challenge/data}{Purchases}{}} & 2 to 100 class classification & 600 binary features & 197,324 \\ \textbf{\href{https://www.cs.toronto.edu/~kriz/cifar.html}{CIFAR}{}} & 10 to 100 class classification & 32 x 32 color images & 60,000 \\ \textbf{\href{http://yann.lecun.com/exdb/mnist/}{MNIST}{}} & 10 class classification & 32 x 32 images of handwritten digits & 70,000 \\ \textbf{\href{https://sites.google.com/site/yangdingqi/home/foursquare-dataset}{Locations}{}} & 128 class classification & 446 binary features & 11,592 \\ \textbf{\href{https://www.dshs.texas.gov/THCIC/Hospitals/Download.shtm}{Hospital Stays}{}} & 100 class classification & 6,170 binary features & 67,330 \\ \hline \end{tabularx} \caption{Overview of the datasets initially used to study membership inference attacks \cite{shokri2017membership}} \label{tab:datasets-shokri} \end{table} \begin{table}[ht] \centering \begin{tabularx}{\textwidth}{|l|X|} \hline \textbf{Model} & \textbf{Parameters} \\ \hline \textbf{\href{https://cloud.google.com/ai-platform}{Google Prediction API}} & No configuration parameters available \\ \textbf{\href{https://aws.amazon.com/machine-learning/}{Amazon ML} version I} & maximum number of passes over the training data, L2 regularization amount: $(10, 1e - 6)$ \\ \textbf{\href{https://aws.amazon.com/machine-learning/}{Amazon ML} version II} & maximum number of passes over the training data, L2 regularization amount: $(100, 1e - 4)$ \\ \textbf{Neural networks} & \href{https://github.com/torch/nn}{Torch7}, Convolutional neural network with two convolutions and max pooling layers, fully connected layer of size 128, SoftMax layer \\ \hline \end{tabularx} \caption{Overview of the machine learning models used in the initial publication about membership inference attacks \cite{shokri2017membership}} \label{tab:target-models-shokri} \end{table} Shokri et al. \cite{shokri2017membership} use precision and recall as well as accuracy as measures of attack success. 
Precision is defined as the fraction of records the attack model identifies as members of the training dataset that indeed are members. Recall is defined as the fraction of the training dataset that is correctly identified by the attack.
\par First, the accuracy of the attack is measured. The measurement is based on training and testing datasets of the same size, thus the baseline accuracy is $0.5$. On the neural network, the authors report a high degree of overfitting when measuring test dataset accuracy with the CIFAR dataset. The train-test accuracy gap on the CIFAR dataset is $0.5$ when using Amazon ML version II. Regarding the attack precision, on the CIFAR dataset with ten classes (CIFAR-10), the median precision is $0.72$ for a training set size of $10,000$. For the CIFAR-100 dataset, the precision is almost $1$, while recall is almost $1$ for both scenarios. Against the Google model, the precision is even higher in the same setup. On the Hospital Stays and Locations datasets, Shokri et al. \cite{shokri2017membership} evaluate an attack against the Google model, reaching a precision of more than $0.7$ for half of the classes and a precision between $0.6$ and $0.8$, respectively.
\par Furthermore, the \textit{effect of the different shadow model training techniques} was studied. For the noisy training data, precision is reported as dropping with increasing noise. However, the authors note that with noisy data the attack precision is still above the baseline. Therefore, they deduce that membership inference attacks work even with vague assumptions about the training data. To evaluate the approach of model-based synthesis on the Purchases dataset, $30,000$ fully synthetic records and $187,300$ marginal-based synthetic records are generated. The marginal-based approach uses marginal distributions of individual features to generate synthetic data. For the fully synthetic records, on average $156$ queries to the target model are needed in order to generate a single entry. Against the Google model with black-box access, the attack precision and recall are $0.935$ and $0.994$ for real data, $0.896$ and $0.526$ for model-based synthetic data, and $0.795$ and $0.991$ for the marginal-based synthetic data. The authors mention that the attack model performs very poorly on some classes when being trained on the model-based synthetic data. This effect might be due to the under-representation of some classes in the target model's training dataset, leading to bad synthetic records. Overall, Shokri et al. \cite{shokri2017membership} report that membership inference attacks perform very well even without any knowledge about the data in the training dataset of a black-box-access target model. A prerequisite is the ability of an adversary to access the target model and obtain predictions with high confidence from it.
\par Additional studies by Shokri et al. focus on the "\textit{effect of the number of classes and training data per class}" and the consequences of \textit{overfitting} for attack success. The authors claim that the more classes, the better a membership inference attack works. A study using the Purchases dataset and the Google model supports this claim: while with $20$ classes the average precision is around $0.6$, with $50$ classes the average precision rises to almost $0.9$. The authors try to explain this effect by analyzing the behavior of the target model.
With more classes, the model has to use more separating information from the records in order to distinguish inputs from the different classes accurately. Thus, the model leaks more information.
\par
Finally, the effect of overfitting is considered via a calculation of the train-test accuracy gap. Overfitting is directly related to a high train-test accuracy gap, because a high gap indicates that the model in question fails to generalize from its training data to the test data and is thus overfitted. Shokri et al. identify a positive correlation between the size of the accuracy gap of a model and the accuracy of the membership inference attack against the same model. However, they also find counter-examples where a model with a smaller accuracy gap is more vulnerable to membership inference attacks. The authors conclude that overfitting and the diversity of the training data directly enable the success of membership inference. They thereby identify privacy leakage as another negative effect that overfitting causes for machine learning models, next to performance loss.

\subsubsection{Further Studies}
Following the initial paper about membership inference attacks, Truex et al. \cite{truex2018towards} conduct experiments with the same datasets: Adult, MNIST, CIFAR and Purchases. Additionally, they carefully define the attack setup and process. They highlight that the training dataset of the target model heavily influences the success of an attack: the higher the number of classes and the higher the in-class standard deviation, the higher the accuracy of a membership inference attack. They explain the effect of the number of classes with the underlying space ${\rm I\!R}$ of the dataset, which they assume to be divided more and more finely for higher numbers of classes. This would result in smaller regions for individual classes, and these regions would more closely frame the individual data records of the associated class. Consequently, the margins between class regions become smaller and smaller, with all records lying closer to the decision borders. This gives individual records a greater influence on the decision borders. Thus, Truex et al. argue, an adversary would identify a data record with higher probability, as the model under attack is more likely to be "\textit{impact}(ed)" by individual instances in its decisions. Regarding the in-class standard deviation, Truex et al. claim that a higher standard deviation inside a class favors membership inference attacks. They argue that if records within the same group differ substantially, then the consideration of every single instance will influence the decision structure of the target model. The authors underpin their claims with a comparison of the aforementioned datasets using the respective in-class standard deviation, number of classes and attack accuracy. In general, they claim that the higher the complexity of the classification problem, the higher the accuracy of an attack, and that the success of membership inference attacks is driven by the underlying data of the target model. They add that some factors may be dominant over others, as they find Purchases-100 to result in a higher attack accuracy than Purchases-50 while the in-class standard deviation is nearly equal. Additionally, they study the selection of the model type for all three models involved in an attack: the target model, the attack model and the model for attack model data generation ("shadow model"). The experiments involve measuring the attack accuracy and its standard deviation while changing the models.
They reveal that attack success depends on the target model, while the other models are interchangeable. Therefore, the authors describe membership inference attacks as "\textit{transferable}" \cite{truex2018towards}. As a consequence, an adversary might create a successful attack model without knowledge of the best model type for membership inference. Moreover, Truex et al. study the influence of the attacker's knowledge about the target data, similarly to the experiments by Shokri et al. with noisy data. They obtain similar findings: even when $10\%$ of the data is distorted by replacing features with random values ("noise"), attack accuracy remains high. When inspecting the shadow model training data, they find a similar noise resilience. They furthermore suggest that shadow model training datasets should be sufficiently large, as they find some significant increases in performance when carrying out experiments on the Purchases-20 and Purchases-50 datasets with varying shadow model training dataset sizes. To sum up, Truex et al. report new insights about membership inference attacks, such as the observation that the attacks are mainly influenced or enabled by the target model and the underlying data, and that the attack and attack data generation models are interchangeable.
\par
Hayes et al. \cite{hayes2017logan} focus on membership inference attacks against generative machine learning models. They again use CIFAR-10, as well as Labeled Faces in the Wild (LFW), a dataset of face images, and Diabetic Retinopathy (DR), a dataset of retina images. The authors perform white-box and black-box attacks against three different generative adversarial networks (GANs) with one fixed GAN as the attack model. They also use a black-box setting with limited auxiliary knowledge: here, an adversary has knowledge of about $20-30\%$ of the training or testing set of the target model. The black-box attack without and with auxiliary knowledge on the LFW dataset yields an accuracy of $0.4$ and $0.63$, respectively. With the CIFAR-10 dataset, accuracies of $0.37$ and $0.58$, respectively, are reached, while $0.22$ and $0.81$ are the results on DR. Note that random guessing yields an accuracy of $0.1$ on LFW and CIFAR-10 and $0.2$ on DR. Additionally, Hayes et al. \cite{hayes2017logan} underpin the finding of Truex et al. \cite{truex2018towards} that different attack models yield similar attack performance. Moreover, the authors claim that attack accuracy is directly correlated with the quality of the generated training data for the attack model. They report that the target model, if used for data generation, generates better data than an attack data generation model, and that in general better training data is created after more training epochs of the models. All in all, Hayes et al. show that membership inference attacks against GANs are possible and discover techniques to improve attack performance.
\par
Salem et al. \cite{salem2018ml} simplify membership inference attacks by removing multiple requirements on the adversary. First, they perform membership inference using only one shadow model, in contrast to Shokri et al. \cite{shokri2017membership}, while using the same datasets. Notably, their attack with only one shadow model yields almost equal performance regarding precision and recall on all datasets. They also study the need for knowledge about the structure of the target model.
By changing the parameters of the target neural network model and performing an attack using the Purchase-100 dataset, they obtain similar results despite the changed parameters. Therefore, the authors claim that knowledge about the hyperparameters of a target model is not required to perform the attack. To test the influence of knowledge about the classification algorithm of the target model, Salem et al. create a shadow model that combines different algorithms. Thereby, the behaviors of all algorithms can be taught to an attack model with a single shadow model. They combine logistic regression, a multilayer perceptron and random forests, and evaluate the combined model on the Purchase-100 dataset against each of these algorithms alone as the target model. Moreover, they use a single shadow model against each target model, always with the same algorithm as the target, for comparison. The results indicate that the combined model performs similarly to the respective single models, revealing only a slight skew for random forests as the target algorithm. A second study by Salem et al. introduces a novel technique for synthetic attack data generation. Here, the shadow model generates training data for the attack model independently of the distribution of the target model's training data. The underlying idea is to teach the attack model only general knowledge about membership and non-membership in a training dataset of a target machine learning model. In their experiment, the authors randomly pick one of the datasets for attack data generation and one as the target training dataset, and overall achieve a precision of more than $0.7$ and a similar recall, although there are some outliers. The authors claim that this procedure also simplifies the generation of attack training data, as large numbers of queries against the target model become unnecessary. The third approach by Salem et al. focuses on minimizing all requirements on the attack model by completely dispensing with shadow models and using unsupervised binary classification \cite{salem2018ml}. The authors claim that membership inference with solid accuracy is possible even in this setting. Overall, Salem et al. present some effective simplifications for membership inference attacks, such as the reduction of the number of shadow models or the use of attack training data that is completely unrelated to the target dataset.

\subsection{Mitigation Strategies}
In order to prevent membership inference attacks, several studies share their insights about the best mitigation strategies. Shokri et al. \cite{shokri2017membership} discuss four techniques: they (1) leave out classes with low probability from the prediction vector, (2) round the values in the prediction vector, (3) increase the output vector's entropy and (4) apply regularization. Regularization aims at reducing model complexity, so that the fit of the model to its training data, especially to outliers, is not too tight \cite{ng2004feature}. The authors find their membership inference attacks to withstand most of their mitigation attempts. For example, leaving out classes from the prediction vector, even reducing the output to only the class with the highest probability, does not decrease attack accuracy significantly. Shokri et al. report regularization as the only useful technique, but at the same time warn not to over-use it, so as to prevent the potential target model from losing its prediction performance. Truex et al.
\cite{truex2018towards} present "\textit{model hardening}" and "\textit{API hardening}" to prevent membership inference attacks. In the case of model hardening, they suggest (1) choosing a machine learning model or algorithm that is resilient against membership inference, (2) controlling the model during training in order to prevent overfitting, e.g. by using certain model parameters, (3) applying regularization and (4) applying anonymization techniques. With API hardening, Truex et al. describe techniques that affect the deployed model in use. Here, they recommend techniques similar to those of Shokri et al. \cite{shokri2017membership}. Moreover, they recommend the use of differential privacy mechanisms. Hayes et al. \cite{hayes2017logan} study the use of "\textit{Weight Normalization}" and "\textit{Dropout}". They report Dropout to be considerably more effective than Weight Normalization, but also note that Dropout significantly prolongs the training phase. Similarly to Truex et al., they also study differential privacy mechanisms. Salem et al. \cite{salem2018ml} introduce "\textit{model stacking}" as a mitigation strategy in addition to Dropout, the latter being applicable only to neural networks. This technique uses ensemble learning to split the training of the components of a potential target model across different parts of the training dataset. Thereby, Salem et al. aim to reduce the degree of overfitting of the model. Their experiments with model stacking reveal reductions in attack performance of $30\%$ and more on different datasets. To sum up, different privacy-preserving techniques are being researched, ranging from simple reductions of model output complexity, through new model training procedures, to privacy systems such as differential privacy.

\section{Conclusion}
\subsection{Summary}
Machine learning offers many possibilities to create new services or simplify existing ones, but also has high requirements regarding hardware, software and know-how. With Machine Learning as a Service, the possibilities of machine learning have become accessible to a broad range of institutions. However, large amounts of data underlie these systems and are the basis for successful learning. Often, these data are privacy-sensitive. MLaaS and the potential privacy-sensitivity of the underlying data have given rise to a new threat to privacy: membership inference attacks, i.e. the identification of the fact that a given data record was used to train the machine learning model at hand.
\par
This report gives a general introduction to the developments that made membership inference attacks possible, describes the theoretical process of performing an attack with an attack model and shadow models, and provides an overview of key studies about membership inference attacks and related defense strategies.
\subsection{Discussion}
Membership inference attacks offer an interesting field of study. In some studies, individual questions remain open. Shokri et al. \cite{shokri2017membership} claim that the more shadow models are used for attack data generation, the more accurate the attack model will be. However, they provide no empirical evidence for this claim. Moreover, Salem et al. \cite{salem2018ml} show that a single shadow model offers similar performance in comparison to multiple shadow models. Therefore, the aforementioned untested claim in the study by Shokri et al. is most likely wrong. The mistake could have been avoided by an additional experiment on the number of shadow models.
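As a rough illustration of how such an experiment could be set up, the following sketch trains a varying number of shadow models on purely synthetic data and reports the resulting attack precision and recall. It is a simplified, hypothetical pipeline built with scikit-learn; the dataset, the model choices and the feature construction are assumptions made for illustration and do not reproduce the setups of the cited studies.
\begin{verbatim}
# Hypothetical sketch: effect of the number of shadow models on attack
# precision/recall. Synthetic data only; not the setup of the cited studies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=12000, n_features=20,
                           n_informative=10, n_classes=5, random_state=0)

# Disjoint data pools for the target model and for the shadow models.
X_tgt, y_tgt, X_sh, y_sh = X[:4000], y[:4000], X[4000:], y[4000:]
tgt_train_X, tgt_train_y = X_tgt[:2000], y_tgt[:2000]   # members
tgt_test_X,  tgt_test_y  = X_tgt[2000:], y_tgt[2000:]   # non-members
target = RandomForestClassifier(n_estimators=100, random_state=0)
target.fit(tgt_train_X, tgt_train_y)

def attack_features(model, X_in, y_in):
    """Top-3 confidence values plus the confidence of the true class."""
    probs = model.predict_proba(X_in)
    top3 = np.sort(probs, axis=1)[:, -3:]
    true_conf = probs[np.arange(len(y_in)), y_in].reshape(-1, 1)
    return np.hstack([top3, true_conf])

for k in (1, 2, 5, 10):                      # number of shadow models
    feats, labels = [], []
    for i in range(k):
        idx = rng.choice(len(X_sh), size=2000, replace=False)
        sh_in_X, sh_in_y = X_sh[idx[:1000]], y_sh[idx[:1000]]
        sh_out_X, sh_out_y = X_sh[idx[1000:]], y_sh[idx[1000:]]
        shadow = RandomForestClassifier(n_estimators=100, random_state=i)
        shadow.fit(sh_in_X, sh_in_y)
        feats.append(attack_features(shadow, sh_in_X, sh_in_y))
        labels.append(np.ones(1000))         # "in" records
        feats.append(attack_features(shadow, sh_out_X, sh_out_y))
        labels.append(np.zeros(1000))        # "out" records
    attack = LogisticRegression(max_iter=1000)
    attack.fit(np.vstack(feats), np.concatenate(labels))

    # Evaluate against the target model: members vs. non-members.
    eval_X = np.vstack([attack_features(target, tgt_train_X, tgt_train_y),
                        attack_features(target, tgt_test_X, tgt_test_y)])
    eval_y = np.concatenate([np.ones(2000), np.zeros(2000)])
    pred = attack.predict(eval_X)
    print(k, "shadow models:",
          "precision", round(precision_score(eval_y, pred), 3),
          "recall", round(recall_score(eval_y, pred), 3))
\end{verbatim}
Comparing the printed precision and recall across the values of $k$ would directly test whether additional shadow models improve the attack in this simplified setting.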
\par
The discussion of the effect of the number of classes as explained by Truex et al. \cite{truex2018towards} seems to link to the "curse of dimensionality" described by Bellman \cite{Bellman:DynamicProgramming}. The curse of dimensionality states that with a higher number of dimensions, a latent space gets larger and increasingly empty, while the relative distances between the instances in it become increasingly similar. Following their argumentation, a higher dimensionality of the training dataset under attack should cause a higher vulnerability to membership inference attacks, as individual instances would lie closer to the decision borders and influence the decision processes of the target model \cite{truex2018towards}. Truex et al. state that the in-class standard deviation directly affects membership inference attack success, but do not give real empirical evidence for this claim in their publication. In the corresponding table, they even have an example that opposes their hypothesis: CIFAR-10 has a lower in-class standard deviation than Purchases-10 but a higher attack accuracy, while both datasets have 10 classes. In general, Truex et al. \cite{truex2018towards} seem to replicate or at most refine a lot of the work of Shokri et al. \cite{shokri2017membership}, such as giving a formal definition of membership inference attacks and of their order of events with shadow models, which they call "\textit{(...) the first generalized framework for the development of a membership inference attack model}". This claim is questionable, as Shokri et al. \cite{shokri2017membership} introduced a very similar architecture before, though they never called it a "\textit{framework}". Additionally, Truex et al. \cite{truex2018towards} do not reference Shokri et al. \cite{shokri2017membership} for first finding that the number of classes is positively correlated with the accuracy of a membership inference attack. Instead, they avoid describing the phenomenon directly in their publication (chapter 4.2), although it is observable in the table in that chapter.
\subsection{Outlook}
With new machine learning models and refinements of already available models being published all the time, membership inference attacks remain an important topic. Every new model is a potential target model and a potential attack model. Therefore, frameworks for quickly checking the vulnerability of new models would be an interesting topic for further research. In general, the study of mitigation techniques remains important, as the aforementioned studies show that many mitigation ideas fail to prevent membership inference attacks. Additionally, topics like the influence of the dimensionality of the data under attack on the attack performance could be addressed in future work.
%
% ---- Bibliography ----
%
% BibTeX users should specify bibliography style 'splncs04'.
% References will then be sorted and formatted in the correct style.
%
% \bibliographystyle{splncs04}
% \bibliography{mybibliography}
%
\printbibliography
\end{document}
{ "alphanum_fraction": 0.7946819978, "avg_line_length": 157.5283018868, "ext": "tex", "hexsha": "a7587f900faeb17c45c099518257691d441c126d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f2c6734021ebc3dd69eeceb38117471da1d0c9a3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "LuGeNat/ul-ntda-mia", "max_forks_repo_path": "report/gehrke-mia-report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f2c6734021ebc3dd69eeceb38117471da1d0c9a3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "LuGeNat/ul-ntda-mia", "max_issues_repo_path": "report/gehrke-mia-report.tex", "max_line_length": 1684, "max_stars_count": null, "max_stars_repo_head_hexsha": "f2c6734021ebc3dd69eeceb38117471da1d0c9a3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "LuGeNat/ul-ntda-mia", "max_stars_repo_path": "report/gehrke-mia-report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8889, "size": 41745 }
\section{PacketSkip Improvements}
\label{sec:solution}
In this section, we first describe our solution for the storage and update message issue. Second, we present our approach for a host cache.

\subsection{Protocol revisions}
\label{subsec:revisions}
Version 1.0 of the PacketSkip protocol had a notable redundancy in the content of update messages sent to PacketSkip. The additional features were added to the index elements on the sending peer. Additional features are essentially a copy of all other elements in the update set. As long as all features are sent in one message, as is the case when sending an update to an entry node of PacketSkip, the additional feature sets per element contain no essential information. The additional feature sets only become relevant when index items are dispersed across several nodes. Therefore, it is sufficient for the receiving entry node of PacketSkip to build the additional feature sets from all other index elements. This is the first change in the protocol (v1.1). Via this simple change we avoid adding capacity information multiple times to one update message. Since update messages to PacketSkip are frequent and sent by all peers in the overlay, this has a noticeable effect on the overall maintenance traffic costs.

In addition, v1.2 reduces redundancy on the storage side and in PacketSkip's internal update traffic. The goal was not to impact the generally positive behavior introduced in v1.0. Some concessions regarding the search duration were nevertheless necessary. If search duration is paramount, v1.2 can be regarded as optional.

Before adding additional features to an index element, all features are sorted lexicographically by their feature id. We regard all features as items in an ordered transaction set. In our example the ordered transaction would be $\{ BW,\;CPU,\;MEM,\;STOR \}$. Then we add to the additional feature set of an index element only those features that are suffixes of its feature id with regard to the ordered transaction set. In our example this leads to the following elements:
\begin{equation*}
\begin{split}
(5000, \;BW, \;\{ CPU\text{:}2400, MEM\text{:}123, STOR\text{:}9583 \}, \\ \;peer\text{=}x | ip(x)\text{:}port(x)) \\
(2400, \;CPU, \;\{ MEM\text{:}123, STOR\text{:}9583 \}, \\ \;peer\text{=}x | ip(x)\text{:}port(x)) \\
(123, \;MEM, \;\{ STOR\text{:}9583 \}, \;peer\text{=}x | ip(x)\text{:}port(x)) \\
(9583, \;STOR, \;\{ \}, \;peer\text{=}x | ip(x)\text{:}port(x))
\end{split}
\end{equation*}
We have now reduced the costs for the additional features from $d(d-1)$ to $(d^2-d)/2$, with $d$ being the number of all features. Our costs are still in $O(d^2)$, but the practical effect is a reduction of 50\% of the storage and update costs for the additional features introduced in v1.0.

In v1.0 a peer was allowed to choose the feature with the smallest search range as the primary feature. With revision v1.2 we must forgo this free choice. Instead, the main feature is now determined by the lexicographical order. An entry node will sort the search dimensions lexicographically and perform the search on the feature with the lexicographically smallest id. For example: $\{ BW\text{:}[5000,\infty), \;\{ STOR\text{:}[5000,6000], \;CPU\text{:}[999,2730] \} \}$. This guarantees that all features which are lexicographically higher than the main feature are included in any index element. The rest of the v1.0 search protocol can remain unchanged. Secondary features are still taken into account for each potential search result before a result set is sent to the seeker.
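To make the suffix-based construction concrete, the following sketch builds the v1.2 index elements for a single update set. It is illustrative Python with hypothetical field names, not the actual PacketSkip implementation; the final assertion merely checks the $(d^2-d)/2$ count for the four-feature example.
\begin{verbatim}
# Illustrative sketch of the v1.2 index element construction (hypothetical
# field names): each element only carries the features that follow its own
# feature id in the lexicographic order.
def build_index_elements(features, peer, host):
    """features: dict mapping feature id -> capacity value, e.g.
    {"BW": 5000, "CPU": 2400, "MEM": 123, "STOR": 9583}"""
    ordered = sorted(features)                  # ordered transaction set
    elements = []
    for i, fid in enumerate(ordered):
        additional = {g: features[g] for g in ordered[i + 1:]}  # suffix only
        elements.append({
            "value": features[fid],
            "feature": fid,
            "additional": additional,
            "peer": peer,
            "host": host,
        })
    return elements

elems = build_index_elements(
    {"BW": 5000, "CPU": 2400, "MEM": 123, "STOR": 9583}, "x", "ip(x):port(x)")
# Total additional entries: 3 + 2 + 1 + 0 = (d*d - d) / 2 for d = 4 features.
assert sum(len(e["additional"]) for e in elems) == 6
\end{verbatim}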
The following is noteworthy: performing a search on the dimension with the shortest range will not necessarily lead to a minimal number of search hops or to the shortest search duration. First of all, there is a random factor in choosing an arbitrary entry node, which is unavoidable for good load balancing. Such an entry node may be further away from the shortest range than from a longer range. More importantly, the number of nodes responsible for a given range depends heavily on the distribution of the index elements. Given a normal distribution, a short range close to the mean may result in more search hops than a broad range close to the tail of the distribution. The drawback of not choosing the shortest range is therefore somewhat alleviated.

\subsection{Communication host cache}
\label{subsec:cache}
We introduce a cache on PacketSkip nodes that maps node ids to actual hosts. Each node has its own cache. The cache is not part of the DHT object, but is maintained by the PacketSkip service running on a peer. When a node switches to a different peer under churn, the cache is therefore reset. Incoming messages already contain information about the sending host (ip, port), the sending peer (peer id) and the sending node (node id). With each incoming message the cache mapping is updated, with the node id being the key and the host information being its value. Before sending a message to another node, the sender looks up the host information for the given node id in the cache and sends the message directly to this host. A lookup via the p2p overlay is thus bypassed.

There can be cache misses when a peer leaves the overlay or when the responsibility for a PacketSkip node has changed. In this case, a traditional overlay lookup is performed and the node$\rightarrow$host mapping is removed from the cache. A cache miss is identified when the sender has not received an ACK within a certain timeout. Since we assume that a direct message to an online host should result in a fast ACK, we can set the timeout quite strictly. This avoids a notable latency for cache misses. Choosing the timeout too strictly, however, can lead to redundant messages. The choice of an appropriate timeout may be host-dependent and can also be included as additional information in the host cache.
% This is however beyond the scope of this paper since we have evaluated with a constant net latency.
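A minimal sketch of such a cache is given below. The helper functions \texttt{send\_direct} and \texttt{overlay\_lookup} are hypothetical stand-ins for the actual messaging layer and DHT lookup; only the caching and eviction logic follows the description above.
\begin{verbatim}
# Minimal sketch of the node -> host cache (hypothetical helpers; the real
# logic sits inside the PacketSkip service running on each peer).
class HostCache:
    def __init__(self, send_direct, overlay_lookup, timeout=0.2):
        self.cache = {}                    # node id -> (ip, port)
        self.send_direct = send_direct     # returns True if an ACK arrived
        self.overlay_lookup = overlay_lookup
        self.timeout = timeout             # strict; host-dependent in general

    def on_message(self, node_id, ip, port):
        # Every incoming message refreshes the mapping for its sender.
        self.cache[node_id] = (ip, port)

    def send(self, node_id, message):
        host = self.cache.get(node_id)
        if host is not None:
            if self.send_direct(host, message, self.timeout):
                return                     # direct delivery, overlay bypassed
            del self.cache[node_id]        # cache miss: evict stale mapping
        # Fall back to a traditional overlay lookup and routed delivery.
        self.overlay_lookup(node_id, message)
\end{verbatim}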
{ "alphanum_fraction": 0.7630363036, "avg_line_length": 168.3333333333, "ext": "tex", "hexsha": "98198d00e1e275283e56c3d45e4bb14d1b9f79d6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "421465667e219085d2dde08a8fa898da44ad38fe", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "andifunke/packetskip", "max_forks_repo_path": "paper/chapter-solution.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "421465667e219085d2dde08a8fa898da44ad38fe", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "andifunke/packetskip", "max_issues_repo_path": "paper/chapter-solution.tex", "max_line_length": 893, "max_stars_count": 1, "max_stars_repo_head_hexsha": "421465667e219085d2dde08a8fa898da44ad38fe", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "andifunke/packetskip", "max_stars_repo_path": "paper/chapter-solution.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-22T18:36:36.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-22T18:36:36.000Z", "num_tokens": 1363, "size": 6060 }
% !TeX root = ../main.tex
% !TEX spellcheck = en_GB
\chapter{Architecture}
\label{sec:Architecture}
This chapter describes the overall system architecture, the chosen development platforms and how the different subsystems communicate with each other. The architecture is illustrated with a Block Definition Diagram (BDD), an Internal Block Diagram (IBD) and an allocation diagram.
\FloatBarrier
{ "alphanum_fraction": 0.7980049875, "avg_line_length": 40.1, "ext": "tex", "hexsha": "62bb39be43f769ac1f24e7a670f50f2c409d0775", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cf26993e2642b0f5529e7ec1f309446ee5c3cbfa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lsangild/ETLYAK", "max_forks_repo_path": "Report/Report/architecture.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "cf26993e2642b0f5529e7ec1f309446ee5c3cbfa", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lsangild/ETLYAK", "max_issues_repo_path": "Report/Report/architecture.tex", "max_line_length": 154, "max_stars_count": null, "max_stars_repo_head_hexsha": "cf26993e2642b0f5529e7ec1f309446ee5c3cbfa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lsangild/ETLYAK", "max_stars_repo_path": "Report/Report/architecture.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 83, "size": 401 }
\section{Concepts} \label{sec:concepts} \newpage \subsection{Applying Constraints} \label{sec:applying_constraints} \subsubsection{Requires Clauses} \label{sec:requires_clauses} \lstinputlisting[linerange={6-13}]{examples/misc.cc} \begin{lstlisting} []<typename T>(T x, T y) requires EqualityComparable<T> { return x == y; }; \end{lstlisting} \lstinputlisting[linerange={18-22,52-61,138-138}]{examples/matrix.hh} \lstinputlisting[linerange={24-28}]{examples/misc.cc} \lstinputlisting[linerange={42-52}]{examples/misc.cc} \newpage \subsubsection{Overload Resolution} \label{sec:overload_resolution} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{figures/overloading.eps} \end{figure} \lstinputlisting[linerange={47-50,52-59,61-62}]{examples/advance.cc} \newpage \subsubsection{Logical Operations} \label{sec:logical_operations} \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{figures/logical_operations.eps} \end{figure} \subsubsection*{Conjunctions} \label{sec:conjunctions} \lstinputlisting[linerange={30-35}]{examples/misc.cc} \subsubsection*{Disjunctions} \label{sec:disjunctions} \lstinputlisting[linerange={15-22}]{examples/misc.cc} \newpage \subsection{Defining Requirements} \label{sec:defining_requirements} \subsubsection{Requires Expressions} \label{sec:requires_expressions} \lstinputlisting[linerange={152-169}]{examples/concepts.h} \subsubsection*{Simple Requirements} \label{sec:simple_requirements} \begin{lstlisting} template<typename T> concept ForwardIterator = requires { T(); T{}; }; \end{lstlisting} \subsubsection*{Type Requirements} \label{sec:type_requirements} \begin{lstlisting} template<typename T> concept ForwardIterator = requires { typename std::iterator_traits<T>::value_type; typename std::iterator_traits<T>::difference_type; typename std::iterator_traits<T>::reference; typename std::iterator_traits<T>::pointer; typename std::iterator_traits<T>::iterator_category; }; \end{lstlisting} \subsubsection*{Compound Requirements} \label{sec:compound_requirements} \begin{lstlisting} template<typename T> concept ForwardIterator = requires(T x) { { *x } -> typename std::iterator_traits<T>::reference; { ++x } -> T&; { x++ } -> T; } && requires(T x, T y) { { std::swap(x, y) } noexcept; { std::swap(y, x) } noexcept; }; \end{lstlisting} \subsubsection*{Nested Requirements} \label{sec:nested_requirements} \lstinputlisting[linerange={191-199}]{examples/concepts.h} \newpage \subsection{Giving Names to Concepts} \label{sec:giving_names_to_concepts} \lstinputlisting[linerange={175-180}]{examples/concepts.h} \lstinputlisting[linerange={182-184,186-186,188-189}]{examples/concepts.h} \subsubsection{``Good'' Concepts} \label{sec:good_concepts}
{ "alphanum_fraction": 0.6827296248, "avg_line_length": 29.4476190476, "ext": "tex", "hexsha": "406a3a9e27965229cd9c45f4426c072be10b4935", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "CaffeineViking/concepts-primer", "max_forks_repo_path": "primer/sections/concepts.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "CaffeineViking/concepts-primer", "max_issues_repo_path": "primer/sections/concepts.tex", "max_line_length": 81, "max_stars_count": 17, "max_stars_repo_head_hexsha": "041cec40fa4a25cd954ce91da6d9d10ee31499a3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "CaffeineViking/concepts-primer", "max_stars_repo_path": "primer/sections/concepts.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-25T06:50:47.000Z", "max_stars_repo_stars_event_min_datetime": "2019-05-01T21:49:43.000Z", "num_tokens": 826, "size": 3092 }
\chapter{Obesity-associated genetic signatures and cancer}
\label{cha:obesity_genetic_signatures_and_cancer}

The obesity-associated genetic signatures are central to this project, as a means to clarify the relationship between \gls{bmi} and tumour biology. In this chapter, the previously identified obesity-associated genetic signatures from the studies conducted by \citet{Creighton2012} and \citet{Fuentes-Mattei2014} are examined in turn to judge the agreement of these signatures with patient \gls{bmi} and \gls{bmi} status, as presented in their results. After this, novel obesity-associated genetic signatures were identified in the Creighton \textit{et al.} (CR) data set and were compared with the obesity-associated genetic signatures from the \citet{Creighton2012} and the \citet{Fuentes-Mattei2014} studies. Lastly, the presence of common genes or pathways associated with obesity in multiple types of cancer is explored using the data sets from \gls{icgc} (\citetalias{ICGC}; \citealp{Zhang2011}).

\section{Obesity-associated genetic signature from the\\\citet{Creighton2012} study}
\label{sec:creighton_obesity_metagene}

An obesity metagene was created using the obesity-associated genetic signature from the \citet{Creighton2012} study. There was no description of the normalisation method used by Creighton \textit{et al.} when they first analysed their data, so the widely used \gls{rma} method was used to normalise the CR data set \citep{Irizarry2003}. The obesity metagene was created in the standardised, \gls{rma}-normalised CR data set, and was plotted above a heatmap of the sample gene expression to check whether the metagene scores were in accordance with the overall gene expression of the samples (\cref{fig:crmetaheat}).

\begin{figure}[tb!]
\centering
\includegraphics[page=3,width=0.8\linewidth]{results1/creighton_mg_heatmap1}
\caption[Obesity metagene from the \citet{Creighton2012} study and sample gene expression in CR data]{Heatmap showing the obesity metagene from the \citet{Creighton2012} study with the sample gene expressions of the obesity-associated genes in the CR data. Level of expression is represented in the top left histogram, where low and high gene expression are colour-coded with blue and red, respectively. Each row of the heatmap represents a gene from the obesity-associated genetic signature, and each column of the heatmap represents a sample from the CR data. The obesity-associated metagene scores of the samples are shown as a separate row at the top of the heatmap, and the tree diagram of the hierarchical clustering of the genes is shown on the left of the heatmap. For clarity, the sign of the metagene scores was reversed in order to match the results from the \citet{Creighton2012} study.}
\label{fig:crmetaheat}
\end{figure}

As shown in \cref{fig:crmetaheat}, a high obesity-associated metagene score reflects low expression in the majority of the genes in the signature and, in contrast, a low obesity-associated metagene score reflects high expression in the majority of the genes in the signature. This was consistent with the reported property of the obesity-associated genetic signature by \citet{Creighton2012} (see \cref{ssub:creighton_study}). To provide further evidence that the obesity metagene scores were in fact associated with both the \gls{bmi} status and the \gls{bmi} value of the patients, a box plot and a scatter plot were created, respectively (\cref{fig:crmetaboxplot}).

\begin{figure}[tb!]
\centering
\includegraphics[page=4,width=0.45\linewidth]{results1/creighton_mg_heatmap1}
\hfill
\includegraphics[page=5,width=0.45\linewidth]{results1/creighton_mg_heatmap1}
\caption[Obesity metagene from the \citet{Creighton2012} study and sample \gls{bmi}/\gls{bmi} status in CR data]{Box plot and scatter plot showing the association of the obesity metagene from the \citet{Creighton2012} study with the sample \gls{bmi} status and \gls{bmi}, respectively, from the CR data. In the box plot, the p-values above the groups represent the statistical significance of the association of the obesity metagene with the overweight or obese group compared with the normal weight group. The \gls{anova} p-value shows the statistical significance of the association of the obesity metagene with the sample \gls{bmi} groups. In the scatter plot, the $R^2$- and p-values describe the adjusted coefficient of determination of the regression line and the statistical significance of the linear model used to draw the regression line, respectively.}
\label{fig:crmetaboxplot}
\end{figure}

\cref{fig:crmetaboxplot} clearly showed that the obesity metagene from the \citet{Creighton2012} study was significantly associated with the obese group of patients, as well as with patient \gls{bmi} value. It should be noted here that the obesity metagene was significantly associated with the samples from the patients that were obese, but not with the samples from the patients that were overweight. This was due to the fact that the obesity-associated genetic signature was originally identified from the comparison of the patients that were obese with the patients that were not obese, and therefore the obesity metagene scores were significant for the obese group, but not for the overweight group. Another thing to note here is that even though the regression line in the scatter plot showed a statistically significant association with patient \gls{bmi}, the patient \gls{bmi} values seemed to be dispersed almost randomly across the obesity metagene scores. This suggests that the obesity metagene from the \citet{Creighton2012} study may not be associated with patient \gls{bmi} as strongly as reported. \\

\noindent Now that the association of the obesity metagene from the \citet{Creighton2012} study had been established in the CR data set, the Creighton \textit{et al.} obesity metagene was generated in the \gls{icgc} cancer data. The direction of the obesity metagene was checked in the CR data first, so that high metagene scores reflected high patient \gls{bmi} and low metagene scores reflected low patient \gls{bmi} (\cref{fig:crmetaboxplot}). The transformation matrix was then created in the \gls{rma}-normalised CR data, as described in \cref{sub:svd}. All of the \gls{icgc} \gls{rnaseq} data were normalised as described in \cref{ssub:rna_seq_data}. Before the transformation matrix was applied to the log$_{10}$-normalised cancer data, the obesity metagene was created from both the standardised and the untouched (non-standardised) \gls{icgc} data, and the two were compared to determine which data format was better suited for the application of the transformation matrix (\cref{sec:comp_cr_raw_std_icgc}). From these results, the standardised data was found to be the most suitable format for the application of the transformation matrix. The transformation matrix was applied to each cancer data set in turn to generate an obesity metagene from each of the data sets.
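As a rough illustration of this kind of procedure, the sketch below derives per-gene metagene weights from one standardised expression matrix via \gls{svd} and applies those weights to a second, independently standardised data set restricted to the same signature genes. The matrix sizes and variable names are invented for illustration, and the sketch is not the exact code used in this project.
\begin{verbatim}
# Rough sketch of metagene extraction via SVD and its transfer to a second
# data set (made-up variable names; assumes genes x samples matrices that
# are already log-transformed and restricted to the signature probes).
import numpy as np

def standardise(expr):
    """Centre and scale each gene (row) to mean 0 and unit variance."""
    mu = expr.mean(axis=1, keepdims=True)
    sd = expr.std(axis=1, keepdims=True)
    return (expr - mu) / sd

def metagene_weights(expr_std):
    """First left singular vector = per-gene weights of the metagene."""
    u, s, vt = np.linalg.svd(expr_std, full_matrices=False)
    return u[:, 0]                      # acts as the transformation vector

def metagene_scores(expr_std, weights):
    """Project the standardised samples onto the gene weights."""
    return weights @ expr_std           # one score per sample

rng = np.random.default_rng(0)
train = rng.normal(size=(799, 103))     # e.g. signature probes x CR samples
other = rng.normal(size=(799, 200))     # same probes in another data set

w = metagene_weights(standardise(train))
scores_train = metagene_scores(standardise(train), w)
scores_other = metagene_scores(standardise(other), w)  # transferred metagene
\end{verbatim}
The transferred scores can then be tested against clinical variables (for example \gls{bmi}) in the second data set, as is done throughout this chapter.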
Each obesity metagene was plotted in a heatmap with the corresponding \gls{icgc} data set from which the obesity metagene was generated (\cref{fig:crmetaicgc}; \cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}). These heatmaps confirmed that the obesity metagene was able to capture the overall gene expression pattern in all of the \gls{icgc} cancer data, where the metagene scores reflected the expression levels of the majority of the genes in the signature. As before, the association of the obesity metagene with patient \gls{bmi} and \gls{bmi} status was examined in the respective cancer data sets (\cref{fig:crmetaicgc}; \cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}). Out of all the cancer types, only the \gls{blca} data set showed a significant association with the obesity metagene, and only for the overweight group (not the obese group).

\begin{figure}[htp!]
\centering
\includegraphics[page=3,width=0.8\linewidth]{results1/crtcga_std}\\
\includegraphics[page=4,width=0.45\linewidth]{results1/crtcga_std}
\hfill
\includegraphics[page=5,width=0.45\linewidth]{results1/crtcga_std}
\caption[Obesity metagene from the \citet{Creighton2012} study in the \acrshort{icgc} \acrshort{blca} data]{Heatmap, scatter plot and box plot showing the association of the obesity metagene from the \citet{Creighton2012} study with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \gls{icgc} \gls{blca} data. The results for the other \gls{icgc} cancer types are shown in \cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}. Scales, p-values and $R^2$-value are as described in previous figures.}
\label{fig:crmetaicgc}
\end{figure}

There could be several reasons for the apparent lack of association of the obesity metagene with patient \gls{bmi} in most of the \gls{icgc} data. First, the transformation matrix was derived from the CR microarray data, but the \gls{icgc} cancer data were generated via \gls{rnaseq}. Though the log$_{10}$-normalisation and standardisation of the data were the most appropriate adjustments to be made to the \gls{rnaseq} data, these adjustments were not equivalent to the \gls{rma}-normalisation method that was used on the microarray data. Second, none of the \gls{icgc} cancer data in this project originated from breast tumours, as in the CR data. Since the obesity-associated genetic signature was identified in breast cancer data, the signature may be specific to breast cancer and may not be generalisable to other cancer types. \\

\noindent To check whether the signature was specific to breast cancer microarray data, the same transformation matrix used in the \gls{icgc} data was applied to the breast cancer microarray data from the \citet{Print2016} study (referred to as the \gls{nzbc} data hereafter). The \gls{nzbc} data was normalised with the \gls{rma} method and the transformation matrix was applied to the normalised data to obtain the Creighton \textit{et al.} obesity metagene in the \gls{nzbc} data set. The generated obesity metagene was again compared with the gene expression of the samples in a heatmap, and the association of the metagene with patient \gls{bmi} and \gls{bmi} status was examined with scatter and box plots, respectively (\cref{fig:crmetaprint}). The Creighton \textit{et al.} obesity metagene managed to reflect the overall gene expression of the samples in the \gls{nzbc} data (\cref{fig:crmetaprint}).
However, as with the \gls{icgc} cancer data, the obesity metagene scores did not significantly associate with patient \gls{bmi} or \gls{bmi} status (\cref{fig:crmetaprint}). These results confirmed that the lack of association of the obesity-associated genetic signature from the Creighton \textit{et al.} study was not due to the technology with which the data were gathered (microarray or \gls{rnaseq}), nor to the cancer type from which the genetic signature was derived.

\begin{figure}[htp!]
\centering
\includegraphics[width=0.8\linewidth,page=3]{results1/cris_cr_trans_meta}\\
\includegraphics[width=0.45\linewidth,page=4]{results1/cris_cr_trans_meta}
\hfill
\includegraphics[width=0.45\linewidth,page=5]{results1/cris_cr_trans_meta}
\caption[Obesity metagene from the \citet{Creighton2012} study in the \gls{nzbc} data]{Heatmap, scatter plot and box plot showing the association of the obesity metagene from the \citet{Creighton2012} study with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \gls{nzbc} data. Scales, p-values and $R^2$-value are as described in previous figures.}
\label{fig:crmetaprint}
\end{figure}

Taken together, these results suggest that the obesity-associated genetic signature identified in the study conducted by \citet{Creighton2012} correlated with patient \gls{bmi} and \gls{bmi} status only in the CR data set. The lack of association with patient \gls{bmi} in other cancer data sets was not due to the type of technology platform with which the data were gathered, as neither the \gls{icgc} \gls{rnaseq} data nor the \gls{nzbc} microarray data showed a significant association of the obesity metagene with patient \gls{bmi}. Furthermore, the lack of association was not explained by the cancer type from which the signature was generated, since the obesity metagene did not show a significant association in either the \gls{icgc} cancer data or the \gls{nzbc} breast cancer data. One possible reason why the obesity metagene from the \citet{Creighton2012} study did not show a significant association in other data sets could be that the genetic signature was in fact not an obesity-specific signature, but a signature that was detected due to another clinical variable (investigated in \cref{sec:creighton_obesity_metagene_new}). Another reason for this apparent lack of association could be that the obesity-associated genetic signature was too specific to the CR data and was not a broad obesity-associated genetic signature, but an obesity-associated signature that was specific to the patient cohort that was profiled in the \citet{Creighton2012} publication.

\vspace{-3mm}
\section{Obesity-associated genetic signature from the\\\citet{Fuentes-Mattei2014} study}
\label{sec:fm_obesity_metagene}
The obesity-associated genetic signature from the \citet{Creighton2012} study was not associated with patient \gls{bmi} or \gls{bmi} status in the majority of the cancer data sets, so the obesity-associated genetic signature from the \citet{Fuentes-Mattei2014} study (FM) was examined to see whether this obesity metagene was able to associate significantly with patient \gls{bmi} and \gls{bmi} status across different data sets. Since the FM data set did not have patient \gls{bmi} information, the FM obesity metagene could not be compared with patient \gls{bmi} or \gls{bmi} status in the original FM data.
Nevertheless, the transformation matrix was still generated in the FM data and applied first to the microarray data (the CR and \gls{nzbc} data sets) and then to the \gls{icgc} data sets, to see whether the FM obesity metagene associated with patient \gls{bmi} and \gls{bmi} status in these data sets. The FM data was normalised with the \gls{rma} method and \gls{svd} was performed on the normalised FM data to obtain the transformation matrix. The transformation matrix was used to transform the \gls{rma}-normalised CR data to extract the FM obesity metagene scores in the CR data. The FM obesity metagene scores were compared with the sample gene expression levels, patient \gls{bmi} and \gls{bmi} status in the CR data (\cref{fig:fmmetacr}). Clearly, as with the CR obesity metagene, the FM obesity metagene was reflective of the overall gene expression of the samples, but did not associate with patient \gls{bmi} or \gls{bmi} status, suggesting that this signature also does not generalise to other data sets. The transformation matrix was then applied to the \gls{nzbc} microarray data. Again, the FM obesity metagene scores reflected the gene expression of the samples, but did not significantly associate with patient \gls{bmi} or \gls{bmi} status (\cref{fig:fmmetacris}).

\begin{figure}[htp!]
\centering
\includegraphics[page=3,width=0.8\linewidth]{results1/cr_fm_meta}\\
\includegraphics[page=4,width=0.45\linewidth]{results1/cr_fm_meta}
\hfill
\includegraphics[page=5,width=0.45\linewidth]{results1/cr_fm_meta}
\caption[FM obesity metagene in the CR data]{Heatmap, scatter plot and box plot showing the association of the FM obesity metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the CR data. Scales, p-values and $R^2$-value are as described in previous figures.}
\label{fig:fmmetacr}
\end{figure}

\begin{figure}[htp!]
\centering
\includegraphics[page=3,width=0.8\linewidth]{results1/cris_fm_meta}\\
\includegraphics[page=4,width=0.45\linewidth]{results1/cris_fm_meta}
\hfill
\includegraphics[page=5,width=0.45\linewidth]{results1/cris_fm_meta}
\caption[FM metagene in the \gls{nzbc} data]{Heatmap, scatter plot and box plot showing the association of the FM obesity-associated metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \gls{nzbc} data. Scales, p-values and $R^2$-value are as described in previous figures.}
\label{fig:fmmetacris}
\end{figure}

Next, the transformation matrix was applied to the \gls{icgc} cancer data and the resulting FM metagene was compared with the sample gene expression, patient \gls{bmi} and \gls{bmi} status in each of the \gls{icgc} cancer data sets. As evident in \cref{fig:fmmetaicgc} and \cref{sec:rest_of_the_fm_icgc_cancer_heatmap_results}, the FM obesity metagene scores appeared to reflect the overall gene expression of the FM obesity-associated genetic signature. As with all the results in this chapter so far, the FM obesity metagene did not significantly associate with any of the \gls{icgc} cancer data, except in the \gls{blca} data set (\cref{fig:fmmetaicgc}; \cref{sec:rest_of_the_fm_icgc_cancer_heatmap_results}). The FM obesity metagene significantly associated with the overweight group (but not with the obese group), and also had a significant \gls{anova} p-value (\cref{fig:fmmetaicgc}). In contrast to its association with patient \gls{bmi} status, the FM obesity metagene was not associated with patient \gls{bmi}.
These results suggested that the samples from the patients that were overweight in the \gls{blca} data set had similar biological properties to the samples taken from the patients that were obese in the FM data set. However, given that the FM obesity metagene lacked association with patient \gls{bmi} in the \gls{blca} data set, and that the metagene did not show any significant association in any of the other \gls{icgc} cancer types, it was difficult to determine whether the observed association of the FM obesity metagene with the overweight group was truly reflective of the quality of the FM metagene or was observed by chance.

\begin{figure}[htp!]
\centering
\includegraphics[page=3,width=0.8\linewidth]{results1/fm_meta_ICGC}\\
\includegraphics[page=4,width=0.45\linewidth]{results1/fm_meta_ICGC}
\hfill
\includegraphics[page=5,width=0.45\linewidth]{results1/fm_meta_ICGC}
\caption[FM obesity metagene in the \acrshort{icgc} \acrshort{blca} data]{Heatmap, scatter plot and box plot showing the association of the FM obesity metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \acrshort{icgc} \acrshort{blca} data. The results for other \gls{icgc} cancer types are shown in \cref{sec:rest_of_the_fm_icgc_cancer_heatmap_results}. Scales, p-values and $R^2$-value are as described in previous figures.}
\label{fig:fmmetaicgc}
\end{figure}

These results showed that the obesity-associated genetic signature from the \citet{Fuentes-Mattei2014} study was not generalisable to other cancer data sets, similar to the obesity signature identified by Creighton \textit{et al.}. This meant that both the Creighton \textit{et al.} and Fuentes-Mattei \textit{et al.} obesity metagenes may have been too specific to the original data sets in which the signatures were identified. Furthermore, there was a possibility that these obesity-associated metagenes were not related to obesity, but were associated with a different clinical variable that may be closely related to \gls{bmi} (\cref{sec:creighton_obesity_metagene_new}).

\section{Novel obesity-associated genetic signatures from the \citet{Creighton2012} data set}
\label{sec:creighton_obesity_metagene_new}
\subsection{Identification of novel obesity-associated genetic signatures}
\label{sub:identification_of_obesity_associated_genetic_signatures}
Both of the obesity-associated genetic signatures identified in the \citet{Creighton2012} and \citet{Fuentes-Mattei2014} studies were able to capture the overall gene expression patterns of their signature genes in the samples, but did not associate with patient \gls{bmi} or \gls{bmi} status in the majority of the cancer data sets. One possible reason for this result could be that the obesity-associated genetic signatures from the \citet{Creighton2012} and \citet{Fuentes-Mattei2014} studies may not have been truly associated with patient \gls{bmi}, but rather with another clinical variable. To investigate this possibility, novel obesity-associated genetic signatures were identified in the CR data after controlling for all the clinical variables in the data set. The FM data set was not used to derive obesity-associated genetic signatures, as no \gls{bmi} information was available for the patients in the FM data. Firstly, an attempt was made to replicate the Creighton \textit{et al.} obesity-associated genetic signature from the original CR data.
\citet{Creighton2012} originally found their obesity-associated genetic signature by performing a gene expression analysis between the breast tumour samples from obese and non-obese patients; from the resulting list, Creighton \textit{et al.} selected the 799 statistically significant \glspl{deg} (p \textless{} 0.01) with a log$_2$-fold change greater than 1.2. Therefore, a gene expression analysis between the samples from obese and non-obese patients in the \gls{rma}-normalised CR data was carried out (described in \cref{sec:gene_expression_analysis}).
% Therefore, \glspl{deg} were identified between the samples from obese and non-obese patients in the \gls{rma} normalised CR data, as described in \cref{sec:gene_expression_analysis}.
Without adjusting the p-value to account for multiple hypothesis testing, 5278 gene probes and 1781 gene probes were significant at p \textless{} 0.05 and p \textless{} 0.01, respectively (\cref{tab:ob_deg_summary}). After adjustments were made for multiple hypothesis testing, there were only 9 gene probes significant at p \textless{} 0.05 and no gene was significant at p \textless{} 0.01, which suggests that the majority of the 799 probe sets reported by Creighton \textit{et al.} may actually be false positives. Furthermore, there were only 61 gene probes that were significantly differentially expressed at p \textless{} 0.05 with a log$_2$-\gls{fc} greater than 1.2. Based on these observations, the log$_2$-fold change of the gene probes was ignored and the threshold p-value was set to 0.01 (unadjusted p-value) for the identification of the significant probe sets. Additionally, when more than 799 gene probes were identified, only the most significant 799 gene probes were taken as the obesity-associated genetic signature, as this many genes were originally identified by Creighton \textit{et al.}. This was done to include as many probe sets as possible, so that the novel genetic signatures would be comparable with the original Creighton \textit{et al.} obesity-associated genetic signature. These criteria were also applied for the identification of the other genetic signatures.
% To determine whether the identified obesity-associated genetic signature was similar to the signature that Creighton \textit{et al.} had originally found, the identified \glspl{deg} were compared with the original genetic signature identified by Creighton \textit{et al.}.
The above analysis was repeated with the residual data (\gls{rma}-normalised CR data that had been controlled for other clinical variables; see \cref{sub:residual_data_creation}). The clinical variables controlled for were age, ethnicity, menopause status, tumour grade, receptor (\gls{er}, \gls{pr}, and \gls{her2}) statuses and \gls{ln} status. In the residual data, 1104 gene probes were significantly differentially expressed at an unadjusted p-value of p \textless{} 0.01 (additional results are shown in \cref{tab:ob_deg_summary}). Again, the most significant 799 gene probes were taken as the obesity-associated genetic signature from this data. In addition to the above two obesity-associated genetic signatures, two more sets of genetic signatures were identified by taking the gene probes that were common between each of the above two signatures and the original obesity-associated genetic signature.
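For illustration, the sketch below shows how probe-wise tests between obese and non-obese samples, \gls{fdr} adjustment and selection of the top 799 probes at an unadjusted p \textless{} 0.01 could be carried out. The data are simulated and the variable names are hypothetical; the actual analysis in this project was not necessarily performed with these exact tools.
\begin{verbatim}
# Illustrative sketch of probe-wise differential expression testing between
# obese and non-obese samples, FDR adjustment, and selection of the top 799
# probes at an unadjusted p < 0.01 (simulated data, hypothetical names).
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
expr = rng.normal(size=(22000, 103))           # probes x samples (RMA scale)
is_obese = rng.random(103) < 0.35              # hypothetical BMI >= 30 flag

t, p = stats.ttest_ind(expr[:, is_obese], expr[:, ~is_obese], axis=1)
p_adj = multipletests(p, method="fdr_bh")[1]   # Benjamini-Hochberg FDR

print("unadjusted p < 0.01:", int((p < 0.01).sum()))
print("FDR-adjusted p < 0.05:", int((p_adj < 0.05).sum()))

# Signature: the (at most) 799 most significant probes among those with an
# unadjusted p < 0.01, ignoring fold change.
candidates = np.where(p < 0.01)[0]
signature = candidates[np.argsort(p[candidates])][:799]
\end{verbatim}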
There were 239 common gene probes between the original Creighton \textit{et al.} obesity-associated genetic signature and the gene probes identified from the unadjusted CR data, and 168 common gene probes between the original signature and the gene probes from the residual CR data. The genetic signatures identified were summarised in a Venn diagram, as shown in \cref{fig:venn1}. \\

\begin{table}[htpb]
\centering
\caption[Summary of the number of \acrshort{deg}s identified using different unadjusted and \acrshort{fdr}-adjusted p-value thresholds in different versions of the \gls{rma} normalised CR data set]{Summary of the number of \glspl{deg} identified using different unadjusted and \gls{fdr}-adjusted p-value thresholds in different versions of the \gls{rma} normalised CR data set}
\label{tab:ob_deg_summary}
\begin{tabular}{lcccc}
& \multicolumn{4}{c}{Number of \glspl{deg} identified}\\
\cmidrule(r){2-5}
& \multicolumn{2}{c}{Unadjusted p-value} & \multicolumn{2}{c}{\gls{fdr}-adjusted p-value}\\
\cmidrule(r){2-3} \cmidrule(r){4-5}
& 0.01 & 0.05 & 0.01 & 0.05\\
\hline
\hline
\rule{0pt}{2.25ex}CR data & 1781 & 5278 & 0 & 9 \\
Residual CR data & 1104 & 4371 & 0 & 0 \\
Caucasian-only CR data & 2129 & 6029 & 0 & 0 \\
Residual Caucasian-only CR data & 1558 & 5427 & 0 & 0 \\
\hline
\hline
\end{tabular}
\end{table}

\begin{figure}[htpb]
\centering
\includegraphics[width=0.5\linewidth]{results1/deg_venn1}
\caption[Venn diagram of the \glspl{deg} identified from the CR data (all patients included)]{Venn diagram showing the common gene probes between the signatures obtained from the unadjusted and the clinical variable-adjusted CR data and the original obesity-associated genetic signature from the \citet{Creighton2012} study}
\label{fig:venn1}
\end{figure}

\noindent There was a possible bias between the different ethnic groups, as the African-American patients were more likely to be obese than the Caucasian patients (13 out of 16 patients were obese in the African-American group, whereas 22 out of 77 were obese in the Caucasian group; see \cref{ssub:creighton_study,tab:num_sample_microarray} in \cref{sub:breast_cancer_data}). Although ethnicity was controlled for in the residual CR data, the effect of ethnicity on the CR data was additionally removed completely to prevent any possibility of ethnicity influencing the analysis. This was done by considering only the Caucasian patients in the CR data, which left a total of 77 patients in the data set. With the Caucasian-only CR data set, obesity-associated genetic signatures were identified as described above.
% ; first in the unadjusted data, then in the data with clinical variables adjusted (except ethnicity, as it has already been controlled for by considering only the Caucasian samples), and lastly the common genes between each of these two signatures with the original obesity-associated genetic signature were identified.
A total of 2129 and 1558 gene probes were significantly differentially expressed (unadjusted p-value \textless{} 0.01) in the unadjusted and clinical variable-adjusted Caucasian patient data, respectively (\cref{tab:ob_deg_summary}). As before, the most significant 799 gene probes were selected from these gene probes. There were 148 and 92 gene probes in common between the original Creighton \textit{et al.} obesity-associated genetic signature and the unadjusted or clinical variable-adjusted Caucasian patient data sets, respectively.
Again, a Venn diagram was used to summarise the genes identified from the Caucasian patient data set (\cref{fig:venn2}).
\begin{figure}[tb]
	\centering
	\includegraphics[width=0.5\linewidth]{results1/deg_venn2}
	\caption[Venn diagram of the \glspl{deg} identified from the CR data (Caucasian patients only)]{Venn diagram showing the common gene probes between the signatures obtained from the unadjusted and the clinical variable-adjusted CR data (Caucasian patients only) and the original obesity-associated genetic signature from the \citet{Creighton2012} study}
	\label{fig:venn2}
\end{figure}
Additionally, the correlations between all of the obesity metagenes were examined to see whether these metagenes were similar to one another.
As clearly shown in \cref{fig:cr_meta_cor}, there were two distinct groups within the eight metagenes: the first group contained the obesity metagenes that did not overlap with the Or signature (Cr, Res, Ca and CaRes), while the other group contained the overlap metagenes (CrOl, ResOl, CaOl and CaResOl).
That said, all eight metagenes showed high correlation with one another (lowest correlation approximately 0.85), which suggested that all of these metagenes were detecting a similar underlying biological mechanism in the data, as would be expected.
\begin{figure}[tpb]
	\centering
	\includegraphics[width=0.8\linewidth]{results1/cr_meta_cor}
	\caption[Pearson correlation of all eight obesity metagenes identified in the CR data]{Heatmap showing the Pearson correlation of all eight obesity-associated metagenes from the CR data with one another.
	\gls{svd} was applied to the \gls{rma} normalised CR data to generate each of the eight metagenes.
	High and low correlations are represented as red and yellow, respectively, with the colours matched to the values on the scale shown in the top left histogram.
	The lowest correlation was 0.8469, between the CrOl and the CaRes metagenes.}
	\label{fig:cr_meta_cor}
\end{figure}
Since all of these obesity-associated genetic signatures were derived from the discrete patient \gls{bmi} status, an obesity-associated genetic signature was also searched for using the correlation of gene probe expression with the continuous patient \gls{bmi} value in the CR data with all of the samples included, but no significant genes were identified (\cref{sec:obesity_associated_genetic_signature_using_sample_bmi_values}).
All of these obesity-associated genetic signatures (eight in total) were checked to see whether they had a significant association with patient \gls{bmi} or \gls{bmi} status in the \gls{nzbc} and \gls{icgc} cancer data sets (\cref{sub:_novel_obesity_associated_signatures_and_sample_bmi}).
For simplicity, the abbreviations in \cref{tab:mg_abbrev} will be used to refer to the appropriate genetic signatures.
\begin{table}[tb]
	\centering
	\caption{Summary of the abbreviations used to refer to the different obesity-associated genetic signatures identified in the CR data}
	\label{tab:mg_abbrev}
	\begin{tabular}{lp{0.51\textwidth}c}
		\hline
		\hline
		Abbreviation & Definition & No.
		of gene probes\\
		\hline
		\rule{0pt}{2.25ex}Or & Original obesity-associated genetic signature identified by \citet{Creighton2012} & 799\\
		\rule{0pt}{2.25ex}Cr & Obesity-associated genetic signature identified from unadjusted CR data & 799 \\
		\rule{0pt}{2.25ex}CrOl & Genes common between Or and Cr genetic signatures & 239\\
		\rule{0pt}{2.25ex}Res & Obesity-associated genetic signature identified from clinical variable-adjusted CR data & 799\\
		\rule{0pt}{2.25ex}ResOl & Genes common between Or and Res genetic signatures & 168\\
		\rule{0pt}{2.25ex}Ca & Obesity-associated genetic signature identified from unadjusted Caucasian-only CR data & 799\\
		\rule{0pt}{2.25ex}CaOl & Genes common between Or and Ca genetic signatures & 148\\
		\rule{0pt}{2.25ex}CaRes & Obesity-associated genetic signature identified from clinical variable-adjusted Caucasian-only CR data & 799\\
		\rule{0pt}{2.25ex}CaResOl & Genes common between Or and CaRes genetic signatures & 92\\
		\hline
		\hline
	\end{tabular}
\end{table}

\subsection{Novel obesity-associated signatures and patient \gls{bmi}/\gls{bmi} status}
\label{sub:_novel_obesity_associated_signatures_and_sample_bmi}

As with the Or signature, all eight obesity-associated genetic signatures were validated in the CR data set first, and then assessed in other cancer data sets using the transformation matrix that was generated from the \gls{rma} normalised CR data.
The CR data (unadjusted, all samples included) was normalised with the \gls{rma} method and \gls{svd} was applied to generate all of the obesity metagenes.
The directions of the metagenes were examined to make sure that all of the metagenes were in line with one another (see \cref{sub:metagene_direction}; \cref{sec:direction_of_all_the_obesity_metagenes_from_the_cr_data}).
The comparisons of the metagenes with the sample gene expression in the CR data were displayed as heatmaps (\cref{fig:degmetacr}; \cref{sec:rest_of_the_cr_ob_meta_heatmap_results_cr}).
It was clear from the heatmaps that all of the metagenes reflected the overall expression of the corresponding obesity-associated genetic signatures.
The association of the metagenes with patient \gls{bmi} and \gls{bmi} status was significant for all eight of the obesity-associated genetic signatures identified (\cref{fig:degmetacr}; \cref{sec:rest_of_the_cr_ob_meta_heatmap_results_cr}).
These results confirmed that all of the obesity-associated genetic signatures identified in \cref{sub:identification_of_obesity_associated_genetic_signatures} were significantly associated with patient \gls{bmi} and \gls{bmi} status in the CR data from which the metagenes were derived.
In the \gls{nzbc} data, the higher a sample's metagene score was, the more highly expressed the corresponding signature genes were (and vice versa), but the metagenes lacked any association with patient \gls{bmi} or \gls{bmi} status (\cref{fig:degmetaprint}; \cref{sec:rest_of_the_cr_ob_meta_heatmap_results_cris}).
\begin{figure}[htp!]
	\centering
	\includegraphics[page=3,width=0.8\linewidth]{results1/cr_deg_meta_vs_clin}\\
	\vspace{1em}
	\includegraphics[page=4,width=0.45\linewidth]{results1/cr_deg_meta_vs_clin}
	\hfill
	\includegraphics[page=5,width=0.45\linewidth]{results1/cr_deg_meta_vs_clin}
	\caption[Cr obesity metagene in the CR data]{Heatmap, scatter plot and box plot showing the association of the Cr obesity-associated metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the CR data.
	The results for the other obesity metagenes are shown in \cref{sec:rest_of_the_cr_ob_meta_heatmap_results_cr}.
	Scales, p-values and $R^2$-value are as described in previous figures.}
	\label{fig:degmetacr}
\end{figure}
\begin{figure}[htp!]
	\centering
	\includegraphics[page=3,width=0.8\linewidth]{results1/cris_crdeg_trans_meta}\\
	\vspace{1em}
	\includegraphics[page=4,width=0.45\linewidth]{results1/cris_crdeg_trans_meta}
	\hfill
	\includegraphics[page=5,width=0.45\linewidth]{results1/cris_crdeg_trans_meta}
	\caption[Cr obesity metagene in the \gls{nzbc} data]{Heatmap, scatter plot and box plot showing the association of the Cr obesity-associated metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \gls{nzbc} data.
	The results for the other obesity metagenes are shown in \cref{sec:rest_of_the_cr_ob_meta_heatmap_results_cris}.
	Scales, p-values and $R^2$-value are as described in previous figures.}
	\label{fig:degmetaprint}
\end{figure}
To determine whether these metagenes showed a significant association with patient \gls{bmi} or \gls{bmi} status in other cancer types, the transformation matrix was applied to the \gls{icgc} cancer data.
In all of the \gls{icgc} cancer data sets, all eight metagene scores were reflective of the sample gene expression of the corresponding obesity-associated genetic signatures (\cref{fig:degmetaicgc}; \cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}).
In all cancer types but \gls{blca}, none of the obesity metagenes were significantly associated with patient \gls{bmi} or \gls{bmi} status (\cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}).
As shown in \cref{fig:degmetaicgc}, in the \gls{blca} data set the Cr metagene showed a significant association with patient \gls{bmi} and \gls{bmi} status.
Furthermore, the Res and Ca metagenes were also significant for the overweight group, the \gls{anova} p-value and the regression line p-value; the CaRes metagene was significant for the overweight group and the \gls{anova} p-value; the CrOl and CaResOl metagenes were significant only for the overweight group; and the ResOl and CaOl metagenes were not significantly associated with the sample \gls{bmi} or \gls{bmi} status in the \gls{blca} data set (summarised in \cref{tab:degmetablca}).
This was unexpected since all of the genetic signatures resulted from the gene expression analysis between the non-obese and the obese groups of samples in the CR data, and yet the significant obesity metagenes were associated with the overweight group rather than the obese group in the \gls{blca} data set.
These results suggested that the samples from the patients that were overweight in the \gls{blca} data were similar, in terms of gene expression, to the samples from the patients that were obese in the CR data.
Again, because all of the metagenes lacked association with patient \gls{bmi} and \gls{bmi} status in all of the other \gls{icgc} cancer types, it was difficult to conclude whether the observed association of many of the obesity metagenes with the overweight group was truly an effect of these metagenes.
Taken together, these results showed that, even though all of the obesity metagenes were significantly associated with patient \gls{bmi} and \gls{bmi} status in the original data from which the genetic signatures were derived, none of the metagenes generalised to other cancer data sets.
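For reference, the metagene construction and cross-data-set projection referred to in this section can be sketched as follows; this is a minimal Python/numpy illustration with toy data, and the orientation conventions (which singular vectors are treated as probe weights and which as sample scores) are assumptions rather than the exact implementation used in this thesis.
\begin{verbatim}
# Minimal sketch of an SVD-derived metagene and its projection onto a
# second data set; toy random data and conventions are assumptions only.
import numpy as np

def metagene(X):
    # X: signature-probes x samples expression matrix.
    Xc = X - X.mean(axis=1, keepdims=True)            # centre each probe
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    weights = U[:, 0]   # per-probe weights (the "transformation matrix")
    scores = Vt[0, :]   # per-sample metagene scores
    return weights, scores

rng = np.random.default_rng(0)
X_cr = rng.normal(size=(799, 100))    # toy stand-in for the CR data
Y_new = rng.normal(size=(799, 60))    # toy stand-in for another data set

w, cr_scores = metagene(X_cr)
new_scores = w @ (Y_new - Y_new.mean(axis=1, keepdims=True))
\end{verbatim}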
Furthermore, these results showed that the lack of association with patient \gls{bmi} and \gls{bmi} status was not due to other clinical variables in the CR data set.
This raised the question of whether there was any obesity-associated genetic signature common to multiple types of cancer (\cref{sec:common_genes_across_multiple_cancer_types}).
\begin{table}[p]
	\centering
	\caption{Statistics for the associations of all the obesity metagenes with patient \gls{bmi} and \gls{bmi} status in the \gls{icgc} \gls{blca} cancer data}
	\label{tab:degmetablca}
	\begin{threeparttable}
		\begin{tabular}{lccccc}
			& \multicolumn{3}{c}{P-values} & \multicolumn{2}{c}{Regression line statistics}\\
			\cmidrule(r){2-4} \cmidrule(r){5-6}
			Metagenes & Overweight & Obese & \gls{anova} & R$^2$ & P \\
			\hline
			\hline
			\rule{0pt}{2.25ex}Cr & {\bfseries 0.0134}\tnote{1} & 0.1365 & {\bfseries 0.0387} & 0.0171 & {\bfseries 0.0195} \\
			Res & {\bfseries 0.0055} & 0.1974 & {\bfseries 0.0196} & 0.0117 & {\bfseries 0.0446} \\
			CrOl & {\bfseries 0.0383} & 0.4583 & 0.1096 & 0.0047 & 0.1374 \\
			ResOl & 0.0909 & 0.7092 & 0.2125 & 0.0018 & 0.2254 \\
			Ca & {\bfseries 0.0077} & 0.1973 & {\bfseries 0.0231} & 0.0116 & {\bfseries 0.0456} \\
			CaRes & {\bfseries 0.0104} & 0.2712 & {\bfseries 0.0322} & 0.0101 & 0.0572 \\
			CaOl & 0.0575 & 0.6185 & 0.1487 & 0.0022 & 0.2126 \\
			CaResOl & {\bfseries 0.0463} & 0.5820 & 0.1263 & 0.0036 & 0.1660 \\
			\hline
			\hline
		\end{tabular}
		\begin{tablenotes}
			\begin{footnotesize}
				\item [1] All values in bold are statistically significant (p \textless{} 0.05).
			\end{footnotesize}
		\end{tablenotes}
	\end{threeparttable}
\end{table}
\begin{figure}[p]
	\centering
	\includegraphics[page=3,width=0.8\linewidth]{results1/rawobsgene_ICGC}\\
	\vspace{1em}
	\includegraphics[page=4,width=0.45\linewidth]{results1/rawobsgene_ICGC}
	\hfill
	\includegraphics[page=5,width=0.45\linewidth]{results1/rawobsgene_ICGC}
	\caption[Cr obesity metagene in the \acrshort{icgc} \acrshort{blca} data]{Heatmap, scatter plot and box plot showing the association of the Cr obesity metagene with the sample gene expression, patient \gls{bmi} and \gls{bmi} status, respectively, from the \acrshort{icgc} \acrshort{blca} data.
	The results for the other metagenes and the other \gls{icgc} cancer types are shown in \cref{sec:rest_of_the_cr_icgc_cancer_heatmap_results}.
	Scales, p-values and $R^2$-value are as described in previous figures.}
	\label{fig:degmetaicgc}
\end{figure}

\section{Common genes across multiple cancer types}
\label{sec:common_genes_across_multiple_cancer_types}

It was clear that none of the obesity-associated genetic signatures created from the CR data set were significantly associated with patient \gls{bmi} across the different \gls{icgc} cancer data sets.
To determine whether there was any obesity-associated genetic signature that was expressed in multiple different cancer types, gene expression analysis was carried out on each of the eight \gls{icgc} cancer types, and the identified \glspl{deg} were searched for genes common across the cancer types.
All of the \gls{icgc} cancer data sets were normalised with voom (\cref{ssub:rna_seq_data}) and then put through the gene expression analysis pipeline to identify the \glspl{deg} between the obese and non-obese groups of samples (\cref{sec:gene_expression_analysis}).
\cref{tab:icgcdegnum} summarises the number of \glspl{deg} (unadjusted p \textless{} 0.05) found in each cancer type.
\begin{table}[p]
	\centering
	\caption{Summary of the number of \glspl{deg} identified in each of the \gls{icgc} cancer data sets}
	\label{tab:icgcdegnum}
	\begin{tabular}{lc}
		Cancer type & No. of \glspl{deg} identified\\
		\hline
		\hline
		\rule{0pt}{2.25ex}\gls{blca} & 679 \\
		\gls{cesc} & 1229\\
		\gls{coad} & 974\\
		\gls{kirp} & 687\\
		\gls{lihc} & 3340\\
		\gls{read} & 796\\
		\gls{skcm} & 1137\\
		\gls{ucec} & 2934\\
		\hline
		\hline
	\end{tabular}
\end{table}

There were 9695 unique \glspl{deg} across the eight different cancer types, and these genes were checked for any commonalities across the different cancer types.
There were 7330 genes that were differentially expressed in only one cancer type, 2024 genes differentially expressed in any two cancer types, 320 genes differentially expressed in any three cancer types, and 21 genes differentially expressed in any four cancer types.
There were no genes differentially expressed in five or more cancer types (see \cref{tab:icgcdegtab} for a summary).
To assess whether these results were statistically significant, the gene expression analysis was repeated 1000 times for each cancer type, with the samples randomised in each analysis (see \cref{sub:sample_randomisation_in_simulation_analysis}).
The results from the simulation were summarised together with the earlier results in \cref{tab:icgcdegtab}.
The results from the simulation showed that, on average, 5732 genes were found to be differentially expressed in any single cancer type, 1057 in any two cancer types, 111 in any three cancer types, and 7 in any four cancer types.
Unfortunately, there were no \glspl{deg} differentially expressed in all eight cancer types, which confirmed that there were no common genes differentially expressed between the obese and the normal-weight samples.
When the results from the \gls{icgc} gene expression analysis were compared with the simulation results, the numbers of \glspl{deg} found were statistically significant, as the numbers of genes identified exceeded the 95$^{th}$ percentile values for up to four cancer types.
This result suggested that the \glspl{deg} from the \gls{icgc} cancer data were not identified purely by chance.
However, the simulation also showed that many of the \glspl{deg} identified in the gene expression analysis of each cancer type could have been observed by chance, and that the majority of these genes were likely to be false positives.
\begin{table}[tb]
	\centering
	\begin{threeparttable}
		\caption{Summary of the number of \glspl{deg} identified by the gene expression analysis and simulation analysis in the \gls{icgc} cancer data}
		\label{tab:icgcdegtab}
		\begin{tabular}{>{\quad}lcccccccc}
			& \multicolumn{8}{c}{\small No. of cancer types expressing the \glspl{deg}\tnote{1}}\\
			& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\
			\hline
			\hline
			\rule{0pt}{2.25ex}\hspace{-1em}{\small Results from gene expression analysis} & 7330 & 2024 & 320 & 21 & 0 & 0 & 0 & 0 \\
			\hspace{-1em}{\small Results from the simulation:\tnote{2}} & & & & & & & & \\
			{\small Mean no. of \glspl{deg} identified} & 5732 & 1057 & 111 & 7 & 0 & 0 & 0 & 0 \\
			{\small $95^{th}$ percentile} & 6965 & 1722 & 227 & 20 & 2 & 0 & 0 & 0 \\
			\hline
			\hline
		\end{tabular}
		\begin{tablenotes}
			\begin{footnotesize}
				\item [1] The numbers represent the number of cancer types in which a gene was differentially expressed.
				\item [2] The simulation was repeated 1000 times, each with randomised samples.
			\end{footnotesize}
		\end{tablenotes}
	\end{threeparttable}
\end{table}

\section{Pathways enriched in \gls{icgc} data sets}
\label{sec:pathways_enriched_in_icgc_data_sets}

To check whether there were any obesity-associated pathways significantly enriched in any of the \gls{icgc} cancer data sets, pathway enrichment analysis was carried out on each cancer type separately and then on all cancer types combined (\cref{sec:pathway_enrichment_analysis}).
Each cancer type was normalised with voom (\cref{ssub:rna_seq_data}) and pathway enrichment analysis was carried out as described in \cref{sec:pathway_enrichment_analysis}.
When the pathway enrichment analysis was carried out with the \gls{kegg} pathway database, only the ``ABC transporter'' pathway was significantly enriched (\gls{fdr}-adjusted p-value \textless{} 0.05) in the \gls{cesc} data, and no other pathways were enriched in any of the other \gls{icgc} cancer types (\cref{sec:pathways_significant_in_each_of_the_cancer_types}).
With the Reactome database, ``Phosphatase bond hydrolysis by NUDT proteins'' (in \gls{blca}) and ``Mitochondrial ABC transporters'' (in \gls{cesc}) were significantly enriched (\cref{sec:pathways_significant_in_each_of_the_cancer_types}).
With the \gls{go} database, there were 22 \gls{go} terms significantly enriched in \gls{blca}, 17 terms in \gls{cesc}, 3 terms in \gls{kirp}, 21 terms in \gls{read}, 10 terms in \gls{skcm}, 14 terms in \gls{ucec}, and no terms were significant in the \gls{coad} and \gls{lihc} cancer types (\cref{sec:pathways_significant_in_each_of_the_cancer_types}).
Though many \gls{go} terms were identified as significantly enriched in each of the cancer types, no terms were common across all of the different cancer types.
Furthermore, these terms were not similar in terms of the biological activities involved.
For example, the \gls{go} terms enriched in the \gls{blca} data suggested possible activation of the \gls{pi3k} pathway, but those enriched in the \gls{read} data were mainly involved in \acrshort{rna} processing (\cref{sec:pathways_significant_in_each_of_the_cancer_types}).
One interesting result from this enrichment analysis was that the ``ABC transporter'' pathway was identified as significantly enriched in the \gls{cesc} data set by all three databases, suggesting that ABC transporters may be a core component of \gls{cesc} tumour biology.
All of these results indicated that the biological processes that drive tumour progression were unique to each cancer type, and no single pathway was associated with obesity and tumour biology across multiple cancer types.
Since no common pathways were enriched in the \gls{icgc} cancer types when analysed individually, all of the \gls{icgc} cancer data sets were combined into a single data set to see whether any enriched pathways could be identified in a collective analysis.
The combined data set was generated in two ways.
The first combined data set was created by normalising (via voom) each cancer data set separately, combining all of the normalised cancer data sets into one, and then applying batch correction to the combined data set (\cref{sub:batch_correction}).
The second combined data set was created by first combining the cancer data sets into one large data set, then normalising the combined data, and finally applying batch correction to obtain the final data set.
Each of the combined data sets was analysed for enriched pathways, but no pathways were significantly enriched in either data set.
These results suggested that there were no common biological pathways associated with obesity in the \gls{icgc} cancer data sets.
{ "alphanum_fraction": 0.7765506561, "avg_line_length": 101.8174273859, "ext": "tex", "hexsha": "823e29e2295dd2a570d53909d20e71698f5c6954", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "rikutakei/mastersDoc", "max_forks_repo_path": "thesis/results/results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "rikutakei/mastersDoc", "max_issues_repo_path": "thesis/results/results.tex", "max_line_length": 464, "max_stars_count": null, "max_stars_repo_head_hexsha": "402349085b0e9fe5373e0e0ce12780811809a41f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "rikutakei/mastersDoc", "max_stars_repo_path": "thesis/results/results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 13012, "size": 49076 }
% -----------------------------------------------
% chktex-file 44

\documentclass[../index.tex]{subfiles}

\begin{document}

% -------------------------------------
\renewcommand{\sectiontitle}{I want to write a compiler\textemdash{what do?}}
\section{\sectiontitle}

% ---------------------------
\renewcommand{\currenttitle}{First, why?}
\begin{frame}{\currenttitle}
  Why might you want to create a compiler? \only<+->{}
  \begin{itemize}
    \item<+-> Writing a domain-specific language (DSL) such as a custom configuration or specification language
    \item<+-> Learn about useful data structures and algorithms
    \item<+-> Having a better understanding of compilers will make you a better programmer
    \item<+-> Software engineering exercise
    \item<+-> Etc.
  \end{itemize}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{My first recommendation}
\begin{frame}{\currenttitle}
  Learn a lower-level language (such as C, C++, or Rust):
  \begin{itemize}
    \item Teaches you a lot about memory management
    \item Exposes you to systems programming concepts
  \end{itemize}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{Learning materials}
\begin{frame}{\currenttitle}
  Follow a tutorial or book:
  \begin{itemize}
    \item Crafting Interpreters\footnote{\url{https://craftinginterpreters.com/}} (Java)
    \item LLVM Kaleidoscope language tutorial (C++)
    \item Dragon Book (\textit{Compilers: Principles, Techniques, and Tools})\footnote{College textbook material; heavy use of weird formal language and mathematical notation}
    \item \textit{Modern Compiler Design}
  \end{itemize}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{Useful tools}
\begin{frame}{\currenttitle}
  If you're not hell-bent on doing everything from scratch, use of pre-existing tools will help make your compiler better and easier to develop:
  \begin{itemize}
    \item Lexer generators
    \item Parser generators
    \item Analysis and optimization infrastructures
  \end{itemize}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{Lexer generators}
\begin{frame}{\currenttitle}
  Instead of handwriting a lexer or implementing regular expressions and state machines, you can use a tool that will generate lexers \\[1.5em]
  Typically:
  \begin{itemize}
    \item Input regular expressions and the tokens they resolve to
    \item Can embed code to process the text from the input
  \end{itemize}
\end{frame}

% ---------------------------
\begin{frame}[fragile]{\currenttitle}
  \vspace*{1em}
  Example for the lexer generator I wrote (Rust): \\[1em]
  \begin{lstlisting}[language=Rust]
lexer! {
    pub struct Lexer;
    pub fn stream;
    (text) -> Token, Token::Error;

    r"\s" => None,
    r"[0-9]+" => {
        let i = text.parse().unwrap();
        Some(Token::Integer(i))
    }
    r"[0-9]+(\.[0-9]+)?" => {
        let f = text.parse().unwrap();
        Some(Token::Float(f))
    }
}
  \end{lstlisting}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{Parser generators}
\begin{frame}{\currenttitle}
  \textbf{Parser generators} will generate a parser for a grammar:
  \begin{itemize}
    \item Define production rules
    \item Embed code to serve as translation rules (to construct an AST, evaluate expressions, etc.)
    \item Most commonly LR parsers
  \end{itemize}
\end{frame}

% ---------------------------
\renewcommand{\currenttitle}{LLVM}
\begin{frame}{\currenttitle}
  \textbf{LLVM} is a compiler infrastructure that can provide the intermediate and back-end phases: \only<+->{}
  \begin{itemize}
    \item<+-> Defines its own SSA-based IR that it can perform all kinds of optimizations on
    \item<+-> Has back-ends implemented for many different target machines
    \item<+-> Many production compilers implemented with LLVM
    \begin{itemize}
      \item C\#
      \item Clang (C/C++)
      \item Haskell
      \item Rust
      \item Etc.
    \end{itemize}
  \end{itemize}
\end{frame}

% -----------------------------------------------
\end{document}
{ "alphanum_fraction": 0.6295852314, "avg_line_length": 29.5815602837, "ext": "tex", "hexsha": "427bad3e08466207d6d739eb832f55ba6d8b5729", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3e1a6022c062ba993a5c4d3f4325967c2a71d3b5", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Dophin2009/intro-compilers", "max_forks_repo_path": "sections/02-tools.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3e1a6022c062ba993a5c4d3f4325967c2a71d3b5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Dophin2009/intro-compilers", "max_issues_repo_path": "sections/02-tools.tex", "max_line_length": 111, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e1a6022c062ba993a5c4d3f4325967c2a71d3b5", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Dophin2009/intro-compilers", "max_stars_repo_path": "sections/02-tools.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1052, "size": 4171 }
\documentclass[11pt]{article}

% \usepackage{amsmath,amssymb,mathtools,amsfonts}
\usepackage{url}
\usepackage{fullpage}
% \usepackage{setspace}   % \singlespace, \doublespace \onehalfspacing
% \usepackage{ifthen}     % \ifthenelse,\boolean,\newboolean,\setboolean
% \usepackage{mathptmx}   % slightly more compressed font.
\usepackage[T1]{fontenc}\usepackage[condensed,math]{kurier} % fancy font
% \usepackage{makeidx}    % to make a keyword index: \index{}
% \usepackage{showidx}    % prints index entries in the left margin (debug)
% \usepackage{needspace}  % \needspace{5\baselineskip} no page breaks for 5 lines
% \usepackage{mparhack}   % correct Latex bug in \marginpar
% \usepackage{chemarr}    % arrows 4 chem: \xrightleftharpoons[]{} \xrightarrow{}
% \usepackage{listings}   % source code printer for latex
% \lstset{language=Matlab}
% \lstset{basicstyle=\small,morekeywords={cvx_begin,cvx_end,variables,maximize,minimize,subject,to,linprog,quadprog,ones,optimset}}

%%%% Figure packages
% \usepackage{graphicx,psfrag}
% \usepackage{pstool}        % \psfrag for pdflatex -- preferable(?), not transparent
% \usepackage{auto-pst-pdf}  % \psfrag & PStricks for pdflatex -- transparent!!!
% \usepackage[pdftex]{graphics}
% \usepackage{subfigure}     % \subfigure[a]{\includegraphics\ldots\label{fi:\ldots}}
% \usepackage{sidecaption}   % \sidecaption (to be placed inside figure env.
% \graphicspath{{./figuresdir1/}{./figuresdir2/}{./figuresdir3/}}
%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%% Bibliography packages (order is important)
% \usepackage{bibentry}% \nobibliography* \ldots \bibentry{..} inserts a bib entry
%                        apparently incompatible with hyperef
% \makeatletter\let\NAT@parse\undefined\makeatother % enbl natbib with IEEE cls
\usepackage[numbers,sort&compress,sectionbib]{natbib} % \cite,\citet,\citep,\ldots
\usepackage[colorlinks=true,linkcolor=blue,backref=page]{hyperref}
\renewcommand*{\backref}[1]{\small (cited in p.~#1)}
\usepackage[norefs,nocites]{refcheck} % options:norefs,nocites,msgs,chkunlbld
%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%% source code formatting for latex %%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{listings}

%%%% Matlab language
\lstset{
  basicstyle={\footnotesize\tt},
  keywordstyle={\footnotesize\tt\color{blue}},
  commentstyle={\footnotesize\sl},
  frame=shadowbox,
  columns={[l]fixed},           % character alignment
  %columns={[l]fullflexible},
  basewidth={.5em,.4em},
  xleftmargin=3em,
  xrightmargin=3em,
  numbers=left,
  numberstyle=\tiny,
  numbersep=5pt,
  escapeinside={(*}{*)},
  numberblanklines=false,
  firstnumber=last,
}
\lstdefinelanguage[extended]{Matlab}[]{Matlab}{
  morekeywords={linprog,quadprog,ones,optimset,ode23s,expand},
  deletekeywords={hess}
}

%%%% my dialect of Matlab, based on Matlab without dialect
\lstdefinelanguage[TC]{Matlab}[extended]{Matlab}{
  morekeywords={Tvariable,parameter,Teye,Tones,Tzeros,Tconstant,gradient,norm2},
  morekeywords={csparse,declareSet,declareGet,declareCopy,declareComputation,compile2matlab,compile2C,
    begin_csparse,end_csparse,getFunction,setFunction,copyFunction}
}

%%%% set default language
\lstset{language=[TC]Matlab}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\usepackage[draft,fancythm,fancybb,morse]{jphmacros2e}

%% Macros & options for this Document
\allowdisplaybreaks
\newcommand{\tprod}[2]{\,\,^{#1}\!\!*^{#2}}
\newcommand{\TC}{\texttt{TensCalcTools}}
\newcommand{\TTS}{\texttt{TTSTools}}
\newcommand{\CS}{\texttt{CSparse}}
\newcommand{\CMEX}{\texttt{cmexTools}}
\newcommand{\codesize}{\footnotesize}
\newcommand{\toidx}[1]{\index{\lstinline{#1}}}%\addcontentsline{toc}{subsubsection}{\lstinline{#1}}}
\newenvironment{command}[1]{\toidx{#1}\addcontentsline{toc}{subsection}{\lstinline{#1}}\paragraph*{\lstinline[basicstyle=\large,columns={[l]flexible}]{#1}}~\\\noindent\rule{\textwidth}{2pt}\\\vspace{-3ex}\codesize}{\vspace{-3ex}\rule{\textwidth}{1pt}\medskip\noindent}

% \makeindex

%% Start of Document
\title{\sc \TTS \\[1em]\Large \matlab{} Functions for Tensor-Valued Time-Series Processing}
\author{\jph}
\date{November 11, 2014}

\begin{document}

\maketitle

\begin{abstract}
  This toolbox provides functions to process tensor-valued time series, including differentiation, integration, resampling, the solution of ODEs, etc.
  These functions can be applied to numerical values or to \TC{} Symbolic Tensor-Valued Expressions (STVEs).
\end{abstract}

\tableofcontents

%\newpage

\section{Tensor-Valued Time Series}

\emph{Tensors} are essentially multi-dimensional arrays.
Specifically, an \emph{$\alpha$-index tensor} is an array in $\R^{n_1\times n_2\times\cdots\times n_\alpha}$ where $\alpha$ is an integer in $\Z_{\ge 0}$.
By convention, the case $\alpha=0$ corresponds to a \emph{scalar} in $\R$.
We use the terminology \emph{vector} and \emph{matrix} for the cases $\alpha=1$ and $\alpha=2$, respectively.
The integer $\alpha$ is called the \emph{index} of the tensor and the vector of integers $[n_1,n_2,\dots,n_\alpha]$ (possibly empty for $\alpha=0$) is called the \emph{dimension} of the tensor.

\medskip

A \emph{tensor-valued time series} (TVTS) is a sample-based representation of a time-varying tensor, i.e., a function $F:\R\to \R^{n_1\times n_2\times\cdots\times n_\alpha}$.
A TVTS represents $F$ through a pair $(X,T)$ where $T\in\R^{n_t}$ is a vector of sample times and $X\in\R^{n_1\times n_2\times\cdots\times n_\alpha\times n_t}$ is an \emph{$\alpha+1$-index tensor} with the understanding that
\begin{align*}
  F(T_i)=\big[ X_{i_1,i_2,\dots,i_\alpha,i}\big]_{i_1=1,i_2=1,\dots,i_\alpha=1}^{i_1=n_1,i_2=n_2,\dots,i_\alpha=n_\alpha},
  \quad\forall i\in\{1,2,\dots,n_t\},
\end{align*}
i.e., the first $\alpha$ indices of $X$ represent the value of $F$ and the last index represents time.

\newpage
\section{Functions provided by \TTS}

\subsection{Signal processing functions}

These functions take one or two TVTS signals as input and operate on them to produce a TVTS output.

\begin{command}{tsDerivative}
\begin{lstlisting}
[dx,ts]=tsDerivative(x,ts)
[dx,ts]=tsDerivative(x,ts,invDts,invD2ts)
\end{lstlisting}
\end{command}

This function returns a TVTS \lstinline{(dx,ts)} that represents the time derivative of the input TVTS \lstinline{(x,ts)}.
The time derivative is computed assuming that the input time-series is piecewise quadratic.

The input parameters are:
\begin{enumerate}
\item \lstinline{x}: matrix representing the values of the TVTS at the sampling times. Can be
  \begin{enumerate}
  \item $n\times N$ \lstinline{double} matrix, or
  \item $n\times N$ \lstinline{Tcalculus} matrix.
  \end{enumerate}
\item \lstinline{ts}: vector/scalar representing the sampling times of the \lstinline{x} TVTS. Can be
  \begin{enumerate}
  \item $N\times1$ \lstinline{double} vector of sampling times;
  \item $1\times1$ \lstinline{double} scalar with the sampling interval;
  \item $N$ \lstinline{Tcalculus} vector of sampling times; or
  \item \lstinline{Tcalculus} scalar with the (fixed) sampling interval.
  \end{enumerate}
\item \lstinline{invDts}: (optional) vector/scalar representing the inverses of the sampling intervals.
  Can be
  \begin{enumerate}
  \item $N-1$ \lstinline{Tcalculus} vector of the inverses of the sampling intervals, i.e.,
\begin{lstlisting}
invDts = 1./(ts(2:end)-ts(1:end-1));
\end{lstlisting}
  \item \lstinline{Tcalculus} scalar with the inverse of the (fixed) sampling interval, i.e., \lstinline{invDts=1/ts}.
  \end{enumerate}
  This parameter is only needed when \lstinline{ts} is of type \lstinline{Tcalculus}.
  By including this variable, the output does not include divisions and is therefore more ``friendly'' for optimizations.
\item \lstinline{invD2ts}: $(N-2)$ \lstinline{Tcalculus} vector with the inverses of the sampling 2-intervals, i.e.,
\begin{lstlisting}
invD2ts = 1./(ts(3:end)-ts(1:end-2))
\end{lstlisting}
  This parameter is only needed when \lstinline{ts} is of type \lstinline{Tcalculus}.
  By including this variable, the output does not include divisions and is therefore more ``friendly'' for optimizations.
\end{enumerate}

The output parameters are:
\begin{enumerate}
\item \lstinline{dx}: matrix representing the values of the derivative of \lstinline{x} at the sampling times. Will be
  \begin{enumerate}
  \item $n\times N$ \lstinline{double} matrix, or
  \item $n\times N$ \lstinline{Tcalculus} matrix.
  \end{enumerate}
\item \lstinline{ts}: vector/scalar representing the sampling times of the \lstinline{dx} TVTS; it replicates exactly the input \lstinline{ts}.
\end{enumerate}

\joao{All remaining functions need to be updated.}

\begin{command}{tsDerivative2}
\begin{lstlisting}
[ddx,ts]=tsDerivative2(x,ts)
\end{lstlisting}
\end{command}

This function returns a TVTS \lstinline{(ddx,ts)} that represents the second time derivative of the input TVTS \lstinline{(x,ts)}.
The output sampling times \lstinline{ts} are equal to the input sampling times \lstinline{ts} and therefore the size of the derivative \lstinline{ddx} is equal to the size of the input \lstinline{x}.
The time derivatives are computed assuming that the input time-series is piecewise quadratic.

\begin{command}{tsDot}
\begin{lstlisting}
[y,ts]=tsDot(x1,x2,ts)
\end{lstlisting}
\end{command}

This function returns a scalar-valued time-series \lstinline{(y,ts)} that represents the dot product of two n-vector time-series \lstinline{(x1,ts)} and \lstinline{(x2,ts)}:
\begin{align*}
  y=x1'.*x2
\end{align*}
The size of the output \lstinline{y} is equal to the size of \lstinline{ts}.

\begin{command}{tsCross}
\begin{lstlisting}
[y,ts]=tsCross(x1,x2,ts)
\end{lstlisting}
\end{command}

This function returns a 3-vector time-series \lstinline{(y,ts)} that represents the cross product of two 3-vector time-series \lstinline{(x1,ts)} and \lstinline{(x2,ts)}:
\begin{align*}
  y=cross(x1,x2)
\end{align*}
The size of the output \lstinline{y} is equal to the size of the inputs \lstinline{x1} and \lstinline{x2}.

\begin{command}{tsQdot}
\begin{lstlisting}
[y,ts]=tsQdot(q1,q2,ts)
\end{lstlisting}
\end{command}

This function returns a 4-vector time-series \lstinline{(y,ts)} that represents the product of the two quaternions \lstinline{(q1,ts)} and \lstinline{(q2,ts)}:
\begin{align*}
  y=q1\cdot q2
\end{align*}
The size of the output \lstinline{y} is equal to the size of the inputs \lstinline{q1} and \lstinline{q2}.
However, if either \lstinline{(q1,ts)} or \lstinline{(q2,ts)} is a 3-vector time series, then the corresponding input quaternion is assumed pure, but the output quaternion is always a 4-vector time series.
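For reference, and assuming the usual Hamilton convention for quaternion multiplication (the internal component ordering used by \TTS{} is not specified here), if each quaternion is written as a scalar part $s_i$ and a vector part $\vec{v}_i$, the product computed by \lstinline{tsQdot} (and, with the first argument conjugated, by \lstinline{tsQdotStar} below) is
\begin{align*}
  q_1\cdot q_2 = \big(s_1 s_2 - \vec{v}_1\cdot\vec{v}_2,\;\, s_1\vec{v}_2 + s_2\vec{v}_1 + \vec{v}_1\times\vec{v}_2\big),
\end{align*}
and a pure quaternion is simply one whose scalar part is zero, which is how a 3-vector time series input is interpreted.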
\begin{command}{tsQdotStar}
\begin{lstlisting}
[y,ts]=tsQdotStar(q1,q2,ts)
\end{lstlisting}
\end{command}

This function returns a 4-vector time-series \lstinline{(y,ts)} that represents the product of the conjugate of the quaternion \lstinline{(q1,ts)} with the quaternion \lstinline{(q2,ts)}:
\begin{align*}
  y=q1^*\cdot q2
\end{align*}
The size of the output \lstinline{y} is equal to the size of the inputs \lstinline{q1} and \lstinline{q2}.
However, if either \lstinline{(q1,ts)} or \lstinline{(q2,ts)} is a 3-vector time series, then the corresponding input quaternion is assumed pure, but the output quaternion is always a 4-vector time series.

\begin{command}{tsRotation}
\begin{lstlisting}
[y,ts]=tsRotation(q,x,ts)
\end{lstlisting}
\end{command}

This function returns a 3-vector time-series \lstinline{(y,ts)} that represents the rotation of a 3-vector time-series \lstinline{(x,ts)} by a 4-vector time-series \lstinline{(q,ts)} representing a quaternion:
\begin{align*}
  y=q\cdot x\cdot q^*
\end{align*}
The size of the output \lstinline{y} is equal to the size of the input \lstinline{x}.

\begin{command}{tsRotationT}
\begin{lstlisting}
[y,ts]=tsRotationT(q,x,ts)
\end{lstlisting}
\end{command}

This function returns a 3-vector time-series \lstinline{(y,ts)} that represents the rotation of a 3-vector time-series \lstinline{(x,ts)} by the conjugate of a 4-vector time-series \lstinline{(q,ts)} representing a quaternion:
\begin{align*}
  y=q^* \cdot x\cdot q
\end{align*}
The size of the output \lstinline{y} is equal to the size of the input \lstinline{x}.

\subsection{Criteria}

The following functions take one or two TVTS signals as input and produce a scalar that determines a particular property of the TVTS.
Typically, they are used to define optimization criteria.

\begin{command}{tsIntegral}
\begin{lstlisting}
y=tsIntegral(x,ts)
\end{lstlisting}
\end{command}

This function returns a tensor \lstinline{y} that represents the time integral of the input TVTS \lstinline{(x,ts)}.
The size of the integral \lstinline{y} is equal to the size of the input \lstinline{x} with the last (time) dimension removed.
The integral is computed assuming that the input time-series is piecewise quadratic.

\subsection{Dynamical systems functions}

These functions produce \TC{} constraints that encode the solution of dynamical systems modeled by differential and difference equations.

\begin{command}{tsODE}
\begin{lstlisting}
constraint=tsODE(x,u,d,ts,fun)
\end{lstlisting}
\end{command}

This function returns a \lstinline{Tcalculus} constraint that encodes the solution to an ordinary differential equation of the form
\begin{align*}
  \dot x = f(x,u,d,t), \quad x\in\R^n, u\in\R^k, d\in\R^m.
\end{align*}
The input parameters are:
\begin{enumerate}
\item \lstinline{x}: $n\times N$ \lstinline{Tcalculus} matrix representing the state at the sampling times.
\item \lstinline{u}: $k\times N$ \lstinline{Tcalculus} matrix representing the input $u(t)$ at the sampling times. This input is assumed piecewise constant between sampling times, i.e.,
  \begin{align*}
    u(t)=u(t_k), &\forall t\in[t_k,t_{k+1}).
  \end{align*}
\item \lstinline{d}: $m\times N$ \lstinline{Tcalculus} matrix representing the input $d(t)$ at the sampling times. This input is assumed continuous between sampling times.
\item \lstinline{ts}: vector/scalar representing the sampling times of the \lstinline{x} TVTS.
  Can be
  \begin{enumerate}
  \item $N\times1$ \lstinline{double} vector of sampling times;
  \item $1\times1$ \lstinline{double} scalar with the sampling interval;
  \item $N$ \lstinline{Tcalculus} vector of sampling times; or
  \item \lstinline{Tcalculus} scalar with the (fixed) sampling interval.
  \end{enumerate}
\item \lstinline{fun}: handle to a \matlab{} function that computes the right-hand side of the ODE:
\begin{lstlisting}
fun(x,u,d,t).
\end{lstlisting}
\item \lstinline{method}: integration method used to solve the ODE. Can be one of the following:
  \begin{enumerate}
  \item \lstinline{'forwardEuler'} (explicit) forward Euler method:
    \begin{align*}
      x(t_{k+1})= x(t_k)+ (t_{k+1}-t_k) f\big(x(t_k),u(t_k),d(t_k),t_k\big)
    \end{align*}
  \item \lstinline{'backwardEuler'} (implicit) backward Euler method:
    \begin{align*}
      x(t_{k+1})= x(t_k)+ (t_{k+1}-t_k) f\big(x(t_{k+1}),u(t_{k}),d(t_{k+1}),t_{k+1}\big)
    \end{align*}
    Notice that this method still does ``forward integration'' on $u$ because this variable is assumed piecewise constant.
  \item \lstinline{'midPoint'} (implicit) mid-point method:
    \begin{align*}
      x(t_{k+1})= x(t_{k-1})+ (t_{k+1}-t_{k-1}) f\big(x(t_k),(u(t_{k-1})+u(t_{k}))/2,d(t_k),t_k\big)
    \end{align*}
    Notice that this method still does ``forward integration'' on $u$ because this variable is assumed piecewise constant.
  \end{enumerate}
\end{enumerate}

% \bibliographystyle{ieeetr}
% \bibliographystyle{abbrvnat}
% \bibliography{strings,jph,crossrefs}

% \printindex

\end{document}

%%% Local Variables:
%%% mode: latex
%%% eval: (tex-pdf-mode) ; only for pdflatex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7235002542, "avg_line_length": 35.8451025057, "ext": "tex", "hexsha": "e19e027b2195ed0ed1829d91d4fe131e496affac", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-04-04T15:33:59.000Z", "max_forks_repo_forks_event_min_datetime": "2017-09-21T15:39:33.000Z", "max_forks_repo_head_hexsha": "3d214c12a0ddfaecd9f091baa8d60121fe40e1ab", "max_forks_repo_licenses": [ "Intel" ], "max_forks_repo_name": "hespanha/tenscalc", "max_forks_repo_path": "doc/timeseries.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "3d214c12a0ddfaecd9f091baa8d60121fe40e1ab", "max_issues_repo_issues_event_max_datetime": "2021-12-09T08:48:45.000Z", "max_issues_repo_issues_event_min_datetime": "2018-05-20T18:12:46.000Z", "max_issues_repo_licenses": [ "Intel" ], "max_issues_repo_name": "hespanha/tenscalc", "max_issues_repo_path": "doc/timeseries.tex", "max_line_length": 268, "max_stars_count": 10, "max_stars_repo_head_hexsha": "3d214c12a0ddfaecd9f091baa8d60121fe40e1ab", "max_stars_repo_licenses": [ "Intel" ], "max_stars_repo_name": "hespanha/tenscalc", "max_stars_repo_path": "doc/timeseries.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-26T04:27:39.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-01T17:52:21.000Z", "num_tokens": 4886, "size": 15736 }
\chapter{Functions}

\section{Basics}
A function \emph{f} from A to B is an assignment of exactly one element of B to each element in A.

\begin{example}
Let \emph{f} be a function such that $\forall x \in \mathbb{R}$, $f(x) = x^{2}$.
You can observe that for any desired value of x, there is only one unique mapping to $f(x)$.
The reverse is not always true!
\end{example}

\begin{definition}
A function is considered \textbf{\underline{one-to-one or injective}} if $f(x) = f(y) \Leftrightarrow x = y$.
\end{definition}

\begin{definition}
A function from A to B is \textbf{\underline{onto or surjective}} if $\forall y \in B$ $\exists x \in A$ such that $f(x) = y$.
\end{definition}

\begin{definition}
A function that is both one-to-one and onto is called a \textbf{\underline{bijection}}.
If a function is bijective, then an inverse for the function exists!
\end{definition}

\begin{example}
Consider the function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that $f(x) = 12x + 5$.
Show that \emph{f} has an inverse and find its inverse.
\end{example}

\begin{solution}
First we have to show that f is injective.
Let us assume that $f(x_{1}) = f(x_{2})$.
Then, $12x_{1} + 5 = 12x_{2} + 5 \Leftrightarrow 12x_{1} = 12x_{2} \Leftrightarrow x_{1} = x_{2}$.
Thus, f is injective!

Next, we have to show that f is surjective.
Let $f(x) = y$.
Then, $y = 12x + 5$, so $x = \frac{y - 5}{12}$.
One can observe that for any real number y, there exists a real number x such that $x = \frac{y - 5}{12}$.
Thus, f is surjective!

Since f is a bijection, the inverse of f is $f^{-1}(x) = \frac{x - 5}{12}$.
\end{solution}

\section{Floor and Ceiling Functions}
\begin{definition}
The \textbf{floor} of any real number returns the greatest integer that is less than or equal to the real number.
\end{definition}

\begin{definition}
The \textbf{ceiling} of any real number returns the smallest integer that is greater than or equal to the real number.
\end{definition}

\begin{example}
\[
\begin{split}
&\lfloor 2.5 \rfloor = 2, \lfloor -3.4 \rfloor = -4, \lfloor 7 \rfloor = 7 \\
&\lceil 2.5 \rceil = 3, \lceil -3.4 \rceil = -3, \lceil \pi \rceil = 4
\end{split}
\]
\end{example}

\subsection{Proofs Regarding Floor and Ceiling Functions}
I think it would be quite redundant to show proofs regarding floor and ceiling functions.
Questions that are similar to the ones shown in the \href{http://nook.cs.ucdavis.edu/~koehl/Teaching/ECS20/Lectures/Lecture5_notes.pdf}{\underline{\emph{lecture notes}}} generally show up on the midterms/final.
Thus, I have omitted this topic, as you can just read it up from the notes.

\section{Growth of Functions}
Oftentimes, your task will be to determine the running time of an algorithm that you just developed.
These running times are represented as $O()$ (Big Oh), $\Omega()$ (Big Omega), and $\Theta()$ (Big Theta), respectively.

\subsection{Big-O Notation}
The Big-O notation is often used to represent the worst case analysis.
Think of Big-O as an upper bound for functions.
The function f(x) is O(g(x)) if $[f(x) \leq c \times g(x)]$ for all x > k, where c is a positive constant.

\begin{example}
Show that $x^{2} + 5x + 3$ is $O(x^{2})$.
\end{example}

\begin{solution}
In order for $x^{2} + 5x + 3$ to be $O(x^{2})$, we need $x^{2} + 5x + 3 \leq cx^{2}$.
We know that if $x > 1$, then $5x \leq 5x^{2}$ and $3 \leq 3x^{2}$.
Adding these up, we get $x^{2} + 5x + 3 \leq x^{2} + 5x^{2} + 3x^{2} \leq 9x^{2}$.
Thus, by selectively choosing $k = 1$ and $c = 9$, we can show that $x^{2} + 5x + 3$ is $O(x^{2})$.
For further verification and understanding, let us graph the functions $x^{2} + 5x + 3$ and $9x^{2}$.

\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
]
% Below the red parabola is defined
\addplot [
    domain=0:50,
    samples=100,
    color=red,
]
{x^2 + 5*x + 3};
\addlegendentry{$x^2 + 5x + 3$}
% Here the blue parabola is defined
\addplot [
    domain=0:50,
    samples=100,
    color=blue,
]
{9*x^2};
\addlegendentry{$9x^{2}$}
\end{axis}
\end{tikzpicture}

As you can see, for $x \geq 1$, $x^2 + 5x + 3$ is never greater than $9x^{2}$.
\end{solution}

\subsection{Big-Omega Notation}
The Big-Omega notation is often used to represent the best case analysis.
Think of Big-Omega as a lower bound for functions.
The function f(x) is $\Omega(g(x))$ if $[f(x) \geq c \times g(x)]$ for all x > k, where c is a positive constant.

\begin{example}
Show that $x^{3} + 6x^{2} + 3$ is $\Omega(x^{3})$.
\end{example}

\begin{solution}
In order for $x^{3} + 6x^{2} + 3$ to be $\Omega(x^{3})$, we need $x^{3} + 6x^{2} + 3 \geq cx^{3}$.
We know that if $x > 1$, then $x^{3} + 6x^{2} + 3 \geq x^{3}$.
Thus, by selectively choosing $k = 1$ and $c = 1$, we can show that $x^{3} + 6x^{2} + 3$ is $\Omega(x^{3})$.

For further verification and understanding, let us graph the functions $x^{3} + 6x^{2} + 3$ and $x^{3}$.

\begin{tikzpicture}
\begin{axis}[
    axis lines = left,
    xlabel = $x$,
    ylabel = {$f(x)$},
]
% Below the red parabola is defined
\addplot [
    domain=0:50,
    samples=100,
    color=red,
]
{x^3 + 6*x^2 + 3};
\addlegendentry{$x^{3} + 6x^{2} + 3$}
% Here the blue parabola is defined
\addplot [
    domain=0:50,
    samples=100,
    color=blue,
]
{x^3};
\addlegendentry{$x^{3}$}
\end{axis}
\end{tikzpicture}

As you can see, $x^{3} + 6x^{2} + 3$ is always greater than $x^{3}$.
\end{solution}

\subsection{Big-Theta Notation}
The Big-Theta notation is often used to represent the average case analysis.
Think of Big-Theta as a tight bound for functions.
The function f(x) is $\Theta(g(x))$ if f(x) is $O(g(x))$ and f(x) is $\Omega(g(x))$.

\begin{example}
Show that $x^{2} + 4x + 7$ is $\Theta(x^{2})$.
\end{example}

\begin{solution}
First, we have to show that $x^{2} + 4x + 7$ is $O(x^{2})$, i.e., that $x^{2} + 4x + 7 \leq cx^{2}$.
We know that for all $x > 1$, $4x \leq 4x^{2}$ and $7 \leq 7x^{2}$.
Adding these up, we can say that $x^{2} + 4x + 7 \leq 12x^{2}$.
Thus, by selectively choosing $k = 1$ and $c = 12$, we can conclude that $x^{2} + 4x + 7$ is $O(x^{2})$.

Next, we have to show the Big-Omega relation between these two functions.
We know that for all $x > 1$, $x^{2} + 4x + 7 \geq x^{2}$.
Thus, by selectively choosing $k = 1$ and $c = 1$, we can conclude that $x^{2} + 4x + 7$ is $\Omega(x^{2})$.

Thus, $\forall x > 1$ we can infer that:
\[
x^{2} \leq x^{2} + 4x + 7 \leq 12x^{2}
\]
Thus, $x^{2} + 4x + 7$ is $\Theta(x^{2})$.
\end{solution}

\subsection{Why do we need all this?}
Oftentimes in algorithm analysis, you deal with large chunks of data.
If, for example, $n = 10$ where n is the number of elements, then it doesn't really matter how good or bad our Big-Oh is.
What if $n = 1,000,000$?
Then $n^{2}$ and $n^{3}$ would be significantly different.
This is why we study the growth of these functions.
These notations are also sometimes referred to as time complexities.
You will learn about time complexities extensively in ECS 60 and ECS 122A.
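To make the growth comparison above concrete, here is a rough back-of-the-envelope calculation, assuming (purely for illustration) a machine that performs $10^{9}$ elementary operations per second.
For $n = 10^{6}$:
\[
n^{2} = 10^{12} \Rightarrow \frac{10^{12}}{10^{9}} = 10^{3} \textrm{ seconds} \approx 17 \textrm{ minutes},
\qquad
n^{3} = 10^{18} \Rightarrow \frac{10^{18}}{10^{9}} = 10^{9} \textrm{ seconds} \approx 32 \textrm{ years}.
\]
In other words, for large inputs the difference between an $O(n^{2})$ and an $O(n^{3})$ algorithm is not a matter of fine-tuning; it is the difference between feasible and infeasible.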
{ "alphanum_fraction": 0.5870876653, "avg_line_length": 33.7272727273, "ext": "tex", "hexsha": "e242f62ef8cb57bde17812e3497665c799a70c5d", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-06-19T01:54:16.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-19T01:54:16.000Z", "max_forks_repo_head_hexsha": "3123a6e883b7379c865df9287da2b969e7af20bb", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "UC-Davis-CS-Tutoring/Study-Guides", "max_forks_repo_path": "ECS 20 Study Guide/Functions.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "3123a6e883b7379c865df9287da2b969e7af20bb", "max_issues_repo_issues_event_max_datetime": "2018-12-30T05:45:28.000Z", "max_issues_repo_issues_event_min_datetime": "2018-12-19T03:33:59.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "UC-Davis-CS-Tutoring/Study-Guides", "max_issues_repo_path": "ECS 20 Study Guide/Functions.tex", "max_line_length": 108, "max_stars_count": null, "max_stars_repo_head_hexsha": "3123a6e883b7379c865df9287da2b969e7af20bb", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "UC-Davis-CS-Tutoring/Study-Guides", "max_stars_repo_path": "ECS 20 Study Guide/Functions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2584, "size": 7791 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% CS240: Programming in C
% Copyright 2016 Pejman Ghorbanzade <[email protected]>
% Creative Commons Attribution-ShareAlike 4.0 International License
% https://github.com/ghorbanzade/UMB-CS240-2016S/blob/master/LICENSE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\def \topDirectory {.}
\def \resDirectory {\topDirectory/src/c/main/ls14}
\def \texDirectory {\topDirectory/src/tex}
\def \styDirectory {\texDirectory/sty}
\def \cfgDirectory {\texDirectory/cfg}
\def \imgDirectory {\texDirectory/img}
\documentclass[compress]{beamer}
%\mode<presentation>
%\usetheme{default}
\usepackage{\styDirectory/directives}
\input{\cfgDirectory/config}
\usepackage{\styDirectory/beamerthemePejman}
\doc{number}{14}
%\setbeamertemplate{footline}[text line]{}
\begin{document}
\prepareCover

\section{Memory Management}

\subsection{Static Memory Allocation}

\begin{slide}
  \begin{block}{Introduction}
    Static memory is memory that is allocated at compile time, before the associated program is executed.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Lifetime}
    Static variables persist for the lifetime of the program and are accessible from within any function defined in the same file in which the variable is declared.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Syntax}
    Static memory can be allocated using the \texttt{static} keyword.
    \begin{terminal}
static int a = 4;
    \end{terminal}
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Example}
    \inputminted[fontsize=\scriptsize, firstline=10, linenos]{c}{\resDirectory/static.c}
  \end{block}
\end{slide}

\subsection{Automatic Memory Allocation}

\begin{slide}
  \begin{block}{Introduction}
    Automatic memory is memory that is allocated in the \alert{stack} frame of the function in which it is declared.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Lifetime}
    Automatic variables have a scope \emph{local} to the function they are declared in.
    Memory allocated to an automatic variable is freed once its declaring function exits the stack (returns).
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Example}
    \inputminted[fontsize=\scriptsize, firstline=10, linenos]{c}{\resDirectory/automatic.c}
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Limitation}
    The size of the memory to be allocated for static or automatic variables must be constant and known to the compiler.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Problem}
    What if the required size for a variable is not known at compile time?
  \end{block}
\end{slide}

\subsection{Dynamic Memory Allocation}

\begin{slide}
  \begin{block}{Objective}
    Write a program \texttt{dynamic-array.c} that continues to prompt the user for integers as long as the number entered is not negative.
    Your program should print all numbers previously entered once the user enters a negative integer.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Introduction}
    C allows \emph{dynamic memory allocation}, with which the memory allocated to a variable can be explicitly manipulated.
    Memory dynamically assigned to a variable is \emph{borrowed} from \emph{the heap}.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Heap Memory}
    \textit{The heap} is a region of computer memory that is not automatically managed for the program (unlike the stack).
    Heap memory may be manually managed by the programmer.
  \end{block}
\end{slide}

\begin{slide}
  \begin{block}{Allocating Heap Memory}
    C provides a set of standard library functions to allocate memory on the heap.
If the allocation is successful, the address of the allocated memory can be stored in a pointer and used the same way as the memory allocated on the stack. \end{block} \end{slide} \begin{slide} \begin{block}{\texttt{malloc()} Function} Declaring an array of fixed-size 10 on the stack memory. \begin{terminal} int array[10]; \end{terminal} Declaring an array with initial size 10 on the heap memory. \begin{terminal} int *array = malloc(10 * sizeof(int)); \end{terminal} \end{block} \end{slide} \begin{slide} \begin{block}{Note} A request for allocated memory of a certain size may be rejected by the operating system, in which case, \texttt{malloc()} will return \texttt{NULL}. \end{block} \end{slide} \begin{slide} \begin{block}{Error Checking} \begin{terminal} int *array = malloc(10 * sizeof(int)); if (array == NULL) goto ERROR; \end{terminal} \end{block} \end{slide} \begin{slide} \begin{block}{Memory Leak} Memory allocated on the heap will persist as long as it is not freed. If a program terminates without freeing a memory block reserved on the heap for it, the memory block will remain inaccessible by other processes, causing \emph{memory leak}. \end{block} \end{slide} \begin{slide} \begin{block}{\texttt{free()} Function} The function \texttt{void free(void *ptr)} deallocates the heap memory previously allocated by the program. \end{block} \end{slide} \begin{slide} \begin{block}{Note} Function \texttt{void *malloc(size\_t size)} does not initialize the allocated memory. \end{block} \end{slide} \begin{slide} \begin{block}{\texttt{calloc()} Function} Function \texttt{void *calloc(size\_t num, size\_t size)} can be used to dynamically allocate \texttt{num} elements of size \texttt{size} on the heap and initialize all the elements with zero. \begin{terminal} int *array = calloc(10, sizeof(int)); if (array == NULL) goto ERROR; \end{terminal} \end{block} \end{slide} \begin{slide} \begin{block}{Note} Since \texttt{calloc()} initializes the allocated memory, a call to \texttt{calloc()} is slightly more costly than a call to \texttt{malloc()}. \end{block} \end{slide} \begin{slide} \begin{block}{Example} \inputminted[ fontsize=\scriptsize, firstline=10, linenos ]{c}{\resDirectory/malloc.c} \end{block} \end{slide} \begin{slide} \begin{block}{Memory Reallocation} A dynamically allocated memory block can be reallocated to another memory block of a different size. \end{block} \end{slide} \begin{slide} \begin{block}{Function \texttt{realloc()}} C provides the \texttt{realloc()} function for memory reallocation. \begin{terminal} void *realloc(void *ptr, size_t size); \end{terminal} \end{block} \end{slide} \begin{slide} \begin{block}{Function \texttt{realloc()}} \begin{terminal} int *ptr1 = malloc(5 * sizeof(int)); size_t size = 10 * sizeof(int); int *ptr2 = realloc(ptr1, size); \end{terminal} If reallocation is successful: \begin{itemize} \item[] A new memory block of size \texttt{size} will be allocated. \item[] Content of memory block pointed by \texttt{ptr1} will be copied to the new memory block. \item[] Memory block pointed by \texttt{ptr1} is deallocated. \item[] Function returns address of newly allocated memory block. \end{itemize} \end{block} \end{slide} \begin{slide} \begin{block}{Function \texttt{realloc()}} \begin{terminal} int *ptr1 = malloc(5 * sizeof(int)); size_t size = 10 * sizeof(int); int *ptr2 = realloc(ptr1, size); \end{terminal} If reallocation fails: \begin{itemize} \item[] Memory block pointed by \texttt{ptr1} remains allocated. \item[] Function returns \texttt{NULL}. 
\end{itemize}
\end{block}
\end{slide}
\begin{slide}
\begin{block}{Example}
\inputminted[
  fontsize=\scriptsize,
  firstline=10,
  linenos
  ]{c}{\resDirectory/realloc.c}
\end{block}
\end{slide}
\begin{slide}
\begin{block}{Notes}
\begin{itemize}
\item[] Accessing memory allocated on the heap is slightly slower than accessing memory allocated on the stack.
\item[] Excessive heap memory allocation may lead to memory fragmentation.
\end{itemize}
\end{block}
\end{slide}
\begin{slide}
\begin{figure}
\includegraphics[width=0.5\textwidth]{\imgDirectory/malloc.jpg}
\end{figure}
\end{slide}
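% A minimal sketch of one possible solution to the dynamic-array.c objective,
% using the malloc()/realloc() pattern discussed in the previous slides.
% The doubling growth strategy, variable names, and prompt text are
% illustrative choices, not requirements of the exercise.
\begin{slide}
\begin{block}{Sketch: \texttt{dynamic-array.c}}
A minimal sketch of one possible solution: the array is grown with \texttt{realloc()}, doubling its capacity whenever it fills up.
\begin{terminal}
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *numbers = NULL;  /* heap array, grown on demand  */
    int count = 0;        /* elements stored so far       */
    int capacity = 0;     /* elements currently allocated */
    int value;

    printf("enter integers (negative to stop):\n");
    while (scanf("%d", &value) == 1 && value >= 0) {
        if (count == capacity) {   /* array is full: grow it */
            int new_cap = (capacity == 0) ? 4 : 2 * capacity;
            int *tmp = realloc(numbers, new_cap * sizeof(int));
            if (tmp == NULL) {     /* old block is kept on failure */
                free(numbers);
                return EXIT_FAILURE;
            }
            numbers = tmp;
            capacity = new_cap;
        }
        numbers[count++] = value;
    }
    for (int i = 0; i < count; i++)
        printf("%d\n", numbers[i]);
    free(numbers);  /* release the heap block before exiting */
    return EXIT_SUCCESS;
}
\end{terminal}
Storing the result of \texttt{realloc()} in a temporary pointer preserves access to the original block if reallocation fails.
\end{block}
\end{slide}
\end{document}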
{ "alphanum_fraction": 0.7206957399, "avg_line_length": 22.9971014493, "ext": "tex", "hexsha": "e7392fa9b01709c1ab0eb9f6721da5ecd88d96e7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_forks_repo_path": "src/tex/main/ls14/ls14.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_issues_repo_issues_event_max_datetime": "2016-06-20T03:04:35.000Z", "max_issues_repo_issues_event_min_datetime": "2016-05-16T23:55:39.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_issues_repo_path": "src/tex/main/ls14/ls14.tex", "max_line_length": 193, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c32c866cbe5f7d7044f51f2bcd689b33bda61980", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ghorbanzade/UMB-CS240-2016S", "max_stars_repo_path": "src/tex/main/ls14/ls14.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-03T18:41:24.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-03T18:41:24.000Z", "num_tokens": 2209, "size": 7934 }
\chapter{For Software Developers}

\section{Code Style Guide}

This section defines a style guide which should be followed by all code that is written in \drake.  Being consistent with this style will make the code easier to read, debug, and maintain.
The section was inspired by the C++ style guide for ROS: \url{http://www.ros.org/wiki/CppStyleGuide}.
It makes use of the following shortcuts for naming schemes:
\begin{itemize}
\item \mcode{CamelCased}: The name starts with a capital letter, and has a capital letter for each new word, with no underscores.
\item \mcode{camelCased}: Like \mcode{CamelCased}, but with a lower-case first letter.
\item \mcode{under_scored}: The name uses only lower-case letters, with words separated by underscores.
\item \mcode{Under_scored}: The name starts with a capital letter, then uses \mcode{under_scored}.
\item \mcode{ALL_CAPITALS}: All capital letters, with words separated by underscores.
\end{itemize}

\noindent Note: Some of the files in the repository were written before this style guide.  If you find one, rather than trying to change it yourself, log a bug in bugzilla.

\begin{itemize}
\item In general:
 \begin{itemize}
 \item Robot names are \mcode{CamelCased}.
 \end{itemize}
\item In Java:
 \begin{itemize}
 \item Class names (and therefore class filenames/directories) are \mcode{CamelCased}.
 \item Method names are \mcode{camelCased}.
 \item Variable names are \mcode{under_scored}.
 \item Member variables are \mcode{under_scored} with a leading \mcode{m_} added.
 \item Global variables are \mcode{under_scored} with a leading \mcode{g_} added.
 \item Constants are \mcode{ALL_CAPITALS}.
 \item Every class and method should have a brief ``javadoc'' associated with it.
 \item All Java classes should be in packages relative to the locomotion svn root, e.g.: \\ \mcode{package drake.examples.Pendulum;} \\ \mcode{package robots.compassTripod;}
 \end{itemize}
\item In \matlab:
 \begin{itemize}
 \item All of the above rules hold, except:
 \item Member variables need not start with \mcode{m_}, since the requirement that they are referenced with \mcode{obj.var} makes the distinction from local variables clear.
 \item Variable names that describe a matrix (instead of a vector/scalar) are \mcode{Under_scored}.
 \item Calls to \matlab class member functions use \mcode{obj = memberFunc(obj,...) }.
 \item All methods begin by checking their inputs (e.g. with \mcode{typecheck.m}).
 \end{itemize}
\item In C++:
 \begin{itemize}
 \item All of the above rules still hold.
 \item Filenames for \mcode{.cpp} and \mcode{.h} files which define a single class are \mcode{CamelCased}.
 \item Filenames for \mcode{.cpp} and \mcode{.h} files which define a single method are \mcode{camelCased}.
 \item Filenames for any other \mcode{.cpp} and \mcode{.h} files are \mcode{under_scored}.
 \end{itemize}
\item In LCM:
 \begin{itemize}
 \item LCM types are \mcode{under_scored} with a leading \mcode{lcmt_} added.  If the type is specific to a particular robot, then it begins with \mcode{lcmt_robotname_}.
 \item Channel names are \mcode{under_scored}, and ALWAYS begin with \mcode{robotname_}.  \emph{Although robot names are \mcode{CamelCased}, their use in LCM channels and types should be all lowercase.}
 \item Variable names in LCM types follow the rules above.
 \end{itemize}
%\item In Makefiles:
% garratt says use CMAKE??
\end{itemize}

\section{Check-In Procedures}

This section defines the requirements that must be met before anything is committed to the main branch (\mcode{trunk}) of the \drake repository.
\subsection{Unit tests}
Whenever possible, add test files (in any \mcode{test} subdirectory) for any code that you have added or modified.  These take a little time initially, but can save incredible amounts of time later.

\subsection{Run all tests}
Before committing anything to the repository, the code must pass all of the unit tests.  Use the following script to check: \mcode{drake/runAllTests.m}

\subsection{Matlab Reports}
There are a number of helpful \matlab reports that can be run using \mcode{Desktop>Current Directory}, then \mcode{Action>Reports} (the Action menu is the gear icon).
Before a commit, your code should pass the following reports:
\begin{itemize}
\item Contents report
\item Help report (with all but the Copyright options checked)
\end{itemize}
and you should run the M-Lint Code Check report to look for useful suggestions.

\subsection{Contributing Code}
If you don't have write permissions to the repository, then please make sure that your changes meet the requirements above, generate a patch by running
\begin{lstlisting}
svn diff > my_patch.diff
\end{lstlisting}
in your main \drake directory, and email the resulting diff file to Russ.

\section{Version Number}
The version number has the format W.X.Y.Z, where
\begin{itemize}
\item W = major release number
\item X = minor release number
\item Y = development stage*
\item Z = build
\end{itemize}
* Development stage is one of four values:
\begin{itemize}
\item 0 = alpha (buggy, not for use)
\item 1 = beta (mostly bug-free, needs more testing)
\item 2 = release candidate (rc) (stable)
\item 3 = release
\end{itemize}
For example, version 2.1.2 denotes a release candidate for minor release 1 of major release 2.
Z (build) is optional.  It is probably not needed, but could simply refer to the revision of the repository at the time of the snapshot.
Numbered versions should be referenced via tags.
{ "alphanum_fraction": 0.7616104869, "avg_line_length": 44.5, "ext": "tex", "hexsha": "69679eb7fedab295ae09d21539f52b56f0ff47ff", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0a124c044357d5a29ec7e536acb747cfa5682eba", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "cmmccann/drake", "max_forks_repo_path": "doc/developers.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0a124c044357d5a29ec7e536acb747cfa5682eba", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "cmmccann/drake", "max_issues_repo_path": "doc/developers.tex", "max_line_length": 349, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0a124c044357d5a29ec7e536acb747cfa5682eba", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "cmmccann/drake", "max_stars_repo_path": "doc/developers.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-12T14:32:29.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-12T14:32:29.000Z", "num_tokens": 1436, "size": 5340 }
\chapter{Results}\label{chapter:results}

To search for and quantify the significance of the production of signals $\left(V, H, \Zprime\right)$, and to set limits in the absence of observation, a Bayesian method is used to calculate a posterior likelihood as a function of the number of signal events for the signal hypotheses under consideration~\cite{EXOT-2010-07}.
In this method, a final conditional probability of the parameters of interest given the observed data (the ``posterior'') is built by integrating over the nuisance parameters (``marginalization'') with a \gls{MCMC} procedure.
The posterior distribution is used to gauge the statistical significance of the fitted signal and to set $95\%$ credibility level (CL)%
\footnote{As discussed in \Cref{section:intervals_and_limits}, \emph{credibility levels} for Bayesian credible intervals are different from \emph{confidence levels} for frequentist confidence intervals.}
upper limits on the cross section times acceptance times efficiency.

\section{Measurement of Standard Model Signals}

To measure the Standard Model signals, a model comprising the Standard Model $\Vjets$, $\Hbb$, and $\ttbar$ signal templates along with the QCD multijet model is fit to the data.
The normalization of the $\ttbar$ component of the model is constrained with the scale factor obtained in the dedicated $\ttbar$ Control Region.
This fit simultaneously extracts the signal strengths of the $\Vjets$ and $\Hbb$ processes, $\mu_{V}$ and $\mu_{H}$ respectively, which are unconstrained.
The comparison of the model to the data, after marginalization of the nuisance parameters, is shown in \Cref{fig:post_fit}.

\begin{figure}[htbp]
  \centering
  \includegraphics[width=\linewidth]{results/SRPostFit_mass}
  \caption[Plot of post-fit model with observed data and with model components subtracted.]{%
    The top panel shows the post-fit plot of the SM Higgs boson, $\Vjets$, $\ttbar$ and QCD model with the observed data.
    The middle panel shows the post-fit model and the data with the QCD and $\ttbar$ components of the model subtracted, highlighting the large resonance from $\Vjets$.
    The bottom panel shows the post-fit model and the data with the QCD, $\Vjets$, and $\ttbar$ components of the model subtracted, highlighting a small excess of events near $125~\GeV$~\cite{ATLAS-CONF-2018-052}.
  }
  \label{fig:post_fit}
\end{figure}

\clearpage

\subsection{Observation of boosted $V\to b\bar{b}$}

The observed signal strength for the $\Vjets$ process is
\[
  \mu_{V} = 1.5 \pm 0.22~\mathrm{(stat.)}^{+0.29}_{-0.25}~\mathrm{(syst.)} \pm 0.18~\mathrm{(th.)}\,,
\]
corresponding to an observed significance of $5\,\sigma$ with an expected significance of $4.8\,\sigma$.
This constitutes the first direct observation in ATLAS of boosted vector bosons decaying to bottom quark pairs at a center-of-mass energy of $\sqrt{s}=13~\TeV$, following the lower-momentum Run-I measurement of high transverse momentum $\Zbb$~\cite{STDM-2013-04}.
From the results of simulation, seen in \Cref{table:cutflow}, the expected composition of the $\Vjets$ excess is approximately $24\%$ $W+\mathrm{jets}$ and approximately $76\%$ $Z+\mathrm{jets}$.

\subsection{Measurement of boosted $H\to b\bar{b}$}

For the $\Hbb$ process, the observed signal strength is
\[
  \mu_{H} = 5.8 \pm 3.1~\mathrm{(stat.)} \pm 1.9~\mathrm{(syst.)} \pm 1.7~\mathrm{(th.)}\,,
\]
which, given the uncertainties, is consistent with the background-only hypothesis at $1.6\,\sigma$ with an expected sensitivity of $0.28\,\sigma$.
This constitutes a measurement of boosted Higgs bosons decaying to bottom quark pairs, though not a direct observation.
The combined posterior distributions of $\mu_{V}$ and $\mu_{H}$ are shown in \Cref{fig:signal_strength_contour}, showing the agreement between the best-fit values of the model and the Standard Model prediction of $\mu_{V} = \mu_{H} = 1$.
Given their respective uncertainties, the $\mu_{V}$ best-fit value is compatible with the SM prediction at $1.32\,\sigma$, and the $\mu_{H}$ best-fit value is compatible at $1.2\,\sigma$.

\begin{figure}[htbp]
  \centering
  \includegraphics[width=\linewidth]{results/contourFinal}
  \caption[Compatibility of best-fit values for $\mu_{H}$ and $\mu_{V}$ with Standard Model predicted values.]{%
    Combined posterior distributions of $\mu_{H}$ and $\mu_{V}$ from the signal region fit.
    It is seen that the best-fit values for the signal processes lie within the $2\,\sigma$ interval of the Standard Model prediction~\cite{ATLAS-CONF-2018-052}.}
  \label{fig:signal_strength_contour}
\end{figure}

\clearpage

\section{Limits on $\Zprime$ production}

Following the measurement of the Standard Model signals, a search for exotic signals in the \largeR jet mass distribution is performed.
The first step in the search is to apply the \BumpHunter{} search procedure~\cite{Aaltonen:2008vt,Choudalakis:2011qn}.
Given that the effect of a Higgs boson with SM signal strength is smaller than the expected uncertainty on the $\Zprime$ limit, the SM Higgs template is excluded from the model, and only the $\Vjets$ and $\ttbar$ templates are considered along with the QCD parametric model.
With this model, a fit to the data is performed with the full set of systematic uncertainties, resulting in the best-fit values for the model nuisance parameters used in the analysis.
With these post-fit shapes, the \BumpHunter{} goodness-of-fit algorithm scans the mass range looking for significant deviations from the background-only model.
For the given model, the largest deviation from the background is found in the \largeR jet mass interval of $\left[115,130\right]~\GeV$, as seen in \Cref{fig:BumpHunter_scan}.
This deviation has a \BumpHunter{} global \pvalue{} of $0.54$, indicating that the background-only model is quite consistent with the data.
The \BumpHunter{} test statistic, described in detail in~\cite{Choudalakis:2011qn}, is calculated for the given model and for pseudo-data sampled from the background-only hypothesis (the null hypothesis, $H_{0}$) for various widths of fit windows.
The \BumpHunter{} \pvalue{} is then calculated from the most discrepant test statistic among all the window-width choices, in a treatment that creates a hypertest --- a union of multiple hypothesis tests --- allowing the ``trial factor''~\cite{Gross:2010qma} to be accounted for in the calculation and resulting in a global \pvalue{}, which makes the result quite robust.
For each bin, the significance of the deviation between the model and the data is also calculated.
The significance for each bin is defined as the $z$-score for the observed Poisson \pvalue{}~\cite{Choudalakis:2012}, and given the integral form of the relationship between a Normal \pvalue{}, $p$, and the $z$-score, $z$,
\[
  p = \int_{z}^{\infty} \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt = 1 - \mathrm{CDF}\left(z\right) = \frac{1}{2}\left(1 - \mathrm{erf}\left(\frac{z}{\sqrt{2}}\right)\right)\,,
\]
shown in \Cref{fig:pvalues_from_zscores}, it is seen that the $z$-score can be evaluated numerically%
\footnote{The inverse error function does not have a closed-form expression in terms of elementary functions.}
as
\[
  z\textrm{-score} = \sqrt{2}\, \mathrm{erf}^{-1}\left(1-2p\right).
\]

\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.75\linewidth]{results/pvalues_from_zscores.eps}
  \caption[Plot of \pvalue{} as a function of $z$-score.]{%
    Plot of $\frac{1}{2}\left(1 - \mathrm{erf}\left(z/\sqrt{2}\,\right)\right)$, which describes the corresponding \pvalue{} for a given $z$-score.}\label{fig:pvalues_from_zscores}
\end{figure}

In the absence of excesses in the data that could correspond to new physics, $95\%$ credibility level upper limits, competitive with other published ATLAS limits~\cite{EXOT-2017-32} as seen in \Cref{fig:darkmatter_coupling_summary}, are set on signals from dark matter mediators with democratic decays to quarks for masses between $100~\GeV$ and $200~\GeV$.
These limits are shown in terms of cross section times branching ratio, acceptance and efficiency in \Cref{fig:cross_section_limits}, and in terms of the $g_{q}$ parameter%
\footnote{The limits on $g_{q}$ are determined from the limits on $\sigma \times \epsilon \times A \times BR$, which used signal events simulated with $g_{q} = 0.25$, meaning that, as $\sigma\left(\Zprime \to q\bar{q}\right) \propto g_{q}^{2}$, $g_{q} = 0.25 \sqrt{\sigma/\sigma_{g_{q} = 0.25}}$\,.}
that controls the coupling of the \gls{dark matter mediator} to quarks, and hence determines the cross section, in \Cref{fig:gq_limits}.
From these limits, an exotic dark matter mediator $\Zprime$ with $g_{q}=0.25$ is excluded for $m_{\Zprime} < 200~\GeV$ at the $95\%~\mathrm{CL}$.
As seen in \Cref{fig:gq_limits}, the smallest coupling to quarks excluded at the $95\%~\mathrm{CL}$ is $g_{q} = 0.124$ for a mass hypothesis of $m_{\Zprime} = 105~\GeV$.
The limits become less restrictive at higher masses as the systematic uncertainty on the $t\bar{t}$ scale factor becomes dominant and greatly affects the total uncertainty, as seen in \Cref{table:systematic_uncertainties}.
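As a rough numerical illustration of the \pvalue{}--$z$-score relation above (the values in this paragraph are approximate and serve only as a cross-check), the global \BumpHunter{} \pvalue{} of $0.54$ corresponds to
\[
  z \simeq \sqrt{2}\,\mathrm{erf}^{-1}\left(1 - 2\times 0.54\right) \approx -0.1\,,
\]
i.e.\ the most discrepant interval is entirely consistent with the background-only hypothesis, whereas the $5\,\sigma$ observation of the $\Vjets$ signal corresponds to a one-sided \pvalue{} of $\frac{1}{2}\left(1 - \mathrm{erf}\left(5/\sqrt{2}\right)\right) \approx 2.9\times10^{-7}$.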
\begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{results/Final_Limits} \caption[The $95\%$ credibility level upper limit on the cross section times acceptance times branching ratio times efficiency for the $\Zprime$ dark matter mediator model.]{% The $95\%$ credibility level upper limit on the cross section times acceptance times branching ratio times efficiency obtained from the invariant mass distribution for the $\Zprime$ dark matter mediator model~\cite{ATLAS-CONF-2018-052}.} \label{fig:cross_section_limits} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{results/gqlimit} \caption[The $95\%$ credibility level upper limit on the $g_{q}$ parameter for the $\Zprime$ dark matter mediator model.]{% The $95\%$ credibility level upper limit on the $g_{q}$ parameter that controls the decay width of the $\Zprime$ into SM quarks obtained from the invariant mass distribution for the $\Zprime$ dark matter mediator model~\cite{ATLAS-CONF-2018-052}.} \label{fig:gq_limits} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{results/BH_final} \caption[\BumpHunter{} fit of post-fit background-only model to data.]{% The reconstructed mass distribution $m_{J}$ with the event reconstruction and selection as described in the text. The solid red line depicts the background prediction, consisting of the non-resonant dijet, $\Vjets$, and $\ttbar$ processes. The vertical blue lines indicate the most discrepant interval identified by the \BumpHunter{} algorithm between $115~\GeV$ and $130~\GeV$. Without including systematic uncertainties, the probability that fluctuations of the background model would produce an excess at least as significant as the one observed in the data anywhere in the distribution, the \BumpHunter{} probability, is $0.54$. The bottom panel shows the bin-by-bin significances~\cite{Choudalakis:2012} of the differences between the data and the fit, considering only statistical fluctuations~\cite{ATLAS-CONF-2018-052}. } \label{fig:BumpHunter_scan} \end{figure}
{ "alphanum_fraction": 0.7604288675, "avg_line_length": 87.3937007874, "ext": "tex", "hexsha": "39f8014d07b77a8a8171860501b64f13a32f65ff", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-04T21:35:30.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-04T21:35:30.000Z", "max_forks_repo_head_hexsha": "7ab7210359495fe7ea1e610be237560ec35f63a6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "matthewfeickert/feickert-thesis", "max_forks_repo_path": "src/results.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "7ab7210359495fe7ea1e610be237560ec35f63a6", "max_issues_repo_issues_event_max_datetime": "2019-10-03T11:36:53.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-03T11:36:53.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "matthewfeickert/feickert-thesis", "max_issues_repo_path": "src/results.tex", "max_line_length": 358, "max_stars_count": 3, "max_stars_repo_head_hexsha": "7ab7210359495fe7ea1e610be237560ec35f63a6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "matthewfeickert/feickert-thesis", "max_stars_repo_path": "src/results.tex", "max_stars_repo_stars_event_max_datetime": "2019-10-18T20:40:23.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-20T04:40:30.000Z", "num_tokens": 2948, "size": 11099 }
\section{Selection of the Optimal Architectural Solution}

The selection of the optimal architectural solution and of the libraries used to create the framework is not an easy task. The constraints imposed on the PaWS framework (see Section \ref{design_contraints}) require the design and implementation of a WEB application. This can be done with various programming languages and libraries.

\subsubsection{Language Selection}

One group of languages used in larger development projects consists of Java, C++ and C\#. Using them requires compiling the program and producing binary files, and the programming process requires more work, which makes development less efficient. What is more, C\# is tied to the Microsoft Windows operating system, which disqualifies its use (see the Design Constraints Section \ref{design_contraints}).

The second group of possible languages consists of scripting languages like Perl, Python, Ruby and PHP. Perl is by now a dated language, but all of them are popular languages for WEB application development. They are constantly developed, stay upward-compatible, and have a lively community of developers, rich documentation, libraries and support. That explains their popularity.

\subsubsection{WEB Support}

There are three approaches to creating a WEB application using scripting languages.

The first of them is based on CGI\cite{cgi}, the Common Gateway Interface. This technology enables communication between the WEB server software and other programs located on the server. The main disadvantage of this approach is that CGI usually requires the creation of a new process for each request. This is not scalable: it is not possible to share the same context and global variables between requests, and it can overload the server very quickly. What is more, CGI does not provide session mechanisms, and Ajax support is not built in.

The second possibility is to use template engines like Cheetah \cite{cheetah} or Jinja2 \cite{jinja2}. The problem is that this solution is too lightweight: it supports only code generation and does not provide other useful mechanisms such as routing, session variable handling, etc.

The third approach, and the most popular one, is based on the Model-View-Controller design pattern (described in Section \ref{mvc}). This paradigm supports reusability and separation of the layers of the application, is lightweight but easy to extend, and takes care of persistent storage if it is needed. There are many frameworks using the MVC pattern, such as Pylons \cite{pylons}, Django \cite{django} and Turbogears \cite{turbogears} in Python, Ruby on Rails \cite{rubyonrails} in Ruby, and Symfony \cite{symfony}, CakePHP \cite{cakephp} and Yii \cite{yii} in PHP.

\subsubsection{WEB Frameworks}

PHP frameworks are more heavyweight and harder to learn than Python and Ruby ones. The most popular PHP web framework, Symfony, also has compatibility problems between its versions, which makes it difficult to rely on its support. Ruby on Rails is a very good framework, but it has less support than the Python frameworks, because Ruby is a younger language than Python. Ruby frameworks also have performance problems.

The use of Python tools has other advantages as well. The main one is the availability of a Python binding to the PIPS framework, PyPS, which can be used to perform PIPS operations dynamically instead of creating Tpips scripts (see Section \ref{tpips_interface}) on the fly. Selecting Python makes it easy to use PyPS, by importing it as a Python module, while not excluding the use of Tpips.
Three Python WEB frameworks were taken into consideration: Pylons, Django and Turbogears. The last one was rejected because of stability problems\footnote{TurboGears is based on the unstable CherryPy \cite{cherrypy} server.}. Although Django is more popular than Pylons, the latter was chosen for its greater flexibility. Pylons can be extended with any component the user needs, while Django has a set of its own preferred components. A good example of this problem is the use of ORM (Object-Relational Mapping\cite{orm}): Django does not allow the use of all the available toolkits, such as the most popular one, SQLAlchemy\cite{sqlalchemy}.

Choosing the Pylons tool and its approach to the architecture and structure of the application paid off very quickly: after less than seven days of work, the first prototype of the application was ready and able to communicate with PIPS.

%% Very significant fact is that PaWS is not the first CRI project based on Pylons and I could count on strong support from more experienced developers.
{ "alphanum_fraction": 0.8082131935, "avg_line_length": 51.4382022472, "ext": "tex", "hexsha": "04a6d7ae9f339481e0551a1d821561a56fe89a2d", "lang": "TeX", "max_forks_count": 12, "max_forks_repo_forks_event_max_datetime": "2022-02-18T02:01:51.000Z", "max_forks_repo_forks_event_min_datetime": "2015-03-26T08:05:38.000Z", "max_forks_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "DVSR1966/par4all", "max_forks_repo_path": "packages/PIPS/pips/src/Documentation/paws/reportCh2/optimal_solution.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_issues_repo_issues_event_max_datetime": "2019-03-11T16:01:39.000Z", "max_issues_repo_issues_event_min_datetime": "2017-05-29T09:29:00.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "DVSR1966/par4all", "max_issues_repo_path": "packages/PIPS/pips/src/Documentation/paws/reportCh2/optimal_solution.tex", "max_line_length": 152, "max_stars_count": 51, "max_stars_repo_head_hexsha": "86b33ca9da736e832b568c5637a2381f360f1996", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "DVSR1966/par4all", "max_stars_repo_path": "packages/PIPS/pips/src/Documentation/paws/reportCh2/optimal_solution.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-18T02:01:50.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-31T01:51:39.000Z", "num_tokens": 1007, "size": 4578 }
\section{Skills}

\denseouterlist{
  \entry{
    \textbf{Personal:} excellent communication and cooperation skills, effective decision-making under pressure, organizational skills, team leadership, strong work ethic
  }
  \entry{
    \textbf{Computers:} Windows, Microsoft Office, Microsoft Excel, Mathematica, AutoCAD, QBasic
  }
  \entry{
    \textbf{Engineering:} electrical design, electronic design, digital design, electrical machines
  }
}
{ "alphanum_fraction": 0.6783505155, "avg_line_length": 16.724137931, "ext": "tex", "hexsha": "dda5d2f689491e71060b0241e813697bfd7c3257", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "803aaa9aa2cbbf8b5eb80bc56b0e71673ed4165a", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "nkampougeris/Simple-CV", "max_forks_repo_path": "sections/skills.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "803aaa9aa2cbbf8b5eb80bc56b0e71673ed4165a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "nkampougeris/Simple-CV", "max_issues_repo_path": "sections/skills.tex", "max_line_length": 51, "max_stars_count": null, "max_stars_repo_head_hexsha": "803aaa9aa2cbbf8b5eb80bc56b0e71673ed4165a", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "nkampougeris/Simple-CV", "max_stars_repo_path": "sections/skills.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 115, "size": 485 }
% Required packages %\usepackage[inline]{enumitem} % for enumerate* (horizontal lists) \usepackage{listings} % \usepackage{proof} \usepackage{mathpartir} %\usepackage{amsthm} %\usepackage{thmtools, \usepackage{thm-restate} %\usepackage{xcolor} \definecolor{Maroon}{rgb}{0.5, 0.0, 0.0} \definecolor{MediumVioletRed}{rgb}{0.78, 0.08, 0.52} \definecolor{ltblue}{rgb}{0.23, 0.21, 0.98} % floating figures \usepackage{wrapfig} % symbols \usepackage{amsmath} % \usepackage{dsfont} % \usepackage{stmaryrd} % \usepackage{amssymb} % diagrams \usepackage{tikz} \usetikzlibrary{cd, calc, positioning, decorations.markings, decorations.pathmorphing, shapes, decorations.pathreplacing} % \usepackage[normalem]{ulem} % for strikethrough with \sout{} \usepackage{cancel} % useful packages \usepackage{xspace} \usepackage{fancyvrb} % \VerbatimFootnotes % to enable the use of verbatim in footnotes, it may conflict with some packages % syntax highlighting % \usepackage[skins,listings]{tcolorbox} \definecolor{dkblue}{rgb}{0,0.1,0.5} \definecolor{lightblue}{rgb}{0,0.5,0.5} \definecolor{dkgreen}{rgb}{0,0.4,0} \definecolor{dk2green}{rgb}{0.4,0,0} \definecolor{dkviolet}{rgb}{0.6,0,0.8} \definecolor{mantra}{rgb}{0.2,0.6,0.2} \definecolor{gotcha}{rgb}{0.8,0.2,0} \definecolor{ocre}{RGB}{243,102,25} %borrowed to Orange Book \definecolor{dkolive}{RGB}{85, 107, 47} \definecolor{pine}{RGB}{1, 121, 111} \definecolor{DarkSlateBlue}{RGB}{72,61,139} \definecolor{dkred}{RGB}{139, 0, 0} \definecolor{coffee}{RGB}{111, 78, 55} %%%% package imports \usepackage[british]{babel} \usepackage[nomargin,inline,draft]{fixme} \fxsetup{theme=color,mode=multiuser} \FXRegisterAuthor{FF}{aFF}{FF} % Francisco \FXRegisterAuthor{NY}{aNY}{NY} % Nobuko Yoshida \FXRegisterAuthor{DC}{aDC}{DC} % David \FXRegisterAuthor{LG}{aLG}{LG} % Lorenzo \newcommand{\FF}[1]{\fxnote[author=FF]{#1}} \newcommand{\NY}[1]{\fxnote[author=NY]{#1}} \newcommand{\DC}[1]{\fxnote[author=DC]{#1}} \newcommand{\LG}[1]{\fxnote[author=LG]{#1}} %%% Some macros % Miscelaneous typesetting \newcommand{\code}[1]{\ensuremath{\texttt{#1}}} % typeset with mono spaced font \newcommand{\file}[1]{{\withcolor{teal}{\code{#1}}}} % typeset the name of a file \newcommand{\rulename}[1]{\DefTirName{#1}\xspace} \newcommand{\fnm}[1]{\ensuremath{\text{#1}}} % the name of a function \newcommand{\prednm}[1]{{\code{#1}}} % predicate name \newcommand{\dbj}{de Bruijn\xspace} \newcommand{\ssreflect}{{\code{Ssreflect}\xspace}} \newcommand{\emtst}{\textsc{EMTST}\xspace} \newcommand{\SEP}{\mathbin{\mathbf{|\!\!|}}} \newcommand{\theDiagram}{\text{the diagram}} %% Color management % \def\NOCOLOR{} % define to remove color from the syntax \ifdefined\NOCOLOR \newcommand{\withcolor}[2]{#2} % no color selsected \else % sets and restores the color \newcommand{\withcolor}[2]{\colorlet{currbkp}{.}\color{#1}{#2}\color{currbkp}} \fi % to cancel terms in red instead of black \ifdefined\NOCOLOR \else \renewcommand{\CancelColor}{\color{red}} \fi % Macros for BNF grammars \newcommand{\bnfas}{\mathrel{::=}} \newcommand{\bnfalt}{\mathrel{\mid}} % Optional color definitions % no color % \newcommand{\colorch}{black} % colour for channel vars % \newcommand{\colorex}{black} % colour for expresion vars % \newcommand{\colorse}{black} % colour for session vars % \newcommand{\colorlbl}{black} % colour for labels % \newcommand{\colorproc}{black} % colour for processes % \newcommand{\colorexp}{black} % colour for expressions % \newcommand{\colorte}{black} % colour for the types of expressions % \newcommand{\colorlp}{black} % colour for local 
types % \newcommand{\colorgt}{black} % colour for global types % some colors \newcommand{\defcl}[1]{\withcolor{black}{#1}} %default colour % for session types \newcommand{\colorch}{DarkGreen} % colour for channel vars \newcommand{\colorex}{MediumVioletRed} % colour for expresion vars % Tomato was too bright \newcommand{\colorse}{Teal} % colour for session vars \newcommand{\colorlbl}{Indigo} % colour for labels \newcommand{\colorproc}{Maroon} % colour for processes \newcommand{\colorexp}{MediumVioletRed} % colour for expressions \newcommand{\colorte}{black}%{purple} %{DarkOrchid} is undefined % colour for the types of expressions \newcommand{\colorlp}{blue} %{NavyBlue} is undefined % colour for local types \newcommand{\colorgt}{violet} % colour for global types \newcommand{\colorcog}{pine} % colour for coinductive global types \newcommand{\colorcol}{dkolive} % colour for coinductive local types \newcommand{\colorpre}{coffee} %colour for prefixes % for the lambda calculus \newcommand{\colorlctp}{NavyBlue} % colour for the types of terms \newcommand{\colorlctm}{FireBrick} %for terms \newcommand{\colorlcvar}{DarkOrchid} % purple for bound variables \newcommand{\colorrules}{DarkSlateBlue} % color for rule names %% Macros for kinds of vars % these are to control the rendering of the different kinds of variables \newcommand{\vch}[1]{\withcolor{\colorch}{#1}} % variables for channels \newcommand{\vex}[1]{\withcolor{\colorex}{#1}} % variables for expressions \newcommand{\vse}[1]{\withcolor{\colorse}{#1}} % variables for sessions %% Macros for labels \newcommand{\dlbl}[1]{\withcolor{\colorlbl}{#1}} % just to define possible use of colours \newcommand{\lblleft}{\dlbl{\code{l}}} \newcommand{\lblright}{\dlbl{\code{r}}} % \newcommand{\lblleft}{\dlbl{\code{left}}} % \newcommand{\lblright}{\dlbl{\code{right}}} % square brackets in rule names \let\DefTirNameOld\DefTirName \renewcommand{\DefTirName}[1]{\withcolor\colorrules{\DefTirNameOld{\footnotesize[{#1}]}}} %% Macros for processes \newcommand{\ofLt}{\vdash_{\text{\textsc{lt}}}} \newcommand{\dslName}{\textsf{Zooid}\xspace} \newcommand{\dproc}[1]{\withcolor{\colorproc}{#1}} % just to define possible use of colours \newcommand{\dcoq}[1]{\withcolor{\colorexp}{#1}} % just to define possible use of colours \newcommand{\proc}{\dproc{\code{proc}}} \newcommand{\Proc}{\code{Proc}} \newcommand{\pend}{\dproc{\code{finish}}} \newcommand{\pjump}[1]{\dproc{\code{jump} \; #1}} \newcommand{\ploop}[2]{\dproc{\code{loop} \; #1 \; \{ #2 \}}} \newcommand{\precv}[2]{\dproc{\code{recv} \; #1 \; \{ #2 \}}} \newcommand{\psend}[3]{\dproc{\code{send} \; #1 \; (#2).\; #3}} \newcommand{\pcont}[2]{\dproc{\lambda #1.\; #2}} \newcommand{\pread}[2]{\dproc{\code{read} \; {#1} \; (x.{#2})}} \newcommand{\pwrite}[3]{\dproc{\code{write} \; {#1} \; {#2} \; {#3}}} \newcommand{\pinteract}[3]{\dproc{\code{interact} \; {#1} \; {#2} \; (x.{#3})}} \newcommand{\pacti}{\ensuremath{\withcolor{blue}{\code{act}_i}}\xspace} \newcommand{\pactr}{\ensuremath{\withcolor{blue}{\code{act}_r}}\xspace} \newcommand{\pactw}{\ensuremath{\withcolor{blue}{\code{act}_w}}\xspace} \newcommand{\zooid}{\dproc{\code{Z}}} \newcommand{\zend}{\dproc{\code{finish}}} \newcommand{\zjump}[1]{\dproc{\code{jump}} \; #1} \newcommand{\zloop}[2]{\dproc{\code{loop} \; #1 \; ( #2 )}} \newcommand{\zif}[1]{\dproc{\code{if} \; #1 }} \newcommand{\zthen}[1]{\dproc{\code{then} \; #1 }} \newcommand{\zelse}[1]{\dproc{\code{else} \; #1 }} \newcommand{\zalts}[1]{\dproc{\begin{array}[t]{@{}l@{}} [#1] \end{array}}} 
\newcommand{\zsend}[3]{\dproc{\code{send} \; #1 \; (#2, \; #3) \code{!} \;}} \newcommand{\zrecv}[3]{\dproc{\code{recv} \; #1 \; (#2, \; #3) \code{?} \;}} \newcommand{\zbranch}[2]{\dproc{\code{branch} \; #1 \; #2}} \newcommand{\zselect}[2]{\dproc{\code{select} \; #1 \; #2}} \newcommand{\zfrom}{\dproc{\code{read}}\ \ } \newcommand{\zfun}[1]{\code{fun}\ #1\ \Rightarrow\ } \newcommand{\zto}[2]{\dproc{\code{write} \; #1 \ \ #2}} \newcommand{\zinteract}[2]{\dproc{\code{interact} \; #1 \ \ #2}} \newcommand{\zbalt}[2]{\dproc{\; #1, \; #2 \code{?} \; }} \newcommand{\zcase}[3]{\dproc{\code{case} \; #1 \Rightarrow #2, \; #3 \code{!} \;}} \newcommand{\zskip}[2]{\dproc{\code{skip} \Rightarrow #1, \; #2 \code{!} \;}} \newcommand{\zdflt}[2]{\dproc{\code{otherwise} \Rightarrow #1, \; #2 \code{!} \;}} \newcommand{\pX}{\dproc{X}} \newcommand{\zooidb}{\dproc{\zooid^b}} \newcommand{\zooids}{\dproc{\zooid^s}} \newcommand{\eif}[1]{\dcoq{\code{if} \; #1}} \newcommand{\ethen}[1]{\dcoq{\code{then} \; #1}} \newcommand{\eelse}[1]{\dcoq{\code{else} \; #1}} \newcommand{\tfun}[2]{\dcoq{\code{fun} \; #1 \Rightarrow #2}} \newcommand{\expr}{\dcoq{e}} \newcommand{\vx}{\dcoq{x}} \newcommand{\vy}{\dcoq{y}} \newcommand{\subtrace}{\preceq} \newcommand{\zooidTheoremCoq}{\code{process\_traces\_are\_global\_types}} \newcommand{\Rpipe}{\code{pipeline}} \newcommand{\Rpingpong}{\code{ping\_pong}} \newcommand{\RpipeLT}{\code{pipeline$_{lt}$}} \newcommand{\AliceLT}{\code{alice$_{lt}$}} \newcommand{\BobLT}{\code{bob$_{lt}$}} \newcommand{\BobProc}{\code{bob}} \newcommand{\AliceProc}{\code{alice}} \newcommand{\coqProj}{\code{\textbackslash}\code{project}} \newcommand{\coqGet}{\code{\textbackslash}\code{get}} \newcommand{\coqkwstyle}{\ttfamily\color{dkviolet}} \newcommand{\coqDef}{\code{\coqkwstyle{Definition}}} \newcommand{\coqEval}{\code{\coqkwstyle{Eval}}} \newcommand{\coqOpaque}{\code{\coqkwstyle{Opaque}}} \newcommand{\zooidTy}[1]{\code{wt\_proc} \; #1} \newcommand{\azooid}{\code{typed\_proc}} \newcommand{\BobMP}{\code{bob}_{\code{mp}}} \newcommand{\runProc}{\code{extract\_proc}} \newcommand{\TwoBuyer}{\code{two\_buyer}} \newcommand{\ProcB}{\code{buyer}_{\code{B}}} \newcommand{\BuyerA}{\code{A}} \newcommand{\BuyerB}{\code{B}} \newcommand{\BuyerBLT}{\code{B}_{\code{lt}}} \newcommand{\Seller}{\code{S}} \newcommand{\ItemId}{\code{ItemId}} \newcommand{\Quote}{\code{Quote}} \newcommand{\Propose}{\code{Propose}} \newcommand{\Accept}{\code{Accept}} \newcommand{\Reject}{\code{Reject}} \newcommand{\Date}{\code{Date}} % \newcommand{\preq}[3]{\dproc{\code{request}\ \vse{#1}\,(\vch{#2}). \dproc{#3}}} % \newcommand{\pacc}[3]{\dproc{\code{accept}\ \vse{#1}\,(\vch{#2}). \dproc{#3}}} % \newcommand{\psend}[3]{\dproc{\vch{#1}\,![\dexp{#2}];\,\dproc{#3}}} % \newcommand{\precv}[3]{\dproc{\vch{#1}\,?(\vex{#2}). \dproc{#3}}} % \newcommand{\psel}[3]{\dproc{\vch{#1}\triangleleft\dlbl{#2};\dproc{#3}}} % \newcommand{\pbran}[3] % {\dproc{\vch{#1}\triangleright\{\lblleft:\dproc{#2}[\hspace{-0.27em}]\lblright:\dproc{#3}\}}} % \newcommand{\pthrow}[3]{\dproc{\code{throw}\ \vch{#1}\,[\vch{#2}];\dproc{#3}}} % \newcommand{\pcatch}[3]{\dproc{\code{catch}\ \vch{#1}\,(\vch{#2}). 
\dproc{#3}}} % \newcommand{\pif}[3] % {\dproc{\code{if}\ \dexp{#1}\ \code{then}\ \dproc{#2}\ \code{else}\ \dproc{#3}}} % \newcommand{\ppar}[2]{\dproc{#1}\mathrel{|}\dproc{#2}} % \newcommand{\pinact}{\dproc{\code{inact}}} % \newcommand{\phide}[3]{\dproc{\nu_{#1}\,({#2}).\dproc{#3}}} % \newcommand{\hideboth}{\{n,c\}} % \newcommand{\phidenm}[2]{\phide{n} {\vse{#1}} {#2}} % \newcommand{\phidech}[2]{\phide{c} {\vch{#1}} {#2}} % \newcommand{\pbang}[1]{\dproc{!{#1}}} %% Macros for expressions \newcommand{\dexp}[1]{\withcolor{\colorexp}{#1}} % just to define possible use of colours \newcommand{\etrue}{\dexp{\code{true}}} \newcommand{\efalse}{\dexp{\code{false}}} %% Macros for types %\newcommand{\dual}[1]{\bar{#1}} \newcommand{\dual}[1]{\overline{#1}} % expressions \newcommand{\dte}[1]{\withcolor{\colorte}{#1}} % just to define possible use of colours \newcommand{\tbool}{\dte{\code{bool}}} \newcommand{\tendp}[1]{\dte{\langle\dtp{#1},\dtp{\dual{#1}}\rangle}} \newcommand{\tnat}{\dte{\code{nat}}} \newcommand{\tunit}{\dte{\code{unit}}} \newcommand{\tint}{\dte{\code{int}}} \newcommand{\tseq}{\dte{\code{seq}}} \newcommand{\tS}{\dte{\code{S}}} \newcommand{\tplus}[2]{\dte{#1\code{+}#2}} \newcommand{\tpair}[2]{\dte{#1\code{*}#2}} %auxiliary \newcommand{\role}{\code{role}} \newcommand{\rel}[2]{\code{rel}\ \ {#1}\ \ {#2}} \newcommand{\lty}{\code{l\_ty}} \newcommand{\gty}{\code{g\_ty}} \newcommand{\rlty}{\code{rl\_ty}} \newcommand{\rgty}{\code{rg\_ty}} \newcommand{\igty}{\code{ig\_ty}} \newcommand{\colty}{\code{l\_ty}^{\code c}} \newcommand{\cogty}{\code{g\_ty}^{\code c}} \newcommand{\mty}{\code{mty}} \newcommand{\lbl}{\code{label}} \newcommand{\seq}{\code{seq}} \newcommand{\fmap}{\code{fmap}} \newcommand{\var}{\code{var}} \newcommand{\guarded}{\code{guarded}} \newcommand{\gfv}{\code{fv}} \newcommand{\lfv}{\code{fv}} \newcommand{\closed}{\code{closed}} \newcommand{\env}{\code{renv}} \renewcommand{\subject}[1]{\ensuremath{\code{subj}\ {#1}}} \newcommand{\Dfrac}[2]{% \ooalign{% $\genfrac{}{}{3.0pt}0{#1}{#2}$\cr%\\%\cr%\vspace*{12.0pt}% $\color{white}\genfrac{}{}{2.2pt}0{\phantom{#1}}{\phantom{#2}}$}% }%coinductive defs \newcommand{\newDfrac}[2]{% \ooalign{% $\dfrac{#1}{\phantom{#2}}$\cr%\\%\cr%\vspace*{12.0pt}% \raisebox{-2.0pt}{$\dfrac{\phantom{#1}}{#2}$}}% } %participants \newcommand{\p}{{\sf p}} \newcommand{\q}{{\sf q}} \newcommand{\pr}{{\sf r}} \newcommand{\parti}{\code{prts}} \newcommand{\partof}[2]{\code{part\_of}\ {#1} \ {#2}} \newcommand{\pof}{\code{part\_of}} \newcommand{\Alice}{{{\sf Alice}}} \newcommand{\Bob}{{{\sf Bob}}} \newcommand{\Carol}{{{\sf Carol}}} % local types \newcommand{\dlt}[1]{\withcolor{\colorlp}{#1}} % just to define possible use of colours %\newcommand{\tsende}[2]{\dlt{![\dte{#1}];{#2}}} %\newcommand{\tsendp}[2]{\dlt{![{#1}];{#2}}} %\newcommand{\trecve}[2]{\dlt{?[\dte{#1}];{#2}}} %\newcommand{\trecvp}[2]{\dlt{?[{#1}];{#2}}} %\newcommand{\toffer}[2]{\dlt{\&\{\lblleft:{#1}, \lblright:{#2}\}}} %\newcommand{\ttake}[2]{\dlt{\oplus\{\lblleft:{#1}, \lblright:{#2}\}}} %\newcommand{\tend}{\dlt{\code{end}}} %\newcommand{\tbot}{\dlt{\bot}} \newcommand{\lT}{\dlt{ \ensuremath{{\sf L}}}} \newcommand{\lsend}[4]{\dlt{![{#1}];\{#2_i({#3}_i). #4_i \}_{i \in I}}} \newcommand{\lsendni}[4]{\dlt{![{#1}];\{#2_i({#3}_i). #4 \}_{i \in I}}} \newcommand{\lrecv}[4]{\dlt{?[{#1}];\{#2_i({#3}_i). #4_i \}_{i \in I}}} \newcommand{\lrecvni}[4]{\dlt{?[{#1}];\{#2_i({#3}_i). #4 \}_{i \in I}}} \newcommand{\lrcv}[1]{\dlt{?[{#1}];}} \newcommand{\lsnd}[1]{\dlt{![{#1}];}} \newcommand{\lrec}[2]{\dlt{\mu #1 . 
#2}} \newcommand{\lX}{\dlt{X}} \newcommand{\lend}{\dlt{\mathtt{end}}} \newcommand{\lunroll}[2]{#1\;\Re\;#2} % coinductive local types \newcommand{\dcol}[1]{\withcolor{\colorcol}{#1}} \newcommand{\colT}{\dcol{ \ensuremath{{\sf L^c}}}} \newcommand{\colrecv}[4]{\dcol{?^{\sf c}[{#1}];\{#2_i({#3}_i). #4_i \}_{i \in I}}} \newcommand{\colsend}[4]{\dcol{!^{\sf c}[{#1}];\{#2_i({#3}_i). #4_i \}_{i \in I}}} \newcommand{\colrcv}[1]{\dcol{?^{\sf c}[{#1}];}} \newcommand{\colsnd}[1]{\dcol{!^{\sf c}[{#1}];}} \newcommand{\colend}{\dcol{\mathtt{end}^{\sf c}}} % global types \newcommand{\dgt}[1]{\withcolor{\colorgt}{#1}} % just to define possible use of colours \ifdefined\G \renewcommand{\G}{\dgt{ \ensuremath{{\sf G}}}} \else \newcommand{\G}{\dgt{ \ensuremath{{\sf G}}}} \fi \newcommand{\gmu}{\dgt{\mu}} \newcommand{\msg}[2]{\dgt{ \ensuremath{#1\to#2:}}} \newcommand{\msgi}[5]{\dgt{ \ensuremath{#1\to#2:\{#3_i({#4}_i). #5_i \}_{i \in I}}}} \newcommand{\msgj}[5]{\dgt{ \ensuremath{#1\to#2:\{#3_j({#4}_j). #5_j \}_{j \in J}}}} \newcommand{\msgk}[5]{\dgt{ \ensuremath{#1\to#2:\{#3_k({#4}_k). #5_k \}_{k \in K}}}} \newcommand{\gX}{\dgt{X}} \newcommand{\grec}[2]{\dgt{\mu #1 . #2}} %\newcommand{\gsende}[2]{\dgt{![\dte{#1}];{#2}}} %\newcommand{\gsendp}[2]{\dgt{![{#1}];{#2}}} %\newcommand{\grecve}[2]{\dgt{?[\dte{#1}];{#2}}} %\newcommand{\grecvp}[2]{\dgt{?[{#1}];{#2}}} %\newcommand{\goffer}[2]{\dgt{\&\{\lblleft:{#1}, \lblright:{#2}\}}} %\newcommand{\gtake}[2]{\dgt{\oplus\{\lblleft:{#1}, \lblright:{#2}\}}} \newcommand{\gend}{\dgt{\mathtt{end}}} %\newcommand{\gbot}{\dgt{\bot}} %%Macros for coinductive global types \newcommand{\dcog}[1]{\withcolor{\colorcog}{#1}} %\newcounter{sarrow} %\newcommand\xrsa[1]{% %\stepcounter{sarrow}% %\begin{tikzpicture}[decoration=snake] %\node (\thesarrow) {\strut#1}; %\draw[->,decorate] (\thesarrow.south west) -- (\thesarrow.south east); %\end{tikzpicture}% %} \newcounter{sarrow} \newcommand\xrsa[1]{% \stepcounter{sarrow}% \mathrel{\begin{tikzpicture}[baseline= {( $ (current bounding box.south) + (0,0%-0.5ex ) $ )}] \node[inner sep=.5ex] (\thesarrow) {$\scriptstyle #1$}; \path[draw,<-,decorate, decoration={zigzag,amplitude=0.7pt,segment length=1.2mm,pre=lineto,pre length=4pt}] (\thesarrow.south east) -- (\thesarrow.south west); \end{tikzpicture}}% } \newcommand{\coG}{\dcog{ \ensuremath{{\sf G}^{\sf c}}}} \newcommand{\cogend}{\dcog{\mathtt{end}^{\sf c}}} \newcommand{\comsgni}[5]{\dcog{ \ensuremath{#1\to#2:\{#3_i({#4}_i). #5_i \}_{i \in I}}}} \newcommand{\comsgnj}[5]{\dcog{ \ensuremath{#1\to#2:\{#3_j({#4}_j). #5_j \}_{j \in J}}}} \newcommand{\comsgnk}[5]{\dcog{ \ensuremath{#1\to#2:\{#3_k({#4}_k). #5_k \}_{k \in K}}}} \newcommand{\comsgsi}[6]{\dcog{ \ensuremath{#1\xrsa{#3} #2:\{#4_i({#5}_i). #6_i \}_{i \in I}}}} \newcommand{\comsgsj}[5]{\dcog{ \ensuremath{#1\xrsa #2:\{#3_j({#4}_j). #5_j \}_{j \in J}}}} \newcommand{\comsgsk}[5]{\dcog{ \ensuremath{#1\xrsa #2:\{#3_k({#4}_k). #5_k \}_{k \in K}}}} \newcommand{\comsgn}[2]{\dcog{ \ensuremath{#1\to#2:}}} \newcommand{\comsgs}[3]{\dcog{ \ensuremath{#1\xrsa{#3} #2:}}} \newcommand{\gunroll}[2]{#1\;\Re\;#2} \newcommand{\step}{\xrightarrow{\text{step}}} \newcommand{\stepa}[1]{\xrightarrow{#1}} %% prefixes \newcommand{\preG}{\dpre{ \ensuremath{{\sf G}^{\sf p}}}} \newcommand{\dpre}[1]{\withcolor{\colorpre}{#1}} \newcommand{\pgend}{\dpre{\mathtt{inj}^{\sf p}}} \newcommand{\pmsgni}[5]{\dpre{ \ensuremath{#1\to#2:\{#3_i({#4}_i). #5_i \}_{i \in I}}}} \newcommand{\pmsgnj}[5]{\dpre{ \ensuremath{#1\to#2:\{#3_j({#4}_j). 
#5_j \}_{j \in J}}}} \newcommand{\pmsgnk}[5]{\dpre{ \ensuremath{#1\to#2:\{#3_k({#4}_k). #5_k \}_{k \in K}}}} \newcommand{\pmsgsi}[6]{\dpre{ \ensuremath{#1\xrsa{#3} #2:\{#4_i({#5}_i). #6_i \}_{i \in I}}}} \newcommand{\pmsgsj}[5]{\dpre{ \ensuremath{#1\xrsa #2:\{#3_j({#4}_j). #5_j \}_{j \in J}}}} \newcommand{\pmsgsk}[5]{\dpre{ \ensuremath{#1\xrsa #2:\{#3_k({#4}_k). #5_k \}_{k \in K}}}} \newcommand{\pmsgn}[2]{\dpre{ \ensuremath{#1\to#2:}}} \newcommand{\pmsgs}[3]{\dpre{ \ensuremath{#1\xrsa{#3} #2:}}} % projections \newcommand{\proj}[2]{ #1 \upharpoonright_{#2}} \newcommand{\projt}[2]{{#1}{\upharpoonright}{#2}} \newcommand{\coproj}[3]{ #2\ {\upharpoonright^{\code c}} {#1}\ #3} \newcommand{\osproj}{\upharpoonright\!\upharpoonright} %%Macros for queue environments and queue projection \newcommand{\qenv}{\code{qenv}} %\newcommand{\enq}{\xrightarrow{\text{enqueue}}} %\newcommand{\deq}{\xrightarrow{\text{dequeue}}} \newcommand{\enq}{\code{enq}} \newcommand{\deq}{\code{deq}} \newcommand{\qproj}[2]{{#1}{\upharpoonright^{\code q}}{#2}} %% Traces \newcommand{\trend}{[]} \newcommand{\trnext}[2]{{#1}\#{#2}} \newcommand{\glts}[2]{\code{tr}^{\code{g}}\;{#1}\;{#2}} \newcommand{\llts}[2]{\code{tr}^{\code{l}}\;{#1}\;{#2}} \newcommand{\plts}[2]{\code{tr}^{\code{p}}\;{#1}\;{#2}} %% Macros for judgments and such \newcommand{\scon}{\equiv} \newcommand{\sconalpha}{\equiv_\alpha} \newcommand{\freenames}[1]{\fnm{fn}({#1})} \newcommand{\stepsto}{\longrightarrow} % process steps to \newcommand{\stepstomany}{\stepsto^*} % process steps to \newcommand{\evalsto}{\downarrow} % expression evaluates to \newcommand{\subst}[2]{[{#1}/{#2}]} % substitution \newcommand{\esubst}[2]{[\dexp{#1}/\dexp{#2}]} % substitution for expressions \newcommand{\oft}{\mathrel{:}} % is of type \newcommand{\oftj}{\mathrel{\triangleright}} % is of type in a process judgment \newcommand{\oftc}{\mathrel{:}} % is of type in a process judgment \newcommand{\ofte}[3]{\dexp{#1}\vdash\dexp{#2}\oftc\dlt{#3}} \newcommand{\oftp}[3]{\dexp{#1}\vdash\dproc{#2}\oftj\dproc{#3}} \newcommand{\completed}[1]{\dproc{#1}\ \text{completed}} % Macros for contexts and typings \newcommand{\sorting}[1]{\dexp{#1}} \newcommand{\typing}[1]{\dproc{#1}} \newcommand{\emptyctx}{\ensuremath{\cdot}} \newcommand{\typc}[3]{\dproc{#1},\vch{#2}\oft\dlt{#3}} % add to a typing \newcommand{\ctxc}[3]{\dexp{#1},\vex{#2}\oft\dte{#3}} % add to a context \newcommand{\typl}[3]{\dexp{\Gamma}(\vex{#2})=\dte{#3}} % lookup typ \newcommand{\ctxl}[3]{\dexp{#1}(\vex{#2})=\dte{#3}} % lookup context \newcommand{\join}[2]{\dproc{#1}\circ\dproc{#2}} \newcommand{\comp}[2]{\dproc{#1}\asymp\dproc{#2}} \newcommand{\dom}[1]{\ensuremath{\fnm{dom}({#1})}} \newcommand{\doms}[1]{\ensuremath{\fnm{dom}(\sorting{#1})}} % domain of a sorting \newcommand{\domt}[1]{\ensuremath{\fnm{dom}(\typing{#1})}} % domain of a typing \newcommand{\compatible}[2]{\ensuremath{\typing{#1}\mathrel{\asymp}\typing{#2}}} \newcommand{\compose}[2]{\ensuremath{\typing{#1}\mathrel{\circ}\typing{#2}}} \newcommand{\fv}[1]{\ensuremath{\fnm{fv}({#1})}} % free variables \newcommand{\fvp}[1]{\fv{\dproc{#1}}} % free variables in a proc % Macros for the lambda calculus \newcommand{\dlctm}[1]{\withcolor{\colorlctm}{#1}} % terms \newcommand{\dlctp}[1]{\withcolor{\colorlctp}{#1}} % types \newcommand{\dlcvar}[1]{\withcolor{\colorlcvar}{#1}} % vars \newcommand{\lclam}[1]{\ensuremath{\dlctm{\lambda.{#1}}}} \newcommand{\lcapp}[2]{\ensuremath{\dlctm{{#1}\,{#2}}}} \newcommand{\lcvar}[1]{\ensuremath{\dlcvar{#1}}} \newcommand{\lcbase}[1]{\withcolor{\colorlctp}{#1}} 
% base types \newcommand{\lcarr}[2]{\withcolor{\colorlctp}{{#1}\multimap{#2}}} % base types \newcommand{\lcctx}[1]{\dlctm{#1}} \newcommand{\lcctxadd}[3]{\lcctx{#1}\defcl,\,\lcvar{#2}\,\defcl\oft\,\dlctp{#3}} \newcommand{\lcnil}{\dlctm\cdot} %% Macros for LN things \newcommand{\atoms}{\ensuremath{\mathbb{A}}} \newcommand{\nat}{\ensuremath{\mathbb{N}}} \newcommand{\atomset}[1]{\atoms_{\code{#1}}} \newcommand{\atomslc}{\atomset{LC}} \newcommand{\atomssc}{\atomset{SC}} \newcommand{\atomscn}{\atomset{CN}} \newcommand{\atomsev}{\atomset{EV}} \newcommand{\setsuffix}[1]{\ensuremath{_{\mathbb{#1}}}} \newcommand{\slc}{\setsuffix{LC}} \newcommand{\ssc}{\setsuffix{SC}} \newcommand{\scn}{\setsuffix{CN}} \newcommand{\sev}{\setsuffix{EV}} %% Underlines for kinds of vars \newcommand{\dashlcsty}{densely dotted} \newcommand{\dashscsty}{densely dashed} \newcommand{\dashcnsty}{dashdotted} \newcommand{\dashevsty}{densely dashdotdotted} \newcommand{\underl}[3]{% \tikz[baseline=(todotted.base)]{ \node[inner sep=1pt,outer sep=0pt,color={#2}] (todotted) {#3}; \draw[#1, line width=0.3mm] (todotted.south west) -- (todotted.south east); }% }% \ifdefined\NOCOLOR \newcommand{\chcolor}[1]{black} % always black \else % sets and restores the color \newcommand{\chcolor}[1]{#1} % returns the color \fi \newcommand{\dashlc}[1]{\underl{\dashlcsty}{\chcolor{\colorch}}{#1}} \newcommand{\dashsc}[1]{\underl{\dashscsty}{\chcolor{\colorse}}{#1}} \newcommand{\dashcn}[1]{\underl{\dashcnsty}{\chcolor{\colorch}}{#1}} \newcommand{\dashev}[1]{\underl{\dashevsty}{\chcolor{\colorex}}{#1}} % open/close process with expression \newcommand{\oppe}[2]{\dproc{#1}^{\dexp{#2}}} \newcommand{\clpe}[2]{^{\backslash\dexp{#2}}\dproc{#1}} % open/close process with channel \newcommand{\oppk}[2]{\dproc{#1}^{\vch{#2}}} \newcommand{\clpk}[2]{^{\backslash\vse{#2}}\dproc{#1}} % open/close process with session (name) \newcommand{\oppn}[2]{\dproc{#1}^{\vse{#2}}} \newcommand{\clpn}[2]{^{\backslash\vse{#2}}\dproc{#1}} % lambda calculus % open terms (with arbitrary indices) \newcommand{\opentm}[3]{\ensuremath{\defcl\{\dlcvar{#1}\defcl\to\dlctm{#2}\defcl\}\,\dlctm{#3}}} % eg: {0->u}t \newcommand{\oplc}[2]{\ensuremath{\dlctm{#1}^{\dlctm{#2}}}} % co finite quantifications \newcommand{\cofin}[2]{\forall {#1}\notin {#2},\ } \newcommand{\cofine}[2]{\cofin{\vex{#1}} {#2}} \newcommand{\cofink}[2]{\cofin{\vch{#1}} {#2}} \newcommand{\cofinn}[2]{\cofin{\vse{#1}} {#2}} % predicates \newcommand{\lc}[1]{\prednm{lc}\,{#1}} % locally closed \newcommand{\lcp}[1]{\lc{\dproc{#1}}} \newcommand{\lce}[1]{\lc{\dexp{#1}}} \newcommand{\body}[1]{\prednm{body}\, {#1}} % at most one free index (0) \newcommand{\bodyp}[1]{\body{\dproc{#1}}} \newcommand{\bodye}[1]{\body{\dexp{#1}}} \newcommand{\val}[1]{\fnm{value}\,{#1}} % channel endpoints \newcommand{\posch}[1]{\ensuremath{{#1}^+}} \newcommand{\negch}[1]{\ensuremath{{#1}^-}} % some names \newcommand{\nuscr}{$\nu$-Scr} % comments \newcommand\FUTURE[1]{{\color{blue}{#1}}} % style \newcommand{\myparagraph}[1]{\paragraph{\textbf{#1}}} %\renewcommand{\sec}{Section\xspace} \renewcommand{\sec}{\S\xspace} \newcommand{\Sec}{Section\xspace} % ocaml lstlisting format \newcommand{\ocamlcommentstyle}{\color{blue}} \lstdefinelanguage{ocaml}[Objective]{Caml}{ % Fix errors in the default definition of ocaml. deletekeywords={ref}, morekeywords={module} % General settings. flexiblecolumns=false, showstringspaces=false, framesep=5pt, commentstyle=\ocamlcommentstyle, % By default, we use a small font. 
basicstyle=\tt\footnotesize, numberstyle=\scriptsize, % LaTeX escape. escapeinside={$}{$}, } \newcommand{\appref}[1]{% \ifthenelse{\isundefined{\appendixIncluded}}{\ref{#1}}{\ref{APP-#1}}} %%% Local Variables: %%% mode: latex %%% TeX-master: "main" %%% End: %\iftoggle{fullversion}{appendix}{full version of the paper %(\url{https://arxiv.org/abs/2009.06541}).} %\iftoggle{colourkeyword} %{ %\newcommand{\keyword}[1]{{\color{keywordcolour}\code{#1}}} %} %{ %\newcommand{\keyword}[1]{\code{#1}} %}
{ "alphanum_fraction": 0.652153091, "avg_line_length": 36.1572871573, "ext": "tex", "hexsha": "9d86faec4db77be5c1f02867f5c0f4213da6b67e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "33d557ae6b1b7ec8f75f6e408a9b2d07e0e38e44", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "emtst/GentleAdventure", "max_forks_repo_path": "act3/slides/act3macros.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "33d557ae6b1b7ec8f75f6e408a9b2d07e0e38e44", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "emtst/GentleAdventure", "max_issues_repo_path": "act3/slides/act3macros.tex", "max_line_length": 121, "max_stars_count": null, "max_stars_repo_head_hexsha": "33d557ae6b1b7ec8f75f6e408a9b2d07e0e38e44", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "emtst/GentleAdventure", "max_stars_repo_path": "act3/slides/act3macros.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 10053, "size": 25057 }