\section{Conclusion} \label{s:conclusions}
{ "alphanum_fraction": 0.7727272727, "avg_line_length": 11, "ext": "tex", "hexsha": "289d4ada9a34aa094c795f4ea1874844adb65289", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "51125af74af935c26e0c6e7793faeaf095c0cf1d", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "Rugang/scaffold", "max_forks_repo_path": "SIGPLAN/A-conclusions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "51125af74af935c26e0c6e7793faeaf095c0cf1d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "Rugang/scaffold", "max_issues_repo_path": "SIGPLAN/A-conclusions.tex", "max_line_length": 21, "max_stars_count": null, "max_stars_repo_head_hexsha": "51125af74af935c26e0c6e7793faeaf095c0cf1d", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "Rugang/scaffold", "max_stars_repo_path": "SIGPLAN/A-conclusions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 12, "size": 44 }
% !TEX TS-program = pdflatex
% !TEX encoding = UTF-8 Unicode

% This is a simple template for a LaTeX document using the "article" class.
% See "book", "report", "letter" for other types of document.

\documentclass[10pt,twocolumn]{article}

\usepackage[utf8]{inputenc} % set input encoding (not needed with XeLaTeX)
\usepackage{geometry}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{caption}
\usepackage{xcolor,colortbl}
\usepackage{mathtools}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{subfig}
\usepackage{wrapfig}
\usepackage{tabularx}
\usepackage{pdflscape}
\graphicspath{ {images/} }

%%% Examples of Article customizations
% These packages are optional, depending whether you want the features they provide.
% See the LaTeX Companion or other references for full information.

%%% PAGE DIMENSIONS
% to change the page dimensions
\geometry{a4paper} % or letterpaper (US) or a5paper or....
\geometry{margin=0.55in} % for example, change the margins to 2 inches all round
% \geometry{landscape} % set up the page for landscape
% read geometry.pdf for detailed page layout information

% \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent

%%% PACKAGES
\usepackage{booktabs} % for much better looking tables
\usepackage{array} % for better arrays (eg matrices) in maths
\usepackage{paralist} % very flexible & customisable lists (eg. enumerate/itemize, etc.)
\usepackage{verbatim} % adds environment for commenting out blocks of text & for better verbatim
\usepackage{subfig} % make it possible to include more than one captioned figure/table in a single float
\usepackage{indentfirst}
% These packages are all incorporated in the memoir class to one degree or another...

%%% HEADERS & FOOTERS
\usepackage{fancyhdr} % This should be set AFTER setting up the page geometry
\pagestyle{fancy} % options: empty , plain , fancy
\renewcommand{\headrulewidth}{0pt} % customise the layout...
\lhead{}\chead{}\rhead{}
\lfoot{}\cfoot{\thepage}\rfoot{}

%%% SECTION TITLE APPEARANCE
\usepackage{sectsty}
\allsectionsfont{\sffamily\mdseries\upshape} % (See the fntguide.pdf for font help)
% (This matches ConTeXt defaults)

%%% ToC (table of contents) APPEARANCE
\usepackage[nottoc,notlof,notlot]{tocbibind} % Put the bibliography in the ToC
\usepackage[titles,subfigure]{tocloft} % Alter the style of the Table of Contents
\renewcommand{\cftsecfont}{\rmfamily\mdseries\upshape}
\renewcommand{\cftsecpagefont}{\rmfamily\mdseries\upshape} % No bold!

\setlength{\parindent}{0.5cm}

\newcommand{\subsubfloat}[2]{%
\begin{tabular}{@{}c@{}}#1\\#2\end{tabular}%
}

%%% END Article customizations

%%% The "real" document content comes below...

\title{On 2D and 3D Shapes Based On Primality Tests}
\author{Marcin Barylski}
\date{\small{Created: May 7, 2017 \\ The last update: June 9, 2019}}

\definecolor{Gray}{gray}{0.85}
\definecolor{LightCyan}{rgb}{0.88,1,1}

\newcolumntype{a}{>{\columncolor{Gray}}c}
\newcolumntype{b}{>{\columncolor{white}}c}

\begin{document}
\maketitle

\begin{abstract}
Based on changes of the primality property between two consecutive terms of an infinite sequence of natural numbers, it is possible to establish rules for drawing a figure. This work is devoted to studies of various sequences of natural numbers which produce interesting 2D and 3D outputs.
\end{abstract}

\section{Introduction}

Each natural number $n$ belongs to either the prime (P) or the non-prime (NP) set. If we arrange $N$ natural numbers $n_1$, $n_2$, ...
$n_N$ in an ordered sequence $S$ ($S(i)$ = $n_i$), then we can create a corresponding ordered sequence $P$ of $N$ boolean values ($P(i)$ is either $True$ or $False$, 1 $\leq$ i $\leq$ N) based on the primality of $n_i$: $P(i)$ is $True$ if $S(i)$ is prime, otherwise it is $False$. For instance, if $S = 1, 2, 3, 4, 5, 6$ then $P = False, True, True, False, True, False$. \par
Observing changes in $P$ provides information about how the primality property changes along $S$. If $S(i)$ is prime and $S(i+1)$ is not prime, then $P(i)$ = $True$ and $P(i+1)$ = $False$ (a change from $True$ to $False$). If $S(i)$ is not prime and $S(i+1)$ is prime, then $P(i)$ = $False$ and $P(i+1)$ = $True$ (a change from $False$ to $True$). We can establish yet another sequence $Q$ which indicates a change of the primality property between two consecutive terms of $S$: if $P(i)$ XOR $P(i+1)$ = $True$, then $Q(i+1)$ = $True$, otherwise $Q(i+1)$ = $False$. For instance, if $S = 1, 2, 3, 4, 5, 6$ then $P = False, True, True, False, True, False$ and $Q = N/A, True, False, True, True, True$. For simplification in calculations we can assume that $Q(1)$ is always $False$.\par

\section{Basic plotting rules}

The values recorded in $Q$ may be used to draw a 2D shape. \#Q = \#S, thus the shape has as many points as there are terms in $S$. We start plotting from point $W$ = $(0, 0)$ with vector $D$ = $(+1,0)$, which indicates the current direction for the next point to be drawn. If $Q(i+1)$ = $False$, then we continue plotting in the already established direction. If $Q(i+1)$ = $True$, then we change direction (for instance, clockwise: Table $\ref{tablevectorchange}$) before plotting the next point.

\begin{table}[h]
\centering
\caption{Clockwise change of two-dimensional $D$}
\label{tablevectorchange}
\begin{tabular}{|c|c|}
\hline
\rowcolor{LightCyan}
Initial $D$ (x,y) & New $D$ (x,y) \\ \hline
(0,1) & (-1,0) \\ \hline
(1,0) & (0,1) \\ \hline
(0,-1) & (1,0) \\ \hline
(-1,0) & (0,-1) \\ \hline
\end{tabular}
\end{table}

The plotting algorithm $A$ for 2D figures is depicted in Listing 1. Figure drawing starts from point (0,0) with direction (+1,0). In its main part, $A$ tests the primality of the elements of $S$, one by one, and stores the results in $P$. Then (based on the difference between two consecutive terms of $P$) $A$ decides on the next point location and draws a single point. The entire operation is repeated until the end of $S$ is reached.

\lstset{language=Python}
\lstset{breaklines=true}
\lstset{frame=shadowbox}
\lstset{caption={Plotting algorithm for 2D shape}}
\label{listingplottingbasics}
\begin{lstlisting}[linewidth=8.7cm]
# helper functions
def change_direction (D):
    # rotate D clockwise according to Table 1
    return rotate_vector_clockwise (D)

def plot_point (W):
    # plot(x, y, color) draws a single point
    # (provided by the plotting framework)
    plot (W[0], W[1], color)

# initial values
W = (0, 0)
D = (+1, 0)
P = [False]   # P[0] is a dummy starting value
# assuming that S contains N elements
# is_prime(n) returns True iff n is prime
i = 1
for n in S:
    P.append(is_prime(n))
    # change direction if primality changed,
    # i.e. Q(i) is True
    if P[i-1] != P[i]:
        D = change_direction (D)
    W = (W[0] + D[0], W[1] + D[1])
    plot_point (W)
    i += 1
\end{lstlisting}
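The helper \texttt{rotate\_vector\_clockwise} used in Listing 1 is not spelled out in the text; the following sketch is our own illustrative transcription of Table $\ref{tablevectorchange}$, assuming that the direction vector $D$ is stored as a tuple.

\lstset{language=Python}
\lstset{breaklines=true}
\lstset{frame=shadowbox}
\lstset{caption={Direction change according to Table 1}}
\begin{lstlisting}[linewidth=8.7cm]
def rotate_vector_clockwise(D):
    # direct transcription of Table 1
    new_direction = {
        (0, 1):  (-1, 0),
        (1, 0):  (0, 1),
        (0, -1): (1, 0),
        (-1, 0): (0, -1),
    }
    return new_direction[D]
\end{lstlisting}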
In the case of 3D plotting, a third dimension is added, with the delta of $z$ changing according to the rules in Table $\ref{tablevectorchange3d}$.

\begin{table}[h]
\centering
\caption{Change of dimension $z$ in three-dimensional $D$}
\label{tablevectorchange3d}
\begin{tabular}{|c|c|}
\hline
\rowcolor{LightCyan}
Initial $z$ & New $z$ \\ \hline
0 & 1 \\ \hline
1 & -1 \\ \hline
-1 & 0 \\ \hline
\end{tabular}
\end{table}

\section{Advanced plotting rules and metrics}

The basic plotting rules may be supplemented with more advanced rules, such as point aging, more demanding rules for changing $D$ (including a variable length of $D$), and using more than one color to plot the points. \par
Point aging may be defined in such a way that each visible point $P_i$ is associated with a watchdog $W_i$ which is started when $P_i$ is created. The initial value of $W_i$ is set to $AGE_{max}$ ($AGE_{max}$ \textgreater 0) and is decreased by 1 with every iteration in which the current point $P_j$ is different from $P_i$. $W_i$ is reset to $AGE_{max}$ if $P_j$ = $P_i$. $P_i$ is removed (aged out) if $W_i$ = $AGE_{min}$ = 0. \par
If point aging is in use, the color of point $P_i$ may be selected based on the value of $W_i$ (the bigger the value of $W_i$, the hotter the color). The color may also be based on the total number of hits (the more hits, the hotter the color), which is ideal for the case without point aging. \par
Each shape can be accompanied by a set of metrics which can be used to compare shapes and quantify their properties over time. Table $\ref{tablemetrics}$ shows such sample metrics.

\begin{table}[h]
\centering
\caption{Metrics}
\label{tablemetrics}
\begin{tabular}{|l|l|}
\hline
\rowcolor{LightCyan}
Metric & Description\\ \hline
$M_1$: \% of primes & primes / all $\times$ 100\% \\ \hline
$M_2$: \% of non-primes & non-primes / all $\times$ 100\%\\ \hline
$M_3$: Max diff in p.X & max(p.X) - min(p.X) \\ \hline
$M_4$: Max diff in p.Y & max(p.Y) - min(p.Y) \\ \hline
$M_5$: Max diff in p.Z & max(p.Z) - min(p.Z) \\ \hline
$M_6$: 2D fill in \% & pts / ($M_3$ $\times$ $M_4$) $\times$ 100\% \\ \hline
$M_7$: 3D fill in \% & pts / ($M_3$ $\times$ $M_4$ $\times$ $M_5$) $\times$ 100\% \\ \hline
\end{tabular}
\end{table}

\section{Results of experiments}

During the experiments (which were aimed at finding interesting relations and visualisations involving primes and composite numbers) various sequences $S_i$ were evaluated (Table $\ref{tablesequences}$).

\begin{table}[h]
\centering
\caption{Sequences $S_i$ subjected to experiments}
\label{tablesequences}
\begin{tabular}{|c|c|}
\hline
\rowcolor{LightCyan}
i & Formula for $S_i(n)$\\ \hline
1 & 2$\times$n + 1 \\ \hline
2 & 6$\times$n + 1 \\ \hline
3 & 6$\times$n - 1 \\ \hline
4 & 6$\times$n + 1 for n even \\ & 6$\times$n - 1 for n odd \\ \hline
5 & n \\ \hline
6 & 30$\times$n + 1 \\ \hline
7 & 30$\times$n - 1 \\ \hline
8 & 30$\times$n + 1 for n even \\ & 30$\times$n - 1 for n odd \\ \hline
9 & sum of decimal digits of n \\ \hline
10 & 1 for n even \\ & 2 for n odd \\ \hline
11 & $\left \lfloor{10 \times sin(n)}\right \rfloor$ \\ \hline
12 & random (2, 3, 4, 5, 6, 7, 8, 9) \\ \hline
\end{tabular}
\end{table}

Figures 1 and 2 present 2D shapes for $S_i$ (Table $\ref{tablesequences}$), and Figures 3 and 4 present the corresponding 3D shapes. The experiments were run for the first $10^6$ terms of $S_i$.
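For reference, the formulas in Table $\ref{tablesequences}$ translate directly into code. The listing below is our own illustrative sketch (not part of the original generation framework~\cite{github}), showing generators for a few representative sequences; the remaining ones follow the same pattern.

\lstset{language=Python}
\lstset{breaklines=true}
\lstset{frame=shadowbox}
\lstset{caption={Example generators for selected sequences $S_i$}}
\begin{lstlisting}[linewidth=8.7cm]
import math
import random

def s1(n):   # S_1 = 2n + 1
    return 2*n + 1

def s4(n):   # S_4 = 6n+1 for even n, 6n-1 for odd n
    return 6*n + 1 if n % 2 == 0 else 6*n - 1

def s9(n):   # S_9 = sum of decimal digits of n
    return sum(int(d) for d in str(n))

def s11(n):  # S_11 = floor(10 * sin(n))
    return math.floor(10 * math.sin(n))

def s12(n):  # S_12 = random value from 2..9
    return random.choice((2, 3, 4, 5, 6, 7, 8, 9))

# the first 10^6 terms of S_1, as used in the experiments
S = [s1(n) for n in range(1, 10**6 + 1)]
\end{lstlisting}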
\onecolumn \begin{figure} \centering \subfloat{ \begin{minipage}{\columnwidth}\footnotesize \centering \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_1}}{$S_1$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_2}}{$S_2$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_3}}{$S_3$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_4}}{$S_4$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_5}}{$S_5$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_6}}{$S_6$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_7}}{$S_7$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_8}}{$S_8$} \end{minipage} } \caption{2D plots - sequences 1-8} \end{figure} \begin{figure} \centering \subfloat{ \begin{minipage}{\columnwidth}\footnotesize \centering \subsubfloat{\includegraphics[width=0.40\columnwidth]{f_shape_9}}{$S_9$} \qquad \subsubfloat{\includegraphics[width=0.40\columnwidth]{f_shape_10}}{$S_{10}$} \qquad \subsubfloat{\includegraphics[width=0.40\columnwidth]{f_shape_11}}{$S_{11}$} \qquad \subsubfloat{\includegraphics[width=0.40\columnwidth]{f_shape_12}}{$S_{12}$} \end{minipage} } \caption{2D plots - sequences 9-12} \end{figure} \begin{figure} \centering \subfloat{ \begin{minipage}{\columnwidth}\footnotesize \centering \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_13d}}{$S_{1}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_23d}}{$S_{2}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_33d}}{$S_{3}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_43d}}{$S_{4}$} \end{minipage} } \caption{3D plots - sequences 1-4} \end{figure} \begin{figure} \centering \subfloat{ \begin{minipage}{\columnwidth}\footnotesize \centering \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_53d}}{$S_{5}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_63d}}{$S_{6}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_73d}}{$S_{7}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_83d}}{$S_{8}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_93d}}{$S_{9}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_103d}}{$S_{10}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_113d}}{$S_{11}$} \qquad \subsubfloat{\includegraphics[width=0.45\columnwidth]{f_shape_123d}}{$S_{12}$} \end{minipage} } \caption{3D plots - sequences 5-12} \end{figure} \twocolumn TBD \section{Summary and future work} TBD \begin{thebibliography}{9} \bibitem{github} \emph{Prime shapes automated generation framework.} https://github.com/mbarylsk/prime-shapes/ \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.6997086783, "avg_line_length": 35.3495934959, "ext": "tex", "hexsha": "bf5358cf23112b305c96a454b3f41cd9ce5cd378", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5515b298019f8dde1402336e213677d487a5cd59", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "mbarylsk/research-papers", "max_forks_repo_path": "PrimeShapes/PrimeShapes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5515b298019f8dde1402336e213677d487a5cd59", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "mbarylsk/research-papers", "max_issues_repo_path": "PrimeShapes/PrimeShapes.tex", "max_line_length": 778, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5515b298019f8dde1402336e213677d487a5cd59", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "mbarylsk/research-papers", "max_stars_repo_path": "PrimeShapes/PrimeShapes.tex", "max_stars_repo_stars_event_max_datetime": "2020-03-22T16:46:49.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-22T16:46:49.000Z", "num_tokens": 4395, "size": 13044 }
\subsubsection{\stid{3.09} SLATE}\label{subsubsect:slate}

\paragraph{Overview}

The objective of the Software for Linear Algebra Targeting Exascale (SLATE) project is to provide fundamental dense linear algebra capabilities to the US Department of Energy (DOE) and to the high-performance computing (HPC) community at large. To this end, SLATE will provide basic dense matrix operations (e.g., matrix multiplication, rank-k update, triangular solve), linear systems solvers, least squares solvers, and singular value and eigenvalue solvers.

The ultimate objective of SLATE is to replace the venerable Scalable Linear Algebra PACKage (ScaLAPACK) library, which has become the industry standard for dense linear algebra operations in distributed memory environments. However, after two decades of operation, ScaLAPACK is past the end of its lifecycle and overdue for a replacement, as it can hardly be retrofitted to support hardware accelerators, which are an integral part of today's HPC hardware infrastructure.

Primarily, SLATE aims to extract the full performance potential and maximum scalability from modern, many-node HPC machines with large numbers of cores and multiple hardware accelerators per node. For typical dense linear algebra workloads, this means getting close to 90\% of the theoretical peak performance and scaling to the full size of the machine (i.e., thousands to tens of thousands of nodes). This is to be accomplished in a portable manner by relying on standards like MPI and OpenMP.

SLATE functionalities will first be delivered to the ECP applications that most urgently require SLATE capabilities (e.g., EXAALT, NWChem, QMCPACK, GAMESS, and CANDLE) and to other software libraries that rely on underlying dense linear algebra services (e.g., FBSS). SLATE will also fill the void left by ScaLAPACK's inability to utilize hardware accelerators, and it will ease the difficulties associated with ScaLAPACK's legacy matrix layout and Fortran API.

Figure~\ref{fig:slate-architecture} shows SLATE within the ECP software stack.

\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{projects/2.3.3-MathLibs/2.3.3.09-SLATE/SLATE-architecture.jpg}
\caption{\label{fig:slate-architecture} SLATE in the ECP software stack.}
\end{figure}

\paragraph{Key Challenges}

\begin{enumerate}
\item \textbf{Designing from the ground up:} The SLATE project's primary challenge stems from the need to design the package from the ground up, as no existing software package offered a viable path forward for efficient support of hardware accelerators in a distributed-memory environment.

\item \textbf{Aiming at a hard hardware target:} SLATE's acceleration capabilities are being developed in an unforgiving hardware environment, where the computing power of the GPUs exceeds the computing power of the CPUs, as well as the communication capabilities of the network, by orders of magnitude.

\item \textbf{Using cutting-edge software technologies:} Finally, SLATE is being designed using cutting-edge software technologies, including modern features of the C++ language, as well as fairly recent extensions of the OpenMP standard and the MPI standard.
\end{enumerate}

\paragraph{Solution Strategy}

\begin{enumerate}
\item \textbf{Deliberate design phase:} Because SLATE had to be built from the ground up, its architecture and design were the primary focus of the initial project milestones. First, we conducted a careful analysis of all the relevant technologies---existing and emerging~\cite{abdelfattah2017roadmap}.
% \footnote{\url{http://www.icl.utk.edu/publications/swan-001}}
Then we designed the initial architecture~\cite{kurzak2017designing}.
%\footnote{\url{http://www.icl.utk.edu/publications/swan-003}}
Going forward, the development process is based on alternating feature development with refactoring and redesign of the underlying infrastructure.

\item \textbf{Accelerator-centric focus:} Hardware accelerators (e.g., graphics processing units [GPUs]) are treated as first-class citizens in SLATE, and accelerated performance is the primary focus of the performance engineering efforts. Device performance relies on highly optimized routines in vendor libraries, mostly the batch matrix multiply routine (i.e., batch gemm). Care is also taken to hide accelerator communication in addition to hiding message-passing communication. To address the network deficiencies in the long term, the team is investigating cutting-edge algorithmic solutions like communication-avoiding algorithms~\cite{ballard2011minimizing} and randomization algorithms~\cite{mahoney2011randomized}.

\item \textbf{Community engagement:} The SLATE team maintains regular contact with the ECP SOLLVE project and the broader OpenMP community (having joined the OpenMP ARB). The team also engages vendors and has contacts at Intel, NVIDIA, and AMD. The SLATE team is co-located with the ECP OMPI-X project and has a direct line of communication with the MPI community.
\end{enumerate}

\paragraph{Recent Progress}

The most recent development in SLATE is the implementation of all level-3 PBLAS routines, covering all four precisions (single real (S), single complex (C), double real (D), and double complex (Z)) and all combinations of the input parameters \texttt{side}, \texttt{uplo}, and \texttt{trans}. SLATE's accelerated runs deliver up to a $20\times$ performance improvement over ScaLAPACK multi-core runs on the SummitDev machine at the Oak Ridge Leadership Computing Facility\footnote{\url{https://www.olcf.ornl.gov/kb_articles/summitdev-quickstart/}} (Figure~\ref{fig:slate-dgemm}). A more exhaustive performance analysis is available in ``SLATE Working Note 5: Parallel BLAS Performance Report''~\cite{kurzak2018parallel}.
%\footnote{\url{http://www.icl.utk.edu/publications/swan-005}}

\begin{figure}[htb]
\centering
\includegraphics[width=0.55\textwidth]{projects/2.3.3-MathLibs/2.3.3.09-SLATE/SLATE-dgemm.pdf}
\caption{\label{fig:slate-dgemm} Accelerated performance using SLATE compared to multi-core performance using ScaLAPACK.}
\end{figure}

\paragraph{Next Steps}

\begin{enumerate}
\item \textbf{Matrix norms:} We plan to implement matrix norms by the end of June 2018. This includes routines for computing the one norm, max norm, and infinity norm, as well as a specialized routine for computing a column-wise norm, which is required for mixed-precision linear solvers based on iterative refinement.

\item \textbf{Linear solvers:} We plan to implement linear solvers by the end of September 2018. This includes an LL$^T$ factorization, an LU factorization, an LDL$^T$ factorization, and the corresponding forward/backward substitution routines.

\item \textbf{Least squares solvers:} We will implement least squares solvers by the end of December 2018, including the QR and LQ factorizations and the routines for the generation and application of the Q matrices.
\end{enumerate}

All four standard ScaLAPACK precisions will be covered, and all routines will support hardware acceleration.
{ "alphanum_fraction": 0.7850804215, "avg_line_length": 45.9363057325, "ext": "tex", "hexsha": "563447d7630ac7685f8dd502bd0d5dd430bbea54", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_forks_repo_path": "projects/2.3.3-MathLibs/2.3.3.09-SLATE/2.3.3.09-SLATE.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_issues_repo_path": "projects/2.3.3-MathLibs/2.3.3.09-SLATE/2.3.3.09-SLATE.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "74d6fb18bae7ff1c32b78dd8cd7ae29e91218c33", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "tgamblin/ECP-ST-CAR-PUBLIC", "max_stars_repo_path": "projects/2.3.3-MathLibs/2.3.3.09-SLATE/2.3.3.09-SLATE.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1705, "size": 7212 }
\subsection{Euler's totient function}

This function counts the numbers up to \(n\) which are relatively prime to \(n\); for example, for \(10\) these are \(1\), \(3\), \(7\) and \(9\), so \(\phi (10)=4\).
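If the prime factorization of \(n\) is known, \(\phi (n)\) can also be computed directly via Euler's product formula over the distinct prime divisors \(p\) of \(n\):

\[\phi (n)=n\prod_{p\mid n}\left(1-\frac{1}{p}\right)\]

For example, \(\phi (10)=10\left(1-\frac{1}{2}\right)\left(1-\frac{1}{5}\right)=4\), matching the count above.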
{ "alphanum_fraction": 0.6214689266, "avg_line_length": 17.7, "ext": "tex", "hexsha": "297e3d1f6847782b51dbe4bed4caa6d65db9a288", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/logic/primes/01-03-totient.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/logic/primes/01-03-totient.tex", "max_line_length": 68, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/logic/primes/01-03-totient.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 57, "size": 177 }
\chapter{Character Creation} \index{Characters, Creating} Making Dungeon World characters is quick and easy. You should all create your first characters together at the beginning of your first session. Character creation is, just like play, a kind of conversation--everyone should be there for it. You may need to make another character during play, if yours gets killed for example. If so, no worries, the character creation process helps you make a new character that fits into the group in just a few minutes. All characters, even replacement characters, start at first level. Most everything you need to create a character you'll find on the character sheets. These steps will walk you through filling out a character sheet. \section*{1. Choose a Class} Look over the character classes and choose one that interests you. To start with everyone chooses a different class; there aren't two wizards. If two people want the same class, talk it over like adults and compromise. \begin{quote} \emph{I sit down with Paul and Shannon to play a game run by John. I've got some cool ideas for a wizard, so I mention that would be my first choice. No one else was thinking of playing one, so I take the wizard character sheet.} \end{quote} \section*{2. Choose a Race} Some classes have race options. Choose one. Your race gives you a special move. \begin{quote} \emph{I like the idea of being flexible--having more spells available is always good, right? I choose Human, since it'll allow me to pick a cleric spell and cast it like it was a wizard one. That'll leave Shannon's cleric free to keep the healing magic flowing.} \end{quote} \section*{3. Choose a Name} Choose your character's name from the list. \begin{quote} \emph{Avon sounds good.} \end{quote} \section*{4. Choose Look} Your look is your physical appearance. Choose one item from each list. \begin{quote} \emph{Haunted eyes sound good since every wizard has seen some things no mortal was meant to. No good wizard has time for hair styling so wild hair it is. My robes are strange, and I mention to everyone that I think maybe they came from Beyond as part of a summoning ritual. No time to eat with all that studying and research: thin body.} \end{quote} \section*{5. Choose Stats} Assign these scores to your stats: 16, 15, 13, 12, 9, 8. Start by looking over the basic moves and the starting moves for your class. Pick out the move that interests you the most: something you'll be doing a lot, or something that you excel at. Put a 16 in the stat for that move. Look over the list again and pick out the next most important move to your character, maybe something that supports your first choice. Put your 15 in the stat for that move. Repeat this process for your remaining scores: 13, 12, 9, 8. \index{Ability Scores!choosing} \begin{quote} \emph{It looks like I need Intelligence to cast spells, which are my thing, so my 16 goes there. The defy danger option for Dexterity looks like something I might be doing to dive out of the way of a spell, so that gets my 15. A 13 Wisdom will help me notice important details (and maybe keep my sanity, based on the defy danger move). Charisma might be useful in dealing with summoned creatures so I'll put my 12 there. Living is always nice, so I put my 9 in Constitution for some extra HP\@. Strength gets the 8.} \end{quote} \section*{6. Figure Out Modifiers} Next you need to figure out the modifiers for your stats. The modifiers are what you use when a move says +DEX or +CHA\@. 
If you're using the standard character sheets the modifiers are already listed with each score. \index{Ability Modifiers!determining} \begin{center} \begin{tabular}{|c|c|}\hline Score & Modifier \\ \hline 1--3 & -3\\ \hline 4--5 & -2\\ \hline 6--8 & -1\\ \hline 9--12 & 0\\ \hline 13--15 & +1\\ \hline 16--17 & +2\\ \hline 18 & +3\\ \hline \end{tabular} \end{center} \section*{7. Set Maximum HP} Your maximum HP is equal to your class's base HP+Constitution score. You start with your maximum HP\@. \index{HP!calculating} \begin{quote} \emph{Base 4 plus 9 con gives me a whopping 13 HP\@. } \end{quote} \section*{8. Choose Starting Moves} The front side of each character sheet lists the starting moves. Some classes, like the fighter, have choices to make as part of one of their moves. Make these choices now. The wizard will need to choose spells for their spellbook. Both the cleric and the wizard will need to choose which spells they have prepared to start with. \index{Moves!starting} \begin{quote} \emph{A Summoning spell is an easy choice, so I take Contact Spirits. Magic Missile will allow me to deal more damage than the pitiful d4 for the wizard class, so that's in too. I choose Alarm for my last spell, since I can think of some interesting uses for it.} \end{quote} \section*{9. Choose Alignment} Your alignment is a few words that describe your character's moral outlook. Each class may only start with certain alignments. Choose your alignment--in play, it'll give your character certain actions that can earn you additional XP \index{Alignment!choosing} \begin{quote} \emph{The Neutral option for wizards says I earn extra XP when I discover a magical mystery. Avon is all about discovering mystery--I'll go with Neutral.} \end{quote} \section*{10. Choose Gear} Each class has choices to make for starting gear. Keep your load in mind--it limits how much you can easily carry. Make sure to total up your armor and note it on your character sheet. \index{Equipment!starting} \begin{quote} \emph{I'm worried about my HP, so I take armor over books. A dagger sounds about right for rituals; I choose that over a staff. It's a toss-up between the healing potion and the antitoxin, but healing wins out. I also end up with some rations.} \end{quote} \section*{11. Introduce Your Character} Now that you know who your character is, it's time to introduce them to everyone else. Wait until everyone's finished choosing their name. Then go around the table; when it's your turn, share your look, class and anything else pertinent about your character. You can share your alignment now or keep it a secret if you prefer. This is also the time for the GM to ask questions. The GM's questions should help establish the relationships between characters (``What do you think about that?'') and draw the group into the adventure (``Does that mean you've met Grundloch before?''). The GM should listen to everything in the description and ask about anything that stands out. Establish where they're from, who they are, how they came together, or anything else that seems relevant or interesting. \begin{quote} \emph{``This is Avon, mighty wizard! He's a human with haunted eyes, wild hair, strange robes, and a thin body. Like I mentioned before his robes are strange because they're literally not of this world: they came to him as part of a summoning ritual.''} \end{quote} \section*{12. Choose Bonds} Once everyone has described their characters you can choose your bonds. You must fill in one bond but it's in your best interest to fill in more. 
For each blank fill in the name of one character. You can use the same character for more than one statement. \index{Bonds!stating} Take some time to discuss the bonds and let the GM ask questions about them as they come up. You'll want to go back and forth and make sure everyone is happy and comfortable with how the bonds have come out. Leave space to discover what each one might mean in play, too: don't pre-determine everything at the start. Once everyone's filled in their bonds read them out to the group. When a move has you roll+Bond you'll count the number of bonds you have with the character in question and add that to the roll. \begin{quote} \emph{With everyone introduced I choose which character to list in each bond, I have Paul's fighter Gregor and Shannon's cleric Brinton to choose from. The bond about prophecy sounds fun, so I choose Gregor for it and end up with ``Gregor will play an important role in the events to come. I have foreseen it!'' It seems like the wizard who contacts Things From Beyond and the cleric might not see eye to eye, so I add Shannon's character and get ``Brinton is woefully misinformed about the world; I will teach them all that I can.'' I leave my last bond blank; I'll deal with it later. Once everyone is done I read my bonds aloud and we all discuss what this means about why we're together and where we're going.} \end{quote} \section*{13. Get Ready to Play} Take a little break: grab a drink, stretch your legs and let the GM brainstorm for a little bit about what they've learned about your characters. Once you're all ready, grab your dice and your sheet and get ready to take on the dungeon. Once you're ready the GM will get things started as described in the First Session chapter.
{ "alphanum_fraction": 0.7691261476, "avg_line_length": 74.7711864407, "ext": "tex", "hexsha": "9e6155e4a7acf53f7fd5b9731fd1a8c89a332785", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-01-27T03:56:49.000Z", "max_forks_repo_forks_event_min_datetime": "2016-09-01T13:27:46.000Z", "max_forks_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "Hegz/DW-Latex", "max_forks_repo_path": "tex/Character_Creation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "Hegz/DW-Latex", "max_issues_repo_path": "tex/Character_Creation.tex", "max_line_length": 714, "max_stars_count": 6, "max_stars_repo_head_hexsha": "49a230f82fdeab7faa7c736ef81ef13266ac399d", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "Hegz/DW-Latex", "max_stars_repo_path": "tex/Character_Creation.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-15T21:25:14.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-27T22:54:43.000Z", "num_tokens": 2135, "size": 8823 }
\chapter{Publications} \label{ch:pubs}
{ "alphanum_fraction": 0.7692307692, "avg_line_length": 13, "ext": "tex", "hexsha": "4ce0602cac8186efceaabf7bebd315bcda6c00c1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0c0fc1deb9ca22b60816526514d3430c1b838e1c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "xlauko/thesis-proposal", "max_forks_repo_path": "chapters/publications.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0c0fc1deb9ca22b60816526514d3430c1b838e1c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "xlauko/thesis-proposal", "max_issues_repo_path": "chapters/publications.tex", "max_line_length": 22, "max_stars_count": 1, "max_stars_repo_head_hexsha": "0c0fc1deb9ca22b60816526514d3430c1b838e1c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xlauko/thesis-proposal", "max_stars_repo_path": "chapters/publications.tex", "max_stars_repo_stars_event_max_datetime": "2020-04-09T18:22:20.000Z", "max_stars_repo_stars_event_min_datetime": "2020-04-09T18:22:20.000Z", "num_tokens": 14, "size": 39 }
% !TEX root = ../main.tex

The range-based approach to set reconciliation has been employed as an implementation detail for a specific use case in~\cite{chen1999prototype} section 3.6 and~\cite{shang2017survey} section II.A\footnote{We cite a survey because there is no standalone publication on CCNx 0.8 Sync. The survey refers to online documentation at \url{https://github.com/ProjectCCNx/ccnx/blob/master/doc/technical/SynchronizationProtocol.txt}}. Neither work studies it as a self-contained approach of interest to the set reconciliation problem; consequently, the treatment is rather superficial, e.g., neither work considers collision resistance, even though it is relevant to both settings. Considering the simplicity of the approach, there may well be further publications reusing or reinventing it. To the best of our knowledge, there is no prior work dedicated to the algorithm itself.

We give an overview of the existing literature on set fingerprinting in \cref{related-fingerprinting}, and of the set reconciliation literature in \cref{related-reconciliation}.

\section{Fingerprinting Sets}
\label{related-fingerprinting}

As range-based synchronization relies on fingerprinting sets, we give a small overview of work on that topic. Some of these references have already come up within this thesis because they are directly related to our approaches; the other references are given to round out the picture and because they might prove useful for developing further ideas.

Merkle trees~\cite{merkle1989certified} introduce the idea of maintaining hashes in a tree to allow efficient updating without a group structure on the hashes. Since the exact shape of the tree determines the root label, unique representations of sets as trees are of interest. \cite{uniquerepresentation} proves the important negative result that in general unique representations require superlogarithmic time to maintain under insertions and deletions. \cite{sundar1994unique} gives logarithmic solutions for sparse or dense sets and points to further deterministic solutions. Unique representations with probabilistic time complexities that are logarithmic in the expected case have been studied in~\cite{pugh1989incremental}, suggesting hash-tries as a set representation for computing hashes. Pugh also developed skip lists~\cite{pugh1990skip}, a probabilistic set representation not based on trees. \cite{seidel1996randomized} offers treaps, another randomized tree structure. Further study of uniquely represented data structures has been done under the moniker of \textit{history-independent data structures}, including IO-efficient solutions for treaps~\cite{golovin2009b} and skip lists~\cite{bender2016anti} in an \textit{external memory} computational model.

Beyond the comparison of root labels for set equality testing, Merkle trees and their technique of hashing the concatenation of hashes form the basis of many authenticated data structures, ranging from simple balanced trees or treaps~\cite{naor2000certificate} and skip lists~\cite{goodrich2000efficient} to more general DAG-based data structures~\cite{martel2004general}. A different line of authenticated data structures utilizes dynamic accumulators~\cite{camenisch2002dynamic}, small digests for sets that can be efficiently updated under insertion and deletion of items, and which allow computing small certificates that some item has been ingested.
\cite{papamanthou2016authenticated} and \cite{papamanthou2011optimal} use accumulators to provide authenticated set data structures that are more efficient than their Merkle-tree-based counterparts. Accumulators are stronger than necessary for the range-based synchronization approach, so they are not discussed in our main text. An interesting opportunity for further research could be the design of synchronization protocols which leverage the strong properties offered by dynamic accumulators.

Orthogonal to these lines of research are algebraic approaches. The earliest mention of hashing into a group and performing computations on hashes to compactly store information about sets that we could find is in~\cite{wegman1981new}. They provide probabilistic set equality testing, but without maintaining any tree structure, effectively anticipating (cryptographically insecure) homomorphic set fingerprinting. The first investigation of the cryptographic security of this approach was done in the seminal~\cite{bellare1997new}, albeit in the slightly less natural context of sequences rather than sets, with the additive hash being broken in~\cite{wagner2002generalized} and~\cite{lyubashevsky2005parity}. The multiset homomorphic formulation was given in~\cite{clarke2003incremental}; further constructions are given in~\cite{cathalo2009comparing},~\cite{maitin2017elliptic} and~\cite{lewi2019securing}. The generalized hash tree of~\cite{papamanthou2013streaming} extends the idea of combining hashes by a binary operation to operations that are not closed; instead, the output is mapped back into the original domain by a separate function. These ``compressed'' outputs are then arranged in a tree; this technique allows using the algebraic approach for authenticated data structures.

\section{Set Reconciliation}
\label{related-reconciliation}

We now present several alternative approaches to the set reconciliation problem, almost all of which perform reconciliation in a single communication round at the price of high computational cost. In the following presentations, we consider the setting of a node $\mathcal{X}_0$ holding a set $X_0 \subseteq U$ reconciling with a node $\mathcal{X}_1$ holding a set $X_1 \subseteq U$. We denote by $n_{\triangle}$ the size of the symmetric difference between $X_0$ and $X_1$.

\subsection{Characteristic Polynomial Interpolation}

The seminal work on set reconciliation is~\cite{minsky2003set}, introducing an approach based on characteristic polynomial interpolation (CPI). They consider the items to reconcile as bit-strings of length $b$. Given a set $S = \set{x_1, x_2, \ldots, x_n}$, the characteristic polynomial $\cp{S}$ is defined as $(Z - x_1)(Z - x_2)\ldots(Z - x_n)$. The elements in $S$ are exactly the roots of $\cp{S}$, so given the coefficients of $\cp{S}$, $S$ can be recovered by factoring the polynomial. Next observe that when dividing one characteristic polynomial by another, factors occurring in both polynomials cancel each other out, so it holds that:
\[
\frac{\cp{X_0}}{\cp{X_1}} = \frac{\cp{X_0 \setminus X_1}}{\cp{X_1 \setminus X_0}}
\]
For instance, for $X_0 = \{1, 2, 3\}$ and $X_1 = \{2, 3, 4\}$, this quotient reduces to $\frac{Z - 1}{Z - 4}$. The degrees of $\cp{X_0 \setminus X_1}$ and $\cp{X_1 \setminus X_0}$ are upper-bounded by $n_{\triangle}$. Assume that the nodes have somehow obtained an approximation $\bar{n}_{\triangle} \geq n_{\triangle}$. The nodes then agree on $\bar{n}_{\triangle}$ sample points from some sufficiently large field, and node $\mathcal{X}_i$ locally evaluates $\cp{X_i}$.
The results are exchanged, so each node can then recover the value of $\frac{\cp{X_0 \setminus X_1}}{\cp{X_1 \setminus X_0}}$ at each of the sample points. Knowing these values, a node can interpolate them to recover $\cp{X_0 \setminus X_1}$ and $\cp{X_1 \setminus X_0}$. Factoring these then yields the items $X_0 \setminus X_1$ and $X_1 \setminus X_0$, completing the reconciliation.

The number of evaluation results that need to be exchanged is linear in $n_{\triangle}$, thus the total number of bits that need to be transmitted is proportional to $n_{\triangle}$, which is more efficient than the range-based approach. Evaluating $\cp{X_i}$ at $\bar{n}_{\triangle}$ sample points takes $\complexity{\abs{X_i} \cdot \bar{n}_{\triangle}}$ time, since every item in $X_i$ increases the number of terms in the characteristic polynomial. So while the communication complexity is asymptotically optimal, the computational complexity for a node is at least linear in the size of its set. Interpolating the evaluation points can be reduced to performing Gaussian elimination in $\complexity{\bar{n}_{\triangle}^3}$ time. The authors also give an $\complexity{\bar{n}_{\triangle}^3}$ time algorithm for finding the roots of the interpolated polynomials. The overall computational complexity is thus in $\complexity{\abs{X_i} \cdot \bar{n}_{\triangle} + \bar{n}_{\triangle}^3}$. The space complexity of the computations is not elaborated on, but since recovering the polynomials requires the information about all $\bar{n}_{\triangle}$ evaluation points, it appears to be at least in $\complexity{\bar{n}_{\triangle}}$, i.e., an implementation using only a fixed amount of memory is not possible.

The above description of the protocol assumes that an upper bound $\bar{n}_{\triangle} \geq n_{\triangle}$ is known; the protocol then performs reconciliation within a single round trip. For the protocol to be asymptotically efficient in the communication complexity, $\bar{n}_{\triangle}$ must be within a constant factor of $n_{\triangle}$. If no such value can be known in advance, the number of sample points can be increased by a constant factor over multiple communication rounds, resulting in an appropriate $\bar{n}_{\triangle}$ after a logarithmic number of rounds and leading to $\complexity{\log(\bar{n}_{\triangle})}$ communication rounds overall. In each round, the nodes verify probabilistically whether $\bar{n}_{\triangle}$ is large enough by sending some additional random sample points and verifying whether the result of the interpolation correctly evaluates these samples. A linear number of verification sample points decreases the probability of false positives exponentially.

\subsection{Invertible Bloom Lookup Tables}

A bloom filter~\cite{bloom1970space} is a probabilistic data structure for set membership queries. Given an item, the bloom filter can answer a membership query with a certain false positive rate. The size of a bloom filter is a fraction of the size of an array storing the items. The larger the filter is chosen relative to the number of items it stores, the smaller the false positive rate. Invertible bloom lookup tables~\cite{goodrich2011invertible} (IBLTs) extend this behavior with the ability to list all items stored in the data structure, albeit again with an error probability which decreases as the fraction of space per contained item goes up.
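To make this listing capability concrete, the following sketch shows one possible realization. It is our own illustrative Python code under simplifying assumptions (three hash positions per item, items encoded as non-negative 64-bit integers, SHA-256 as the hash function) and not the exact construction of~\cite{goodrich2011invertible}. Each cell stores a count, an XOR of keys, and an XOR of key hashes; listing works by repeatedly peeling cells that contain exactly one item. The cell-wise subtraction used by the reconciliation protocol described next is included as well.

\begin{verbatim}
import hashlib

def h(x, salt):
    # hash of a 64-bit integer item, domain-separated by salt
    d = hashlib.sha256(bytes([salt]) + x.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big")

class IBLT:
    def __init__(self, m, k=3):
        self.m, self.k = m, k
        # each cell holds [count, keySum, hashSum]
        self.cells = [[0, 0, 0] for _ in range(m)]

    def _positions(self, x):
        return [h(x, i) % self.m for i in range(self.k)]

    def insert(self, x, sign=1):
        # sign = -1 deletes the item again
        for p in self._positions(x):
            cell = self.cells[p]
            cell[0] += sign
            cell[1] ^= x
            cell[2] ^= h(x, 255)

    def subtract(self, other):
        # cell-wise difference; the result encodes the symmetric
        # difference of the two encoded sets
        for c, d in zip(self.cells, other.cells):
            c[0] -= d[0]
            c[1] ^= d[1]
            c[2] ^= d[2]

    def list_items(self):
        # peel cells containing exactly one item until nothing
        # changes; may fail (leave items unlisted) if overloaded
        items, progress = [], True
        while progress:
            progress = False
            for cell in self.cells:
                if abs(cell[0]) == 1 and cell[2] == h(cell[1], 255):
                    items.append((cell[1], cell[0]))
                    self.insert(cell[1], -cell[0])
                    progress = True
        return items
\end{verbatim}

Listing succeeds with high probability as long as the number of encoded items stays well below the number of cells, which is precisely the regime the reconciliation protocol below operates in.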
\cite{eppstein2011s}~introduces set reconciliation based on IBLTs by formulating a way to compute the difference between two IBLTs, with the result being an IBLT that encodes the corresponding set difference. This leads to the following set reconciliation algorithm: Assume the nodes have obtained an approximation $\bar{n}_{\triangle} \geq \abs{X_0 \setminus X_1}$. $\mathcal{X}_0$ creates an IBLT large enough that listing its contents succeeds with sufficient probability as long as it contains at most $\bar{n}_{\triangle}$ items. It then stores all items of $X_0$ in this table and transmits the table to $\mathcal{X}_1$. $\mathcal{X}_1$ creates an IBLT of the same size containing the items of $X_1$, and then subtracts the tables, yielding a table containing at most $\bar{n}_{\triangle}$ items. This table is invertible, so $\mathcal{X}_1$ can recover $X_0 \setminus X_1$ from it. The same protocol can run in parallel with the roles reversed, resulting in full reconciliation.

Creating the IBLT requires $\complexity{\abs{X_i}}$ time for node $\mathcal{X}_i$ and $\complexity{\bar{n}_{\triangle}}$ space, both of which dominate the cost of subtracting the IBLTs and of inverting them.

Similar to the CPI approach, the nodes need an estimate of the size of the set difference before they can perform reconciliation. The authors present a single-message estimation protocol for the set difference that uses IBLTs as well. The size of the message is in $\complexity{\log(\abs{U})}$. Both creating the message and processing a received one require $\complexity{\abs{X_i}}$ time for node $\mathcal{X}_i$ and $\complexity{\log(\abs{U})}$ space.

Overall, the IBLT approach achieves set reconciliation in a single round trip, transmitting only $\complexity{n_{\triangle} + \log(\abs{U})}$ bits. The computational cost is however linear in the size of the sets, and the space requirements for the computation are in $\complexity{n_{\triangle}}$, which can cause trouble in the case of lopsided reconciliation sessions.

\subsection{Other Bloom Filter Approaches}

Several other approaches based on bloom filters have been published, all of which achieve a constant number of roundtrips and a small message size at the cost of at least linear computation time and computation space requirements. \cite{byers2002fast}~transmits the nodes of a patricia tree representation of the sets in a bloom filter, \cite{tian2011exact} uses a bloom filter-based approach to obtain an estimate of the size of the set difference in the first phase of a CPI reconciliation, \cite{guo2012set} employs counting bloom filters, and \cite{luo2019set} makes use of cuckoo filters. \cite{ozisik2019graphene}~combines IBLTs with regular bloom filters to reduce the message size.

\subsection{Partition Reconciliation}

\cite{minsky2002practical} is the only publication we found which considers reconciliation in a logarithmic number of rounds in order to reduce computational load. It builds upon the CPI approach, but unlike~\cite{minsky2003set}, does not attempt to find some $\bar{n}_{\triangle} \geq n_{\triangle}$ to craft a message containing sufficient information to compute the set difference. Instead, the protocol fixes some $\bar{m} \in \N$ and attempts reconciliation using $\bar{m}$ as an approximation of $n_{\triangle}$. If reconciliation fails, $X_0$ and $X_1$ are partitioned and reconciliation of the partitions is attempted, again with the same value $\bar{m}$ determining the message size.
Because the size of the partitions decreases, so must eventually the size of the symmetric difference between the pairs of partitions, until it is less than $\bar{m}$ and reconciliation succeeds. This approach eliminates the cubic scaling of the computation time in $\bar{n}_{\triangle}$.

Similar to our auxiliary tree, the authors propose a tree structure of partitions where a parent node includes the intervals of its children. The reconciliation message for each of these intervals is precomputed and stored within the tree. Given such a partition tree, the reconciliation procedure has the same asymptotic complexity bounds as ours. It has better constant terms: whereas our approach recurses whenever the two ranges that are being compared are unequal, i.e., whenever the size of the symmetric difference is greater than zero, their approach only recurses whenever the size of the symmetric difference is greater than $\bar{m}$.

The tree can be updated over insertions and deletions in time proportional to the height of the tree, but these updates are not balancing. In the worst case, maintaining this auxiliary data structure can thus take time linear in the size of the set per update. Furthermore, the tree is specific to a particular choice of evaluation and control points for the characteristic polynomial. If the tree is to be reused across different reconciliation sessions, these points have to be fixed in advance. This could allow an attacker to craft malicious sets for which a failed reconciliation is not detected. As a consequence, this approach can only be used when the producers of the sets are trusted.
{ "alphanum_fraction": 0.8008312228, "avg_line_length": 194.9240506329, "ext": "tex", "hexsha": "1ad71b701b43b704c843f5bf28cd37081b9a337e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "01bd42dd4cc51078e1526ac7197c5294551beafb", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "AljoschaMeyer/master_thesis", "max_forks_repo_path": "chapters/related_work.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "01bd42dd4cc51078e1526ac7197c5294551beafb", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "AljoschaMeyer/master_thesis", "max_issues_repo_path": "chapters/related_work.tex", "max_line_length": 988, "max_stars_count": 1, "max_stars_repo_head_hexsha": "01bd42dd4cc51078e1526ac7197c5294551beafb", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "AljoschaMeyer/master_thesis", "max_stars_repo_path": "chapters/related_work.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-30T11:02:56.000Z", "max_stars_repo_stars_event_min_datetime": "2021-10-30T11:02:56.000Z", "num_tokens": 3466, "size": 15399 }
\par \section{Data Structures} \label{section:FrontMtx:dataStructure} \par The {\tt FrontMtx} structure has the following fields. \begin{itemize} \item {\tt int nfront} : number of fronts. \item {\tt int neqns} : number of rows and columns in the factor matrix. \item {\tt int symmetryflag} : flag to denote the type of symmetry of $A + \sigma B$. \begin{itemize} \item {\tt SPOOLES\_SYMMETRIC} --- $A$ and/or $B$ are symmetric. \item {\tt SPOOLES\_HERMITIAN} --- $A$ and/or $B$ are hermitian. \item {\tt SPOOLES\_NONSYMMETRIC} --- $A$ and/or $B$ are nonsymmetric. \end{itemize} \item {\tt int pivotingflag} : flag to specify pivoting for stability, \begin{itemize} \item {\tt SPOOLES\_NO\_PIVOTING} --- pivoting not used \item {\tt SPOOLES\_PIVOTING} --- pivoting used \end{itemize} \item {\tt int sparsityflag} : flag to specify storage of factors. \begin{itemize} \item {\tt 0} --- each front is dense \item {\tt 1} --- a front may be sparse due to entries dropped because they are below a drop tolerance. \end{itemize} \item {\tt int dataMode} : flag to specify data storage. \begin{itemize} \item {\tt 1} --- one-dimensional, used during the factorization. \item {\tt 2} --- two-dimensional, used during the solves. \end{itemize} \item {\tt int nentD} : number of entries in $D$ \item {\tt int nentL} : number of entries in $L$ \item {\tt int nentU} : number of entries in $U$ \item {\tt Tree *tree} : Tree object that holds the tree of fronts. Note, normally this is {\tt frontETree->tree}, but we leave this here for later enhancements where we change the tree after the factorization, e.g., merge/drop fronts. \item {\tt ETree *frontETree} : elimination tree object that holds the front tree. \item {\tt IVL *symbfacIVL} : {\tt IVL} object that holds the symbolic factorization. \item {\tt IV *frontsizesIV} : {\tt IV} object that holds the vector of front sizes, i.e., the number of internal rows and columns in a front. \item {\tt IVL *rowadjIVL} : {\tt IVL} object that holds the row list for the fronts, used only for a nonsymmetric factorization with pivoting enabled. \item {\tt IVL *coladjIVL} : {\tt IVL} object that holds the column list for the fronts, used only for a symmetric or nonsymmetric factorization with pivoting enabled. \item {\tt IVL *lowerblockIVL} : {\tt IVL} object that holds the front-to-front coupling in $L$, used only for a nonsymmetric factorization. \item {\tt IVL *upperblockIVL} : {\tt IVL} object that holds the front-to-front coupling in $U$. \item {\tt SubMtx **p\_mtxDJJ} : a vector of pointers to diagonal submatrices. \item {\tt SubMtx **p\_mtxUJJ} : a vector of pointers to submatrices in U that are on the block diagonal, used only during the factorization. \item {\tt SubMtx **p\_mtxUJN} : a vector of pointers to submatrices in U that are off the block diagonal, used only during the factorization. \item {\tt SubMtx **p\_mtxLJJ} : a vector of pointers to submatrices in L that are on the block diagonal, used only during a nonsymmetric factorization. \item {\tt SubMtx **p\_mtxLNJ} : a vector of pointers to submatrices in L that are off the block diagonal, used only during a nonsymmetric factorization. \item {\tt I2Ohash *lowerhash} : pointer to a {\tt I2Ohash} hash table for submatrices in $L$, used during the solves. \item {\tt I2Ohash *upperhash} : pointer to a {\tt I2Ohash} hash table for submatrices in $U$, used during the solves. \item {\tt SubMtxManager *manager} : pointer to an object that manages the instances of submatrices during the factors and solves. 
\item {\tt Lock *lock} : pointer to a {\tt Lock} lock used in a multithreaded environment to ensure exclusive access while allocating storage in the {\tt IV} and {\tt IVL} objects. This is not used in a serial or MPI environment.
\item {\tt int nlocks} : number of times the lock has been locked.
\item {\tt PatchAndGo *info} : this is a pointer to an object that is used by the {\tt Chv} object during the factorization of a front.
\end{itemize}
\par
One can query the properties of the front matrix object using these simple macros.
\begin{itemize}
\item {\tt FRONTMTX\_IS\_REAL(frontmtx)} is {\tt 1} if {\tt frontmtx} has real entries and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_COMPLEX(frontmtx)} is {\tt 1} if {\tt frontmtx} has complex entries and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_SYMMETRIC(frontmtx)} is {\tt 1} if {\tt frontmtx} comes from a symmetric matrix or linear combination of symmetric matrices, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_HERMITIAN(frontmtx)} is {\tt 1} if {\tt frontmtx} comes from a Hermitian matrix or linear combination of Hermitian matrices, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_NONSYMMETRIC(frontmtx)} is {\tt 1} if {\tt frontmtx} comes from a nonsymmetric matrix or linear combination of nonsymmetric matrices, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_DENSE\_FRONTS(frontmtx)} is {\tt 1} if {\tt frontmtx} comes from a direct factorization and so stores dense submatrices, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_SPARSE\_FRONTS(frontmtx)} is {\tt 1} if {\tt frontmtx} comes from an approximate factorization and so stores sparse submatrices, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_PIVOTING(frontmtx)} is {\tt 1} if pivoting was used during the factorization, and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_1D\_MODE(frontmtx)} is {\tt 1} if the factors are still stored as a one-dimensional data decomposition (i.e., the matrix has not yet been post-processed), and {\tt 0} otherwise.
\item {\tt FRONTMTX\_IS\_2D\_MODE(frontmtx)} is {\tt 1} if the factors are stored as a two-dimensional data decomposition (i.e., the matrix has been post-processed), and {\tt 0} otherwise.
\end{itemize}
{ "alphanum_fraction": 0.7349860335, "avg_line_length": 34.7151515152, "ext": "tex", "hexsha": "22720f91414eccd2c51e3aa8a57e24019bc44770", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-08-29T18:41:28.000Z", "max_forks_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alleindrach/calculix-desktop", "max_forks_repo_path": "ccx_prool/SPOOLES.2.2/FrontMtx/doc/dataStructure.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_issues_repo_issues_event_max_datetime": "2018-01-25T16:08:31.000Z", "max_issues_repo_issues_event_min_datetime": "2017-09-21T17:03:55.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alleindrach/calculix-desktop", "max_issues_repo_path": "ccx_prool/SPOOLES.2.2/FrontMtx/doc/dataStructure.tex", "max_line_length": 72, "max_stars_count": null, "max_stars_repo_head_hexsha": "2cb2c434b536eb668ff88bdf82538d22f4f0f711", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alleindrach/calculix-desktop", "max_stars_repo_path": "ccx_prool/SPOOLES.2.2/FrontMtx/doc/dataStructure.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1772, "size": 5728 }
\chapter{The Finite Element Solver} \label{chap:fem}
In this chapter all steps required for solving a PDE using the finite element method in {\ViennaFEM} are discussed. Basically, the discussion follows the tutorials in \lstinline|examples/tutorial/|. Familiarity with the very basics of the finite element method is expected. However, since {\ViennaFEM} aims at hiding unimportant low-level details from the user, no detailed knowledge is required.

\section{Grid Setup}
The first step is to set up the discrete grid using {\ViennaGrid}, which is accomplished by reading the mesh from a file. File readers for the VTK format \cite{VTK,VTKfileformat} and the Netgen legacy format \cite{netgen} are provided by {\ViennaGrid}. The domain is instantiated by first retrieving the domain type and then instantiating the domain as usual:
\begin{lstlisting}
//retrieve domain type:
typedef viennagrid::config::triangular_2d              ConfigType;
typedef viennagrid::result_of::domain<ConfigType>::type DomainType;

DomainType my_domain;

viennagrid::io::netgen_reader my_reader;
my_reader(my_domain, "../examples/data/square512.mesh");
\end{lstlisting}
Here, the Netgen file reader is used to read the mesh '\texttt{square512.mesh}' included in the {\ViennaFEM} release.

\TIP{ The easiest way of generating meshes for use within {\ViennaFEM} is to use Netgen \cite{netgen}, which also offers a graphical user interface. }

\section{Boundary Conditions}
The next step is to set boundary conditions for the respective partial differential equation. Only Dirichlet boundary conditions and homogeneous Neumann boundary conditions are supported in {\ViennaFEMversion}. Inhomogeneous Neumann boundary conditions as well as Robin-type boundary conditions are scheduled for future releases.

Boundary conditions are imposed by interpolation at boundary vertices. Using {\ViennaGrid}, this is achieved by iterating over all vertices and calling the function \lstinline|set_dirichlet_boundary()| from {\ViennaFEM} for the respective vertices. For example, to specify inhomogeneous Dirichlet boundary conditions $u = 1$ for all vertices with $x$-coordinate equal to zero, the required code is:
\begin{lstlisting}
typedef viennagrid::result_of::ncell_range<DomainType, 0>::type  VertexContainer;
typedef viennagrid::result_of::iterator<VertexContainer>::type   VertexIterator;

VertexContainer vertices = viennagrid::ncells<0>(my_domain);
for (VertexIterator vit = vertices.begin(); vit != vertices.end(); ++vit)
{
  if ( (*vit)[0] == 0.0 )
    viennafem::set_dirichlet_boundary(*vit, 1.0, 0);
}
\end{lstlisting}
The first argument to \lstinline|set_dirichlet_boundary| is the vertex, the second argument is the Dirichlet boundary value, and the third argument is the simulation ID. The simulation ID makes it possible to distinguish between different finite element solutions of possibly different partial differential equations and should be chosen uniquely for each solution to be computed. However, it is not required to pass a simulation ID to \lstinline|set_dirichlet_boundary|, in which case it defaults to zero.

The identification of boundary vertices may not be possible in cases where the simulation domain is complicated. In such cases, the indices of boundary vertices may either be read from a separate file, or boundary nodes may already be flagged in the VTK file. In addition, boundary and/or interface detection functionality from {\ViennaGrid} can be used.

\section{PDE Specification}
The actual PDE to be solved is specified using the symbolic math library {\ViennaMath}.
Either the strong form or the weak form can be specified. For the case of the Poisson equation \begin{align} \Delta u = -1 \ , \end{align} the representation using {\ViennaMath} is \begin{lstlisting} function_symbol u; equation poisson_eq = make_equation( laplace(u), -1); \end{lstlisting} where all functions and types reside in the namespace \lstinline|viennamath|. It is also possible to supply an inhomogeneous right hand side $f$ instead of a constant value $-1$. In {\ViennaMathversion}, $f$ is required to be constant within each cell of the mesh. The values of $f$ are accessed via objects of type \lstinline|cell_quan<T>|, where \lstinline|T| is the cell type of the {\ViennaGrid} domain: \begin{lstlisting} viennafem::cell_quan<CellType> f; f.wrap_constant( data_key ); \end{lstlisting} Here, \lstinline|data_key| denotes the key to be used in order to access the values of $f$ of type \lstinline|double| from the respective cell. \TIP{Have a look at \lstinline|examples/tutorial/poisson_cellquan_2d| for an example of using \lstinline|cell_quan|.} \section{Running the FE Solver} After the boundary conditions and the partial differential equation are specified, the finite element solver can be started. This is accomplished in three steps: \begin{itemize} \item Instantiate the PDE assembler object, the system matrix and the load vector \item Run the assembly \item Solve the resulting system of linear equations \end{itemize} The first step is accomplished by the lines \begin{lstlisting} viennafem::pde_assembler fem_assembler; boost::numeric::ublas::compressed_matrix<double> system_matrix; boost::numeric::ublas::vector<double> load_vector; \end{lstlisting} Other matrix and vector types can also be used as long as they offer access to their entries using \lstinline|operator()| as well as \lstinline|size()| and \lstinline|resize()| member functions. The second step, namely the assembly of the linear system of equations, is triggered by passing the PDE system, the domain, the system matrix and the load vector objects to the functor interface: \begin{lstlisting} fem_assembler(viennafem::make_linear_pde_system(poisson_eq, u), my_domain, system_matrix, load_vector); \end{lstlisting} The convenience function \lstinline|make_linear_pde_system()| sets up a system given by the Poisson equation defined in the previous section, and specifies the unknown function to be \lstinline|u|. By default, a linear finite element simulation using the simulation ID $0$ is triggered. Additional options can be specified for the PDE system using the optional third parameter. In {\ViennaFEMversion}, three parameters can be specified: The first parameter is the simulation ID, while the second and third parameter denote the basis function tag used for the trial space and the test space, respectively. Currently, only Lagrange basis functions are supported using the tag \lstinline|lagrange_tag<order>|, where the parameter \lstinline|order| denotes the polynomial order along cell edges. Note that only linear basis functions are supported at the moment. Thus, the previous call to \lstinline|make_linear_pde_system| is equivalent to \begin{lstlisting} make_linear_pde_system(poisson_eq, u, make_linear_pde_options(0, lagrange_tag<1>(), lagrange_tag<1>()) ), \end{lstlisting} The third step is to solve the assembled system of linear equations. For this purpose, the solvers in {\ViennaCL} are used. 
For example, the iterative conjugate gradient solver is launched via
\begin{lstlisting}
VectorType pde_result = solve(system_matrix, load_vector, cg_tag() );
\end{lstlisting}
where \lstinline|solve()| and \lstinline|cg_tag| reside in namespace \lstinline|viennacl::linalg|. An overview of solvers and preconditioners available in {\ViennaCL} can be found in the {\ViennaCL} manual in \lstinline|doc/viennacl.pdf|.

\section{Postprocessing Results}
For a visualization of results, a VTK exporter is included in {\ViennaFEM} in the namespace \lstinline|viennafem::io| and defined in the header file \lstinline|viennafem/io/vtk_writer.hpp|:
\begin{lstlisting}
write_solution_to_VTK_file(pde_result, "filename", my_domain, 0);
\end{lstlisting}
The first argument is the computed solution vector, the second is the filename, the third is the domain object, and the fourth parameter is the simulation ID (optional, defaults to $0$).

\section{{\LaTeX} Simulation Protocol}
{\ViennaFEM} by default writes a protocol named \lstinline|protocol_<ID>.tex|, where \lstinline|<ID>| is replaced by the respective simulation ID. This file can be processed directly with a {\LaTeX} compiler and provides a summary containing the strong formulation, the weak formulation, the basis functions used, etc. The document is compiled on Unix-based systems using either the standard {\LaTeX} toolchain
\begin{lstlisting}
 $> latex protocol_0.tex
 $> dvips protocol_0.dvi    #optional
 $> ps2pdf protocol_0.ps    #optional
\end{lstlisting}
or using PDF{\LaTeX}:
\begin{lstlisting}
 $> pdflatex protocol_0.tex
\end{lstlisting}
Any {\LaTeX} environment can be used on Windows-based systems. The resulting PDF file can then be viewed with your favorite PDF viewer.
{ "alphanum_fraction": 0.7594199714, "avg_line_length": 57.9808917197, "ext": "tex", "hexsha": "5f39fac2fa2abce1a8ba1e5cde31a8b942e93e7e", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-12-23T20:24:15.000Z", "max_forks_repo_forks_event_min_datetime": "2021-12-23T20:24:15.000Z", "max_forks_repo_head_hexsha": "1f2d772cef5fb1c148e22e5bbbb6302b301e896b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "viennafem/viennafem-dev", "max_forks_repo_path": "doc/manual/fem.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "1f2d772cef5fb1c148e22e5bbbb6302b301e896b", "max_issues_repo_issues_event_max_datetime": "2020-03-04T03:40:11.000Z", "max_issues_repo_issues_event_min_datetime": "2019-11-17T03:28:47.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "viennafem/viennafem-dev", "max_issues_repo_path": "doc/manual/fem.tex", "max_line_length": 286, "max_stars_count": 3, "max_stars_repo_head_hexsha": "1f2d772cef5fb1c148e22e5bbbb6302b301e896b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "viennafem/viennafem-dev", "max_stars_repo_path": "doc/manual/fem.tex", "max_stars_repo_stars_event_max_datetime": "2020-02-19T14:39:03.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-23T17:35:29.000Z", "num_tokens": 2118, "size": 9103 }
% !TeX spellcheck = en_GB
% !TeX encoding = UTF-8
\chapter{IT-Specifications and Implementation}
\label{ch:it_specifications}
\epigraph{"What would life be if we had no courage to attempt anything?"}{- Vincent Van Gogh}

The previous two chapters focus on the theory of the prototype. In contrast, this chapter goes into the details of the implementation.

\section{The Prototype's Structure}
The structure of the prototype implementation is more or less identical to the five steps which are introduced in the requirements of the prototype (Chapter \ref{ch:requirements}). In Figure \ref{fig:51_Requirements_process_libs}, the steps are visualized again. Additionally, Figure \ref{fig:51_Requirements_process_libs} contains information about which tools are mainly used in each process step.

\begin{figure}[htp]
	\centering
	\fbox{\includegraphics[width=0.9\linewidth]{photo/51_Requirements_process_libs}}
	\caption{Python and the so-called Matplotlib library are used in the overall implementation of the prototype. Whereas Python is the primary programming language, Matplotlib is used to make visualisations. Also, it is used to create diagrams for this thesis to support statements like Figure \ref{fig:47_data-set-quantative}. The data engineering, the preprocessing steps, the VGG16 model and the evaluation criteria are implemented with the help of the TensorFlow library. The SHAP library is used within the last step to make transparent decisions.}
	\label{fig:51_Requirements_process_libs}
\end{figure}

Python and the so-called Matplotlib library are used in the overall implementation of the prototype. Whereas Python is the primary programming language, Matplotlib is used to make visualizations. For example, if a dataset is successfully loaded (Section \ref{sec:input_data}), the images are visualized with Matplotlib. Furthermore, Matplotlib is used to create diagrams for the thesis in order to support statements, as can be seen in the following listing:
\nopagebreak
\begin{python}[label={pred}, caption={Comparing the candidate datasets with Matplotlib}]
import numpy as np
import matplotlib.pyplot as plt

# data_sets and data_sets_sizes hold the dataset names and their sizes in MB
colors = ( '#ece7f2','#a6bddb','#2b8cbe')
distincted_lables = np.array([2,10,100])
fig, (ax1,ax2,ax3) = plt.subplots(1, 3,figsize=(15,10))
ax1.bar(data_sets, data_sets_sizes, color=colors)
ax1.set_title("Size (MB)")
ax2.bar(data_sets, distincted_lables, color=colors)
ax2.set_title("Distinced labels")
ax3.bar(data_sets, (data_sets_sizes/distincted_lables), color=colors)
ax3.set_title("Size / Distinced Label")
plt.show()
\end{python}
The result of the code can be seen in Figure \ref{fig:47_data-set-quantative} and is used in the section about the input data for the prototype (Section \ref{sec:input_data}). The data, the preprocessing steps, the VGG16 model and the evaluation criteria are implemented with the help of the TensorFlow library (Sections \ref{sec:input_data} - \ref{make_predictions}). The SHAP library is used within the last step to make transparent decisions (Section \ref{make_transparent_decisions}). In the following, it is described why these tools are used within the prototype implementation. In addition, the tools are compared to alternatives.

\section{Python as Programming Language for the Prototype}
Python is the primary programming language for the prototype implementation for the following reasons:
\begin{enumerate}
	\item Python is the most popular language when it comes to machine learning and deep learning. This can be observed in Figure \ref{fig:52_python_popular}.
It shows the percentage of matching job postings and is created by the job portal Indeed. The figure compares the languages Python, R, Java, Javascript, C, C++, Julia and Scala \cite{LegacyCo5:online}.
	\item Python's excellent libraries, which allow the required steps to be implemented in a straightforward way.
	\item Because of its popularity and because the language is open source, there is a vast community to help with problems. Furthermore, a lot of documentation for Python is available online.
	\item As already mentioned above, Python offers a variety of libraries, and some of them are excellent visualisation tools as well. For the prototype implementation, it is necessary to be able to represent data in a human-readable format.
	\item Libraries like Matplotlib allow building plots for better data understanding and visualisation. Different application programming interfaces also simplify the visualisation process and make it easier to generate figures for transparent decisions.
\end{enumerate}
A few alternatives to Python can be seen in Figure \ref{fig:52_python_popular}. Every programming language has its downsides, and none of these languages is perfect. For example, other programming languages may be faster and more efficient than Python. In comparison, Python is considered the language with the better packages and the greater community. These two points were mentioned as the reasons why Python is the choice for the prototype implementation. If the community and package support were to increase in other languages, the decision could be different.

\begin{figure}[htp]
	\centering
	\fbox{\includegraphics[width=0.7\linewidth]{photo/52_python_popular}}
	\caption{Python is the most popular language when it comes to machine learning and deep learning. The figure shows the percentage of matching job postings and is created by the job portal Indeed. The figure compares the languages Python, R, Java, Javascript, C, C++, Julia and Scala \cite{LegacyCo5:online}.}
	\label{fig:52_python_popular}
\end{figure}

\begin{figure}[htp]
	\centering
	\fbox{\includegraphics[width=0.7\linewidth]{photo/53_boston_summary_plot_bar}}
	\caption{The figure shows a built-in visualization from the SHAP library. It can be used to plot a visualization of the SHAP values for non-image data.}
	\label{fig:53_boston_summary_plot_bar}
\end{figure}

\section{Libraries within the Prototype Implementation}

\subsection{Matplotlib}
Matplotlib is already mentioned in the section about the structure of the prototype as a tool for visualisation. Furthermore, it is named as one of the reasons why Python is used as the primary programming language in the first place. Matplotlib is quite easy to use. Usually, while plotting, the same structure is used in every new plot. For example, for questions that arose during this thesis, Matplotlib can be used to visualise the answers in an easy way.
As can be seen in the following listing, the code which is used to create two different diagrams is quite similar:
\begin{python}[label={pred}, caption={Creating two different diagrams with Matplotlib}]
# Figure 1 (reuses the variables from the previous listing)
colors = ( '#ece7f2','#a6bddb','#2b8cbe')
distincted_lables = np.array([2,10,100])
fig, (ax1,ax2,ax3) = plt.subplots(1, 3,figsize=(15,10))
ax1.bar(data_sets, data_sets_sizes, color=colors)
ax1.set_title("Size (MB)")
ax2.bar(data_sets, distincted_lables, color=colors)
ax2.set_title("Distinced labels")
ax3.bar(data_sets, (data_sets_sizes/distincted_lables), color=colors)
ax3.set_title("Size / Distinced Label")
plt.show()

# Figure 2
labels = 'Train', 'Validation', 'Test',
sizes = [80,10,10]
explode = (0.1, 0, 0)
plt.rcParams['font.size'] = 20
fig1, ax1 = plt.subplots(figsize=(15,10))
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=90, colors = colors)
ax1.axis('equal')
plt.show()
\end{python}
The name comes from the fact that it emulates Matlab. Matlab is an enterprise software environment and programming language which is traditionally used by scientists and companies to work on machine learning tasks. It is expensive, difficult to scale and, as a programming language, slow in comparison with the other programming languages which are introduced in Figure \ref{fig:52_python_popular}. However, Matlab offers excellent ways to implement data visualisation. So, Matplotlib in Python is used as a robust, free and easy library for data visualisation instead.

Seaborn is an alternative approach when it comes to visualising data in Python. The situation with visualisation libraries is almost the same as with programming languages: as described at the end of the Python section, visualisation libraries differ in use cases, scalability and many other things. Based on this, the best visualisation tool for the particular example of the prototype implementation is Matplotlib. If the prototype contained more statistics instead of image recognition, then Seaborn would be the right choice because it has a lot of functionality suitable for mathematical tasks built in. The prototype has to deal with image data, and Matplotlib offers excellent opportunities to visualise them. For example, to get a first impression of the downloaded data, it takes just a few lines of code to visualise them:
\begin{python}[label={pred}, caption={Visualising a sample of the training images}]
fig = plt.figure(figsize=(20, 4))
for i in np.arange(20):
    ax = fig.add_subplot(2, 10, i+1, xticks=[], yticks=[])
    plt.imshow(x_train[:20][i], cmap='gray')
    ax.set_title(cats_vs_dogs_labels[y_train[:20][i].item()])
\end{python}
x\_train contains the image data and y\_train the corresponding labels, as described in the Theory Chapter (Chapter \ref{ch:theory}).

\subsection{TensorFlow}
TensorFlow is used as a full-stack library for four out of five steps of the prototype implementation. That means it offers ways to implement data loading, data preprocessing, model creation and making predictions (Sections \ref{sec:input_data} - \ref{make_predictions}). TensorFlow offers different levels of abstraction. For the prototype implementation, a level of abstraction is used which provides two things: first, a syntax which makes it easy to implement ideas, and second, enough freedom to adjust the code. Adjusting the code is important because, as described in the functional specifications, the base of the pre-trained model has to be changed.
If the syntax is overwhelming, it can make things worse. Especially because the goal is to make transparent decisions, it is not good to use an abstraction level that adds even more confusion. This is why Keras, as a high-level application programming interface, is used as a sufficient level of abstraction. For example, when it comes to implementing a VGG16 architecture as described in the functional specifications, this can help a lot. In the following listings, two things can be observed. First, the code which is used in the prototype implementation needs just a few lines to load a pre-trained VGG16 model: the model is instantiated as executable code, and then the pre-trained weights are loaded:
\begin{python}[label={pred}, caption={Loading the pre-trained VGG16 architecture with Keras}]
from keras.applications.vgg16 import VGG16

model = VGG16(weights='imagenet', include_top=True)
model.summary()
\end{python}
The following listing shows an implementation from scratch of a small neural network with a single hidden layer:
\begin{python}[label={pred}, caption={A simple neural network implemented from scratch}]
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # derivative of the sigmoid, expressed in terms of its output value
    return x * (1.0 - x)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input    = x
        self.weights1 = np.random.rand(self.input.shape[1],4)
        self.weights2 = np.random.rand(4,1)
        self.y        = y
        self.output   = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # application of the chain rule to find the derivative of the loss function
        # with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))

        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2
\end{python}
It has more lines of code, even though the model is so simple. This points out the importance of a library when it comes to the implementation of a deep learning model. Furthermore, the library is used to get the data and adjust it such that it fits the use case of the prototype. Instead of downloading the data by hand and using multiple libraries to get the result, TensorFlow provides a very handy end-to-end solution for the first two steps. With just one function call it downloads the images, scales them and splits them into different parts:
\begin{python}[label={pred}, caption={Loading and splitting the dataset with TensorFlow Datasets}]
# tfds refers to the TensorFlow Datasets package (import tensorflow_datasets as tfds)
(raw_train, raw_validation, raw_test), metadata = tfds.load(
    'cats_vs_dogs',
    split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
    with_info=True,
    as_supervised=True,
)
\end{python}
Even though the alternative library PyTorch is faster and its syntax is even clearer, the opportunity to handle two out of five steps with just one function call made the decision clear. TensorFlow is the best fit for the prototype implementation because it allows implementing the whole prototype with just a few lines of code.

\subsection{SHAP Library for Transparent Decisions}
Based on the work of Lundberg and Lee, the SHAP library offers a method which allows computing the SHAP values after each iteration from each layer.
For a visualization as described in the requirements (Chapter \ref{ch:requirements}) and the functional specifications (Chapter \ref{ch:functional_specifications}), a separately implemented function is required. The SHAP software library offers different classes and methods which optimise the calculation of SHAP values for various machine learning approaches. The prototype uses the deep explainer, which is optimised for deep learning models. It is not tailored specifically to convolutional neural networks, but the implementation works fine and comes to a result in a sufficient amount of time. There is no other approach which provides similar functionality and classes as the SHAP library. This is why the library cannot be compared with others and has to be used for the calculations. Otherwise, the implementation has to be done from scratch.

\subsection{Implementation}
Data visualisation and model explainability are two integral aspects of the prototype implementation. They are the binoculars helping to see the patterns in the data which lead to the prototype's decisions. That is why the prototype uses a function which combines the original image with another layer that shows the calculated values on top. The visualisation is based on the one which is already offered by the SHAP library for non-image data. Figure \ref{fig:53_boston_summary_plot_bar} shows such a visualization from the SHAP library. It can be implemented by simply calling the summary\_plot() method, which gets the calculated SHAP values and the corresponding features as parameters. The function which is implemented in the prototype works similarly. However, by the following function call, it allows one to get visualizations of the SHAP values for image data:
\nopagebreak
\begin{python}[label={pred}, caption={Visualising the model decisions for image data}]
visualize_model_decisions(shap_values=shap_values, x=to_predict,
                          labels=index_names, figsize=(20, 40))
\end{python}
The output of the function can be seen in Figure \ref{fig:41_expected_gradients_layer_7}. More precisely, the images show how the function was used on the dataset after predictions were made.

\begin{figure}[htp]
	\centering
	\fbox{\includegraphics[width=0.9\linewidth]{photo/41_expected_gradients_layer_7}}
	\caption{The output of the visualize\_model\_decisions function. More precisely, the images show how the function was used on the dataset after predictions were made to visualize the SHAP values of the 7th layer in a convolutional neural network.}
	\label{fig:41_expected_gradients_layer_7}
\end{figure}

Note that not every aspect of the shown listings is explained in detail. They are meant to provide a better understanding of the points mentioned in each section. The theoretical background and the functional usage of the listings are described in the corresponding sections of the Functional Specifications (Chapter \ref{ch:functional_specifications}) and the \hyperref[ch:theory]{Theory Chapter (\ref{ch:theory})}.
{ "alphanum_fraction": 0.7888612811, "avg_line_length": 78.95215311, "ext": "tex", "hexsha": "bda843f772e84761e2df6ca5f6e6a17ac05ada17", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d61496a25e5dd016d84823f38277fc9472000b17", "max_forks_repo_licenses": [ "BSD-2-Clause" ], "max_forks_repo_name": "bohniti/Computing-Transparent-Decisions", "max_forks_repo_path": "Thesis/content/5_it_specifications.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d61496a25e5dd016d84823f38277fc9472000b17", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-2-Clause" ], "max_issues_repo_name": "bohniti/Computing-Transparent-Decisions", "max_issues_repo_path": "Thesis/content/5_it_specifications.tex", "max_line_length": 838, "max_stars_count": 1, "max_stars_repo_head_hexsha": "d61496a25e5dd016d84823f38277fc9472000b17", "max_stars_repo_licenses": [ "BSD-2-Clause" ], "max_stars_repo_name": "bohniti/Computing_transparent_decisions", "max_stars_repo_path": "Thesis/content/5_it_specifications.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-19T13:41:47.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-19T13:41:47.000Z", "num_tokens": 3745, "size": 16501 }
\chapter{Unsupervised Learning}~\label{clustering}
Unsupervised learning is a paradigm of the machine learning field in which knowledge is learned from data without using a teacher. It includes a large set of techniques and algorithms used for learning from data without knowing the true classes. The main application of unsupervised learning consists in estimating how data are organized in the space, so as to reconstruct the prior probability distribution of the data. To do that, clustering algorithms are used with the goal of grouping a set of objects in such a way that objects in the same cluster are strongly similar (\textit{internal criterion}) and objects from distinct clusters are strongly dissimilar (\textit{external criterion}).

The classical clustering problem starts with a set of $n$ objects and an $n \times n$ affinity matrix $A$ of pairwise similarities that gives us an edge-weighted graph $G$. The goal of the clustering problem is to partition the vertices of $G$ into maximally homogeneous groups (clusters). Usually the graph $G$ is an undirected graph, meaning that the affinity matrix $A$ is symmetric.

\image{img/ulearning/clustering}{"Classical" clustering problem.}{0.9}

In the literature we can find different clustering algorithms that are widely used, and each of them manages data in different ways. Some of the clustering algorithms that we are going to test against adversarial noise in Chapter \ref{results} are: K-Means, Spectral and Dominant Sets clustering.

\section{Images as Graphs}
In some applications the data for which we want to obtain a group partition are images. In this case clustering algorithms can be used in order to reconstruct a simplified version of the input image, removing noisy information. To do that, the image is represented as an edge-weighted undirected graph, where vertices correspond to individual pixels and edge-weights reflect the similarity between pairs of vertices. Given an input image with $H \times W$ pixels we construct a similarity matrix $A$ such that the similarity between the pixels $i$ and $j$ is measured by:
$$A(i,j) = \exp\Big(\frac{-||F(i)- F(j)||^2_2}{\sigma^2}\Big)$$
\begin{itemize}
	\item $F(i)$ is the normalized intensity of pixel $i$ (\textit{intensity segmentation}).
	\item $F(i) = [v, vs\sin(h), vs\cos(h)](i)$ where $h,s,v$ are the HSV values of pixel $i$ (\textit{color segmentation}).
	\item $F(i) = [|I*f_1|, \dots, |I*f_k|](i)$ is a vector based on texture information at pixel $i$ (\textit{texture segmentation}).
\end{itemize}
The constant $\sigma$ is introduced to obtain a scaling effect on the affinity:
\begin{itemize}
	\item Small $\sigma$: only nearby points are similar.
	\item Large $\sigma$: distant points tend to be similar.
\end{itemize}
An example of the application of clustering algorithms for image segmentation is provided below:
\image{img/ulearning/fruits.png}{Image of vegetables.}{0.3}
\begin{figure}[H]
	\begin{minipage}[t]{0.5\linewidth}
		\centering
		\includegraphics[width=0.48\textwidth]{img/ulearning/fruits_intensity}
		\caption{Clustering on pixels intensity.}
	\end{minipage}
	\hspace{1.cm}
	\begin{minipage}[t]{0.5\linewidth}
		\centering
		\includegraphics[width=0.48\textwidth]{img/ulearning/fruits_color}
		\caption{Clustering on pixels color.}
	\end{minipage}
\end{figure}
\FloatBarrier
\newpage
\section{K-Means}
K-Means is one of the simplest and most widely used iterative clustering algorithms.
It aims to partition $n$ objects into $K$ maximally cohesive groups. The goal of the K-Means algorithm is to reach the following state: each observation belongs to the cluster with the nearest center. Its implementation can be described shortly in a few lines:
\begin{itemize}
	\item \textbf{Initialization:} Pick $K$ random points as cluster centers (centroids).
	\image{img/ulearning/kmeans1}{Initialization with $K=2$.}{0.3}
	\item \textbf{Alternate:}
	\begin{enumerate}
		\item Assign data points to the closest cluster centroid.
		\item For each cluster $C$ update the corresponding centroid to the average of the points in $C$.
		\begin{figure}[H]
			\begin{minipage}[t]{0.42\linewidth}
				\centering
				\includegraphics[width=0.68\textwidth]{img/ulearning/kmeans2}
				\caption{Iterative step 1.}
			\end{minipage}
			\hspace{2.5cm}
			\begin{minipage}[t]{0.42\linewidth}
				\centering
				\includegraphics[width=0.68\textwidth]{img/ulearning/kmeans3}
				\caption{Iterative step 2.}
			\end{minipage}
		\end{figure}
		\FloatBarrier
	\end{enumerate}
	\item \textbf{Stop:} When no points' assignments change.
	\begin{figure}[H]
		\begin{minipage}[t]{0.42\linewidth}
			\centering
			\includegraphics[width=0.68\textwidth]{img/ulearning/kmeans4}
			\caption{Repeat until convergence.}
		\end{minipage}
		\hspace{2.5cm}
		\begin{minipage}[t]{0.42\linewidth}
			\centering
			\includegraphics[width=0.68\textwidth]{img/ulearning/kmeans5}
			\caption{Final output.}
		\end{minipage}
	\end{figure}
\end{itemize}
\newpage
Formally speaking, we can define the K-Means algorithm over the set of points $X$ in the following way:
\begin{enumerate}
	\begin{footnotesize}
		\item Initialize cluster centroids $\mu_1, \dots, \mu_k$.
		\item Repeat until the assignments remain unchanged (convergence):
	\end{footnotesize}
	\begin{enumerate}[label*=\arabic*.]
		\item $\forall i\in X \quad c^{(i)} = \arg\min_j \vert\vert x^{(i)}-\mu_j\vert\vert^2$
		\item $\forall j\in C \quad \mu_j = \frac{\sum_{i=1}^m 1\{c^{(i)} = j\}x^{(i)}}{\sum_{i=1}^m 1\{c^{(i)} = j\}}$
	\end{enumerate}
\end{enumerate}
\paragraph*{Properties of K-Means.}
K-Means has the following properties:
\begin{itemize}
	\item It is guaranteed to converge in a finite number of steps.
	\item It minimizes an objective function, which represents the compactness of the retrieved $K$ clusters:
	$$\arg \min_C\sum_{i=1}^{K} \sum_{x_j \in C_i} ||x_j - \mu_i||^2$$
	where $\mu_i$ is the centroid of cluster $i$.
	\item It is a polynomial algorithm: $O(Kn)$ for assigning each sample to the closest cluster and $O(n)$ for the update of the cluster centers.
\end{itemize}
It is possible to say that K-Means is a very simple and efficient method but, on the other hand, it is strongly sensitive to the initialization phase. If the initial centroids are not chosen well, the algorithm converges to a poor local minimum of the error function. Another disadvantage of K-Means is that it does not work well in the presence of non-convex cluster shapes.

\section{Eigenvector-based Clustering}~\label{eingenvector-based-clustering}
Eigenvector-based clustering collects different techniques that use properties of eigenvalues and eigenvectors to solve the clustering problem. Let us represent a cluster using a vector $x$ whose $k$-th entry captures the participation of node $k$ in that cluster. If a node does not participate in the cluster, the corresponding entry is zero. We also impose the restriction that $x^Tx = 1$.
The goal of the clustering algorithm is to maximize:
\begin{equation}~\label{cohesiveness}
	\arg\max_x \sum_{i=1}^n \sum_{j=1}^n w_{ij} x_i x_j = x^TAx
\end{equation}
which measures the cluster's cohesiveness, while $x_i$ and $x_j$ measure the centrality of nodes $i$ and $j$ with respect to the cluster and are defined as:
$$x_i \begin{cases} \neq 0 & \text{if } i \in C\\ = 0 & \text{if } i \notin C \end{cases}$$
Coming back to the notion of eigenvalues of a matrix, we can say that $\lambda$ is an eigenvalue of $A$ and $x_\lambda$ is the corresponding eigenvector if:
$$Ax_\lambda = \lambda x_\lambda$$
from which we can derive that:
$$x_\lambda^TAx_\lambda = \lambda x_\lambda^Tx_\lambda = \lambda$$
There are two important theorems that define the nature of the eigenvalues of an $n\times n$ matrix $A$:
\begin{enumerate}
	\item If $A = A^T$ then $A$ is symmetric and has only \textit{real} eigenvalues. This means that we can sort them, from the smallest one to the largest one.
	\item If $A$ is symmetric then $\max_x x^TAx$ corresponds to the largest eigenvalue $\lambda$. Moreover, the corresponding eigenvector $x_\lambda$ is the argument which maximizes the cohesiveness.
\end{enumerate}
Taking advantage of the two theorems, and considering $A$ as the affinity matrix, the clustering problem \ref{cohesiveness} corresponds to an \textbf{eigenvalue problem}, maximized by the eigenvector of $A$ with the largest eigenvalue.\\
\paragraph*{Clustering by Eigenvectors Algorithm\\\\}
We can define the algorithm for extracting clusters from data points using the eigenvector strategy with the following steps:
\begin{enumerate}
	\begin{footnotesize}
		\item Construct the affinity matrix $A$ from the input $G$.
		\item Compute the eigenvalues and eigenvectors of $A$.
		\item Repeat
		\item $\quad$ Take the largest unprocessed eigenvalue and the corresponding eigenvector.
		\item $\quad$ Zero all the components corresponding to samples that have already been clustered.
		\item $\quad$ Threshold the remaining components to detect which elements belong to this cluster.
		\item $\quad$ If all elements have been accounted for, there are sufficient clusters.
		\item Until there are sufficient clusters.
	\end{footnotesize}
\end{enumerate}
\subsection{Clustering as Graph Partitioning}
Let $G=(V,E,w)$ be an undirected weighted graph with $\vert V\vert$ nodes (samples) and $\vert E\vert$ edges. Note that it is undirected when the affinity matrix is symmetric. Given a partition of the vertices into two sets $A$ and $B$, with $B=V\setminus A$, we define $cut(A,B)$ in the following way:\\
$$cut(A,B) = \sum_{i \in A} \sum_{j \in B} w(i,j)$$
\image{img/ulearning/minCut}{Minimum cut problem.}{0.8}
In the MinCut problem, we look for the partitioning that minimizes the cost of crossing from $A$ to $B$, which is the sum of the weights of the edges which cross the cut. The fundamental idea is to consider the clustering problem as a graph partitioning problem. Indeed, the MinCut problem can be considered a good way of solving the clustering problem on graph data. MinCut clustering is advantageous because it is solvable in polynomial time but, on the other hand, it favors highly unbalanced clusters (often with isolated vertices). Indeed, it only measures what happens between the clusters and not what happens within the clusters.
\image{img/ulearning/minCut2}{Minimum cut with unbalanced clusters.\cite{normalized_cut}}{0.45}
\subsection{Normalized Cut}
In order to overcome the problem of unbalanced clusters, a normalized version of the MinCut problem, called \textbf{Normalized Cut}, is used. It is defined by:\\
$$Ncut(A,B) = \underbrace{cut(A,B)}_{\text{Between A and B}}\left( \underbrace{\frac{1}{vol(A)} + \frac{1}{vol(B)}}_{\text{Within A and B}}\right)$$\\
where $vol(A)$ is the volume of the set $A$, given by $vol(A) = \sum_{i \in A}d_i,~A \subseteq V$, and $d_i = \sum_j w_{i,j}$ is the degree of node $i$ (the sum of the weights of its incident edges).\\
The Normalized Cut has the advantage of taking into consideration what happens within clusters: through $vol(A)$ and $vol(B)$ it takes into account what is going on within $A$ and $B$.
\newpage
\subsection{Graph Laplacians}
As \citeauthor{spectral_tutorial} points out, the main tools for spectral clustering are graph Laplacian matrices, defined in spectral graph theory. In this section we are going to define different graph Laplacians and point out their most important properties, since they will later be used for solving the MinCut and Ncut problems.
\paragraph{The Unnormalized Graph Laplacian.}
The \textbf{unnormalized graph Laplacian} matrix is defined as:
$$L = D - W$$
where:
\begin{itemize}
	\item $D$ is a diagonal matrix containing information about the degree of each node in $G$.
	\item $W$ is the affinity matrix of $G$, containing $1$ if two nodes are adjacent and $0$ otherwise. Diagonal elements are all set to $0$.
\end{itemize}
In the following we provide an example of the matrices $D$ and $W$ obtained considering the graph shown in Fig. \ref{fig:graphG}.
\begin{figure}[H]
	\begin{minipage}[t]{0.49\linewidth}
		\centering
		$$ D = \begin{bmatrix}
			2 & 0 & 0 & 0 & 0 & 0 \\
			0 & 4 & 0 & 0 & 0 & 0 \\
			0 & 0 & 4 & 0 & 0 & 0 \\
			0 & 0 & 0 & 1 & 0 & 0 \\
			0 & 0 & 0 & 0 & 3 & 0 \\
			0 & 0 & 0 & 0 & 0 & 2 \\
		\end{bmatrix}$$
		\caption{Degree matrix $D$.}
	\end{minipage}
	\hspace{1cm}
	\begin{minipage}[t]{0.49\linewidth}
		\centering
		$$ W = \begin{bmatrix}
			0 & 1 & 1 & 0 & 0 & 0 \\
			1 & 0 & 1 & 1 & 1 & 0 \\
			1 & 1 & 0 & 0 & 1 & 1 \\
			0 & 1 & 0 & 0 & 0 & 0 \\
			0 & 1 & 1 & 0 & 0 & 1 \\
			0 & 0 & 1 & 0 & 1 & 0 \\
		\end{bmatrix}$$
		\caption{Affinity matrix $W$.}
	\end{minipage}
\end{figure}
The elements of $L$ are given by:
$$ L _ { i , j } = \left\{ \begin{array} { l l } { \operatorname { d } \left( v _ { i } \right) } & { \text { if } i = j } \\ { - 1 } & { \text { if } i \neq j \text { and } v _ { i } \text { is adjacent to } v _ { j } } \\ { 0 } & { \text { otherwise } } \end{array} \right. $$
where $d(v_i)$ is the degree of the vertex $i$.
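As a concrete illustration (a minimal sketch, not taken from \cite{spectral_tutorial}), the following NumPy code builds the unnormalized Laplacian $L = D - W$ of the example graph above and numerically checks some of the properties discussed next: the rows of $L$ sum to zero, the smallest eigenvalue is $0$, and $L$ is positive semi-definite.
\begin{python}[label={lst:laplacian_example}, caption={Building the unnormalized graph Laplacian of the example graph}]
import numpy as np

# Affinity matrix W of the example graph (1 = adjacent, 0 = not adjacent)
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 1, 0],
              [1, 1, 0, 0, 1, 1],
              [0, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 1],
              [0, 0, 1, 0, 1, 0]], dtype=float)

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # unnormalized graph Laplacian

eigenvalues = np.linalg.eigvalsh(L)       # real eigenvalues, sorted in ascending order
print(np.allclose(L.sum(axis=1), 0))      # True: the constant vector lies in the null space
print(np.isclose(eigenvalues[0], 0.0))    # True: the smallest eigenvalue is 0
print((eigenvalues >= -1e-10).all())      # True: L is positive semi-definite
\end{python}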
\imageLabel{img/ulearning/graphLaplacian}{Laplacian matrix L associated to the graph on the left.}{0.85}{graphG}
The properties satisfied by the matrix $L$, as reported in \cite{spectral_tutorial}, are:
\begin{enumerate}
	\item For all vectors $f$ in $\mathbb{R}^n$, we have:
	$$f ^ { \top } L f = \frac { 1 } { 2 } \sum _ { i, j = 1 } ^ { n } w _ { i j } \left( f _ { i } - f _ { j } \right) ^ { 2 }$$
	This is proved by the definition of $d_i$:
	$$\begin{aligned} f ^ { \top } L f & = f ^ { \top } D f - f ^ { \top } W f = \sum _ { i=1 }^n d _ { i } f _ { i } ^ { 2 } - \sum _ { i , j =1 }^n f _ { i } f _ { j } w _ { i j } \\ & = \frac { 1 } { 2 } \left( \sum _ { i=1 }^n \left( \sum _ { j=1 }^n w _ { i j } \right) f _ { i } ^ { 2 } - 2 \sum _ { i, j=1 }^n f _ { i } f _ { j } w _ { i j } + \sum _ { j=1 }^n \left( \sum _ { i=1 }^n w _ { i j } \right) f _ { j } ^ { 2 } \right) \\ & = \frac { 1 } { 2 } \sum _ { i ,j=1 }^n w _ { i j } \left( f _ { i } - f _ { j } \right) ^ { 2 } \end{aligned}$$
	\item $L$ is symmetric (by assumption) and positive semi-definite. The symmetry of $L$ follows directly from the symmetry of $W$ and $D$. The positive semi-definiteness is a direct consequence of the first property, which shows that $f ^ { T } L f \geq 0$.
	\item The smallest eigenvalue of $L$ is 0, and the corresponding eigenvector is the constant one vector $\mathbf{1}$.
	\item $L$ has $n$ non-negative, real-valued eigenvalues $0 = \lambda _ { 1 } \leq \lambda _ { 2 } \leq \ldots \leq \lambda _ { n }$.
\end{enumerate}
A first relation between the spectrum and the clusters is the following:
\begin{itemize}
	\item The multiplicity of the eigenvalue $\lambda_1=0$ corresponds to the number of connected components of the graph.
	\item The eigenspace is spanned by the characteristic functions of these components (so all eigenvectors are piecewise constant).
\end{itemize}
\paragraph{Normalized Graph Laplacians.}
The literature also defines normalized forms of the graph Laplacian. In particular, there exist two definitions that are closely related:
$$\begin{array} { l } { L _ { \mathrm { sym } } = D ^ { - 1 / 2 } L D ^ { - 1 / 2 } = I - D ^ { - 1 / 2 } W D ^ { - 1 / 2 } } \\ { L _ { \mathrm { rw } } = D ^ { - 1 } L = I - D ^ { - 1 } W } \end{array}$$
The first matrix, $L_{sym}$, is a symmetric matrix, while the second one, $L_{rw}$, is a normalized graph Laplacian which is closely connected to a random walk \cite{spectral_tutorial}.
\begin{defn}[Properties of the Laplacian matrices and the normalized ones]{}
	Let $L$ be the Laplacian of a graph $G=(V,E)$.
Then, $L \geq 0$; indeed, $\forall x = (x_1, \dots, x_n)$,
	$$\begin{aligned} x^{T}Lx &= x^{T} \sum_{e \in E} L_{e} x \\ &=\sum_{e \in E} x^T L_{e} x \\ &=\sum_{i, j \in E}\left(x_{i}-x_{j}\right)^{2} \geq 0 \end{aligned}$$
	where $L_e$ denotes the Laplacian of the single edge $e$. For the normalized Laplacian matrix we have instead that:
	$$\forall x \in \mathbb{R}^n \quad x^TL_{\mathrm{sym}}x = \sum_{i,j} \left(\frac{x(i)}{\sqrt{d(i)}} - \frac{x(j)}{\sqrt{d(j)}} \right)^2 \geq 0$$
\end{defn}
\par \bigskip \bigskip \noindent
\subsection{Solving Ncut}~\label{solveNcutTheory}
Any cut $(A,B)$ can be represented by a binary indicator vector $x$:
$$x_i = \begin{cases} +1 \text{ if } i \in A\\ -1 \text{ if } i \in B \end{cases}$$
It can be shown that:
\begin{equation}~\label{solvencut}
	min_x ~ Ncut(x) = min_y \underbrace{\frac{y'(D-W)y}{y'Dy}}_{\text{Rayleigh quotient}}
\end{equation}
subject to the constraint that $y^{\prime} D1 = \sum_{i} y_{i} d_{i} = 0$, where $y_{i} \in \{ 1 , - b \}$: $y$ is an indicator vector with $1$ in the $i$-th position if the $i$-th feature point belongs to $A$, and the negative constant $-b$ otherwise (the relaxation to real values is introduced below).
\begin{thm}[Solving Ncut proof]{}
	$$\lambda_2 = \min_x \frac{x^TL_{sym}x}{x^Tx} = \min_x \frac{x^TD^{-1/2}LD^{-1/2}x}{x^Tx} \qquad \text{(recall } L_{sym} = D^{-1/2}LD^{-1/2}\text{)}$$
	Considering the change of variables obtained by setting $y=D^{-1/2}x$ and $x = D^{1/2}y$:
	$$\lambda_2 = \min_y \frac{y^TLy}{(D^{1/2}y)^T(D^{1/2}y)} = \min_y \frac{y^TLy}{y^TDy}$$
\end{thm}
Issues arise because solving Problem \ref{solvencut} exactly is not computationally feasible, since it is an \textbf{NP-Hard} problem. The huge Ncut time complexity leads us to consider an approximation of it. If we relax the constraint that $y$ must be a discrete-valued vector and allow it to take on real values, then the original problem
$$\min _ { y } \frac { y ^ { \prime } ( D - W ) y } { y ^ { \prime } D y }$$
is equivalent to:
$$\min _ { y } y ^ { \prime } ( D - W ) y \quad \text { subject to } y ^ { \prime } D y = 1$$
This amounts to solving a \textit{generalized} eigenvalue problem, but now the optimal solution is provided by the second smallest eigenvalue, since we want to minimize the cut. Note that we pick the second smallest eigenvalue because, as we have seen, the smallest one is always zero and corresponds to the trivial partitioning $A=V$ and $B=\emptyset$.
$$\underbrace{(D-W)}_{Laplacian}y=\lambda D y$$
We started from an NP-Hard problem and through relaxation we reached a feasible solution. However, we have no guarantee that the relaxed solution is in one-to-one correspondence with the discrete one.
\paragraph*{The effect of relaxation.}
Through the relaxation we lose some precision in the final solution.
\imageb{img/ulearning/relaxation1}{0.8}
Note that the original problem returns binary values $(-1,1)$, indicating the cluster membership. The relaxed version, on the right, returns continuous values, so it can be the case that some points are not clearly assignable (they lie close to the margin between the two clusters). For that reason the relaxed solution is not always in one-to-one correspondence with the solution of the original problem.
\subsection{Random Walk Interpretation}
The Ncut problem can also be formalized in terms of a random walk, as highlighted in \cite{spectral_tutorial}, since we want to find a cut such that a random walk has a low probability of jumping between nodes of different clusters. It can be defined by a Markov chain where each data point is a state, connected to all other states with some probability.
With our affinity $W$ and degree $D$, the stochastic matrix is:
$$P=D^{-1}W$$
which is the row-normalized version of $W$, so each entry $P(i,j)$ is the probability of ``walking'' to state $j$ from state $i$ \cite{randomwalk_spectral}.
\imageb{img/ulearning/randomWalk}{0.20}
The probability of a walk through states $( s _ { 1 } , \dots , s _ { m })$ is given by:
$$P \left( s _ { 1 } , \ldots , s _ { m } \right) = P \left( s _ { 1 } \right) \prod _ { i = 2 } ^ { m } P \left( s _ { i } , s _ { i - 1 } \right)$$
Suppose we divide the states into two groups, and we want to minimize the probability of jumping between the two groups. We can formulate this as an eigenvector problem:
$$P y = \lambda y$$
where the components of the vector $y$ will give the segmentation.\\
We can also point out that:
\begin{itemize}
	\item $P$ is a stochastic matrix.
	\item The largest eigenvalue is 1, and its eigenvector is the all-one vector $\mathbf{1}$. This is not very informative about the segmentation.
	\item The eigenvector associated to the second largest eigenvalue is orthogonal to the first, and its components indicate the strongly connected sets of states.
	\item Meila and Shi (2001) showed that minimizing the probability of jumping between two groups in the Markov chain is equivalent to minimizing Ncut.
\end{itemize}
\begin{thm}[Random Walk Proposition]
	$(\lambda, y)$ is a solution to $Py = \lambda y$ if and only if \footnote{Adapted from Y. Weiss}:
	\begin{itemize}
		\item $1-\lambda$ is an eigenvalue of $(D-W)y = \lambda D y$
		\item $y$ is an eigenvector of $(D-W)y = \lambda Dy$
	\end{itemize}
	\textbf{Proof:}
	$$\begin{array} { l l } { P y = \lambda y } & { \Leftrightarrow \quad - P y = - \lambda y } \\ { } & { \Leftrightarrow \quad y - P y = y - \lambda y } \\ { } & { \Leftrightarrow \quad ( I - P ) y = ( 1 - \lambda ) I y } \\ { } & { \Leftrightarrow \quad \left( D ^ { - 1 } D - D ^ { - 1 } W \right) y = ( 1 - \lambda ) D ^ { - 1 } D y } \\ { } & { \Leftrightarrow \quad D ^ { - 1 } ( D - W ) y = D ^ { - 1 } ( 1 - \lambda ) D y } \\ { } & { \Leftrightarrow \quad ( D - W ) y = ( 1 - \lambda ) D y } \end{array}$$
\end{thm}
The problem is to find a cut $(A,B)$ in a graph $G$ such that a random walk does not have many opportunities to jump between the two clusters.\\
This is equivalent to the Ncut problem due to the following relation:
$$Ncut(A,B) = P(A|B) + P(B|A)$$
\subsection{2-way Ncut clustering algorithm}
In Section \ref{solveNcutTheory} we have seen how to solve the Normalized Cut clustering problem, and here we discuss its implementation for extracting just two clusters:
\begin{enumerate}
	\begin{footnotesize}
		\item Compute the affinity matrix $W$ and the degree matrix $D$. $D$ is diagonal and \\$D_{i,i} = \sum_{j \in V} W_{i,j}$
		\item Solve the generalized eigenvalue problem $(D-W)y = \lambda Dy$
		\item Use the eigenvector associated to the second smallest eigenvalue to bipartition the graph into two parts.
	\end{footnotesize}
\end{enumerate}
Sometimes there is no clear threshold to split on, based on the second eigenvector, since it takes continuous values. How is it possible to choose the splitting point?
\begin{itemize}
	\item Pick a constant value (0 or 0.5).
	\item Pick the median value as the splitting point.
	\item Look for the splitting point that has the minimum Ncut value:
	\begin{enumerate}
		\item Choose $n$ possible splitting points.
		\item Compute the Ncut value.
		\item Pick the minimum.
	\end{enumerate}
\end{itemize}
\subsection{K-way Ncut clustering algorithm}
In case we want to extract more than 2 clusters, we can adopt two possible strategies:
\paragraph*{Approach \#1.}
It is a recursive two-way cut:
\begin{enumerate}
	\begin{footnotesize}
		\item Given a weighted graph $G=(V,E,w)$, summarize the information into the matrices $W$ and $D$.
		\item Solve $(D-W)y = \lambda Dy$ for the eigenvectors with the smallest eigenvalues.
		\item Use the eigenvector with the second smallest eigenvalue to bipartition the graph by finding the splitting point such that Ncut is minimized.
		\item Decide if the current partition should be subdivided by checking the stability of the cut, and make sure Ncut is below a prespecified value.
		\item Recursively repartition the segmented parts if necessary.
	\end{footnotesize}
\end{enumerate}
Note that this approach is computationally wasteful: only the second eigenvector is used, whereas the next few smallest eigenvectors also contain useful partitioning information.
\paragraph*{Approach \#2.}
Using the first $k$ eigenvectors:
\begin{enumerate}
	\begin{footnotesize}
		\item Construct a similarity graph and compute the unnormalized graph Laplacian $L$.
		\item Compute the $k$ smallest \textbf{generalized} eigenvectors $u _ { 1 } , u _ { 2 } , \dots , u _ { k }$ of the generalized eigenproblem $L u = \lambda D u$.
		\item Let $U = \left[ u_1, u_{ 2 }, \dots, u_{ k } \right] \in \mathbb { R } ^ { n \times k }$.
		\item Let $y_i \in \mathbb{ R }^k$ be the vector corresponding to the $i$th row of $U$.
		$$U = \left[ \begin{array} { c c c c } { u _ { 11 } } & { u _ { 12 } } & { \cdots } & { u _ { 1 k } } \\ { u _ { 21 } } & { u _ { 22 } } & { \cdots } & { u _ { 2 k } } \\ { \vdots } & { \vdots } & { \ddots } & { \vdots } \\ { u _ { n 1 } } & { u _ { n 2 } } & { \cdots } & { u _ { n k } } \end{array} \right] = \left[ \begin{array} { c } { y _ { 1 } ^ { T } } \\ { y _ { 2 } ^ { T } } \\ { \vdots } \\ { y _ { n } ^ { T } } \end{array} \right]$$
		\item Thinking of the $y_i$'s as points in $\mathbb{ R }^k$, cluster them with the $k$-means algorithm.
	\end{footnotesize}
\end{enumerate}
\subsection{Spectral Clustering vs $K$-Means}
First of all, let us define the spectral clustering algorithm \cite{spectral_algorithm}. Its goal is to cluster objects that are connected but not necessarily compact or clustered within convex boundaries. The algorithm takes as input the similarity matrix $S \in \mathbb{ R }^{n \times n}$ and the number $k$ of clusters to construct. It returns the $k$ clusters and follows these steps:
\begin{enumerate}
	\begin{footnotesize}
		\item Construct a similarity graph and compute the normalized graph Laplacian $L_{sym}$.
		\item Embed the data points in a low-dimensional space (spectral embedding), in which the clusters are more obvious, by computing the $k$ smallest eigenvectors $v_1, \dots, v_k$ of $L_{sym}$.
		\item Let $V=\left[v_1,\dots, v_k \right] \in \mathbb{ R }^{n \times k}$.
		\item Form the matrix $U \in \mathbb{ R }^{n \times k}$ from $V$ by normalizing the rows to have norm 1, that is:
		$$u_ { i j } = \frac { v _ { i j } } { \left( \sum _ { k } v _ { i k } ^ { 2 } \right) ^ { 1 / 2 } }$$
		\item For $i=1,\dots, n$, let $y _ { i } \in \mathbb { R } ^ { k }$ be the vector corresponding to the $i$th row of $U$.
		\item Cluster the points $y_i$ with $i=1,\dots,n$ with the $k$-means algorithm into clusters $C_1, \dots, C_k$.
	\end{footnotesize}
\end{enumerate}
Applying $k$-means to the Laplacian eigenvectors allows us to find clusters with non-convex boundaries.
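The steps listed above translate almost directly into a few lines of code. The following sketch is only an illustration of the procedure (it is not the implementation used in the experiments): it assumes that the similarity matrix $S$ is already available and uses scikit-learn's \texttt{KMeans} for the final step.
\begin{python}[label={lst:spectral_sketch}, caption={Minimal sketch of normalized spectral clustering}]
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(S, k):
    """Cluster the n objects described by the similarity matrix S into k groups."""
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # normalized graph Laplacian L_sym = I - D^{-1/2} S D^{-1/2}
    L_sym = np.eye(S.shape[0]) - D_inv_sqrt @ S @ D_inv_sqrt

    # eigenvectors associated to the k smallest eigenvalues (eigh sorts them ascending)
    eigvals, eigvecs = np.linalg.eigh(L_sym)
    V = eigvecs[:, :k]

    # normalize the rows of V to unit norm (spectral embedding)
    U = V / np.linalg.norm(V, axis=1, keepdims=True)

    # cluster the embedded points y_i with k-means
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
\end{python}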
\imageb{img/ulearning/spectral1}{0.95}
\imageb{img/ulearning/spectral2}{0.95}
\imageb{img/ulearning/spectral3}{0.95}
One of the possible problems that could appear when using the Spectral Clustering algorithm consists in choosing the best $k$. We want to find a $k$ such that all eigenvalues $\lambda_1, \dots, \lambda_k$ are very small, but $\lambda_{k+1}$ is relatively large. In this way, the chosen $k$ maximizes the eigengap (difference between consecutive eigenvalues) $\delta_k = |\lambda_k - \lambda_{k-1}|$.
\imageb{img/ulearning/eigengap}{0.95}
\newpage
\section{Dominant Sets}
In the previous sections we have seen that data can be represented using weighted graphs, also called similarity graphs, in which data are represented by nodes in the graph and the edges represent the similarity relation between nodes. This representation also allows us to encode very complex structured entities. In the literature some authors argue that a cluster can be seen as a \textbf{maximal clique}\footnote{A \textbf{clique} is a subset of mutually adjacent vertices.\\ A \textbf{maximal clique} is a clique that is not contained in a larger one.} of a graph: indeed, the concept of clique is related to the internal cluster criterion, while maximality responds to the external criterion. However, the standard definition of clique does not consider weighted graphs. For this reason, the notion of dominant set was introduced by \citeauthor{dominantset} as an extension of the maximal clique problem. We are going to see that the notion of dominant set provides a measure of the cohesiveness of a cluster as well as of the participation of each vertex in the different clusters.
\subsection{Cluster in Graph Theory}
Data to be clustered can be coded as an undirected weighted graph with no self-loops: $G=(V, E, \omega)$, where $V=\{1,\dots,n\}$ is the vertex set, $E\subseteq V\times V$ is the edge set and $\omega: E\rightarrow \mathbb{R}^*_+$ is the positive weight function. Vertices represent data points, edges neighborhood relationships and edge-weights similarity relations. $G$ is then represented with an adjacency matrix $A$, such that $a_{ij} = \omega(i,j)$. Since there are no self-loops we have that $\omega(i,i) = 0$ (main diagonal equal to $0$).\\
One of the key problems of clustering is that there is no unique and well-defined definition of a cluster, but in the literature researchers agree that a cluster should satisfy two conditions:
\begin{itemize}
	\item \textbf{High internal homogeneity}, also named \textit{internal criterion}. It means that all the objects inside a cluster should be highly similar (or at low distance) to each other.
	\item \textbf{High external in-homogeneity}, also named \textit{external criterion}. It means that objects coming from different clusters have low similarity (or high distance).
\end{itemize}
The idea behind these criteria is that objects belonging to the same cluster are strongly similar to each other, while objects belonging to different clusters are highly dissimilar. Informally speaking, a cluster is a set of entities which are alike, and entities from different clusters are not alike.

Let $S\subseteq V$ be a nonempty subset of vertices and $i \in S$. The average weighted degree of $i$ with regard to $S$ is defined as:\\
\begin{equation}
	\text{awdeg}_S(i)=\frac{1}{|S|}\sum_{j\in S}a_{ij}
\end{equation}
This quantity represents the average similarity between entity $i$ and the rest of the entities in $S$, in other words how similar $i$ is on average to the objects in $S$.
It can be observed that $\text{awdeg}_{\{i\}}(i) = 0$ $\forall i \in V$, since we have no self-loops.\\ We now introduce a new quantity $\phi$ such that if $j \notin S$: \begin{equation} \phi_S(i, j)=a_{ij}-\text{awdeg}_S(i) \end{equation} Note that $\phi_{\{i\}}(i, j)=a_{ij}$ $\forall i, j\in V$ with $i \neq j$. $\phi_S(i, j)$ measures the relative similarity between $i$ and $j$ with respect to the average similarity between $i$ and its neighbors in $S$. This measures can be either positive or negative. \begin{defn}[\citeauthor{ds1}, Node's weight]{} Let $S\subseteq V$ be a nonempty subset of vertices and $i \in S$. The weight of $i$ with regard to $S$ is: \begin{equation} w_S(i)= \begin{cases} 1 & \text{if } |S| = 1 \\ \sum\limits_{j\in S\setminus \{i\}}\phi_{S\setminus \{i\}}(j, i)w_{S\setminus \{i\}}(j) & \text{otherwise} \end{cases} \end{equation} \end{defn} Further, the total weight of $S$ is defined to be $W(S)=\sum_{i\in S}w_S(i)$. \image{img/ulearning/ws.png}{Weight of i respect to elements in S.}{0.18} Note that $w_{\{i, j\}}(i)=w_{\{i, j\}}(j)=a_{ij}$ $\forall i, j \in V \land i\neq j$. Then, $w_S(i)$ is calculated simply as a function of the weights on the edges of the sub-graph induced by $S$.\\ Intuitively, $w_S(i)$ gives a measure of the similarity between $i$ and $S\setminus \{i\}$ with respect to the overall similarity among the vertices of $S\setminus \{i\}$. In other words, how similar (important) $i$ is with respect to the entities in $S$. An important property of this definition is that it induces a sort of natural ranking among vertices of the graph. \imageLabel{img/ulearning/graph.png}{Similarity graph example.}{0.15}{gex} Considering the graph proposed in Figure \ref{fig:gex}, we can derive a ranking between nodes: $$w_{\{1,2,3\}}(1) < w_{\{1,2,3\}}(2) < w_{\{1,2,3\}}(3)$$ \begin{defn}[\citeauthor{ds1}, Dominant Set]{} A nonempty subset of vertices $S\subset V$ such that $W(T)>0$ for any nonempty $T \subseteq S$, is said to be a \textbf{dominant set} if: \begin{itemize} \item $w_S(i) > 0$ $\forall i \in S \qquad$ (\textit{internal homogeneity}) \item $w_{S \cup \{i\}}(i) < 0$ $\forall i \notin S \qquad$ (\textit{external homogeneity}) \end{itemize} \end{defn} These conditions correspond to cluster properties (\textbf{internal homogeneity} and \textbf{external in-homogeneity}). Informally we can say that the first condition requires that all the nodes in the cluster $S$ are important (high weight, similar). The second one assumes that if we consider a new point in the cluster $S$, the cluster cohesiveness will be lower, meaning that the current cluster is already maximal.\\ By definition, dominant sets are expected to capture compact structures. Moreover, this definition is equivalent to the one of maximal clique problem when applied to unweighted graphs. \image{img/ulearning/dominant_def}{The set \{1,2,3\} is dominant.}{0.2} \subsection{Link to Standard Quadratic Optimization} Clusters are commonly represented as an $n$-dimensional vector expressing the participation of each node to a cluster. Large numbers denote a strong participation, while zero values no participation. In section \ref{eingenvector-based-clustering} we have seen that the goal of clustering algorithm is to maximize the cohesiveness of the retrieved clusters. 
Formally speaking, the goal can be expressed using the following optimization problem:
\begin{equation}\label{SPQ}
\begin{array}{lcl} \text{maximize} & f(x) = x^TAx \\ \text{subject to} & x \in \Delta \end{array}
\end{equation}
where $A$ is a symmetric real-valued matrix with null diagonal and
\begin{equation}
\Delta=\{x\in\mathbb{R}^n:x\geq 0 \land e^\top x = 1\}
\end{equation}
is the standard simplex of $\mathbb{R}^n$. This is a standard quadratic problem (StQP), in which a strict local solution corresponds to a maximally cohesive cluster. A point $x$ is a strict local solution of problem \ref{SPQ} if there exists a neighborhood $U \subset \Delta$ of $x$ such that $f(x) > f(z)$ $\forall z \in U\setminus\{x\}$.
Then we define the support $\sigma(x)$ of $x\in\Delta$ as the index set of the positive components of $x$:
$$\sigma(x) = \{i\in V: x_i > 0\}$$
In other words, $\sigma(x)$ is the set of vertices in $V$ that belong to the extracted cluster.
\begin{defn}[\citeauthor{dominantset}, Characteristic vector]{}~\label{ds_simplex}
A non-empty dominant set $C \subseteq V$ with positive total weight $W(C)$ admits a weighted \textbf{characteristic vector} $x^C\in \Delta$, defined as:
$$ x_i^C= \begin{cases} \frac{w_C(i)}{W(C)} & \text{if } i\in C\\ 0 & \text{otherwise} \end{cases} $$
\end{defn}
The important notion provided by Definition \ref{ds_simplex} is that dominant sets also correspond to points of the standard simplex, as imposed in problem \ref{SPQ}. The advantage is that, empirically, strict local maximizers of the dominant sets procedure work well in extracting clusters.
\subsection{Link to Game Theory}
Game theory is a theoretical framework used for examining and analyzing models of strategic interaction between competing rational actors. The clustering problem, as suggested by \citeauthor{dominantset}, can be formulated in terms of a game, also called the \textit{clustering game}, with the following properties:
\begin{itemize}
\item \textbf{Symmetric game}: the payoff of playing any strategy does not depend on the player but only on the strategy itself.
\item \textbf{Complete knowledge}: players have complete knowledge about the game, i.e.\ they know which strategies can be played and the corresponding payoffs.
\item \textbf{Non-cooperative game}: players take independent decisions about the strategy to play, without forming a priori alliances.
\item Players play only \textbf{pure strategies}, meaning that they do not behave ``rationally'' but take decisions in a pre-programmed pattern.
\end{itemize}
In the clustering game we have two players that want to extract the best cluster structure from the data samples. The pure strategies available to the players are the data points themselves in $V$, and the similarity matrix $A$ is used as the \textit{payoff matrix} for the clustering game. The values $A_{ij}$ and $A_{ji}$ are the revenues obtained by player 1 and player 2, respectively, when they have played the strategies $(i,j) \in V\times V$. Remember that the main diagonal of the similarity matrix is zero, meaning that $A_{ii}=0$. A \textit{mixed strategy} $x=(x_1, \dots, x_n)^T \in \Delta$ is a probability distribution over the set of pure strategies, which models a stochastic playing strategy of a player. If players 1 and 2 play mixed strategies $(x_1, x_2) \in \Delta \times \Delta$, then the expected payoffs for the players are $\mathbf{x_1^TAx_2}$ and $\mathbf{x_2^TAx_1}$, respectively.
The goal of the two players is, of course, to maximize their resulting revenue. During the game the players extract a pair of objects $(i,j)$ and the resulting revenues are assigned according to the payoff matrix $A$. Since we are considering $A$ equal to the similarity matrix, we can say that in order to maximize their revenue the two players would coordinate their strategies so that the extracted samples belong to the same cluster. In other words, only by selecting objects belonging to the same cluster is each player able to maximize his expected payoff.
The desired condition is that the two players reach a \textbf{symmetric Nash equilibrium}, that is, a state in which the two players agree about the cluster membership.
A \textbf{Nash equilibrium} is a mixed-strategy profile $(x_1,x_2)\in \Delta\times \Delta$ such that no player can improve the expected payoff by changing his playing strategy, given that the opponent's strategy is fixed. This concept can be expressed as follows:\\
$$y_1^TAx_2 \leq x_1^TAx_2 \qquad y_2^TAx_1 \leq x_2^TAx_1 \qquad \forall (y_1,y_2) \in \Delta\times \Delta.$$
A Nash equilibrium is \textbf{symmetric} if $x_1 = x_2$, meaning that for a symmetric Nash equilibrium $x \in \Delta$ the two conditions collapse into a single one:\\
$$y^TAx \leq x^TAx$$
The symmetric Nash equilibrium condition satisfies the internal homogeneity criterion required by the dominant set definition. However, it does not include any kind of constraint that guarantees the maximality condition. In order to satisfy this condition it is necessary to look for a different type of Nash equilibrium, known as an \textbf{Evolutionarily Stable Strategy (ESS)}.
\paragraph{Definition.} A symmetric Nash equilibrium $x\in \Delta$ is an ESS if it also satisfies:
$$y^TAx = x^TAx \implies x^TAy > y^TAy \qquad \forall y \in \Delta\setminus\{x\}$$
$$y^TAx = x^TAx \implies x^TAy < x^TAx \qquad \forall y \in \Delta\setminus\{x\}$$
Even if the strategy $y$ provides the same payoff as the strategy $x$, it is better to play $x$, since its payoff against itself is greater than the one provided by $y$. The two strategies $x$ and $y$ represent two Nash equilibria, but only $x$ is an ESS.\\
In conclusion, we can say that the ESSs of the clustering game with affinity matrix $A$ are in \textbf{correspondence} with dominant sets of the same clustering problem instance. Moreover, ESSs are in one-to-one correspondence with (strict) local solutions of the StQP.\\
It is possible to say that ESSs satisfy the main characteristics of a cluster:
\begin{itemize}
\item \textbf{Internal coherency:} High support for all samples within the group.
\item \textbf{External incoherency:} Low support for external samples.
\end{itemize}
\subsection{Extracting Dominant Sets}
One of the major advantages of using dominant sets is that the extraction procedure can be written in a few lines of code; moreover, we can define different clustering approaches:
\begin{itemize}
\item Extracting a single dominant set, done using the replicator dynamics procedure.
\item Partitioning the data points, obtained using the \textit{peel-off} strategy: at each iteration we extract a dominant set and the corresponding vertices are removed from the graph. This is done until all vertices have been clustered (partitioning-based clustering).
\item Extracting overlapping clusters, obtained by enumerating dominant sets.
\end{itemize}
In our applications we are going to deal with the second one, assuming that each entity belongs to a cluster.
This assumption is required since the subject of this thesis is the comparison of three algorithms, and the first two (K-Means and Spectral clustering) are essentially partitioning-based algorithms.\\
The \textbf{Replicator Dynamics} are deterministic game dynamics that have been developed in evolutionary game theory. They consider an ideal scenario whereby individuals are repeatedly drawn at random from a large, ideally infinite, population to play a two-player game. Players are not supposed to behave rationally, but they act according to an inherited behavioral pattern (pure strategy). An evolutionary selection process operates over time on the distribution of behaviors \cite{replicator}.\\
Let $x_i(t)$ be the population share playing pure strategy $i$ at time $t$. The state of the population at time $t$ is: $x(t) = (x_1(t),\dots,x_n(t))\in\Delta$.\\
We define an evolution equation, derived from Darwin's principle of natural selection:
$$\dot{x_i} = x_i~g_i(x)$$
where $g_i$ specifies the rate at which pure strategy $i$ replicates and $\dot{x_i}$ is the growth rate of strategy $i$.
$$\frac{\dot{x_i}}{x_i} \propto \text{payoff of pure strategy }i\text{ - average population payoff}$$
The most general continuous form is given by the following equation:
$$\dot{x_i} = x_i[(Ax)_i - x^TAx]$$
where $(Ax)_i$ is the $i$-th component of the vector $Ax$ and $x^TAx$ is the average payoff for the population. If a strategy performs better than the population average, its share grows.
\begin{thm}[Nachbar, 1990; Taylor and Jonker, 1978]{}
A point $x\in\Delta$ is a Nash equilibrium if and only if $x$ is the limit point of a replicator dynamics trajectory starting from the interior of $\Delta$. Furthermore, if $x\in\Delta$ is an ESS, then it is an asymptotically stable equilibrium point for the replicator dynamics.\\
\end{thm}
If the payoff matrix $A$ is symmetric ($A = A^T$), then the game is said to be doubly symmetric. Thanks to this assumption we can derive some conclusions:
\begin{itemize}
\item \textit{Fundamental Theorem of Natural Selection (Losert and Akin, 1983)} \\ For any doubly symmetric game, the average population payoff $f(x) = x^TAx$ is strictly increasing along any non-constant trajectory of the replicator dynamics, meaning that $\frac{df(x(t))}{dt} \geq 0$ $\forall t \geq 0$, with equality if and only if $x(t)$ is a stationary point.
\item \textit{Characterization of ESSs (Hofbauer and Sigmund, 1988)}\\ For any doubly symmetric game with payoff matrix $A$, the following statements are equivalent:
\begin{itemize}
\item $x\in \Delta^{ESS}$
\item $x \in \Delta$ is a strict local maximizer of $f(x) = x^TAx$ over the standard simplex $\Delta$.
\item $x\in\Delta$ is asymptotically stable in the replicator dynamics.
\end{itemize}
\end{itemize}
A well-known discretization of the replicator dynamics, which assumes non-overlapping generations, is the following (assuming a non-negative $A$):
$$ x_i(t+1) = x_i(t)\frac{(Ax(t))_i}{x(t)^TAx(t)} $$
which inherits most of the dynamical properties of its continuous-time counterpart.
\image{img/ulearning/rep_dynamics}{MATLAB implementation of discrete-time replicator dynamics}{0.5}
The components of the converged vector give us a measure of the participation of the corresponding vertices in the cluster, while the value of the objective function provides a measure of the cohesiveness of the cluster.
\subsection{Dominant Sets Hierarchy}
A useful extension of the dominant sets framework is obtained by introducing a parameter in the optimization problem.
The new formulation is now defined:
\begin{equation}\label{alphaSPQ}
\begin{array}{lcl} \text{maximize} & f_\alpha(x) = x^\prime(A - \alpha I)x\\ \text{subject to} & x \in \Delta \end{array}
\end{equation}
where $\alpha \geq 0$ is a parameter and $I$ is the identity matrix.\\
The parameter $\alpha$ affects the number of clusters found by the algorithm. With a huge value of $\alpha$ each point defines its own cluster, since we require a strong cohesiveness; decreasing the value of $\alpha$, the number of clusters decreases.\\
The objective function $f_{\alpha}$ now has two kinds of solutions:
\begin{itemize}
\item solutions which correspond to dominant sets for the original matrix $A$ ($\alpha=0$);
\item solutions which do not correspond to any dominant set for the original matrix $A$, although they are dominant for the scaled matrix $A+\alpha(ee' - I)$. In other words, this allows us to find subsets of points that are not sufficiently coherent to be dominant with respect to $A$, and hence should be split.
\end{itemize}
The basic idea of the algorithm is to start with a sufficiently large $\alpha$ and adaptively decrease it during the clustering process, following these steps:
\begin{enumerate}
\item Let $\alpha$ be a large positive value (e.g.\ $\alpha > |V|-1$).
\item Find a partition of the data into $\alpha$-clusters.
\item For all the $\alpha$-clusters that are not 0-clusters, recursively repeat step 2 with a decreased $\alpha$.
\end{enumerate}
\newpage
\subsection{Properties}
\begin{itemize}
\item \textbf{Good separation between structure and noise.} In such situations it is often more important to cluster a small subset of the data very well, rather than optimizing a clustering criterion over all the data points, particularly in application scenarios where a large amount of noisy data is encountered.
\item \textbf{Overlapping clustering.} In some cases two distinct clusters may share some points, but partitional approaches impose that each element cannot belong to more than one cluster.
\item Dominant sets can be found by mining \textbf{local solutions}, so it is not necessary to look for global solutions.
\item They deal very well with the presence of noise.
\item Strong connection with theoretical results.
\item No assumptions are made on the structure of the affinity matrix, so the approach is able to work with asymmetric and even negative similarity functions.
\item It does not require a priori knowledge of the number of clusters (since it extracts them sequentially).
\item It leaves clutter elements unassigned (useful, e.g., in figure/ground separation or one-class clustering problems).
\item Limiting the number of iterations of the dynamics, it is possible to detect quasi-clique structures.
\end{itemize}
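To complement the description of the extraction procedure given above, the following is a minimal NumPy sketch of the discrete-time replicator dynamics and of the peel-off partitioning strategy discussed in the previous subsections; the function names, the convergence tolerance and the support threshold are our own illustrative choices, not part of the original formulation.
\begin{verbatim}
import numpy as np

def replicator_dynamics(A, tol=1e-8, max_iter=10000):
    """Discrete-time replicator dynamics on a non-negative payoff matrix A with
    null diagonal. Returns a point of the simplex; its support is the cluster."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)            # start from the barycenter of the simplex
    for _ in range(max_iter):
        Ax = A @ x
        denom = x @ Ax                 # average payoff x^T A x
        if denom <= 0:                 # e.g. isolated vertices: nothing to update
            return x
        x_new = x * Ax / denom         # x_i(t+1) = x_i(t) (A x(t))_i / x(t)^T A x(t)
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

def peel_off_clustering(A, support_eps=1e-5):
    """Partition all vertices by repeatedly extracting a dominant set and
    removing its vertices from the graph (peel-off strategy)."""
    remaining = list(range(A.shape[0]))
    clusters = []
    while remaining:
        x = replicator_dynamics(A[np.ix_(remaining, remaining)])
        members = [remaining[i] for i in np.where(x > support_eps)[0]]
        if not members:                # numerical safeguard: close the last cluster
            members = remaining[:]
        clusters.append(members)
        remaining = [v for v in remaining if v not in members]
    return clusters
\end{verbatim}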
\documentclass[10pt, twoside]{article} % use "amsart" instead of "article" for AMSLaTeX format \usepackage[top=0.5in,bottom=0.5in,left=0.5in,right=0.5in]{geometry} % See geometry.pdf to learn the layout options. There are lots. \geometry{letterpaper} % ... or a4paper or a5paper or ... \geometry{landscape} % Activate for for rotated page geometry \usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent %\usepackage{amssymb} \usepackage{setspace} \usepackage{color} %Extempore code highlighting colors \definecolor{xtLangTypecolor}{RGB}{85,142,40} %Green \definecolor{xtLangColor}{RGB}{46,111,253} %Blue \definecolor{schemeVerbColor}{RGB}{168,24,75} % Red \definecolor{xtLangSchemeVerbColor}{RGB}{159,0,197} % Purple \usepackage{multicol} \usepackage{titlesec} \titlespacing*{\section}{0pt}{0pt}{0pt} \titlespacing*{\subsection}{0pt}{0pt}{0pt} %\titlespacing*{\paragraph}{0pt}{0.5\baselineskip}{1em} \titlespacing*{\paragraph}{0pt}{0pt}{0pt} \usepackage{textcomp} %\leftskip=0pt plus .5fil \rightskip=0pt plus -.5fil \parfillskip=0pt plus .5fil \title{Extempore Reference Card} \author{Neil Tiffin} \date{\today} % Activate to display a given date or no date % Turn off header and footer \pagestyle{empty} \righthyphenmin=64 \lefthyphenmin=64 % \null or \hphantom{.} to have object to fill against. % \hfill or \hspace{ \stretch{1} } to fill space in middle of line % Math Mode, \; thick space, \: medium space, \, thin space, \! negative thin space \begin{document} \newcommand{\xtType}[1]{\texttt\scriptsize{\textcolor{xtLangTypecolor}{ #1}}\normalsize} \newcommand{\xtFunc}[1]{\textcolor{xtLangColor}{#1}} \newcommand{\schemeVerb}[1]{\textcolor{schemeVerbColor}{#1}} \newcommand{\xtSchemeVerb}[1]{\textcolor{xtLangSchemeVerbColor}{#1}} \begin{multicols}{3} \begin{center} \Large{\textbf{Extempore Reference Card}} \end{center} \section*{xtlang Types} \begin{spacing}{1.5} Boolean \hfill \xtType{i1} \\ Boolean pointer \hfill \xtType{i1*} \\ Character \hfill \xtType{i8} \\ Strings {\scriptsize(are null terminated)} \hfill \xtType{i8*} \\ Integer \hfill \xtType{i32} \\ Integer pointer \hfill \xtType{i32*} \\ Long integer (default) \hfill \xtType{i64} \\ Long integer pointer \hfill \xtType{i64*} \\ 32 bit float \hfill \xtType{float} \\ 32 bit float pointer \hfill \xtType{float*} \\ 64 bit double (default) \hfill \xtType{double} \\ 64 bit double pointer \hfill \xtType{double*} \\ Variable a1 declared as pointer to double \hfill a1:\xtType{double*} \\ C type void* \hfill \xtType{i8*} \\ Tuple of 2 i8's \hfill \xtType{\textless i8, i8\textgreater} \\ Pointer to tuple of double and i32 \null \hfill \xtType{\textless double, i32\textgreater *} \\ Array of length and type \hfill \xtType{$\vert$length, type$\vert$} \\ Pointer to an array of 4 double \hfill \xtType{$\vert$4, double$\vert$*} \\ Vector of length and type \hfill \xtType{/length, type/} \\ Pointer to a vector of 4 floats \hfill \xtType{/4,float/*} \\ Closure \hfill \xtType{[return type, arg type,$\:\mathit{. . .}$]}\\ Pointer to closure which takes 2 i32 arguments and returns i64\hfill \xtType{[i64, i32, i32]*} \\ \scriptsize {\raggedright `*' is not a deference operator, it is part of the type label.\par} \normalsize \vspace{-5mm} \end{spacing} \section*{Custom xtlang Types} Define type for LLVM compiler. \\ \null\hfill (\schemeVerb{bind-type} name type) \\ Tell pre-processor to textually substitute type for name before compiling. 
\hfill (\schemeVerb{bind-alias} name type) \\ Define value.\hfill (\schemeVerb{bind-val} name type value) \section*{xtlang Coercion Functions} \begin{multicols}{3} (\xtFunc{i1toi64}~\xtType{i1}) (\xtFunc{i64toi1}~\xtType{i64}) (\xtFunc{ftoi64}~\xtType{float}) (\xtFunc{i64tod}~\xtType{i64}) (\xtFunc{dtoi64}~\xtType{double}) (\xtFunc{i32toptr}~\xtType{i32}) (\xtFunc{ptrtoi64}~\xtType{ptr}) (\xtFunc{dtof}~\xtType{double}) \\ etc. \end{multicols} \section*{Common xtlang \& Scheme} The following functions are available in both Scheme and xtlang and should follow R5RS, except as noted.\\ if $c_{1}$ then $e_{2}$ else $e_{3}$ \hfill(\xtSchemeVerb{if} $c_{1}\:e_{2}\:e_{3}$) \\ Assign $s_1 = e_1$, eval expr \hfill(\xtSchemeVerb{let} ( ($s_1\:e_1$)$\:\mathit{. . .}$) expr) \\ Sequence \hfill(\xtSchemeVerb{begin} \\ Boolean true \hfill\xtSchemeVerb{\#t} \\ Boolean false \hfill\xtSchemeVerb{\#f} \\ Boolean and \hfill(\xtSchemeVerb{and} $n_1\:n_2\:\mathit{. . .}$) \\ Boolean or \hfill(\xtSchemeVerb{or} $n_1\:n_2\:\mathit{. . .}$) \\ Boolean not \hfill(\xtSchemeVerb{not} $o_1$) \\ Anonymous function \hfill(\xtSchemeVerb{lambda} ($a_{1}\:\mathit{...}$) expr)\\ Multiply \hfill(\xtSchemeVerb{*} $n_1\:\mathit{. . .}$) \\ Divide \hfill(\xtSchemeVerb{/} $n_1\:n_2\:\mathit{. . .}$) \\ Add \hfill(\xtSchemeVerb{+} $n_1\:\mathit{. . .}$) \\ Subtract \hfill(\xtSchemeVerb{-} $n_1\:n_2\:\mathit{. . .}$) \\ Equality \hfill(\xtSchemeVerb{=} $n_1\:n_2$) \\ Less Than \hfill(\xtSchemeVerb{\textless} $n_1\:n_2$) \\ Greater Than \hfill(\xtSchemeVerb{\textgreater} $n_1\:n_2$) \\ Not equal \hfill(\xtSchemeVerb{\textless\textgreater} $n_1\:n_2$) \\ Remainder \hfill(\xtSchemeVerb{modulo} $n_1\:n_2$) \\ Is null \hfill(\xtSchemeVerb{null?} $n_1$) \\ Assignment \hfill(\xtSchemeVerb{set!} $s_1\:e_1$) \\ Loop {\scriptsize(from common lisp)}\hfill(\xtSchemeVerb{dotimes} ($s_1\:e_1$) expr)\\ Conditional \hfill(\xtSchemeVerb{cond} ($c_{1}\:e_{2}$) $\mathit{. . .}$ (\xtSchemeVerb{else} $e_{n}$)) \\ \scriptsize {\raggedright $a_x$ argument, $c_x$ boolean expression, $e_x$ general expression, \\ $n_x$ numeric expression, $o_x$ object, $s_x$ symbol\par} \normalsize \section*{xtlang Memory Allocation} Allocate on stack.\hfill(\xtFunc{salloc} size) \\ Allocate on heap.\hfill(\xtFunc{halloc} size) \\ Create memory zone.\hfill(\xtFunc{memzone} size ...) \\ Allocate in zone.\hfill(\xtFunc{zalloc} size) \\ Push zone.\hfill(\xtFunc{push\_zone}) \\ Pop zone.\hfill(\xtFunc{pop\_zone}) \\ long int *a1 = 2; \\ \hphantom{.}\hfill (\xtSchemeVerb{let} ((a1:\xtType{i64*} (\xtFunc{salloc}))) (\xtFunc{pset!} a1 0 2) ...) 
\\ int a1[4]; \hfill(a1:\xtType{i32*} (\xtFunc{zalloc} 4)) \section*{xtlang Pointers \& Aggregrates} Deference pointer \hfill(\xtFunc{pref}~ptr$_1$~offset$_1$) \\ Deference tuple \hfill(\xtFunc{tref}~ptr$_1$~offset$_1$) \\ Deference array \hfill(\xtFunc{aref}~ptr$_1$~offset$_1$) \\ Deference vector \hfill(\xtFunc{vref}~ptr$_1$~offset$_1$) \\ Set pointer \hfill(\xtFunc{pset!}~ptr$_1$~offset$_1$~$p_1$) \\ Set tuple \hfill(\xtFunc{tset!}~ptr$_1$~offset$_1$~$t_1$) \\ Set array \hfill(\xtFunc{aset!}~ptr$_1$~offset$_1$~$a_1$) \\ Set vector \hfill(\xtFunc{vset!}~ptr$_1$~offset$_1$~$v_1$) \\ Fill pointer \hfill(\xtFunc{pfill!}~ptr$_1$~$e_1$,~...~$e_n$) \\ Fill tuple \hfill(\xtFunc{tfill!}~ptr$_1$~$e_1$,~...~$e_n$) \\ Fill array \hfill(\xtFunc{afill!}~ptr$_1$~$e_1$,~...~$e_n$) \\ Fill vector \hfill(\xtFunc{vfill!}~ptr$_1$~$e_1$,~...~$e_n$) \\ Get address pointer \hfill(\xtFunc{pref-ptr}~ptr$_1$~offset$_1$) \\ Get address tuple \hfill(\xtFunc{tref-ptr}~ptr$_1$~offset$_1$) \\ Get address array \hfill(\xtFunc{aref-ptr}~ptr$_1$~offset$_1$) \\ Get address vector \hfill(\xtFunc{vref-ptr}~ptr$_1$~offset$_1$) \\ \scriptsize {\raggedright $a_x$ array. ptr$_x$ i1*, i8*, i16*, i32*, float*, double* pointers. offset$_x$ natural number. $e_x$ expression. $n_x$ float, double, i1, i8, i32, i64 expression. $t_x$ tuple. $v_x$ vector. \par} \normalsize \section*{xtlang Core Functions} Uses boost random() or C rand() [0.0..1.0] \\ \hphantom{.}\hfill\xtType{double}~(\xtFunc{random}) \\ \hphantom{.}\hfill(\xtFunc{begin} $e_1$ $e_2$ ...) \\ \hphantom{.}\hfill\xtType{null} \\ \hphantom{.}\hfill(\xtFunc{now}) \\ void \\ \hphantom{.}\hfill(\xtFunc{lambdas} ?) \\ \hphantom{.}\hfill(\xtFunc{lambdaz} ?) \\ \hphantom{.}\hfill(\xtFunc{lambdah} ?) \\ \hphantom{.}\hfill(\xtFunc{printf}~format:\xtType{i8*}~$e_1$~$e_2$...) \\ \hphantom{.}\hfill(\xtFunc{sprintf}~str:\xtType{i8*}~format:\xtType{i8*}~$e_1$~$e_2$...)\\ Callback\hfill(\xtFunc{callback} time name args ...) \\ Schedule {\scriptsize(same as callback)}\hfill(\xtFunc{schedule} time name args ...) \\ cast $\vert$ bitcast\hfill(\xtFunc{bitcast} ?)\\ convert $\vert$ bitconvert\hfill(\xtFunc{bitconvert} ?) \\ \& $\vert$ bitwise-and\hfill(\xtFunc{bitwise-and}~$n_1$~$n_2$~...) \\ bor $\vert$ bitwise-or\hfill(\xtFunc{bitwise-or}~$n_1$~$n_2$~...) \\ $\wedge$ $\vert$ bitwise-eor\hfill(\xtFunc{bitwise-eor}~$n_1$~$n_2$~...)\\ \textless\textless $\vert$ bitwise-shift-left\hfill(\xtFunc{bitwise-shift-left}~$n_1$~$n_2$) \\ \textgreater\textgreater $\vert$ bitwise-shift-right\hfill(\xtFunc{bitwise-shift-right}~$n_1$~$n_2$)\\ \textasciitilde~$\vert$~bitwise-not\hfill(\xtFunc{bitwise-not}~$n_1$) \\ \scriptsize {\raggedright $e_x$ expression, $n_x$ float, double, i1, i8, i32, i64 expression\par} \normalsize \section*{Extempore Scheme} {\raggedright Originally from TinyScheme v1.35. Implements most of R5RS except macro, which is a common lisp style macro.\par} \section*{Time} Now. \hfill(\xtFunc{now}) \\ Second constant. \hfill*second* \\ Minute constant. \hfill*minute* \\ Hour constant. \hfill*hour* \section*{General Functions} TODO \section*{Music Functions} \hfill (\xtFunc{define-instrument} name note\_c effect\_c ...) 
\\ Note closure \\ \null\hfill[[output,time,channel,freq,volume]*]* \\ Effect closure \\ \null\hfill[output,input,time,channel,data*]* \\ \hphantom{.}\hfill(\xtFunc{play-note} time inst pitch volume duration)\par TODO \section*{Debugging} TODO \section*{xtlang C bindings math.h c99} \begin{multicols}{3} (\xtFunc{cos}~$r_1$) \\ (\xtFunc{tan}~$r_1$) \\ (\xtFunc{sin}~$r_1$) (\xtFunc{cosh}~$r_1$) (\xtFunc{tanh}~$r_1$) (\xtFunc{sinh}~$r_1$) (\xtFunc{acos}~$r_1$) (\xtFunc{asin}~$r_1$) (\xtFunc{atan}~$r_1$) (\xtFunc{atan2}~$r_1$~$r_2$) (\xtFunc{ceil}~$r_1$) (\xtFunc{floor}~$r_1$) (\xtFunc{exp}~$r_1$) (\xtFunc{fmod}~$r_1$~$r_2$) (\xtFunc{pow}~$r_1$~$r_2$) (\xtFunc{log}~$r_1$) (\xtFunc{log2}~$r_1$) (\xtFunc{log10}~$r_1$) (\xtFunc{sqrt}~$r_1$) (\xtFunc{fabs}~$r_1$) %c99 (\xtFunc{acosh}~$r_1$) (\xtFunc{asinh}~$r_1$) (\xtFunc{atanh}~$r_1$) (\xtFunc{cbrt}~$r_1$) (\xtFunc{copysign}~$r_1$~$r_2$) (\xtFunc{erf}~$r_1$) (\xtFunc{erfc}~$r_1$) (\xtFunc{exp2}~$r_1$) (\xtFunc{expm1}~$r_1$) (\xtFunc{fdim}~$r_1$~$r_1$) (\xtFunc{fma}~$r_1$~$r_2$~$r_3$) (\xtFunc{fmax}~$r_1$~$r_1$) (\xtFunc{fmin}~$r_1$~$r_1$) (\xtFunc{hypot}~$r_1$~$r_1$) (\xtFunc{ilogb}~$r_1$) (\xtFunc{lgamma}~$r_1$) \xtType{i64}~(\xtFunc{llrint}~$r_1$) \xtType{i64}~(\xtFunc{lrint}~$r_1$) \xtType{i32}~(\xtFunc{rint}~$r_1$) \xtType{i64}~(\xtFunc{llround}~$r_1$) \xtType{i32}~(\xtFunc{lround}~$r_1$) (\xtFunc{log1p}~$r_1$) \xtType{i32}~(\xtFunc{logb}~$r_1$) (\xtFunc{nan}~\xtType{i8*}) (\xtFunc{nearbyint}~$r_1$) (\xtFunc{nextafter}~$r_1$~$r_2$) (\xtFunc{nexttoward}~$r_1$~$r_2$) (\xtFunc{remainder}~$r_1$~$r_2$) (\xtFunc{remquo}~$r_1$~$r_2$~\xtType{i8*}) (\xtFunc{round}~$r_1$) (\xtFunc{scalbn}~$r_1$,~\xtType{i32}) (\xtFunc{tgamma}~$r_1$) (\xtFunc{trunc}~$r_1$) \\ \end{multicols} \scriptsize {\raggedright $r_x$ double or float. 
Return type same as args, except as noted.\par} \normalsize \section*{xtlang C bindings stdio.h} \begin{multicols}{2} \xtType{void}~(\xtFunc{clearerr}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{ctermid}~\xtType{i8*}) \xtType{i32}~(\xtFunc{fclose}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{fdopen}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{feof}~\xtType{i8*}) \xtType{i32}~(\xtFunc{ferror}~\xtType{i8*}) \xtType{i32}~(\xtFunc{fflush}~\xtType{i8*}) \xtType{i32}~(\xtFunc{fgetc}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{fgets}~\xtType{i8*,i32,i8*}) \xtType{i32}~(\xtFunc{fileno}~\xtType{i8*}) \xtType{void}~(\xtFunc{flockfile}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{fopen}~\xtType{i8*,i8*}) \xtType{i32}~(\xtFunc{fputc}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{fputs}~\xtType{i8*,i8*}) \xtType{i64}~(\xtFunc{fread}~\xtType{i8*,i64,i64,i8*}) \xtType{i8*}~(\xtFunc{freopen}~\xtType{i8*,i8*,i8*}) \xtType{i32}~(\xtFunc{fseek}~\xtType{i8*,i64,i32}) \xtType{i64}~(\xtFunc{ftell}~\xtType{i8*}) \xtType{i32}~(\xtFunc{ftrylockfile}~\xtType{i8*}) \xtType{void}~(\xtFunc{funlockfile}~\xtType{i8*}) \xtType{i64}~(\xtFunc{fwrite}~\xtType{i8*,i64,i64,i8*}) \xtType{i32}~(\xtFunc{getc}~\xtType{i8*}) \xtType{i32}~(\xtFunc{getchar}) \xtType{i32}~(\xtFunc{getc\_unlocked}~\xtType{i8*}) \xtType{i32}~(\xtFunc{getchar\_unlocked}) \xtType{i8*}~(\xtFunc{gets}~\xtType{i8*}) \xtType{i32}~(\xtFunc{getw}~\xtType{i8*}) \xtType{i32}~(\xtFunc{pclose}~\xtType{i8*}) \xtType{void}~(\xtFunc{perror}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{popen}~\xtType{i8*,i8*}) \xtType{i32}~(\xtFunc{putc}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{putchar}~\xtType{i32}) \xtType{i32}~(\xtFunc{putc\_unlocked}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{putchar\_unlocked}~\xtType{i32}) \xtType{i32}~(\xtFunc{puts}~\xtType{i8*}) \xtType{i32}~(\xtFunc{putw}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{remove}~\xtType{i8*}) \xtType{i32}~(\xtFunc{rename}~\xtType{i8*,i8*}) \xtType{void}~(\xtFunc{rewind}~\xtType{i8*}) \xtType{void}~(\xtFunc{setbuf}~\xtType{i8*,i8*}) \xtType{i32}~(\xtFunc{setvbuf}~\xtType{i8*,i8*,i32,i64}) \xtType{i8*}~(\xtFunc{tempnam}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{tmpfile}) \xtType{i8*}~(\xtFunc{tmpnam}~\xtType{i8*}) \xtType{i32}~(\xtFunc{ungetc}~\xtType{i32,i8*}) \xtType{i32}~(\xtFunc{llvm\_printf}$^1$~\xtType{i8*},~...) \xtType{i32}~(\xtFunc{llvm\_sprintf}$^1$~\xtType{i8*,i8*},~...) 
\end{multicols} \scriptsize {\raggedright $^1$~Can't be standard printf because of variable args\par} \normalsize \section*{xtlang C bindings stdlib.h} \begin{multicols}{2} \xtType{i8*}~(\xtFunc{malloc}~\xtType{i64}) \xtType{void}~(\xtFunc{free}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{malloc16}~\xtType{i64}) \xtType{void}~(\xtFunc{free16}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{getenv}~\xtType{i8*}) \xtType{i32}~(\xtFunc{system}~\xtType{i8*}) \\ \end{multicols} \section*{xtlang C bindings string.h} \begin{multicols}{2} \xtType{double}~(\xtFunc{atof}~\xtType{i8*}) \xtType{i32}~(\xtFunc{atoi}~\xtType{i8*}) \\ \xtType{i64}~(\xtFunc{atol}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{memccpy}~\xtType{i8*,i8*,i32,i64}) \xtType{i8*}~(\xtFunc{memchr}~\xtType{i8*,i32,i64}) \xtType{i32}~(\xtFunc{memcmp}~\xtType{i8*,i8*,i64}) \xtType{i8*}~(\xtFunc{memcpy}~\xtType{i8*,i8*,i64}) \xtType{i8*}~(\xtFunc{memmove}~\xtType{i8*,i8*,i64}) \xtType{i8*}~(\xtFunc{memset}~\xtType{i8*,i32,i64}) \xtType{i8*}~(\xtFunc{strcat}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strchr}~\xtType{i8*,i32}) \xtType{i32}~(\xtFunc{strcmp}~\xtType{i8*,i8*}) \xtType{i32}~(\xtFunc{strcoll}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strcpy}~\xtType{i8*,i8*}) \xtType{i64}~(\xtFunc{strcspn}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strdup}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{strerror}~\xtType{i32}) \xtType{i64}~(\xtFunc{strlen}~\xtType{i8*}) \xtType{i8*}~(\xtFunc{strncat}~\xtType{i8*,i8*,i64}) \xtType{i32}~(\xtFunc{strncmp}~\xtType{i8*,i8*,i64}) \xtType{i8*}~(\xtFunc{strncpy}~\xtType{i8*,i8*,i64}) \xtType{i8*}~(\xtFunc{strpbrk}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strrchr}~\xtType{i8*,i32}) \xtType{i64}~(\xtFunc{strspn}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strstr}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strtok}~\xtType{i8*,i8*}) \xtType{i8*}~(\xtFunc{strtok\_r}~\xtType{i8*,i8*,i8**}) \xtType{i64}~(\xtFunc{strxfrm}~\xtType{i8*,i8*,i64}) \\ \end{multicols} \scriptsize \section*{Function Name Conventions} `!' at end of name indicates function is destructive (e.g. mutates the arguments passed to it). \\ `\_c' at end of name indicates an xtlang closure. \\ `-' minus sign separator denotes Scheme name. \\ `\_' underscore separator denotes xtlang name. \\ `*' on both ends of variable denotes global scope. \\ `?' at the end of name denotes function returns boolean. \scriptsize \flushleft \textcolor{xtLangTypecolor}{Color denotes xtlang type.} \\ \textcolor{xtLangColor}{Color denotes xtlang only verb.} \\ \textcolor{schemeVerbColor}{Color denotes scheme only verb.} \\ \textcolor{xtLangSchemeVerbColor}{Color denotes common xtlang and Scheme verb.} \\ \vfill \today~v1.0 \copyright 2013 Neil Tiffin \\ Permission is granted to make and distribute copies of this card provided the copyright notice and this permission notice are preserved on all copies. Send comments and corrections to [email protected] \end{multicols} \end{document}
\chapter{Experiments and Discussion} \label{ch:resultsAndAnalysis} In this chapter we present how data is retrieved from social media, what are the models we use for generating synthetic data and discuss the results we obtain by running the methods presented in \autoref{ch:solving}. \section{Data Collection and Generation}% \label{sec:data_collection_and_generation} We now define techniques for generating synthetic data and present how real-world data is retrieved and preprocessed. \subsection{Synthetic Data}% \label{sub:synthetic_data} Here we propose two possible methods for generating data, the Signed SBM and the Information spread model. Given $\tau \in \mathbb{N}$, both of these methods randomly generate interaction graphs with $\tau$ threads/layers. Depending on the method, they will require more parameters. \subsubsection{Signed SBM}% \label{ssub:signed_sbm} This model is very similar to the Stochastic Block Model (SBM), a model commonly used for generating random graphs having some community structures \cite{Newman2018}. The Signed SBM is based on the following parameters: \begin{itemize} \item $k \in \mathbb{N}$, the number of communities. \item $b_{i} \in [k]$, the group assignment of each vertex $i$. \item $\omega ^{+} _{rs} \in [0, 1]$ and $\omega ^{-} _{rs} \in [0, 1]$, the probabilities of positive and negative edges, respectively, between users in group $r$ and $s$. Vertices have also a probability of not having an edge, which is equal to $1 - \omega ^{-} _{rs} - \omega ^{+} _{rs} $. For this reason it is needed that $\omega ^{+} _{rs} + \omega ^{-} _{rs} \leq 1$. \item $\theta \in [0, 1]$, controlling the reduction of the probability of interacting between \emph{inactive} communities: for the generation of each thread we will distinguish between \emph{active} and \emph{inactive} communities, having different probabilities of interacting. \item $\hat{k} \in \mathbb{N}$, the number of communities active in a thread. \end{itemize} During the generation process, we will sample the edges from a categorical distribution with three parameters which we will denote as \begin{equation*} \Omega = (\Omega^+, \Omega ^-, \Omega ^0). \end{equation*} Here, $\Omega ^+$ and $\Omega ^-$ are the probabilities of adding a positive and negative edges, respectively, while $\Omega ^0$ is the probability of not adding any edge. Note that since this is a probability distribution, we will have $\Omega ^+, \Omega ^-, \Omega ^0 \geq 0$ and $\Omega ^+ + \Omega ^- + \Omega ^0 = 1$. \bigskip Therefore, generating a thread layer T\footnote{In this model we will generate contents uniquely associated to threads.} for an interaction graph involves the following steps: \begin{enumerate} \item Sample uniformly $\hat{k}$ of the $k$ communities. These are the \emph{active} communities in the thread. The remaining communities are \emph{inactive}. \item For each node pair $i, j$ consider their corresponding groups $r$ and $s$ and, if both communities are \emph{active}, draw from \begin{equation*} \Omega = (\omega _{rs} ^{+}, \omega _{rs} ^{-}, 1 - \omega _{rs} ^{+} - \omega _{rs} ^{-}). \end{equation*} Otherwise, if at least one of the two communities is not \emph{active}, the distribution becomes \begin{equation*} \Omega = (\theta \omega _{rs} ^{+}, \theta \omega _{rs} ^{-}, 1 - \theta (\omega _{rs} ^{+} + \omega _{rs} ^{-})). \end{equation*} Then, we possibly add the edge to thread $T$. 
\end{enumerate} \subsubsection{Information Spread Model}% \label{ssub:information_spread_model} Here we describe the Information spread model, which aims at simulating the process of information flowing between different users of a social network. Like in the Signed SBM, each node has a group assignment $b_i \in [k]$ and each pair of groups has probabilities of positive and negative edges ($\omega _{rs}^{+} $ and $\omega _{rs}^{-} $, respectively, with $\omega ^{-} _{rs} + \omega ^{+} _{rs} \leq 1$). Additionally, we have the following new parameters: \begin{itemize} \item $\{\phi_{rs} \}$, the edge probabilities of a standard SBM. A standard SBM is a model for generating undirected and unweighted graphs with community-like structures. It takes as parameters the number of community $k$, a group assignment for each node (we will use $b_i$) and the probability of edge between each pair of communities (exactly $\{\phi_{rs} \}$). Then, for each pair of nodes $v_i$ and $v_j$ belonging to communities $r$ and $s$, respectively, it adds an edge between $v_i$ and $v_j$ with probability $\phi_{rs}$. This model is used for generating a graph $G_f$, which we will call the \emph{friend} graph, representing the friendship relationships between the users. We will refer to neighbors in this graph as \emph{friends}. \item $\beta _a$, the probability that a node is initially activated: we will distinguish between \emph{active} and \emph{inactive} nodes, that will have different probabilities of interacting. \item $\beta _n$, the probability that an \emph{inactive} node is activated from an \emph{active} friend. \end{itemize} \bigskip After generating $G_f$ from an SBM with parameters $\{\phi_{rs} \}$, the generation of each thread of an \emph{interaction graph} goes as follows: \begin{enumerate} \item Initialize all nodes as \emph{inactive}. \item Activate each vertex with probability $\beta_{a} $. \item Active nodes activate their inactive friends with probability $\beta_n$. This step is repeated each time a new user is activated, until the network becomes stable. \item Similarly to the Signed SBM, if two nodes are both active, draw from \begin{equation*} \Omega = (\omega _{rs} ^{+}, \omega _{rs} ^{-}, 1 - \omega _{rs} ^{+} - \omega _{rs} ^{-}) \end{equation*} for adding a positive, negative or no edge. If, instead, at least one of them is not active, draw from \begin{equation*} \Omega = (\theta \omega _{rs} ^{+}, \theta \omega _{rs} ^{-}, 1 - \theta (\omega _{rs} ^{+} + \omega _{rs} ^{-})). \end{equation*} \end{enumerate} \subsection{Collection and Preprocessing}% \label{sub:collection_and_preprocessing} Datasets are built over two social medias: Twitter\footnote{\url{twitter.com}} and Reddit\footnote{\url{www.reddit.com}}; the data collection process, consequently, slightly differs between them. \paragraph{Twitter.}% \label{par:twitter-data} Interaction graphs from Twitter are mainly built starting from the tweets of profiles associated to well-known news sources, like The New York Times or Fox News, that typically post links to their articles: the set of shared URLs are the contents $\mathcal{C} $ of the corresponding interaction graph. % Each time a user tweets one of these URLs (as in % \autoref{fig:twitter-thread}) this % will create another thread related to the same content, and all the % replies it receives will be part of this new thread. More specifically, % consider Each content $C \in \mathcal{C}$ will be represented just by its URL, e.g. 
{\footnotesize \begin{center} \url{https://www.nytimes.com/2021/03/04/us/richard-barnett-pelosi-tantrum.html}. \end{center} }
Then, in order to find all threads related to $C$, we search Twitter for the content URL to obtain the tweets containing it. Each of these tweets will correspond to a different thread. For each thread we then construct the tree of replies through a DFS, recursively fetching the users replying to a comment.
\medskip
In order to validate our methods, we also construct datasets in which users are labeled either as \emph{democrat} or \emph{republican} \footnotemark. This is done by looking at the people a certain user $v_i$ follows: for each account $v_j$ followed by $v_i$, if $v_j$ is a political representative, then we can retrieve from Wikipedia the party to which $v_j$ belongs (see, for example, \autoref{fig:tex/img/wikipedia-elected}). Then, user $v_i$ is assigned a label according to the party of the majority of the users $v_j$ they follow. Twitter data is retrieved with the help of Tweepy \cite{tweepy}, a Python library for accessing the Twitter API, which has been patched for using some features available only in the beta of the new v2 Twitter API.
\footnotetext{We choose these two labels since the news sources we analyze for this purpose are based in the U.S. Also, we think political discussions are a main source of controversial content, and so this is an interesting criterion according to which users can be differentiated.}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{tex/img/wikipedia-elected.png}
\caption[Example Wikipedia entry]{Wikipedia entry associated with Alexandria Ocasio-Cortez, a member of the U.S. House of Representatives.}%
\label{fig:tex/img/wikipedia-elected}
\end{figure}
\paragraph{Reddit.}%
\label{par:reddit}
Differently from Twitter, Reddit focuses on subreddits, which are pages collecting posts of users about a specific topic (e.g.\ r/politics, r/economics, $\dots$). This means that in the datasets built from this social media the set of contents $\mathcal{C}$ is the set of URLs posted on these pages, which, differently from how we fetch data from Twitter, most likely come from different sources. These posts are in turn \emph{crossposted}, i.e.\ reposted on other subreddits. Each of these \emph{crossposts} will correspond to another thread.
We also analyzed a very specific case, that of r/asktrumpsupporters. This subreddit is a ``Q\&A subreddit to understand Trump supporters, their views, and the reasons behind those views. Debates are discouraged'' (from its description). We found it interesting as it provides an explicit labeling of the users who, before commenting in any of these posts, must ``declare'' their side by choosing a \emph{flair}, which is shown next to the username of the person commenting. Three flairs are available: \emph{Trump Supporter}, \emph{Non supporter} and \emph{Undecided}. The PRAW library is used for retrieving Reddit data \cite{praw}.
\paragraph{Edge weight assignment.}%
\label{par:assigning_edge_weights}
Once the thread interactions are retrieved, they are passed to a state-of-the-art sentiment analyzer which labels them. More specifically, the model used is RoBERTa, adapted and retrained for dealing with Twitter data \cite{Barbieri2020}. The model is made available by the Transformers Python library \cite{wolf-etal-2020-transformers}.
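A minimal sketch of how such a model can be queried through the library is shown below; the specific checkpoint name and the class ordering noted in the comments refer to the publicly released Twitter-RoBERTa sentiment model and are our assumptions, not a verbatim excerpt of the actual pipeline.
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: the public Twitter-RoBERTa sentiment model (TweetEval).
name = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "I completely disagree with this article."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Softmax over the three classes; for this checkpoint the order is
# (negative, neutral, positive).
p_neg, p_neu, p_pos = torch.softmax(logits, dim=-1)[0].tolist()
\end{verbatim}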
The model, given a string of text, returns a probability distribution \begin{equation*} (p_{neg}, p_{neu}, p_{pos}) \end{equation*} whose parameters represent the probabilities of negative, neutral and positive sentiment. Note that since this is a probability distribution, we will have $p_{neg}, p_{neu}, p_{pos} \geq 0$ and $p_{neg} + p_{neu} + p_{pos} = 1$. Let \begin{align} \label{eq:} p_n & \coloneqq p_{neg}, & p_p & \coloneqq p_{pos} + p_{neu}. \end{align} If $p_p > p_n $ we assign the edge weight $p_p$, otherwise $-p_n$. % \bigskip % % Finally, complying with the current privacy legislation, all the data related % to the user is pseudo-anonymized (accounts identifier are replaced by random % ones) while no data is publicly available. \paragraph{Initial observations on the datasets.} \label{sub:some_observations_on_the_datasets} Reddit and Twitter are intrinsically different social medias. As mentioned before, Reddit focuses on subreddits, where all the discussions related to a certain topic find their place and most of the users interested in the theme gather. We think that this contributes to create community of users which are more active and more likely to discuss among each other, as they end up looking at the same posts and discussions. Twitter, instead, is a much less "centralized" social media with a lot of \emph{hubs} (users with many followers) that discuss similar topics but have disjoint communities. Think, as an example, of the Twitter accounts of Joe Biden and Donald Trump. We expect that most of the followers of the first one are not followers of the latter, and vice versa, even if both of them discuss U.S. politics. This means that many users, even if they are interested in the same theme (U.S. politics in our example), will rarely interact with each other. The "centralization" of Reddit produces, in our data model, contents that are associated with few threads. We observed that even the most discussed articles on r/politics (a subreddit focusing on U.S.\ politics discussions) are \emph{crossposted} only one to five times (see, for example, \autoref{fig:tex/img/reddit-crossposts}), while on Twitter articles of the New York Times are often shared even 100 times. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{tex/img/reddit-crossposts.png} \caption{The \emph{crossposts} on one of the most discussed article of the day on r/politics.}% \label{fig:tex/img/reddit-crossposts} \end{figure} \subsection{A Study on r/asktrumpsupporters}% \label{sec:the_r_asktrumpsupporters_case} During the research we studied the r/asktrumpsupporters subreddit to understand if it is possible to infer the community\footnotemark of the users by looking at how they react to contents. More specifically, in a highly polarized environment we expect that members in the same community of the author have a positive stance towards the content; conversely, members of the other communities have a negative one. \footnotetext{In this case, we refer to community as the set of users with the same \emph{flair}} \bigskip For this analysis we add to our interaction graph another type of vertices, the \emph{content nodes} which we uniquely associate to a content. More formally, for each content $C \in \mathcal{C} $, we add a vertex $v_C$. In order to add links between users and contents, we consider the sequence of comments leading to a specific user comment and multiply the sign of the associated edges to calculate the sign of the content-user edge. 
For example, consider a user $v_i$ replying positively to a post related to content $C$ and user $v_j$ replying negatively to $v_i$. We will add a positive edge between $v_i$ and $v_C$ and a negative edge between $v_j$ and $v_C$. We then assign to the users linked to a content the same label of the author of the post if the user is connected to the content by a positive edge and the opposite one\footnotemark otherwise. Then, we measure the accuracy of this classification. \footnotetext{We ignore the \emph{Undecided} label} We show in \autoref{fig:tex/out/experimental200/experimental200-accuracy-hist} the histogram of the accuracy of classification of the contents in the dataset. We see that in very few cases it is possible to discriminate users better than by just using the majority label ($79\%$), while a big part of the contents achieve an accuracy between $0.5$ and $0.6$. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{tex/out/experimental200/experimental200-accuracy-hist.pdf} \caption[Distribution of accuracy of classification of r/asktrumpsupporters users for the different contents]{This plot shows how the accuracy of the classification process explained in \autoref{sec:the_r_asktrumpsupporters_case} is distributed for the different contents $C \in \mathcal{C} $. The graph used for the analysis (built on r/asktrumpsupporters) contains $1850$ user vertices and $16470$ edges between them. Furthermore, we introduce $71$ content nodes and $5773$ edges between content and user vertices.}% \label{fig:tex/out/experimental200/experimental200-accuracy-hist} \end{figure} We explain this with the following reasons: \begin{itemize} \item The majority of the posts in the subreddit are open questions, e.g.\ ``What do you think of ...' and, similarly, ``What's your idea on...''. This means that our initial hypothesis that positive and negative stances can be used for inferring the positions of the users is not correct: for this type of posts most of the users will just answer with their opinion, without being either friendly or hostile. \item This community of users is not a representative sample: people attending the subreddit are open to discuss with people of different opinion and for this reason we generally expect less hostility in the comments. \end{itemize} \section{Experiments} The following presented results have been obtained from a Python implementation of the methods described in the previous chapters. The library used for handling and manipulating graphs is graph-tool, which has been chosen because of its efficiency \cite{peixoto_graph-tool_2014}. \subsection{Initial Real-World Data Analysis}% \label{sub:validity_problem_definition} We did an initial analysis of the data to understand basic properties of the real-world datasets we fetched. We can gain some insights about the existence of echo chambers by comparing the distribution of $\eta(C)$ and $\eta(T)$ for different interaction graphs. % Threads with an high fraction of positive edges may correspond to echo % chambers, i.e. users that discuss a topic with no \emph{controversy}. % Consequently, if we find an increase in the relative number of threads with a % low $\eta(T)$ (with respect to content distribution) this may mean that this % effect is already visible. Intuitively, echo chambers may correspond to threads with a high fraction of positive edges. 
Consequently, given a certain $\eta(C)$ distribution for the interaction graph, in presence of Echo Chambers we expect an increase for low values of $\eta(T)$, when compared to the distribution of $\eta(C)$. We report these results for three datasets, a first built over @nytimes, a second over @foxnews and a third one over @bbcnews Twitter accounts\footnotemark. The basic statistics of these graphs are listed in \autoref{tab:basic-statistics}. Histograms, obtained by distributing values in $10$ equal-sized buckets, are shown in \autoref{fig:eta-content-thread}. \footnotetext{As explained above (\autoref{sub:collection_and_preprocessing}), we are referring to the accounts that are used to retrieve the contents of the graph.} By looking at these plots it is evident, as we were expecting, that when moving from contents to threads there is a significant increase in the percentage of threads with a very small $\eta$, meaning that it is possible that contents which globally have a non-negligible amount of negative edges produce also threads that have very few or no negative edges. These are the subgraphs in which we expect to find the echo chambers. This is especially evident in the @nytimes and @bbcnews datasets, while in @foxnews this effect is less visible. \begin{table} \centering \caption{Basic statistics for analyzed datasets. Threads with no replies are excluded from the counts. @nytimes dataset is built from contents between the 13th and 21th of May, @foxnews between the 22nd and 29th of April and @bbcnews between the 26th and 31st of May.} \label{tab:basic-statistics} \begin{tabular}{|c c c c c c|} \toprule Dataset & $|V|$ & $|E|$ & $|\mathcal{C} |$ & Threads & Fraction of neg. edges \\ \midrule @foxnews & 45509 & 82494 & 311 & 1922 & \num{0.5884306737459694} \\ @nytimes & 81318 & 118876 & 492 & 6246 & \num{0.46229684713482955} \\ @bbcnews & 16875 & 26636 & 380 & 1566 & \num{0.4381288481754017} \\ \bottomrule \end{tabular} \end{table} \begin{figure} \begin{center} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/nytimes700/neg-fraction-content-hist.pdf} \caption{$\eta(C)$ of @nytimes} \label{fig:nytimes-content-eta} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/nytimes700/neg-fraction-thread-hist.pdf} \caption{$\eta(T)$ for @nytimes} \label{fig:nytimes-thread-eta} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/foxnews2000/neg-fraction-content-hist.pdf} \caption{$\eta(C)$ of @foxnews} \label{fig:foxnews-content-eta} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/foxnews2000/neg-fraction-thread-hist.pdf} \caption{$\eta(T)$ for @foxnews} \label{fig:foxnews-thread-eta} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/bbcnews500/neg-fraction-content-hist.pdf} \caption{$\eta(C)$ of @bbcnews} \label{fig:CNN-content-eta} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/bbcnews500/neg-fraction-thread-hist.pdf} \caption{$\eta(T)$ for @bbcnews} \label{fig:CNN-thread-eta} \end{subfigure} \caption{$\eta(C)$ and $\eta(T)$ distribution for many datasets.} \label{fig:eta-content-thread} \end{center} \end{figure} \bigskip For verifying the reliability of the definition of \emph{controversial} content, we also looked at the fraction of negative edges for different datasets, 
each associated to contents of the same topic, which we report in \autoref{tab:eta-subreddits}. These results show an intuitive association between the fraction of negative edges in the graph and the topic discussed: graphs dealing with well-known \emph{controversial} contents, like r/politics and r/asktrumpsupporters, are the one producing a higher fraction of negative edges. Also, as expected, they are followed by related topics (r/economics and r/climate) and football, while subreddits in which discussions over technologies and sciences predominate generally have less negative interactions between the users. \begin{table} \centering \caption[Fraction of negative edges in different subreddits]{Fraction of negative edges for datasets built on different subreddits, each for $200$ contents between December 14, 2020 and March 11, 2021.} \label{tab:eta-subreddits} {\small \begin{tabular}{|c p{6cm} p{2cm} |} \toprule Dataset & Description & Fraction of neg. edges \\ \midrule r/cats & Pictures and videos about cats & \num{0.16922540125610608} \\ r/Covid19 & Scientific discussion of the pandemic & \num{0.2981398553220806} \\ r/programming & Computer Programming discussions & \num{0.30264993026499304} \\ r/climate & News about climate and related politics & \num{0.391787072243346} \\ r/Football & News, Rumors, Analysis of \mbox{football} & \num{0.41103067250904934} \\ r/Economics & News and discussion about economics & \num{0.41730200715730514} \\ r/Politics & News and discussion about U.S. politics & \num{0.5112245929821013} \\ r/AskTrumpSupporters & {Q\&A between Trump supporters and non supporters} & \num{0.5329949238578681} \\ \bottomrule \end{tabular} } \end{table} \bigskip Furthermore, for each content $C$ in an interaction graph, we plotted in \autoref{fig:edge-sum-n-interactions} the relationship between its number of interactions and the sum of the weights of its edges. This relationship is closely related to the $\eta(C)$ of a content: let $E(C)$ be the set of edges associated to content $C \in \mathcal{C} $. We have that \begin{align*} \sum^{}_{e_{ij} \in E(C)} w_{ij} & = \sum^{}_{e_{ij} \in E^+(C)} |w_{ij}| - \sum^{}_{e_{ij} \in E^-(C)} |w_{ij}| \\ & = \sum^{}_{e_{ij} \in E(C)} |w_{ij}| - 2 \sum^{}_{e_{ij} \in E^-(C)} |w_{ij}|. \end{align*} Thus, if we take the ratio with the number of interactions and suppose $|w_{ij}| \approx 1$ we obtain \begin{equation} \label{eq:angular-coeff-eta} \frac{\sum^{}_{e_{ij} \in E^(C)} |w_{ij}| - 2 \sum^{}_{e_{ij} \in E^-(C)} |w_{ij}|}{|E(C)|} \approx 1 - 2 \eta(C). 
\end{equation}

\begin{figure}
    \begin{center}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/cats200/edge-sum-n-interactions.pdf}
            \caption{r/cats}
            \label{fig:tex/out/cats200/edge-sum-n-interactions.pdf}
        \end{subfigure}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/covid19200/edge-sum-n-interactions.pdf}
            \caption{r/covid19}
            \label{fig:tex/out/covid19200/edge-sum-n-interactions.pdf}
        \end{subfigure}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/politics200/edge-sum-n-interactions.pdf}
            \caption{r/politics}
            \label{fig:tex/out/politics200/edge-sum-n-interactions.pdf}
        \end{subfigure}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/asktrumpsupporters200/edge-sum-n-interactions.pdf}
            \caption{r/asktrumpsupporters}
            \label{fig:tex/out/asktrumpsupporters200/edge-sum-n-interactions.pdf}
        \end{subfigure}
    \end{center}
    \caption[Sum of edge weights over number of interactions for many datasets]{Plots of the sum of the edge weights over the number of interactions for contents from different datasets/subreddits.}
    \label{fig:edge-sum-n-interactions}
\end{figure}

We can see in the plots that the contents distribute along a pattern which is very similar to a line, with rare or no outliers. Due to \eqref{eq:angular-coeff-eta} this means that $\eta(C)$ is very similar for different contents $C$; the points thus create a line whose angular coefficient is given by \eqref{eq:angular-coeff-eta}. Consequently, as we would intuitively expect, contents related to politics are generally controversial, and most of them, as we can see in \autoref{fig:tex/out/politics200/edge-sum-n-interactions.pdf}, have a high $\eta(C)$. This is even more clearly visible when plotting the histogram of $\eta(C)$ for the contents in the dataset (\autoref{fig:eta-distribution-content}), with most of the contents having an $\eta(C)$ which is very close to the fraction of negative edges in the graph reported in \autoref{tab:eta-subreddits}.

\begin{figure}
    \begin{center}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/cats200/neg-fraction-content-hist.pdf}
            \caption{r/cats}
            \label{fig:tex/out/cats200/neg-fraction-content-hist.pdf}
        \end{subfigure}
        \begin{subfigure}[b]{0.4\textwidth}
            \centering
            \includegraphics[width=\textwidth]{tex/out/asktrumpsupporters200/neg-fraction-content-hist.pdf}
            \caption{r/asktrumpsupporters}
            \label{fig:asktrump-hist-eta}
        \end{subfigure}
    \end{center}
    \caption{$\eta(C)$ distribution for $2$ of the datasets shown in \autoref{fig:edge-sum-n-interactions}.}
    \label{fig:eta-distribution-content}
\end{figure}

\subsection{Experiments on Synthetic Data}%
\label{sub:testing_on_synthetic_data}

To study how the model behaves in controlled situations we define a parametrized model based on the Information Spread model (\autoref{sub:synthetic_data}). We will generate graphs with four communities, i.e.\ $k= 4$. Also, we choose $\beta _{a} = 1$, meaning that all nodes will be active in each thread. This is a simplifying assumption which allows us to have a better grasp of the results. Because of this choice the values $\beta _n = 1$, $\theta = 1$ and
\begin{equation}
    \phi_{rs} =
    \begin{cases}
        1, \; & \text{if } r = s, \\
        0, \; & \text{otherwise }
    \end{cases}
    \quad\quad \text{for all } r,s \; groups
\end{equation}
do not influence the structure of the resulting graph.
We also choose $\omega ^{-} _{rs}$ and $\omega ^{+} _{rs} $ to be dependent on a \emph{noise} variable $x$. More specifically, we choose
\begin{equation}
    \omega_{rs}^{+} =
    \begin{cases}
        1 - x, \; & \text{if } r = s, \\
        \frac{x}{4}, \; & \text{otherwise,}
    \end{cases}
    \quad\quad \text{for all } r,s \; groups
\end{equation}
and
\begin{equation}
    \omega_{rs}^{-} =
    \begin{cases}
        x, \; & \text{if } r = s, \\
        \frac{1 - x}{4}, \; & \text{otherwise,}
    \end{cases}
    \quad\quad \text{for all } r,s \; groups.
\end{equation}
In the absence of noise ($x = 0$) we will generate threads whose communities are positive cliques and all the edges between vertices in different communities (which will be present with probability $1/4$) are negative.

We will compare different techniques for finding echo chambers (which, in this case, we will consider as corresponding to a community). The approach, described in detail in \autoref{alg:clustering_process}, involves calling an algorithm (generally any of the methods presented in \autoref{ch:solving}) returning a set of users $U \subseteq V$ which will be labeled according to the majority of its members (by looking at the ground-truth assignment). Let $E_k$ be the edges of thread $T_k$. We then remove the edges contributing to $\xi(U)$, i.e.\
\begin{equation*}
    \{ e_{ij} \in E_k[U], T_k \in \mathcal{S}_{C}(U), C \in \mathcal{\hat{C}} \}.
\end{equation*}
After repeating this process once for each community in the model, we compare the ground-truth labels and the predictions through the Jaccard coefficient and the Adjusted RAND index. We already introduced the Jaccard coefficient in \autoref{sub:the_o_2_bff_problem}. The Adjusted RAND index is a measure of similarity between different clusterings. It is based on the RAND index, which compares the number of agreeing pairs in the two solutions; the Adjusted RAND index corrects the RAND index ``by chance'', i.e.\ it compares it to the index expected for a random assignment. For more details we refer to \cite{CharuC.Aggarwal2013}.

\begin{algorithm}
    \KwIn{$G = \{G_k = (V,E_k) \}_k \leftarrow $ interaction graph, $\alpha \in [0, 1]$, $\mathcal{L} $ ground truth labels of $V$, $\mathcal{I} $ possible labels}
    \KwOut{Jaccard and Adjusted RAND index}
    \SetAlgoLined
    // Initialize predicted labels $\mathcal{P} $ with $-1$ (no label)\;
    $\mathcal{P}[v] \leftarrow -1$ for all $v \in V$ \;
    \ForEach{ $i \in \mathcal{I} $ }{
        $U \leftarrow $ solve \acrshort{ECP} on $G$ \;
        // Remove edges contributing to $\xi(U)$ \;
        $E \leftarrow E \setminus \{ e_{ij} \in E_k, T_k \in \mathcal{S}_C(U), C \in \mathcal{\hat{C}}\}$ \;
        $l \leftarrow $ majority label of users $U$ in $\mathcal{L} $ \;
        // Do not re-label previously labeled nodes \;
        $U' \leftarrow U \setminus \{ v \in U \; s.t. \; \mathcal{P}[v] \neq -1 \}$ \;
        $\mathcal{P}[v] = l $ for all $v \in U'$ \;
    }
    // Compute Jaccard for each label and take the average \;
    $J[l] \leftarrow Jaccard( \{ v \in V \; s.t. \; \mathcal{P}[v] = l \}, \; \{ v \in V\; s.t.
\; \mathcal{L}[v] = l \}) $ for each $l \in \mathcal{I} $ \; Jaccard score $\leftarrow \sum^{}_{l \in \mathcal{I} } J[l] / |\mathcal{I}| $ \; Adjusted RAND index $\leftarrow $ Adjusted RAND($\mathcal{P} $, $\mathcal{L} $) \; \Return Jaccard score, Adjusted RAND index\; \caption{Clustering process} \label{alg:clustering_process} \end{algorithm} What we expect is that, as the value of $x$ increases, the produced threads will have generally more negative edges inside a community and more positive edges between different communities, making it more difficult for our algorithms to find the set of vertices corresponding to one of the Echo Chambers. \begin{figure} \begin{center} \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/synthetic/noise_adj_rand.pdf} \caption{Adjusted RAND index} \label{fig:tex/out/synthetic_exact/model2_noise_adj_rand.pdf} \end{subfigure} \quad \begin{subfigure}[b]{0.8\textwidth} \centering \includegraphics[width=\textwidth]{tex/out/synthetic/noise_jaccard.pdf} \caption{Jaccard Score} \label{fig:tex/out/synthetic_exact/_noise_jaccard.pdf} \end{subfigure} \end{center} \caption[MIP and rounding algorithm clustering scores on generated graphs]{Clustering scores on generated graphs with 12 threads and four communities, each of six nodes, for different values of the noise variable~$x$. Running times are reported in \autoref{tab:synthetic-times}.} \label{fig:clustering-mip-rounding} \end{figure} \begin{table} \centering \caption[MIP and rounding algorithm clustering running times on generated graphs]{Running times on generated graphs with 12 threads and four communities, each of six nodes, for different values of the noise variable~$x$. The times are expressed in seconds.} \label{tab:synthetic-times} \begin{tabular}{|c|cccccc|} \toprule & \multicolumn{6}{c|}{$x$} \\ & 0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \midrule \acrshort{MIP} & 1762 & 2441 & 7247 & 24543 & 38992 & 41345 \\ Rounding Algorithm & 26 & 30 & 40 & 39 & 41 & 40 \\ \bottomrule \end{tabular} \end{table} We report the scores obtained with the \acrshort{MIP} for the \acrshort{ECP} (\autoref{sub:a_mip_model_for_the_ecp}) and the Rounding algorithm (\autoref{sub:approximation_algorithms}). Due to the use of the \acrshort{MIP} model these experiments have been carried out on small graphs with four communities, each of six nodes. The interaction graph contains 12 threads and we choose $\alpha = 0.2$. We experimentally saw that this choice of parameters produces controversial contents, thus allowing our methods to be applied. Note that since we choose $\alpha = 0.2$, our analysis is partially limited for $x > 0.2$ since we may produce graphs that have a fraction of negative \emph{intra-community} edges higher than $\alpha $, although it is smaller than the fraction of negative \emph{inter-community} edges. Nonetheless, we may be able to reconstruct the communities at least partially. The performances of the two approaches are shown in \autoref{fig:clustering-mip-rounding}. We can see that the MIP model: \begin{itemize} \item Reconstructs the communities perfectly for values of $x \in \{0, 0.1\}$. \item Predicts the labels almost perfectly for $x= 0.2$. \item Reconstructs the communities partially for $x = 0.3$, achieving both a Jaccard coefficient and an Adjusted RAND Index around $0.7$. \item Fails to find the original groups of users for $x \geq 0.4$. 
\end{itemize}

The MIP formulation is generally better at finding the communities since, if the noise $x$ is not too large, the best score is still achieved by selecting mostly nodes in the same community (choosing nodes from other communities will generally add many more negative edges to the subgraph). Conversely, the rounding algorithm is more affected by noise than the MIP. More specifically, the rounding algorithm:
\begin{itemize}
    \item Reconstructs the communities perfectly for $x = 0$.
    \item Partially recognizes the communities for $x= 0.1$, obtaining a Jaccard coefficient and Adjusted RAND Index around $0.8$.
    \item Finds few users belonging to the same community for $x = 0.2$, with scores around $0.6$.
    \item Fails to find the original groups of users for $x \geq 0.3$.
\end{itemize}

We illustrate its limitations with one example. Consider the graph in \autoref{fig:rounding-interaction-graph-example} for $\alpha = 0.1$: the MIP model will find one of the two communities as the optimal solution. Now consider the rounding algorithm (\autoref{sub:approximation_algorithms}): the relaxation of the MIP will assign value $0.66$ to all positive edges, while the negative edges get value $0$. This means that the algorithm will initially iterate over the positive edges, choosing randomly among them (since they have the same value). We illustrate in \autoref{fig:rounding-run-unlucky} one possible iteration: in this case the algorithm will not be able to reconstruct one of the communities exactly, since the considered edge will not allow the heuristic to have a single component of $\hat{G}$ associated with one of the communities. Conversely, in \autoref{fig:rounding-run-lucky} we can see a ``luckier'' iteration in which it is able to find one of the communities.

More generally, we can say that the rounding algorithm is less robust to noise than the MIP, especially if the noise produces an increase in the number of positive \emph{inter-community} edges, as we saw in the example of \autoref{fig:rounding-example}. This is due to the fact that it may need to pick among positive edges with the same value (in the solution of the relaxation) during its execution: if the picked edge connects different communities, this will most likely prevent the algorithm from having the communities as separate components in the dummy graph $\hat{G}$ (\autoref{sub:approximation_algorithms}).
% This is due to the fact that the rounding algorithm performances depends on the
% probability of picking these positive
% \emph{inter-community} edges (the lower the probability is, the higher the
% chances of a correct clustering).
% Consequently, we can expect than if we decrease the number of these positive
% \emph{inter-community} edges with respect to the number of positive
% \emph{intra-community}, the algorithm is more likely to reconstruct correctly
% the Echo Chambers.
Consequently, we could improve the performances of the algorithm by decreasing the probability of picking inter-community edges, or, equivalently, increasing the probability of picking intra-community edges. For example, we could increase the latter probability by raising the number of threads in an interaction graph. We repeated this experiment with different numbers of threads while maintaining the same set of parameters as before. We show the results obtained by the rounding algorithm in \autoref{fig:clustering-threads}: we obtain better clustering performances as the number of threads increases, especially for values of $x \in \{ 0.0, 0.1 \}$.
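\bigskip

As a concrete illustration of the scoring step of \autoref{alg:clustering_process}, the sketch below computes the per-label Jaccard average and the Adjusted RAND index for a small hand-made ground truth and prediction. It is only meant to clarify the two measures: the toy labels, the variable names and the use of scikit-learn are choices made for this example and are not taken from the code used in the experiments.

\begin{verbatim}
# Sketch of the scoring step: per-label Jaccard average and Adjusted RAND
# index.  The toy labels below are illustrative only.
from sklearn.metrics import adjusted_rand_score

def jaccard_per_label(truth, pred, labels):
    # Average over the labels of |truth_l intersection pred_l| /
    # |truth_l union pred_l|.
    scores = []
    for l in labels:
        t = {v for v, lab in truth.items() if lab == l}
        p = {v for v, lab in pred.items() if lab == l}
        union = t | p
        scores.append(len(t & p) / len(union) if union else 1.0)
    return sum(scores) / len(labels)

# Eight users in two communities; the prediction mislabels user 3.
truth = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}
pred  = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1}

print(jaccard_per_label(truth, pred, labels=[0, 1]))   # 0.775
print(adjusted_rand_score([truth[v] for v in sorted(truth)],
                          [pred[v] for v in sorted(pred)]))
\end{verbatim}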
\begin{figure}
    \begin{center}
        \begin{subfigure}{0.3\textwidth}
            \centering
            \tikzfig{tex/tikz/rounding_original}
            \caption{An example of interaction graph $G$}
            \label{fig:rounding-interaction-graph-example}
        \end{subfigure}
        \quad
        \begin{subfigure}{0.3\textwidth}
            \centering
            \tikzfig{tex/tikz/rounding_exec}
            \caption{A possible state of $\hat{G}$ when running the rounding algorithm}
            \label{fig:rounding-run-unlucky}
        \end{subfigure}
        \quad
        \begin{subfigure}{0.3\textwidth}
            \centering
            \tikzfig{tex/tikz/rounding_exec_lucky}
            \caption{Another possible state of $\hat{G}$ when running the rounding algorithm}
            \label{fig:rounding-run-lucky}
        \end{subfigure}
    \end{center}
    \caption[An interaction graph with a single thread and content and two possible iterations of the rounding algorithm]{Possible rounding algorithm executions on an interaction graph with one thread and one content. The two communities are represented by $\{ v_1, v_2, v_3\}$ and $\{ v_4, v_5, v_6\}$, respectively. Negative and positive edges are coloured in red and green, respectively. For $\alpha = 0.1$ its content is controversial and the exact solution returns either one of the two communities. The rounding algorithm may fail to reconstruct the communities if the edge between them ($e_{35}$) is added early in the iterations (\autoref{fig:rounding-run-unlucky}).}
    \label{fig:rounding-example}
\end{figure}

\begin{figure}
    \centering
    \includegraphics[width=0.9\linewidth]{tex/out/synthetic/noise_adj_rand_threads.pdf}
    \caption[Adjusted RAND indices for graphs with different numbers of threads]{Adjusted RAND indices for graphs with four communities, each of six nodes, and different numbers of threads, obtained with the rounding algorithm.}
    \label{fig:clustering-threads}
\end{figure}

\subsection{Detecting Real-World Echo Chambers}%
\label{sub:detecting_real_echo_chambers}

We measured the performances of the rounding algorithm on real-world data by classifying the nodes of labeled datasets, similarly to what has been done with synthetic data. Recall from \autoref{sec:data_collection_and_generation} that for these datasets we have retrieved a label for each user. We will refer to a group of users with the same label as a \emph{community}.

We ran the experiments on the r/asktrumpsupporters and @nytimes datasets. In the first case users label themselves either as Trump Supporters ($19\%$), Non Supporters ($79\%$) or Undecided ($2\%$). This last group of users was ignored in the analysis, i.e.\ these vertices were removed from the graph. In the @nytimes dataset users are labeled either as democrats ($80\%$) or republicans ($20\%$). In this case, in order to decrease the sparsity, we selected the $4$-core.
% Both of them are unbalanced datasets, with r/asktrumpsupporters having a
% and of the users choosing the \emph{Non Supporter} and \emph{Trump
% Supporter} flairs, respectively (the remaining is
% \emph{Undecided}), while of @nytimes users are labeled \emph{democrats}
% and as \emph{republican}.

We cluster the nodes as shown in \autoref{alg:clustering_process} and choose $\alpha $ as the median of the $\eta(C), C \in \mathcal{C} $. We run the rounding algorithm to find the Echo Chambers. By looking at the poor results, which we show in \autoref{tab:scores-datasets-labeled}, it is clear that the algorithm is not able to correctly separate the communities.
We motivate this with the following reasons:
\begin{itemize}
    \item \textbf{Non-validity of the data model.} In trying to classify the nodes with our \acrshort{ECP} solver, we are assuming that the data contains a clear separation of the users, in which one chamber corresponds to a single community. Furthermore, we are assuming that there are only two communities in the datasets we chose, which may also be a limiting assumption, since:
        \begin{itemize}
            \item @nytimes may contain echo chambers related to different topics, as the set of contents does not only take into account U.S. political discussions.
            \item r/asktrumpsupporters may be a non-representative dataset of discussions between polarized communities (we discuss this in more detail in \autoref{sec:the_r_asktrumpsupporters_case}).
        \end{itemize}
    \item \textbf{Complexity of sentiment analysis of social media language.} Social media messages are often not easily classifiable as either friendly or hostile, both because users often use jargon and because messages are sometimes accompanied by pictures and GIFs which are not taken into account by the sentiment analyzer.
        % additional noise is introduced by messages in other languages as well
        % as messages involving medias like pictures and GIF.
    \item \textbf{Limitations of the rounding algorithm.} Since we are using an approximation algorithm we are not solving the \acrshort{ECP} exactly: this may introduce limitations to the solution which is used to cluster the nodes. More specifically, since at each iteration it uses a set of users connected by positive edges as a possible solution (see \autoref{ssub:rounding_algorithm}), it is likely to return a set $U$ with just one connected component.
    \item \textbf{Sparsity of the data}. We can see from \autoref{tab:scores-datasets-labeled} that the two datasets are sparse. This, as we discussed in \autoref{sub:testing_on_synthetic_data}, is an important factor in achieving good performances, especially for the rounding algorithm. More generally, this is a limitation of real-world data: we observed, especially on Twitter, that even increasing the number of contents does not produce denser graphs, as the average degree remains between one and two. Furthermore, analyzing the $k$-core, a denser part of the graph, may affect the results since we expect that the echo chamber effect is especially visible in small and isolated components, maybe a small ``bubble'' of users sharing the same opinion, which may get excluded by the $k$-core selection.
\end{itemize}

\begin{table}
    \centering
    \caption[Classification scores obtained with the rounding algorithm on two labeled datasets]{Classification scores obtained with the rounding algorithm on two labeled datasets. $\alpha $ is chosen as the median $\eta(C)$, $C \in \mathcal{C} $. For the @nytimes dataset we report the statistics related to its $4$-core. $|\{T\}|$ indicates the number of threads.
The contents of @nytimes belong to the period between the 2nd and 8th of May, while the contents of r/asktrumpsupporters are between December 27, 2020 and May 7, 2021.}
    \label{tab:scores-datasets-labeled}
    {\small
    \begin{tabular}{|ccccc p{1.8cm} c|}
        \toprule
        Dataset & $|V|$ & $|E|$ & $|\mathcal{C}| $ & $|\{T\}| $ & Adjusted RAND & Jaccard \\
        \midrule
        r/asktrumpsupporters & 11640 & 83038 & 357 & 357 & \num{0.09453599837921367} & \num{0.01607717041800643} \\
        @nytimes & 1074 & 4921 & 139 & 254 & \num{0.02176064966494696} & \num{0.4200626959247649} \\
        \bottomrule
    \end{tabular}
    }
\end{table}

\section{Further Discussion of the Results}%
\label{sec:discussion}

We focused our experiments on the rounding algorithm. We did this since we observed that, when running the heuristic algorithms on smaller datasets (with fewer than $3000$ nodes), the \emph{time} performances of the rounding algorithm were generally better than those of the peeling algorithm (\autoref{sub:approximation_algorithms}). Also, when compared to the $\beta$-approach, the rounding algorithm is more expressive: we already discussed in \autoref{sub:approximation_algorithms} that the $\beta$-approach is able to find only groups of nodes that are connected.

We report in \autoref{tab:times-approximation} the execution times on some datasets. While the execution \emph{times} of the peeling approach explode on @BBCTech and r/cats, the rounding algorithm and the $\beta $-approach show a more stable trend, with most of the experiments of both of them being completed in less than $6$ seconds.

\begin{table}
    \centering
    \caption[Execution time of the heuristics]{Execution time in seconds of the heuristics. Here, $\alpha$ is chosen to be the median of the $\eta(C)$ for each dataset. The contents of the datasets belong to the period between the 26th of May and the 1st of June.}
    \label{tab:times-approximation}
    \begin{tabular}{c|c|c|c|c|c}
        Dataset & $|V|$ & $|E|$ & Rounding & Peeling & $\beta$ \\
        \hline
        @EMA\_News & 1226 & 1842 & \num{0.9331817626953125} & \num{24.05003571510315} & \num{36.05760359764099} \\
        @bbcsciencenews & 447 & 388 & \num{4.916658878326416} & \num{175.36006140708923} & \num{0.7964000701904297} \\
        % @BBCSport & 2140 & 3100 & & & \\
        @BBCNewsEnts & 220 & 183 & \num{4.617515325546265} & \num{138.2591540813446} & \num{0.4381392002105713} \\
        @BBCTech & 793 & 719 & \num{86.20259594917297} & \num{2798.376838684082} & \num{0.9822568893432617} \\
        r/cats & 2493 & 4028 & \num{5.439620494842529} & \num{140844.752099514} & \num{0.7333328723907471} \\
        % r/covid19 & & & & & \\
    \end{tabular}
\end{table}

% Similarly, due to evident complexity limitations we were able to execute the exact
% \acrshort{MIP} only on very small synthetic datasets, as shown in
% \autoref{sub:synthetic_data}.

\bigskip

We summarize our results for the rounding algorithm as follows. It achieves good performances on data with polarized communities, and it also proves more robust as the available data increases: a larger number of threads, as we discussed in \autoref{sub:testing_on_synthetic_data}, helps the algorithm select \emph{intra-community} edges, which allows it to correctly classify the nodes. Conversely, the rounding algorithm was not able to produce a good clustering of the nodes in real-world data.
\documentclass{article}
\usepackage[margin=1in]{geometry}
\usepackage{hyperref}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz}
\usepackage{subcaption}
\usepackage{amsmath, amsfonts}

\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\diag}{\mathbf{diag}}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\newcommand{\E}{\mathbb{E}}
\newcommand{\bs}{\boldsymbol}

\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newenvironment{proof}{{\bf Proof:}}{\hfill\rule{2mm}{2mm}}

\title{On Farkas' Lemma}
\author{
    Ruilin Li \\
    \href{mailto:[email protected]}{[email protected]} \\
    Georgia Institute of Technology
}
\date{\today}

\begin{document}
\maketitle

\section{Farkas' Lemma}
Farkas' lemma (Farkas' alternative theorem) is a useful tool that can be used to prove strong duality of linear programming and to derive the KKT conditions under the linear independence constraint qualification (LICQ).

\begin{lemma}[Farkas' Lemma]
    Given a vector $\bs{b} \in \mathbb{R}^m$ and a matrix $A \in \mathbb{R}^{m \times n}$, exactly one of the following two cases holds:
    \begin{itemize}
        \item There exists a non-negative vector $\bs{x} \ge \bs{0}$ such that
            \[ A\bs{x} = \bs{b} \]
        \item There exists a vector $\bs{y} \in \mathbb{R}^m$ such that
            \[ A^T\bs{y} \le \bs{0} \mbox{ and } \bs{b}^T\bs{y} > 0 \]
    \end{itemize}
    See Figure~\ref{fig:farka} for a visual illustration.
\end{lemma}

\begin{figure}[h]
    \begin{subfigure}{.5\linewidth}
        \centering
        \begin{tikzpicture}
            \draw [fill=orange, opacity=0.2] (0,0) -- (4,4) -- (5,0);
            \draw [dotted] (0,0) -- (4,4);
            \draw [dotted] (0,0) -- (5,0);
            \draw [->, thick, purple] (0,0) -- (3,3);
            \node [right, purple] at (3,3) {$\bs{a}_1$};
            \draw [->, thick, purple] (0,0) -- (4,0);
            \node [below, purple] at (4,0) {$\bs{a}_2$};
            \draw [->, thick, cyan] (0,0) -- (3.5,2);
            \node [right, cyan] at (3.5,2) {$\bs{b}$};
        \end{tikzpicture}
        \caption{$A\bs{x} = \bs{b}$}\label{fig:case1}
    \end{subfigure}
    ~
    \begin{subfigure}{.5\linewidth}
        \centering
        \begin{tikzpicture}
            \draw [fill=orange, opacity=0.2] (0,0) -- (4,4) -- (5,0);
            \draw [dotted] (0,0) -- (4,4);
            \draw [dotted] (0,0) -- (5,0);
            \draw [->, thick, purple] (0,0) -- (3,3);
            \node [right, purple] at (3,3) {$\bs{a}_1$};
            \draw [->, thick, purple] (0,0) -- (4,0);
            \node [below, purple] at (4,0) {$\bs{a}_2$};
            \draw [->, thick, cyan] (0,0) -- (1,3);
            \node [above, cyan] at (1,3) {$\bs{b}$};
            \draw [dotted] (1,3) -- (2,2);
            \draw [->, thick, cyan] (0,0) -- (2,2);
            \node [below right, cyan] at (2,2) {$\bs{b}^\star$};
        \end{tikzpicture}
        \caption{$A^T\bs{y} \le \bs{0} \mbox{ and } \bs{b}^T\bs{y} > 0$}\label{fig:case2}
    \end{subfigure}
    \caption{Visualization of Farkas' alternatives}\label{fig:farka}
\end{figure}

\begin{proof}
    \begin{itemize}
        \item If there exists $\bs{x} \ge \bs{0}$ such that $A\bs{x}=\bs{b}$, then for any vector $\bs{y}$, we have
            \[ \bs{y}^T\bs{b} = \bs{y}^T A\bs{x}=(A^T \bs{y})^T\bs{x} \]
            and, since $\bs{x} \ge \bs{0}$, it is impossible to find $\bs{y}$ satisfying $A^T \bs{y} \le \bs{0}$ and $\bs{b}^T\bs{y} > 0$ at the same time: $A^T\bs{y} \le \bs{0}$ would give $\bs{y}^T\bs{b} = (A^T \bs{y})^T\bs{x} \le 0$.
        \item If there does not exist $\bs{x} \ge \bs{0}$ such that $A\bs{x} =\bs{b}$, denote the cone formed by the column vectors of $A$ by $C(A)$,
            \[ C(A) = \{ \sum_{i=1}^n x_i \bs{a}_i| x_i \ge 0, i=1,2,\cdots, n\}\]
            Consider the vector
            \[\bs{y} = \bs{b} - \bs{b}^\star\]
            where $\displaystyle \bs{b}^\star = \argmin_{\bs{x} \in C(A)} ||\bs{x} - \bs{b}||^2$.
            From the property of projection onto convex sets (POCS), we have that for any vector $\bs{z} \in C(A)$,
            \[ \langle \bs{z} - \bs{b}^\star, \bs{b} - \bs{b}^\star \rangle \le 0, \quad \langle \bs{b} - \bs{b}^\star, \bs{b} - \bs{z}\rangle \ge 0\]
            and equality holds in the second inequality if and only if $\bs{b} =\bs{b}^\star$. By our assumption, $\bs{b} \notin C(A)$ so $\bs{b} \neq \bs{b}^\star$, hence $\langle \bs{b} - \bs{b}^\star, \bs{b} - \bs{z}\rangle > 0$. Letting $\bs{z} =\bs{0}$, we immediately conclude that $\bs{b}^T\bs{y} > 0$.

            Next we would like to show $\langle \bs{z}, \bs{b} - \bs{b}^\star \rangle \le 0$ for all $\bs{z} \in C(A)$. Since $\langle \bs{z}, \bs{b} - \bs{b}^\star \rangle = \langle \bs{z} - \bs{b}^\star, \bs{b} - \bs{b}^\star \rangle + \langle \bs{b}^\star, \bs{b} - \bs{b}^\star \rangle$ and the first term is non-positive, it suffices to show $\langle \bs{b}^\star, \bs{b} - \bs{b}^\star\rangle =0$. This is due to the fact that
            \[ f(\alpha) = ||\alpha \bs{b}^\star - \bs{b}||^2 \]
            is minimized when $\alpha = 1$ and
            \[f^\prime(1) = \langle \bs{b}^\star-\bs{b}, \bs{b}^\star \rangle = 0\]
            In particular, taking $\bs{z} = \bs{a}_i$ gives $\bs{a}_i^T \bs{y} \le 0$ for every column of $A$, i.e.\ $A^T\bs{y} \le \bs{0}$. Therefore, we have found a vector $\bs{y}$ such that $A^T\bs{y} \le \bs{0}$ and $\bs{y}^T\bs{b} > 0$.
    \end{itemize}
\end{proof}
\end{document}
\section{Contributions} \subsection{Rani Rafid} Rani Rafid designed and implemented the logical architecture of the Indonesian Dot puzzle solver system. He assigned responsibilities to his teammates and managed the project throughout the iteration. He also set deadlines and organized team meetings. \subsection{Gabriel Harris} Gabriel Harris implemented the main (entry) program for the Indonesian Dot puzzle solver system. He contributed to the development of complementary features for the system and theorized creative solutions for past issues. \subsection{Manpreet Tahim} Manpreet Tahim implemented the search algorithms of the Indonesian Dot puzzle solver system. He explored and studied several heuristic functions. He also shared his findings with his teammates, and contributed to the development of the logical architecture.
\chapter{Inner product spaces}

One structure that we often use in $\R^n$, but which is missing from
abstract vector spaces, is the dot product. In
Chapter~\ref{cha:vectors-rn}, we saw how to use dot products to
compute the length of a vector, the angle between two vectors, and to
decide when two vectors are orthogonal. We also used dot products to
define the projection of one vector onto another, and to find
shortest distances between various objects (such as points and
planes, two lines, etc.). In this chapter, we will consider inner
product spaces. An inner product space is essentially an abstract
vector space that has been equipped with an operation that works
``like'' the dot product.
% This file contains the content for a main section \regularsectionformat %% Modify below this line %% \chapter{Discussion} \label{chap:discussion} \section{Comparison of the ACES white point and CIE \texorpdfstring{D\textsubscript{60}}{D60}} \label{sec:comp} Frequently, the white point associated with various ACES encodings is said to be `D\textsubscript{60}' \cite{autodesk,bmforum,acescentralD60}. This shorthand notation has sometimes led to confusion for those familiar with the details of how the chromaticity coordinates of the CIE D series illuminants are calculated \cite{notD60}. The chromaticity coordinates of any CIE D series illuminant can be calculated using the equations found in Section 3 of CIE 15:2004 and reproduced in Equation~\ref{eq:xyeq} \cite{CIE152004}. \begin{floatequ}[!ht] \begin{alignat*}{2} & y_{D} && = -3.000x_{D}^{2}+2.870x_{D}-0.275 \\ & x_{D} && = \begin{dcases} 0.244063+0.09911{\frac {10^{3}}{T}}+2.9678{\frac {10^{6}}{T^{2}}}-4.6070{\frac {10^{9}}{T^{3}}} \qquad & 4,000\ \mathrm {K} \leq T\leq 7,000\ \mathrm {K} \\ 0.237040+0.24748{\frac {10^{3}}{T}}+1.9018{\frac {10^{6}}{T^{2}}}-2.0064{\frac {10^{9}}{T^{3}}} \qquad & 7,000\ \mathrm {K} < T\leq 25,000\ \mathrm {K} \end{dcases} \end{alignat*} \captionsetup{width=.75\textwidth} \caption{Calculation of CIE xy from CCT for CIE Daylight} \label{eq:xyeq} \end{floatequ} The CIE has specified four canonical daylight illuminants (D\textsubscript{50}, D\textsubscript{55}, D\textsubscript{65} and D\textsubscript{75}) \cite{CIE152004}. Contrary to what the names might imply, the correlated color temperature (CCT) values of these four canonical illuminants are not the nominal CCT values of \SI[mode=text]{5000}{\kelvin}, \SI[mode=text]{5500}{\kelvin}, \SI[mode=text]{6500}{\kelvin}, and \SI[mode=text]{7500}{\kelvin}. For instance, CIE D\textsubscript{65} does not have a CCT of \SI[mode=text]{6500}{\kelvin} but rather a CCT temperature of approximately \SI[mode=text]{6504}{\kelvin} \cite{wyszecki1982color}. The exact CCT values differ from the nominal CCT values due to a 1968 revision to $c_2$, the second radiation constant in Planck's blackbody radiation formula \cite{Durieux1970}. When the value of $c_2$ was changed from 0.014380 to 0.014388, it altered the CIE xy location of the Planckian locus for a blackbody. This small change to the Planckian locus' position relative to the chromaticity coordinates of the established CIE daylight locus had the effect of changing the correlated color temperature of the CIE D series illuminants ever so slightly. The precise CCT values for the established canonical CIE D series illuminants can be determined by applying the equation in Equation~\ref{eq:ccteq} to the nominal CCT values implied by the illuminant name. The exact CCT values of the canonical daylight illuminants are not whole numbers after the correction factor is applied, but it is common to round their values to the nearest Kelvin. The CCT values of the CIE canonical daylight illuminants before the 1968 change to $c_2$, after the 1968 change, and rounded to the nearest Kelvin can be found in Table~\ref{tab:d_cct}. 
\begin{floatequ}[!ht] \begin{equation} CCT_{new} = CCT\times \frac{1.4388}{1.4380} \end{equation} \captionsetup{width=.75\textwidth} \caption{Conversion of nominal pre-1968 CCT to post-1968 CCT} \label{eq:ccteq} \end{floatequ} \begin{table}[!ht] \centering \begin{tabularx}{.97\linewidth}{|Y|Y|Y|Y|} \hline \textbf{CIE D} & \textbf{CCT} & \textbf{CCT current} & \textbf{CCT current} \\ [-1.3ex] \textbf{Illuminant} & \textbf{before 1968} & \footnotesize{\textbf{(round to 3 decimal places)}} & \footnotesize{\textbf{(round to 0 decimal places)}} \\ \hline D\textsubscript{50} & \SI[mode=text]{5000}{\kelvin} & \SI[mode=text]{5002.782}{\kelvin} & \SI[mode=text]{5003}{\kelvin} \\ \hline D\textsubscript{55} & \SI[mode=text]{5500}{\kelvin} & \SI[mode=text]{5503.060}{\kelvin} & \SI[mode=text]{5503}{\kelvin} \\ \hline D\textsubscript{65} & \SI[mode=text]{6500}{\kelvin} & \SI[mode=text]{6503.616}{\kelvin} & \SI[mode=text]{6504}{\kelvin} \\ \hline D\textsubscript{75} & \SI[mode=text]{7500}{\kelvin} & \SI[mode=text]{7504.172}{\kelvin} & \SI[mode=text]{7504}{\kelvin} \\ \hline \end{tabularx} \captionsetup{width=.75\textwidth} \caption{CCT of canonical CIE daylight illuminants \cite{tableValsPython}} \label{tab:d_cct} \end{table} D\textsubscript{60} is not one of the four CIE canonical daylight illuminants so the exact CCT of such a daylight illuminant could be interpreted to be either approximately \SI[mode=text]{6003}{\kelvin} ($6000 \times \frac{1.4388}{1.4380}$) or \SI[mode=text]{6000}{\kelvin}. Regardless, the ACES white point chromaticity coordinates derived using the method specified in Section~\ref{chap:dervation} differs from both the chromaticity coordinates of CIE daylight with a CCT of \SI[mode=text]{6003}{\kelvin} and CIE daylight with a CCT of \SI[mode=text]{6000}{\kelvin}. The chromaticity coordinates of each, rounded to 5 decimal places, can be found in Table~\ref{tab:ciexy}. As illustrated in Figure~\ref{fig:cieuv}, the chromaticity coordinates of the ACES white point do not fall on the daylight locus nor do they match those of any CIE daylight spectral power distribution. The positions of the chromaticity coordinates in CIE Uniform Color Space (u$^\prime$v$^\prime$) and the differences from the ACES chromaticity coordinates in $\Delta$u$^\prime$v$^\prime$ can be found in Table~\ref{tab:cieuv}. 
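The values in these tables can be checked with a few lines of code. The sketch below is purely illustrative (it is not the Academy's reference implementation): it evaluates the CIE daylight locus polynomial of Equation~\ref{eq:xyeq} at a given CCT, converts CIE xy to CIE u$^\prime$v$^\prime$ using the standard CIE 1976 UCS transform, and computes the $\Delta$u$^\prime$v$^\prime$ from the ACES white point; it also includes McCamy's cubic CCT approximation \cite{mccamy1992correlated}, one of the four estimation methods discussed below. Up to rounding, it reproduces the corresponding entries of Table~\ref{tab:ciexy}, Table~\ref{tab:cieuv} and Table~\ref{tab:aceswpcct}.

\begin{verbatim}
# Illustrative sketch (Python), not the Academy's reference code.
def daylight_xy(cct):
    # CIE 15:2004 daylight locus polynomial (defined for 4000 K - 25000 K)
    t = 1.0e3 / cct
    if 4000.0 <= cct <= 7000.0:
        x = 0.244063 + 0.09911 * t + 2.9678 * t**2 - 4.6070 * t**3
    else:
        x = 0.237040 + 0.24748 * t + 1.9018 * t**2 - 2.0064 * t**3
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

def xy_to_upvp(x, y):
    # CIE 1976 UCS chromaticity coordinates
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def delta_upvp(c1, c2):
    (u1, v1), (u2, v2) = xy_to_upvp(*c1), xy_to_upvp(*c2)
    return ((u1 - u2) ** 2 + (v1 - v2) ** 2) ** 0.5

def mccamy_cct(x, y):
    # McCamy (1992) cubic approximation, published constants
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

aces = (0.32168, 0.33767)
for cct in (6000.0, 6003.0):
    xy = daylight_xy(cct)
    print(cct, xy, delta_upvp(aces, xy))
print(mccamy_cct(*aces))   # about 6000.4 K
\end{verbatim}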
\begin{table}[!ht] \centering \begin{tabularx}{.75\linewidth}{|l|Y|Y|} \hline & \textbf{CIE $\boldsymbol{x}$} & \textbf{CIE $\boldsymbol{y}$} \\ \hline ACES White Point & 0.32168 & 0.33767 \\ \hline CIE Daylight 6000K & 0.32169 & 0.33780 \\ \hline CIE Daylight 6003K & 0.32163 & 0.33774 \\ \hline \end{tabularx} \captionsetup{width=.75\textwidth} \caption{CIE xy chromaticity coordinates rounded to 5 decimal places \cite{tableValsPython}} \label{tab:ciexy} \end{table} \begin{table}[!ht] \centering \begin{tabularx}{.75\linewidth}{|l|Y|Y|Y|} \hline & \textbf{CIE $\boldsymbol{u^\prime}$} & \textbf{CIE $\boldsymbol{v^\prime}$} & $\boldsymbol{\Delta u^\prime v^\prime}$ \\ \hline ACES White Point & 0.20078 & 0.47421 & 0\\ \hline CIE Daylight 6000K & 0.20074 & 0.47427 & 0.00008 \\ \hline CIE Daylight 6003K & 0.20072 & 0.47423 & 0.00007 \\ \hline \end{tabularx} \captionsetup{width=.75\textwidth} \caption{CIE u$^\prime$v$^\prime$ chromaticity coordinates and $\Delta$u$^\prime$v$^\prime$ from the ACES white point rounded to 5 decimal places \cite{tableValsPython}} \label{tab:cieuv} \end{table} \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{cieuv.png} \caption{CIE UCS diagram with chromaticity coordinates} \label{fig:cieuv} \end{figure} Although the ACES white point chromaticity is not on either the Planckian locus or the daylight locus, the CCT of its chromaticity can still be estimated. There are a number of methods for estimating the CCT of any particular set of chromaticity coordinates \cite{robertson1968computation,mccamy1992correlated,hernandez1999calculating,Ohno2014}. The results of four popular methods can be found in Table~\ref{tab:aceswpcct}. Each of the methods estimates the CCT of the ACES white point to be very close to \SI[mode=text]{6000}{\kelvin}. \begin{table}[!ht] \centering \begin{tabularx}{.9\linewidth}{|Y|Y|} \hline \textbf{CCT Estimation Method} & \textbf{ACES white point CCT} \\ \hline Robertson & \SI[mode=text]{5998.98}{\kelvin} \\ \hline Hernandez-Andres & \SI[mode=text]{5997.26}{\kelvin} \\ \hline Ohno & \SI[mode=text]{6000.04}{\kelvin} \\ \hline McCamy & \SI[mode=text]{6000.41}{\kelvin} \\ \hline \end{tabularx} \captionsetup{width=.9\textwidth} \caption{Estimation of the CCT of the ACES white point rounded to 2 decimal places \cite{tableValsPython}} \label{tab:aceswpcct} \end{table} \section{Reasons for the ``\texorpdfstring{D\textsubscript{60}}{D60}-like'' white point} \label{sec:reasonForD60} The ACES white point was first specified by the Academy's ACES Project Committee in 2008 in Academy Specification S-2008-001. The details in S-2008-001 were later standardized in SMPTE ST 2065-1:2012. Prior to the release of the Academy specification the Project Committee debated various aspects of the ACES2065-1 encoding, including the exact white point, for many months. The choice of the ``D\textsubscript{60}-like'' white point was influenced heavily by discussions centered around viewer adaptation, dark surround viewing conditions, ``cinematic look'', and preference. In the end, the Committee decided to go with a white point that was close to that of a daylight illuminant but also familiar to those with a film heritage. The white point would later be adopted for use in other encodings used in the ACES system. It is important to note that the ACES white point does not dictate the chromaticity of the reproduction neutral axis. 
Using various techniques beyond the scope of this document, the chromaticity of equal red, green and blue values (ACES2065-1 \rgbequal) may match the ACES white point, the display calibration white point, or any other white point preferred for technical or aesthetic reasons.

The Committee felt that a white point with a chromaticity similar to that of daylight was appropriate for ACES2065-1. However, the exact CCT of the daylight was in question. Some felt D\textsubscript{55} was a reasonable choice given its historical use as the design illuminant for daylight color negative films. Others felt D\textsubscript{65} would be a good choice given its use in television and computer graphics as a display calibration white point. Because the exact white point chromaticity would not prohibit users from achieving any reproduction white point, the Committee ultimately decided to use the less common CCT of \SI[mode=text]{6000}{\kelvin}. This choice was based on an experiment to determine the reproduction chromaticity of projected color print film, the relative location of the white point compared to other white points commonly used in digital systems, and the general belief that imagery reproduced with the white point felt aesthetically ``cinematic''.

The projected color print film experiment involved simulating the exposure of a spectrally non-selective (neutral) gray scale onto color negative film, printing that negative onto a color print film, then projecting the print film onto a motion picture screen with a xenon-based film projector and measuring the colorimetry off the screen. The experiment found that the CIE xy chromaticity coordinates of a projected LAD patch \cite{pytlak1976simplified,kodakLad} through a film system were approximately $x=0.32170$, $y=0.33568$. Figure~\ref{fig:filmprintthrough} shows a plot of the CIE $u^\prime v^\prime$ chromaticity coordinates of a scene neutral as reproduced by a film system compared to the CIE daylight locus and the ACES white point. The chromaticity of the film system LAD reproduction was determined to be closest to CIE daylight with a CCT of \SI[mode=text]{6000}{\kelvin} when the differences were calculated in CIE $u^\prime v^\prime$. The CIE $u^\prime v^\prime$ differences between CIE daylight at various CCTs and the LAD patch chromaticity are summarized in Table~\ref{tab:lad}.
\begin{figure}[!ht] \centering \includegraphics[width=0.85\textwidth]{images/PrintThroughChromaticities.png} \caption{Film system print-through color reproduction of original scene neutral scale} \label{fig:filmprintthrough} \end{figure} \begin{table}[!ht] \centering \begin{tabularx}{.75\linewidth}{|Y|Y|} \hline \textbf{Daylight CCT} & $\boldsymbol{\Delta u^\prime v^\prime}$ \textbf{from LAD chromaticity}\\ \hline \SI[mode=text]{5500}{\kelvin} & 0.008183 \\ \hline \SI[mode=text]{5600}{\kelvin} & 0.006619 \\ \hline \SI[mode=text]{5700}{\kelvin} & 0.005112 \\ \hline \SI[mode=text]{5800}{\kelvin} & 0.003676 \\ \hline \SI[mode=text]{5900}{\kelvin} & 0.002354 \\ \hline \SI[mode=text]{6000}{\kelvin} & 0.001360 \\ \hline \SI[mode=text]{6100}{\kelvin} & 0.001448 \\ \hline \SI[mode=text]{6200}{\kelvin} & 0.002442 \\ \hline \SI[mode=text]{6300}{\kelvin} & 0.003627 \\ \hline \SI[mode=text]{6400}{\kelvin} & 0.004836 \\ \hline \SI[mode=text]{6500}{\kelvin} & 0.006035 \\ \hline \end{tabularx} \captionsetup{width=.75\textwidth} \caption{CIE $\Delta$u$^\prime$v$^\prime$ difference between projected LAD patch and CIE Daylight CCT chromaticity coordinates round to 6 decimal places \cite{tableValsPython}} \label{tab:lad} \end{table} \section{Reasons why the ACES white point doesn't match the CIE \texorpdfstring{D\textsubscript{60}}{D60} chromaticity coordinates} \label{sec:reasonsForDifference} As discussed in Section~\ref{sec:reasonForD60}, the ACES white point was chosen to be very close to that of CIE Daylight with a CCT of \SI[mode=text]{6000}{\kelvin}. This raises the question why the CIE chromaticity coordinates of $x=0.32169$ $y=0.33780$ were not used. The reasoning is somewhat precautionary; at the time, the exact chromaticity coordinates for the ACES white point were being debated, the ACES Project Committee was concerned about the implications the choice of any particular set of chromaticity coordinates could suggest. Those new to ACES can often misinterpret the specification of a set of ACES encoding white point chromaticity coordinates as a requirement that the final reproduction neutral axis chromaticity is limited to only that white point chromaticity. However, as pointed out in Section~\ref{sec:reasonForD60}, the ACES encoding white point does not dictate the chromaticity of the reproduction neutral axis, and regardless of the ACES white point chromaticity, the reproduction neutral axis may match the ACES white point, the display calibration white point, or any other white point preferred for technical or aesthetic reasons. The ACES white point chromaticity coordinates serve to aid in the understanding and, if desired, conversion of the colorimetry of ACES encoded images to any other encoding including those with a different white point. Just as the implication of the ACES encoding white point on reproduction can be misunderstood, the ACES Project Committee was also concerned that the ACES encoding white point might have unintended implications for image creators. Specifically, the Committee was concerned that the choice of a set of chromaticity coordinates that corresponded to a source with a defined spectral power distribution might be misunderstood to suggest that only that source could be used to illuminate the scene. 
For example, the Committee was concerned that, if the ACES white point chromaticity were chosen to match that of CIE Daylight with a CCT of \SI[mode=text]{6000}{\kelvin}, then \textit{only} scenes photographed under CIE Daylight with a CCT of \SI[mode=text]{6000}{\kelvin} would be compatible with the ACES system. In reality, ACES does not dictate the source under which movies or television shows can be photographed. ACES Input Transforms handle the re-encoding of camera images to ACES2065-1 and preserve all the technical and artistic intent behind on-set lighting choices.

For these reasons, as well as out of an abundance of caution, the ACES Project Committee decided it would be best to use a set of chromaticity coordinates very near those of CIE Daylight with a CCT of \SI[mode=text]{6000}{\kelvin} but not exactly those of any easily calculated spectral power distribution.
\documentclass{article} \title{Computation Theory} \author{Ashwin Ahuja} \usepackage{float} \usepackage{graphicx} \usepackage{listings} \usepackage{xcolor} \usepackage{tabto} \usepackage{amssymb} \usepackage{amsmath} \providecommand{\dotdiv}{% Don't redefine it if available \mathbin{% We want a binary operation \vphantom{+}% The same height as a plus or minus \text{% Change size in sub/superscripts \mathsurround=0pt % To be on the safe side \ooalign{% Superimpose the two symbols \noalign{\kern-.35ex}% but the dot is raised a bit \hidewidth$\smash{\cdot}$\hidewidth\cr % Dot \noalign{\kern.35ex}% Backup for vertical alignment $-$\cr % Minus }% }% }% } \usepackage[T1]{fontenc} \newenvironment{definition}{\par\color{blue}}{\par} \newenvironment{pros}{\par\color[rgb]{0.066, 0.4, 0.129}}{\par} \newenvironment{cons}{\par\color{red}}{\par} \newenvironment{example}{\par\color{brown}}{\par} \usepackage{fancyhdr} %% Margins \usepackage{geometry} \geometry{a4paper, hmargin={2cm,2cm},vmargin={2cm,2cm}} %% Header/Footer \pagestyle{fancy} \lhead{Ashwin Ahuja} \chead{Computation Theory} \rhead{Part IB, Paper 6} \lfoot{} \cfoot{\thepage} \rfoot{} \renewcommand{\headrulewidth}{1.0pt} \renewcommand{\footrulewidth}{1.0pt} \usepackage[export]{adjustbox} \usepackage{caption} \captionsetup{justification = raggedright, singlelinecheck = false} \lstset{ basicstyle=\ttfamily, columns=fullflexible, breaklines=true, postbreak=\raisebox{0ex}[0ex][0ex]{\color{red}$\hookrightarrow$\space} } \usepackage{listings} \lstset{ escapeinside={(*}{*)} } \begin{document} \makeatletter \renewcommand{\l@subsection}{\@dottedtocline{2}{1.6em}{2.6em}} \makeatother \begin{titlepage} \begin{center} \vspace*{1cm} \Huge \textbf{Computation Theory} \vspace{0.5cm} \LARGE University of Cambridge \vspace{1.5cm} \textbf{Ashwin Ahuja} \vfill Computer Science Tripos Part IB\\ Paper 6 \vspace{5cm} April 2019 \end{center} \end{titlepage} \tableofcontents \pagebreak \section{Algorithmically Undecidable Problems} \begin{definition} Mathematical problems which can't be solved even given unlimited time and working space, for example: \end{definition} \begin{itemize} \item Hilbert's Entsheidungsproblem \item Halting Problem \item Hilbert's 10th Problem \end{itemize} \subsection{Entsheidungsproblem (Decision Problem)} \begin{definition} Is there an algorithm st when fed a statement in formal language of first-order arithmetic, determines in a finite number of steps whether or not the statement is provable from Peano's axioms for arithmetic using the usual rules of first-order logic. This could help solve things like the Goldbach Conjecture (every even integer strictly greater than two is the sum of two primes) using the following: \end{definition} \begin{equation} \forall k>1 \exists p, q(2k=p+q \wedge prime(p) \wedge prime(q)) \end{equation} More formally, the associated problem is given: \begin{enumerate} \item A set \textbf{S} whose elements are finite data structures of some kind (formulas of first-order logic) \item A property \textbf{P} of elements of \textbf{S} (property of a formulat that it has a proof) \end{enumerate} the associated problem it: \textit{to find an algorithm which terminates with result 0 or 1 when fed an element s $\in$ S and yields the result 1 when fed s iff s has property P} \bigskip \noindent \textbf{Algorithm}: First issue is that there was no precise definition of an algorithm, just examples of what are algorithms. 
The features they included were that:
\begin{enumerate}
    \item \textbf{Finite} description of the procedure in terms of elementary operations
    \item \textbf{Deterministic}: next step uniquely determined if there is one
    \item Can recognise when it terminates and what the result is (does not necessarily terminate)
\end{enumerate}

\bigskip

\noindent \textbf{Negative Solution to the Entscheidungsproblem}: Given by Turing and Church in 1935/36. It is composed of:
\begin{enumerate}
    \item Precise, mathematical definition of algorithm (\textit{Turing: Turing Machines} and \textit{Church: lambda-calculus})
    \item Regard algorithms as data on which algorithms can act, and reduce the problem to the Halting Problem
    \item Construct an algorithm encoding instances (A, D) of the Halting Problem as arithmetic statements $\Phi_{A, D}$ with the property that $\Phi_{A, D} \leftrightarrow A(D)\downarrow$
\end{enumerate}

\subsection{The Halting Problem}
Decision problem with:
\begin{itemize}
    \item Set \textbf{S} consisting of all pairs \textbf{(A, D)}, where A is an algorithm and D is a datum on which it is designed to operate
    \item Property \textbf{P} holds for \textbf{(A, D)} if algorithm A when applied to datum D eventually produces a result (that is, it halts, \textbf{A(D) $\downarrow$})
\end{itemize}
Turing and Church's work shows that the Halting Problem is \textbf{undecidable: there is no algorithm H st $\forall$ (A, D) $\in$ S}
\begin{equation}
H(A, D)=\left\{\begin{array}{ll}{1} & {\text{if } A(D) \downarrow} \\ {0} & {\text{otherwise}}\end{array}\right.
\end{equation}

\noindent \textbf{Proof that the Halting Problem is Undecidable}: If there were such an H, let C be the following algorithm
\begin{lstlisting}
input A;
compute H(A, A);
if H(A, A) = 0 then return 1, else loop forever
\end{lstlisting}
Since H is total and by definition of H:
\begin{equation}
\forall A(C(A) \downarrow \; \leftrightarrow H(A, A)=0)
\end{equation}
\begin{equation}
\forall A(H(A, A)=0 \leftrightarrow \neg A(A) \downarrow)
\end{equation}
\begin{equation}
\text{So } \forall A(C(A) \downarrow \leftrightarrow \neg A(A) \downarrow)
\end{equation}
Taking A to be the algorithm C:
\begin{equation}
C(C) \downarrow \leftrightarrow \neg C(C) \downarrow
\end{equation}
which is a contradiction, therefore there exists no such algorithm H.

\bigskip

This doesn't quite work as we've taken A as a datum on which A is designed to operate - we've therefore assumed that we can pass a function to a function - not a first-order function then. This can be used to negatively prove the Entscheidungsproblem by showing that any algorithm deciding provability of arithmetic statements could be used to decide the Halting Problem - therefore no such algorithm exists.

\subsection{Hilbert's 10th Problem}
\begin{definition}
To give an algorithm which, when started with a Diophantine equation, determines in a finite number of operations whether or not there are natural numbers satisfying the equation.

\bigskip

\noindent \textbf{Diophantine Equations}: $ p\left(x_{1}, \ldots, x_{n}\right)=q\left(x_{1}, \ldots, x_{n}\right) $ where p and q are polynomials in unknowns x$_{1}$, ..., x$_{n}$ with coefficients from $ \mathbb{N}=\{0,1,2, \ldots\} $
\end{definition}
It was posed in 1900, but proved undecidable by reduction from the Halting Problem by Y Matijasevič, J Robinson, M Davis and H Putnam. The original proof used Turing machines, but a later, simpler proof used register machines (a concept introduced by Minsky and Lambek).
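\bigskip

\noindent One half of the problem is straightforward: the property ``has a solution'' is semi-decidable, because candidate tuples of natural numbers can simply be enumerated. The sketch below (an illustration only, written in Python) halts iff a solution exists and otherwise runs forever; this is exactly why brute-force search is not an answer to Hilbert's question.

\begin{lstlisting}
# Illustration only (Python): semi-deciding a Diophantine equation by search.
# Halts iff p(x1,...,xn) = q(x1,...,xn) has a solution in N; otherwise it
# loops forever, so it is not the terminating procedure Hilbert asked for.
from itertools import count

def tuples_with_sum(total, n):
    if n == 1:
        yield (total,)
        return
    for head in range(total + 1):
        for rest in tuples_with_sum(total - head, n - 1):
            yield (head,) + rest

def search(p, q, n):
    for total in count(0):              # enumerate tuples by increasing sum
        for xs in tuples_with_sum(total, n):
            if p(*xs) == q(*xs):
                return xs

# Example: x^2 + y^2 = 25; the search returns (0, 5).
print(search(lambda x, y: x*x + y*y, lambda x, y: 25, 2))
\end{lstlisting}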
\section{Register Machines} \begin{definition} Operate on $ \mathbb{N} $ stored in finite (idealised) registers ($R_{0}, R_{1}, ..., R_{n}$) each storing a natural numbers, using the following elementary operations: \begin{enumerate} \item \textbf{Add 1} to the contents of a register \item Test whether the contents of a register is 0 \item \textbf{Subtract 1} from the contents of a register if it is non-zero \item \textbf{Jumps} \item \textbf{Conditionals} (if, then, else) \end{enumerate} And a program consisting of a finite list of instructions of the form \textit{label : body} where for i = 0, 1, 2, ..., the (i+1)$^{th}$ instruction has label L$_{i}$. The instruction body takes one of the three forms: \begin{enumerate} \item \textbf{R$^{+} \rightarrow L^{'}$} \subitem Add 1 to the contents of register R and jump to instruction labelled L$^{'}$ \item \textbf{R$^{-} \rightarrow L^{'}, L^{"}$} \subitem If the contents of R > 0, then subtract 1 from it and jump to L$^{'}$, else jump to L$^{"}$ \item \textbf{HALT} \subitem Stop executing instructions \end{enumerate} \bigskip \noindent \textbf{Configuration}: $ c=\left(\ell, r_{0}, \ldots, r_{n}\right) $ where $ \ell $ is the current label and r$_{i}$ is the current contents of R$_{i}$. R$_{i}$ = x [in configuration c] means $ c=\left(\ell, r_{0}, \ldots, r_{n}\right) $ with r$_{i}$=x. \noindent Initial Configuration c$_{0}$ = (0, r$_{0}$, r$_{1}$, ..., r$_{n}$) where r$_{i}$ = initial contents of register R$_{i}$ \bigskip \noindent \textbf{Computation}: finite sequence of configurations $c_{0}, c_{1}, c_{2}, \ldots $ where c$_{0}$ is an initial configuration and each c in the sequence determines the next configuration in the sequence by carrying out the program instruction labelled L$_{\ell}$ \bigskip \noindent \textbf{Halting}: For a finite computation, the last configuration is a halting configuration, where the instruction is either HALT \textbf{(proper halt)} or another instruction involving going to an instruction that doesn't exist (\textbf{erroneous halt}). \begin{itemize} \item Erroneous halts can always be turned into proper halts by adding extra HALT instructions to the list with appropriate labels \end{itemize} \end{definition} \subsection{Graphical Representation} One node in the graph for each instruction, with arcs representing jumps between instructions. \begin{figure}[H] \includegraphics[width=.3\textwidth, left] {./images/1.png} \end{figure} \subsection{Partial Functions} Relation between initial and final register contents which is defined by a register machine program is a partial function. This is because register machine computation is deterministic - in any non-halting configuration, the next configuration is uniquely determined by the program. 
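\bigskip \noindent As an illustration (not part of the original notes), the small-step behaviour described above can be sketched directly in Python. The encoding of programs is purely an assumption made for this sketch: an instruction is a tuple \texttt{('inc', i, l)} for $R_{i}^{+} \rightarrow L_{l}$, \texttt{('dec', i, l1, l2)} for $R_{i}^{-} \rightarrow L_{l1}, L_{l2}$, or \texttt{('halt',)}, and register contents are held in a dictionary.

\begin{lstlisting}
def run(program, registers, max_steps=10**6):
    """Run a register machine program on a dict of register contents.

    Returns the final register contents on a (proper or erroneous)
    halt, or None if no halt occurs within max_steps - register
    machine computations need not terminate.
    """
    label = 0
    for _ in range(max_steps):
        # Jumping past the end of the program is an erroneous halt.
        if label >= len(program) or program[label][0] == 'halt':
            return registers
        instr = program[label]
        if instr[0] == 'inc':
            _, i, l = instr
            registers[i] = registers.get(i, 0) + 1
            label = l
        else:  # 'dec'
            _, i, l1, l2 = instr
            if registers.get(i, 0) > 0:
                registers[i] -= 1
                label = l1
            else:
                label = l2
    return None

# Example: add the contents of R1 to R0, i.e. the program
#   L0 : R1- -> L1, L2     L1 : R0+ -> L0     L2 : HALT
prog = [('dec', 1, 1, 2), ('inc', 0, 0), ('halt',)]
print(run(prog, {0: 0, 1: 3}))   # {0: 3, 1: 0}
\end{lstlisting}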
\bigskip \begin{definition} A partial function from set X to set Y is specified by any subset $f \subseteq X \times Y$ (ordered pairs) satisfying: \begin{equation} (x, y) \in f \wedge\left(x, y^{\prime}\right) \in f \rightarrow y=y^{\prime} \end{equation} for all $x \in X$ and $y, y^{\prime} \in Y$. Equivalently: for all x $\in$ X, there is at most one y $\in$ Y with (x, y) $\in$ f \end{definition} \subsubsection{Notation} \begin{itemize} \item f(x) = y means (x, y) $\in$ f \item f(x) $\downarrow$ means $\exists y \in Y(f(x) = y)$ \item f(x) $\uparrow$ means $\neg \exists y \in Y(f(x) = y)$ \item X $\rightharpoonup$ Y means the set of all partial functions from X to Y \item X $\rightarrow$ Y means the set of all total functions from X to Y \end{itemize} \bigskip \noindent A partial function from set X to set Y is total if it satisfies f(x) $\downarrow$ $\forall x \in X$ \subsection{Computable Functions} $f \in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ is (register machine) computable if there is a register machine M with at least n+1 registers such that for all $(x_{1}, ..., x_{n}) \in \mathbb{N}^{n}$ and all y $\in \mathbb{N}$: the computation of M starting with R$_{0}=0$, R$_{1} = x_{1}$, ..., R$_{n} = x_{n}$ and all other registers set to 0 halts with R$_{0}$=y iff f($x_{1}, ..., x_{n}$) = y \bigskip \textbf{Examples of Computable Functions} \begin{enumerate} \item Multiplication \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/2.png} \end{figure} \item Projection: $p(x, y) \triangleq x$ \item Constant: $c(x) \triangleq n$ \item Truncated Subtraction: $ x \dotdiv y \triangleq \left\{\begin{array}{ll}{x-y} & {\text { if } \; y \leq x} \\ {0} & {\text { if } \; y>x }\end{array}\right.$ \item Integer Division: $x \operatorname{div} y \triangleq \left\{\begin{array}{ll}{\text {integer part of } x / y} & {\text { if } y>0} \\ {0} & {\text { if } y=0}\end{array}\right.$ \item Integer Remainder: $x \bmod y \triangleq x \dotdiv y(x \operatorname{div} y)$ \item Exponentiation base 2: $e(x) \triangleq 2^{x}$ \item Logarithm base 2: $\log _{2}(x) \triangleq \left\{\begin{array}{ll}{\text {greatest } y \text { such that } 2^{y} \leq x} & {\text { if } x>0} \\ {0} & {\text { if } x=0}\end{array}\right.$ \item Sequential Composition: M1; M2 \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/3.png} \end{figure} \item IF R=0 THEN M1 ELSE M2 \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/4.png} \end{figure} \item WHILE R$\neq$0 DO M \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/5.png} \end{figure} \end{enumerate} \section{Coding Programs as Numbers} Turing / Church solutions for the Entscheidungsproblem use the idea that algorithms can be the data on which algorithms act - therefore we need to be able to code Register Machines as numbers. In general, these codings are called \textbf{Gödel Numberings}. \bigskip \textbf{Aim}: a coding st an RM program and the initial contents of the registers can be coded into a number, and that number can be decoded back into the RM program and the initial contents of the registers. \subsection{Numerical Coding of Pairs} A possible numerical coding of pairs is as follows: \begin{equation} \text { For } x, y \in \mathbb{N}, \text { define } \left\{\begin{array}{cc}{\langle\langle x, y\rangle\rangle} & {\triangleq 2^{x}(2 y+1)} \\ {\langle x, y\rangle} & {\triangleq 2^{x}(2 y+1)-1}\end{array}\right.
\end{equation} \begin{itemize} \item <-, -> gives a bijection (one-one correspondence) between $\mathbb{N} \times \mathbb{N}$ and $\mathbb{N}$ \item <<-, ->> gives a bijection between $\mathbb{N} \times \mathbb{N}$ and $\{ n\in \mathbb{N} | n \neq 0 \}$ \end{itemize} \subsubsection{Numerical Coding of Lists} For l $\in$ list $\mathbb{N}$ (set of all finite lists of natural numbers), define $\ulcorner$l$\urcorner$ $\in \mathbb{N}$ by induction on the length of list l: \begin{equation} \left\{ \begin{array}{l}{ \ulcorner [] \urcorner \triangleq 0 } \\ { \ulcorner x::l \urcorner \triangleq \; \ll x, \ulcorner l \urcorner \gg \; = 2^{x} (2 \ulcorner l \urcorner + 1) } \end{array} \right. \end{equation} Thus, $\ulcorner [x_{1}, x_{2}, ..., x_{n}] \urcorner$ = $\ll x_{1}, \ll x_{2}, ..., \ll x_{n}, 0 \gg ... \gg \; \gg$ \begin{figure}[H] \includegraphics[width=.45\textwidth, center] {./images/6.png} \end{figure} \subsection{Numerical Coding of Programs} $P \text { is the RM program } \left[ \begin{array}{c}{\mathrm{L}_{0} : \operatorname{body}_{0}} \\ {\mathrm{L}_{1} : \operatorname{body}_{1}} \\ {\vdots} \\ {\mathrm{L}_{n} : \operatorname{body}_{n}}\end{array}\right] $ \bigskip \noindent then the numerical code is: $ \ulcorner P \urcorner \triangleq \ulcorner [\ulcorner body_{0} \urcorner, ..., \ulcorner body_{n} \urcorner] \urcorner $ \bigskip \noindent where $\ulcorner body \urcorner$ is defined by: $\begin{cases} \ulcorner R_{i}^{+} \rightarrow L_{j} \urcorner \triangleq \; \ll 2i, j \gg \\ \ulcorner R_{i}^{-} \rightarrow L_{j}, L_{k} \urcorner \triangleq \; \ll 2i+1, <j, k> \gg \\ \ulcorner HALT \urcorner \; \triangleq \; 0 \end{cases}$ \bigskip \noindent \textbf{Decoding} \begin{lstlisting} if x=0 then body(x) is HALT, else (x>0 and) let x = <<y, z>> in if y=2i, then body(x) is Ri+ -> Lz else y=2i+1 let z = <j, k> in body(x) is Ri- -> Lj, Lk \end{lstlisting} \noindent So any e $\in \mathbb{N}$ decodes to a unique program prog(e), called the program with index e: \begin{equation} \operatorname{prog}(e) \triangleq \left[ \begin{array}{c}{\mathrm{L}_{0} : \operatorname{body}\left(x_{0}\right)} \\ {\vdots} \\ {\mathrm{L}_{n} : \operatorname{body}\left(x_{n}\right)}\end{array}\right] \text { where } e=\ulcorner\left[x_{0}, \ldots, x_{n}\right]\urcorner \end{equation} So prog(0) is the program with an empty list of instructions, which by convention is a Register Machine that does nothing - it halts immediately \section{Universal Register Machine} The Universal Register Machine U carries out the following (starting with R$_{0}$=0, R$_{1}$=e (code of a program), R$_{2}$=a (code of a list of arguments) and all other registers zeroed): \begin{enumerate} \item Decode e as a Register Machine program P \item Decode a as a list of register values $a_{1}, ..., a_{n}$ \item Carry out the computation of the Register Machine program P starting with R$_{0}=0, R_{1} = a_{1}, ..., R_{n} = a_{n}$ (and any other registers occurring in P set to 0) \end{enumerate} \subsection{Register Usage} \begin{itemize} \item \textbf{R$_{1}$}: P - code of the RM to be simulated \item \textbf{R$_{2}$}: A - code of current register contents of simulated RM \item \textbf{R$_{3}$}: PC program counter - number of the current instruction \item \textbf{R$_{4}$}: N code of the current instruction body \item \textbf{R$_{5}$}: C type of the current instruction body \item \textbf{R$_{6}$}: R current value of the register to be incremented or decremented by the current instruction \item \textbf{R$_{7}$}, \textbf{R$_{8}$} and \textbf{R$_{9}$} are auxiliary registers
\end{itemize} \subsection{Structure of URM Program} \begin{enumerate} \item Copy PCth item of the list in P to N (halting if PC > length of list); goto 2 \item If N=0, then copy 0th item of list in A to $R_{0}$ and halt, else (decode N as $\ll y, z \gg$; C ::= y; N ::= z; goto 3 \begin{itemize} \item C = 2i and current instruction is $R_{i}^{+} \rightarrow L_{z}$ \item OR \item C = 2i + 1 and current instruction is $R_{i}^{-} \rightarrow L_{j}, L_{k}$ where z = <j, k> \end{itemize} \item Copy \textbf{i}th item of list in A to R; goto 4 \item Execute current instruction on R; update PC to next label; restore register values to A; goto 1 \end{enumerate} In order to do this, need to define Register Machines for manipulating codes of lists of numbers. \bigskip \noindent \textbf{Prerequisite RMs} \begin{itemize} \item \textbf{Copy Contents from R to S} $START \rightarrow S::=R \rightarrow HALT$ \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/7.png} \end{figure} \item \textbf{Push X to L} $START \rightarrow (X, L) ::= (0, X :: L) (2^{X}(2L+1))\rightarrow HALT$ \begin{figure}[H] \includegraphics[width=.4\textwidth, left] {./images/8.png} \end{figure} \item \textbf{Pop L to X}: \begin{lstlisting} if L = 0 then (X::= 0; goto EXIT) else let L = <<x, l>> in (X ::= x, L ::= l; goto HALT) \end{lstlisting} \begin{figure}[H] \includegraphics[width=.6\textwidth, left] {./images/10.png} \end{figure} \end{itemize} \subsection{Program for \textbf{U}} \begin{figure}[H] \includegraphics[width=.7\textwidth, left] {./images/11.png} \end{figure} \section{Halting Problem} \begin{definition} A register machine H decides the Halting Problem if for all e, $a_{1}, ..., a_{n} \in \mathbb{N}$, starting H with $R_{0}$=0, $R_{1}$=e and R$_{2}$=$\ulcorner [a_{1}, ..., a_{n}] \urcorner$ and all other registers zeroed, the computation of H always halts with $R_{0}$ containing 0 or 1; moreover when the computation halts, $R_{0}$ iff the register machine program with index e eventually halts when started with $R_{0}$=0, $R_{1} = a_{1}, ..., R_{n} = a_{n}$ and all other registered zeroed. \end{definition} \bigskip \noindent \textbf{Theorem:} No such register machine H can exist \subsection{Proof of Theorem} Assume $\exists$ a RM H that decides the Halting Problem and derive a contradiction as follows: \begin{enumerate} \item Let H' be obtained from H by replacing 'START $\rightarrow$' with 'START $\rightarrow$ Z ::= $R_{1} \rightarrow$ push Z to $R_{2}\rightarrow$ where Z is a register not mentioned in H's program \item Let C be obtained from H' by replacing each HALT (and each erroneous halt) by \begin{figure}[H] \includegraphics[width=.15\textwidth, left] {./images/12.png} \end{figure} \item Let $c \in \mathbb{N}$ be the index of C's program \item C started with $R_{1}$ = c eventually halts \item $\iff$ H' started with $R_{1}$=c halts with $R_{0}$=0 \item $\iff$ H started with $R_{1}$=c, R$_{2}=\ulcorner [c] \urcorner$ halts with R$_{0}=0$ \item $\iff$ prog(c) started with $R_{1}$=c does not halt \item $\iff$ C started with $R_{1}$=c does not halt - this is a contradiction \end{enumerate} \subsection{Enumerating Computable Functions} For each e $\in \mathbb{N}$, let $\varphi_{e} \in \mathbb{N} \rightharpoonup \mathbb{N}$ be the unary partial function computed by the Register Machine with program prog(e). 
So, for all x, y $\in \mathbb{N}$: \bigskip $\varphi_{e}(x)=y $ holds iff the computation of prog(e) started with $R_{0}$=0, $R_{1}$=x and all other registers zeroed eventually halts with $R_{0}=y$ \bigskip Therefore, $e \mapsto \varphi_{e}$ defines an onto function from $\mathbb{N}$ to the collection of all computable partial functions from $\mathbb{N}$ to $\mathbb{N}$ - therefore this collection is countable. Since the set of all partial functions $\mathbb{N} \rightharpoonup \mathbb{N}$ is uncountable, there must be partial functions that are not computable. \bigskip \noindent \textbf{Example Uncomputable Function}: f $\in \mathbb{N} \rightharpoonup \mathbb{N}$ is the partial function with graph $\{(x, 0) \mid \varphi_{x}(x)\uparrow\}$ \begin{equation} f(x)=\left\{\begin{array}{ll}{0} & {\text { if } \varphi_{x}(x) \uparrow} \\ {\text {undefined}} & {\text { if } \varphi_{x}(x) \downarrow}\end{array}\right. \end{equation} \begin{itemize} \item f is not computable, as if it were, then f = $\varphi_{e}$ for some $e \in \mathbb{N}$ and hence \item If $\varphi_{e}(e)\uparrow$, then f(e) = 0; so $\varphi_{e}(e)$=0 (since f = $\varphi_{e}$) hence $\varphi_{e}(e)\downarrow$ \item If $\varphi_{e}(e)\downarrow$ then $f(e)\downarrow$ (since f = $\varphi_{e}$) so $\varphi_{e}(e)\uparrow$ (by definition of f) \item Therefore this is a contradiction, therefore f cannot be computable \end{itemize} \subsection{Undecidable Sets of Numbers} A set $S \subseteq \mathbb{N}$ is RM decidable if \begin{itemize} \item Its characteristic function $\chi_{s} \in \mathbb{N} \rightarrow \mathbb{N}$ is a register machine computable function. \item $\equiv$ iff there is a RM M with the property that: (1) $\forall x \in \mathbb{N}$, M started with $R_{0}=0, R_{1}=x$ and all other registers zeroed eventually halts with $R_{0}$ containing 1 or 0 and (2) $R_{0}=1$ on halting iff $x\in S$ \end{itemize} Otherwise it is called undecidable. In order to prove undecidability, generally try to prove that decidability of S would imply decidability of the Halting Problem. \bigskip \noindent \begin{example} \noindent \textbf{Claim}: $S_{0} \triangleq \{ e \mid \varphi_{e}(0)\downarrow \}$ is undecidable \noindent \textbf{Proof}: Suppose $M_{0}$ is a RM computing $\chi_{S_{0}}$. From $M_{0}$'s program (using the same techniques as for constructing a universal Register Machine), we can construct a Register Machine H to carry out the following: \begin{lstlisting}[escapeinside={(*}{*)}, frame=single] let e = R(*$_{1}$*) and (*$\ulcorner$[a1,...,an]$\urcorner$*) = R2 in (*$R_{1}::= \ulcorner (R_{1} ::= a_{1}); ...; (R_{n} ::= a_{n}); prog(e) \urcorner$*) (*R$_{2}$*) ::= 0; run M(*$_{0}$*) \end{lstlisting} \begin{figure}[H] \includegraphics[width=.6\textwidth, left] {./images/13.png} \end{figure} Therefore, by assumption on $M_{0}$, H decides the Halting Problem - this is a contradiction. Therefore no such $M_{0}$ exists, therefore $\chi_{S_{0}}$ is uncomputable $\Rightarrow$ $S_{0}$ is undecidable \end{example} \section{Turing Machines} Register Machine computation abstracts away from particular, concrete representations of numbers and the associated elementary operations of increment / decrement / zero-test. Turing Machines are more concrete: even numbers have to be represented in terms of a fixed finite alphabet of symbols, and increment / decrement / zero-test must be programmed in terms of more elementary symbol-manipulating operations \subsection{Features} \begin{enumerate} \item Linear tape, unbounded to the right, divided into cells containing a symbol from a finite alphabet of tape symbols.
Only finitely many cells contain non-blank symbols \item The machine is in one of a finite set of states \item The tape symbols are scanned one at a time by a tape head \item The machine computes in discrete steps, each of which depends on the current state and the symbol being scanned by the tape head. \bigskip \noindent \textbf{Actions} \begin{itemize} \item Overwrite the current tape cell with a symbol \item Move left or right one cell \item Stay stationary \item Change state \end{itemize} \item \textbf{Alphabet} \begin{itemize} \item \textbf{$\triangleright$}: left endmarker symbol (start symbol) \item \textbf{\textvisiblespace}: blank symbol \end{itemize} \end{enumerate} \bigskip \noindent More accurately specified by: \begin{enumerate} \item \textbf{Q}: finite set of machine states \item \textbf{$\Sigma$}: finite set of tape symbols (disjoint from Q) \item $s \in Q$, an initial state \item \textbf{$\delta \in (Q \times \Sigma) \rightarrow (Q \cup \{ acc, rej \}) \times \Sigma \times \{ L, R, S \}$}: transition function \begin{itemize} \item Specifies for each state and symbol a next state (or accept or reject) \item Symbol to overwrite the current symbol \item And direction for the tape head to move (left, right or stationary) \item $\forall q \in Q \; \exists q' \in Q \cup \{$accept, reject$\}$ with $\delta (q, \triangleright ) = (q', \triangleright, R)$ (left endmarker is never overwritten and the machine always moves to the right when scanning it) \end{itemize} \end{enumerate} \subsection{Configuration} (q, w, u) where: \begin{enumerate} \item $q \in Q \cup \{ acc, rej \}$ = current state \item w = non-empty string (w = va) of tape symbols at and to the left of the tape head, whose last element (a) is the contents of the cell under the tape head. \item u = (possibly empty) string of tape symbols to the right of the tape head (up to some point beyond which all symbols are \textvisiblespace) \item Hence, wu $\in \Sigma^{*}$ represents the current tape contents \item The initial configuration is (s, $\triangleright$, u) \end{enumerate} \subsection{Representing Transitions} Given a TM M = (Q, $\Sigma$, s, $\delta$), we write: \begin{equation} (q, w, u) \rightarrow_{M} (q', w', u') \end{equation} to mean: (1) q $\neq$ acc, rej, (2) w = va (for some v and a) and: \begin{enumerate} \item Either $\delta(q, a)=\left(q^{\prime}, a^{\prime}, L\right), w^{\prime}=v, \text { and } u^{\prime}=a^{\prime} u$ \item OR $\delta(q, a)=\left(q^{\prime}, a^{\prime}, S\right), w^{\prime}=v a^{\prime} \text { and } u^{\prime}=u$ \item OR $\begin{array}{l}{\delta(q, a)=\left(q^{\prime}, a^{\prime}, R\right), u=a^{\prime \prime} u^{\prime \prime} \text { is non-empty, }} \\ {w^{\prime}=v a^{\prime} a^{\prime \prime} \text { and } u^{\prime}=u^{\prime \prime}}\end{array}$ \item OR $\delta(q, a)=\left(q^{\prime}, a^{\prime}, R\right), u=\varepsilon \text { is empty, } w^{\prime}=v a^{\prime}$ \textvisiblespace and $u^{\prime} = \varepsilon$ \end{enumerate} \subsection{Computation} A computation of a TM M is a (finite or infinite) sequence of configurations where (1) c$_{0}$ = ($s, \triangleright, u$) is an initial configuration and (2) $c_{i} \rightarrow _{M} c_{i+1}$ holds for each $i \geq 0$.
The computation: \begin{enumerate} \item Does not halt if the sequence is infinite \item Halts if the sequence is finite and its last element is of the form (acc, w, u) or (rej, w, u) \end{enumerate} \subsection{Computation of a Turing Machine (M) can be implemented by a Register Machine} \textbf{Proof} \begin{enumerate} \item \textbf{Fix a numerical encoding of M's states, tape symbols, tape contents and configurations} \begin{enumerate} \item Identify states and tape symbols with specific numbers: (1) acc=0, rej=1, Q={2, 3, ..., n} and (2) \textvisiblespace=0, $\triangleright$=1, $\Sigma$={0,1, ..., m} \item The code of a configuration c = (q, w, u) is given by: \noindent $\ulcorner c \urcorner = \ulcorner [q, \ulcorner [a_{n}, ..., a_{1}] \urcorner , \ulcorner [b_{1}, ..., b_{m}] \urcorner ] \urcorner$ where w = $a_{1} ... a_{n}$ (n > 0) and u = $b_{1} ... b_{m}$ (m $\geq$ 0). We reverse w to make it easier to use our Register Machine programs for list manipulation \end{enumerate} \item \textbf{Implement M's transition function (finite table) using RM instructions on codes} \begin{enumerate} \item We use registers to represent (1) Q - current state, (2) A - current tape symbol, (3) D - current direction of the tape head (L = 0, R = 1, S = 2) \item Can turn the finite table of (argument, result)-pairs specifying $\delta$ into a Register Machine program $\rightarrow (Q, A, D) ::= \delta(Q, A) \rightarrow$ such that starting the program with Q=q, A=a, D=d and all other registers zero, it halts with Q=q', A=a', D=d' where (q', a', d') = $\delta(q, a)$ \end{enumerate} \item \textbf{Implement a Register Machine to repeatedly carry out $\rightarrow_{M}$} \begin{enumerate} \item Uses registers to store: (1) C - code of current configuration, (2) W - code of tape symbols at and left of tape head, (3) U - code of tape symbols right of tape head \item Starting with C containing the code of an initial configuration (and all other registers zeroed), the RM halts iff M halts and in that case C holds the code of the final configuration \begin{figure}[H] \includegraphics[width=.65\textwidth, left] {./images/14.png} \end{figure} \end{enumerate} \end{enumerate} \subsection{Computation of a Register Machine can be implemented by a Turing Machine} Can be reasonably easily proven, just need to show how to carry out the action of each type of RM instruction on the tape. \subsubsection{Tape encoding of list of numbers} \begin{definition} A tape over $\Sigma$ = {$\triangleright$, \textvisiblespace, 0, 1} codes a list of numbers if precisely two cells contain 0 and the only cells containing 1 occur between these \end{definition} \begin{figure}[H] \includegraphics[width=.45\textwidth, left] {./images/15.png} \end{figure} corresponds to the list [$n_{1}, n_{2}, ..., n_{k}$] \section{Notions of Computability} Church defined computability using $\lambda$-calculus and Turing used Turing machines - though Turing showed that the two approaches determine the same class of computable functions.
\bigskip \begin{definition} \noindent \textbf{Church-Turing Thesis}: Every algorithm can be realised as a Turing machine \end{definition} \bigskip Further evidenced by: \begin{itemize} \item Goedel and Kleene (1936): partial recursive functions \item Post (1943) and Markov (1951): canonical systems for generating the theorems of a formal system \item Lambek and Minsky (1961): register machines \end{itemize} \subsection{Turing Computability} f $\in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ is Turing computable iff $\exists$ a Turing machine M with the following property: \begin{enumerate} \item Starting M from initial state with tape head on the left endmarker of a tape coding [0, $x_{1}, ..., x_{n}$], M halts iff f($x_{1}, ..., x_{n}$)$\downarrow$ and in that case the final tape codes a list whose first element is y where f($x_{1}, ..., x_{n}$) = y \end{enumerate} \subsection{Aim} A more abstract, machine-independent description of the collection of computable partial functions than provided by register / Turing machines. They form the smallest collection of partial functions containing some basic functions and closed under some fundamental operations for forming new functions from old - composition, primitive recursion and minimisation. \subsection{Functions} \begin{definition} \textbf{Kleene Equivalence of possibly-undefined expressions}: Either both LHS and RHS are undefined or they are both defined and equal. \end{definition} \begin{enumerate} \item \textbf{Projection}: $proj^{n}_{i} \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ $proj^{n}_{i}(x_{1}, ..., x_{n}) \triangleq x_{i}$ $\text { START } \rightarrow\left[\mathrm{R}_{0} : :=\mathrm{R}_{i}\right] \rightarrow \mathrm{HALT}$ \item \textbf{Constant with value 0}: $\mathrm{zero}^{n} \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ $z \operatorname{ero}^{n}\left(x_{1}, \ldots, x_{n}\right) \triangleq 0$ $\text { START } \rightarrow \mathrm{HALT}$ \item \textbf{Successor}: $\operatorname{succ} \in \mathbb{N} \rightarrow \mathbb{N}$ $\operatorname{succ}(x) \triangleq x+1$ $\text { START } \rightarrow \mathrm{R}_{1}^{+} \rightarrow\left[\mathrm{R}_{0} : :=\mathrm{R}_{1}\right] \rightarrow \mathrm{HALT}$ \item \textbf{Composition} of f $\in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ with $g_{1}, ..., g_{n} \in \mathbb{N}^{m} \rightharpoonup \mathbb{N}$ is $f \circ\left[g_{1}, \ldots, g_{n}\right] \in \mathbb{N}^{m} \rightarrow \mathbb{N}$ satisfying for all $x_{1}, \dots, x_{m} \in \mathbb{N}$: $f \circ\left[g_{1}, \ldots, g_{n}\right]\left(x_{1}, \ldots, x_{m}\right) \equiv f\left(g_{1}\left(x_{1}, \ldots, x_{m}\right), \ldots, g_{n}\left(x_{1}, \ldots, x_{m}\right)\right)$ \bigskip Therefore, $f \circ\left[g_{1}, \ldots, g_{n}\right]\left(x_{1}, \ldots, x_{m}\right)=z$ iff $\exists$ $y_{1}, \dots, y_{n}$ with $\boldsymbol{g}_{i}\left(\boldsymbol{x}_{\mathbf{1},} \ldots, \boldsymbol{x}_{m}\right)$ (for i = 1, ..., n) and $f\left(y_{1}, \ldots, y_{n}\right)=z$ \bigskip \textbf{Idea is that $f \circ\left[\boldsymbol{g}_{1}, \ldots, \boldsymbol{g}_{n}\right]$ is computable if f and $\boldsymbol{g}_{1}, \ldots, \boldsymbol{g}_{n}$ are} \bigskip \textit{ \textbf{Proof}: Given RM programs $\left\{\begin{array}{l}{F} \\ {G_{i}}\end{array}\right.$ computing $\left\{\begin{array}{l}{f\left(y_{1}, \ldots, y_{n}\right)} \\ {g_{i}\left(x_{1}, \ldots, x_{m}\right)}\end{array}\right.$ in $R_{0}$ starting with $\left\{\begin{array}{l}{\mathrm{R}_{1}, \ldots, \mathrm{R}_{n}} \\ {\mathrm{R}_{1}, \ldots, \mathrm{R}_{m}}\end{array}\right.$ set to 
$\left\{\begin{array}{l}{y_{1}, \ldots, y_{n}} \\ {x_{1}, \ldots, x_{m}}\end{array}\right.$, then we can define a RM program computing the composition ($f \circ\left[g_{1}, \ldots, g_{n}\right]\left(x_{1}, \ldots, x_{m}\right)$) starting with $\mathrm{R}_{1}, \ldots, \mathrm{R}_{m} \text { set to } x_{1}, \ldots, x_{m}$ } \begin{figure}[H] \includegraphics[width=.65\textwidth, left] {./images/16.png} \end{figure} \end{enumerate} \section{Primitive Recursion} Partial Function f is primitive recursive ($\in PRIM$) if it can be built in finitely many steps from the basic functions by use of the operations of composition and primitive recursion. PRIM is the smallest set (with respect to subset including) of partial functions containing the basic functions and closed under the operations of composition and primitive recursion. \bigskip \noindent \textbf{Theorem}: Given $f \in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ and $g \in \mathbb{N}^{n+2} \rightharpoonup \mathbb{N}$, $\exists! h \in \mathbb{N}^{n+1} \rightharpoonup \mathbb{N}$ which satisfies $\forall \vec{x} \in \mathbb{N}^{n} \text { and } x \in \mathbb{N}$: \begin{equation} \left\{\begin{array}{ll}{h(\vec{x}, 0)} & {\equiv f(\vec{x})} \\ {h(\vec{x}, x+1)} & {\equiv g(\vec{x}, x, h(\vec{x}, x))}\end{array}\right. \end{equation} \noindent This h is written as $\rho^{n}(f, g)$ and it is called the partial function defined by primitive recursion from f and g. \bigskip \noindent \textbf{Theorem:} All functions f $\in PRIM$ are computable \noindent \textbf{Proof}: \begin{itemize} \item Basic functions are computable, composition preserves computability as already proved, therefore must show: \begin{equation} \rho^{n}(f, g) \in \mathbb{N}^{n+1} \rightarrow \mathbb{N} \; computable \; if \; f \in \mathbb{N}^{n} \rightarrow \mathbb{N} \; and \; g \in \mathbb{N}^{n+2} \rightarrow \mathbb{N} are \end{equation} \item Suppose f and g are computed by RM programs F and G, then this RM computes $\rho^{n}(f, g)$ \begin{figure}[H] \includegraphics[width=.65\textwidth, left] {./images/18.png} \end{figure} \end{itemize} \noindent \textbf{All functions f $\in PRIM$ are all total as}: \begin{enumerate} \item All basic functions are total \item If f, $\boldsymbol{g}_{\mathbf{1},} \ldots, \boldsymbol{g}_{\boldsymbol{n}}$ are total, then so is $f \circ\left(\boldsymbol{g}_{1}, \ldots, \boldsymbol{g}_{n}\right)$ \item If f and g are total, then so is $\rho^{n}(f, g)$ \end{enumerate} \subsection{Examples of Primitive Recursive Functions} \textbf{Addition} \begin{equation} add \in \mathbb{N}^{2} \rightarrow \mathbb{N} = \left\{\begin{array}{ll}{a d d\left(x_{1}, 0\right)} & {\equiv x_{1}} \\ {a d d\left(x_{1}, x+1\right)} & {\equiv a d d\left(x_{1}, x\right)+1}\end{array}\right. \end{equation} Therefore $a d d=\rho^{1}(f, g) \text { where } \left\{\begin{array}{ll}{f\left(x_{1}\right)} & {\triangleq x_{1}} \\ {g\left(x_{1}, x_{2}, x_{3}\right)} & {\triangleq x_{3}+1}\end{array}\right.$ f = $proj_{1}^{1}$ and g = $\operatorname{succ} \circ \operatorname{proj}_{3}^{3}$ so add can be made from basic functions and so add $\in PRIM$ \bigskip \noindent \textbf{Predecessor} \begin{equation} \text { pred } \in \mathbb{N} \rightarrow \mathbb{N} = \left\{\begin{array}{ll}{\text {pred}(0)} & {\equiv 0} \\ {\operatorname{pred}(x+1)} & {\equiv x}\end{array}\right. 
\end{equation} Therefore, pred = $\rho^{0}(f, g) \text { where } \left\{\begin{array}{ll}{f( )} & {\triangleq 0} \\ {g\left(x_{1}, x_{2}\right)} & {\triangleq x_{1}}\end{array}\right.$ = $\rho^{0}\left(\mathrm{zero}^{0}, \mathrm{proj}_{1}^{2}\right)$ \bigskip \noindent \textbf{Multiplication} \begin{equation} \operatorname{mult} \in \mathbb{N}^{2} \rightarrow \mathbb{N} = \left\{\begin{array}{ll}{\operatorname{mult}\left(x_{1}, 0\right)} & {\equiv 0} \\ {\operatorname{mult}\left(x_{1}, x+1\right)} & {\equiv \operatorname{mult}\left(x_{1}, x\right)+x_{1}}\end{array}\right. \end{equation} \begin{equation} mult = \rho ^{1} (zero^{1}, add \circ (proj^{3}_{3}, proj_{1}^{3})) \end{equation} Therefore, since mult can be built up by composition and primitive recursion from basic functions and add (which is itself in PRIM), mult $\in$ PRIM. \section{Minimisation} Given a partial function $f \in \mathbb{N}^{n+1} \rightharpoonup \mathbb{N}$, define $\mu^{n} f \in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ by: $\mu^{n} f(\vec{x}) \triangleq$ least x such that $f(\vec{x}, x)=0$ and for each i=0, ..., x-1, $f(\vec{x}, i)$ is defined and > 0 \bigskip OR \begin{equation} \mu^{n} f=\left\{(\vec{x}, x) \in \mathbb{N}^{n+1} | \exists y_{0}, \ldots, y_{x}\right. \left(\bigwedge_{i=0}^{x} f(\vec{x}, i)=y_{i}\right) \wedge\left(\bigwedge_{i=0}^{x-1} y_{i}>0\right) \wedge y_{x}=0 \} \end{equation} \section{Partial Recursion} A partial function f is partial recursive ($\in PR$) if it can be built in finitely many steps from the basic functions by use of the operations of composition, primitive recursion \textbf{and minimisation}. PR is the smallest set (with respect to subset inclusion) of partial functions containing the basic functions and closed under the operations of composition, primitive recursion \textbf{and minimisation}. The members of PR that are total are called recursive functions - there are recursive functions that are not primitive recursive - eg Ackermann's function (see below) \bigskip \noindent \textbf{Theorem}: All functions f $\in PR$ are computable \noindent \textbf{Proof}: Composition and primitive recursion preserve computability (as shown above), so it remains to deal with minimisation. Suppose f is computed by RM program F. Then the following RM computes $\mu ^{n} f$ \begin{figure}[H] \includegraphics[width=.65\textwidth, left] {./images/17.png} \end{figure} \noindent \textbf{Theorem}: Every computable partial function is partial recursive \noindent \textbf{Proof}: \begin{itemize} \item Let $f \in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ be computed by RM M with N $\geq$ n registers. \item Construct primitive recursive functions lab, val$_{0}$, next$_{M}$ $\in \mathbb{N} \rightarrow \mathbb{N}$, satisfying: \begin{enumerate} \item $lab(\ulcorner[l, r_{0}, ..., r_{N}]\urcorner)=l$ \item $val_{0}(\ulcorner[l, r_{0}, ..., r_{N}]\urcorner)=r_{0}$ \item $next_{M}(\ulcorner[l, r_{0}, ..., r_{N}]\urcorner)=$ code of M's next configuration \end{enumerate} \item Writing $\vec{x}$ for $x_{1}, ..., x_{n}$, let $\operatorname{config}_{M}(\vec{x}, t)$ be the code of M's configuration after t steps, starting with initial register values: $\mathrm{R}_{0}=0, \mathrm{R}_{1}=x_{1}, \ldots, \mathrm{R}_{n}=x_{n}, \mathrm{R}_{n+1}=0, \dots, \mathrm{R}_{N}=0$. This is in PRIM because: $$ \left\{\begin{array}{ll}{\operatorname{config}_{M}(\vec{x}, 0)} & {=\ulcorner[0,0, \vec{x}, \vec{0}]\urcorner} \\ {\operatorname{config}_{M}(\vec{x}, t+1)} & {=\operatorname{next}_{M}\left(\operatorname{config}_{M}(\vec{x}, t)\right)}\end{array}\right.$$ \item Assume M has a single HALT as its last instruction, labelled $L_{I}$ say. Let $\operatorname{halt}_{M}(\vec{x})$ be the number of steps M takes to halt, when started with initial register values $\vec{x}$.
\item Satisfies $\operatorname{halt}_{M}(\vec{x}) \equiv \text { least } t$ st $I \dotdiv \operatorname{lab}\left(\operatorname{config}_{M}(\vec{x}, t)\right)=0$, and hence $\operatorname{halt}_{M}$ is in PR (because lab, $config_{M}$ and $x \mapsto I \dotdiv x$ are all in PRIM). \item Therefore, $f(\vec{x}) \equiv \operatorname{val}_{0}\left(\operatorname{config} _{M}\left(\vec{x}, \operatorname{halt}_{M}(\vec{x})\right)\right)$ and f $\in PR$ \end{itemize} \section{Ackermann's Function} $$ack \in \mathbb{N}^{2} \rightarrow \mathbb{N}$$ $$ack(0, x_{2}) = x_{2}+1$$ $$ack(x_{1}+1, 0) = ack(x_{1}, 1)$$ $$ack(x_{1} + 1, x_{2}+1)=ack(x_{1}, ack(x_{1}+1, x_{2}))$$ \noindent ack is (1) computable, and therefore recursive, and (2) grows faster than any primitive recursive function $f \in \mathbb{N}^{2} \rightarrow \mathbb{N}$: $\exists N_{f} \; \forall x_{1}, x_{2} > N_{f} \; (f(x_{1}, x_{2}) < ack(x_{1}, x_{2}))$ \noindent Hence ack is not primitive recursive \section{Lambda Calculus} \textbf{Function Definition Notation} \begin{enumerate} \item Named: let f be the function f(x) = x$^{2}$ + x + 1 \item Anonymous: f: x $\mapsto$ $x^{2}$ + x + 1 \item Lambda Notation: $\lambda x . x^{2} + x + 1$ \end{enumerate} \begin{definition} \noindent $\lambda$-Terms are built from a given, countable collection of variables (x, y, z) by operations for forming $\lambda$-terms: \begin{enumerate} \item $\lambda$-abstraction ($\lambda$x.M) where x is a variable and M is a $\lambda$-term \item Application (MM'), which is left-associative, where M and M' are $\lambda$-terms \end{enumerate} \end{definition} \subsection{Notational Conventions} \begin{itemize} \item $\left(\lambda x_{1} x_{2} \ldots x_{n} \cdot M\right) \equiv \left(\lambda x_{1} \cdot\left(\lambda x_{2} \ldots\left(\lambda x_{n} \cdot M\right) \ldots\right)\right)$ \item $\left(M_{1} M_{2} \dots M_{n}\right) \equiv \left(\ldots \cdot\left(M_{1} M_{2}\right) \ldots M_{n}\right)$ \item Drop outermost parentheses and those enclosing the body of a $\lambda$-abstraction - eg: $(\lambda x .(x(\lambda y .(y x)))) \equiv \lambda x . x(\lambda y . y x)$ \item x $\#$ M means that the variable x does not occur anywhere in the $\lambda$-term M \end{itemize} \subsection{Free and Bound Variables} In $\lambda x.M$, x is the \textbf{bound variable} and M is the body of the $\lambda$-abstraction. An occurrence of x in a $\lambda$-term M is: \begin{enumerate} \item Binding if in between $\lambda$ and . \item Bound if in the body of a binding occurrence of x \item Free if neither binding nor bound \item Sets of free and bound variables: \begin{itemize} \item FV(x) = \{x\} \item FV($\lambda x.M$) = FV(M) - \{x\} \item FV(MN) = FV(M) $\cup$ FV(N) \item FV(M) = $\emptyset \implies M$ is a closed term, or combinator \item BV(x) = $\emptyset$ \item BV($\lambda x.M$) = BV(M) $\cup$ \{x\} \item BV(MN) = BV(M) $\cup$ BV(N) \end{itemize} \end{enumerate} \subsection{$\alpha$-Equivalence (M =$_{\alpha}$ M')} is the equivalence relation (reflexive, symmetric and transitive) inductively generated by the rules: \begin{enumerate} \item$\overline{x=_{\alpha} x}$ \item $\frac{z \#(M N) \; \; \; M\{z / x\}=_{\alpha} N\{z / y\}}{\lambda x . M=_{\alpha} \lambda y . N}$ \item $\frac{M =_{\alpha} M' \; \; \; N=_{\alpha}N'}{MN =_{\alpha}M'N'}$ \end{enumerate} where $M\{ z / x \}$ is M with all occurrences of x replaced by z.
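\bigskip \noindent As an illustration (not part of the original notes), the definitions of FV and BV above translate directly into code. The representation of $\lambda$-terms used here is purely an assumption for the sketch: variables are strings and terms are built with the constructors \texttt{Var}, \texttt{Lam} and \texttt{App}.

\begin{lstlisting}
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str      # the bound variable
    body: object  # a Var, Lam or App

@dataclass
class App:
    fun: object
    arg: object

def free_vars(t):
    # FV(x) = {x}; FV(lambda x.M) = FV(M) - {x}; FV(MN) = FV(M) u FV(N)
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.var}
    return free_vars(t.fun) | free_vars(t.arg)

def bound_vars(t):
    # BV(x) = {}; BV(lambda x.M) = BV(M) u {x}; BV(MN) = BV(M) u BV(N)
    if isinstance(t, Var):
        return set()
    if isinstance(t, Lam):
        return bound_vars(t.body) | {t.var}
    return bound_vars(t.fun) | bound_vars(t.arg)

# lambda x. x (lambda y. y x) is closed (a combinator); x y is not
term = Lam('x', App(Var('x'), Lam('y', App(Var('y'), Var('x')))))
print(free_vars(term))                     # set()
print(free_vars(App(Var('x'), Var('y'))))  # {'x', 'y'}
\end{lstlisting}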
\bigskip The $\alpha$-equivalence rules effectively say that the name of the bound variable is immaterial and therefore if M' = M\{x'/x\} is the result of taking M and changing all occurrences of x to some variable x' $\#$ M then $\lambda x.M$ and $\lambda x'.M'$ both represent the same function \subsection{$\beta$-Reduction} $\lambda x.M$ represents the function f st f(x) = M $\forall x$. Regard $\lambda x.M$ as a function on $\lambda$-terms via substitution: map each N to M[N/x] (result of substituting N for free x in M). \bigskip \noindent \textbf{Substitution N[M/x]}: Result of replacing all free occurrences of x in N with M, avoiding the capture of free variables in M by $\lambda$-binders in N \begin{enumerate} \item $x[M / x]=M$ \item $y[M / x]=y \quad \text { if } y \neq x$ \item $(\lambda y . N)[M / x]=\lambda y . N[M / x] \quad \text { if } y \#(M x)$ \begin{itemize} \item i.e. y does not occur in M and y $\neq$ x \item This side-condition makes substitution capture-avoiding \begin{example} \item If $x \neq y$: $(\lambda y.x)[y/x] \neq \lambda y.y$ \end{example} \end{itemize} \item $\left(N_{1} N_{2}\right)[M / x]=N_{1}[M / x] N_{2}[M / x]$ \end{enumerate} \noindent N $\mapsto$ N[M/x] induces a totally defined function from the set of $\alpha$-equivalence classes of $\lambda$-terms to itself \bigskip The natural notion of computation for $\lambda$-terms is given by stepping from: (1) $\beta$-redex \textit{($\lambda x.M$)N} to (2) $\beta$-reduct \textit{M[N/x]} \subsubsection{$\beta$-Reduction Rules} \begin{enumerate} \item $\overline{(\lambda x . M) N \rightarrow M[N / x]}$ \item $\frac{M \rightarrow M^{\prime}}{\lambda x \cdot M \rightarrow \lambda x . M^{\prime}}$ \item $\frac{M \rightarrow M^{\prime}}{M N \rightarrow M^{\prime} N}$ \item $\frac{M \rightarrow M^{\prime}}{N M \rightarrow N M^{\prime}}$ \item $\frac{N =_{\alpha}M \; \; \; M \rightarrow M^{\prime} \; \; \; M^{\prime} =_{\alpha}N^{\prime}}{N\rightarrow N^{\prime}}$ \end{enumerate} \subsubsection{$\beta$-Conversion (M $=_{\beta}$ N)} Holds if N can be obtained from M by performing zero or more steps of $\alpha$-equivalence, $\beta$-reduction or $\beta$-expansion \bigskip \noindent \textbf{Rules} \begin{enumerate} \item $\frac{M=_{\alpha} M^{\prime}}{M=_{\beta} M^{\prime}}$ \item $\frac{M \rightarrow M^{\prime}}{M=_{\beta} M^{\prime}}$ \item $\frac{M=_{\beta} M^{\prime}}{M^{\prime}=_{\beta} M}$ \item $\frac{M=_{\beta}M' \; \; \; M'=_{\beta}M''}{M=_{\beta}M''}$ \item $\frac{M=_{\beta} M^{\prime}}{\lambda x . M=_{\beta} \lambda x . M^{\prime}}$
\item $\frac{M=_{\beta} M^{\prime} \quad N=_{\beta} N^{\prime}}{M N=_{\beta} M^{\prime} N^{\prime}}$ \end{enumerate} \subsubsection{Church-Rosser Theorem} \textbf{Theorem}: $\twoheadrightarrow$ is confluent ie $M_{1} \twoheadleftarrow M \twoheadrightarrow M_{2} \implies \exists M^{\prime} \; st \; M_{1} \twoheadrightarrow M^{\prime} \twoheadleftarrow M_{2}$ \noindent \textbf{Corollary}: $M_{1} =_{\beta} M_{2} \iff \exists M(M_{1} \twoheadrightarrow M \twoheadleftarrow M_{2})$ \noindent \textbf{Proof}: \begin{itemize} \item $=_{\beta}$ satisfies the rules generating $\twoheadrightarrow$, so $M \twoheadrightarrow M' \implies M =_{\beta} M'$ \item Therefore $M_{1} \twoheadrightarrow M \twoheadleftarrow M_{2} \implies M_{1}=_{\beta}M=_{\beta}M_{2} \implies M_{1}=_{\beta}M_{2}$ \end{itemize} \bigskip Conversely, the relation $\left\{\left(M_{1}, M_{2}\right) | \exists M\left(M_{1} \twoheadrightarrow M \twoheadleftarrow M_{2}\right)\right\}$ satisfies the rules generating $=_{\beta}$: the only difficult case is the closure of the relation under transitivity and for this, we use the Church-Rosser Theorem: \begin{figure}[H] \includegraphics[width=.35\textwidth, center] {./images/19.png} \end{figure} Therefore, $M_{1}=_{\beta}M_{2} \implies \exists M(M_{1} \twoheadrightarrow M \twoheadleftarrow M_{2})$ \subsection{$\beta$-Normal Forms} \begin{definition} A $\lambda$-term is in $\beta$-normal form if it contains no $\beta$-redexes (no sub-terms of the form $(\lambda x.M)M'$). M has $\beta$-nf N if M $=_{\beta}$N with N being a $\beta$-nf. The $\beta$-nf of M is unique up to $\alpha$-equivalence if it exists (if $N_{1} =_{\beta} N_{2}$ with $N_{1}$ and $N_{2}$ both being $\beta$-nfs then $N_{1}=_{\alpha}N_{2}$) \end{definition} \bigskip It is important to note that some $\lambda$-terms have no $\beta$-nf and that a term can possess both a $\beta$-nf and infinite chains of reduction from it. \subsubsection{Normal-order Reduction} Deterministic strategy for reducing $\lambda$-terms: reduce the left-most, outer-most redex first where: \begin{itemize} \item left-most means reduce M before N in MN \item outer-most means reduce ($\lambda x.M$)N rather than either of M or N \end{itemize} This is guaranteed to reach the $\beta$-nf if the term possesses one. \subsection{Lambda-Definable Functions} In order to relate $\lambda$-calculus to register and Turing Machine computation, or to compute partial recursive functions, we need to encode numbers, pairs and lists as $\lambda$-terms. \subsubsection{Church's Numerals} \begin{figure}[H] \includegraphics[width=.3\textwidth, left] {./images/20.png} \end{figure} Notation: $\left\{\begin{array}{ll}{M^{0} N} & {\triangleq N} \\ {M^{1} N} & {\triangleq M N} \\ {M^{n+1} N} & {\triangleq M\left(M^{n} N\right)}\end{array}\right.$ so we can write $\underline{n}$ as $\lambda f x . f^{n} x$ and we have $\underline{n} \boldsymbol{M} \boldsymbol{N}=_{\beta} \boldsymbol{M}^{n} \boldsymbol{N}$ \subsubsection{Definition} $f \in \mathbb{N}^{n} \rightharpoonup \mathbb{N}$ is $\lambda$-definable if there is a closed $\lambda$-term F that represents it: $\forall (x_{1}, ..., x_{n}) \in \mathbb{N}^{n}$ and $y \in \mathbb{N}$ \begin{enumerate} \item $f\left(x_{1}, \ldots, x_{n}\right)=y \implies F \vec{x_{1}} ... \vec{x_{n}} =_{\beta} \vec{y}$ \item $f\left(x_{1}, \ldots, x_{n}\right) \uparrow \implies F \vec{x_{1}} ... 
\vec{x_{n}}$ has no $\beta$-nf \subitem This condition can make it tricky to find a $\lambda$-term representing a non-total function \end{enumerate} \subsubsection{Computability} Partial function is computable iff it is $\lambda$-definable: gets split into: \begin{enumerate} \item Every partial recursive function is $\lambda$-definable \item $\lambda$-definable functions are Register Machine computable \end{enumerate} \subsubsection{Showing elements of PRIM are $\lambda$-definable} \begin{enumerate} \item $\operatorname{proj}_{i}^{n} \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ is represented by $\lambda x_{1} \ldots x_{n} \cdot x_{i}$ \item $\text { zero }^{n} \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ is represented by $\lambda x_{1} \ldots x_{n} \cdot \underline{0}$ \item $\operatorname{succ} \in \mathbb{N} \rightarrow \mathbb{N}$ is represented by $\lambda x_{1} f x . f\left(x_{1} f x\right)$ OR $\lambda x_{1} f x. x_{1} f (f x)$ \end{enumerate} \subsection{Representations} \subsubsection{Representing Composition} If total function f $\in \mathbb{N}^{n} \rightarrow \mathbb{N}$ is represented by F and total functions $g_{1}, \ldots, g_{n} \in \mathbb{N}^{m} \rightarrow \mathbb{N}$ are represented by $G_{1}, \dots, G_{n}$, then the composition ($f \circ\left(g_{1}, \ldots, g_{n}\right) \in \mathbb{N}^{m} \rightarrow \mathbb{N}$) is represented by $\lambda x_{1} \ldots x_{m} . F\left(G_{1} x_{1} \ldots x_{m}\right) \ldots\left(G_{n} x_{1} \ldots x_{m}\right)$ However, this does not necessarily work for partial functions \subsubsection{Representing Primitive Recursion} If $f \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ is represented by $\lambda$-term F and $g \in \mathbb{N}^{n+2} \rightarrow \mathbb{N}$ is represented by $\lambda$-term G, want to show $\lambda$-definability of the unique $h \in \mathbb{N}^{n+1} \rightarrow \mathbb{N}$ that satisfies h = $h=\Phi_{f, g}(h)$, where $\mathbf{\Phi}_{f, g} \in\left(\mathbb{N}^{n+1} \rightarrow \mathbb{N}\right) \rightarrow\left(\mathbb{N}^{n+1} \rightarrow \mathbb{N}\right)$ is given by: \begin{equation} \begin{aligned} h(\vec{a}, a)=& \text { if } a=0 \text { then } f(\vec{a}) \\ & \text { else } g(\vec{a}, a-1, h(\vec{a}, a-1)) \end{aligned} \end{equation} OR \begin{equation} \left\{\begin{array}{ll}{h(\vec{a}, 0)} & {=f(\vec{a})} \\ {h(\vec{a}, a+1)} & {=g(\vec{a}, a, h(\vec{a}, a))}\end{array}\right. \end{equation} \textbf{Strategy}: \begin{enumerate} \item Show that $\Phi_{f, g}$ is $\lambda$-definable \item Show that we can solve fixed point equations X = MX up to $\beta$-conversion in the $\lambda$-calculus \end{enumerate} \subsubsection{Representing Booleans} \begin{itemize} \item True $\triangleq \lambda xy.x$ \item False $\triangleq \lambda xy.y$ \item If $\triangleq \lambda f$ xy. f xy \end{itemize} \subsubsection{Representing Test-for-Zero} \begin{equation} Eq \; _{0} \triangleq \lambda x.x(\lambda y.False) \; True \end{equation} \subsubsection{Representing Ordered Pairs} $$Pair \triangleq \lambda x y f.f xy $$ $$Fst \triangleq \lambda f.f \; True$$ $$Snd \triangleq \lambda f.f \; False$$ \subsubsection{Representing Predecessor} Has to satisfy: \begin{enumerate} \item Pred $\vec{n+1} =_{\beta} \vec{n}$ \item Pred $\vec{0} =_{\beta} \vec{0}$ \end{enumerate} \begin{equation} Pred \triangleq \lambda y f x . Snd(y (G f)(Pair \; x \; x)) \end{equation} where \begin{equation} G \triangleq \lambda f p . 
Pair(f(Fst \; p))(Fst \; p) \end{equation} \subsubsection{Representing Primitive Recursion} If $f \in \mathbb{N}^{n} \rightarrow \mathbb{N}$ is represented by a $\lambda$-term F and $g \in \mathbb{N}^{n+2} \rightarrow \mathbb{N}$ is represented by a $\lambda$-term G, we want to show $\lambda$-definability of the unique $h \in \mathbb{N}^{n+1} \rightarrow \mathbb{N}$ that satisfies $h = \Phi_{f, g}(h)$ where $\Phi_{f, g} \in\left(\mathbb{N}^{n+1} \rightarrow \mathbb{N}\right) \rightarrow\left(\mathbb{N}^{n+1} \rightarrow \mathbb{N}\right)$ is given by: \begin{equation} \begin{array}{c}{\Phi_{f, g}(h)(\vec{a}, a) \triangleq \text { if } a=0 \text { then } f(\vec{a})} \\ {\text { else } g(\vec{a}, a-1, h(\vec{a}, a-1))}\end{array} \end{equation} \bigskip \textbf{Strategy} \begin{enumerate} \item Show that $\Phi_{f, g}$ is $\lambda$-definable; h is then represented by $$Y(\lambda z \overrightarrow{x} x . If( Eq_{0} \; x)(F \overrightarrow{x})(G \overrightarrow{x} ( Pred \; x)(z \overrightarrow{x} ( Pred \; x)))) $$ \item Show that we can solve fixed point equations (X = MX) up to $\beta$-conversion in the $\lambda$-calculus \end{enumerate} \bigskip \textbf{Every $f \in$ PRIM is $\lambda$-definable}: in order to expand this to all recursive functions, we have to consider how to represent minimisation. \subsection{Examples} \begin{enumerate} \item \textbf{Addition is $\lambda$-definable because it is represented by}: $P \triangleq \lambda x_{1} x_{2} . \lambda f x \cdot x_{1} f\left(x_{2} f x\right)$ $P \underline{m} \underline{n}=_{\beta} \lambda f x . \underline{m} f(\underline{n} fx)$ $P \underline{m} \underline{n}=_{\beta}\lambda f x . \underline{m} f\left(f^{n} x\right)$ $P \underline{m} \underline{n}=_{\beta} \lambda f x \cdot f^{m}\left(f^{n} x\right)$ $= \lambda f x . f^{m+n} x = \underline{m+n}$ (this is provable using induction on n) \end{enumerate} \subsection{Curry's Fixed Point Combinator Y} \begin{tabular}{ |c|c|c| } \hline \textit{Name} & \textbf{Naive Set Theory} & \textbf{$\lambda$ calculus} \\ \textbf{Russell Set} & $R \triangleq \{ x | \neg (x \in x) \} $ & $not \triangleq \lambda b. If \; b \; False \; True$ \\ & & $R \triangleq \lambda x . not (x x)$ \\ & & \\ \textbf{Russell's Paradox} & $R \in R \iff \neg (R \in R)$ & $RR =_{\beta} not (RR)$ \\ & & $Ynot =_{\beta} RR = (\lambda x. not(x x))(\lambda x . not (x x))$ \\ & & $Yf = (\lambda x. f(x x))(\lambda x . f(x x))$ \\ & & $Y = \lambda f . (\lambda x .f(xx))(\lambda x.f(xx))$ \\ \hline \end{tabular} \begin{equation} Y \triangleq \lambda f . (\lambda x . f(xx))(\lambda x.f(xx)) \end{equation} This satisfies $Y M \rightarrow (\lambda x . M(x x))(\lambda x . M(x x))$ $\rightarrow M((\lambda x . M(x x))(\lambda x . M(x x)))$ \bigskip Hence, $Y M \rightarrow M((\lambda x . M(x x))(\lambda x . M(x x))) \leftarrow M(Y M)$
\bigskip Therefore, for all $\lambda$-terms M: $\mathbf{Y} \boldsymbol{M}=_{\beta} \boldsymbol{M}(\mathbf{Y} \boldsymbol{M})$ \subsection{Turing's Fixed Point Combinator} $$A \triangleq \lambda xy.y(xxy)$$ $$\Theta \triangleq AA$$ $$\Theta M = AAM = (\lambda xy.y(xxy))AM \twoheadrightarrow M(AAM) = M(\Theta M)$$ \subsection{Representing Minimisation} \begin{equation} \mu ^{n} f(\overrightarrow{x}) = g(\overrightarrow{x}, 0) \end{equation} \begin{equation} g(\overrightarrow{x}, x) = if \; f(\overrightarrow{x}, x) = 0 \; then \; x \; else \; g(\overrightarrow{x}, x+1) \end{equation} $\mu ^{n} f$ can be expressed in terms of a fixed point equation: $\mu^{n} f(\vec{x}) \equiv g(\vec{x}, 0)$ where $g=\Psi_{f}(g)$ with $\Psi_{f} \in\left(\mathbb{N}^{n+1} \rightharpoonup \mathbb{N}\right) \rightarrow\left(\mathbb{N}^{n+1} \rightharpoonup \mathbb{N}\right)$ defined by: \begin{equation} \Psi_{f}(g)(\vec{x}, x) \equiv \text { if } f(\vec{x}, x)=0 \text { then } x \text { else } g(\vec{x}, x+1) \end{equation} If a function f has a totally defined $\mu ^{n}f$, then $\forall \overrightarrow{a} \in \mathbb{N}^{n}, \mu ^{n} f(\overrightarrow{a}) = g(\overrightarrow{a}, 0),$ with $g = \Psi _{f}(g)$ and $\Psi _{f}(g)(\overrightarrow{a}, a) = \; if(f(\overrightarrow{a}, a) = 0) \; then \; a \; else \; g(\overrightarrow{a}, a+1)$. Hence, if f is represented by the $\lambda$-term F, then $\mu ^{n}f$ is represented by \begin{equation} \lambda \vec{x} . Y\left(\lambda z \vec{x} x . \operatorname{If}\left(Eq_{0}(F \vec{x} x)\right) x\left(z \vec{x}(\operatorname{Succ} x)\right)\right) \vec{x} \; \underline{0} \end{equation} Hence, every recursive function is $\lambda$-definable, as every recursive function can be expressed in standard form as $f=g \circ\left(\mu^{n} h\right)$ for some $g, h \in \text { PRIM }$. \subsection{Computability} \textbf{Theorem}: A partial function is computable iff it is $\lambda$-definable. Prove this by showing we can: \begin{enumerate} \item Code $\lambda$-terms as numbers - ensuring that operations for constructing and deconstructing terms are RM computable \begin{enumerate} \item Fix an enumeration $x_{0}, x_{1}, ...$ of the set of variables \item $\ulcorner x_{i} \urcorner = \ulcorner [0, i] \urcorner$ \item $\ulcorner \lambda x_{i} . M \urcorner = \ulcorner [1, i, \ulcorner M \urcorner ] \urcorner$ \item $\ulcorner MN \urcorner = \ulcorner [2, \ulcorner M \urcorner , \ulcorner N \urcorner ] \urcorner$ \end{enumerate} \item Write a RM interpreter for $\beta$-reduction \end{enumerate} \end{document}
{ "alphanum_fraction": 0.6695128244, "avg_line_length": 50.0258118235, "ext": "tex", "hexsha": "3e59720b12173fd4fd02202fbf81740f2b05c835", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-12-16T20:13:00.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-03T16:24:03.000Z", "max_forks_repo_head_hexsha": "f3f0300cf698a6dc8c08d1bc08ae047f51123376", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ashwinahuja/Cambridge-Computer-Science-Tripos-Notes", "max_forks_repo_path": "Paper 6/Computation Theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f3f0300cf698a6dc8c08d1bc08ae047f51123376", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ashwinahuja/Cambridge-Computer-Science-Tripos-Notes", "max_issues_repo_path": "Paper 6/Computation Theory.tex", "max_line_length": 711, "max_stars_count": 13, "max_stars_repo_head_hexsha": "f3f0300cf698a6dc8c08d1bc08ae047f51123376", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ashwinahuja/Cambridge-Computer-Science-Tripos-Notes", "max_stars_repo_path": "Paper 6/Computation Theory.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T23:41:37.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-01T21:01:19.000Z", "num_tokens": 19720, "size": 60081 }
% Appendix Template \chapter{Deploying microservices in Kubernetes} % Main appendix title \label{AppendixB} % Change X to a consecutive letter; for referencing this appendix elsewhere, use \ref{AppendixX} As we stated in \autoref{Chapter6}, the whole system has been built to be deployed using Kubernetes YAML configuration files. However, the process of deploying a microservice was not described there. Here we describe how to deploy the Event Manager UI from scratch as an example. \section{Docker image} The first thing that needs to be done is to create a Docker image. This can be done by defining a Dockerfile for the project. Once that is done, we can create the image by running \texttt{docker build -t projectepic/eventmanager-ui .} in the Dockerfile folder. \begin{lstlisting}[] FROM python:3.6-alpine RUN mkdir /code WORKDIR /code # Install dependencies from requirements.txt ADD requirements.txt /code/ RUN pip install -r requirements.txt # Add app code to image ADD manage.py /code/ RUN mkdir /code/events ADD events /code/events RUN mkdir /code/eventmanager ADD eventmanager /code/eventmanager RUN mkdir /code/db # Collect static resources in image RUN python manage.py collectstatic --noinput # Expose port 80 from inside the container EXPOSE 80 # Add start script ADD start.sh /code/ # Add external mountable volume on the DB folder VOLUME ["/code/db"] # Define starting point ENTRYPOINT /code/start.sh \end{lstlisting} The base image is Python Alpine, which is a low-resource image for Python. Thanks to this, our new image is not as big as if we used the regular Python image. In addition, we include a volume to save state between restarts and an exposed port to access the server running inside the container. After the image is created, we need to add a tag to it so that we can decide which version to use on deployment. Once the image is created and tagged, we need to push it to an image registry service. The choice for this project has been DockerHub, but any other registry can be used as long as it is accessible by the Kubernetes cluster. \section{Kubernetes deployment YAML file} This is a YAML configuration file for the Event Manager UI. The important part is the specification section (\textit{spec}). There you set the number of replicas you want to be deployed and specify the template for each pod that will be deployed as part of the deployment. For the pod template you need to specify the image, the resources it will use and the environment variables. We also include a readiness probe, which will be executed on start-up to check whether or not the application in the container has started and is running smoothly. Finally, we declare a volume to be mounted on the pod at the db path defined in the Dockerfile, and we declare how to claim the volume by specifying a volume claim. This will ensure that the same volume is mounted between restarts, making sure our data is kept.
\begin{lstlisting}[float, floatplacement=H] apiVersion: extensions/v1beta1 kind: Deployment metadata: name: eventmanager-ui namespace: frontend spec: replicas: 1 template: metadata: labels: app: eventmanager-ui spec: terminationGracePeriodSeconds: 10 containers: - name: eventmanager image: projectepic/eventmanager-ui:1.1.8 ports: - containerPort: 80 resources: limits: cpu: 100m memory: 50Mi requests: cpu: 100m memory: 50Mi env: - name: KAFKA_SERVERS value: kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092 readinessProbe: httpGet: path: / port: 80 volumeMounts: - name: datadir mountPath: /code/db volumes: - name: datadir persistentVolumeClaim: claimName: eventmanager-db \end{lstlisting} To deploy we can use either the Kubectl command line properly configured or the web interface by uploading the configuration file. Both ways have the same effect.
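For instance, assuming the deployment above is saved in a file named \texttt{eventmanager-ui.yaml} (this file name is only illustrative), it can be applied from the command line with \texttt{kubectl apply -f eventmanager-ui.yaml}, and the resulting pods can then be inspected with \texttt{kubectl get pods -n frontend}.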
{ "alphanum_fraction": 0.7372673849, "avg_line_length": 40.84, "ext": "tex", "hexsha": "56c737d5d4e8545ab3aebc132e9481df256be921", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-28T21:51:56.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-08T15:13:46.000Z", "max_forks_repo_head_hexsha": "57109a3886f82eee1b34448b66b0d877720be524", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "casassg/thesis", "max_forks_repo_path": "Appendices/AppendixB.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "57109a3886f82eee1b34448b66b0d877720be524", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "casassg/thesis", "max_issues_repo_path": "Appendices/AppendixB.tex", "max_line_length": 821, "max_stars_count": 1, "max_stars_repo_head_hexsha": "57109a3886f82eee1b34448b66b0d877720be524", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "casassg/thesis", "max_stars_repo_path": "Appendices/AppendixB.tex", "max_stars_repo_stars_event_max_datetime": "2018-01-08T15:13:45.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-08T15:13:45.000Z", "num_tokens": 940, "size": 4084 }
% In accordance with "Systems and software engineering — Life cycle processes — Requirements engineering"
% (ISO/IEC/IEEE29148-2018) - https://standards.ieee.org/standard/29148-2018.html

\documentclass{scrreprt}

\usepackage{listings}
\usepackage{underscore}
\usepackage[bookmarks=true]{hyperref}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\hypersetup{
    pdftitle={Software Requirements Specification},
    pdfauthor={$<$Author$>$},                                   % author
    pdfsubject={TeX and LaTeX},                                 % subject of the document
    pdfkeywords={TeX, LaTeX, graphics, images, srs, software},  % list of keywords
    colorlinks=true,        % false: boxed links; true: colored links
    linkcolor=blue,         % color of internal links
    citecolor=black,        % color of links to bibliography
    filecolor=black,        % color of file links
    urlcolor=purple,        % color of external links
    linktoc=page            % only page is linked
}
\def\myversion{1.0 }
\date{}
\title{}

\begin{document}

\begin{flushright}
    \rule{14cm}{5pt}\vskip1cm
    {\bfseries
        \Huge{Software Requirements\\ Specification}\\
        \vspace{1.6cm}
        for\\
        \vspace{1.6cm}
        $<$Project Name$>$\\
        \vspace{1.6cm}
        \LARGE{Version \myversion approved}\\
        \vspace{1.6cm}
        Prepared by $<$Author$>$\\
        \vspace{1.6cm}
        $<$Company$>$\\
        \vspace{1.6cm}
        \today\\
    }
\end{flushright}

\tableofcontents

\chapter*{Revision History}

\begin{center}
    \begin{tabular}{|c|c|c|c|}
        \hline
        Name & Date & Reason For Changes & Version\\
        \hline
        $<$Rev Name 1$>$ & $<$Date 1$>$ & $<$Reason 1$>$ & $<$Ver 1$>$\\
        \hline
        $<$Rev Name 2$>$ & $<$Date 2$>$ & $<$Reason 2$>$ & $<$Ver 2$>$\\
        \hline
    \end{tabular}
\end{center}

\input{chapters/introduction.tex}
\input{chapters/references.tex}
\input{chapters/requirements.tex}
\input{chapters/verification.tex}
\input{chapters/appendices.tex}

\end{document}
{ "alphanum_fraction": 0.6403641882, "avg_line_length": 27.4583333333, "ext": "tex", "hexsha": "2d244cc363d068b137af587db6951d4e91b018aa", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-04-10T15:41:46.000Z", "max_forks_repo_forks_event_min_datetime": "2021-04-10T15:41:46.000Z", "max_forks_repo_head_hexsha": "3e11af717099a0856193af2e152d884e0974a968", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bonellia/srs-ieee-latex", "max_forks_repo_path": "main.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "3e11af717099a0856193af2e152d884e0974a968", "max_issues_repo_issues_event_max_datetime": "2021-04-10T15:47:49.000Z", "max_issues_repo_issues_event_min_datetime": "2021-04-10T15:47:49.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bonellia/srs-ieee-latex", "max_issues_repo_path": "main.tex", "max_line_length": 105, "max_stars_count": 3, "max_stars_repo_head_hexsha": "3e11af717099a0856193af2e152d884e0974a968", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bonellia/srs-ieee-latex", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-11T19:57:09.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-10T17:15:50.000Z", "num_tokens": 619, "size": 1977 }
% !TEX root = ../00_thesis.tex
%-------------------------------------------------------------------------------
\section{Summary}

%What we presented
Establishing a consistent methodology for the design of networking experiments and the analysis of their data is a crucial step towards a more rigorous and reproducible scientific activity. This chapter presented \triscale, the first concrete proposal in that direction.
% Key concept/novel idea
\triscale implements a methodology grounded in non-parametric statistics into a framework that assists scientists in designing experiments and automating the data analysis. \triscale ultimately improves the legibility of results and helps quantify the reproducibility of experiments, as highlighted in the case studies presented throughout the chapter.
% Takeaways
We expect \triscale's open availability to actively encourage its use by the networking community and promote better experimentation practices in the short term. The quest towards highly-reproducible networking experiments remains open, but we believe that \triscale represents an important stepping stone towards an accepted standard for experimental evaluations in networking.
{ "alphanum_fraction": 0.7782426778, "avg_line_length": 74.6875, "ext": "tex", "hexsha": "e5642f07a46751e576a39927137ee0aacd92bcef", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fd21e9f0cddeda91821eb061c9ab12df9f610da9", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "romain-jacob/doctoral-theis", "max_forks_repo_path": "20_TriScale/11_conclusions.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fd21e9f0cddeda91821eb061c9ab12df9f610da9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "romain-jacob/doctoral-theis", "max_issues_repo_path": "20_TriScale/11_conclusions.tex", "max_line_length": 216, "max_stars_count": null, "max_stars_repo_head_hexsha": "fd21e9f0cddeda91821eb061c9ab12df9f610da9", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "romain-jacob/doctoral-theis", "max_stars_repo_path": "20_TriScale/11_conclusions.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 207, "size": 1195 }
\documentclass{standalone}

\begin{document}

\subsection{Comparison with Manual Annotations}

To check the pipeline performance, I have compared the obtained segmentation with the manual annotation. To do that I have considered $5$ scans from the Sant Orsola dataset for which a ground truth was also available. This ground truth consists of a semi-automatic segmentation made with certified software and refined by an expert with more than $5$ years of experience. After that, the segmentation was validated by $5$ experts with at least $2$ years of experience. This segmentation process takes several days.

To compare annotation and pipeline segmentation, I have computed the \emph{sensitivity} and \emph{specificity}:

\paragraph{Sensitivity} refers to the ability to correctly detect ill areas. It is defined as the total number of voxels correctly classified as opacities (True Positives) over the total number of positives (True Positives + False Negatives):

\begin{equation}\label{eq:sensitivity}
    \mathrm{Sensitivity} = \frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives} + \mathrm{False\ Negatives}}
\end{equation}

\paragraph{Specificity} relates to the ability to correctly reject healthy areas. It is defined as the number of correctly rejected pixels (True Negatives) over the total number of healthy areas (True Negatives + False Positives):

\begin{equation}
    \mathrm{Specificity} = \frac{\mathrm{True\ Negatives}}{\mathrm{True\ Negatives} + \mathrm{False\ Positives}}
\end{equation}
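To make the two scores concrete, consider a toy example (the counts below are made up purely for illustration and do not come from the dataset): if a scan contains $100$ truly ill voxels of which the method labels $90$ correctly, and $10000$ healthy voxels of which $9900$ are correctly rejected, then the sensitivity is $90/(90+10) = 0.90$ and the specificity is $9900/(9900+100) = 0.99$.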
The results are displayed in \tablename\,\ref{tab:Measures}.

\begin{table}[h!]
    \centering
    \begin{tabular}{|c|c|c|c|c|}
        \hline
        \multirow{2}{*}{} & \multicolumn{2}{c|}{Predicted} & \multicolumn{2}{c|}{Annotation} \\
        \hline
         & Sensitivity & Specificity & Sensitivity & Specificity \\
        \hline
        Patient 1 & $0.412$ & $\sim 1.00$ & $0.676$ & $0.999$ \\
        Patient 2 & $0.399$ & $\sim 1.00$ & $0.698$ & $0.995$ \\
        Patient 3 & $0.570$ & $\sim 1.00$ & $0.653$ & $0.999$ \\
        Patient 4 & $0.512$ & $\sim 1.00$ & $0.325$ & $0.999$ \\
        Patient 5 & $0.628$ & $\sim 1.00$ & $0.974$ & $0.999$ \\
        \hline
    \end{tabular}
    \caption{Sensitivity and specificity for the pipeline segmentation and the manual annotation. As ground truth, a semi-automatic segmentation made and evaluated by $5$ experts with at least $2$ years of experience was used.}\label{tab:Measures}
\end{table}

The first thing we can notice is that both methods have a high specificity. That means that they rarely give positive results for healthy regions (lower type I error rate): there is a low probability of obtaining false positives.

The situation changes when we consider sensitivity. We can see that the annotation has achieved a better sensitivity than the pipeline. That means that it rarely gives negative results for GGO regions (lower type II error rate). However, this coefficient does not take into account the rate of false positives, which means we cannot ensure that the detected GGO areas are really diseased. This is in agreement with the fact that, generally, the operator tends to include large lesion areas. However, that is not always the case, since it depends on the operator that performs the segmentation (subjectivity of these techniques).

\begin{figure}[h!]
    \includegraphics[width=\linewidth, height=.25\textheight]{GTCOM2.png}
    \caption{Comparison between the ground truth (blue), the pipeline segmentation (pink) and the manual annotation (yellow). We can see that the segmentation obtained by the automatic pipeline is better than the one obtained by the manual annotation.}\label{fig:conf2}
\end{figure}

In \figurename\,\ref{fig:conf2} I have reported the results of the segmentation of Patient $4$. We can see the ground truth (blue), the pipeline segmentation (pink) and the annotation (yellow). In this case, we can see that both the pipeline and the annotation correctly identified the main lesion areas. However, we can see that the annotation is missing a lot of lesions. On the other hand, the segmentation achieved by the pipeline seems to correctly segment all of the lesion areas.

\begin{figure}[h!]
    \includegraphics[width=\linewidth]{GTCOMP1.png}
    \caption{Comparison between the ground truth (blue), the pipeline segmentation (pink) and the manual annotation (yellow). We can see that the GGO and CS areas are correctly identified.}\label{fig:conf1}
\end{figure}

In \figurename\,\ref{fig:conf1} I have reported the segmentation results for the third patient. We can see the ground truth (blue), the pipeline segmentation (pink) and the annotation (yellow). Also in this case, the pipeline seems to correctly identify the opacity. Here the annotation also seems to agree with the ground truth.

\begin{figure}[h!]
    \centering
    \includegraphics[width=.8\linewidth]{PATIENT1.png}
    \caption{Comparison between the gold standard segmentation (blue) and the pipeline results (pink) for an axial, sagittal and coronal view of a patient with a low involvement of lung volume. We can see that the main lesion areas are identified, even if an underestimation of the total volume is present together with some small misclassified points.}\label{fig:pat1}
\end{figure}

Up to now, I have considered only two cases in which the GGO and CS regions are well defined, with high contrast with respect to the healthy lung volume. In \figurename\,\ref{fig:pat1} I have reported axial, sagittal and coronal views of the ground truth (blue) and the pipeline segmentation (pink) for the first patient. This patient presents a low volume of GGO and CS. Moreover, the identification is difficult due to the low contrast between lesion areas and healthy lung volume. As we can see, the lesion areas are correctly identified even if some misclassified regions are present.

In the end, I have to point out that the annotation and the ground truth, since they are obtained by semi-automatic methods, require trained personnel and several hours (the former) or days (the latter). Moreover, the automatic method has the advantages of speed and independence from an external operator.

\end{document}
{ "alphanum_fraction": 0.7509001637, "avg_line_length": 84.8611111111, "ext": "tex", "hexsha": "d4c68a21159020bc3912b3720c780e145c279727", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "RiccardoBiondi/SCDthesis", "max_forks_repo_path": "tex/Chapter3/Accuracy/GoldStandard.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "RiccardoBiondi/SCDthesis", "max_issues_repo_path": "tex/Chapter3/Accuracy/GoldStandard.tex", "max_line_length": 572, "max_stars_count": null, "max_stars_repo_head_hexsha": "2506df1995e5ba239b28d2ca0b908ba55f81761b", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "RiccardoBiondi/SCDthesis", "max_stars_repo_path": "tex/Chapter3/Accuracy/GoldStandard.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1562, "size": 6110 }
\documentclass[12pt]{article}
\usepackage{physics}

\title{Rotating Wave Approximation for QOC}
\author{
        Ben Rosand \\
                IBM Quantum \\
        Quantum Intern -- Pulse Team\\
        Yorktown Heights, NY
}
\date{\today}

\begin{document}
\maketitle

\begin{abstract}
This is the paper's abstract \ldots
\end{abstract}

\section{Introduction}
This is time for all good men to come to the aid of their party!

\paragraph{Outline}
The remainder of this article is organized as follows.
Section~\ref{previous work} gives an account of previous work.
Our new and exciting results are described in Section~\ref{results}.
Finally, Section~\ref{conclusions} gives the conclusions.

\section{Previous work}\label{previous work}
A much longer \LaTeXe{} example was written by Gil~\cite{Gil:02}.

\section{Results}\label{results}
In this section we describe the results.

\section{Rotating Wave Approximation}\label{RWA}
\subsection{Single rotation}

We follow the template laid out in [CITE Fischer] to perform the rotating wave approximation for an N-level system with one control field. For starters, we take the laboratory frame Hamiltonian [CITE FISCHER]
\begin{equation}\large
	H_{lab} = H_0 + \sum^M_{m=1}\Omega_m \cos(\omega_m t + \phi_m) \sum_{n'>n} g_{nn'}\sigma^x_{nn'}
\end{equation}
This generalized Hamiltonian maps perfectly to our systems, with $H_0$ representing all of our time-independent elements. $g_{nn'}$ represents the prefactors on the various drive terms (represented in our printed Hamiltonians as $\Omega_n$). Sometimes we can ignore certain transitions because they are off-resonance. I would think this would apply to the transitions to the 2 state for our systems, but need to DOUBLE CHECK.

The first replacement is the basic RWA, replacing the driving terms with the terms from the RWA (FISCHER 3.25). Also let $S = \{(n,n')\}$ be the set of transitions which are on resonance and allowed.
\begin{equation}\label{F325}
	H_{lab} \approx H_0 + \frac{\Omega}{2} \sum_S g_{nn'} \left(\cos(\omega t + \phi)\sigma^x_{nn'} - \sin(\omega t + \phi)\sigma^y_{nn'}\right)
\end{equation}

Our goal here is to get rid of the oscillation $\omega t$. Again from Fischer, the desired transformation is:
\begin{equation}
	\ket{\psi}_{rot} = e^{-iR} \ket{\psi}_{lab}
\end{equation}
We take the derivative and obtain an equation for the transformed Hamiltonian:
\begin{equation}\label{eq:4}
	H_{rot} = e^{iR} H_{lab} e^{-iR} + \derivative{R}{t}
\end{equation}

From this equation, our goal is to find $R$. The core of the RWA is the following transformation:
\begin{equation}\label{eq:5}
	e^{-iR} \left \{\cos(\omega t + \phi)\sigma^x_{nn'} - \sin(\omega t + \phi)\sigma^y_{nn'}\right \} e^{iR} = \cos (\phi) \sigma^x_{nn'} - \sin(\phi)\sigma^y_{nn'}
\end{equation}
This transformation is applied over the set $S$ in the sum in \eqref{F325}. The final step of the transformation is to find $R$ such that these transformations hold. With that $R$ in hand, we can see that \eqref{eq:4} evaluates to
\begin{equation}\label{eq:gen_trans}
	H_{rot} = H_0 + \derivative{R}{t} + \frac{1}{2}\Omega \sum_S g_{nn'} \left \{ \cos{\phi}\, \sigma^x_{nn'} - \sin{\phi}\, \sigma^y_{nn'} \right \}
\end{equation}
This is the rotating frame Hamiltonian, the key difference being the lack of oscillation at $\omega$ (the carrier frequency), which is moved into the ``generalised detuning term'' $H_0 + \derivative{R}{t}$. Note that in the two-level case this term reduces to $\frac{1}{2} \delta \omega\, \sigma^z$.
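As a concrete illustration of that last remark, consider a single two-level transition with $g_{01} = 1$, and suppose the bare Hamiltonian is $H_0 = -\frac{1}{2}\omega_0 \sigma^z$ (the qubit frequency $\omega_0$ and this sign convention are assumptions made here for the sketch, not taken from Fischer). One choice that satisfies \eqref{eq:5} is $R = \frac{1}{2}\omega t\, \sigma^z$, since conjugating $\sigma^x$ and $\sigma^y$ by $e^{\pm i\theta\sigma^z/2}$ rotates them about the $z$-axis by $\theta$. Then
\begin{equation}
	\derivative{R}{t} = \frac{\omega}{2}\sigma^z
	\qquad \Rightarrow \qquad
	H_0 + \derivative{R}{t} = \frac{1}{2}\left(\omega - \omega_0\right)\sigma^z \equiv \frac{1}{2}\delta\omega\,\sigma^z ,
\end{equation}
so \eqref{eq:gen_trans} reduces to $H_{rot} = \frac{1}{2}\delta\omega\,\sigma^z + \frac{\Omega}{2}\left(\cos\phi\,\sigma^x - \sin\phi\,\sigma^y\right)$: a static detuning term plus a drive whose axis in the $xy$ plane is set by the phase $\phi$.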
What does equation \eqref{eq:gen_trans} look like for IBM Q devices? All variables can be filled in except for $R$, which can be solved for through another process which will be shown later.
\begin{align}
	& H_0 = H_d + H_{coupling} + H_{\text{occupation operator}} \\
	& \Omega = \Omega_{d,i} \qquad \text{Note that this transformation is for only one drive} \\
	& g_{nn'} = 1
\end{align}

\textit{$g_{nn'}$ is technically contained in $\Omega$; because the other transitions are off resonance, we are only concerned with the 0 to 1 transition.}

\subsection{Determining transformation matrix R}
The matrix $R$ is derived from equation \eqref{eq:5}. Two relationships that Fischer uses to make this derivation include the fact that $R$ and $\sigma^z_{nn'}$ commute.

\subsection{Multiple drives and rotations}

\section{New RWA}

\section{Conclusions}\label{conclusions}
We worked hard, and achieved very little.

\bibliographystyle{abbrv}
\bibliography{main}

\end{document}
{ "alphanum_fraction": 0.7187359928, "avg_line_length": 40.5636363636, "ext": "tex", "hexsha": "2101089a386866340cbb74278fad32d881a52e8a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5152c642cce7e65c6cd583d9ba539a8f7f7f142a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "brosand/qiskit-terra", "max_forks_repo_path": "oct-qiskit-pulse/reports/RWA/rwa.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5152c642cce7e65c6cd583d9ba539a8f7f7f142a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "brosand/qiskit-terra", "max_issues_repo_path": "oct-qiskit-pulse/reports/RWA/rwa.tex", "max_line_length": 148, "max_stars_count": null, "max_stars_repo_head_hexsha": "5152c642cce7e65c6cd583d9ba539a8f7f7f142a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "brosand/qiskit-terra", "max_stars_repo_path": "oct-qiskit-pulse/reports/RWA/rwa.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1274, "size": 4462 }
\section{Top Publishing Results for Kelly}
{ "alphanum_fraction": 0.7954545455, "avg_line_length": 14.6666666667, "ext": "tex", "hexsha": "569a04937d1f29a47444564f10fc64148fc27520", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c41e4ced8365cf15b3f7709fa587e67af4a595c2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "dfhawthorne/tex_projects", "max_forks_repo_path": "Organizational_Inertia/authorkelly.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c41e4ced8365cf15b3f7709fa587e67af4a595c2", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "dfhawthorne/tex_projects", "max_issues_repo_path": "Organizational_Inertia/authorkelly.tex", "max_line_length": 42, "max_stars_count": null, "max_stars_repo_head_hexsha": "c41e4ced8365cf15b3f7709fa587e67af4a595c2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "dfhawthorne/tex_projects", "max_stars_repo_path": "Organizational_Inertia/authorkelly.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 9, "size": 44 }
\documentclass{article}
\usepackage[fancyhdr,pdf]{latex2man}

\input{common.tex}

\begin{document}

\begin{Name}{3}{unw\_get\_proc\_info}{David Mosberger-Tang}{Programming Library}{unw\_get\_proc\_info}unw\_get\_proc\_info -- get info on current procedure
\end{Name}

\section{Synopsis}

\File{\#include $<$libunwind.h$>$}\\

\Type{int} \Func{unw\_get\_proc\_info}(\Type{unw\_cursor\_t~*}\Var{cp}, \Type{unw\_proc\_info\_t~*}\Var{pip});\\

\section{Description}

The \Func{unw\_get\_proc\_info}() routine returns auxiliary information about the procedure that created the stack frame identified by argument \Var{cp}. The \Var{pip} argument is a pointer to a structure of type \Type{unw\_proc\_info\_t} which is used to return the information. The \Type{unw\_proc\_info\_t} has the following members:
\begin{description}
\item[\Type{unw\_word\_t} \Var{start\_ip}] The address of the first instruction of the procedure. If this address cannot be determined (e.g., due to lack of unwind information), the \Var{start\_ip} member is cleared to 0. \\
\item[\Type{unw\_word\_t} \Var{end\_ip}] The address of the first instruction \emph{beyond} the end of the procedure. If this address cannot be determined (e.g., due to lack of unwind information), the \Var{end\_ip} member is cleared to 0. \\
\item[\Type{unw\_word\_t} \Var{lsda}] The address of the language-specific data-area (LSDA). This area normally contains language-specific information needed during exception handling. If the procedure has no such area, this member is cleared to 0. \\
\item[\Type{unw\_word\_t} \Var{handler}] The address of the exception handler routine. This is sometimes called the \emph{personality} routine. If the procedure does not define a personality routine, the \Var{handler} member is cleared to 0. \\
\item[\Type{unw\_word\_t} \Var{gp}] The global-pointer of the procedure. On platforms that do not use a global pointer, this member may contain an undefined value. On all other platforms, it must be set either to the correct global-pointer value of the procedure or to 0 if the proper global-pointer cannot be obtained for some reason. \\
\item[\Type{unw\_word\_t} \Var{flags}] A set of flags. There are currently no target-independent flags. For the IA-64 target, the flag \Const{UNW\_PI\_FLAG\_IA64\_RBS\_SWITCH} is set if the procedure may switch the register-backing store.\\
\item[\Type{int} \Var{format}] The format of the unwind-info for this procedure. If the unwind-info consists of dynamic procedure info, \Var{format} is equal to \Const{UNW\_INFO\_FORMAT\_DYNAMIC}. If the unwind-info consists of a (target-specific) unwind table, it is equal to \Const{UNW\_INFO\_FORMAT\_TABLE}. All other values are reserved for future use by \Prog{libunwind}. This member exists for use by the \Func{find\_proc\_info}() call-back (see \Func{unw\_create\_addr\_space}(3)). The \Func{unw\_get\_proc\_info}() routine may return an undefined value in this member. \\
\item[\Type{int} \Var{unwind\_info\_size}] The size of the unwind-info in bytes. This member exists for use by the \Func{find\_proc\_info}() call-back (see \Func{unw\_create\_addr\_space}(3)). The \Func{unw\_get\_proc\_info}() routine may return an undefined value in this member.\\
\item[\Type{void~*}\Var{unwind\_info}] The pointer to the unwind-info. If no unwind info is available, this member must be set to \Const{NULL}. This member exists for use by the \Func{find\_proc\_info}() call-back (see \Func{unw\_create\_addr\_space}(3)). The \Func{unw\_get\_proc\_info}() routine may return an undefined value in this member.\\
\end{description}
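The fragment below is an illustrative sketch of how \Func{unw\_get\_proc\_info}() is typically used when unwinding the calling thread's own stack (the helper name is arbitrary and error handling is abbreviated):

\begin{verbatim}
#include <libunwind.h>
#include <stdio.h>

/* Illustrative sketch: walk the caller's stack and print the
   bounds of each procedure as reported by unw_get_proc_info(). */
void
show_procedures (void)
{
  unw_context_t uc;
  unw_cursor_t cursor;
  unw_proc_info_t pi;

  unw_getcontext (&uc);            /* capture current machine state */
  unw_init_local (&cursor, &uc);   /* set up cursor for local unwinding */
  while (unw_step (&cursor) > 0)
    {
      if (unw_get_proc_info (&cursor, &pi) < 0)
        break;                     /* e.g., UNW_ENOINFO */
      printf ("start_ip=%#lx end_ip=%#lx handler=%#lx\n",
              (unsigned long) pi.start_ip,
              (unsigned long) pi.end_ip,
              (unsigned long) pi.handler);
    }
}
\end{verbatim}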
Note that for the purposes of \Prog{libunwind}, the code of a procedure is assumed to occupy a single, contiguous range of addresses. For this reason, it is always possible to describe the extent of a procedure with the \Var{start\_ip} and \Var{end\_ip} members. If a single function/routine is split into multiple, discontiguous pieces, \Prog{libunwind} will treat each piece as a separate procedure.

\section{Return Value}

On successful completion, \Func{unw\_get\_proc\_info}() returns 0. Otherwise the negative value of one of the error codes below is returned.

\section{Thread and Signal Safety}

\Func{unw\_get\_proc\_info}() is thread-safe. If cursor \Var{cp} is in the local address-space, this routine is also safe to use from a signal handler.

\section{Errors}

\begin{Description}
\item[\Const{UNW\_EUNSPEC}] An unspecified error occurred.
\item[\Const{UNW\_ENOINFO}] \Prog{Libunwind} was unable to locate unwind-info for the procedure.
\item[\Const{UNW\_EBADVERSION}] The unwind-info for the procedure has a version or format that is not understood by \Prog{libunwind}.
\end{Description}
In addition, \Func{unw\_get\_proc\_info}() may return any error returned by the \Func{access\_mem}() call-back (see \Func{unw\_create\_addr\_space}(3)).

\section{See Also}

\SeeAlso{libunwind(3)},
\SeeAlso{unw\_create\_addr\_space(3)},
\SeeAlso{unw\_get\_proc\_name(3)}

\section{Author}

\noindent
David Mosberger-Tang\\
Email: \Email{[email protected]}\\
WWW: \URL{http://www.nongnu.org/libunwind/}.
\LatexManEnd

\end{document}
{ "alphanum_fraction": 0.7418001526, "avg_line_length": 42.2903225806, "ext": "tex", "hexsha": "72621f1a6510f7c30f56d1910c2fa10c7f3fa01d", "lang": "TeX", "max_forks_count": 3629, "max_forks_repo_forks_event_max_datetime": "2022-03-31T21:52:28.000Z", "max_forks_repo_forks_event_min_datetime": "2019-11-25T23:29:16.000Z", "max_forks_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pyracanda/runtime", "max_forks_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_get_proc_info.tex", "max_issues_count": 37522, "max_issues_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245", "max_issues_repo_issues_event_max_datetime": "2022-03-31T23:58:30.000Z", "max_issues_repo_issues_event_min_datetime": "2019-11-25T23:30:32.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pyracanda/runtime", "max_issues_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_get_proc_info.tex", "max_line_length": 155, "max_stars_count": 12278, "max_stars_repo_head_hexsha": "72bee25ab532a4d0636118ec2ed3eabf3fd55245", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pyracanda/runtime", "max_stars_repo_path": "src/coreclr/pal/src/libunwind/doc/unw_get_proc_info.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:12:00.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-29T17:11:33.000Z", "num_tokens": 1570, "size": 5244 }
% To compile single chapters put a % symbol in front of "\comment" and "}%end of comment" below
% and take off the % symbol from "\end{document}" at the bottom line. Undo before compiling
% the complete thesis

%%Note: You can only use the \section command; you are not allowed, per TTU Graduate School, to use
%%the \subsection command for higher level subheadings. At most level 2 subheadings are allowed.

\chapter{Experiments and Results}
\label{Experiments and Results}

\section[Simulation Overview]{Testing Overview}

Two machine learning approaches are utilized in Chapter \ref{DASP Device Classification Chapter} to determine the performance and applicability of DASP-originated features for URE characterization and classification. A feature vector approach using LDA and k-NN learning algorithms was applied to the 1-D statistical feature vectors derived from DASP images, as described in Section \ref{Statistical Feature Extraction}, while a CNN deep learning algorithm was applied directly to the DASP images to perform image recognition based learning. Figure \ref{fig:dasp_lda_knn_process_flow} provides a detailed diagram of the DASP feature vector processing flow from URE signal capture to the LDA and k-NN learning algorithms.

\begin{figure}[tb]
	\includegraphics[width=\textwidth]{./misc_graphics/dasp_lda_knn_process_flow.jpg}
	\centering
	\caption{Flow diagram of the DASP processing flow from URE signal capture through the LDA and k-NN learning process.}
	\label{fig:dasp_lda_knn_process_flow}
\end{figure}

Figure \ref{fig:dasp_lda_knn_process_flow} outlines the DASP processing flow used to generate feature vectors for LDA and k-NN testing and training. All URE signal captures were initially processed through the DASP transforms, with up to four images extracted from each DASP transform: the raw scaled image, the scatter peak detector image, the LoG edge detector image, and the radon line detector image. Statistical feature vectors were then extracted from the DASP images by first segmenting the image and then performing row and column summing to obtain statistical sample vectors. The statistical measurements were then concatenated into a 1-D feature vector per sample and normalized by their z-score across all samples. The normalized feature vectors were evaluated through the LDA and k-NN learning processes.

Figure \ref{fig:dasp_cnn_process_flow} provides an overview of the DASP processing flow for the CNN learning process. Instead of image segmentation and feature vector extraction, the CNN learner only requires an image input for learning. Similar to the feature vector processing flow, the CNN process generates up to four images for each DASP transform: the raw scaled image, the scatter peak detector image, the LoG edge detector image, and the radon line detector image. The resulting DASP images were saved as individual Tag Image File Format (TIFF) files for cataloging, labeling, training, and testing with the CNN learner.
\begin{figure}[tb]
	\includegraphics[width=\textwidth]{./misc_graphics/dasp_cnn_process_flow.jpg}
	\centering
	\caption{Flow diagram of the DASP processing flow from URE signal capture through the CNN learning process.}
	\label{fig:dasp_cnn_process_flow}
\end{figure}

\section[DASP Processes and Transforms]{DASP Processes and Transforms}

Because no \emph{a priori} knowledge of the test devices' URE characteristics was taken into account in the DASP evaluation process, the DASP algorithms, transforms, and parameter settings were selected heuristically to include as much of the URE spectrum as could reasonably be achieved given a $2$~MS/s collection system and a one-second capture time per device sample. Table \ref{tab:dasp_config_parameters} provides a listing of the DASP algorithms and transforms used in the feature and image generation processes, along with their respective parameter settings. All DASP algorithms were evaluated, including the SCAP auto-covariance vector, $\mathbf{S}_F$, while it should also be noted that not all combinations of DASP algorithms and image processing transforms were utilized. In addition, the SCAP auto-covariance vector was not utilized by the CNN learner because it does not result in a 2-D image.

\begin{table}[tb]
	\caption{Table of all DASP algorithms and image transforms used for image generation and feature extraction.}
	\centering
	\begin{tabular}{cc|c}
		\hline
		DASP Algorithm & Image Transform & Configuration Parameters\\
		\hline
		CMASP & Array & $f_c = 10000$\\
		CMASP & Edge Array & $B_c = 9500$ \\
		CMASP & Radon Array & $f_m = 1000$ \\
		 & & $B_m = 950$ \\
		\hline
		FASP & Array & $N = 1000$ \\
		\hline
		HASP-F & Array & $f_c = 1000$ \\
		HASP-F & Edge Array & $B = 1990$ \\
		HASP-F & Radon Array & \\
		\hline
		HASP-D & Array & $f_c = 1000$ \\
		 & & $B = 1990$ \\
		\hline
		MASP (Low) & Array & $N = 10000$\\
		MASP (Low) & Edge Array & $p = 0.02$ \\
		MASP (Low) & Radon Array & \\
		MASP (Low) & Scatter Plot &\\
		\hline
		MASP (High) & Array & $N = 1000$ \\
		MASP (High) & Edge Array & $p = 0.02$ \\
		MASP (High) & Radon Array & \\
		MASP (High) & Scatter Plot & \\
		\hline
		SCAP & Array & $f_c = 1000$ \\
		SCAP & Cross-Covariance Vector & $B = 1990$ \\
		\hline
	\end{tabular}
	\label{tab:dasp_config_parameters}
\end{table}

Table \ref{tab:dasp_config_parameters} provides the configuration parameters for each DASP algorithm, based upon their specific input requirements. Two sets of MASP images were generated with different settings to characterize low and high frequency modulations, with the higher frequency MASP algorithm trading off modulation frequency resolution. The Edge and Radon Arrays were not applied to the FASP, HASP-D, and SCAP images because the additional benefit was not obvious given the multitude of curved lines present in the HASP-D and SCAP images and the large number of vertical lines in a typical FASP image.

\section[Statistical Feature Extraction]{Statistical Feature Extraction}

Statistical features were extracted from the whole and $3 \times 3$ segmented DASP images in Table \ref{tab:dasp_config_parameters} using the whole image, the summed row vector, and the summed column vector.
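A minimal MATLAB sketch of the per-segment computation is given below (an illustrative reconstruction, not the original dissertation code; the function name is ours, and \texttt{skewness} and \texttt{kurtosis} assume the Statistics and Machine Learning Toolbox):

\begin{verbatim}
function Xs = segment_features(seg)
% SEGMENT_FEATURES  Illustrative sketch: variance, skewness, and kurtosis
% of a DASP image segment, its summed rows, and its summed columns,
% concatenated into a 1-D feature vector.
    w = seg(:);          % whole-segment samples
    r = sum(seg, 2);     % summed row vector
    c = sum(seg, 1).';   % summed column vector
    stats = @(v) [var(v), skewness(v), kurtosis(v)];
    Xs = [stats(w), stats(r), stats(c)];   % 9 features per segment
end
\end{verbatim}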
The extracted features for each image segment were concatenated into a single feature vector
\begin{equation}
	\textit{\textbf{X}}_s = [\sigma^{2}_{\omega}, \gamma_{\omega}, \kappa_{\omega}, \sigma^{2}_{\rho}, \gamma_{\rho}, \kappa_{\rho}, \sigma^{2}_{\gamma}, \gamma_{\gamma}, \kappa_{\gamma}]
	\label{eq:featureeq}
\end{equation}
where $\omega$ denotes the whole segment features, $\rho$ the summed row features, and $\gamma$ the summed column features, resulting in $9$ features for each of the DASP image segments. The SCAP auto-covariance vector only provides a single vector for feature extraction and therefore only yielded three features, $[\sigma^{2}, \gamma, \kappa]$. Finally, the z-score for each feature was calculated across all data sets, resulting in a normalized feature set that was utilized for subsequent training and classification analysis.

The image segment feature vectors, $\textit{\textbf{X}}_s$, were then concatenated to form a full feature vector
\begin{equation}
	\textit{\textbf{X}} = [\textit{\textbf{X}}_1, \textit{\textbf{X}}_2, \ldots, \textit{\textbf{X}}_9, \textit{\textbf{X}}_{10}]
	\label{eq:featureeq_cat}
\end{equation}
for each DASP image, where $\textit{\textbf{X}}_1$ is the top image segment and $\textit{\textbf{X}}_2$ through $\textit{\textbf{X}}_{10}$ are the $9$ image segments comprised of the $3 \times 3$ grid segmentation. The concatenation of the image segment feature vectors resulted in a total of $9 \times 10 = 90$ features per DASP image, except for the SCAP auto-covariance feature vector, which only returned three features.

\section[DASP Image Store]{DASP Image Store}

The MATLAB\textsuperscript{\textregistered}~Convolutional Neural Network implementation, including supporting tools such as the \textit{imageDataStore} command, was designed to seamlessly catalog, label, train, and test on image files. To simplify the CNN learning process, each DASP image was saved as a \textit{.tif} file using the process in Algorithm \ref{alg:tiffimalg}. The TIFF image generation algorithm is identical to the image scaling algorithm, Algorithm \ref{alg:imscalealg}, with the addition of the $16$-bit scaling, image resizing, and file saving steps. Each TIFF file was stored in its own directory structure within a labeled folder for class indexing.

\begin{algorithm}
	\caption{TIFF Image Generation Algorithm}
	\label{alg:tiffimalg}
	\scriptsize
	\begin{algorithmic}[1]
		\Require~~
		\Statex $\mathbf{I}$ - Input Image
		\Statex $\mathbf{I_b}$ - Background Image
		\Ensure~~
		\Statex $\mathbf{I_s}$ - Saved TIFF Image
		\Statex
		\State $\mathbf{I_s} \gets \mathbf{I} - \mathbf{I_b}$
		\State $\mathbf{I_s} \leq 0 \gets 0$
		\State $\mathbf{I_s} \gets $ STANDARD DEVIATION FILTER of $\mathbf{I_s}$
		\State $\mathbf{I_s} \gets \log{\mathbf{I_s}}$
		\State $\mathbf{I_s} \gets 2^{16} \times \frac{\mathbf{I_s} - \min{\mathbf{I_s}}}{\max{\mathbf{I_s}} - \min{\mathbf{I_s}}}$
		\State RESIZE $\mathbf{I_s}$ to $500 \times 500$
		\State SAVE $\mathbf{I_s}$ as $16$-bit TIFF Image file
	\end{algorithmic}
\end{algorithm}
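For reference, Algorithm \ref{alg:tiffimalg} can be sketched in MATLAB roughly as follows (again an illustrative reconstruction rather than the original code; \texttt{stdfilt}, \texttt{imresize}, and \texttt{imwrite} assume the Image Processing Toolbox):

\begin{verbatim}
function save_dasp_tiff(I, Ib, filename)
% SAVE_DASP_TIFF  Illustrative sketch of the TIFF generation steps:
% background subtraction, clipping, standard deviation filtering,
% log compression, 16-bit scaling, resizing, and saving.
    Is = I - Ib;                        % subtract background image
    Is(Is <= 0) = 0;                    % clip negative values
    Is = stdfilt(Is);                   % standard deviation filter
    Is = log(Is + eps);                 % log scale (eps avoids log(0))
    Is = (Is - min(Is(:))) ./ (max(Is(:)) - min(Is(:)));
    Is = uint16((2^16 - 1) .* Is);      % scale to the 16-bit range
    Is = imresize(Is, [500 500]);       % resize to 500 x 500
    imwrite(Is, filename);              % save as a 16-bit TIFF file
end
\end{verbatim}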
{ "alphanum_fraction": 0.7500265816, "avg_line_length": 83.2300884956, "ext": "tex", "hexsha": "ed743f1942e476dcd34db184c7c1618773f28519", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "argodev/learn", "max_forks_repo_path": "research/dissertation/ch06_experiments.tex", "max_issues_count": 15, "max_issues_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_issues_repo_issues_event_max_datetime": "2022-03-11T23:21:02.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-28T22:25:10.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "argodev/learn", "max_issues_repo_path": "research/dissertation/ch06_experiments.tex", "max_line_length": 912, "max_stars_count": null, "max_stars_repo_head_hexsha": "d815beb9c1f8fa3dd8cd917640ebcca5822205c3", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "argodev/learn", "max_stars_repo_path": "research/dissertation/ch06_experiments.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2511, "size": 9405 }
\newpage \section*{Appendix B} \label{app:B} % tabelle degli esperimenti ordine SRST GSP SP \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 6 & - & 218.33 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 3739 & 63 \\ 1 & 3 & 3757 & 57 \\ \hline \end{tabular} \end{table} % \begin{multicols}{2} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 6 & 3 & {\bf 194.52} \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 3286 & 48 \\ 1 & 2 & 3516 & 51 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 6 & 3 & 177 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2970 & 47 \\ 1 & 2 & 3295 & 52 \\ \hline \end{tabular} \end{table} % \end{multicols} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 6 & 5 & 142 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 3074 & 47 \\ 1 & 1 & 2355 & 38 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 6 & 5 & 138.98 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2847 & 43 \\ 1 & 1 & 2355 & 35 \\ \hline \end{tabular} \end{table} % stesso setup ma con 4 robot \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 6 & - & 124.52 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2060 & 31 \\ 1 & 2 & 2337 & 59 \\ 2 & 1 & 2434 & 38 \\ 3 & 1 & 1949 & 40 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 6 & 3 & 117.44 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 1 & 1860 & 32 \\ 1 & 1 & 1853 & 31 \\ 2 & 1 & 1675 & 40 \\ 3 & 1 & 1688 & 40 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 6 & 3 & {\bf 115.28} \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 1 & 1897 & 30 \\ 1 & 1 & 1854 & 37 \\ 2 & 1 & 1647 & 35 \\ 3 & 1 & 1412 & 32 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 6 & 5 & 93.4 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 1 & 1753 & 32 \\ 1 & 1 & 1960 & 35 \\ 2 & 1 & 2414 & 34 \\ 3 & {\bf 0} & (627) & (15) \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 6 & 5 & {\bf 91.8} \\ \hline {\bf robot} & {\bf processed task} & {\bf 
distance} & {\bf interference} \\ \hline 0 & 1 & 1105 & 29 \\ 1 & 1 & 2407 & 37 \\ 2 & 1 & 1959 & 33 \\ 3 & {\bf 0} & (715) & (24) \\ \hline \end{tabular} \end{table} % -------- \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 9 & - & 292.24 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 5 & 5631 & 93 \\ 1 & 4 & 4772 & 78 \\\hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 9 & 3 & 265.72 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 4578 & 69 \\ 1 & 3 & 4405 & 74 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 9 & 3 & 240.74 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 4412 & 79 \\ 1 & 3 & 4571 & 82 \\ \hline \end{tabular} \end{table} % ----> 9 task 4 robot C = 3 \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 9 & 5 & 232.84 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 4207 & 71 \\ 1 & 3 & 3875 & 67 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 9 & 5 & {\bf 168.34} \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2970 & 45 \\ 1 & 2 & 3295 & 56 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 9 & - & 178.55 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 3582 & 64 \\ 1 & 3 & 2833 & 55 \\ 2 & 2 & 2635 & 46 \\ 3 & 1 & 1973 & 43 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 9 & 3 & 152.55 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2921 & 61 \\ 1 & 2 & 2167 & 49 \\ 2 & 1 & 1994 & 39 \\ 3 & 1 & 1718 & 38 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 9 & 3 & 134.23 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2826 & 52 \\ 1 & 1 & 1960 & 35 \\ 2 & 1 & 2135 & 39 \\ 3 & 1 & 1809 & 36 \\ \hline \end{tabular} \end{table} % c=5 9 task \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 9 & 5 & 134.23 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 2785 & 52 \\ 1 & 1 & 1960 & 35 \\ 2 & 1 & 2135 & 39 \\ 3 & 1 & 1514 & 36 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \sps & 9 & 5 & {\bf 93.05} \\ 
\hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 2 & 1374 & 42 \\ 1 & 1 & 2067 & 35 \\ 2 & 1 & 2054 & 36 \\ 3 & {\bf 0} & (626) & (16) \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 21 & - & 629 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 11 & 12091 & 165 \\ 1 & 10 & 11456 & 145 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 21 & 3 & 561.93 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 7 & 9943 & 130 \\ 1 & 7 & 10323 & 139 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 21 & 5 & 497.45 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 6 & 8816 & 115 \\ 1 & 6 & 9343 & 137 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 21 & - & 402 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 6 & 7069 & 152 \\ 1 & 5 & 5731 & 119 \\ 2 & 5 & 5928 & 109 \\ 3 & 5 & 6201 & 129 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 21 & 3 & 343.23 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 4 & 6134 & 97 \\ 1 & 4 & 5731 & 103 \\ 2 & 3 & 5452 & 95 \\ 3 & 3 & 4201 & 85 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \gsp & 21 & 5 & 294.4 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 3 & 4905 & 83 \\ 1 & 3 & 5455 & 72 \\ 2 & 3 & 4146 & 82 \\ 3 & 3 & 4227 & 75 \\ \hline \end{tabular} \end{table} \begin{table}[hbt] \centering \begin{tabular}{|c|c|c|c|} \hline {\bf Algorithm} &{\bf Number of task} & {\bf Capacity} & {\bf Time} \\ \hline \srst & 42 & - & 1283.2 \\ \hline {\bf robot} & {\bf processed task} & {\bf distance} & {\bf interference} \\ \hline 0 & 21 & 22243 & 335 \\ 1 & 21 & 22651 & 338 \\ \hline \end{tabular} \end{table}
{ "alphanum_fraction": 0.3892780359, "avg_line_length": 42.5339233038, "ext": "tex", "hexsha": "1b5f1dcc6c133ec6e1b69f91e617eb1fa2a554e1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Davidemb/LogisticAgent_ws", "max_forks_repo_path": "LaTex/Tesi/chapter/tabelle.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Davidemb/LogisticAgent_ws", "max_issues_repo_path": "LaTex/Tesi/chapter/tabelle.tex", "max_line_length": 98, "max_stars_count": null, "max_stars_repo_head_hexsha": "d1dab6ff32485b3af26ef8be58e624d059282c8a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Davidemb/LogisticAgent_ws", "max_stars_repo_path": "LaTex/Tesi/chapter/tabelle.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5021, "size": 14419 }
%%%%%%%%%%%%%%%%
%% Background %%
%%
\section{Introduction}

The default mode network (DMN) consists of an aggregation of brain regions that are active during rest, as measured by BOLD signal, and are associated with spontaneous thought and emotion regulation \cite{Raichle2001,Andrews-Hanna2014,Buckner2008}. The network is also commonly deactivated during cognitively demanding tasks \cite{Raichle2007}. Alterations to the DMN have been associated with a broad array of neuropsychiatric conditions \cite{Calhoun2008,Whitfield-Gabrieli2012}. However, as the DMN is most commonly assessed during rest \cite{Greicius2002} or as a result of deactivation during a task \cite{Harrison2008}, most studies fail to differentiate the ability to modulate the DMN from the tendency to do so.

Individuals likely vary both in their capacity to activate and/or deactivate the DMN and in the spontaneous implementation of function related to the DMN. Just as the use of specific emotion regulation strategies (e.g., cognitive reappraisal) relates to psychopathology specifically in the tendency to use them vs. instructed use \cite{Ehring2010}, the ability to modulate the DMN and the tendency to do so may represent distinct and important domains of neural and psychological function \cite{Anticevic2012}. Consistent with the National Institute of Mental Health’s Research Domain Criteria project \cite{Insel2010}, the ability and tendency to regulate the DMN may have both general and specific illness implications. For example, it could be that deficits in the ability to suppress DMN-related activity such as mind-wandering may be related to cognitive deficits across differing forms of mental illness \cite{Sheline2009}, while an increased likelihood to engage in specific forms of mind-wandering (e.g., worry, rumination) may be more specific to anxiety and depression \cite{Hamilton2011,Sheline2009,Sylvester2012}.

While psychological functions related to the DMN can be targeted, the DMN itself is somewhat more difficult to target, as the functions it instantiates are presumably multiply determined (i.e., invoking specific aspects of the DMN and/or other networks) and varied (i.e., different in nature and possibly kind) \cite{Friston2002}. Recent advances in real-time fMRI (rt-fMRI) \cite{Craddock2012,LaConte2011,Soldati2013} have made it possible to provide participant-specific feedback about neural networks. These advances permit the addition of instructions to modulate given neural networks as well as the assessment of an individual’s ability to follow the instructions or modulate the specific network. In addition to collecting task-based and resting-state data, using rt-fMRI as neurofeedback may be critical to acquiring knowledge about tendencies and capability to regulate the DMN.

The Default Network Regulation Neuroimaging Repository contains data from a suite of fMRI experiments aimed at better understanding individual variation in DMN activity and modulation. Although it is a separate project, it has been harmonized with, and is distributed alongside, the Enhanced Nathan Kline Institute-Rockland Sample (NKI-RS) \cite{Nooner2012}, which aims to capture deep and broad phenotyping of a large community-ascertained sample.
In addition to the NKI-RS protocol, this project includes data collection from tasks that activate the DMN (Moral Dilemma task \cite{Harrison2008}) and deactivate it (Multi-Source Interference Task \cite{Bush2006}), a resting state scan, a novel real-time fMRI neurofeedback-based paradigm that specifically probes DMN modulation \cite{Craddock2012}, and additional self-report measures. In this data descriptor, we provide an overview of planned data collection, methods used, summaries of data collected and available to date, and validation analyses. New data will be released on a regular basis and will be available at the Collaborative Informatics and Neuroimaging Suite (COINS) Data Exchange (\url{http://coins.mrn.org/dx}) \cite{Scott2011,Wood2014}, as well as the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC; \url{http://www.nitrc.org/}) and on Amazon Web Services (\url{https://aws.amazon.com/s3/}).

\section{Organization and access to the repository}

Datasets for the present project can be accessed through the COINS Data Exchange (\url{http://coins.mrn.org/dx}) \cite{Scott2011,Wood2014}, NITRC (\url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/download.html}), or through the Amazon Web Services S3 bucket (\url{https://aws.amazon.com/s3/}). Documentation on downloading the datasets can be found at \url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/sharing.html}. Data are available through both COINS and NITRC in the form of .tar files containing all imaging and phenotypic data. The COINS Data Exchange (only accessible with a DUA) offers a query builder tool, permitting the user to target and download files by specific search criteria (e.g., full completion of certain phenotypic measures and/or imaging sequences). Data are available through AWS as individual compressed NIfTI files that are arranged in the brain imaging data structure (BIDS) file structure \cite{Gorgolewski2016}.

All of the presented data are shared in accordance with the procedures of the NKI-RS study (\url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/sharing.html}). While a goal of the project is to maximize access to these data, privacy for the participants and their personal health information is paramount. De-identified neuroimaging data along with limited demographic information can be downloaded from the repository without restriction. To protect participant privacy, access to the high dimensional phenotypic and assessment data requires a data usage agreement (DUA). The DUA is relatively simple and requires a signature by an appropriate institutional representative. The DUA is available via the Neuroimaging Informatics Tools and Resources Clearinghouse (Data Citation A1: \url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/data/DUA.pdf}).

\subsection{Phenotypic data}

Basic phenotypic data, which include age, sex, and handedness, are available in the \texttt{participants.tsv} file at the root of the repository. Comprehensive phenotypic data are available as comma separated value (.csv) files from COINS after completing a minimal data usage agreement. Summary scores calculated from the RVIP task are available through COINS, and trial-by-trial response information is available via NITRC (Data Citation Y1: \url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/RVIP-master.zip}).

\subsection{Imaging data}

The imaging data is released in unprocessed form, except that image headers have been wiped of protected health information and faces have been removed from the structural images. Data are available in NIfTI files arranged in the BIDS format \cite{Gorgolewski2016}. Acquisition parameters are provided in JSON files that are named the same as their corresponding imaging data. Task traces (including stimulus onsets and durations) and responses, and physiological recordings, are also provided in JSON files along with the data. Additional details for all MRI data, as well as corresponding task information, are available via NITRC (Data Citation X1: \url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/mri\_protocol.html\#scans-acquired}).
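As a rough illustration of the BIDS arrangement (the subject label and task name below are placeholders, not actual file names from the repository), a single participant's imaging data is laid out along these lines:

\begin{verbatim}
sub-XXXX/
  anat/
    sub-XXXX_T1w.nii.gz
    sub-XXXX_T1w.json
  func/
    sub-XXXX_task-rest_bold.nii.gz
    sub-XXXX_task-rest_bold.json
\end{verbatim}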
\subsection{Quality assessment data}

Quality metrics are available for download from the QAP repository (\url{http://preprocessed-connectomes-project.org/quality-assessment-protocol}). They are available as two (anatomical and functional) comma separated value files.

\section{Contents of the Repository}

The Neurofeedback (NFB) repository contains neuroimaging and assessment data that characterizes DMN function in a community-ascertained sample of adults (21-45 years old) with a variety of psychiatric diagnoses. The data are collected in a separate 2.5-hour visit that occurs within six months of completing the Enhanced Nathan Kline Institute-Rockland Sample (NKI-RS) protocol \cite{Nooner2012}. The NKI-RS entails a 1- to 2-visit deep phenotyping protocol \cite{Nooner2012} and a Connectomes-oriented neuroimaging assessment (data collection visit schedules are available online at: \url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/sched.html}). NKI-RS phenotyping includes a variety of cognitive and behavioral assessments, a blood draw, a basic fitness assessment, nighttime actigraphy, a medical history questionnaire, and the administration of the Structured Clinical Interview for DSM-IV-TR Non-patient edition \cite{First2002}. The NKI-RS imaging protocol includes structural MRI, diffusion MRI, several resting state fMRI, and perfusion fMRI scans. Although the NKI-RS data is being openly shared (\url{http://fcon\_1000.projects.nitrc.org/indi/enhanced/}), it is not a part of the NFB repository and is not described in further detail herein.

\subsection{Participants}

The NFB repository will ultimately contain data from a total of 180 residents of Rockland, Westchester, or Orange Counties, New York, or Bergen County, New Jersey, aged 21 to 45 (50\% male at each age year). Based on census data from 2013, Rockland County, New York has the following demographics \cite{Census2016}: median age 36.0 years, population 50.5\% female, 80.4\% White, 11.4\% Black/African American, 6.3\% Asian, 0.5\% Native American/Pacific Islander, and 1.4\% “Other”. With regard to ethnicity, 13.8\% of the population endorses being Hispanic/Latino. Minimally restrictive psychiatric exclusion criteria, which only screen out severe illness, were employed to include individuals with a range of clinical and sub-clinical psychiatric symptoms.
Medical exclusions include: chronic medical illness, history of neoplasia requiring intrathecal chemotherapy or focal cranial irradiation, premature birth (prior to 32 weeks estimated gestational age or birth weight $<$ 1500g, when available), history of neonatal intensive care unit treatment exceeding 48 hours, history of leukomalacia or static encephalopathy, or other serious neurological (specific or focal) or metabolic disorders including epilepsy (except for resolved febrile seizures), history of traumatic brain injury, stroke, aneurysm, HIV, carotid artery stenosis, encephalitis, dementia or mild cognitive impairment, Huntington’s disease, Parkinson’s disease, hospitalization within the past month, contraindication for MRI scanning (metal implants, pacemakers, claustrophobia, metal foreign bodies, or pregnancy), or inability to ambulate independently. Severe psychiatric illness can compromise the ability of an individual to comply with instructions, tolerate the MRI environment, and participate in the extensive phenotyping protocol. Accordingly, participants with severe psychiatric illness were excluded, as determined by a Global Assessment of Function score (GAF; DSM-IV-TR \cite{First2002}) $<$ 50, history of chronic or acute substance dependence disorder, history of diagnosis with schizophrenia, history of psychiatric hospitalization, or suicide attempts requiring medical intervention. The technical validation reported in this paper includes 125 participants with the following demographics: median age 30 years, mean age 31 years (std. dev. 6.6), 77 females, 59.2\% White, 28.8\% Black/African American, 0.56\% Asian, 0\% Native American/Pacific Islander, and 0.56\% “Other”; 16.8\% identified as Hispanic/Latino. Summaries of participant medications and diagnoses are provided in Tables~\ref{table:med} and~\ref{table:psych}, respectively.
\begin{table}[!ht]
\caption{Medications used by participants on a daily basis. Medications were grouped by class and the number of participants who reported taking them is provided (Pts.). Full medication information regarding drug name, dosage, primary indication, and duration of time taken at a participant level can be accessed through COINS after completing a DUA.}
\centering
\begin{tabular}{lr}
\textbf{Medication Class} & \textbf{Pts.} \\
\hline
Vitamins/Supplements & 34 \\
Obstetrics/Gynecology & 12 \\
Psychiatric & 8 \\
Gastrointestinal & 6 \\
Endocrine & 5 \\
Analgesics & 4 \\
Cardiovascular & 3 \\
Rheumatologic & 2 \\
Asthma/Pulmonary & 2 \\
Allergy/Cold/ENT & 1 \\
Immunology & 1 \\
Dermatologic & 1 \\
Antihistamine & 1 \\
\end{tabular}
\label{table:med}
\end{table}
\begin{table}[h!]
\caption{Summary diagnostic information for the included 125 participants as determined by the SCID-IV or consensus diagnosis by the study psychiatrist. Diagnoses are accompanied by the SCID diagnostic code and the number of participants diagnosed (Pts.).
Full diagnostic information at a participant level can be accessed through COINs after completing a DUA.} \centering \begin{tabular}{ lr } \textbf{Diagnosis (SCID-IV code)} & \textbf{Pts.} \\ \hline No Diagnosis or Condition on Axis I (V71.09) & 59 \\ Alcohol Abuse Past (305.00) & 21 \\ Major Depressive Disorder, Single Episode, In Full Remission (296.26) & 12 \\ Cannabis Abuse Current (305.20) & 12 \\ Cannabis Dependence Past (304.30) & 11 \\ Major Depressive Disorder, Recurrent, In Full Remission Past (296.36) & 8 \\ Major Depressive Disorder, Single Episode, Unspecified Past (296.20) & 5 \\ Posttraumatic Stress Disorder Current (309.81) & 5 \\ Alcohol Dependence Past (303.90) & 5 \\ Specific Phobia Past (300.29) & 5 \\ Generalized Anxiety Disorder Current (300.02) & 4 \\ Attention-Deficit/Hyperactivity Disorder, Inattentive Type Current (314.00) & 4 \\ Attention-Deficit/Hyperactivity Disorder NOS Current (314.9) & 3 \\ Attention-Deficit/Hyperactivity Disorder, Hyperactive-Impulsive Type Current (314.01) & 3\\ Cocaine Abuse Past (305.60) & 2 \\ Anorexia Nervosa Past (307.1) & 2 \\ Anxiety Disorder NOS Current (300.00) & 2 \\ Panic Disorder Without Agoraphobia Past (300.01) & 2 \\ Social Phobia Current (300.23) & 2 \\ Agoraphobia Without History of Panic Disorder Current (300.22) & 2 \\ Panic Disorder With Agoraphobia Past (300.21) & 2 \\ Hallucinogen Abuse Past (305.30) & 2 \\ Cocaine Dependence Past (304.20) & 2 \\ Trichotillomania (312.39) & 1 \\ Bulimia Nervosa Current (307.51) & 1 \\ Major Depressive Disorder, Recurrent, In Partial Remission Past (296.35) & 1 \\ Major Depressive Disorder, Recurrent, Moderate Current (296.32) & 1 \\ Bereavement (V62.82) & 1 \\ Hallucinogen Dependence Past (304.50) & 1 \\ Obsessive-Compulsive Disorder Current (300.3) & 1 \\ Body Dysmorphic Disorder Current (300.7) & 1 \\ Eating Disorder NOS Past (307.50) & 1 \\ Phencyclidine Abuse Past (305.90) & 1 \\ Delusional Disorder Mixed Type (297.1) & 1 \\ Amphetamine Dependence Past (304.40) & 1 \\ Opioid Abuse Past (305.50) & 1 \\ Sedative, Hypnotic, or Anxiolytic Dependence Past (304.10) & 1 \\ % \hline \end{tabular} \label{table:psych} \end{table} \subsection{Phenotypic Data} In addition to the NKI-RS protocol, participants completed a variety of assessments that probe cognitive, emotional, and behavioral domains that have been previously implicated with DMN function \cite{Andrews-Hanna2014,Hamilton2011,Sheline2009,Buckner2008}. These included the Affect Intensity Measure (AIM) \cite{Larsen1986}--to measure the strength or weakness with which one experiences both positive and negative emotions, Emotion Regulation Questionnaire (ERQ; \cite{Gross2003})--to assess individual differences in the habitual use of two emotion regulation strategies: cognitive reappraisal and expressive suppression, Penn State Worry Questionnaire (PSWQ; \cite{Meyer1990})--to measure worry, Perseverative Thinking Questionnaire (PTQ; \cite{Ehring2011})--to measure the broad idea of repetitive negative thought, Positive and Negative Affect Schedule – short version (PANAS-S; \cite{Watson1988})--to measure degrees of positive or negative affect, Ruminative Responses Scale (RRS; \cite{Treynor2003}) --to assess rumination that is related to, but not confounded by depression, and the Short Imaginal Process Inventory (SIPI; \cite{Huba1981})--to measure aspects of daydreaming style and content, mental style, and general inner experience. Assessments were completed using web-based forms implemented in COINs \cite{Scott2011}. 
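Once the comma-separated assessment files have been downloaded from COINS, sample statistics for these questionnaire scores can be computed with a few lines of pandas; the snippet below is a hypothetical sketch, and the file and column names are illustrative rather than the actual variable names used in the release.
\begin{verbatim}
import pandas as pd

# Hypothetical export of total scores, one row per participant
scores = pd.read_csv("assessments.csv")

# Mean, standard deviation, and range for a few columns
cols = ["AIM_total", "ERQ_total", "PSWQ_total"]
print(scores[cols].agg(["mean", "std", "min", "max"]).round(2))
\end{verbatim}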
Sample mean, standard deviation, and range of the above measures for the first 125 participants are provided in Table~\ref{table:assessments}. Participants also completed a Rapid Visual Information Processing (RVIP) task to assess sustained attention and working memory. Response times and detection accuracy from this task have been previously correlated with DMN function \cite{Pagnoni2012}. The RVIP is administered using custom software, implemented in PsychoPy \cite{Peirce2008}, which conforms to the literature describing its original implementation in the Cambridge Neuropsychological Test Automated Battery (CANTAB \cite{Sahakian1992}). A pseudo-random stream of digits (0-9) is presented to the participants in white, centered on a black background, surrounded by a white box. Participants are instructed to press the space bar whenever they observe the sequences 2-4-6, 3-5-7, or 4-6-8. Digits are presented one after another at a rate of 100 digits per minute, and the number of stimuli occurring between targets varies between 8 and 30. Responses that occurred within 1.5 seconds of the last digit of a target sequence being presented were considered “hits”. Stimulus presentation continued until a total of 32 target sequences were encountered, which required on average 4 minutes and 20 seconds. Before performing the task, participants completed a practice version that indicated when a target sequence had occurred and provided feedback (“hit” or “false alarm”) whenever the participant pressed the space bar. Participants were allowed to repeat the practice until they felt that they were comfortable with the instructions. Summary statistics calculated from the RVIP included: mean reaction time, total targets, hits, misses, false alarms, hit rate ($H$), false alarm rate ($F$), and $A'$ (Eqn.~\ref{aprime}); $A'$ is an alternative to the more common $d'$ from signal detection theory \cite{Stanislaw1999}. The task can be downloaded from the OpenCogLab Repository (\url{http://opencoglabrepository.github.io/experiment\_rvip.html}).
\begin{equation} \label{aprime}
A'=\left\{
\begin{matrix}
.5 + \frac{(H-F)(1+H-F)}{4H(1-F)} & \textup{when} \, H\geq F \\
.5 - \frac{(F-H)(1+F-H)}{4F(1-H)} & \textup{when} \, H< F
\end{matrix}
\right.
\end{equation}
\subsection{MRI Acquisition}
Data were acquired on a 3 T Siemens Magnetom TIM Trio scanner (Siemens Medical Solutions USA: Malvern PA, USA) using a 12-channel head coil. Anatomic images were acquired at $1 \times 1 \times 1$ mm$^3$ resolution with a 3D T1-weighted magnetization-prepared rapid acquisition gradient-echo (MPRAGE) sequence \cite{Mugler1990} in 192 sagittal partitions each with a $256 \times 256$ field of view (FOV), 2600 ms repetition time (TR), 3.02 ms echo time (TE), 900 ms inversion time (TI), 8$^\circ$ flip angle (FA), and generalized auto-calibrating partially parallel acquisition (GRAPPA) \cite{Griswold2002} acceleration factor of 2 with 32 reference lines. The sMRI data were acquired immediately after a fast localizer scan and preceded the collection of the functional data.
\begin{table}[h!]
\caption{Statistics for self-report measures from the neurofeedback-specific visit related to perseverative thinking, emotion regulation, and imaginative processes.
Mean, Standard Deviation, and Range are reported for the total AIM, ERQ, PSWQ, PTQ, PANAS, RRS, and SIPI scores, as well as each measure's sub-scales.} \centering \begin{tabular}{ llrrc } \multicolumn{2}{l}{\textbf{Scale}} & \textbf{Mean} & \textbf{Std. Dev.} & \textbf{Range} \\ \hline \multicolumn{2}{l}{\emph{Affect Intensity Measure}} & & & \\ & Total & 138.38 & 16.76 & 88-178 \\ & Positive Affect & 62.33 & 12.87 & 30-100 \\ & Positive Intensity & 26.48 & 4.99 & 14-40 \\ & Negative Affect & 26.77 & 5.35 & 14-38 \\ & Negative Intensity & 29.91 & 4.84 & 20-48 \\ \multicolumn{2}{l}{\emph{Emotion Regulation Questionnaire}} & & & \\ & Total & 44.15 & 8.14 & 12-60 \\ & Reappraisal & 31.03 & 6.88 & 6-42 \\ & Suppression & 13.11 & 4.59 & 4-23 \\ \multicolumn{2}{l}{\emph{Penn State Worry Questionnaire}} & & & \\ & Total & 28.03 & 10.71 & 11-52 \\ \multicolumn{2}{l}{\emph{Perseverative Thinking Questionnaire}} &&& \\ & Total & 20.01 & 12.79 & 0-57 \\ \multicolumn{2}{l}{\emph{Positive and Negative Affect Scale – State}} & & & \\ & Total & 44.21 & 9.32 & 25-82 \\ & Positive Affect & 30.98 & 8.33 & 13-50 \\ & Negative Affect & 13.24 & 5.16 & 10-38 \\ \multicolumn{2}{l}{\emph{Ruminative Responses Scale}} & & & \\ & Total & 40.32 & 13.76 & 22-81 \\ & Depression-Related & 21.52 & 7.89 & 11-45 \\ & Brooding & 9.54 & 3.36 & 5-20 \\ & Reflection & 9.24 & 3.92 & 5-20 \\ \multicolumn{2}{l}{\emph{Short Imaginal Processes Inventory}} &&& \\ & Total & 126.6 & 20.42 & 56-180 \\ & Positive Constructive Daydreaming & 49.81 & 8.59 & 27-72 \\ & Guilty Fear of Failure Daydreaming & 33.27 & 9.04 & 17-59 \\ & Poor Attentional Control & 43.83 & 9.39 & 16-70 \\ \end{tabular} \label{table:assessments} \end{table} A gradient echo field map sequence was acquired with the following parameters: TR 500 ms, TE1/TE2 2.72 ms/5.18 ms, FA 55$^\circ$, 64 x 64 matrix, with a 220 mm FOV, 30 3.6 mm thick interleaved, oblique slices, and in plane resolution of $3.4 \times 3.4$ mm$^2$. All functional data were collected with a blood oxygenation level dependent (BOLD) contrast weighted gradient-recalled echo-planar-imaging sequence (EPI) that was modified to export images, as they were acquired, to AFNI over a network interface \cite{Cox1995,LaConte2007}. FMRI acquisition consisted of 30 3.6mm thick interleaved, oblique slices with a 10\% slice gap, TR 2000 ms, TE 30 ms, FA 90$^\circ$, $64 \times 64$ matrix, with 220 mm FOV, and in plane resolution of $3.4 \times 3.4$ mm$^2$. Functional MRI scanning included a three volume “mask” scan and a six-minute resting state scan followed by three task scans (described later) whose order was counterbalanced across subjects. During all scanning, galvanic skin response (GSR), pulse and respiration waveforms were measured using MRI compatible non-invasive physiological monitoring equipment (Biopac Systems, Inc.). Rate and depth of breathing were measured with a pneumatic abdominal circumference belt. Pulse was monitored with a standard infrared pulse oximeter placed on the tip of the index finger of the non-dominant hand. Skin conductance was measured with disposable passive electrodes that were non-magnetic and non-metallic, and collected on the hand. The physiological recordings were synchronized with the imaging data using a timing signal output from the scanner. Visual stimuli were presented to the participants on a projection screen that they could see through a mirror affixed to the head coil. 
Audio stimuli were presented through headphones using an Avotec Silent-Scan® pneumatic system (Avotec, Inc.: Stuart FL, USA).
\subsection{MRI Acquisition Order and Online Processing}
The real-time fMRI neurofeedback experiment utilizes a classifier-based approach for extracting DMN activity levels from fMRI data TR-by-TR, similar to \cite{Craddock2012}. The classifier is trained from resting state fMRI data using a time course of DMN activity extracted from those data by spatial regression against a publicly available template derived from a meta-analysis of task and resting state datasets \cite{Smith2009,fmrib_RSNS}. Several stages of online processing are necessary to perform this classifier training, as well as denoising of the fMRI data in real time. These stages include calculating the transforms required to convert the DMN template from MNI space to subject space, creating white matter (WM) and cerebrospinal fluid (CSF) masks for extracting nuisance signals, and training a support vector regression (SVR) model for extracting DMN activity. The MRI session was optimized to collect the data required for these various processing steps, and to perform the processing, while minimizing delays in the experiment. After acquiring a localizer, the scanning protocol began with the acquisition of a T1-weighted anatomical image used for calculating transforms to MNI space and for creating the WM and CSF masks. Once the image was acquired, it was transferred to a DICOM server on the real-time analysis computer (RTAC), which triggered initialization of online processing. The processing script started AFNI in real-time mode, configured it for fMRI acquisition, and began structural image processing. Structural processing included reorienting the structural image to RPI voxel order, skull-stripping using AFNI’s 3dSkullStrip \cite{Cox1996}, resampling the image to isotropic 2-mm voxels (to reduce the computational cost of subsequent operations), segmentation into grey matter (GM), WM, and CSF using FSL’s FAST \cite{Zhang2001}, and normalization into MNI space using FSL’s FLIRT \cite{Jenkinson2002,Jenkinson2001}. WM and CSF probability maps were binarized using a 90\% threshold. The CSF mask was constrained to the lateral ventricles using an ROI from the AAL atlas to avoid overlap with GM. In parallel with the structural processing, a field map was collected but not used in the online processing. Subsequently, a three-volume “mask” EPI scan was acquired and transferred to the RTAC. The three images were averaged, reoriented to RPI, and used to create a mask to differentiate brain signal from background (using AFNI’s 3dAutomask \cite{Cox1996}). The mean image was coregistered to the anatomical image using FSL’s boundary-based registration (BBR) \cite{Greve2009}. The resulting linear transform was inverted and applied to the WM and CSF masks to bring them into alignment with the fMRI data. Additionally, the transform was combined with the inverted anatomical-to-MNI transform and applied to the canonical map of the DMN (from \cite{Smith2009}). Next, a 6-minute resting state scan (182 volumes) was collected and used as training data to create the SVR model. This procedure involved motion correction followed by a nuisance regression procedure to orthogonalize the data to six head motion parameters, mean WM, mean CSF, and global signal \cite{Friston1996,Fox2005,Lund2006}.
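To make the nuisance regression step concrete, the following minimal numpy sketch orthogonalizes an fMRI time series matrix to a set of nuisance regressors by keeping the residuals of an ordinary least squares fit; the array and function names are hypothetical, and the snippet is an illustration rather than the code that runs on the RTAC.
\begin{verbatim}
import numpy as np

# data: time x voxels; motion: time x 6 head motion parameters;
# wm, csf, gs: mean WM, CSF, and global signals (one per volume)
def regress_nuisance(data, motion, wm, csf, gs):
    # Nuisance design matrix with an intercept column
    nuis = np.column_stack(
        [np.ones(len(data)), motion, wm, csf, gs])
    # Least squares fit of every voxel onto the regressors
    beta, _, _, _ = np.linalg.lstsq(nuis, data, rcond=None)
    # The denoised data are the residuals of that fit
    return data - nuis @ beta
\end{verbatim}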
An SVR model of the DMN was trained using a modified dual regression procedure, in which a spatial regression against the unthresholded DMN template was performed to extract a time course of DMN activity. The Z-transformed DMN time course was then used as the labels (dependent variable), with the preprocessed resting state data as the features (independent variables), for SVR training (C=1.0, $\epsilon = 0.01$) using AFNI’s 3dsvm tool \cite{LaConte2005}. The result was a DMN map tailored to the individual participant based on preexisting expectations about DMN anatomy and function. After SVR training was completed (generally in less than 2 minutes), the MSIT (198 volumes), Moral Dilemma (144 volumes), and Neurofeedback test (412 volumes) scans were run. The order of the task-based functional scans was counterbalanced and stratified for age and sex across participants.
\subsection{Functional MRI Tasks}
\subsubsection{Resting state scan}
Participants were instructed to keep their eyes open during the scan and fixate on a white plus (+) sign centered on a black background. They were asked to let their mind wander freely and, if they noticed themselves focusing on any particular train of thought, to let their mind wander away from it.
\subsubsection{Moral Dilemma Task}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{md_vignettes.png}
\caption{Examples of vignettes from the moral dilemma task. A) A control vignette: "Mr. Jones is practicing his three-point throw on the basketball court behind his house. He hasn’t managed to score a basket during the whole morning, despite all the practice. He concentrates hard and throws the ball one more time. This time his aim is more accurate, the ball curves through the air and falls cleanly into the basket. Mr. Jones has managed to score a basket for the first time." Question during task: Will he score? B) A dilemma vignette: "Mr. Jones and his only son are held in a concentration camp. His son tries to escape but he is caught. The guard watching them tells Mr. Jones that his son is going to be hanged and that it will be him (Mr. Jones) who has to push the chair. If he does not do it, not only will his son die but also five more people held in the concentration camp." Question during task: Would you push the chair? Vignettes are copied from the Supporting Information Appendix of \cite{Harrison2008}.}
\label{fig:morald}
\end{figure}
The Moral Dilemma (MD) task involved a participant making a decision about what he or she would do in a variety of morally ambiguous scenarios presented as a series of vignettes (see Fig.~\ref{fig:morald}B for an example) \cite{Harrison2008}. As a control, the participant was asked to recall a detail from a dilemma-free vignette (see Fig.~\ref{fig:morald}A for an example). Prior work using these vignettes has shown strong activation of DMN regions during moral dilemma scenarios relative to control scenarios \cite{Harrison2008}. The MD task provided a basis to directly examine task-induced activation of the DMN. Just prior to the scan, participants listened to a recording of each vignette while viewing a corresponding image (see Fig.~\ref{fig:morald}A and \ref{fig:morald}B for examples) and were asked to decide how they would react in the scenario. The fMRI task consisted of 24 moral dilemma questions and 24 control questions presented in eight 30-second blocks, each consisting of six questions, that alternated between the moral dilemma and control conditions. Participants viewed an image and heard a question corresponding to each vignette.
Each image was displayed for 5 seconds and the audio began one second after image onset. Participants responded to the proposed question by pressing one of two buttons on a response box (index finger button for “yes”, middle finger button for “no”). The task began and ended with 20-second fixation blocks during which participants passively viewed a plus (+) sign centered on a grey background. The task was implemented in PsychoPy \cite{Peirce2008} using images provided by the authors of the original work \cite{Harrison2008} and can be downloaded from the OpenCogLab Repository (\url{http://opencoglabrepository.github.io/experiment\_moraldillema.html}).
\subsubsection{Multi-Source Interference Task}
The MSIT was developed as an all-purpose task to provide robust single-participant level activation of cognitive control and attentional regions in the brain \cite{Bush2006}. Early work with the MSIT suggests robust activation of regions associated with top-down control – regions that are often active when the DMN is inactive \cite{Bush2006}. The MSIT provided a basis for directly examining task-induced deactivation of the DMN.
\begin{figure}[h!]
\centering
\includegraphics{msit_stim.png}
\caption{Examples of MSIT task stimuli. For congruent trials, the paired digits are zero and the position of the target digit corresponds to its value. For incongruent trials, the distractor digits are non-zero and the target digit's location is not the same as its value.}
\label{fig:msit}
\end{figure}
During the task, participants were presented with a series of stimuli consisting of three digits, two of which were identical and one of which differed from the other two (see Fig.~\ref{fig:msit} for examples). Participants were instructed to indicate the value and not the position of the digit that differed from the other two (e.g., 1 in 100, 1 in 221). During control trials, distractor digits were 0s and the targets were presented in the same location as they appear on the response box (e.g., 1 in 100 was the first button on the button box and the first number in the sequence). During interference trials, distractors were other digits and target digits were incongruent with the corresponding location on the button box (e.g., in 221 the target 1 was the first button on the button box but was the third number in the sequence). The task was presented as a block design with eight 42-second blocks that alternated between conditions, starting with a control block. Each block contained 24 randomly generated stimuli with an inter-stimulus interval of 1.75 seconds. The task began and ended with a 30-second fixation period during which participants passively viewed a white plus (+) sign centered on a black background. The MSIT was implemented in PsychoPy \cite{Peirce2008} and can be downloaded from the OpenCogLab Repository (\url{http://opencoglabrepository.github.io/experiment\_msit.html}).
\subsubsection{Neurofeedback Task}
In the real-time Neurofeedback (rt-NFB) task, fMRI data were processed during collection, allowing the experimenter to provide visual feedback of brain activity over the course of the experiment \cite{Cox1995,LaConte2011}. The rt-NFB task was developed to examine each individual participant’s ability to either increase or decrease DMN activity in response to instructions, accompanied by real-time feedback of activity from their own DMN \cite{Craddock2012}.
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{nfb_stim.png}
\caption{Stimuli for “Focus” and “Wander” conditions of the neurofeedback task.
The needle moves to the left or the right based on DMN activity.}
\label{fig:nfb}
\end{figure}
During the rt-NFB task, participants were shown an analog meter with ‘Wander’ at one end and ‘Focus’ at the other, along with an indicator of their current performance (see Fig.~\ref{fig:nfb}). The fixation point was a white square positioned equally between the two poles. Participants were instructed at the beginning of each block to attempt to either focus their attention (Focus) or to let their mind wander (Wander). The task began with a 30-second control condition and then proceeded with alternating blocks with a specific sequence of durations -- 30, 90, 60, 30, 90, 60, 60, 90, 30, 60, 90, and 30 seconds. At the end of each block, the participant was asked to press a button within a two-second window. The starting condition (‘Focus’ vs. ‘Wander’) and the location of each descriptor (‘Focus’, ‘Wander’; right vs. left) on the analog meter were counterbalanced across participants. Each fMRI volume acquired during the task was transmitted from the scanner to the RTAC shortly after it was reconstructed and passed to AFNI’s real-time plugin \cite{Cox1995} for online processing. The volume was realigned to the previously collected mean volume to correct for motion and to bring it into alignment with the tissue masks and the DMN SVR model. Mean WM intensity, mean CSF intensity, and global mean intensity were extracted from the volume using masks calculated from the earlier online segmentation of the anatomical image. A general linear model was calculated at each voxel, using all of the data that were acquired up to the current time point, with WM, CSF, global, and motion parameters included as regressors of no interest. The most recently acquired volume was extracted from the residuals of the nuisance variance regression, spatially smoothed (6-mm FWHM), and then a measure of DMN activity was decoded from the volume using the SVR model trained from the resting state data. The resulting measure of DMN activity was translated to an angle, which was added to the current position of the needle on the analog meter, moving it in the direction of focus or wander based on DMN activation or deactivation, respectively. This moving average procedure was used to smooth the motion of the needle. The position of the needle was reset to the center at each change between conditions. The neurofeedback stimulus was implemented in Vision Egg \cite{Straw2008} and can be downloaded from the OpenCogLab Repository (\url{http://opencoglabrepository.github.io/experiment\_RTfMRIneurofeedback.html}).
\subsection{Quality Measures}
Metrics of the spatial quality of the structural and functional MRI data and the temporal quality of the fMRI data were calculated using the Preprocessed Connectomes Project Quality Assessment Protocol (QAP; \url{http://preprocessed-connectomes-project.org/quality-assessment-protocol}, \cite{Shehzad2015}). For the structural data, spatial measures include:
\begin{itemize}
\item Signal-to-Noise Ratio (SNR): mean grey matter intensity divided by the standard deviation of out-of-brain (OOB) voxels; larger values are better \cite{Magnotta2006}.
\item Contrast-to-Noise Ratio (CNR): the difference between mean grey matter intensity and mean white matter intensity divided by the standard deviation of OOB voxels; larger values are better \cite{Magnotta2006}.
\item Foreground-to-Background Energy Ratio (FBER): variance of in-brain (IB) voxels divided by variance of OOB voxels; larger values are better.
\item Percent artifact voxels (QI1): the number of OOB voxels that are in structured noise (i.e., artifacts) divided by the total number of OOB voxels; smaller values are better \cite{Mortamet2009}.
\item Spatial smoothness (FWHM): smoothness of voxels expressed as the full-width half maximum of the spatial distribution of voxel intensities, in units of voxels; smaller values are better \cite{Friedman2006}.
\item Entropy focus criterion (EFC): the Shannon entropy of voxel intensities normalized by the maximum possible entropy for an image of the same size; a measure of ghosting and blurring induced by head motion; lower values are better \cite{Atkinson1997}.
\item Summary measures: mean, standard deviation, and size of different image compartments, including foreground, background, WM, GM, and CSF.
\end{itemize}
\vskip1em
Spatial measures of fMRI data include EFC, FBER, and FWHM, as well as:
\begin{itemize}
\item Ghost-to-signal Ratio (GSR): the mean of the voxel intensities in parts of the fMRI image (determined by the phase encoding direction) that are susceptible to ghosting; smaller values are better \cite{Giannelli2010}.
\item Summary measures: mean, standard deviation, and size of foreground and background.
\end{itemize}
\vskip1em
Temporal measures of fMRI data include:
\begin{itemize}
\item Standardized DVARS (DVARS): the mean intensity change between every pair of consecutive fMRI volumes, standardized to make it comparable between scanning protocols; smaller values are better \cite{Nichols2012}.
\item Outliers: the mean number of outlier voxels found in each volume using AFNI’s 3dToutcount command; smaller values are better \cite{Cox1996}.
\item Global correlation (gcorr): the average correlation between every pair of voxel time series in the brain; sensitive to physiological noise and head motion, as well as technical imaging artifacts such as signal bleeding in multi-band acquisitions; values closer to zero are better \cite{Saad2013}.
\item Mean root mean square displacement (mean RMSD): the average distance between consecutive fMRI volumes \cite{Jenkinson99FD}. This has been shown to be a more accurate representation of head motion than the mean frame-wise displacement (meanFD) proposed by Power et al. \cite{power2012} \cite{yan2013}.
\end{itemize}
\section{Technical Validation}
A variety of initial analyses have been performed on the first 125 participants to be released in the NFB repository to establish the quality of these data and the successful implementation of the tasks. A series of quality assessment measures were calculated from the raw imaging data and compared with data available through other data sharing repositories. Results of the behavioral tasks were evaluated to ensure consistency with the existing literature. Preliminary analyses of the various fMRI tasks were performed to verify activation and deactivation of the DMN as predicted.
\subsection{Behavioral Assessment}
Participant responses for the RVIP, MSIT, Moral Dilemma task, and Neurofeedback task were analyzed to evaluate whether the participants complied with task instructions and whether their responses were consistent with the previous literature. \emph{RVIP.} The RVIP Python script calculated the hit rate, false alarm rate, $A'$, and mean reaction time during task performance.
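For reference, a minimal Python sketch of the $A'$ computation in Eqn.~\ref{aprime} is given below; it illustrates the formula rather than reproducing the exact code in the distributed script, and the function name is hypothetical.
\begin{verbatim}
def a_prime(H, F):
    """A' from hit rate H and false alarm rate F."""
    if H >= F:
        return 0.5 + ((H - F)*(1 + H - F)) / (4*H*(1 - F))
    return 0.5 - ((F - H)*(1 + F - H)) / (4*F*(1 - H))

# Example: 28 hits and 2 false alarms out of 32 targets
print(a_prime(28 / 32, 2 / 32))  # approximately 0.95
\end{verbatim}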
Responses that occurred within 1.5 seconds of the last digit of a target sequence being displayed were considered hits, multiple responses within 1.5 seconds were considered a hit followed by multiple false alarms, and responses that occurred outside of the 1.5-second window were considered false alarms. The numbers of hits and false alarms were converted to rates by dividing by the total number of targets. Since the number of false alarms is not bounded, the false alarm rate can be higher than 100\%, resulting in $A'$ values greater than 1 (see Eqn.~\ref{aprime}). In post-hoc analysis, false alarm rates greater than 1 were replaced with 1, and $A'$ values greater than 1 were replaced with 0. \emph{MSIT.} Reaction times and correctness of response were calculated for each trial by the MSIT Python script. The response window begins at stimulus presentation and ends just prior to the presentation of the next stimulus. The last key pressed after stimulus presentation is considered in these calculations. A summary script distributed with the task was used to summarize response times and accuracy rates for congruent and incongruent trials. This script assumes that the task begins with a block of congruent stimuli. \emph{Moral Dilemma.} Reaction times and correctness of responses for control trials were calculated for each trial from the task log files using a script distributed with the task. The response window starts at the beginning of the auditory question prompt, and the last key received in this window is used to calculate response times. The length of the auditory question was subtracted from reaction times to control for question length across stimuli. This may result in negative reaction times. These values were then summarized into average reaction times for the control and dilemma stimuli. Response accuracy was calculated only for the control trials, as there is no definitively correct response to the dilemmas. \emph{Self-report sleep.} After completing the scans, participants were asked to list the scans that they fell asleep during, if any. The results are coded in the COINS database and were analyzed to determine which of the participants fell asleep during the training or testing scans. \emph{Neurofeedback.} Participants were asked to press a button at transitions between “focus” and “wander” blocks. The hit rate for these catch trials was calculated from the task log files using a script distributed with the task.
\subsection{fMRI Analysis}
All data were preprocessed using a development version of the Configurable Pipeline for the Analysis of Connectomes (C-PAC version 0.4.0, \url{http://fcp-indi.github.io}). C-PAC is an open-source, configurable pipeline for the automated preprocessing and analysis of fMRI data \cite{Craddock2013}. C-PAC is implemented in Python and integrates tools from AFNI \cite{Cox1996}, FSL \cite{Smith2004}, and ANTs \cite{Avants2008} with custom tools, using the Nipype \cite{Gorgolewski2011} pipelining library, to achieve high-throughput processing on high-performance computing systems. Anatomical processing began with skull stripping using the BEaST toolset \cite{Eskildsen2012} with a study-specific library and careful manual correction of the results. The masks and BEaST library generated through this effort are shared through the Preprocessed Connectomes Project NFB Skullstripped repository (\url{http://preprocessed-connectomes-project.org/NFB_skullstripped/}) \cite{Puccio2016}.
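For users who want to regenerate skull-stripped images from the shared masks, a small nibabel sketch is shown below; the file names are hypothetical and the snippet is illustrative rather than part of the C-PAC pipeline.
\begin{verbatim}
import nibabel as nib

# Defaced anatomical image and its binary brain mask
anat = nib.load("sub-0001_T1w.nii.gz")
mask = nib.load("sub-0001_T1w_brainmask.nii.gz")

# Zero out everything outside the brain mask
brain = anat.get_fdata() * (mask.get_fdata() > 0)

# Save with the original affine and header
nib.save(nib.Nifti1Image(brain, anat.affine, anat.header),
         "sub-0001_T1w_brain.nii.gz")
\end{verbatim}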
Skull-stripped images were resampled to RPI orientation, and then a non-linear transform between these images and a 2-mm MNI brain-only template (FSL \cite{Smith2004}) was calculated using ANTs \cite{Avants2008}. The skull-stripped images were additionally segmented into WM, GM, and CSF using FSL’s FAST tool \cite{Zhang2001}. A WM mask was calculated by applying a 0.96 threshold to the resulting WM probability map and multiplying the result by a WM prior map (avg152T1\_white\_bin.nii.gz, distributed with FSL) that was transformed into individual space using the inverse of the linear transforms previously calculated during the ANTs procedure. A CSF mask was calculated by applying a 0.96 threshold to the resulting CSF probability map and multiplying the result by a ventricle map derived from the Harvard-Oxford atlas distributed with FSL \cite{Makris2006}. The thresholds were chosen, and the priors were used, to avoid overlap with grey matter. Functional preprocessing began with resampling the data to RPI orientation and slice timing correction. Next, motion correction was performed using a two-stage approach in which the images were first coregistered to the mean fMRI and then a new mean was calculated and used as the target for a second coregistration (AFNI 3dvolreg \cite{Cox1999}). A 7-degree-of-freedom linear transform between the mean fMRI and the structural image was calculated using FSL’s implementation of boundary-based registration \cite{Greve2009}. Nuisance variable regression (NVR) was performed on the motion-corrected data using a second-order polynomial, a 24-regressor model of motion \cite{Friston1996}, five nuisance signals identified via principal component analysis of signals obtained from white matter (CompCor \cite{Behzadi2007}), and the mean CSF signal. WM and CSF signals were extracted using the previously described masks after transforming the fMRI data to match them in 2-mm space using the inverse of the linear fMRI-sMRI transform. NVR residuals were written into MNI space at 3-mm resolution and subsequently smoothed using a 6-mm FWHM kernel. Individual-level analyses of the MSIT and MD tasks were performed in FSL using a general linear model. The expected hemodynamic response of each task condition was derived from a boxcar model, specified from stimulus onset and duration times, convolved with a canonical hemodynamic response function. Multiple regressions were performed at each voxel with fMRI activity as the dependent variable and the task regressors as independent variables. Regression coefficients at each voxel were contrasted to derive a statistic for the difference in activation between task conditions (incongruent $>$ congruent for MSIT, dilemma $>$ control for MD). The resulting individual-level task maps were entered into group-level one-sample t-tests, whose significance was assessed using multiple comparison correction via a permutation test (10,000 permutations) implemented by FSL’s randomise ($p<0.001$ FWE, threshold-free cluster enhancement (TFCE) \cite{Salimi-Khorshidi2011}). Participants with missing behavioral data, or whose behavioral responses were outliers ($>1.5\times$ the interquartile range), were excluded from group-level analysis. Maps of DMN functional connectivity were extracted from the resting state and neurofeedback scans using a dual regression procedure \cite{Filippini2009}.
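In outline, dual regression first regresses each volume onto a set of spatial network templates to obtain network time courses, and then regresses each voxel's time series onto those time courses to obtain subject-level connectivity maps. The following numpy sketch illustrates the two stages; array names are hypothetical, demeaning and variance normalization are omitted, and this is an illustration rather than the FSL implementation that was used.
\begin{verbatim}
import numpy as np

# data: time x voxels; templates: voxels x networks
def dual_regression(data, templates):
    # Stage 1 (spatial regression): network time courses
    tcs, _, _, _ = np.linalg.lstsq(
        templates, data.T, rcond=None)   # networks x time
    tcs = tcs.T                          # time x networks
    # Stage 2 (temporal regression): voxel-wise beta maps
    betas, _, _, _ = np.linalg.lstsq(
        tcs, data, rcond=None)           # networks x voxels
    return tcs, betas
\end{verbatim}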
Time courses from 10 commonly occurring intrinsic connectivity networks (ICNs) were extracted by spatially regressing each dataset onto templates derived from a meta-analysis of task and resting state datasets \cite{Smith2009}. The resulting time courses were entered into voxel-wise multiple regressions to derive a connectivity map for each of the 10 ICNs. The map corresponding to the DMN was subsequently extracted and entered into group-level one-sample t-tests, whose significance was assessed using multiple comparison correction via a permutation test (10,000 permutations) implemented by FSL’s randomise ($p<0.001$ FWE, TFCE \cite{Salimi-Khorshidi2011}). To evaluate performance on the neurofeedback task, the DMN time course for each participant was correlated with an ideal time course (obtained from the instruction onset and duration information, i.e., focus or wander). Participants who reported sleeping during the resting-state scan were excluded from its analysis, and analysis of the feedback scan was performed both with and without participants who reported sleep.
\subsection{Validation Results}
\subsubsection{Behavioral Assessment}
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{bxplots.pdf}
\caption{Visualizations of behavioral results for all tasks. Panels A-C show data as violin plots (filled circle represents the mean, line range represents the confidence interval, ‘X’ data points indicate outliers ($1.5\times$ the interquartile range)). Panel A shows $A'$, false alarm rate, hit rate, and mean reaction time for the RVIP. Panel B shows accuracy (left) and reaction time (right) for congruent relative to incongruent trials of the MSIT task. Panel C shows accuracy for the control trials, as well as reaction time for both the dilemma and control trials of the Moral Dilemma Task. Panel D shows self-reported sleep (none, sleep during train only, sleep during test only, and sleep during train and test sessions) as a percentage of the total sample on the left and correct responses (i.e., pressed any button) during catch trials as a histogram.}
\label{fig:bx_plots}
\end{figure}
The behavioral results for the Rapid Visual Information Processing (RVIP), Multi-Source Interference Task (MSIT), Moral Dilemma Task (MD), and Neurofeedback Task (NFB) are illustrated in Fig.~\ref{fig:bx_plots}. Reaction times, hit rates, and $A'$ calculated from the RVIP task were consistent with, although slightly worse than, those previously published in young adults (n=20, age 25.30 $\pm$ 5.09 years, RVIP reaction time 425.6 $\pm$ 43.0 ms, $A'$ = 0.936 $\pm$ 0.05, \cite{Turner2003}). The differences in performance may have been due to the inclusion of participants with psychiatric diagnoses in the NFB population. The RVIP had the highest number of outliers of the tasks, with 16 participants having false alarm rates greater than 1.5 times the interquartile range, 11 of whom also had outlier $A'$ values (see Fig.~\ref{fig:outliers}). Similarly, Figure~\ref{fig:bx_plots}B illustrates that performance on the MSIT was consistent with, although a little worse than, values published in a previous validation study (n = 8, 4 females, age = 30.4 $\pm$ 5.6 years, congruent trial reaction time = 479 $\pm$ 92 ms, incongruent trial reaction time = 787 $\pm$ 129 ms \cite{Bush2003}).
MSIT data were only available for 107 of the participants due to a faulty button box; of these, a total of 13 had responses that were outliers on at least one of the statistics derived from the task (see Fig.~\ref{fig:outliers}). For the MD task, the difference in reaction time distributions between control and dilemma trials supports the hypothesis that the dilemma trials require more processing, as expected \cite{Harrison2008} (see Fig.~\ref{fig:bx_plots}C). MD results are available for 121 participants due to button box problems, and 12 of the remaining participants had outlier accuracy on the control trials (see Fig.~\ref{fig:outliers}). Only 58.5\% of the participants reported remaining awake during both the resting state and neurofeedback scans; these subjective reports were validated by catch trial performance (see Fig.~\ref{fig:bx_plots}D).
\begin{figure}[h!]
\centering
\includegraphics[width=.75\textwidth]{outliers.pdf}
\caption{The intersection of behavioral assessment outliers and missing data. Participants are ordered by their number of outliers or missing measurements.}
\label{fig:outliers}
\end{figure}
When using the data, it might be desirable to exclude participants who had poor task performance, slept during parts of the experiment, or experienced technical issues that resulted in missing data. Figure~\ref{fig:outliers} illustrates the intersection of these various problems across participants to give a better understanding of their total impact on sample size. Data are missing for twenty-three participants due to technical errors such as button box malfunction, scanner problems, and power outages. Two participants asked to be removed from the scanner before the task was completed. A total of 50 participants have some unusable data due to poor task performance or falling asleep during the resting state or feedback scans.
\subsubsection{Quality Assessment Protocol}
To evaluate the spatial and temporal quality of the NFB fMRI data, QAP measures calculated on the data were compared to those calculated from resting state fMRI scans from the Consortium on Reproducibility and Reliability (CoRR) \cite{Zuo2014}. The CoRR dataset contains scans on 1,499 participants using a variety of test-retest experiment designs. To avoid biasing the comparison with multiple scans from the same subjects, the first resting state scan acquired in the first scanning session was used for each participant. Signal-to-noise ratio, ghost-to-signal ratio, voxel smoothness, global correlation, standardized DVARS, and mean RMSD of the NFB fMRI data are all comparable to CoRR (Fig.~\ref{fig:qaplots}). Head motion, as indexed by mean RMSD, is higher for the resting state (train) and neurofeedback scans than for the other two tasks (Fig.~\ref{fig:qaplots}F).
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{qaplots.pdf}
\caption{Illustrations of the overall quality of the functional neuroimaging data. Distributions of the measures for fMRI data from the MSIT, Moral Dilemma, Resting Scan (NFB Train), and Neurofeedback (NFB Test) are shown in comparison to resting state fMRI data from the CoRR dataset. Quality measures are represented as violin plots overlaid with boxplots (center line represents the median, box range represents the confidence interval, ‘X’ data points indicate outliers ($1.5\times$ the interquartile range)). Lines representing the 5$^{th}$, 10$^{th}$, 25$^{th}$, 50$^{th}$, 75$^{th}$, 90$^{th}$, and 95$^{th}$ percentiles for the CoRR data are shown to simplify visual comparisons.
Standardized DVARS is the mean root-mean-square variance of the temporal derivative of voxel time courses, standardized to make the values comparable across different acquisition protocols. Mean RMSD is the mean root-mean-square deviation (RMSD) of motion between consecutive volumes.}
\label{fig:qaplots}
\end{figure}
\subsubsection{fMRI Assessment}
The group-level analysis of the MSIT task included 87 participants, after 18 were excluded for missing data and 20 were excluded for being outliers on either response accuracy or mean reaction time (see Fig.~\ref{fig:outliers}). Data from 110 participants were included in the group-level analysis of the MD task, after 5 participants were excluded due to technical problems and another 10 were excluded due to outlier task performance. As expected from the literature, the MD task robustly activated the DMN (Fig.~\ref{fig:tfmri_plots}A, red) and deactivated attentional networks (Fig.~\ref{fig:tfmri_plots}A, blue) that are typically active during task performance \cite{Harrison2008}. Consistent with the previous literature \cite{Bush2003}, group-level analysis of the MSIT showed robust activation of the dorsal and ventral attention networks (Fig.~\ref{fig:tfmri_plots}B, red) and deactivation of the DMN (Fig.~\ref{fig:tfmri_plots}B, blue). Twenty-one participants reported falling asleep during the resting state scan and were excluded from group-level analysis. Figure~\ref{fig:tfmri_plots}C illustrates the expected pattern of anti-correlation between the DMN and task networks \cite{Fox2005} for the remaining 104 participants. These results confirm that these three tasks are working as expected for deactivating, activating, and localizing the DMN.
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{tfmri_plots.png}
\caption{Default mode network patterns extracted from the Moral Dilemma, MSIT, and resting state fMRI tasks. A) The dilemma $>$ control contrast from group-level analysis of the Moral Dilemma task results in DMN activation. B) The incongruent $>$ congruent contrast of the MSIT shows DMN deactivation. C) Functional connectivity of the DMN extracted from the resting state task using dual regression. Statistical maps were generated by a permutation group analysis, thresholded at $p < 0.001$ TFCE FWE-corrected; overlay colors represent $t$ statistics.}
\label{fig:tfmri_plots}
\end{figure}
The neurofeedback task was analyzed with all of the participants who completed the scan (n=121) (Fig.~\ref{fig:nfb_plots}A and B) and again with only the participants who did not fall asleep during the resting state or neurofeedback tasks (Fig.~\ref{fig:nfb_plots}C). The group-averaged DMN map extracted from these data is consistent with what we expect, with the exception of the prominent negative correlations (Fig.~\ref{fig:nfb_plots}A). Comparing the group mean DMN time course for all participants to the ideal task time course shows that overall the task trend is followed, with a good deal of high-frequency noise (Fig.~\ref{fig:nfb_plots}B). When the participants who fell asleep are removed, the high-frequency noise remains, but the amplitude difference between wander and focus trials becomes greater, driving a higher correlation with the task waveform (Fig.~\ref{fig:nfb_plots}C). This is further seen in the distribution of individual-level correlations between DMN activity and the task, where those who did not sleep performed marginally better for both conditions (Fig.~\ref{fig:nfb_perf}).
\begin{figure}[h!]
\centering
\includegraphics[width=.9\textwidth]{nfb_plots.png}
\caption{Technical validation for the neurofeedback paradigm. Panel A shows the functional connectivity map for the default mode network across all participants ($p < 10^{-30}$ uncorrected, n=121) derived through dual regression. Panel B shows the average overall time series (in dark blue, with shading indicating standard error) of the default mode network, across all participants (n=121), in relation to the ideal time series (in black). The coefficient of determination of the averaged time series is $R^2 = 0.36$. Panel C shows the average overall time series (in dark blue, with shading indicating standard error) of the default mode network across participants who reported no sleep during both training and feedback trials (n=76), in relation to the ideal time series (in black). The coefficient of determination of the averaged time series is $R^2 = 0.68$.}
\label{fig:nfb_plots}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics{nfb_perf.png}
\caption{Distribution of individual correlations between participant default mode network time series and the ideal time series as a function of sleep status. Each sleep status group is further sub-divided into the two task conditions -- Focus/Wander. Subjects who reported sleep during both trials show lower correlations with the model compared to subjects with no reported sleep. The dots indicate the mean within each group, and the distributions of the correlations are plotted around the mean.}
\label{fig:nfb_perf}
\end{figure}
\section{Usage Notes}
The PSWQ and PTQ were added to the assessment battery in July 2014, approximately nine months after data collection began. As a result, scores for these measures are missing from the first 26 and 27 participants, respectively. Additionally, in July of 2014, the full-scale Response Styles Questionnaire (RSQ) was replaced with the newer subscale RRS, which has better psychometric properties and fewer questions. The only difference between the RRS subscale of the RSQ and the newer RRS is that one item from the Depression-Related subscale in the RRS is absent from the RSQ. To correct for this missing item in those who received the RSQ, we suggest the following: (1) divide the RSQ-derived Depression-Related subscale score by 11; (2) round down to the nearest whole number; and (3) add this number to the RSQ-derived Depression-Related subscale score and to the RSQ-derived RRS-subscale total score. This procedure was validated using responses from 13 participants who received both the RRS and the RSQ. Correlations between scores calculated using all of the questions and those calculated using the above procedure were $r = 0.9994$ for the Depression-Related subscale and $r = 0.9997$ for the RRS total score. For the MSIT, fMRI data should be analyzed as 42-s blocks with the following onset times (in seconds) after dropping the first four TRs: congruent blocks -- 22, 106, 190, 274; incongruent blocks -- 64, 148, 232, 316. The Moral Dilemma fMRI data should be analyzed as 30-s blocks with the following onset times (in seconds) after dropping the first four TRs: control blocks -- 12, 72, 132, 192; dilemma blocks -- 42, 102, 162, 222. Note that up until April 2014 (first 11 participants), no pre-experiment fixation period was implemented for the moral dilemma task. For those participants, all of the aforementioned stimulus onset times should be altered by subtracting 20 s. For analysis of the NFB fMRI data, the stimulus onset (and duration) times for the first condition, in seconds, are as follows: 34(30), 162(60), 260(90), 418(60), 576(30), 674(90).
The stimulus onset (and duration) times for the second condition, in seconds, are as follows: 68(90), 226(30), 354(60), 482(90), 610(60), 768(30). This timing information is also available in \texttt{events.tsv} files provided in the repository alongside the task fMRI data. For some participants, the button box used to record participant responses in the scanner was defective, resulting in unusable data for the MSIT task (18 participants) and the Moral Dilemma task (5 participants). As previously discussed, scripts are provided with each of the tasks for calculating response accuracies and reaction times. The assumptions made in calculating these scores may not be appropriate for all researchers or in all applications. For this reason, we have released the trial-by-trial response information in log files that accompany the fMRI data. The impact of sleep on intrinsic brain activity, and the preponderance of sleep that occurs during resting state fMRI acquisition, has been highlighted in the literature \cite{Duyn2011,Tagliazucchi2014}. Indeed, a large proportion of the participants in this study reported falling asleep during either the resting state or neurofeedback scans. Whether or not these data should be excluded is up to the researcher and depends on the analysis being performed. Users of this resource might also consider whether they trust the self-reports, or whether they should try to decode an objective measure of sleep from the data \cite{Tagliazucchi2014}. Additionally, researchers could potentially use the respiratory, heart rate, or galvanic skin response recordings provided in the repository to index wakefulness. See the data privacy information in the Organization and Access to the Repository section for restrictions and limitations on data use.
\section{Discussion/Conclusions}
This manuscript describes a repository of data from an experiment designed to evaluate DMN function across a variety of clinical and subclinical symptoms in a community-ascertained sample of 180 adults (50\% females, aged 21-45). The data include assessments that cover a variety of domains that have been associated with or affected by DMN activity, including emotion regulation, mind wandering, rumination, and sustained attention. Functional MRI task data are included for tasks shown to activate, deactivate, and localize the DMN, along with a novel real-time fMRI neurofeedback paradigm for evaluating DMN regulation. Preliminary analysis of the first 125 participants released for sharing confirms that each of the tasks is operating as expected. Group-level analysis of the neurofeedback data indicates that the participants were able to modulate their DMN in accordance with the task instructions. Across all participants, the group-average time course co-varies significantly with the task model, and this covariation increases when participants who reported sleep are removed. More than half of the participants who reported no sleep had a significant correlation ($p < 0.05$; $r > 0.15$, phase randomization permutation test) with the task model (see Fig.~\ref{fig:nfb_perf}). The large proportion of participants who were able to significantly modulate their DMN shows that the neurofeedback protocol was effective. One unexpected result of our technical validation is a high amount of data loss due to poor participant performance and compliance. Excluding all of the participants who are considered outliers on at least one of the tasks, or who fell asleep during the resting state or neurofeedback scans, would remove 73 of the 125 participants (see Fig.~\ref{fig:outliers}).
Forty-six individuals could be excluded for sleep, which is not surprising given recent reports of the high incidence of sleep during resting state scans \cite{Tagliazucchi2014}. Interestingly, 26 of these participants were also outliers on at least one other task. This indicates that they either had trouble staying awake during the other tasks as well or were non-compliant overall. Due to a paucity of information on participant compliance during fMRI experiments, it is hard to tell whether what we are seeing is common for our population or whether there is a specific problem with our experimental protocol. We do believe that compliance would be higher if we were to utilize a younger, healthy population, or participants who have been scanned multiple times, as is commonly done in cognitive neuroscience. These problems with sleep and poor performance illustrate the need to debrief participants, monitor their wakefulness during the scan, or try to decode sleep from the fMRI data \cite{Tagliazucchi2014}. Additionally, it is important to check the data as they are acquired so that the study protocol can be adapted to reduce data loss. Although much of the interest in real-time fMRI-based neurofeedback is focused on clinical interventions \cite{Stoeckel2014}, it is also a valuable paradigm for probing typical and atypical brain function. We believe that the data in this repository will have a substantial impact on understanding the nuances of DMN function and how variation in its regulation leads to variation in phenotype. This resource will be particularly useful to students and junior researchers for testing their hypotheses about DMN function, learning new techniques, developing analytical methods, and as pilot data for obtaining grants. We encourage users to provide us with feedback for improving the resource and are very interested to learn about the research performed with the data.
{ "alphanum_fraction": 0.7878666429, "avg_line_length": 181.0967741935, "ext": "tex", "hexsha": "c6ca159d5c4e7a2bc2070761f0da0a8a7691cc5d", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1ca1e7a99c5558ddaa8f9ee55b4ec67658769388", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ccraddock/nfb_manuscript", "max_forks_repo_path": "mcdonald_nfb_datasharing_body.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1ca1e7a99c5558ddaa8f9ee55b4ec67658769388", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ccraddock/nfb_manuscript", "max_issues_repo_path": "mcdonald_nfb_datasharing_body.tex", "max_line_length": 2010, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1ca1e7a99c5558ddaa8f9ee55b4ec67658769388", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ccraddock/nfb_manuscript", "max_stars_repo_path": "mcdonald_nfb_datasharing_body.tex", "max_stars_repo_stars_event_max_datetime": "2018-03-18T21:53:19.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-18T21:53:19.000Z", "num_tokens": 15716, "size": 67368 }
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\usepackage[margin=2.8cm,twoside]{geometry}
\usepackage[super]{nth}
\usepackage[english]{babel}
\usepackage{csquotes}
\usepackage{hyperref}
\usepackage[backend=biber,style=ieee]{biblatex}
\addbibresource{bibliography.bib}
\usepackage{float}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[LE,RO]{DID - 06 Schedule Update (Hebb \& Stephan)}
\fancyhead[LO,RE]{\leftmark}
\fancyfoot[LE,RO]{\thepage}
\usepackage{graphicx}
\begin{document}
\begin{titlepage} \begin{center} \vspace*{1cm} \LARGE\textsc{Royal Military College of Canada}\normalsize \vspace{0.2cm} \textsc{Department of Electrical and Computer Engineering} \vspace{1.5cm} \includegraphics[width=0.3\textwidth]{rmcLogo.png} \vspace{1.5cm} \LARGE{Designing Coatimunde\\} \vspace{0.2cm} \normalsize{Computer Optics Analyzing Trajectories In Mostly Unknown, Navigation Denied, Environments} \vspace{0.1cm} \normalsize{DID-06 - Schedule Update} \vfill \textbf{Presented by:}\\Amos Navarre \textsc{Hebb} \& Kara \textsc{Stephan}\\ \vspace{0.8cm} \textbf{Presented to:}\\Captain Anthony \textsc{Marasco} \& Dr. Rachid \textsc{Beguenane} \vspace{0.8cm} \today \end{center} \end{titlepage}
\section{Document Purpose}
The purpose of this document is to describe the requirements for the Schedule Update. It includes both a graphical and a textual representation of the current project schedule and identifies some schedule risk items.
\section{Changes}
The major cause of delay for our project is the proper identification of targets and the robot's ability to move towards an identified target while avoiding obstacles. The updated schedule, shown in Figure~\ref{fig:SUp01}, extends the deadline for this task, which shortens the time available later in the semester to port to the UAV. This task is on the critical path for our project: the next task, avoiding obstacles and moving towards an identified target with memory of past movements, depends on the completion of the current one.
\begin{figure}[H] \centering \includegraphics[width=\linewidth]{SUp01} \caption{Textual Representation of Schedule} \label{fig:SUp01} \end{figure}
\begin{figure}[H] \centering \includegraphics[width=\linewidth]{SUp02} \caption{Graphical Representation of Schedule} \label{fig:SUp02} \end{figure}
\section{Conclusion}
Underestimating the difficulty of implementing even simple computer vision has resulted in significant delays. We intend to abstract away more of the control and use pre-existing modules in the event that there is not as much time as we would like to port our logic to a flying platform.
%\end{multicols}
\end{document}
{ "alphanum_fraction": 0.7517780939, "avg_line_length": 29.2916666667, "ext": "tex", "hexsha": "a5d3dce467fd9800652ff7dab8bef93230358ba0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c41168ad7a51de9863da09bacee02cd09419d8f8", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "navh/coatimunde", "max_forks_repo_path": "Data Item Deliverables/ScheduleUpdate/ScheduleUpdate.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c41168ad7a51de9863da09bacee02cd09419d8f8", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "navh/coatimunde", "max_issues_repo_path": "Data Item Deliverables/ScheduleUpdate/ScheduleUpdate.tex", "max_line_length": 550, "max_stars_count": null, "max_stars_repo_head_hexsha": "c41168ad7a51de9863da09bacee02cd09419d8f8", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "navh/coatimunde", "max_stars_repo_path": "Data Item Deliverables/ScheduleUpdate/ScheduleUpdate.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 819, "size": 2812 }
\documentclass{subfile} \begin{document} \section{PMO}\label{sec:pmo} \begin{problem}[$2019$ Finals, problem $5$] Let $a_{0},a_{1},\ldots$ be a sequence of positive real numbers such that $a_{0}$ is an integer, $a_{i}\leq a_{i-1}+1$ for $1\leq i\leq n$ and \begin{align*} \sum_{i=1}^{n}\dfrac{1}{a_{i}} & \leq 1 \end{align*} Prove that \begin{align*} n & \leq 4a_{0}\sum_{i=1}^{n}\dfrac{1}{a_{i}} \end{align*} \end{problem} \begin{problem}[$2017$ Finals, problem $6$] Three sequences $a_{0},\ldots,a_{n};b_{0},\ldots,b_{n};c_{0},\ldots,c_{2n}$ of non-negative real numbers are given such that for all $0\leq i,j\leq n$, we have $a_{i}b_{j}\leq c_{i+j}^{2}$. Prove that \begin{align*} \left(\sum_{i=0}^{n}a_{i}\right)\left(\sum_{i=0}^{n}b_{i}\right) & \leq \left(\sum_{i=0}^{2n}c_{i}\right)^{2} \end{align*} \end{problem} \begin{problem}[$2014$ Finals, problem $2$] Let $n$ be a positive integer, $k\geq2$ be an integer and $a_{1},\ldots,a_{k};b_{1},\ldots,b_{n}$ be integers such that \begin{align*} 1<a_{1}<\ldots<a_{k} & < b_{1}<\ldots<b_{n} \end{align*} Prove that if \begin{align*} a_{1}+\ldots+a_{k} & > b_{1}+\ldots+b_{n} \end{align*} then \begin{align*} a_{1}\cdots a_{k} & > b_{1}\cdots b_{n} \end{align*} \end{problem} \begin{problem}[$2013$ Finals, problem $5$] Let $k,m,n$ be distinct positive integers. Prove that \begin{align*} \left(k-\dfrac{1}{k}\right)\left(m-\dfrac{1}{m}\right)\left(n-\dfrac{1}{n}\right) & \leq kmn-(k+m+n) \end{align*} \end{problem} \begin{problem}[$2012$ Finals, problem $6$] Show that for any positive real numbers $a,b,c$, \begin{align*} \left(\dfrac{a-b}{c}\right)^{2}+\left(\dfrac{c-a}{b}\right)^{2}+\left(\dfrac{a-b}{c}\right)^{2} & \geq 2\sqrt{2}\left(\dfrac{a-b}{c}+\dfrac{b-c}{a}+\dfrac{c-a}{b}\right) \end{align*} \end{problem} \begin{problem}[$2008$ Finals, problem $1$] In each cell of an $n\times n$ matrix, a positive integer at most $n^{2}$ is written. In the first row, $1,\ldots,n$ are written, in the second $n+1,\ldots,2n$ are written and so on. $n$ numbers are selected such that no two numbers are in the same row or column. If $a_{i}$ is the number chosen from row $i$, prove that \begin{align*} \dfrac{1}{a_{1}}+\dfrac{2^{2}}{a_{2}}+\ldots+\dfrac{n^{2}}{a_{n}} & \geq \dfrac{n+2}{2}-\dfrac{1}{n^{2}+1} \end{align*} \end{problem} \begin{problem}[$2004$ Finals, problem $4$] Let $a,b,c$ be real numbers. Prove that \begin{align*} \sqrt{2(a^{2}+b^{2})}+\sqrt{2(b^{2}+c^{2})}+\sqrt{2(c^{2}+a^{2})} & \geq \sqrt{3(a+b)^{2}+3(b+c)^{2}+3(c+a)^{2}} \end{align*} \end{problem} \begin{problem}[$2003$ Finals, problem $2$] Let $a$ be a real number such that $0<a<1$. Prove that for all finite strictly increasing sequences $k_{1},\ldots,k_{n}$ of non-negative integers, \begin{align*} \left(\sum_{i=1}^{n}a^{k_{i}}\right)^{2} & < \left(\dfrac{1+a}{1-a}\right)\sum_{i=1}^{n}a^{2k_{i}} \end{align*} \end{problem} \begin{problem}[$2002$ Finals, problem $4$] Let $n\geq 3$ be an integer and $x_{1},\ldots,x_{n}$ be non-negative real numbers. Prove that at least one of the following inequalities are true. \begin{align*} \sum_{i=1}^{n}\dfrac{x_{i}}{x_{i+1}+x_{i+2}} & \geq \dfrac{n}{2}\\ \sum_{i=1}^{n}\dfrac{x_{i}}{x_{i-1}+x_{i-2}} & \geq \dfrac{n}{2} \end{align*} where $x_{n+2}=x_{2},x_{n+1}=x_{1},x_{0}=x_{n},x_{-1}=x_{n-1}$. \end{problem} \begin{problem}[$2001$ Finals, problem $1$] Let $n$ be a positive integer and $x_{1},\ldots,x_{n}$ be positive real numbers. 
Prove that \begin{align*} x_{1}+\ldots+nx_{n} & \leq \dfrac{n(n-1)}{2}+x_{1}+\ldots+x_{n}^{n} \end{align*} \end{problem} \begin{problem}[$1999$ Finals, problem $2$] Let $n$ be a positive integer and $a_{1},\ldots,a_{n};b_{1},\ldots,b_{n}$ be positive real numbers. Prove that \begin{align*} \sum_{1\leq i<j\leq n}|a_{i}-a_{j}|+\sum_{1\leq i<j\leq n}|b_{i}-b_{j}| & \leq \sum_{i=1}^{n}\sum_{j=1}^{n}|a_{i}-b_{j}| \end{align*} \end{problem} \begin{problem}[$1996$ Finals, problem $3$] Let $n$ be a positive integer and $a_{1},\ldots,a_{n};x_{1},\ldots,x_{n}$ be positive real numbers such that \begin{align*} a_{1}+\ldots+a_{n} & = x_{1}+\ldots+x_{n}=1 \end{align*} Show that \begin{align*} 2\sum_{1\leq i<j\leq n}x_{i}x_{j} & \leq \dfrac{n-2}{n-1}+\sum_{i=1}^{n}\dfrac{a_{i}x_{i}^{2}}{1-a_{i}} \end{align*} When do we have equality? \end{problem} \begin{problem}[$1995$ Finals, problem $1$] Let $n$ be a positive integer and $\mathbf{x}=(x_{1},\ldots,x_{n})$ be positive real numbers such that $\mathfrak{H}(\mathbf{x})=1$. Find the smallest possible value of \begin{align*} x_{1}+\ldots+\dfrac{x_{n}^{n}}{n} \end{align*} \end{problem} \begin{problem}[$1994$ Finals, problem $3$] Let $n>3$ be an integer and $X=\{x_{1},\ldots,x_{n}\}$ be set of distinct real numbers such that $\sum_{i=1}^{n}x_{i}=0$ and $\sum_{i=1}^{n}x_{i}^{2}=1$. Show that there are real numbers $a,b,c,d\in X$ such that \begin{align*} a+b+c+nabc & \leq \sum_{i=1}^{n}x_{i}^{3}\leq a+b+d+nabd \end{align*} \end{problem} \end{document}
{ "alphanum_fraction": 0.5794824399, "avg_line_length": 38.6428571429, "ext": "tex", "hexsha": "4f40728fa286129becd1415281d10cc0c1eacea9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "ineq-tech/inequality", "max_forks_repo_path": "pmo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "ineq-tech/inequality", "max_issues_repo_path": "pmo.tex", "max_line_length": 323, "max_stars_count": 1, "max_stars_repo_head_hexsha": "ebf89351c843b6a7516e10e2ebf0d64e3f1f3f83", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "ineq-tech/inequality", "max_stars_repo_path": "pmo.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-06T08:29:30.000Z", "max_stars_repo_stars_event_min_datetime": "2022-02-06T08:29:30.000Z", "num_tokens": 2390, "size": 5410 }
% Copyright (c) 2005 Nokia Corporation % % Licensed under the Apache License, Version 2.0 (the "License"); % you may not use this file except in compliance with the License. % You may obtain a copy of the License at % % http://www.apache.org/licenses/LICENSE-2.0 % % Unless required by applicable law or agreed to in writing, software % distributed under the License is distributed on an "AS IS" BASIS, % WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. % See the License for the specific language governing permissions and % limitations under the License. \section{\module{e32dbm} --- DBM implemented using the Symbian native DBMS} \label{sec:e32dbm} \declaremodule{}{e32dbm} \platform{S60} \modulesynopsis{DBM implemented using the Symbian native DBMS} The \module{e32dbm} module provides a DBM API that uses the native Symbian RDBMS as its storage back-end. The module API resembles that of the \refmodule{gdbm} module. The main differences are: \begin{itemize} \item The \method{firstkey()} - \method{nextkey()} interface for iterating through keys is not supported. Use the \code{"for key in db"} idiom or the \method{keys} or \method{keysiter} methods instead. \item This module supports a more complete set of dictionary features than \refmodule{gdbm} \item The values are always stored as Unicode, and thus the values returned are Unicode strings even if they were given to the DBM as normal strings. \end{itemize} \subsection{Module Level Functions} \label{subsec:mylabel16} The \module{e32dbm} defines the following functions: \begin{funcdesc}{open}{dbname\optional{,flags, mode}} Opens or creates the given database file and returns an \class{e32dbm} object. Note that \var{dbname} should be a full path name, for example, \textsf{u'c:$\backslash \backslash $foo.db'}. Flags can be: \begin{itemize} \item \code{'r'}: opens an existing database in read-only mode. This is the default value. \item \code{'w'}: opens an existing database in read-write mode. \item \code{'c'}: opens a database in read-write mode. Creates a new database if the database does not exist. \item \code{'n'}: creates a new empty database and opens it in read-write mode. \end{itemize} If the character \code{'f'} is appended to flags, the database is opened in \textit{fast mode}. In fast mode, updates are written to the database only when one of these methods is called: \method{sync}, \method{close}, \method{reorganize}, or \method{clear}. \end{funcdesc} Since the connection object destructor calls \method{close}, it is not strictly necessary to close the database before exiting to ensure that data is saved, but it is still good practice to call the \method{close} method when you are done with using the database. Closing the database releases the lock on the file and allows the file to be reopened or deleted without exiting the interpreter. If you plan to do several updates, it is highly recommended that you open the database in fast mode, since inserts and updates are more efficient when they are bundled together in a larger transaction. This is especially important when you plan to insert large amounts of data, since inserting records to \refmodule{e32db} is very slow if done one record at a time. \subsection{e32dbm Objects} The \module{e32dbm} objects returned by the \method{open} function support most of the standard dictionary methods. 
The supported dictionary methods are: \begin{itemize} \item \code{__getitem__} \item \code{__setitem__} \item \code{__delitem__} \item \code{has_key} \item \code{update} \item \code{__len__} \item \code{__iter__} \item \code{iterkeys} \item \code{iteritems} \item \code{itervalues} \item \code{get} \item \code{setdefault} \item \code{pop} \item \code{popitem} \item \code{clear} \end{itemize} These work the same way as the corresponding methods in a normal dictionary. In addition, \class{e32dbm} objects have the following methods: \begin{methoddesc}[e32dbm]{close}{} Closes the database. In fast mode, commits all pending updates to disk. \method{close} raises an exception if called on a database that is not open. \end{methoddesc} \begin{methoddesc}[e32dbm]{reorganize}{} Reorganizes the database. Reorganization calls \method{compact} on the underlying \refmodule{e32db} database file, which reclaims unused space in the file. Reorganizing the database is recommended after several updates. \end{methoddesc} \begin{methoddesc}[e32dbm]{sync}{} In fast mode, commits all pending updates to disk. \end{methoddesc}
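\subsection{Example}

The following minimal sketch illustrates the API described above; the file name and keys are illustrative, and the snippet targets the Python~2 based S60 interpreter. The database is first created in fast mode so that updates are committed in one batch, and then reopened read-only for iteration:

\begin{verbatim}
import e32dbm

# Create the database ('c') and open it in fast mode ('f'), so
# updates are written only on sync(), close(), reorganize() or
# clear().
db = e32dbm.open(u'c:\\bookmarks.db', 'cf')
try:
    db[u'python'] = u'http://www.python.org'
    db[u'example'] = u'http://www.example.org'
    db.sync()           # commit the pending updates to disk
finally:
    db.close()

# Reopen the database read-only and iterate over the stored keys.
db = e32dbm.open(u'c:\\bookmarks.db', 'r')
for key in db:
    print key, db[key]  # values are returned as Unicode strings
db.close()
\end{verbatim}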
{ "alphanum_fraction": 0.7687555163, "avg_line_length": 40.8288288288, "ext": "tex", "hexsha": "16e6d9c62d4cb36b034c8769492516964ca14871", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "tuankien2601/python222", "max_forks_repo_path": "Doc/lib/libe32dbm.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "tuankien2601/python222", "max_issues_repo_path": "Doc/lib/libe32dbm.tex", "max_line_length": 201, "max_stars_count": 1, "max_stars_repo_head_hexsha": "205414c33fba8166167fd8a6a03eda1a68f16316", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "dillionhacker/python222", "max_stars_repo_path": "Doc/lib/libe32dbm.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-17T13:55:02.000Z", "max_stars_repo_stars_event_min_datetime": "2022-03-17T13:55:02.000Z", "num_tokens": 1188, "size": 4532 }
\documentclass[letterpaper,twocolumn,10pt]{article}
\usepackage{epsfig,endnotes}
\usepackage{siunitx} % typesets numbers with units very nicely
\usepackage{enumerate} % allows us to customize our lists
% \usepackage[T1]{fontenc}
\usepackage{amssymb, amsmath, graphicx, titling}
\usepackage[up,bf,raggedright]{titlesec}
\usepackage{blindtext}
\usepackage[usenames,dvipsnames]{color}
\usepackage{fancyvrb}
\usepackage{hyperref}

\begin{document}
\color{Black}
\title{Chasm: Fault-Tolerant, Information-Theoretic Secure Cloud Backup using Secret Sharing}
\author{Alex Grinman, Akshay Ravikumar, Julian Fuchs, Kevin Li \\ \texttt{\{agrinman, akshayr, jfuchs, kmli\}@mit.edu} \\\\ \url{https://github.com/agrinman/chasm}}
\date{5/11/2016}
\maketitle
\begin{abstract} Most ``secure'' cloud backup software encrypts a user's data under a symmetric key that they either store on their computer or back up elsewhere. If a user loses the key when their computer crashes, their encrypted, backed-up data is unrecoverable! Some systems deal with this problem by generating a symmetric key that is derived from a password the user is asked to memorize. A password sacrifices entropy and ultimately weakens the security of the system. The cost of real security in these types of systems is fault-tolerance, which is the underlying motivation for backing up to the cloud. In this paper we present {\bf Chasm}, a fault-tolerant secure cloud backup solution based on Shamir's $k$-out-of-$n$ secret-sharing scheme. \end{abstract}
\section{Motivation}
There are many commercial backup solutions that claim to provide confidentiality and integrity of user data. Some examples include Mozy, Carbonite, Crashplan, and Backblaze. All of these products essentially work the same way: data is encrypted using symmetric encryption along with a message authentication code for integrity and sent to the storage service's cloud data store. Unfortunately, this mechanism of encrypting the data does not solve the problem; it simply concentrates it in a short symmetric key. The maintenance of this symmetric key is central to the fundamental security of the system. \\\\ The threat model that most of these existing solutions operate under is that the adversary is a ``hacker'' who breaks into the storage service and steals data. Hence, in this adversarial model, the hacker will not be able to decrypt the data without the key. \\\\ However, we find three problems with this solution: \begin{enumerate} \item {\bf Low entropy for password-derived keys.} If the symmetric key is derived from a user-memorizable password, then the entropy of the key is much lower, and therefore a hacker or even a powerful cloud storage service could decrypt the data by brute-force guessing the password. \item {\bf Fault-tolerance for lost keys.} If the symmetric key is not password-derived, it must be random and too long to memorize, hence the user must store it somewhere. This defeats the purpose of performing the cloud backup in the first place, as now if the key is lost due to a computer crashing, the data is irrecoverable. \item {\bf Usability.} Remembering passwords or storing keys is an extra hassle for the user, which makes the system less usable. In fact, most users do not encrypt their personal cloud backup either because the software is too complicated or the extra burden is simply not worth the promised protection.
\end{enumerate}
\section{Introduction}
With these motivating flaws of existing solutions in mind, we present the design and implementation of Chasm, a secure cloud backup solution that is truly fault-tolerant and provides information-theoretic confidentiality and integrity based on Shamir's Secret Sharing Scheme. \\\\ The Chasm system is designed for the everyday user. Chasm has no passwords and provides a user interface that everyone already knows: simply drag and drop a file into the Chasm folder, and the rest is taken care of. \\\\ Chasm operates over already existing, independently operated cloud storage services like Dropbox, Google Drive, iCloud Drive, Microsoft One Drive, or AWS, which most users today already have accounts with. \\\\ The security strength of Chasm is determined by $N$, the number of cloud services the user has delegated Chasm to use, and $K$, the user-chosen recoverability threshold. Chasm secret-shares the user's private data among the $N$ cloud storage services, setting the recoverability threshold to $K$ using Shamir's scheme.
\section{Threat Model}
Before we describe the details of Chasm, we first present our adversarial and threat models. We consider the following adversaries: \begin{enumerate} \item {\bf A Malicious Cloud Storage Service} is motivated to read user data for a variety of reasons like system performance upgrades, market research, or possibly even to sell it for more profit. \item {\bf A Hacker who breaks into a storage service} is motivated to find valuable user data to expose for blackmail, sell, or even use to access bank accounts. \item {\bf A Nation-state actor} could be motivated to learn more about a political dissident or state enemy, spy on its own citizens, or spy on citizens of foreign countries. \end{enumerate} Given these adversaries, we consider the following threats: \begin{itemize} \item Cloud storage services can directly read and copy any data that they are storing for the user. \item Large cloud hosting services, nation-state actors, or even hackers can have large computational resources, which may allow them to brute-force passwords or weak keys. \item A nation-state actor (like the NSA, for example) can by legal means coerce $K$ of the storage services to leak user data. \item A nation-state actor may even by legal means require the cloud storage services to deny the user access to their data, thereby causing a denial-of-service attack. \end{itemize} Chasm is designed to defend against these adversarial threats under the assumption that at least $K$ of the $N$ cloud storage services are not corrupted in these ways. The fear of nation-state attackers is especially prevalent in Europe, following the revelations of Edward Snowden. There, concerns grow over the fact that most cloud storage services are US-based and hence susceptible to strong-arming by the American government. Many businesses hence seek alternatives or secure backup schemes in which the cloud storage service cannot necessarily be trusted \cite{gastermann}.
\section{Related Work}
As mentioned previously, existing commercial backup schemes suffer from over-reliance on a password. In addition, their numerous features add a layer of confusion between the user and the backup provider, and layers of complication magnify the potential for human error to introduce vulnerabilities. In a sense, these services seem to offer a security-through-obscurity scheme in which the user simply has to trust that the services provide the promised security.
And even if they do live up to the promised standards, there continues to exist a vulnerability under our current threat model; namely, that exactly one organization controls the data. From a risk management point of view, increasing the number of organizations responsible and decentralizing via a secret sharing scheme can considerably reduce the probability of a breach resulting in harm, and increase the survivability of storage \cite{wylie}.

There exist a number of proposals in the literature for backup schemes of various styles. Each of these trades off different levels of confidentiality, availability, and integrity of data, especially with considerations for the performance of secret sharing schemes and the space costs of replication schemes. One common technique to increase availability in the case of a breach is state machine replication, whereby many machines provide the same service (in this case, back up the same data); if, during restoration, a majority of the machines agree on an answer, that answer is treated as correct \cite{schneider}. In order to preserve confidentiality, the data needs to be encrypted before upload. However, in addition to being very costly space-wise, the need to encrypt data presents a number of problems. In particular, in a state machine replication scheme, key management is difficult to deal with, since all the data need to be redistributed \cite{reiter}. Another, more efficient form of replication is a quorum system. Rather than have all the servers duplicate the data, in a quorum system each of the servers has some subset of the data such that it overlaps with the data held by other servers. More specifically, each server has some nonempty intersection with every other server \cite{naor}. One such scheme that combines a quorum system with secret sharing distributes multiple shares to each server, such that they overlap and multiple servers have copies of the same shares, but no $x - 1$ of the servers together have all of the shares while any $x$ of them do \cite{ito}. While these replication systems are used to improve Byzantine fault tolerance, even against situations where servers arbitrarily deviate from their intended behavior, they are bulky and somewhat inflexible; share renewal in long-term storage systems, for example, is difficult.

A number of solutions have been proposed that combine various replication systems with secret sharing \cite{subbiah, lakshmanan}. The motivation for such solutions primarily stems from the fact that secret sharing is a fairly computationally expensive process, as demonstrated by benchmarks performed by Subbiah and Blough (2005) \cite{subbiah}. To address various problems with pure secret sharing schemes, other improvements have been suggested. For example, POTSHARDS uses two layers of secret sharing to improve computational performance \cite{greenan}, while another scheme, which is more computationally expensive, allows verification of shares after they have been computed, and provides for simpler methods of share renewal \cite{chor}. There also exist schemes which, instead of secret sharing the data itself, encrypt the data with an AES key and secret-share the key \cite{herlihy}. While such a scheme is much more computationally feasible, Subbiah and Blough (2005) point out that it is still susceptible to the weaknesses of the cryptographic keys themselves, so compromise of the keys by brute force or by holes in application security can reveal all the information \cite{subbiah}.
Finally, while all these schemes exist within the minds and writings of academics, one example of secret sharing-based backup being used in enterprise systems is at the Kyoto University Hospital, where the confidentiality, integrity, and availability of patient records are all crucial to maintain, and the data must be recovered quickly in the case of a database disaster \cite{kuroda}.
\section{How Chasm Works}
The primary interface to Chasm is the command-line utility, written in Go, which can perform all of the following tasks. For specific usage information, run \texttt{chasm --help}. There is also a GUI under development.
\begin{figure} \centering \includegraphics[width=0.4\textwidth]{Chasm} \caption{Basic structure of Chasm. When a file is added to the Chasm folder, its contents are secret-shared across all connected storage services.} \end{figure}
\subsection{Setup}
The main step in setting up Chasm is connecting it to several cloud storage services, such as Dropbox or Google Drive. Chasm requires at least two to function and does support multiple accounts on the same service. The user is responsible for authenticating themselves to those services; in most cases, the service will provide a one-time code that the user can copy-paste into Chasm to allow it to read and write the user's cloud storage. When Chasm is started for the first time, it creates a ``Chasm folder'' at the specified location (by default, the user's home directory) and initializes it with a \texttt{.chasm} file. The \texttt{.chasm} file stores all the information necessary for Chasm to function: authentication tokens for each cloud service and metadata about each file (see below for specifics).
\subsection{Sharing/backup}
The Chasm folder can be used like any regular directory: users can add, modify, and remove both files and directories. Chasm listens for all filesystem events within this folder; when a file is created or modified, Chasm splits it into $n$ shares using Shamir's secret sharing scheme, and uploads one share to each cloud store. Removing a file causes all of its shares to be deleted from the cloud stores. Performance is not an issue: the Go implementation of Shamir's sharing can process multi-megabyte files for small $n$ and $k$ in less than a second; moreover, the sharing and uploading all happen in the background. To reduce the amount of information available to an adversary who has access to a cloud store, Chasm assigns a random ID to each file and flattens the directory structure. These features are both implemented through the \texttt{.chasm} file, which contains a mapping between the IDs and the full original file paths. The only file not obscured in this way is the \texttt{.chasm} file itself, since Chasm needs to be able to find it first to restore the rest. One additional benefit of this design is that moving or renaming a file in the Chasm folder only requires resharing the \texttt{.chasm} file, and not the affected file, since its content did not change.
\subsection{Restoring}
To restore the backed-up files, Chasm must first be connected to all the cloud stores as described in Section 5.1. After that, Chasm can completely restore the file contents and directory hierarchy of the original Chasm folder. It downloads the shares stored on every cloud store, and restores \texttt{.chasm} first. It then recreates all the other files and the necessary folder structure according to the metadata in \texttt{.chasm}.
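To make the sharing and restore steps just described concrete, the following is a minimal illustrative sketch of $k$-out-of-$n$ Shamir splitting and reconstruction over GF(256), written in Python for brevity. It is a simplified stand-in for, not a copy of, Chasm's actual Go implementation, and the function names are ours.

\begin{verbatim}
import os

def gf_mul(a, b):            # multiply in GF(256), AES poly 0x11b
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):               # a^254 = a^-1 in GF(256)
    return gf_pow(a, 254)

def split_byte(secret, k, n):
    # Random degree-(k-1) polynomial with constant term = secret;
    # share i is the point (i, f(i)).
    coeffs = [secret] + list(os.urandom(k - 1))
    shares = []
    for x in range(1, n + 1):
        y = 0
        for i, c in enumerate(coeffs):
            y ^= gf_mul(c, gf_pow(x, i))
        shares.append((x, y))
    return shares

def combine_byte(shares):
    # Lagrange interpolation at x = 0 recovers the secret byte.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = gf_mul(num, xj)
                den = gf_mul(den, xi ^ xj)
        secret ^= gf_mul(yi, gf_mul(num, gf_inv(den)))
    return secret

# Any k of the n shares reconstruct the byte; fewer reveal nothing:
# shares = split_byte(0x42, k=2, n=3)
# assert combine_byte(shares[:2]) == 0x42
\end{verbatim}

A file can be shared by applying this byte-wise and sending the $i$-th share of every byte to the $i$-th cloud store; this mirrors the behavior described above without claiming to match Chasm's internal share format.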
Shamir's secret sharing is a malleable scheme, so an adversary with write access to the cloud stores could corrupt the user's data. To prevent this, the \texttt{.chasm} file also includes the SHA-256 hash of each file, allowing Chasm to detect modified files. If $k<n$, then Chasm can also try different combinations of shares to figure out which share was corrupted and restore the file uncorrupted.
\section{System Guarantees}
Chasm is based on Shamir's Secret Sharing Scheme, which provides information-theoretic security. Therefore, we may make the following guarantees: \begin{itemize} \item \textbf{Confidentiality} As long as fewer than $k$ out of $n$ services collude, no adversary can gain any information about the user's data. Therefore, we have information-theoretic confidentiality. \item \textbf{Fault-Tolerance} As long as the user still has access to data from $k$ out of $n$ services, she can recover any lost data when, for example, her computer crashes. \item \textbf{Integrity} The integrity of shares is verified by computing the SHA-256 hash of each share. Therefore, as long as at least $k$ out of $n$ shares have not been corrupted, the original data can be reconstructed. \end{itemize} The user can set the values of $k$ and $n$ as needed, determining the extent to which these three properties are upheld.
\section{Usability}
Chasm provides a simple solution for distributed secret sharing of potentially sensitive data. Because many users already have services like Dropbox and Google Drive, users can install Chasm with minimal overhead. Similarly, the recovery process is extremely simple. For example, if a user's computer crashes, he can simply install Chasm on his new computer, log into his file storage services, and recover his data with a simple command. Restoration can happen at any time on any device. The only requirement is that users know the passwords to these services, which is a reasonable expectation. The user does not need to remember or store any encryption keys. Finally, Chasm offers a simple and usable interface. Upon starting the Chasm process, users can simply drag and drop files into the Chasm folder to sync them across the connected file stores. A GUI for a desktop application is currently being created, which removes the technical overhead of using the command line and simplifies syncing, restoring, and logging into the different services.
\section{Issues}
There exist several known issues and security vulnerabilities, but fortunately their solutions are within reach. In the current framework, it is very easy for an adversary to determine the number of files and the size of each file. While the file structure and the names of each of the files are hidden, this information leakage could potentially be unacceptable to a highly paranoid user. The solution is to concatenate the data and then redivide it into blocks of fixed size, so that only the quantity of data being backed up is visible. While the cloud storage services gain no information about the data since they only receive one of the shares, a potentially malicious service provider or anyone listening on the network on which backups are taking place might be able to look at the outbound shares. Fortunately, most of the commonly used cloud storage services use secure communication protocols to transmit data over networks. While using secret sharing precludes the need for passwords or keys to encrypt data, Chasm still relies on users having secure accounts on cloud storage websites.
The expectation is that users might more easily remember their account information for commonly used websites such as Google, compared with their passwords for seldom-used backups that don't often require password inputs. This presents a potential problem: users with poor password security on their various cloud storage accounts remain vulnerable; namely, adversaries can exploit weak or reused passwords to gain access to backed-up data if they are aware of the scheme that is being used. Improving security on this front is outside the realm of what Chasm can do for users; Chasm's security presupposes its users having adequate security on their cloud storage accounts. Luckily, for many users this is a simple fix: they can adopt longer, more unique passwords and use two-factor authentication.

Researchers in backup systems often point to the fact that secret sharing is extraordinarily computationally expensive. Benchmarks done by Subbiah and Blough (2005) \cite{subbiah}, for one, show that for large values of $n$ or $x$, or for very large amounts of data, processes such as polynomial interpolation in finite fields can grow very expensive. In that respect, Chasm is not necessarily designed for very large networks of servers used to communicate data; notwithstanding the needs of extraordinarily paranoid users, the use of up to $n = 5$ is far more than enough to provide the security needed under our threat model. Furthermore, Chasm is not necessarily designed for backing up large files like videos, and even if it is used for this, uploading is done in the background and will not be an extreme inconvenience to users. The restore function, since it is not performed very often, can also tolerate a higher cost.

There still exists the possibility that an adversary, or a malicious cloud storage organization, might silently modify the shares. By sharing the hashes of the shares, Chasm is still able to detect these modifications, and it is computationally difficult for an adversary to modify data completely silently, assuming target collision resistance. If more than $n - x$ of the servers are compromised and data is modified, then Chasm can do nothing to recover the data, but the extent of this intolerance is no worse than Chasm's Byzantine fault tolerance or crash tolerance: if more than $n - x$ servers simply crashed, or did not return the right data, then the data would be irrecoverable anyway.
\section{Next Steps}
Possible future additions to the core project concept include: \begin{itemize} \item Support for more storage services, allowing larger values for $x$ and $n$ \item Storing the uploaded files as blocks of uniform size to obscure their size \item Encrypting the data before uploading \item Detecting if the storage services have colluded (possibly not feasible) \item Increasing performance by parallelizing the generation of shares and the uploads to the file stores \item Improving performance using methods outlined in any of \cite{subbiah, reiter, greenan} \item A GUI for setup and login instead of the command-line utility \end{itemize}
\section{Conclusion}
In this paper we presented Chasm: an application of Shamir's Secret Sharing scheme that uses independent, pre-existing cloud storage services, like Google Drive and Dropbox, to provide everyday users with information-theoretic confidentiality, integrity, and fault-tolerance for sensitive data and file backup in the cloud.
Chasm beats competing secure backup services both in usability and security while maintaining almost zero trust beyond the correctness of the client-side application. Chasm operates in a more realistic threat model for today's ever-increasing dependence on cloud services. Chasm distributes trust instead of relying on any single point of failure, a principle which we hope more services and applications will start to adopt in the future.
\begin{thebibliography}{9}
\bibitem{chor} Chor, Benny, Shafi Goldwasser, Silvio Micali, and Baruch Awerbuch. ``Verifiable Secret Sharing and Achieving Simultaneity in the Presence of Faults.'' \emph{26th Annual Symposium on Foundations of Computer Science} (1985): 383-95.
\bibitem{gastermann} Gastermann, Bernd, Markus Stopper, Anja Kossik, and Branko Katalinic. ``Secure Implementation of an On-premises Cloud Storage Service for Small and Medium-sized Enterprises.'' \emph{Procedia Engineering} 100 (2015): 574-83.
\bibitem{greenan} Greenan, K., M. Storer, E.l. Miller, and C. Maltzahn. ``POTSHARDS: Storing Data for the Long-term Without Encryption.'' \emph{Third IEEE International Security in Storage Workshop} (2005).
\bibitem{herlihy} Herlihy, Maurice P., and J. D. Tygar. ``How to Make Replicated Data Secure.'' \emph{Advances in Cryptology - CRYPTO '87 Lecture Notes in Computer Science} (1988): 379-91.
\bibitem{ito} Ito, Mitsuru, Akira Saito, and Takao Nishizeki. ``Secret Sharing Scheme Realizing General Access Structure.'' \emph{Electronics and Communications in Japan (Part III: Fundamental Electronic Science)} 72.9 (1989): 56-64.
\bibitem{lakshmanan} Lakshmanan, S., M. Ahamad, and H. Venkateswaran. ``Responsive Security for Stored Data.'' \emph{23rd International Conference on Distributed Computing Systems, 2003. Proceedings}.
\bibitem{kuroda} Kuroda, Tomohiro, Eizen Kimura, Yasushi Matsumura, Yoshinori Yamashita, Haruhiko Hiramatsu, Naoto Kume, and Atsushi Sato. ``Applying Secret Sharing for HIS Backup Exchange.'' \emph{35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society} (2013): 171-74.
\bibitem{naor} Naor, Moni, and Avishai Wool. ``Access Control and Signatures via Quorum Secret Sharing.'' \emph{Proceedings of the 3rd ACM Conference on Computer and Communications Security} 9.9 (1996): 909-22.
\bibitem{reiter} Reiter, Michael K., and Kenneth P. Birman. ``How to Securely Replicate Services.'' \emph{ACM Transactions on Programming Languages and Systems} 16.3 (1994): 986-1009.
\bibitem{schneider} Schneider, Fred B. ``Implementing Fault-tolerant Services Using the State Machine Approach: A Tutorial.'' \emph{CSUR ACM Computing Surveys} 22.4 (1990): 299-319.
\bibitem{subbiah} Subbiah, Arun, and Douglas M. Blough. ``An Approach for Fault Tolerant and Secure Data Storage in Collaborative Work Environments.'' \emph{Proceedings of the 2005 ACM Workshop on Storage Security and Survivability} (2005): 84-93.
\bibitem{wylie} Wylie, Jay J., Michael W. Bigrigg, John D. Strunk, Gregory D. Ganger, Han Kiliccote, and Pradeep K. Khosla. ``Survivable Information Storage Systems.'' \emph{Computer} 33.8 (2000): 61-68.
\end{thebibliography}
\end{document}
{ "alphanum_fraction": 0.8009777115, "avg_line_length": 114.3981042654, "ext": "tex", "hexsha": "e4ba0429cdbffe957c3694cccad2c92ae64d9503", "lang": "TeX", "max_forks_count": 6, "max_forks_repo_forks_event_max_datetime": "2019-12-06T17:17:27.000Z", "max_forks_repo_forks_event_min_datetime": "2017-05-30T17:11:28.000Z", "max_forks_repo_head_hexsha": "09a66093177ed6bfb98a589a5b932bae55b173dd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "agrinman/chasm", "max_forks_repo_path": "docs/writeup/writeup.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "09a66093177ed6bfb98a589a5b932bae55b173dd", "max_issues_repo_issues_event_max_datetime": "2018-03-09T13:57:50.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-16T03:56:29.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "agrinman/chasm", "max_issues_repo_path": "docs/writeup/writeup.tex", "max_line_length": 1019, "max_stars_count": 26, "max_stars_repo_head_hexsha": "09a66093177ed6bfb98a589a5b932bae55b173dd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "agrinman/chasm", "max_stars_repo_path": "docs/writeup/writeup.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-22T18:58:24.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-04T06:24:00.000Z", "num_tokens": 5309, "size": 24138 }
\chapter{The GMAT Design Approach} \section{Approach to Meeting Requirements} \section{GMAT's Design Philosophy} \section{Extendability Considerations} \section{Platform Considerations}
{ "alphanum_fraction": 0.8157894737, "avg_line_length": 19, "ext": "tex", "hexsha": "fce51f6f2872d564fdb044317d5909c90804367a", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": [ "NASA-1.3" ], "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Philosophy.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_licenses": [ "NASA-1.3" ], "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Philosophy.tex", "max_line_length": 42, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_path": "doc/SystemDocs/ArchitecturalSpecification/Philosophy.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "num_tokens": 42, "size": 190 }
%----------------------------------------------------------------------------------------
%	PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------

\documentclass[letterpaper]{twentysecondcv} % a4paper for A4

\usepackage{enumitem}
\setlist[description]{style=nextline}

%----------------------------------------------------------------------------------------
%	PERSONAL INFORMATION
%----------------------------------------------------------------------------------------

\profilepic{alaric_dsouza.jpg} % Profile picture

\cvname{Alaric D'Souza} % Your name
\cvjobtitle{Resident Physician} % Job title/career

%\cvdate{} % Date of birth
%\cvaddress{} % Short address/location, use \newline if more than 1 line is required
\cvnumberphone{714-926-4569} % Phone number
\cvsite{\href{https://alaricdsouza.github.io/}{alaricdsouza.github.io}}% website
\cvorcid{orcid: \href{https://orcid.org/0000-0002-8744-8136}{0000-0002-8744-8136}} % ORCID
\cvmail{[email protected]} % Email address

%\newpage to flush to next page
%\newline for new line

%----------------------------------------------------------------------------------------

\begin{document}

%----------------------------------------------------------------------------------------
%	ABOUT ME
%----------------------------------------------------------------------------------------

\aboutme{
Alaric is a Resident Pediatrician in the Boston Combined Residency Program, where he cares for pediatric patients at Boston Children's Hospital and Boston Medical Center. He completed his MD/PhD at Washington University in St. Louis, where he did his Computational and Systems Biology dissertation research on microbial ecology and antibiotic resistance genes in the Dantas Lab. After residency, Alaric hopes to specialize in Infectious Diseases and lead his own research group studying microbial communities and host-pathogen interactions.
}

%----------------------------------------------------------------------------------------
%	SKILLS
%----------------------------------------------------------------------------------------

\skills{
\begin{description}[leftmargin=0cm]
\item[Software Packages] Adobe Illustrator, Adobe Photoshop, EndNote, LaTeX, Microsoft Office
\item[Programming Languages] Python, R, Bash, Wolfram Mathematica
\item[Languages] English (native)\newline Hindi (beginner)
\item[Misc.] Computer Hardware, Machining and Fabrication, Welding
\end{description}
}

%----------------------------------------------------------------------------------------

\makeprofile % Print the sidebar

%----------------------------------------------------------------------------------------
%	INTERESTS
%----------------------------------------------------------------------------------------

\section{research interests}
Microbial ecology, bacterial genomics, and antibiotic resistance.

%----------------------------------------------------------------------------------------
%	EDUCATION
%----------------------------------------------------------------------------------------

\section{\href{https://alaricdsouza.github.io/\#education}{education}}

\begin{twenty} % Environment for a list with descriptions
\twentyitem{2014-22}{MD}{Washington University in St. Louis}{}
\twentyitem{2016-20}{PhD}{Dantas Lab, Washington University in St.
Louis}{Computational and Systems Biology} \twentyitem{2010-14}{BS}{Yale University}{Ecology and Evolutionary Biology} \end{twenty} %---------------------------------------------------------------------------------------- % EXPERIENCE %---------------------------------------------------------------------------------------- \section{research and work experience} \begin{twenty} % Environment for a list with descriptions \twentyitem{2022-pres.}{Pediatrics Residency}{Boston Combined Residency Program}{Resident Physician | Medical care for pediatric patients at Boston Children's Hospital and Boston Medical Center.} \twentyitem{2016-20}{Dantas Lab}{Washington University in St. Louis}{Doctoral Candidate | Ecology of hospital built environment bacteria and infant gut microbiota.} \twentyitem{2015}{Dantas Lab}{Washington University in St. Louis}{Rotation Student | Autologous Fecal Microbial Transplant Analysis} \twentyitem{2012-14}{Powell Lab}{Yale University}{Undergraduate Researcher | Foraging behavior of \textit{Aedes aegypti} mosquitoes.} \twentyitem{2013}{Gagneux Lab}{Swiss Tropical and Public Health Institute}{Undergraduate Researcher | Antibiotic resistance and mutation rates in \textit{Mycobacterium tuberculosis}.} \twentyitem{2012}{Schlesinger Lab}{The Ohio State University}{Undergraduate Researcher | Macrophage receptor signaling for \textit{Mycobacterium tuberculosis} phagosome lysosome fusion escape.} \twentyitem{2011}{Seva Dhan}{Mumbai, India}{Summer Intern | Treatment of alcoholics and drug addicts.} \twentyitem{2011}{Course Consulting}{Canyon High School}{Consultant | Course design for AP Psychology and IB Theory of Knowledge.} \twentyitem{2009}{Mentee of Dr. P.S. Wijewarnasuriya}{Army Research Labs}{Summer Intern | Semiconductor photoresist optimization for infrared detectors.} \end{twenty} %---------------------------------------------------------------------------------------- % PUBLICATIONS %---------------------------------------------------------------------------------------- \section{\href{https://alaricdsouza.github.io/\#publications}{publications}} * authors contributed equally \begin{twenty} % Environment for a list with descriptions \twentyitem{2022}{Acute and persistent effects of commonly-used antibiotics on the gut microbiome and resistome in healthy adults}{}{Anthony WE, Wang B, Sukhum KV, D’Souza AW, Hink T, Cass C, Seiler S, Reske KA, Coon C, Dubberke ER, Burnham CA, Dantas G, Kwon JH \newline \textit{Cell Reports} | 10.1016/j.celrep.2022.110649 } \twentyitem{2022}{Structural and molecular rationale for the diversification of resistance mediated by the Antibiotic\_NAT family}{}{Stogios PJ, Bordeleau E, Xu Z, Skarina T, Evdokimova E, Chou S, Diorio-Toth L, D'Souza AW, Patel S, Dantas G, Wright GD, Savchenko A \newline \textit{Communications Biology} | 10.1038/s42003-022-03219-w } \twentyitem{2021}{Destination shapes antibiotic resistance gene acquisitions, abundance increases, and diversity changes in Dutch travelers.}{}{D'Souza AW*, Boolchandani M*, Patel S, Galazzo G, Hattem J, Arcilla M, Melles D, Jong M, Schultsz C, Dantas G, Penders J \newline \textit{Genome Medicine} | 10.1186/s13073-021-00893-z } \end{twenty} %---------------------------------------------------------------------------------------- % SECOND PAGE EXAMPLE %---------------------------------------------------------------------------------------- \newgeometry{left=1cm,top=0.7cm,right=1cm,bottom=0.7cm,nohead,nofoot} \newpage % Start a new page \begin{twenty} \twentyitem{2021}{Manure microbial 
communities and resistance profiles reconfigure after transition to manure pits and differ from fertilized field soil}{}{Sukhum K, Vargas R, D'Souza AW, Boolchandani M, Patel S, Kesaraju A, Walljasper G, Hedge H, Ye Z, Valenzuela R, Gunderson P, Bendixsen C, Dantas G, Shukla S \newline \textit{mBio} | 10.1128/mBio.00798-21 } \twentyitem{2020}{Environmental remodeling of human gut microbiota and antibiotic resistome in livestock farms}{}{Sun J*, Liao XP*, D'Souza AW*, Boolchandani M*, Li SH*, Cheng K, Martinez JL, Li L, Feng YJ, Fang LX, Huang T, Xia J, Yu Y, Zhou YF, Sun YX, Deng XB, Zeng ZL, Jiang HX, Fang BH, Tang YZ, Lian XL, Zhang RM, Fang ZW, Yan QL, Dantas G, Liu YH \newline \textit{Nature Communications} | 10.1038/s41467-020-15222-y } \twentyitem{2019}{Genomic characterization of antibiotic resistant \textit{Escherichia coli} isolated from domestic chickens in Pakistan}{}{Rafique M*, Potter RF*, Ferreiro A, Wallace MA, Rahim A, Malik AA, Siddique N, Abbas MA, D'Souza AW, Burnham CD, Ali N, Dantas G \newline \textit{Frontiers in Microbiology} | 10.3389/fmicb.2019.03052} \twentyitem{2019}{Cotrimoxazole prophylaxis increases resistance gene prevalence and $\alpha$-diversity but decreases $\beta$-diversity in the gut microbiome of HIV-exposed, uninfected infants.}{}{D'Souza AW*, Moodley-Govender E*, Berla B, Kelkar T, Wang B, Sun X, Daniels B, Coutsoudis A, Trehan I, Dantas G \newline \textit{Clinical Infectious Diseases} | 10.1093/cid/ciz1186} \twentyitem{2019}{Breakpoint beware: reliance on historical breakpoints for Enterobacteriaceae leads to discrepancies in interpretation of susceptibility testing for carbapenems and cephalosporins and gaps in detection of carbapenem-resistant organisms.}{}{Yarbrough ML, Wallace MA, Potter RF, D'Souza AW, Dantas G, Burnham CD \newline \textit{European Journal of Clinical Microbiology \& Infectious Diseases} | 10.1007/s10096-019-03711-y} \twentyitem{2019}{Spatiotemporal dynamics of multidrug resistant bacteria on intensive care unit surfaces.}{}{D'Souza AW*, Potter RF*, Wallace M, Shupe A, Patel S, Sun X, Gul D, Kwon JH, Andleeb S, Burnham CD, Dantas G \newline \textit{Nature Communications} | 10.1038/s41467-019-12563-1} \twentyitem{2019}{Phenotypic and genotypic characterization of linezolid resistant \textit{Enterococcus faecium} from the USA and Pakistan.}{}{Wardenburg K*, Potter R*, D'Souza AW, Hussain T, Wallace M, Andleeb A, Burnham CA, Dantas G \newline \textit{Journal of Antimicrobial Chemotherapy} | 10.1093/jac/dkz367} \twentyitem{2019}{Sequencing-based methods and resources to study antimicrobial resistance.}{}{Boolchandani M*, D'Souza AW*, Dantas G \newline \textit{Nature Reviews Genetics} | 10.1038/s41576-019-0108-4} \twentyitem{2018}{Infant diet and maternal gestational weight gain predict early metabolic maturation of gut microbiomes.}{}{Baumann-Dudenhoeffer AM, D'Souza AW, Tarr PI, Warner BB, Dantas G \newline \textit{Nature Medicine} | 10.1038/s41591-018-0216-2} \twentyitem{2018}{\textit{Superficieibacter electus} gen. nov., sp. 
nov., an Extended-Spectrum $\beta$-Lactamase Possessing Member of the Enterobacteriaceae Family, Isolated From Intensive Care Unit Surfaces.}{}{Potter RF*, D'Souza AW*, Wallace MA, Shupe A, Patel S, Gul D, Kwon JH, Beatty W, Andleeb S, Burnham CD, Dantas G \newline \textit{Frontiers in Microbiology} | 10.3389/fmicb.2018.01629}
\twentyitem{2017}{Draft Genome Sequence of the \textit{bla}OXA-436- and \textit{bla}NDM-1-Harboring \textit{Shewanella putrefaciens} SA70 Isolate.}{}{Potter RF*, D'Souza AW*, Wallace MA, Shupe A, Patel S, Gul D, Kwon JH, Andleeb S, Burnham CD, Dantas G \newline \textit{Genome Announcements} | 10.1128/genomeA.00644-17}
\twentyitem{2016}{The rapid spread of carbapenem-resistant Enterobacteriaceae.}{}{Potter RF*, D'Souza AW*, Dantas G \newline \textit{Drug Resistance Updates} | 10.1016/j.drup.2016.09.002}
\twentyitem{2014}{Malignant cancer and invasive placentation: A case for positive pleiotropy between endometrial and malignancy phenotypes.}{}{D'Souza AW and Wagner GP \newline \textit{Evolution, Medicine, and Public Health} | 10.1093/emph/eou022}
\end{twenty}
%\end{twenty}
\newpage
%\begin{twenty}

%----------------------------------------------------------------------------------------
%	Published Abstracts
%----------------------------------------------------------------------------------------

\section{published abstracts}
\begin{twenty}
\twentyitem{2021}{Microbiome and immune disruption accompany mouse death in a gnotobiotic mouse model of neonatal sepsis}{}{Schwartz D, Wardenburg K, Shalon N, Ning J, Crofts T, D'Souza A, Robinson J, Henderson J, Warner B, Tarr P, Dantas G \newline \textit{Journal of the Pediatric Infectious Diseases Society} | 10.1093/jpids/piab031.010}
\twentyitem{2020}{The Gut Microbiome and Resistome of Healthy Volunteers are Restructured After Short Courses of Antibiotics}{}{Anthony W, Sukhum K, Cass C, Reske K, Seiler S, Hink T, Coon C, D'Souza A, Wang B, Sun S, Dubberke E, Burnham C-A, Dantas G, Kwon JH \newline \textit{Infection Control \& Hospital Epidemiology} | 10.1017/ice.2020.476}
\twentyitem{2017}{Longitudinal Analysis of ICU Surface Multidrug-resistant Organism Contamination in the US and Pakistan}{}{Kwon JH, D'Souza A, Potter R, Wallace M, Shupe A, Patel S, Gul D, Andleeb S, Burnham C-A, Dantas G \newline \textit{Open Forum Infectious Diseases} | 10.1093/ofid/ofx163.245}
\twentyitem{2015}{Autologous Fecal Microbiota Transplantation as a Strategy to Prevent Colonization With Multidrug-Resistant Organisms Following Antimicrobial Therapy}{}{Bulow C, D'Souza A, Hink T, Wallace M, Reske K, Sun X, Kwon JH, Burnham C-A, Dantas G, Dubberke E \newline \textit{Open Forum Infectious Diseases} | 10.1093/ofid/ofv133.479}
\end{twenty}

%----------------------------------------------------------------------------------------
%	INVITED TALKS
%----------------------------------------------------------------------------------------

\section{invited talks}
\begin{twenty} % Environment for a list with descriptions
\twentyitem{2021}{6 mo M presenting with 5 days of fever, emesis, and irritability}{}{BJC/SLCH Infectious Diseases Grand Rounds}
\twentyitem{2021}{An exploration of Emergency Department turnaround time data}{}{WUSTL Clinical Informatics Seminar}
\twentyitem{2021}{Total thyroidectomy in a 15 yo F with familial adenomatous polyposis}{}{WUSTL GI Seminar}
\twentyitem{2021}{Angiomyolipoma in a 9 yo F with tuberous sclerosis}{}{WUSTL Pathology Seminar}
\twentyitem{2020}{16S rRNA gene sequencing in bacterial endocarditis}{}{WUSTL Lab Medicine
Seminar} \twentyitem{2019}{Effects of antibiotic prophylaxis on the HIV-exposed uninfected infant gut microbiome}{}{WUSTL MSTP Work in Progress Seminar} \twentyitem{2018}{Bacterial contamination on hospital ICU surfaces in the United States and Pakistan}{}{WUSTL Microbiology Department Seminar} \twentyitem{2018}{Bacterial contamination on hospital ICU surfaces in the United States and Pakistan}{}{WUSTL Center for Genome Sciences} \twentyitem{2018}{Bacterial contamination on hospital ICU surfaces in the United States and Pakistan}{}{American Society of Microbiology Micro} \twentyitem{2017}{Bacterial contamination on hospital ICU surfaces in the United States and Pakistan}{}{CSB-MMMP-PMB Interdepartmental Retreat} \twentyitem{2017}{Multidrug resistant organisms on hospital surfaces in the United States and Pakistan}{}{Computational and Systems Biology Retreat} \twentyitem{2017}{Longitudinal analysis of ICU surface microbial contamination in the United States and Pakistan}{}{American Society of Microbiology Micro} \twentyitem{2017}{Effects of trimethoprim-sulfamethoxazole on the microbiomes of HIV- infants born to HIV+ mothers}{}{WUSTL Center for Genome Sciences} \twentyitem{2017}{Antibiotic Resistance: a growing public health crisis}{}{WUSTL Institute of Public Health} \twentyitem{2013}{Drop Team and Rayleigh-Taylor instability}{}{Canyon High School AP biology class} \twentyitem{2012}{Mannose Receptor Signaling}{}{SUCCESS} \end{twenty} % Environment for a list with descriptions %\end{twenty} \newpage %\begin{twenty} %---------------------------------------------------------------------------------------- % PATENTS %---------------------------------------------------------------------------------------- \section{patents} \begin{twenty} % Environment for a list with descriptions \twentyitem{2018}{Store-Easy Automotive wheelchair storage device.}{}{Patent US9937088B2 \newline Guertler C, D'Souza AW, Milgrom R, Hong T, Xiao Y, Nadell S} \end{twenty} %---------------------------------------------------------------------------------------- % POSTERS %---------------------------------------------------------------------------------------- \section{posters} \begin{twenty} % Environment for a list with descriptions \twentyitem{2019}{Spatiotemporal dynamics of multidrug resistant bacteria on intensive care unit surfaces.}{}{IMM/MICRO Mini WIP} \twentyitem{2019}{Cotrimoxazole prophylaxis increases resistance gene prevalence and $\alpha$-diversity but decreases $\beta$-diversity in the gut microbiome of HIV-exposed, uninfected infants.}{}{American Society of Microbiology (ASM) Microbe} \twentyitem{2019}{Longitudinal analysis of ICU surface microbial contamination in the US and Pakistan}{}{NIAID/IDSA Infectious Diseases Research Careers Meeting} \twentyitem{2018}{Longitudinal Analysis of ICU Surface Microbial Contamination in the US and Pakistan}{}{American Physician Scientist Association (APSA) Midwest meeting} \twentyitem{2018}{Longitudinal Analysis of ICU Surface Microbial Contamination in the US and Pakistan}{}{Lake Arrowhead Microbial Genomics Conference} \twentyitem{2018}{Longitudinal Analysis of ICU Surface Microbial Contamination in the US and Pakistan}{}{American Society of Microbiology (ASM) Microbe} \twentyitem{2017}{Longitudinal Analysis of ICU Surface Microbial Contamination in the US and Pakistan}{}{American Society of Microbiology (ASM) Microbe} \twentyitem{2017}{Longitudinal Analysis of ICU Surface Microbial Contamination in the US and Pakistan}{}{5th annual Global Health \& Infectious 
Disease Conference} \end{twenty} \begin{twenty} \twentyitem{2014}{Invasive populations of \textit{Aedes aegypti} in Central California}{}{Yale Ecology and Evolutionary Biology Poster Session} \twentyitem{2013}{The genetics of foraging behavior in \textit{Aedes aegypti}}{}{Cell Symposia: Genes, Circuits, and Behavior} \twentyitem{2012}{Mannose receptor signaling linking spleen tyrosine kinase and MAP kinase p38 in human macrophages}{}{The Ohio State University SUCCESS program} \twentyitem{2009}{Relationship of spinning time and speed to photoresist thickness on semiconductor wafers}{}{Army Research Labs: Science and Engineering Apprentice Program} \end{twenty} %---------------------------------------------------------------------------------------- % AWARDS %---------------------------------------------------------------------------------------- \section{awards} \begin{twentyshort} % Environment for a short list with no descriptions \twentyitemshort{2022}{F. Sessions Cole Award in Pediatrics} \twentyitemshort{2022}{Alexander Berg Prize} \twentyitemshort{2022}{Washington University Internal Medicine Club Research Award} \twentyitemshort{2021}{Gold Humanism Honor Society Inductee} \twentyitemshort{2020}{Spencer T. and Ann W. Olin Fellow} \twentyitemshort{2017-18}{Burroughs Wellcome Fund, Institutional Program Unifying Population and Laboratory-Based Sciences} \twentyitemshort{2016}{BioEnterpreneurship Core Bench to Business Competition} \twentyitemshort{2015}{IDEA Labs Demo Day Finalist} \twentyitemshort{2013}{Alan S. Tetelman Fellowship for International Research in the Sciences} \twentyitemshort{2013}{Paul K. and Evelyn Elizabeth Cook Richter Fellowship} \twentyitemshort{2013}{Connecticut Space Grant College Consortium Student Project Grant} \twentyitemshort{2012}{CBC Spouses Cheerios Brand Health Initiative Scholarship} \twentyitemshort{2011}{CBC Louis Stokes Health Scholarship} \twentyitemshort{2010}{CBC Louis Stokes Health Scholarship} \end{twentyshort} %---------------------------------------------------------------------------------------- % PROFESSIONAL MEMBERSHIPS %---------------------------------------------------------------------------------------- \section{professional memberships} \begin{twentyshort} % Environment for a short list with no descriptions \twentyitemshort{2021-pres.}{Gold Humanism Honor Society} \twentyitemshort{2019-pres.}{Infectious Diseases Society of America} \twentyitemshort{2016-pres.}{American Society for Microbiology} \end{twentyshort} \newpage %---------------------------------------------------------------------------------------- % TEACHING %---------------------------------------------------------------------------------------- \section{teaching} \begin{twentyshort} % Environment for a short list with no descriptions \twentyitemshort{2017-20}{Computational Teaching Assistant | Fundamentals of Biostatistics | Washington University in St. Louis} \twentyitemshort{2017-20}{Graduate Mentor | Visiting scholars, rotation students, and undergraduates | Dantas Lab} \twentyitemshort{2017}{Teaching Assistant | Microbes and Pathogenesis | Washington University in St. Louis} \twentyitemshort{2017}{Graduate Mentor | IPH Summer Program | Washington University in St. Louis} \twentyitemshort{2016}{Graduate Mentor | Eureka Program | Washington University in St. 
Louis} \twentyitemshort{2013-14}{Lecturer | Origins of Life | Yale Splash} \twentyitemshort{2011-14}{Lecturer | The Science of Ice Cream | Yale Splash} %\twentyitemshort{<dates>}{<title/description>} \end{twentyshort} %---------------------------------------------------------------------------------------- % ACTIVITIES AND LEADERSHIP %---------------------------------------------------------------------------------------- \section{activities and leadership} \begin{twenty} % Environment for a list with descriptions \twentyitem{2020}{Volunteer}{Maker Task Force, Washington University in St. Louis/Barnes Jewish Hospital}{Helped produce face shields for hospital} \twentyitem{2019-pres.}{Volunteer}{Saturday Neighborhood Health Clinic}{Seeing patients as a medical student volunteer for the free clinic} \twentyitem{2014-pres.}{Yale Admissions Interviewer}{Yale Alumni Schools Committee}{Interviewer for Yale College admissions of prospective undergraduate students} \twentyitem{2015-17}{Physicians for Human Rights}{Washington University in St. Louis}{Co-President of club focusing on human rights and political issues that relate to medicine, including healthcare for prisoner, refugees, and underserved communities} \twentyitem{2014-17}{Store-Easy}{Washington University in St. Louis - Sling Health}{Company seeking to improve manual wheelchair storage in automobiles} \twentyitem{2012-14}{NASA Microgravity University (Drop Team)}{Yale University}{Undergraduate research team selected by NASA to conduct fluid mechanics research in zero-gravity} \twentyitem{2012-14}{Splash at Yale}{Yale University}{Undergraduate group teaching short courses to middle and high school students} \twentyitem{2011-14}{Big Sib}{Yale University - Pierson College}{Mentor to freshman students at Yale University} \end{twenty} %\begin{twenty} % Environment for a list with descriptions % \twentyitem{2017-pres.}{Computational Teaching Assistant | Fundamentals of Biostatistics}{Washington University School of Medicine}{} % \twentyitem{2017}{Graduate Mentor}{Dantas Lab}{Mentor visiting scholars, rotation students, and undergraduates in the lab setting.} % \twentyitem{2017}{Teaching Assistant}{Washington University School of Medicine}{Teach weekly 2 hour review sessions and write quizzes and tests for first year Microbes and Pathogenesis course.} % \twentyitem{2016}{Graduate Mentor}{Eureka Program}{Expose undergraduate students to scientific research in a laboratory setting and generate interest in scientific careers.} % \twentyitem{2013-14}{Origins of Life}{Yale Splash}{Theories on the origins of life including RNA world and Simple Beginnings with descriptions of biochemistry and the Miller-Urey experiment} % \twentyitem{2011-14}{The Science of Ice Cream}{Yale Splash}{How ice cream is made, details on colloids, and demonstration of making ice cream with liquid nitrogen} % %\twentyitem{<dates>}{<title/description>} %\end{twenty} \end{document}
\chapter{Introduction}
\label{sec:Introduction}

Optimizing predictive models on datasets obtained from citizen-science projects can be computationally expensive as these datasets grow in size. Consequently, running models based on Multi-layered Neural Networks, Integer Programming, and other optimization routines becomes more computationally difficult as the number of parameters increases, even with the fastest Central Processing Units (CPUs) on the market. As a result, it becomes difficult for citizen-science projects, which often deal with large datasets, to scale if the organizers do not employ special processing units to run neural networks and other optimization models, which require extensive tensor operations. One such special processing unit is the Graphics Processing Unit (GPU), which offers numerous cores to parallelize computation. GPUs can often outperform CPUs in computing such predictive models if these models \textit{heavily} rely on large-scale tensor operations \cite{ParallelNVIDIA, cuDNNPaper}. By using GPUs over CPUs to accelerate computation on a citizen-science project, the model could achieve better optimization in less time, enabling the project to scale.

\section{Avicaching}
\label{sec:Avicaching}

Part of the eBird project, which aims to ``maximize the utility and accessibility of the vast numbers of bird observations made each year by recreational and professional bird watchers'' \cite{EBird}, Avicaching is an incentive-driven game that tries to homogenize the spatial distribution of citizens' (agents') observations \cite{Xue2016Avi1, Xue2016Avi2}. Since the dataset of agents' observations in eBird is geographically heterogeneous (concentrated in some places like cities and sparse in others), Avicaching homogenizes the observation set by placing rewards at, and thereby attracting agents to, under-sampled locations \cite{Xue2016Avi1}. For the agents, collecting rewards increases their `utility' (excitement, fun etc.), while for the organizers, a more homogeneous observation dataset means better sampling and higher confidence in using it for other models.

\begin{figure}[!htbp]
	\centering
	\includegraphics[width=\textwidth]{avicaching_change}
	\caption[Avicaching 2014-15 Results in Tompkins and Cortland Counties]{Avicaching 2014-15 Results in Tompkins and Cortland Counties, NY: With the previous model \cite{Xue2016Avi1, Xue2016Avi2, EBird}, Avicaching was able to attract `eBirders' to under-sampled locations by distributing rewards over the location set.}
	\label{fig:Avicaching 2014-15 Results in Tompkins and Cortland Counties}
\end{figure}

To accomplish this task of specifying rewards at different locations based on the historical records of observations, Avicaching would learn how agents change their behavior when a certain sample of rewards was applied to the set of locations, and then redistribute rewards across the locations based on those learned parameters \cite{Xue2016Avi2}. This requirement naturally translates into an optimization problem, which we implement using multi-layered neural networks and linear programming.

\section{Important Questions}
\label{sec:Important Questions}

Although the previously devised solutions to Avicaching were conceptually effective \cite{Xue2016Avi1, Xue2016Avi2}, using CPUs to solve Mixed Integer Programming and (even) shallow neural networks made the models impractical to scale. Solving the problems faster would also have allowed organizers to find better (more optimized) results. These concerns form the pivots of our research study.
\subsection{Solving Faster}
\label{sec:Important Questions - Solving Faster}

We were interested in using GPUs to run our optimization models because of their capability to accelerate problems based on large tensor operations \cite{ParallelNVIDIA, cuDNNPaper}. Newer generation NVIDIA GPUs, equipped with thousands of CUDA (NVIDIA's parallel computing API) cores \cite{NVIDIA}, could have empowered Avicaching's organizers to scale the game, if the underlying models were computed using simple arithmetic operations on tensors rather than conditional logic (see~\Cref{sec:Avoiding Specific Operations in GPUs}). Since even fast CPUs --- in the range of Intel Core i7 chipsets --- process largely sequentially and do not provide parallel processing comparable\footnote{CPUs often have multiple cores nowadays, but very few compared to what many GPUs have.} to that of GPUs, we sought to solve the problem much faster using GPUs. But \textbf{how fast could we do it?}

\subsection{Better Results}
\label{sec:Important Questions - Better Results}

For learning the parameters of agents' change of behavior under a fixed set of rewards, the previously devised sub-model delivered predictions that differed by $26\%$ from the Ground Truth \cite[Table~1]{Xue2016Avi2}. This model was then used to redistribute rewards within a budget. If we could get closer to the Ground Truth, i.e., better learn the parameters for the change, we could redistribute rewards with superior prediction accuracy. Since the organizers require the \textit{best} distribution of rewards, we will need a set of learned parameters that is closer to the Ground Truth (in terms of Normalized Mean Squared Error \cite[Section~4.2]{Xue2016Avi2}). In short, we aimed to \textbf{learn the parameters more suitably}, and \textbf{find the best allocation of rewards.}

\subsection{Adjusting the Model's Features}
\label{sec:Important Questions - Adjusting the Model's Features}

Once the model starts delivering better results than the previously devised models, one naturally asks whether some characteristics\footnote{Tinkering with hyper-parameters like the learning rate or adding a weight regularizer.} of the model can be changed to get still better results (one could also build a better model). While the goal of ``getting better results'' is an unending struggle, there is a trade-off with practicality, as these adjustments take time and computation power to test --- and we did not have unlimited resources. Therefore, we asked whether one could \textbf{reasonably adjust the model's features to improve performance and optimization.}

\section{Computation Using GPUs}
\label{sec:Computation Using GPUs}

The use of GPUs has changed drastically in the last decade --- from rendering superior graphics to parallelizing floating point operations. Companies like NVIDIA now provide General Purpose GPUs (GPGPUs) that are capable of executing parallel algorithms using thousands of cores and threads \cite{NVIDIA}. Furthermore, by working with newly-developed parallel programming APIs like CUDA, one can handle a GPU's threads more efficiently and optimize a task's datapath \cite{CUDADocs}. In the next sections, we briefly describe the NVIDIA GPU architecture, its instruction set, and best practices for optimization. Although we do not implement the CUDA back-end manually and instead use PyTorch's implementation \cite{PTDocs}, understanding the basics of the processor was helpful for designing our models.
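To make the ``how fast'' question concrete, the short sketch below times the same dense matrix product on a CPU and (when available) a GPU using PyTorch. It is an illustrative fragment written for this discussion rather than part of the Avicaching code; the tensor size and repetition count are arbitrary placeholders.

\begin{verbatim}
import time
import torch

def time_matmul(device, n=4096, reps=10):
    # Time `reps` dense n-by-n matrix products on the given device.
    x = torch.randn(n, n, device=device)
    y = torch.randn(n, n, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()      # finish setup before timing
    start = time.time()
    for _ in range(reps):
        z = x @ y
    if device.type == "cuda":
        torch.cuda.synchronize()      # wait for asynchronous GPU kernels
    return time.time() - start

cpu_time = time_matmul(torch.device("cpu"))
if torch.cuda.is_available():
    gpu_time = time_matmul(torch.device("cuda"))
    print(cpu_time, gpu_time, cpu_time / gpu_time)
\end{verbatim}

The only difference between the two runs is the \texttt{device} argument, which is what makes CPU/GPU comparisons of this kind straightforward to set up in PyTorch.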
\subsection{GPU Architecture}
\label{sec:GPU Architecture}

We describe the structure and abstractions of an NVIDIA GPU with the Pascal architecture\footnote{We use an NVIDIA Quadro P4000 (Pascal architecture) for our tests.}. These devices are multi-threaded multi-processors for executing ``fine-grained threads efficiently'' \cite[Appendix~B.4]{PattersonARM}. Unlike Personal Computer CPUs, which currently comprise 1-8 cores with fast, low-latency datapaths, GPUs provide scalable parallel processing power with high throughput but also high latency\footnote{High latency is undesirable.} \cite{PattersonARM, DemystifyingGPU}. Informally, GPUs are often referred to as a collection of many `dumb' cores, unlike the few `smart' cores in a CPU. Nonetheless, GPUs are efficient at executing simple instructions in parallel, using hierarchies of cores, threads, and memory.

\subsubsection{Core Organization}

In most NVIDIA architectures, many Scalar Processors (SPs or CUDA cores) are clustered into a Streaming Multiprocessor (SM); SMs are in turn organized into Graphics Processing Clusters (GPCs). A GPU can have several GPCs, as shown in \Cref{fig:Organization of Cores in NVIDIA Pascal GP100 GPU} \cite{CUDADocs, ParallelNVIDIA, DemystifyingGPU}.

\begin{figure}
	\centering
	\begin{subfigure}{\textwidth}
		\centering
		\includegraphics[width=\textwidth]{gpu_pascal}
		\caption[Organization of Cores in NVIDIA Pascal GP100 GPU]{Organization of Cores in NVIDIA Pascal GP100 GPU: In 6 GPCs and 60 SMs, 3840 SP or CUDA Cores (depicted by green blocks) are arranged \cite{PascalWhitepaper, ParallelNVIDIA}.}
		\label{fig:Organization of Cores in NVIDIA Pascal GP100 GPU}
	\end{subfigure}\vspace*{1em}
	\begin{subfigure}{\textwidth}
		\centering
		\includegraphics[width=.7\textwidth]{gpu_pascal_sm}
		\caption[Constituents of a Streaming Multiprocessor]{Constituents of a Streaming Multiprocessor \cite{PascalWhitepaper, ParallelNVIDIA}.}
		\label{fig:Constituents of a Streaming Multiprocessor}
	\end{subfigure}
	\caption[The NVIDIA Pascal Architecture]{The NVIDIA Pascal Architecture: A GPU, like a typical CPU, contains multiple hierarchies of memory. In addition, NVIDIA also builds a hierarchy of core organization.}
	\label{fig:The NVIDIA Pascal Architecture}
\end{figure}

The Streaming Multiprocessor (\Cref{fig:Constituents of a Streaming Multiprocessor}), which is the primary multi-threaded multi-processor, houses a shared memory unit for all of its SPs along with registers and Special Function Units (SFUs), which can calculate specific mathematical functions in fewer clock cycles than the regular SP cores. The SM is responsible for relaying instructions to threads (in SPs) and maintaining synchronized parallel computation at barriers \cite{PascalWhitepaper, DemystifyingGPU}. GPCs and the other levels of the hierarchy provide abstractions for memory and program access.

\subsubsection{Memory Organization}

Even the memory units are organized into hierarchies, like the multi-level caches in CPUs (\Cref{fig:The NVIDIA Pascal Architecture}). In an SM, there are dedicated local memory units for threads, register files shared by several SPs, and shared memory units for all SP cores in the SM \cite{PascalWhitepaper, ParallelNVIDIA}. The global internal memory of the GPU (like the RAM of a computer), which is housed separately around the clusters of SMs, stores the datasets for a program. Data is then distributed into the shared and local memory units once threads start executing an instruction \cite[Appendix~B]{PattersonARM}.
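The counts that appear in the figures above can also be read off programmatically. The fragment below is an illustrative query we include for orientation only (it is not part of our models); it asks PyTorch for the SM count and global memory of the first CUDA device.

\begin{verbatim}
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)   # first CUDA device
    print(props.name)                    # e.g. "Quadro P4000"
    print(props.multi_processor_count)   # number of SMs on this card
    print(props.total_memory)            # global (device) memory, in bytes
\end{verbatim}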
Moreover, similar to the multi-level cache structure in CPUs, GPUs also have an on-chip cache hierarchy (\Cref{fig:The NVIDIA Pascal Architecture}), which enables threads to quickly access working data, thus improving performance. On a side note, a GPU's caches are often not big enough to hold full datasets, as programs often require large datasets, leading to higher miss rates than those of caches in CPUs \cite[Appendix~B]{PattersonARM}.

\subsubsection{Thread Organization}

According to NVIDIA, the parallel structure is obtained by organizing threads into a hierarchy (``grids'' of ``blocks'' of ``warps'' of ``threads''), where the threads of a warp (an abstraction) execute in lockstep each clock cycle \cite{CUDADocs, DemystifyingGPU} \cite[Appendix~B]{PattersonARM}. In other words, warps are the basic units of execution in a GPU: they receive single instructions from the control unit and forward them to their constituent threads for execution \cite{CUDADocs, DemystifyingGPU}. This abstraction allows the GPU to execute independent sub-tasks in parallel on a large scale (\Cref{fig:GPU - Thread Organization in a GPU}), reducing the total runtime of the program.

\begin{figure}[!htbp]
	\centering
	\includegraphics[width=.55\textwidth]{gpu_threads_blocks}
	\caption[Thread Organization in a GPU]{Thread Organization in a GPU: Warps are only an abstraction for identifying threads executing a single instruction; however, threads are physically organized into blocks and grids \cite{CUDADocs,ParallelNVIDIA}.}
	\label{fig:GPU - Thread Organization in a GPU}
\end{figure}

NVIDIA calls this setup ``Single-Instruction Multiple-Thread (SIMT)'' \cite{CUDADocs} \cite[Appendix~B.4]{PattersonARM}, where a single instruction is executed in lockstep by multiple threads in a warp (though the threads are allowed to branch and diverge). The differences between SIMT and Single-Instruction Multiple-Data (SIMD), a common feature in CPUs, are subtle. While SIMD packs data sequences into vectors to be executed by a single core/thread of the CPU (data-level parallelism), SIMT requires a particular thread to operate only on a scalar element of the dataset \cite{PattersonARM}. In this way, having thousands of threads running in lockstep provides better throughput than doing vector operations per clock cycle. SIMT also enables programmers to write a program meant for a single, independent thread instead of managing vectors in their code, which promotes ease of use and programming \cite[Chapter~4]{CUDADocs}.

\subsubsection{Instruction Set}

The instruction set of a GPU is limited, which causes complex instructions to take several clock cycles. Even though a Special Function Unit (SFU) computes complex functions (sine, square root, the reciprocal function etc.) in fewer clock cycles, there are far fewer SFUs than ordinary cores \cite[Appendix~B]{PattersonARM}. The instruction set for the Pascal architecture contains basic operations to load and store memory, and to perform basic floating point, integer (add, subtract, multiply, divide, min, max, left-shift etc.), bitwise logical (or, and, xor etc.), and branch-jump operations \cite{CUDABinUtils, DemystifyingGPU}. This is a very basic set of instructions unlike those in Intel CPUs, for example, which have coalesced and dedicated datapaths for complex instructions\footnote{Intel x86 CPUs have a Complex Instruction Set Computer (CISC) design.} \cite{PattersonARM}.
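To make the grid/block/thread hierarchy concrete, the following sketch writes out the per-thread view directly. It uses Numba's CUDA bindings purely for illustration (we do not use Numba in this work, and the array size and block size are placeholders); the point is that each thread computes exactly one scalar element, as in the SIMT model described above.

\begin{verbatim}
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, alpha):
    i = cuda.grid(1)     # global index = block index * block size + thread index
    if i < x.size:       # guard threads that fall past the end of the array
        out[i] = alpha * x[i]

x = np.arange(1_000_000, dtype=np.float32)
out = np.empty_like(x)
threads_per_block = 256
blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks_per_grid, threads_per_block](out, x, 2.0)
\end{verbatim}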
\subsection{Avoiding Specific Types of Operations in GPUs}
\label{sec:Avoiding Specific Operations in GPUs}

Contrary to popular perception, GPUs are not better than CPUs at many operations. With parallel processing come concerns about synchronization delays, blocking instructions, program correctness etc. Moreover, since GPUs are separate processing units, usually connected to CPUs by PCIe lanes, back-and-forth memory transfers can cause slowdowns in overall performance. Even the GPU's low-level caches are not big enough to deliver hit rates comparable to those of CPUs. There is often a tradeoff when programming with GPUs, and one should take care to avoid such delays.

\subsubsection{Branch and Diverge}

Since NVIDIA GPUs deal with program correctness through intra-warp synchronization (keeping warp threads in lockstep), threads must remain in sync even when conditional blocks force different datapaths onto different threads. This ultimately reduces performance as some threads become inactive, which can accumulate and slow down the whole program. To avoid such delays and maximize throughput, one should avoid extensive branching in the program.

Using conditional checks can change a thread's datapath, which can stall other threads in the warp from proceeding to the next instruction in the program. This behavior is referred to as `branch and diverge' (\Cref{fig:Decrease in GPU Throughput due to Branch and Diverge}), and it occurs when some threads in a warp follow a different datapath due to conditional branching, `diverging' from the group. When the warp's threads are then forced to converge, the diverging threads take longer to complete their instructions, causing the non-diverging threads to wait. This radically decreases the throughput of the warp, which in turn lags behind the rest of the block, compounding the latency \cite[Appendix~B]{PattersonARM}\cite{DemystifyingGPU}.

\begin{figure}[!htbp]
	\centering
	\includegraphics[width=\textwidth]{gpu_branch_diverge}
	\caption[Decrease in GPU Throughput due to Branch and Diverge]{Decrease in GPU Throughput due to Branch and Diverge: In this case, while some threads execute \texttt{A, B}, others execute \texttt{X, Y}. If those sets of operations are all executed by threads of the same warp, the threads could diverge from the group and decrease the throughput \cite{ParallelNVIDIA}.}
	\label{fig:Decrease in GPU Throughput due to Branch and Diverge}
\end{figure}

To ensure program correctness, the diverging threads in a warp are stalled while the other, non-diverging threads execute, and vice-versa, according to NVIDIA \cite{CUDADocs,ParallelNVIDIA}. For example, in \Cref{fig:Decrease in GPU Throughput due to Branch and Diverge}, the 4 threads with thread-indexes $=\{0,1,2,3\}$ will have to wait while the other threads in the warp execute instructions \texttt{X} and \texttt{Y}, and the other threads will be stalled while the 4 threads execute instructions \texttt{A} and \texttt{B}. This leads to a $50\%$ decrease in GPU performance as instructions get `serialized' \cite{ParallelNVIDIA}, though it ensures program correctness. The overhead can slow the run down drastically if the program is extensively branched\footnote{On a side note, functions like \texttt{min, max, abs} would not add overhead due to branch and diverge since they have dedicated datapaths and operation codes in the GPU's instruction set.}.
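As a concrete illustration of how such branching can be avoided at the program level, the fragment below (our own illustrative example, not taken from the Avicaching implementation) replaces a per-element conditional with a single masked tensor expression; the selection in \texttt{torch.where} is performed elementwise, so every thread executes the same instruction stream.

\begin{verbatim}
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1_000_000, device=device)

# Branchy formulation: a Python-level loop with an `if` per element.
def piecewise_loop(x):
    out = torch.empty_like(x)
    for i in range(x.numel()):
        out[i] = x[i] * 2.0 if x[i] > 0 else x[i] * 0.5
    return out

# Branch-free formulation: the same piecewise function as one masked expression.
def piecewise_vectorized(x):
    return torch.where(x > 0, x * 2.0, x * 0.5)
\end{verbatim}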
This divergence overhead can be decreased either by reducing the number of conditional statements (as in the masked formulation above) or by intentionally directing conditional statements to threads in different warps of the block, which execute independently \cite{ParallelNVIDIA}. The latter, at worst, only affects whole-program synchronization, if the operations in the conditional blocks have unequal runtimes. Even then, performance is usually much better than suffering the slowdown caused by diverging intra-warp threads.

\subsubsection{Memory Limitations}

The PCIe lanes between the GPU's internal memory and the system's main memory are not as fast as on-chip cache access \cite{CUDADocs, ParallelNVIDIA}. This means that the cost of transferring datasets back and forth between the GPU and the CPU can add up considerably. Often the limits of the GPU's internal memory, or operations that must run on a particular device, force the user to transfer datasets multiple times. Even so, one should look for optimizations that reduce the number of transfers whenever possible, especially when large datasets are involved. This suggestion also applies to moving datasets around within the GPU (between caches): miss rates in a GPU's local caches are often high due to their limited size, and data requests to higher-level caches can be expensive.

\subsection{GPUs' Strengths}
\label{sec:Gpu's Strengths}

Judging from GPUs' weaknesses and architecture, we can say that GPUs are good at handling independent sub-tasks with few diverging sections. These independent sub-tasks can usually be grouped into matrices and tensors, which the threads operate on in parallel. The less conditional branching we have in our algorithms, the better. Therefore, GPUs are particularly good at doing linear algebra \cite{PattersonARM, ParallelNVIDIA}, since most arithmetic operations on matrix elements can be executed independently without branching. Avoiding branching often requires extensive dataset preprocessing; consequently, one should meticulously design the structure of the models' datasets. Tensor-based datasets are common in machine learning problems, graphics rendering routines, graph-based algorithms etc., which GPUs can therefore potentially accelerate \cite{PattersonARM}.
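The two suggestions above (keeping datasets on the device, and phrasing the work as large tensor operations) are easy to violate by accident. The fragment below, again an illustrative sketch written for this discussion rather than part of our implementation, contrasts a loop that shuttles its intermediate result over PCIe on every iteration with one that copies back only once at the end.

\begin{verbatim}
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
w = torch.randn(2048, 2048, device=device)
x = torch.randn(2048, 2048, device=device)

def iterate_with_transfers(w, x, steps=100):
    for _ in range(steps):
        x = (w @ x).cpu().to(device)   # PCIe round trip on every iteration
    return x

def iterate_on_device(w, x, steps=100):
    for _ in range(steps):
        x = w @ x                      # intermediates stay in GPU memory
    return x.cpu()                     # single copy back at the end
\end{verbatim}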
\section{CDF Instances}
\label{sec:instance}

Part~\ref{part:semantics} simplified the syntax of CDF instances in certain ways. In this section we describe those aspects of the actual CDF implementation that differ from the abstract presentation of Part~\ref{part:semantics}. The first major difference concerns CDF identifiers.

\index{CDF Identifier} \index{component tag}
\begin{definition}[CDF Identifiers]
\label{def:cdfids}
A {\em CDF identifier} has the functor symbol {\tt cid/2}, {\tt oid/2}, {\tt rid/2}, or {\tt crid/2}. The second argument of a CDF identifier is a Prolog atom and is called its {\em component tag}, while the first argument is either
\begin{enumerate}
\item a Prolog atom or
\item a term $f(I_1,\ldots,I_n)$ where $I_1,\ldots,I_n$ are CDF identifiers.
\end{enumerate}
In the first case, an identifier is called {\em atomic}; in the second it is called a {\em product identifier}.
\end{definition}

Thus in an implementation of, say, Example~\ref{ex:suture1}, all identifiers would have component tags. For instance, the identifier {\tt cid(absorbableSutures)} might actually have the form {\tt cid(absorbableSutures,unspsc)} and {\tt oid(sutureU245H)} would have the form {\tt oid(sutureU245H,sutureRus)}. These component tags have two main uses. First, they allow ontologies from separate sources to be combined, and thus function in a manner somewhat analogous to XML namespaces. Second, the component tags are critical to the CDF component system, described in \secref{sec:components}.

%----------------------------------------------------------------------------
\subsection{Extensional Facts and Intensional Rules}
\index{extensional facts} \index{intensional rules}
%
An actual CDF instance is built up of {\em extensional facts} and {\em intensional rules} defined for the CDF predicates {\tt isa/2}, {\tt allAttr/3}, {\tt hasAttr/3}, {\tt classHasAttr/3}, {\tt coversAttr/3}, {\tt minAttr/4} and {\tt maxAttr/4}. Extensional facts for these predicates add the suffix {\tt \_ext} to the predicate name, leading to {\tt isa\_ext/2}, {\tt allAttr\_ext/3} and so on. Intensional rules add the suffix {\tt \_int}, leading to {\tt isa\_int/2}, {\tt allAttr\_int/3} etc.

Extensional facts make use of XSB's sophisticated indexing of dynamic predicates. Since CDF extensional facts use functors such as {\tt cid/1} or {\tt oid/1} to type their arguments, traditional Prolog indexing, which makes use only of the predicate name and the outer functor of the first argument, is unsuitable for large CDF instances. CDF extensional facts use XSB's star-indexing~\cite{XSB-home}. For ground terms, star-indexing can index on up to the first five positions of a specified argument. In addition, various arguments and combinations of arguments can be used with star-indexing of dynamic predicates. The ability to index within a term is critical for the performance of CDF; also, since star-indexing bears similarities to XSB's trie-indexing~\cite{RRSSW98}, it is spatially efficient enough for large-scale use. Section~\ref{sec:config} provides information on default indexing in CDF and how it may be reconfigured.

Intensional rules may be defined as XSB rules, and may use any of XSB's language or library features, including tabling, database, and internet access. Intensional rules are called on demand, making them suitable for implementing functionality from lazy database access routines to definitions of primitive types.
\begin{example}
\rm
\label{ex:intrules}
In many ontology management systems, integers, floats, strings and so on are not stored explicitly as classes, but are maintained as a sort of {\em primitive type}. In CDF, primitive types are easily implemented via intensional rules like the following.
%
{\small
{\sf
\begin{tabbing}
foo\=foo\=foooooooooooooooooooooooooooooooooooooooo\=foo\=\kill
%
\> isa\_int(oid(Float),cid(allFloats)):- \\
\> \> (var(Float) -$>$ cdf\_mode\_error ; float(Float)). \\
\end{tabbing}
}
}
\end{example}
%
CDF provides intensional rules defining all Prolog primitive types as CDF primitive types in the component \component{cdfpt} (see below). Other, more specialized types can be defined by users by writing intensional rules along the same lines. {\sc fill in 'below'; tabling and intensional rules}

As mentioned above, the predicate {\tt immed\_hasAttr/3} (and {\tt immed\_allAttr/3}, etc.) is used to store basic CDF information that is used by the predicates implementing {\tt hasAttr/3} and other relations. {\tt immed\_hasAttr/3} itself is implemented as:
%
{\small
{\sf
\begin{tabbing}
foo\=foo\=foooooooooooooooooooooooooooooooooooooooo\=foo\=\kill
%
\> immed\_hasAttr(X,Y,Z):- hasAttr\_ext(X,Y,Z). \\
\> immed\_hasAttr(X,Y,Z):- hasAttr\_int(X,Y,Z). \\
\> immed\_hasAttr(X,Y,Z):- immed\_minAttr(X,Y,Z,\_).
\end{tabbing}
}
}
%
\noindent The above code fragment illustrates two points. First, {\tt immed\_hasAttr/3} is defined in terms of {\tt immed\_minAttr/4}, fulfilling the semantic requirements of Section \ref{sec:type0}. It also illustrates that {\tt immed\_hasAttr/3} is implemented in terms both of extensional facts ({\tt hasAttr\_ext/3}) and intensional rules ({\tt hasAttr\_int(X,Y,Z)}).
%------------------------------------------- \index{identifiers!cid('CDF Classes',cdf)} \index{identifiers!rid('CDF Object-Object Relations,cdf)} \index{identifiers!crid('CDF Class-Object Relations',cdf)} \index{identifiers!cid('CDF Primitive Types',cdf)} \index{identifiers!cid(allIntegers,cdf)} \index{identifiers!cid(allFloats,cdf)} \index{identifiers!cid(allAtoms,cdf)} \index{identifiers!cid(allStructures,cdf)} \index{identifiers!cid(atomicIntegers,cdf)} \begin{figure}[htb] {\small {\it \begin{center} \begin{bundle}{cid('CDF Classes',cdf)} \chunk{\begin{bundle}{cid('CDF Primitive Types',cdf)} \chunk{\begin{bundle}{cid(allIntegers,cdf)\ \ \ \ \ \ } \chunk{oid(Integer,cdfpt)} \end{bundle} } \chunk{\begin{bundle}{cid(allFloats,cdf)\ \ \ \ \ \ } \chunk{oid(Float,cdfpt)} \end{bundle} } \chunk{\begin{bundle}{cid(allAtoms,cdf)\ \ \ \ \ \ } \chunk{oid(Atom,cdfpt)} \end{bundle} } \chunk{\begin{bundle}{cid(allStructures,cdf)\ \ \ \ \ \ } \chunk{oid(Structure,cdfpt)} \end{bundle} } \chunk{\begin{bundle}{cid(atomicIntegers,cdf)} \chunk{oid(AInteger,cdfpt)} \end{bundle} } \end{bundle} } \end{bundle} \end{center} \ \\ \begin{center} rid('CDF Object-Object Relations',cdf) \end{center} \ \\ \begin{center} \begin{bundle}{crid('CDF Class-Object Relations',cdf)} \chunk{crid('Name',cdf)} \chunk{crid('Description',cdf)} \end{bundle} \end{center} } } \caption{Built-in Inheritance Structure of CDF} \label{fig:toplevel} \end{figure} %------------------------------------------- An immediate subclass of {\tt cid('CDF Classes',cdf)} is {\tt cid('CDF Primitive Types',cdfpt)}. This class allows users to maintain in CDF any legally syntactic Prolog element, and forms an exception to Definition~\ref{def:cdfids}. Specifically {\tt cid('CDF Primitive Types',cdf)} contains Prolog atoms, integers, floats, structures and what are termed ``atomic integers'' --- integers that are represented in atomic format, e.g. '01234'. Primitive types are divided into five subclasses, {\tt cid(allIntegers,cdf)}, {\tt cid(allFloats,cdf)}, {\tt cid(allAtoms,cdf)}, {\tt cid(allStructures,cdf)}, and {\tt cid(atomicInteger,cdf)}. Each of these in turn have various objects as their immediate subclasses~\footnote{Recall that objects in CDF are singleton classes.}, whose inheritance relation is defined by an intensional rule like the one presented in Example~\ref{ex:intrules}. Thus, if the number 3.2 needs to be added to an ontology, perhaps as the value of an attribute, it is represented as {\tt oid(3.2,cdfpt)}, and it will fit into the inheritance hierarchy as a subclass of {\tt cid(allFloats,cdf)}. The intensional rules are structured so that for any Prolog syntactic element {\tt X}, when {\tt X} is combined with the component \component{cdfpt}, then {\tt cid(X,cdfpt)} will be a subclass of {\tt cid('CDF Primitive Types',cdfpt)}, as will be {\tt oid(X,cdfpt)}. \subsection{Basic CDF Predicates} \begin{description} \ourpredmodrptitem{isa\_ext/2}{usermod} \ourpredmodrptitem{allAttr\_ext/3}{usermod} \ourpredmodrptitem{hasAttr\_ext/3}{usermod} \ourpredmodrptitem{classHasAttr\_ext/3}{usermod} \ourpredmodrptitem{minAttr\_ext/4}{usermod} \ourpredmodrptitem{maxAttr\_ext/4)}{usermod} %\ourpredmodrptitem{coversAttr\_ext/3)}{usermod} \ourpredmoditem{necessCond\_ext/2)}{usermod} % These dynamic predicates are used to store extensional facts in CDF. They can be called directly from the interpreter or from files that are not modules, but must be imported from {\tt usermod} by those files that are modules. 
Extensional facts may be added to a CDF system via \pred{newExtTerm/1} (\secref{sec:update}), or imported from a \ttindex{cdf\_extensional.P} file (\secref{sec:components}). \ourpredmodrptitem{isa\_int/2}{usermod} \ourpredmodrptitem{allAttr\_int/3}{usermod} \ourpredmodrptitem{hasAttr\_int/3}{usermod} \ourpredmodrptitem{classHasAttr\_int/3}{usermod} \ourpredmodrptitem{minAttr\_int/4}{usermod} \ourpredmodrptitem{maxAttr\_int/4)}{usermod} %\ourpredmodrptitem{coversAttr\_int/3)}{usermod} \ourpredmoditem{necessCond\_int/2)}{usermod} % These dynamic predicates are used to store intensional rules in CDF. They can be called directly from the interpreter or from files that are not modules, but must be imported from {\tt usermod} by those files that are modules. Intensional rules may be added to a CDF system via \pred{???newIntRule/1} (\secref{sec:update}), or imported from a \ttindex{cdf\_intensional.P} file (\secref{sec:components}). \ourpredmoditem{immed\_isa/2}{cdf\_init\_cdf} {\tt immed\_isa(SubCid,SupCid)} is true if there is a corresponding fact in \pred{isa\_ext/2} or in the intensional rules. It does not use the Implicit Subclassing Axiom \ref{ax:implsc}, the Domain Containment Axiom~\ref{ax:contained}, or reflexive or transitive closure. \ourpredmodrptitem{immed\_allAttr/3}{cdf\_init\_cdf} \ourpredmodrptitem{immed\_hasAttr/3}{cdf\_init\_cdf} \ourpredmodrptitem{immed\_classHasAttr/3}{cdf\_init\_cdf} \ourpredmodrptitem{immed\_minAttr/4}{cdf\_init\_cdf} \ourpredmodrptitem{immed\_maxAttr/4)}{cdf\_init\_cdf} %\ourpredmodrptitem{immed\_coversAttr/3)}{cdf\_init\_cdf} \ourpredmoditem{immed\_necessCond/2)}{cdf\_init\_cdf} Each of these predicates calls the corresponding extensional facts for the named predicate as well as the intensional rules. No inheritance mechanisms are used, and any intensional rules unifying with the call must support the call's instantiation pattern. \ourpredmoditem{cdf\_id\_fields/4}{cdf\_init\_cdf} {\tt cdf\_id\_fields(ID,Functor,NatId,Component)} is true if {\tt ID} is a legal CDF identifier, {\tt Functor} is its main functor symbol, {\tt NatId} is its first field and {\tt Component} is its second field. This convenience predicate provides a faster way to examine CDF identifiers than using {\tt functor/3} and {\tt arg/3}. \end{description}
\documentclass[fleqn,10pt]{article}
\include{preamble}
\begin{document}
\nocite{*}
\subfile{title}
\subfile{introduction}
\clearpage

\paragraph{To-do list} We see the following steps that must be undertaken before production can be started:
\begin{itemize}
\item Re-run the experiment with
\item Verify
\item Failure-tree analysis to ensure that power levels can never go above specified values.
\item Obtain special permission from the FCC?
\end{itemize}
\clearpage

\begin{multicols}{1}

[Frolich 1968] [Frolich 1980] \textrightarrow \ (\supercite{[Hung reflectarray 2014]}[\cite{hungFocusingReflectarrayIts2014} \href{https://doi.org/10.1002/2014RS005481}{\faExternalLink}] $\parallel$ \supercite{[Yang Efficient 2015]}[\cite{yangEfficientStructureResonance2015} \href{https://doi.org/10.1038/srep18030}{\faExternalLink}] $\parallel$ \supercite{[Sun rod-shaped 2017]}{})\footnote{We have not conducently unlike human cellular structures, as we shall see}: influenza A virions coincidentally have just the right size, shape, stiffness, and net charge distribution to form a weak (Q=2) spherical dipole resonance mode which couples well to the microwave spectrum at approximately 8 GHz. More critically, [\cite{dahlPixelRecursiveSuper2017}] (and, in parallel, [Hung 2014]) theoretically model and then experimentally validate in various strains of Influenza A that - due to this acoustic-resonance effect - the power density levels required to crack the lipid envelope are near the safety limits for continuous exposure to humans.\footnote{Sort of. See below and supplemental.} They demonstrate this with both a plaque and a PCR assay, finding good agreement with the theoretical model.\footnote{As we will discuss, there are a few issues with the experimental technique; sham, blinding, and dosimetry demanded by [Vjl.] are not mentioned.} Like pumping a swing, this effect allows an otherwise inconsequential field magnitude to store energy over a small number of cycles until the virus is destroyed.

\paragraph{Ramifications}\ If this mechanism exists, it would seem to provide significant advantages over existing UV or cold-plasma sterilization. If the extensions made in our paper are valid, this is a non-ionizing, non-thermal %
\footnote{It may be useful to define 'non-thermal'; it caused us some confusion. Certainly the proteins of the virion locally absorb energy and increase in temperature. The key is that, when excited in this manner, the energies in the envelope are poorly modelled by a Maxwell-Boltzmann distribution; they are not given sufficient time to 'thermalize'. In contrast, with typical 2.4 GHz microwave exposure, sterilization can only occur by aggregate heating of the fluid and tissue.} %
, non-chemical technique, harmless for continuous exposure to tissue, which can sterilize air and surfaces alike, including skin, eyes, and within hair; it evolves no ozone, can readily be generated with \$1-USD-scale devices, acts instantaneously, and - perhaps most critically - could be made to act {\bf within infected tissues}. The X-band is also minimally absorbed by air, allowing action in the far-field and scaling to square-kilometer areas with single installations.

\begin{autem}
{\it autem}\\
The previous paragraph may cut an impressive figure; but it hinges on all the rest of this paper being correct.
\end{autem}

\lettrine{It} cannot be overstated how unexpected and positively dubious this finding appears to be - at least, based on our limited research and experience to date.
For even the RF power limits set by standards organizations appear to be based on the observation that no significant resonance modes exist in biological tissues, and (as we shall see), this is grounded in solid {\it in vitro} (albeit more limited {\it in vivo}) evidence. This may account for why this paper has been ignored. The precise structure and charge of the virion appear to be a singular anomaly in this otherwise categorical non-existence.
\footnote{It should be noted that this {\it confined} acoustic resonance is subtly distinct from common-or-garden pipe-organ acoustic resonance; this is apparently not a strictly classical phenomenon. Besides standard Coulomb-like and Lennard-Jones-like interactions between constituent particles, if Fr\"{o}hlich is to be believed, at these nanoscopic scales there are also wave-function interactions among the particles of the virus which can shift storage to modes not otherwise expected. We confess to not yet understanding this phenomenon; fortunately, while helpful, the details of how this mode appears are not critical to implementing this technique.}
\footnote{``{[B]elow about 6 GHz, where EMFs penetrate deep into tissue (and thus require depth to be considered), it is useful to describe this in terms of “specific energy absorption rate” (SAR), which is the power absorbed per unit mass $(W/kg)$. Conversely, above 6 GHz, where EMFs are absorbed more superficially (making depth less relevant), it is useful to describe exposure in terms of the density of absorbed power over area $W/m^2$, which we refer to as “absorbed power density”}'' [ICNIRP 2020 \faExternalLink] }
\footnote{All values have been converted to $\text{W/m}^2$ to avoid confusion. 100 $\text{W/m}^2 = 10 \text{ mW/cm}^2 = 10 \text{ dBm/cm}^2$.}

\end{multicols}
\clearpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

A poignant summary of the consensus is found in [IEEE C95.1-2005], Annex B, "Identification of levels of RF exposure responsible for adverse effects: summary of the literature".

\begin{quote}
%%%%%%%%%%%
Further examination of the RF literature reveals no reproducible low level (non-thermal) effect that would occur even under extreme environmental exposures. The scientific consensus is that there are no accepted theoretical mechanisms that would suggest the existence of such effects. This consensus further supports the analysis presented in this section, i.e., that harmful effects are and will be due to excessive absorption of energy, resulting in heating that can result in a detrimentally elevated temperature. The accepted mechanism is RF energy absorbed by the biological system through interaction with polar molecules (dielectric relaxation) or interactions with ions (ohmic loss) is rapidly dispersed to all modes of the system leading to an average energy rise or temperature elevation. Since publication of ANSI C95.1-1982 [B6], significant advances have been made in our knowledge of the biological effects of exposure to RF energy. This increased knowledge strengthens the basis for and confidence in the statement that the MPEs and BRs in this standard are protective against established adverse health effects with a large margin of safety.
\end{quote}

Unfortunately, the buried lede is that the power levels shown by [Yang 2015] are about twice the continuous safety levels. However, there is an obvious avenue of optimization.
If the virus is destroyed in a few dozen nanoseconds via an electric field amplitude incidentally caused by an instantaneous power $\text{P}$, but tissue damage requires a temperature rise due to an energy deposition $\text{P} \ \text{dt}$, then we should minimize $\text{dt}$. We thus turn the CW microwave signal into a series of effectively instantaneous pulses.

The safety standards of [IEEE] and [ICNIRP] account for such time-domain modulation by specifying both an average power limit over a 6 minute period, and a time-integrated energy deposition limit. All known non-thermal and thermal physiological effects, including changes in the permeability of membranes, direct nerve stimulation, etc., are accounted for by obeying those two limits.

To drive home this point: [Vijayalaxmi 2006] provide what is apparently the best-quality in-vitro evidence available that fulfills the sham-exposure requirements. They expose blood lymphocytes to pulsed power in precisely the regime required in this work: 8.2 GHz, 8 ns duration and 50 kHz repetition rate, a whopping pulse power density of 250,000 $\text{W/m}^2$\footnote{Computed from average power / duty cycle.} (2500x the time-averaged power density safety limit), average power density 100 $\text{W/m}^2$ (the safety limit), for 2 hours, finding no change in any of the measured quantities. Similarly, [Chemeris 2004] use 8.8 GHz, 180 ns pulse width, peak power 65,000 W, repetition rate 50 Hz, exposure duration 40 min, on frog erythrocytes, and find genotoxicity only from the temperature rise. So it appears that this is not merely rules-lawyering to exploit a loophole in the regulations; it is a physically-informed effect.

\begin{autem}
These papers (the best quality we are aware of) were extensively cherry-picked from the literature based on the results of a meta-analysis [Vj. 2019], listing the requirements for reliable in-vitro RF experiments.\\
To illustrate how sensitive this research is, [Chemeris 2004] mention:
\begin{quote}
The increase in DNA damage after exposure of cells to HPPP EMF shown in Table 2 was due to the temperature rise in the cell suspension by $3.5\pm0.1^{\circ}$C. This was confirmed in sham-exposure experiments and experiments with incubation of cells for 40 min under the corresponding temperature conditions.
\end{quote}
There are a substantial number of papers in the literature which show positive effect sizes; these are listed in the bibliography. \\
The literature reviews conducted by standards organizations.\\
We are not aware of any past uses of these microwaves en masse. We do not have the
\end{autem}

\cite{POLAROGRAPHICOXYGENSENSORS}
\clearpage

{\Large \it Talkin' 'bout the Variation}\\
To re-cap, [Yang 2015] theoretically model the virus to determine the minimum electric field required to destroy it. They assume that the virus is a simple damped harmonic oscillator, where the 'core' and 'shell' oscillate in opposition. They determine the net charge experimentally from microwave absorption measurements. Since [Yang] try to compute the {\it threshold}, they use a value of 400 pN for the breaking strength of the envelope, obtained from [Li 2011]. However, "95\%.".
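A quick arithmetic check of the [Vijayalaxmi 2006] pulse figures quoted earlier (our own re-derivation of numbers already stated in that paragraph, nothing new):

\begin{verbatim}
# Re-deriving the pulse figures quoted from [Vijayalaxmi 2006].
avg_power_density = 100.0    # W/m^2, time-averaged (the continuous safety limit)
pulse_width      = 8e-9      # s, 8 ns pulses
repetition_rate  = 50e3      # Hz, 50 kHz

duty_cycle = pulse_width * repetition_rate            # = 4e-4
peak_power_density = avg_power_density / duty_cycle   # = 250,000 W/m^2
print(duty_cycle, peak_power_density)
\end{verbatim}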
\begin{center}
\begin{tabular}{||c c c c||}
 \hline
 Col1 & Col2 & Col2 & Col3 \\ [0.5ex]
 \hline\hline
 1 & 6 & 87837 & 787 \\
 \hline
 2 & 7 & 78 & 5415 \\
 \hline
 3 & 545 & 778 & 7507 \\
 \hline
 4 & 545 & 18744 & 7560 \\
 \hline
 5 & 88 & 788 & 6344 \\ [1ex]
 \hline
\end{tabular}
\end{center}

Of course, this application is hardly much better than an N95 mask, except that it is non-disposable and provides protection for the eyes and skin.

Vomit [Vj] mention the importance of precise dosimetry. Even simple structures can produce hot-spots. Yang et al. use both a plastic cuvette and a single drop of solution on a glass slide; in either case, a sharp change in dielectric constant is present. To the extreme, some papers have used

\clearpage
\paragraph{\textbf{Time dependence}}\ The fact that the viral inactivation is non-thermal also bears on the exposure time. Both [Yang 2015] and [Hung 2014] use an apparently arbitrary 15-minute exposure in their tests - a very reasonable decision, given the focus of their paper. To be effective against airborne particles, and to minimize the power required in a dwelling phased-array beam, we must establish the required duration of exposure.

{\color{red} speculative hypothesizing \{ }
In contrast to chemical inactivation, where the time dependence appears to be dominated by viscous fluid dynamic effects [Hirose 2017], or UV inactivation, where a certain quantized dose of photons must be absorbed, we expected RF to act instantaneously. As a damped, driven oscillator, the ring-up time of the virus depends on the Q factor. Yang et al. state the Q of Inf. A as between 2 and 10, so at 8 GHz the steady-state amplitude should be reached in well under 100 nanoseconds. [FIXME]

[] found a significant mechanical fatigue effect in phage capsids, where a small strain applied repetitively eventually causes a fracture. Such a mechanism could perhaps extend the exposure required to break the capsid or membrane. Other mechanisms could include some sort of lipid denaturation, requiring an absolute amount of energy absorption to break or twist bonds and modify properties before the envelope fractures.
{\color{red} \} }

\subfile{safety}
\clearpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{multicols}{1}
{\Large The Experiment}\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\paragraph{\textbf{Centrifugal microfluidics}}\ The field of centrifugal microfluidics is accelerating. Many CD-microfluidics systems use standard CD molding or machining techniques for the channels, in either acrylic or silicone. The turbidity sensor is most sensitive if the plastic is clear. Sterilization does not seem to be discussed. Polypropylene is the ideal material, being almost indefinitely autoclavable. It is quite difficult to machine.

\end{multicols}
\clearpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{multicols}{1}
{\Large Modes of application}\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\paragraph{\textbf{Personal 'electromagnetic mask'}}\ Even with judicious use of phased-arrays, spatial power-combining, etc., each transistor can only reasonably maintain sterility in approx. $ 0.1 \text{ m}^3 $. We therefore demonstrate this form factor because, superficially, there are fewer people than there are places. On the other hand, a personal device may present issues with participation and production volume.
\paragraph{\textbf{Direct treatments at the fundamental}}\ Via [Hand 1982], the $1/e$ field-magnitude penetration depth\footnote{``Skin depth''?} is approximately 40 mm for 'dry' tissues and 5 mm for 'wet'. SARS is found widely distributed throughout many of the most favoured organs [Ding 2004], shielded by an average of 4 cm of chest wall [Schroeder 2013]; so safe external treatment of the body is unlikely. However, destruction of lung tissue appears to be significant in the lethality of SARS [Nicholls 2006]. A bronchoscopic technique may therefore be effective, very similar to that demonstrated by [Yuan 2019]: in adults, the bronchi are less than 2 mm thick [Theriault 2018] and the lungs themselves are only on the order of 7 mm thick [Chekan 2016]. The main bronchus is about 8 mm in diameter, which is smaller than the patch antenna used here; a monopole (or multiple, phased monopoles) is probably more suitable.

\paragraph{\textbf{Subharmonics}}\ The 8 GHz wave need not be the fundamental. A modulated carrier wave with a more deeply penetrating wavelength, which then locally interferes to produce the 8 GHz tone within the tissue, could be useful. The penetration depth shortens as frequency increases, until the so-called 'optical windows' in the absorption spectra of tissues, particularly in the near-IR range.

\paragraph{\textbf{More fanciful concepts}}\ Many systems operate in these X-band frequency ranges; and precision \ldots\ It is easy to produce megawatts of power at these frequency ranges using klystrons. Assuming an appropriate antenna, a single 50 MW SLAC XL-class klystron with a PRF of 180 Hz could sanitize almost a square kilometer every second.\footnote{Giving 'aseptic field' new meaning.} On the other hand, if this is conducted in outdoor free-space, sunlight \ldots\ Existing marine, weather, and aviation radar systems often use the X-band; depending on the focusing power \ldots

\paragraph{Sensing} Surprisingly, there is little discussion of microwave virus detection, with the notable exception of [Mehrotra 2019]. Such scholarly silence is usually an indicator that a technique is impossible for reasons so obvious they scarcely bear repeating. However, [Oberoi 2012] find that - in controlled conditions - they can detect a single \textit{E. coli} bacterium in 50 uL of broth via a microwave cavity. Of course, discriminating minuscule returns in free-space is not easy; but with precise knowledge of this resonance mode, it may become somewhat easier. Frequency-domain measurements can be made with incredible precision, and there are many parameters by which virions could be discriminated: ring-up time of the resonance, excitation non-linearity, etc.

\paragraph{} It has been shown that virions can effect large-scale changes in the electric charges of their []. Use of this technique will provide a selection bias towards immunity to electromagnetic fields, which could perhaps be effected by preferring extreme-sized mutants (shifting the resonance away from the applied field). We do not have the biological knowledge to know if this is plausible; it simply seemed worth mentioning.
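For orientation, a back-of-the-envelope check (our own arithmetic, using only the penetration-depth and chest-wall figures quoted under `Direct treatments at the fundamental' above) of how little field survives passage through the chest wall:

\begin{verbatim}
import math

delta_wet  = 5e-3    # m, 1/e E-field penetration depth in 'wet' tissue [Hand 1982]
chest_wall = 40e-3   # m, average chest-wall thickness [Schroeder 2013]

field_fraction = math.exp(-chest_wall / delta_wet)   # remaining E-field amplitude
power_fraction = field_fraction ** 2                 # power scales as |E|^2

print(field_fraction)   # ~3.4e-4
print(power_fraction)   # ~1.1e-7
\end{verbatim}

This is consistent with the statement above that safe external treatment of the lungs through the chest is unlikely.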
\end{multicols} \clearpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\Large Microwave Musings}\\ \begin{multicols}{1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \noindent\fbox{\parbox{\linewidth}{ Toolchain: \begin{itemize} \item Failed oscillator feedback-loop optimization toolchain: QUCS 0.0.20 + python-qucs + scipy's 'basinhopper' \item Successful A slightly modified ngspice + ngspyce + pyEVTK \item gprMax for FDTD EM simulation \item KiCAD, wcalc, scikit-rf, ngspice \end{itemize} }} % Microwave design has a reputation for being the purview of wizards. Modern RF software packages like HFSS, Microwave Office, Agilent's ADS, and Mathworks' RF toolbox, for which; component models Luckily, this project falls perfectly within the subset of microwave technologies that do not require a goatee. The vast majority of the behavior of most circuits can be accurately modelled with the very same SPICE tools as one would use at low frequencies. With judicious application of reference designs, it seems to be possible to design Before about 2005, however, it seems to have been somewhat the norm to write a simple numerical code to solve the problem at hand based on underlying principles. We are also aided by the fact that what took a \$150,000 computing cluster a single decade ago can now be done in a few minutes by a single budget GPU. \end{multicols} \clearpage %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\Large Interference}\\ \begin{multicols}{1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% {\Large Mass production} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Almost all components on the device can be replicated with fully vertically-integrated first-principles. Capacitors can be If this 'electromagnetic mask' form is the ideal (not nearly), The largest RFID plants can produce a minimum of 5 GaAs or SiGe:C transistors will be required. Without SOI or, it does not appear that As a lower bound, there are 1 million hospital beds in the U.S. [AHA 2018]; and as an upper bound obtaining global herd immunity would take 1.75 billion units. It is difficult to determine the supply capacity for these semiconductor processes; information is not forthcoming from the manufacturers. Fermi estimates \footnote{GaAs MMIC market of \$2.2 Bn USD / random sample of MMIC prices = about 2 Bil devices / yr, 3e9 wifi connected devices produced each year,} It is possible to use common Si-based devices at these frequencies, especially with second-harmonic techniques [Winch 1982]. However, obtaining the required gain and output power is not a trivial matter. The techniques and equipment required to produce these devices are extremely complex; load-lock UHV, Figures are not forthcoming, Given the supply difficulties of comparatively simple materials such as Tyvek at pandemic scales, it is difficult to imagine that RF semiconductor production can be immediately re-tasked and scaled to this degree. \paragraph{\textbf{Vacuum RF triode}}\ Especially combined with an integrated titanium sorption pump, One concern is the large filament heater power, which prevents the use of low-cost button cells for power. Use of cold-cathode field-emitter arrays would alleviate this issue, but at the cost of complexity. Small tungsten incandescent lights are available down to 0.05 watts. 
With a suitable high-efficiency cathode coating, a pulsed heater power of less than 0.02 W should be achievable.

\end{multicols}

\subfile{acknowledgement}
\subfile{supplemental}

\bibliography{references}
\bibliographystyle{plain}

\end{document}
{ "alphanum_fraction": 0.7617579406, "avg_line_length": 47.4083526682, "ext": "tex", "hexsha": "123555a44cd92ba13272803b6fef58b181908f96", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e9c103e5e62bc128169400998df5f5cd13bd8949", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "0xDBFB7/covidinator", "max_forks_repo_path": "documents/backup/backup_paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e9c103e5e62bc128169400998df5f5cd13bd8949", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "0xDBFB7/covidinator", "max_issues_repo_path": "documents/backup/backup_paper.tex", "max_line_length": 678, "max_stars_count": null, "max_stars_repo_head_hexsha": "e9c103e5e62bc128169400998df5f5cd13bd8949", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "0xDBFB7/covidinator", "max_stars_repo_path": "documents/backup/backup_paper.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4672, "size": 20433 }
\chapter{Example}
\label{ch:example}

!!! Please delete this chapter after finishing your work !!!

\section{Settings}

To add your name and the title of your work, please use the ``Settings.tex'' file! Additionally, switch there between the German and English versions.

\section{How to Make Sections and Subsections}

Use section and subsection commands to organize your document. \LaTeX{} handles all the formatting and numbering automatically. Use ref and label commands for cross-references.

\subsection{How to Make Lists}

You can make lists with automatic numbering \dots

\begin{enumerate}
\item Like this,
\item and like this.
\end{enumerate}

\dots or bullet points \dots

\begin{itemize}
\item Like this,
\item and like this.
\end{itemize}

\dots or with words and descriptions \dots

\begin{description}
\item[Word] Definition
\item[Concept] Explanation
\item[Idea] Text
\end{description}

\section{Section}

You have to write text between each headline.

\section{Citation}

This part describes the three types of citations which are possible:

\section{Direct Citation}

The maximum for a direct citation is a $1/2$ page.

\begin{quotation}
Overview first, zoom and filter, then details-on-demand \autocite{shneiderman_eyes_1996}
\end{quotation}

\section{Floating Text Citation}

\textcite{shneiderman_eyes_1996} defined the Visual Information Seeking Mantra as ``Overview first, zoom and filter, then details-on-demand''.

\section{Indirect Citation}

Some text which summarizes a paper or a book chapter. This could take several lines. Find attached a citation of a website~\autocite{kaley_match_2018}.

\newpage
\section{Figures}

To place a figure use the following code example

\begin{figure}[ht!]
\centering
\includegraphics[width=1\columnwidth]{Figures/Example}
\caption{Interactive data exploration with multiple devices.}
\label{fig:example}
\end{figure}

\begin{figure}[h]
\centering
\subfigure[Figure A]{
\label{fig:a}
\includegraphics[width=60mm]{Figures/Example}
}
\subfigure[Figure B]{
\label{fig:b}
\includegraphics[width=60mm]{Figures/Example}
}
\caption{Wearables worn for experiments 1, 2, and 3.}\label{fig:figure2}
\end{figure}

Refer to a figure in the following forms:\\
If you take a look at Figure~\ref{fig:example} ...\\
... text text (see Figure~\ref{fig:example}) ...

\section{Listings}

\begin{lstlisting}[caption=A bit of source code., label=lst:test]
if( true == questions )
{
    std::cout << "Let me google it for you";
}
else
{
    std::cout << "Great";
}
\end{lstlisting}

Now let's take a look at Listing~\ref{lst:test}.

\section{Table}

\begin{table}[ht!]
\caption{My caption with a very useful description, which can also be somewhat longer and run over several lines, and so on.}
\label{my-label}
\begin{tabular}{llr}
\hline
\multicolumn{2}{c}{Item} & \\
\cline{1-2}
Animal & Description & Price (\$) \\
\hline
Gnat & per gram & 13.65 \\
 & each & 0.01 \\
Gnu & stuffed & 92.50 \\
Emu & stuffed & 33.33 \\
Armadillo & frozen & 8.99 \\
\hline
\end{tabular}
\end{table}

For the fast generation of tables from Excel use \url{http://www.heise.de/download/excel2latex.html}

\section{Equations}

\LaTeX{} is great at typesetting equations. Let $X_1, X_2, \ldots, X_n$ be a sequence of independent and identically distributed random variables with $\text{E}[X_i] = \mu$ and $\text{Var}[X_i] = \sigma^2 < \infty$, and let
$$S_n = \frac{X_1 + X_2 + \cdots + X_n}{n}$$
denote their mean. Then as $n$ approaches infinity, the random variables $\sqrt{n}(S_n - \mu)$ converge in distribution to a normal $\mathcal{N}(0, \sigma^2)$.

This was an equation without a label.

\begin{equation}
S_n = \frac{1}{n}\sum_{i}^{n} X_i
\label{eq:test}
\end{equation}

This is the reference to equation~\ref{eq:test}.
{ "alphanum_fraction": 0.6963879768, "avg_line_length": 27.3034482759, "ext": "tex", "hexsha": "5211bc19c3af87cb09214b766f0b176353790fc8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2017-04-11T09:56:21.000Z", "max_forks_repo_forks_event_min_datetime": "2017-04-11T09:56:21.000Z", "max_forks_repo_head_hexsha": "6881e1198f4ba8eba83dbc9c90df702ce095f777", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "fhstp/ThesisTemplate-FH-StP", "max_forks_repo_path": "Example.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6881e1198f4ba8eba83dbc9c90df702ce095f777", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "fhstp/ThesisTemplate-FH-StP", "max_issues_repo_path": "Example.tex", "max_line_length": 223, "max_stars_count": 2, "max_stars_repo_head_hexsha": "6881e1198f4ba8eba83dbc9c90df702ce095f777", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "fhstp/ThesisTemplate-FH-StP", "max_stars_repo_path": "Example.tex", "max_stars_repo_stars_event_max_datetime": "2021-10-14T17:53:55.000Z", "max_stars_repo_stars_event_min_datetime": "2017-03-12T10:51:47.000Z", "num_tokens": 1126, "size": 3959 }
\subsection{Steps Followed for the Sample Script} In this section we will look at the script (shown in Listing~\ref{listing:FunctionExample}) along with the four functions used by this script (listings~\ref{listing:LoadCartState} through~\ref{listing:Cross}), and examine the behavior of the Mission Control Sequence, Function Control Sequences, Configuration, Sandbox, Sandbox Object Map, Global Object Store, and Function Object Stores as the script is loaded, executed, and removed from memory. This discussion will be broken into four distinct processes: \begin{enumerate} \item Script Parsing -- the process of reading the script in Listing~\ref{listing:FunctionExample} and building the resources and Mission Control Sequence. \item Initialization -- The process of passing the configuration and MCS into the Sandbox. \item Execution -- The process of running the MCS, including calls to the functions. \item Finalization -- Steps taken when the run is complete. \end{enumerate} As we will see, each of these steps can be further subdivided to a discrete set of substeps. We'll begin by examining what happens when the script is first read into memory. \subsubsection[Script Parsing]{Script Parsing} The details of script parsing are described fully in Chapter~\ref{chapter:ScriptRW}, (``Script Reading and Writing''). That chapter discusses the modes that the interpreter goes through when reading a script file, starting with the object property mode, moving through the command mode, and finishing with the final pass through the mission resources. You should review the relevant sections of that chapter if this terminology confuses you. Table~\ref{table:FnStatusAtStart} shows the state of the components of the engine at the start of script reading. This table does not include any elements specific to the Sandbox, because the Sandbox is in an idle state at this point. When the Sandbox elements become relevant, they will be added to the tables summarizing the state of the system. \begin{center} \tablecaption{\label{table:FnStatusAtStart}Status at Start of Script Parsing} \tablefirsthead{\hline \multicolumn{1}{|c|}{\textbf{Configuration}} & \multicolumn{1}{c|}{\textbf{MCS}} & \multicolumn{1}{c|}{\textbf{Interpreter Mode}} & \multicolumn{1}{c|}{\textbf{Sandbox}} \\ \hline} \tablelasttail{\hline} \begin{supertabular}{|m{1.3in}|m{1.3in}|m{1.3in}|m{1.3in}|} Empty & Empty & Object Property & Idle\\ \end{supertabular} \end{center} The Script Interpreter remains in Object Property mode until the first command is encountered in the script. 
That means that the following lines are all parsed in Object Property mode: \begin{quote} \begin{verbatim} % Create a s/c Create Spacecraft Sat; Create ForceModel Prop_FModel; GMAT Prop_FModel.PrimaryBodies = {Earth}; Create Propagator Prop; GMAT Prop.FM = Prop_FModel; % Variables and arrays needed in calculations Create Variable SMA ECC RAAN; Create Variable r v pi2 mu d2r Energy; Create Variable SMAError ECCError RAANError; Create Array rv[3,1] vv[3,1] ev[3,1] nv[3,1]; % Create a report to output error data Create ReportFile Cart2KepConvert; GMAT Cart2KepConvert.Filename = FunctDiffs.report; GMAT Cart2KepConvert.ZeroFill = On; mu = 398600.4415; pi2 = 6.283185307179586232; d2r = 0.01745329251994329509 \end{verbatim} \end{quote} \noindent After these lines have been parsed, the table of objects looks like this: \begin{center} \tablecaption{\label{table:StatusPostObjectParse}Status after Parsing the Objects} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & Empty & Object Property & Idle\\ ForceModel & Prop\_FModel & & & \\ Propagator & Prop & & & \\ Variable & SMA & & & \\ Variable & ECC & & & \\ Variable & RAAN & & & \\ Variable & r & & & \\ Variable & v & & & \\ Variable & pi2 & & & \\ Variable & mu & & & \\ Variable & d2r & & & \\ Variable & Energy & & & \\ Variable & SMAError & & & \\ Variable & ECCError & & & \\ Variable & RAANError & & & \\ Array & rv & & & \\ Array & vv & & & \\ Array & ev & & & \\ Array & nv & & & \\ ReportFile & Cart2KepConvert & & & \\ \end{supertabular} \end{center} \noindent At this point, the configuration is complete. The objects contained in the configuration all have valid data values; those that are not set explicitly in the script are given default values, while those that are explicitly set contain the specified values. Note that at this point, the configuration does not contain any functions. GMAT functions are added to the configuration when they are encountered, as we'll see when we encounter a script line that includes a GMAT function. The next line of the script contains a command: \begin{quote} \begin{verbatim} While Sat.ElapsedDays < 1 \end{verbatim} \end{quote} \noindent When the Script Interpreter encounters this line, it toggles into command mode. Once this line of script has been parsed, the state of the engine looks like this (note that I'm abbreviating the configuration here -- it still contains all of the objects listed above): \begin{center} \tablecaption{Status after Parsing the First Command} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & & & \\ Propagator & Prop & & & \\ Variable & SMA & & & \\ ... & ... 
& & & \\ Array & nv & & & \\ ReportFile & Cart2KepConvert & & & \\ \end{supertabular} \end{center} \noindent The Script Interpreter parses the next line (a Propagate line) as expected, giving this state: \begin{center} \tablecaption{Status after Parsing the Propagate Command} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & +-- Propagate & & \\ Propagator & Prop & & & \\ Variable & SMA & & & \\ ... & ... & & & \\ Array & nv & & & \\ ReportFile & Cart2KepConvert & & & \\ \end{supertabular} \end{center} \noindent The next script line is a function call: \begin{quote} \begin{verbatim} [rv, vv, r, v] = LoadCartState(Sat); \end{verbatim} \end{quote} \noindent When the Script Interpreter encounters this function call, several things happen: \begin{enumerate} \item The line is decomposed into three sets of elements: outputs (rv, vv, r, and v), the function name (LoadCartState), and inputs (Sat) \item The Script Interpreter builds a CallFunction command\footnote{Note that each CallFunction -- and, as we'll see later, FunctionRunner -- that is created includes an instance of the FunctionManager class. This internal object is used to make all of the calls needed on the Function consistent between the two avenues used to invoke a Function. All of the Function calls needed by the command or MathTree evaluation are made through these FunctionManager instances.}. \item The Script Interpreter sends a request to the Moderator for a function named LoadCartState. The Moderator sends the request to the Configuration Manager. Since the configuration does not contain a function with this name, the Configuration Manager returns a NULL pointer, which is returned to the ScriptInterpreter. \item The Script Interpreter sees the NULL pointer, and calls the Moderator to construct a GmatFunction object named LoadCartState. The Moderator calls the Factory Manager requesting this object. It is constructed in a function factory, and returned through the Moderator to the Script Interpreter. The Moderator also adds the function to the Configuration. \item The Script Interpreter passes the GmatFunction into the CallFunction command. \item The CallFunction command sends the GmatFunction to its FunctionManager instance. \item The Script Interpreter passes the list of input and output parameters to the CallFunction. \item The CallFunction passes the list of input and output parameters to its FunctionManager. \end{enumerate} This completes the parsing step for the CallFunction line. Note that (1) the Function Control Sequence is not yet built, and (2) the function file has not yet been located in the file system. These steps are performed later. 
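To make step 1 of the list above concrete, the following fragment shows one way a bracketed assignment line can be decomposed into output names, a function name, and input names. This is an illustrative, self-contained C++ sketch under simplifying assumptions (a single well-formed line, no error handling); the helper name SplitList is hypothetical, and none of this is the actual Interpreter code.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: split a comma-separated list into trimmed tokens.
static std::vector<std::string> SplitList(const std::string &text)
{
   std::vector<std::string> tokens;
   std::stringstream stream(text);
   std::string item;
   while (std::getline(stream, item, ','))
   {
      std::string::size_type first = item.find_first_not_of(" \t");
      std::string::size_type last  = item.find_last_not_of(" \t");
      if (first != std::string::npos)
         tokens.push_back(item.substr(first, last - first + 1));
   }
   return tokens;
}

int main()
{
   std::string line = "[rv, vv, r, v] = LoadCartState(Sat);";

   // Output names: the comma-separated list between the square brackets.
   std::vector<std::string> outputNames =
      SplitList(line.substr(1, line.find(']') - 1));

   // Function name: the text between '=' and '(' with whitespace removed.
   std::string::size_type eq = line.find('=');
   std::string::size_type lp = line.find('(');
   std::vector<std::string> nameTokens =
      SplitList(line.substr(eq + 1, lp - eq - 1));
   std::string functionName = nameTokens.empty() ? "" : nameTokens[0];

   // Input names: the comma-separated list between the parentheses.
   std::vector<std::string> inputNames =
      SplitList(line.substr(lp + 1, line.find(')') - lp - 1));

   std::cout << functionName << ": " << inputNames.size() << " input(s), "
             << outputNames.size() << " output(s)" << std::endl;
   return 0;
}
\end{verbatim}
\end{quote}

\noindent In GMAT terms, the two lists produced here correspond to the inputNames and outputNames StringArrays held by the CallFunction's FunctionManager; the Function's own input and output maps remain empty until the function file itself is parsed.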
At this point, the system has this state:

\begin{center}
\tablecaption{Status after Parsing the CallFunction Command}
\tablefirsthead{\hline
\multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} &
\multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\
\multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\
\hline }
\tablelasttail{\hline}
\begin{supertabular}{|l l|l|l|l|}
Spacecraft & Sat & While & Command & Idle\\
ForceModel & Prop\_FModel & +-- Propagate & & \\
Propagator & Prop & +-- CallFunction & & \\
Variable & SMA & & & \\
... & ... & & & \\
Array & nv & & & \\
ReportFile & Cart2KepConvert & & & \\
GmatFunction & LoadCartState & & & \\
\end{supertabular}
\end{center}

\noindent Now that we've encountered the first function in the script, it is useful to start watching the data structures for the function. We'll do this in a separate table:

\begin{center}
\begin{small}
\tablecaption{Function Properties after Parsing the First CallFunction}
\tablefirsthead{\hline
\multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller\footnotemark{}}} &
\multicolumn{3}{c|}{\textbf{Function Manager}} &
\multicolumn{3}{c|}{\textbf{Function Attributes}} \\
\cline{3-8}
& & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\
\hline }
\tablelasttail{\hline}
\begin{supertabular}{|l|l|m{0.56in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|}
LoadCartState & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\
\end{supertabular}
\end{small}
\end{center}
\footnotetext{``Caller'' in this context is the type of object -- a CallFunction or a FunctionRunner -- that is used to execute the function in this example. It is possible that a function could be called from both types of object in the same script.}

One feature worth noting at this point is that there are two locations used for input and output arguments. The list managed in the FunctionManager tracks the parameters as listed in the function call in the control sequence that is calling the function. These parameters are listed in the order found in the call. Thus for this CallFunction, the StringArrays containing the arguments in the FunctionManager contain these data:

\begin{quote}
\begin{verbatim}
inputNames  = {"Sat"}
outputNames = {"rv", "vv", "r", "v"}
\end{verbatim}
\end{quote}

The inputs and outputs maps in the Function object map the names used in the function to the associated objects. Since the function itself has not been built at this stage, these maps are empty, and will remain empty until the function file is parsed.

The Function Object Store itself is empty at this point. It provides a mapping between the function-scope object names and the objects. Since the function has not yet been parsed, this object store remains empty.

The next two script lines do not make function calls, so they can be parsed and built using the features described in Chapter~\ref{chapter:ScriptRW}.
After these two lines are built: \begin{quote} \begin{verbatim} Energy = v\^{}2/2 - mu/r; SMA = -mu/2/Energy; \end{verbatim} \end{quote} \noindent the state tables contain these data: \begin{center} \tablecaption{Status after Parsing the next two Commands} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & +-- Propagate & & \\ Propagator & Prop & +-- CallFunction & & \\ Variable & SMA & +-- Assignment & & \\ ... & ... & +-- Assignment & & \\ Array & nv & & & \\ ReportFile & Cart2KepConvert & & & \\ GmatFunction & LoadCartState & & & \\ \end{supertabular} \end{center} \noindent and \begin{center} \begin{small} \tablecaption{Function Properties after Parsing the First Two Assignment Lines} \tablefirsthead{\hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l|l|m{0.56in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|} LoadCartState & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ \end{supertabular} \end{small} \end{center} Both of the lines listed here generate Assignment commands. The right side of these assignments are MathTree elements, built using the inline math features described in Chapter~\ref{chapter:InlineMath}. As you might expect, the Mission Control Sequence contains these new commands, but nothing else has changed at this level. The next line also generates an Assignment line: \begin{quote} \begin{verbatim} ev = cross(vv, cross(rv, vv)) / mu - rv / r; \end{verbatim} \end{quote} \noindent This line also builds a MathTree for the right side of the equation. The resulting tree contains two function calls, both made to the GMAT function named ``cross.'' The MathTree built from this Assignment line is shown in Figure~\ref{figure:TwoFunMathTree}. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{Images/CrossMathTree.eps} \caption{\label{figure:TwoFunMathTree}A MathTree with Two Function Calls.} \end{center} \end{figure} \noindent Once this command has been built, the state of the system can be tabulated as in Tables~\ref{table:EngineStatusWithCross} and~\ref{table:FunctionStatusWithCross}. 
\begin{center} \tablecaption{\label{table:EngineStatusWithCross}Status after Parsing the Assignment Line containing Two Calls to the cross Function} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{1in}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablehead{ \hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{1in}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tabletail{\hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & +-- Propagate & & \\ Propagator & Prop & +-- CallFunction & & \\ Variable & SMA & +-- Assignment & & \\ ... & ... & +-- Assignment & & \\ Array & nv & +-- Assignment & & \\ ReportFile & Cart2KepConvert & & & \\ GmatFunction & LoadCartState & & & \\ GmatFunction & cross & & & \\ \end{supertabular} \end{center} \noindent and \begin{center} \begin{small} \tablecaption{\label{table:FunctionStatusWithCross}Function Properties after Parsing the cross Assignment Line} \tablefirsthead{\hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tablehead{ \hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tabletail{\hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l|l|m{0.56in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|} LoadCartState & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ cross & Function\-Runner & Names set & Empty & NULL & Empty & Empty & Empty\\ \end{supertabular} \end{small} \end{center} There are two FunctionRunner nodes in the MathTree shown in Figure~\ref{figure:TwoFunMathTree}. Each one has its own FunctionManager. The inputs and outputs StringArrays have the following values for these FunctionManagers: \begin{itemize} \item Inner FunctionRunner MathNode \end{itemize} \begin{quote} \begin{verbatim} inputNames = {"rv", "vv"} outputNames = {""} \end{verbatim} \end{quote} \begin{itemize} \item Outer FunctionRunner MathNode \end{itemize} \begin{quote} \begin{verbatim} inputNames = {"vv", ""} outputNames = {""} \end{verbatim} \end{quote} \noindent Note that at this point in the process, the unnamed arguments are marked using empty strings in the StringArrays. 
This is a general feature of the argument arrays generated in a FunctionManager associated with a FunctionRunner: empty strings are used to indicate arguments that must exist, but that do not have names that can be looked up in the object stores. In general, these empty strings indicate either output data or results that come from lower calculations performed in the MathTree. The next script line, \begin{quote} \begin{verbatim} [ECC] = magnitude(ev); \end{verbatim} \end{quote} \noindent builds another function call using a CallFunction, this time to the magnitude function. The resulting attributes are shown in Tables~\ref{table:EngineStatusWithMagnitude} and~\ref{table:FunctionStatusWithMagnitude}. \begin{center} \tablecaption{\label{table:EngineStatusWithMagnitude}Status after Parsing the Call to the magnitude Function} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablehead{ \hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tabletail{\hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & +-- Propagate & & \\ Propagator & Prop & +-- CallFunction & & \\ Variable & SMA & +-- Assignment & & \\ ... & ... 
& +-- Assignment & & \\ Array & nv & +-- Assignment & & \\ ReportFile & Cart2KepConvert & +-- CallFunction & & \\ GmatFunction & LoadCartState & & & \\ GmatFunction & cross & & & \\ GmatFunction & magnitude & & & \\ \end{supertabular} \end{center} \noindent and \begin{center} \begin{small} \tablecaption{\label{table:FunctionStatusWithMagnitude}Function Properties after Parsing the magnitude Line} \tablefirsthead{\hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tablehead{ \hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tabletail{\hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l|l|m{0.56in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|} LoadCartState & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ cross & Function\-Runner & Names set & Empty & NULL & Empty & Empty & Empty\\ magnitude & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ \end{supertabular} \end{small} \end{center} This process continues through the remaining lines of the script: \begin{quote} \begin{verbatim} nv(1,1) = x*vz-z*vx; nv(2,1) = y*vz-z*vy; nv(3,1) = 0; [n] = magnitude(nv); RAAN = acos( nv(1,1)/n ); If nv(2,1) < 0; RAAN = (pi2 - RAAN) / d2r; EndIf; SMAError = Sat.SMA - SMA; ECCError = Sat.ECC - ECC; RAANError = Sat.RAAN - RAAN; Report Cart2KepConvert Sat.SMA SMA SMAError ... Sat.ECC ECC ECCError Sat.RAAN RAAN RAANError; EndWhile \end{verbatim} \end{quote} \noindent The only line that calls a GMAT function here is the fourth line, a CallFunction command that again calls the magnitude function. 
At the end of parsing, our tables of object properties look like this: \begin{center} \tablecaption{\label{table:EngineStatusAfterParsing}Status after Parsing the Script} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablehead{ \hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tabletail{\hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Command & Idle\\ ForceModel & Prop\_FModel & +-- Propagate & & \\ Propagator & Prop & +-- CallFunction & & \\ Variable & SMA & +-- Assignment & & \\ Variable & ECC & +-- Assignment & & \\ Variable & RAAN & +-- Assignment & & \\ Variable & r & +-- CallFunction & & \\ Variable & v & +-- Assignment & & \\ Variable & pi2 & +-- Assignment & & \\ Variable & mu & +-- Assignment & & \\ Variable & d2r & +-- CallFunction & & \\ Variable & Energy & +-- Assignment & & \\ Variable & SMAError & +-- If & & \\ Variable & ECCError & +--+-- Assignment & & \\ Variable & RAANError & +-- EndIf & & \\ Array & rv & +-- Assignment & & \\ Array & vv & +-- Assignment & & \\ Array & ev & +-- Assignment & & \\ Array & nv & +-- Report & & \\ ReportFile & Cart2KepConvert & EndWhile & & \\ GmatFunction & LoadCartState & & & \\ GmatFunction & cross & & & \\ GmatFunction & magnitude & & & \\ \end{supertabular} \end{center} \noindent and \begin{center} \begin{small} \tablecaption{\label{table:FunctionStatusAfterParsing}Function Properties after Parsing the Script} \tablefirsthead{\hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tablehead{ \hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multirow{2}{*}{\textbf{Function}} & \multirow{2}{*}{\textbf{Caller}} & \multicolumn{3}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function Attributes}} \\ \cline{3-8} & & inputs, outputs & Function Object Store & Global Object Store & inputs & outputs & FCS\\ \hline } \tabletail{\hline \multicolumn{8}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l|l|m{0.56in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|m{0.55in}|} LoadCartState & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ cross & Function\-Runner & Names set & Empty & NULL & Empty & Empty & Empty\\ magnitude & CallFunction & Names set & Empty & NULL & Empty & Empty & Empty\\ \end{supertabular} \end{small} \end{center} At this point in the process, the Configuration and Mission Control Sequence have been populated, and three GMAT functions have been identified but not yet located. 
The ScriptInterpreter has finished parsing the script, but has not yet made its final pass through the objects created during parsing. During the final pass, object pointers and references are set and validated. the ScriptInterpreter uses the final pass to locate the function files for all of the GmatFunction objects built during parsing. The path to each function is set at this time. The ScriptInterpreter makes a call, through the Moderator, and locates the function file on the GmatFunctionPath. The file must be named identically to the name of the function, with a file extension of ``.gmf'' -- so, for example, the function file for the magnitude function must be named ``magnitude.gmf''. These file names are case sensitive; a file named ``Magnitude.gmf'' will not match the ``magnitude'' function. If there is no matching file for the function, the ScriptInterpreter throws an exception. Once this final pass is complete, script parsing has finished, and the ScriptInterpreter returns to an idle state. The steps followed to parse the Mission Control Sequence, described above, give GMAT enough information to fully populate the GUI so that it can present users with a view of the mission contained in the script. The GUI includes entries for each of the functions in the main script, and displays these functions along with all of the other configured objects on the Resource tree. \subsubsection{Initialization in the Sandbox} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Images/InitializeSequence.eps} \caption{\label{figure:InitializeControlFunctionChapter}Initializing a Control Sequence} \end{center} \end{figure} At this point, GMAT knows about the functions described in the Mission Control Sequence, but has not yet constructed any of these functions. That step is performed when the mission is passed into the Sandbox and initialized. The basic initialization process is described in Chapters~\ref{chapter:TopLevel} and~\ref{chapter:Sandbox}. The process followed can be described in four stages: \begin{enumerate} \item \textbf{Population}: The objects in GMAT's configuration are cloned into the Sandbox Object Map, and the Mission Control Sequence is set. \item \textbf{Object Initialization}: The objects in the Sandbox Object Map are initialized. \item \textbf{Global Object Management}: Objects that have their isGlobal flag set are moved into the Global Object Store. \item \textbf{Command Initialization}: The Mission Control Sequence is initialized. \end{enumerate} Outside of the cloning process, the GMAT function objects are not affected by the first two of these steps. Figure~\ref{figure:InitializeControlFunctionChapter}\footnote{This figure needs some modification based on the text in the rest of this document.}, copied from Chapter~\ref{chapter:Sandbox}, shows the process followed in the fourth step to initialize the Mission Control Sequence. Before going into the details of Figure~\ref{figure:InitializeControlFunctionChapter}, I'll describe the activities performed in the first two steps. \paragraph{Initialization Step 1: Passing Objects to the Sandbox} The first step in initialization is cloning the objects in the configuration into the Sandbox. At the start of this step, the system status looks like Table~\ref{table:EngineStatusAtInitStart}. The Interpreter subsystem will not play a role in this part of the initialization process -- the Interpreters remain idle -- so I will remove that column for the time being in subsequent tables. 
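As a roadmap for those descriptions and the ones that follow, the four stages can be collected into a single hedged C++ sketch. The types and names used here (ObjBase, ObjectMap, GmatCommandSketch, PopulateSandbox, and so on) are simplified stand-ins invented for illustration, not GMAT's actual classes or method signatures; stages 1, 3, and 4 are expanded with the same stand-ins in later sketches.

\begin{quote}
\begin{verbatim}
#include <map>
#include <string>
#include <vector>

// Simplified stand-in for GmatBase: just enough interface for these sketches.
class ObjBase
{
public:
   virtual ~ObjBase() {}
   virtual ObjBase*    Clone() const = 0;
   virtual std::string GetName() const = 0;
   virtual bool        IsGlobal() const = 0;
};

class GmatCommandSketch;                            // a Mission Control Sequence node
typedef std::map<std::string, ObjBase*> ObjectMap;  // name -> object pointer

// The four stages; stage 2 is not expanded further in these sketches.
void PopulateSandbox(const std::vector<ObjBase*> &configuration, ObjectMap &som);
void InitializeObjects(ObjectMap &som);
void PromoteGlobals(ObjectMap &som, ObjectMap &gos);
bool InitializeSequence(GmatCommandSketch *sequence, ObjectMap *som, ObjectMap *gos);

// Hedged skeleton of the overall Sandbox initialization.
bool InitializeSandboxSketch(const std::vector<ObjBase*> &configuration,
                             GmatCommandSketch *missionSequence,
                             ObjectMap &sandboxObjectMap,
                             ObjectMap &globalObjectStore)
{
   PopulateSandbox(configuration, sandboxObjectMap);     // 1. Population
   InitializeObjects(sandboxObjectMap);                  // 2. Object Initialization
   PromoteGlobals(sandboxObjectMap, globalObjectStore);  // 3. Global Object Management
   return InitializeSequence(missionSequence,
                             &sandboxObjectMap,
                             &globalObjectStore);        // 4. Command Initialization
}
\end{verbatim}
\end{quote}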
One feature of GMAT's design that can be overlooked is that there is a separate Mission Control Sequence for each Sandbox, and there is a one-to-one relationship between the Mission Control Sequences and the Sandbox instances. What that means for this discussion is that the Mission Control Sequence shown in the table already belongs to the Sandbox shown there. The Mission Control Sequence is not cloned into the Sandbox\footnote{This relationship between the Mission Control Sequences and the array of Sandboxes is managed in the Moderator. The behavior described here is the default behavior, and is the behavior used in current implementations of GMAT. Implementations that use multiple Sandboxes -- particularly when used in a distributed manner -- will implement a different relationship between the Mission Control Sequence viewed by the user and the Mission Control Sequences in the Sandboxes.}. \begin{center} \tablecaption{\label{table:EngineStatusAtInitStart}Status Immediately Before Initialization Starts} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tablehead{ \hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multirow{2}{*}{\textbf{MCS}} & \multirow{2}{*}{\textbf{Interpreter Mode}} & \multirow{2}{*}{\textbf{Sandbox}} \\ \multicolumn{1}{|c}{\textit{Type}} & \multicolumn{1}{c|}{\textit{Name}} & & & \\ \hline } \tabletail{\hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Idle & Idle, Sandbox Object\\ ForceModel & Prop\_FModel & +-- Propagate & & Map is Empty \\ Propagator & Prop & +-- CallFunction & & \\ Variable & SMA & +-- Assignment & & \\ Variable & ECC & +-- Assignment & & \\ Variable & RAAN & +-- Assignment & & \\ Variable & r & +-- CallFunction & & \\ Variable & v & +-- Assignment & & \\ Variable & pi2 & +-- Assignment & & \\ Variable & mu & +-- Assignment & & \\ Variable & d2r & +-- CallFunction & & \\ Variable & Energy & +-- Assignment & & \\ Variable & SMAError & +-- If & & \\ Variable & ECCError & +--+-- Assignment & & \\ Variable & RAANError & +-- EndIf & & \\ Array & rv & +-- Assignment & & \\ Array & vv & +-- Assignment & & \\ Array & ev & +-- Assignment & & \\ Array & nv & +-- Report & & \\ ReportFile & Cart2KepConvert & EndWhile & & \\ GmatFunction & LoadCartState & & & \\ GmatFunction & cross & & & \\ GmatFunction & magnitude & & & \\ \end{supertabular} \end{center} The objects in the configuration, on the other hand, are contained in GMAT's engine, outside of the Sandbox. The Moderator accesses these configured objects by type, and passes each into the Sandbox for use in the mission. The Sandbox makes copies of these objects using the object's Clone() method. These clones are stored in the Sandbox Object Map. The clones contain identical data to the objects in the configuration; making clones at this stage preserves the user's settings on the configured objects while providing working copies that are used to run the mission. Table\ref{table:EngineStatusAfterConfigCloning} shows the status of the system after the Moderator has passed the objects into the Sandbox. 
The Sandbox Object Map is a mapping between a text name and a pointer to the associated object. Since the map is from the name to the object, the Sandbox Object Map in the table lists the name first. \begin{center} \tablecaption{\label{table:EngineStatusAfterConfigCloning}Status Immediately After Cloning into the Sandbox} \tablefirsthead{\hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multicolumn{3}{c|}{\textbf{Sandbox}} \\ \hline \multicolumn{1}{|c}{\textit{Scripted}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textit{Name}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{MCS}}} & \multicolumn{2}{c|}{\textbf{Sandbox Object Map}} \\ \cline{4-5} \multicolumn{1}{|c}{\textit{Type}}& & & \textit{Name} & \textit{Type} \\ \hline } \tablehead{ \hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Configuration}} & \multicolumn{3}{c|}{\textbf{Sandbox}} \\ \hline \multicolumn{1}{|c}{\textit{Scripted}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textit{Name}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{MCS}}} & \multicolumn{2}{c|}{\textbf{Sandbox Object Map}} \\ \cline{4-5} \multicolumn{1}{|c}{\textit{Type}}& & & \textit{Name} & \textit{Type} \\ \hline } \tabletail{\hline \multicolumn{5}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|l l|l|l|l|} Spacecraft & Sat & While & Sat & Spacecraft*\\ ForceModel & Prop\_FModel & +-- Propagate & Prop\_FModel & ForceModel* \\ Propagator & Prop & +-- CallFunction & Prop & PropSetup*\\ Variable & SMA & +-- Assignment & SMA & Variable* \\ Variable & ECC & +-- Assignment & ECC & Variable* \\ Variable & RAAN & +-- Assignment & RAAN & Variable* \\ Variable & r & +-- CallFunction & r & Variable* \\ Variable & v & +-- Assignment & v & Variable* \\ Variable & pi2 & +-- Assignment & pi2 & Variable* \\ Variable & mu & +-- Assignment & mu & Variable* \\ Variable & d2r & +-- CallFunction & d2r & Variable* \\ Variable & Energy & +-- Assignment & Energy & Variable* \\ Variable & SMAError & +-- If & SMAError & Variable* \\ Variable & ECCError & +--+-- Assignment & ECCError & Variable* \\ Variable & RAANError & +-- EndIf & RAANError & Variable* \\ Array & rv & +-- Assignment & rv & Array* \\ Array & vv & +-- Assignment & vv & Array* \\ Array & ev & +-- Assignment & ev & Array* \\ Array & nv & +-- Report & nv & Array* \\ ReportFile & Cart2KepConvert & EndWhile & Cart2KepConvert & ReportFile* \\ GmatFunction & LoadCartState & & LoadCartState & GmatFunction* \\ GmatFunction & cross & & cross & GmatFunction* \\ GmatFunction & magnitude & & magnitude & GmatFunction* \\ CoordinateSystem\footnote & EarthMJ2000Eq & & EarthMJ2000Eq & CoordinateSystem* \\ CoordinateSystem & EarthMJ2000Ec & & EarthMJ2000Ec & CoordinateSystem* \\ CoordinateSystem & EarthFixed & & EarthFixed & CoordinateSystem* \\ \end{supertabular} \end{center} \footnotetext{The 3 coordinate systems listed at the end of the configuration table are automatically created by the Moderator.} Once the Moderator has passed the Configuration into the Sandbox, the mission run no longer depends on the Configuration. For that reason, most of the tables shown in the rest of this document will not include a list of the contents of the configuration. If needed, the Configuration will be displayed separately. 
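Using the stand-in types from the skeleton sketch above, the cloning just described can be pictured as follows (again illustrative only, not the actual Sandbox code):

\begin{quote}
\begin{verbatim}
// Stage 1 (Population): clone each configured object into the Sandbox Object
// Map, keyed on the object name.
void PopulateSandbox(const std::vector<ObjBase*> &configuration,
                     ObjectMap &sandboxObjectMap)
{
   for (std::vector<ObjBase*>::const_iterator obj = configuration.begin();
        obj != configuration.end(); ++obj)
   {
      sandboxObjectMap[(*obj)->GetName()] = (*obj)->Clone();
   }
}
\end{verbatim}
\end{quote}

\noindent Only these clones (and, after Step 3, the entries in the Global Object Store) are used during the run; the configured originals remain untouched.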
\paragraph{Initialization Step 2: Object Initialization}

Now that the Sandbox has been populated with the configured objects and the Mission Control Sequence, the Moderator can pass control to the Sandbox to continue the initialization process. This handoff is made through a call to the Sandbox::Initialize() method. The Sandbox initializes objects in the following order:

\begin{enumerate}
\item CoordinateSystem
\item Spacecraft
\item All others except Parameters and Subscribers
\item System Parameters
\item Other Parameters
\item Subscribers
\end{enumerate}

\noindent The initialization of these objects follows this basic algorithm:

\begin{itemize}
\item Send the Sandbox's solar system to the object
\item Set pointers for all of the objects referenced by this object
\item Call the object's Initialize() method
\end{itemize}

\noindent The basic initialization for Function objects is part of element 3 in the list above. At that point in the initialization process, the Function objects are not yet populated, so this step does not perform any substantive action. The Sandbox checks each GmatFunction to ensure that the path to the function file is not empty as part of this initialization.

\paragraph{\label{section:GlobalObjectManagement}Initialization Step 3: Global Object Management}

Once the objects in the Sandbox Object Map are initialized, the objects flagged as global objects are moved from the Sandbox Object Map into the Global Object Store. The Sandbox does this by checking the object's isGlobal flag, a new attribute of the GmatBase class added for global object management. Some object types are automatically marked as global objects. All instances of the PropSetup class, Function classes, and coordinate system classes fall into this category, and are built with the isGlobal flag set. (A sketch of this promotion step follows the Step 4 description below.)

\paragraph{Initialization Step 4: Control Sequence Initialization}

The final step in Sandbox initialization is initialization of the Mission Control Sequence. This step includes construction of the Function Control Sequences, and performs the first portion of the initialization that is needed before a Function Control Sequence can be executed.

At this stage in the initialization process, the Global Object Store contains clones of all of the configured objects marked as globals, the Sandbox Object Map contains clones of all other configured objects, and the GmatFunction objects know the locations of the function files. The Function Control Sequences are all empty, and the system has not identified any functions called from inside of functions that are not also called in the Mission Control Sequence. The objects in the Sandbox Object Map have the connections to referenced objects set, and are ready for use in the Mission Control Sequence.
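As promised above, the promotion performed in Initialization Step 3 can be sketched with the same stand-in types introduced earlier (the function name PromoteGlobals is illustrative, not GMAT's actual method):

\begin{quote}
\begin{verbatim}
// Stage 3 (Global Object Management): move every object whose isGlobal flag
// is set from the Sandbox Object Map into the Global Object Store.
void PromoteGlobals(ObjectMap &sandboxObjectMap, ObjectMap &globalObjectStore)
{
   ObjectMap::iterator entry = sandboxObjectMap.begin();
   while (entry != sandboxObjectMap.end())
   {
      if (entry->second->IsGlobal())
      {
         globalObjectStore[entry->first] = entry->second;
         sandboxObjectMap.erase(entry++);   // erase invalidates only this entry
      }
      else
         ++entry;
   }
}
\end{verbatim}
\end{quote}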
So far, we have encountered three GmatFunctions, shown in Table~\ref{table:FunctionStatusCSInitStart} with their data structures: \begin{center} \tablecaption{\label{table:FunctionStatusCSInitStart}GmatFunction Status at the Start of Control Sequence Initialization} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.7in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ % \hline % %\color{blue} % \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & % \multicolumn{1}{m{0.8in}|}{\textbf{Function Object Store}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & % \multicolumn{1}{m{0.7in}|}{\textbf{inputs}} & % \multicolumn{1}{m{0.7in}|}{\textbf{outputs}} & % \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ % %\color{black} % \hline } \tabletail{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|m{0.9in}|m{0.8in}|m{0.7in}|m{0.7in}|m{0.7in}|m{0.8in}|m{0.7in}|} LoadCartState & empty & NULL & empty & empty & empty & empty\\ \hline cross & empty & NULL & empty & empty & empty & empty\\ \hline magnitude & empty & NULL & empty & empty & empty & empty\\ \hline \end{supertabular} \end{center} \noindent As we will see, the call stack, implemented as the ``objectStack'' attribute in the GmatFunction class, remains empty throughout the initialization process. The Sandbox initialized the Mission Control Sequence by walking through the list of commands in the sequence, and performing the following tasks on each: \begin{itemize} \item Send the pointers to the Sandbox Object Map and the Global Object Map to the command \item Set the solar system pointer for the command \item Set the transient force vector for the command \item If the command uses a GmatFunction, build that function as described below \item Call the command's Initialize() method \end{itemize} \noindent In order to see how these actions work with GmatFunctions, we'll continue walking through the sample script. For clarity's sake, it is useful to have a complete picture of the contents of the Mission Control Sequence. 
The Mission Control Sequence, listed by node type and script line, and numbered for reference, can be written like this: \begin{center} \tablefirsthead{} \tablehead{} \tabletail{} \tablelasttail{} \begin{supertabular}{lllp{4in}} \begin{footnotesize}1\end{footnotesize} & While & & \texttt{While Sat.ElapsedDays < 1}\\ \begin{footnotesize}2\end{footnotesize} & Propagate & & \texttt{\ \ \ Propagate Prop(Sat)}\\ \begin{footnotesize}3\end{footnotesize} & CallFunction & & \texttt{\ \ \ [rv, vv, r, v] = LoadCartState(Sat);}\\ \begin{footnotesize}4\end{footnotesize} & Assignment & & \texttt{\ \ \ Energy = v\^{}2/2 - mu/r;}\\ \begin{footnotesize}5\end{footnotesize} & Assignment & & \texttt{\ \ \ SMA = -mu/2/Energy;}\\ \begin{footnotesize}6\end{footnotesize} & Assignment & & \texttt{\ \ \ ev = cross(vv,cross(rv, vv))/mu - rv/r;}\\ \begin{footnotesize}7\end{footnotesize} & CallFunction & & \texttt{\ \ \ [ECC] = magnitude(ev);}\\ \begin{footnotesize}8\end{footnotesize} & Assignment & & \texttt{\ \ \ nv(1,1) = x*vz-z*vx;}\\ \begin{footnotesize}9\end{footnotesize} & Assignment & & \texttt{\ \ \ nv(2,1) = y*vz-z*vy;}\\ \begin{footnotesize}10\end{footnotesize} & Assignment & & \texttt{\ \ \ nv(3,1) = 0;}\\ \begin{footnotesize}11\end{footnotesize} & CallFunction & & \texttt{\ \ \ [n] = magnitude(nv);}\\ \begin{footnotesize}12\end{footnotesize} & Assignment & & \texttt{\ \ \ RAAN = acos( nv(1,1)/n );}\\ \begin{footnotesize}13\end{footnotesize} & If & & \texttt{\ \ \ If nv(2,1) < 0;}\\ \begin{footnotesize}14\end{footnotesize} & Assignment & & \texttt{\ \ \ \ \ \ RAAN = (pi2 - RAAN) / d2r;}\\ \begin{footnotesize}15\end{footnotesize} & EndIf & & \texttt{\ \ \ EndIf;}\\ \begin{footnotesize}16\end{footnotesize} & Assignment & & \texttt{\ \ \ SMAError = Sat.SMA - SMA;}\\ \begin{footnotesize}17\end{footnotesize} & Assignment & & \texttt{\ \ \ ECCError = Sat.ECC - ECC;}\\ \begin{footnotesize}18\end{footnotesize} & Assignment & & \texttt{\ \ \ RAANError = Sat.RAAN - RAAN;}\\ \begin{footnotesize}19\end{footnotesize} & Report & & \texttt{\ \ \ Report Cart2KepConvert Sat.SMA SMA SMAError ...}\\ & & & \ \ \ \ \ \ \ \ \texttt{Sat.ECC ECC ECCError Sat.RAAN RAAN RAANError}\\ \begin{footnotesize}20\end{footnotesize} & EndWhile & & \texttt{EndWhile}\\ \end{supertabular} \end{center} \noindent (The line of script associated with each node is shown on the right in this list.) 
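With the sequence laid out, the per-command initialization tasks listed earlier can also be restated as a sketch using the same stand-ins. The abbreviated GmatCommandSketch interface below is hypothetical (the real GmatCommand API is richer), and the solar system and transient force hand-offs are reduced to a comment:

\begin{quote}
\begin{verbatim}
// Abbreviated, hypothetical command interface for these sketches only.
class GmatCommandSketch
{
public:
   virtual ~GmatCommandSketch() {}
   virtual void SetObjectMap(ObjectMap *map) = 0;
   virtual void SetGlobalObjectMap(ObjectMap *map) = 0;
   virtual bool ReferencesGmatFunction() const = 0;
   virtual void BuildFunction() = 0;   // build the Function Control Sequence
   virtual bool Initialize() = 0;
   virtual GmatCommandSketch* GetNext() = 0;
};

// Stage 4 (Command Initialization): walk the Mission Control Sequence,
// wiring each command to the object stores and building any GmatFunction
// it references before calling Initialize().
bool InitializeSequence(GmatCommandSketch *sequence, ObjectMap *som, ObjectMap *gos)
{
   for (GmatCommandSketch *cmd = sequence; cmd != 0; cmd = cmd->GetNext())
   {
      cmd->SetObjectMap(som);
      cmd->SetGlobalObjectMap(gos);
      // The real Sandbox also passes the solar system pointer and the
      // transient force vector to each command at this point.

      if (cmd->ReferencesGmatFunction())
         cmd->BuildFunction();

      if (!cmd->Initialize())
         return false;
   }
   return true;
}
\end{verbatim}
\end{quote}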
At the start of the Mission Control Sequence initialization, the Sandbox Object Map and Global Object Store contain the following items: \pagebreak \begin{center} \tablecaption{\label{table:CSInitStartSandboxMaps}The Sandbox Maps} \tablefirsthead{ \hline \multicolumn{2}{|c|}{\textbf{Sandbox Object Map}} & \multicolumn{2}{c|}{\textbf{Global Object Store}} \\ \hline \multicolumn{1}{|c}{\textit{Name}} & \multicolumn{1}{c|}{\textit{Type}} & \multicolumn{1}{c}{\textit{Name}} & \multicolumn{1}{c|}{\textit{Type}}\\ \hline } \tablehead{ \hline\multicolumn{4}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline \multicolumn{2}{|c|}{\textbf{Sandbox Object Map}} & \multicolumn{2}{c|}{\textbf{Global Object Store}} \\ \hline \multicolumn{1}{|c}{\textit{Name}} & \multicolumn{1}{c|}{\textit{Type}} & \multicolumn{1}{c}{\textit{Name}} & \multicolumn{1}{c|}{\textit{Type}}\\ \hline } \tabletail{\hline\multicolumn{4}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline} \tablelasttail{\hline} \begin{supertabular}{|p{1.4in}p{1.4in}|p{1.4in}p{1.4in}|} Sat & Spacecraft* & Prop & PropSetup* \\ Prop\_FModel & ForceModel* & EarthMJ2000Eq & CoordinateSystem* \\ Prop & PropSetup* & EarthMJ2000Ec & CoordinateSystem*\\ SMA & Variable* & EarthFixed & CoordinateSystem*\\ ECC & Variable* & LoadCartState & GmatFunction*\\ RAAN & Variable* & cross & GmatFunction*\\ r & Variable* & magnitude & GmatFunction*\\ v & Variable* & & \\ pi2 & Variable* & & \\ mu & Variable* & & \\ d2r & Variable* & & \\ Energy & Variable* & & \\ SMAError & Variable* & & \\ ECCError & Variable* & & \\ RAANError & Variable* & & \\ rv & Array* & & \\ vv & Array* & & \\ ev & Array* & & \\ nv & Array* & & \\ Cart2KepConvert & ReportFile* & & \\ LoadCartState & GmatFunction* & & \\ cross & GmatFunction* & & \\ magnitude & GmatFunction* & & \\ EarthMJ2000Eq & CoordinateSystem* & & \\ EarthMJ2000Ec & CoordinateSystem* & & \\ EarthFixed & CoordinateSystem* & & \\ \end{supertabular} \end{center} \noindent These maps stay the same until either a Global command is encountered or a Create command is encountered that creates an object that is automatically global. The steps listed above for command initialization are performed for the first two commands in the list, items 1 and 2, without changing any of the object maps or function attributes. Item 3: \begin{quote} \begin{verbatim} [rv, vv, r, v] = LoadCartState(Sat); \end{verbatim} \end{quote} \noindent is a CallFunction that initializes a GmatCommand, so we need to look more closely at the initialization for this line. The CallFunction at this point has a FunctionManager which contains the name of a GmatFunction object and StringArrays for the inputs and outputs. The StringArrays contain the following data: \begin{quote} \begin{verbatim} inputNames = {"Sat"} outputNames = {"rv", "vv", "r", "v"} \end{verbatim} \end{quote} The Sandbox passes the pointers for the Sandbox Object Map and the Global Object Store to the CallFunction command. Once the CallFunction has received the Global Object Store, it uses that mapping to locate the function needed by the Function Manager, and passes the pointer to that function into the FunctionManager. The FunctionManager determines the type of the function -- in this example, the function is a GmatFunction. The function attributes at this point are shown in Table~\ref{table:GMFPostGOSSOM}. 
\pagebreak \begin{center} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.7in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.7in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \tablecaption{\label{table:GMFPostGOSSOM}GmatFunction Status after Setting the GOS and SOM} \begin{supertabular}{|m{0.9in}|m{0.8in}|m{0.7in}|m{0.7in}|m{0.7in}|m{0.8in}|m{0.7in}|} \hline LoadCartState & empty & Set & empty & empty & empty & empty\\\hline cross & empty & NULL & empty & empty & empty & empty\\\hline magnitude & empty & NULL & empty & empty & empty & empty\\\hline \end{supertabular} \end{center} The Sandbox then passes in the pointers to the solar system and transient force vector, which the CallFunction passes into the FunctionManager. Since the function in the FunctionManager is a GmatFunction, these pointers will be needed later in initialization and execution, so the FunctionManager passes these pointers into the function for later use. (If the function in the FunctionManger was not a GmatFunction, the pointers would have been discarded.) At this point, all of the items needed to build the Function Control Sequence exist. The Sandbox retrieves the pointer for the GmatFunction from the CallFunction command. It checks to see if the function's Function Control Sequence has been built. If the Function Control Sequence is NULL, the Sandbox calls the Moderator::InterpretGmatFunction() method to construct the Function Control Sequence, which in turn calls the ScriptInterpreter::InterpretGmatFunction() method. Both of these calls take the function pointer as input arguments, so that the interpreter has the local Sandbox instance of the GmatFunction that it uses to build the Function Control Sequence. The ScriptInterpreter::InterpretGmatFunction() method builds the Function Control Sequence and returns it, through the Moderator, to the Sandbox. The LoadCartState GmatFunction that is constructed here is built from this scripting: \begin{quote} \begin{verbatim} function [rv, vv, r, v] = LoadCartState(Sat); % This function fills some arrays and variables with % Cartesian state data Create Variable r v Create Array rv[3,1] vv[3,1] rv(1,1) = Sat.X; rv(1,2) = Sat.Y; rv(1,3) = Sat.Z; vv(1,1) = Sat.VX; vv(1,2) = Sat.VY; vv(1,3) = Sat.VZ; [r] = magnitude(rv); [v] = magnitude(vv); \end{verbatim} \end{quote} The process followed in the ScriptInterpreter::InterpretGmatFunction() method will be described below. Upon return from this function call, the functions contain the attributes shown in Table~\ref{table:GMFPostLoadCartState}. 
\pagebreak \begin{center} \tablecaption{\label{table:GMFPostLoadCartState}GmatFunction Status after Building the LoadCartState FCS} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.7in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.75in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ % \hline % %\color{blue} % \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Function Object Store}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & % \multicolumn{1}{m{0.75in}|}{\textbf{inputs}} & % \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & % \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ % %\color{black} % \hline } \tabletail{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.7in}|p{0.7in}|p{0.75in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\\hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \end{supertabular} \end{center} The Sandbox then checks the Function Control Sequence generated in the ScriptInterpreter, and checks to see if that sequence contains a GmatFunction. If it does, then for each GmatFunction encountered, the process is repeated. The Sandbox checks the Function Control Sequence by starting at the first node, and checking each Assignment and CallFunction command in that control sequence to see if it references a GmatFunction. Our example script does contain such a call to a GmatFunction -- it calls the magnitude function twice, in the last two CallFunction commands in the Function Control Sequence. 
Each of the FunctionManagers associated with these CallFunction commands has StringArrays containing the names of the input and output objects that will be used during execution -- more specifically, the FunctionManager associated with the first CallFunction has these StringArrays:

\begin{quote}
\begin{verbatim}
inputNames  = {"rv"}
outputNames = {"r"}
\end{verbatim}
\end{quote}

\noindent while the second has these:

\begin{quote}
\begin{verbatim}
inputNames  = {"vv"}
outputNames = {"v"}
\end{verbatim}
\end{quote}

When the Sandbox detects the GmatFunction in the first CallFunction command, it performs the same tasks as were performed on the CallFunction in the Mission Control Sequence -- more specifically:

\begin{enumerate}
\item The Sandbox passes the pointer for the Global Object Store to the CallFunction command. (Note that the Sandbox does not pass in the Sandbox Object Map; the Sandbox Object Map is only used in commands in the Mission Control Sequence.)
\item Once the CallFunction has received the Global Object Store, it uses that mapping to locate the function needed by the FunctionManager.
\begin{itemize}
\item If the function was found, the CallFunction passes the pointer to that function into the FunctionManager.
\item If the function was not found, the pointer referenced by the FunctionManager remains NULL.
\end{itemize}
\item The FunctionManager determines the type of the function. If the function is not a GmatFunction, the process ends.
\item The Sandbox passes the pointers to the solar system and transient force vector to the CallFunction, which passes them into the FunctionManager.
\item The FunctionManager passes these pointers into the function for later use.
\end{enumerate}

At this point, all of the items needed to build the nested Function Control Sequence exist. Returning to our example, the state of the function object attributes at this point is shown in Table~\ref{table:GMFFirstNestedFunction}.
\begin{center} \tablecaption{\label{table:GMFFirstNestedFunction}GmatFunction Status after Detecting the First Nested CallFunction} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.7in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.75in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ % \hline % %\color{blue} % \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Function Object Store}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Global Object Store}} & % \multicolumn{1}{m{0.75in}|}{\textbf{inputs}} & % \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & % \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & % \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ % %\color{black} % \hline } \tabletail{ % \hline % \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.7in}|p{0.7in}|p{0.75in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\\hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \end{supertabular} \end{center} The Sandbox then calls the Moderator::InterpretGmatFunction() method to build the Function Control Sequence for the magnitude command. The magnitude function is scripted like this: \begin{quote} \begin{verbatim} function [val] = magnitude(vec1) % This function takes a 3-vector in a GMAT array and % calculates its magnitude Create Variable val val = sqrt(dot(vec1, vec1)); \end{verbatim} \end{quote} \noindent so the resulting Function Control Sequence and other attributes have the values shown in Table~\ref{table:GMFFirstNestedFunctionBuilt} when the Moderator returns control to the Sandbox. 
\begin{center} \tablecaption{\label{table:GMFFirstNestedFunctionBuilt}GmatFunction Status after Parsing the First Nested CallFunction} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.65in}|p{0.62in}|p{0.83in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'vec1'->NULL\end{small} & \begin{small}'val'->NULL\end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \end{supertabular} \end{center} The Assignment command in the newly created Function Control Sequence is particularly interesting, because it contains inline mathematics, which use a previously unencountered GmatFunction named dot. The MathTree for this Assignment command is shown in Figure~\ref{figure:DotFunMathTree}. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.5]{Images/DotMathTreeExample.eps} \caption{\label{figure:DotFunMathTree}The MathTree for the Assignment command in the magnitude GmatFunction} \end{center} \end{figure} Note that while the dot GmatFunction has been identified as a needed element for the Assignment line, there is not yet an instance of a GmatFunction object that is associated with the dot function, even though the MathTree shown in Figure~\ref{figure:DotFunMathTree} has a FunctionRunner MathNode that requires it. This issue will be resolved shortly. The Sandbox takes this new Function Control Sequence, and checks it for the presence of a GmatFunction by walking through the list of commands in the control sequence. When it checks the Assignment command, it finds that there is a function dependency, and that the associated function does not exist in the Global Object Store. 
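The per-command check described here can be sketched as follows. This is a hedged outline, not GMAT source code: the helper names GetFunctionNames() and CreateMissingGmatFunction() are assumptions introduced only to show the flow, and the branch taken when the function is missing is described in the next paragraph.

\begin{quote}
\begin{verbatim}
// Hedged sketch of the check the Sandbox performs on each command
// in a Function Control Sequence.  GetFunctionNames() and
// CreateMissingGmatFunction() are assumed helpers, used here only
// to illustrate the flow described in the text.
GmatCommand *cmd = fcs;                 // first node of the sequence
while (cmd != NULL)
{
   if (cmd->IsOfType("CallFunction") || cmd->IsOfType("Assignment"))
   {
      // For an Assignment, these names come from the FunctionRunner
      // nodes in its MathTree
      StringArray names = cmd->GetFunctionNames();

      for (unsigned int i = 0; i < names.size(); ++i)
      {
         if (globalObjectStore.find(names[i]) ==
             globalObjectStore.end())
         {
            // Not in the Global Object Store: assume it is a
            // GmatFunction and build it (see the next paragraph)
            CreateMissingGmatFunction(names[i]);
         }
         // If the referenced GmatFunction has no Function Control
         // Sequence yet, interpret it and repeat this check on the
         // resulting sequence
      }
   }
   cmd = cmd->GetNext();
}
\end{verbatim}
\end{quote}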
Since all function types except for GmatFunctions must be created before they can be used, the Sandbox assumes that the needed function is a GmatFunction and asks the Moderator to create an unnamed GmatFunction\footnote{The GmatFunction is unnamed so that it will not be passed to the configuration.}. The Moderator calls the Factory Manager to create the function, and returns the pointer of the new function to the Sandbox. The Sandbox then sets its name to be ``dot'' and adds it to the Global Object Store. The Sandbox also performs the preinitialization steps described above: it sets the solar system pointer and transient force vector pointer on the function, sets any pointers referenced by the function, and calls the function's Initialize() method. Finally, the Sandbox calls the Moderator to locate the function file for the GmatFunction and sets the path to the file, completing this piece of the initialization. The Sandbox then passes the function pointer to the Assignment command, which passes it, in turn, into the FunctionRunner node. At this point, the Sandbox can continue initializing the Assignment command. The GmatFunction data is set as shown in Table~\ref{table:GMFdotFunctionDiscovered}. \begin{center} \tablecaption{\label{table:GMFdotFunctionDiscovered}GmatFunction Status after Creating the dot Function} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.65in}|p{0.62in}|p{0.83in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'vec1'->NULL\end{small} & \begin{small}'val'->NULL\end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}dot\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & 
\begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \end{supertabular} \end{center} Recall that we are at the point in the initialization where the Sandbox is checking the Function Control Sequence for the magnitude GmatFunction for internal function calls. The Sandbox found the dot function as an internal dependency, and built the corresponding GmatFunction. The final step performed by the Sandbox at this point is to build the Function Control Sequence for the dot command. The text of the dot file looks like this: \begin{quote} \begin{verbatim} function [val] = dot(vec1, vec2) % This function takes two 3-vectors in a GMAT array and % constructs their dot product Create Variable val val = vec1(1,1) * vec2(1,1) + vec1(2,1) * vec2(2,1) +... vec1(3,1) * vec2(3,1); \end{verbatim} \end{quote} The Sandbox calls the Moderator::InterpretGmatFunction() method to build the control sequence for the dot function. Upon return, the function attribute table has the contents shown in Table~\ref{table:GMFdotFunctionBuilt}. \begin{center} \tablecaption{\label{table:GMFdotFunctionBuilt}GmatFunction Status after Interpreting the dot Function} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.65in}|p{0.62in}|p{0.83in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}NULL\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'vec1'->NULL\end{small} & \begin{small}'val'->NULL\end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}dot\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small} 'vec1'->NULL 'vec2'->NULL \end{small} & \begin{small} 'val'->NULL \end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline 
\end{supertabular}
\end{center}

The Sandbox takes the Function Control Sequence built for the dot function and checks it for internal function calls. There is an Assignment command in this control sequence that references inline mathematics, but the corresponding MathTree does not contain any functions. The initialization for the dot function is therefore complete, and the method that built it returns control to its caller.

That caller is the same method that performed the check for the first CallFunction: the call was recursive, made partway through the check of the Function Control Sequence for the magnitude function, when the Assignment command in that function was found to need the dot function. The check of that Assignment command has now built all of the functions it needs, so control returns to the method that was checking the magnitude function. That check, in turn, was started from the check of the first CallFunction in the LoadCartState Function Control Sequence. All of the function references in that CallFunction have now been resolved and initialized, so the function check method moves on to the second CallFunction.

That second CallFunction also calls the magnitude function. The check detects the GmatFunction reference in the call, finds that the magnitude function has already been initialized following the procedures discussed above, and proceeds to the next command in the LoadCartState Function Control Sequence. Since this second CallFunction was the last command in that sequence, the LoadCartState Function Control Sequence is now fully initialized and ready to execute.

We have now initialized everything except the cross function. The Sandbox is partway through the check on the Mission Control Sequence for function calls -- all of the preceding GmatFunction initialization was performed to fully initialize the CallFunction command in the Mission Control Sequence. The next function encountered in the main script is in the third Assignment command. That command was generated by the script line

\begin{quote}
\begin{verbatim}
ev = cross(vv, cross(rv, vv)) / mu - rv / r;
\end{verbatim}
\end{quote}

\noindent When the Sandbox checks that line, it finds that there are two FunctionRunner nodes in the associated MathTree. The first of these nodes requires an initialized cross function, so the Sandbox follows the process described above to build the Function Control Sequence for the cross function. Once this first node has been handled by the Sandbox, the function attribute table looks like Table~\ref{table:GMFcrossFunctionBuilt}.
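Before looking at that table, note that detecting these references amounts to a short recursive walk over the MathTree attached to the Assignment command. The sketch below is illustrative only; the MathNode accessor names GetLeft(), GetRight(), and GetFunctionName() are assumptions made for this illustration.

\begin{quote}
\begin{verbatim}
// Illustrative sketch: collect the names of functions referenced by
// FunctionRunner nodes in a MathTree.  GetLeft(), GetRight(), and
// GetFunctionName() are assumed accessors.
void CollectFunctionNames(MathNode *node, StringArray &names)
{
   if (node == NULL)
      return;

   if (node->IsOfType("FunctionRunner"))
      names.push_back(node->GetFunctionName());

   CollectFunctionNames(node->GetLeft(), names);
   CollectFunctionNames(node->GetRight(), names);
}
\end{verbatim}
\end{quote}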
\begin{center} \tablecaption{\label{table:GMFcrossFunctionBuilt}GmatFunction Status after Interpreting the cross Function} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|p{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|m{0.9in}|}{\textbf{Function}} & \multicolumn{1}{m{0.65in}|}{\textbf{Function Object Store}} & \multicolumn{1}{m{0.62in}|}{\textbf{Global Object Store}} & \multicolumn{1}{m{0.83in}|}{\textbf{inputs}} & \multicolumn{1}{m{0.75in}|}{\textbf{outputs}} & \multicolumn{1}{m{0.8in}|}{\textbf{Function Control Sequence}} & \multicolumn{1}{m{0.7in}|}{\textbf{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.65in}|p{0.62in}|p{0.83in}|p{0.75in}|p{0.8in}|p{0.7in}|} \hline \begin{small}LoadCartState\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'Sat'->NULL\end{small} & \begin{small}'rv'->NULL 'vv'->NULL 'r'->NULL 'v'->NULL\end{small} & \begin{small}Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction\end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}cross\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small} 'vec1'->NULL 'vec2'->NULL \end{small} & \begin{small} 'vec3'->NULL \end{small} & \begin{small} Create Assignment Assignment Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}magnitude\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small}'vec1'->NULL\end{small} & \begin{small}'val'->NULL\end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \begin{small}dot\end{small} & \begin{small}empty\end{small} & \begin{small}Set\end{small} & \begin{small} 'vec1'->NULL 'vec2'->NULL \end{small} & \begin{small} 'val'->NULL \end{small} & \begin{small} Create Assignment \end{small} & \begin{small}empty\end{small}\\ \hline \end{supertabular} \end{center} The Sandbox then checks the second FunctionRunner node, and finds that it uses a function that has already been built -- the cross function -- so no further action is necessary for this Assignment command. It moves to the next command in the Mission Control Sequence, and finds that that command -- a CallFunction that uses the magnitude GmatFunction -- is also ready to execute. This process continues through all of the remaining commands in the Mission Control Sequence. All of the commands and called functions have been initialized, so the commands and functions used in the Sandbox have now been fully prepared for the mission run. \paragraph{Additional Notes on Initialization} \subparagraph{Function and FunctionManager Status Summary} The scripting in our example generates seven specific places where a FunctionManager interface is built in order to implement the structure needed to run a GmatFunction. 
Table~\ref{table:GMFFunctionSummary} shows each of these interfaces, along with the string descriptors that are set in the interface tables for each of these instances. The actual data structures that contain the input and output objects are not set during initialization; they are built the first time the function is called during execution of the Mission Control Sequence. That process is described in the execution section of this text. \begin{center} \tablecaption{\label{table:GMFFunctionSummary}Summary of the Function Interfaces} \tablefirsthead{ \hline %\color{blue} \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Script Line}}} & \multicolumn{1}{c|}{\multirow{2}{0.5in}{\textbf{Interface Type}}} & \multicolumn{2}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function}} \\ \cline{3-7} & & \multicolumn{1}{c|}{\textit{inputNames}} & \multicolumn{1}{c|}{\textit{outputNames}} & \multicolumn{1}{c|}{\textit{name}} & \multicolumn{1}{c|}{\textit{inputs}} & \multicolumn{1}{c|}{\textit{outputs}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Script Line}}} & \multicolumn{1}{c|}{\multirow{2}{0.5in}{\textbf{Interface Type}}} & \multicolumn{2}{c|}{\textbf{Function Manager}} & \multicolumn{3}{c|}{\textbf{Function}} \\ \cline{3-7} & & \multicolumn{1}{c|}{\textit{inputNames}} & \multicolumn{1}{c|}{\textit{outputNames}} & \multicolumn{1}{c|}{\textit{name}} & \multicolumn{1}{c|}{\textit{inputs}} & \multicolumn{1}{c|}{\textit{outputs}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{7}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.9in}|p{0.5in}|p{0.62in}|p{0.73in}|p{0.65in}|p{0.8in}|p{0.8in}|} \hline \begin{small} [rv, vv, r, v] = LoadCartState( Sat) \end{small} & \begin{small} Call\-Function \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} Load\-Cart\-State \end{small} & \begin{small} 'Sat'->NULL \end{small} & \begin{small} 'rv'->NULL 'vv'-> NULL 'r'->NULL 'v'->NULL \end{small}\\ \hline % Second row \begin{small} % Script line ev = cross(vv, cross(rv, vv)) / mu - rv / r; \end{small} & \begin{small} % Interface Type Function\-Runner (Two instances) \end{small} & \begin{small} % inputNames 'rv' 'vv' \end{small} & \begin{small} % outputNames '' \end{small} & \begin{small} % name cross (inner instance) \end{small} & \begin{small} % inputs 'vec1'-> NULL 'vec2'-> NULL \end{small} & \begin{small} % outputs 'vec3'-> NULL \end{small}\\ \cline{3-7} & & \begin{small} % inputNames 'vv' '' \end{small} & \begin{small} % outputNames '' \end{small} & \begin{small} % name cross (outer instance) \end{small} & \begin{small} % inputs 'vec1'-> NULL 'vec2'-> NULL \end{small} & \begin{small} % outputs 'vec3'-> NULL \end{small}\\ \hline \begin{small} % Script line [ECC] = magnitude(ev) \end{small} & \begin{small} % Interface Type Call\-Function \end{small} & \begin{small} % inputNames 'ev' \end{small} & \begin{small} % outputNames 'ECC' \end{small} & \begin{small} % name magnitude \end{small} & \begin{small} % inputs 'vec1'-> NULL \end{small} & \begin{small} % outputs 'val'-> NULL \end{small}\\ \hline \begin{small} % Script line [n] = magnitude(nv) \end{small} & \begin{small} % Interface Type Call\-Function \end{small} & \begin{small} % inputNames 'nv' \end{small} & \begin{small} % outputNames 'n' \end{small} & \begin{small} % 
name magnitude \end{small} & \begin{small} % inputs 'vec1'-> NULL \end{small} & \begin{small} % outputs 'val'-> NULL \end{small}\\ \hline \begin{small} % Script line [r] = magnitude(rv) \end{small} & \begin{small} % Interface Type Call\-Function \end{small} & \begin{small} % inputNames 'rv' \end{small} & \begin{small} % outputNames 'r' \end{small} & \begin{small} % name magnitude \end{small} & \begin{small} % inputs 'vec1'-> NULL \end{small} & \begin{small} % outputs 'val'-> NULL \end{small}\\ \hline \begin{small} % Script line [v] = magnitude(vv) \end{small} & \begin{small} % Interface Type Call\-Function \end{small} & \begin{small} % inputNames 'vv' \end{small} & \begin{small} % outputNames 'v' \end{small} & \begin{small} % name magnitude \end{small} & \begin{small} % inputs 'vec1'-> NULL \end{small} & \begin{small} % outputs 'val'-> NULL \end{small}\\ \hline \begin{small} % Script line val = sqrt(dot( vec1, vec1)); \end{small} & \begin{small} % Interface Type Function\-Runner \end{small} & \begin{small} % inputNames 'vec1' 'vec1' \end{small} & \begin{small} % outputNames '' \end{small} & \begin{small} % name dot \end{small} & \begin{small} % inputs 'vec1'-> NULL 'vec2'-> NULL \end{small} & \begin{small} % outputs 'val'-> NULL \end{small}\\ \end{supertabular} \end{center} Before we examine execution, a few items should be mentioned about the work performed in the ScriptInterpreter when the InterpretGmatFunction() method is invoked. \subparagraph{Details of the ScriptInterpreter::InterpretGmatFunction() Method} The Interpreter::Interpret\-GmatFunction()\footnote{While this method is most naturally assigned to the ScriptInterpreter -- since it is interpreting a text file describing the function -- the method itself is found in the Interpreter base class.} method is very similar to the ScriptInterpreter::Interpret() method. The differences arise in the Interpreter state, the parsing for the function line in the function file, and the management of the commands created during the parsing of the function file. The InterpretGmatFunction() method has this signature: \begin{quote} \begin{verbatim} GmatCommand* Interpreter::InterpretGmatFunction(Function *funct) \end{verbatim} \end{quote} The InterpretGmatFunction() method does not manage state in the same sense as the Interpret() method. At the point that the InterpretGmatFunction() method is invoked, there is no longer a sense of ``object mode'' and ``command mode,'' because every executable line in a GmatFunction file has an associated command -- in other words, there is no ``object mode'' at this point in the process. Since there is no sense in tracking state, the Interpreter treats the entire time spent reading and building the GmatFunction as if it were in Command mode. When the InterpretGmatFunction() method is invoked, it takes the Function pointer from the function's argument list and retrieves the function file name and path from that object. It opens the referenced file, and uses the ScriptReadWriter and TextParser helper classes to parse the function file, one logical block at a time. The first logical block in a GmatFunction file defines the function, and must start with the ``function'' keyword. 
An example of this line can be seen in the first line of the cross function in Listing~5:

\begin{quote}
\begin{verbatim}
function [vec3] = cross(vec1, vec2)
\end{verbatim}
\end{quote}

\noindent If the keyword ``function'' is not encountered as the first element in the command section of the first logical block in the file, the method throws an exception stating that the Interpreter expected a GmatFunction file, but the function definition line is missing.

The remaining elements in this logical block are used to check the function name for a match to the expected name, and to set the input and output argument lists for the function. The list contained in square brackets is sent, one element at a time, into the function as the output elements using the SetStringParameter() method. Similarly, the function arguments in parentheses following the function name generate calls to the SetStringParameter() method, setting the names for the input arguments. Thus, for example, the function definition line above for the cross function generates the following calls into the GmatFunction object that was passed into the InterpretGmatFunction() method:

\begin{quote}
\begin{verbatim}
// Calls that are made to the cross function.  These are not
// intended to be actual code; they are representative calls.
// The actual code will loop through the argument lists rather
// than perform the linear process shown here.

// Given these values from the TextParser:
//    inputList    = {"vec1", "vec2"}
//    functionName = "cross"
//    outputList   = {"vec3"}

// First check the name
if (functionName != funct->GetName())
   throw CommandException("The GmatFunction \"" + funct->GetName() +
      "\" in the file \"" + funct->GetStringParameter("Filename") +
      "\" does not match the function identifier in the file.");

// Next set the input argument(s)
funct->SetStringParameter(INPUTPARAM_ID, inputList[0]);
funct->SetStringParameter(INPUTPARAM_ID, inputList[1]);

// And the output argument(s):
funct->SetStringParameter(OUTPUTPARAM_ID, outputList[0]);
\end{verbatim}
\end{quote}

\noindent (Of course, the exception message should be changed to conform to GMAT's usual message formats.)

The code in the GmatFunction is built to receive these values, and populate the internal data structures accordingly. Thus, for example, when the line

\begin{quote}
\begin{verbatim}
funct->SetStringParameter(INPUTPARAM_ID, inputList[0]);
\end{verbatim}
\end{quote}

\noindent is executed, the GmatFunction checks the inputs map and, if the input value is not in the map, adds it to the map, something like this:

\begin{quote}
\begin{verbatim}
// On this call: SetStringParameter(INPUTPARAM_ID, "vec1"),
// the GmatFunction does this:
if (inputs.find("vec1") == inputs.end())
   inputs["vec1"] = NULL;
else
   throw FunctionException("Invalid operation: an attempt was"
      " made to add an input argument named \"vec1\", but an"
      " argument with that name already exists.");
\end{verbatim}
\end{quote}

Once the function definition line has been parsed, the process followed to build the Function Control Sequence begins. The Function Control Sequence is built using the same process as is followed for the Mission Control Sequence: the function file is read one logical block at a time, the command corresponding to that logical block is constructed, and the command is appended to the control sequence.
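This loop can be outlined as follows. The outline is a hedged sketch of the flow just described, not the actual Interpreter code; the helper names ReadLogicalBlock() and CreateCommandFromBlock() are assumptions.

\begin{quote}
\begin{verbatim}
// Hedged outline of the function-file loop described above; not the
// actual Interpreter code.  ReadLogicalBlock() and
// CreateCommandFromBlock() are assumed helpers.
GmatCommand *fcs = NULL;
std::string block;

while (ReadLogicalBlock(block))           // via the ScriptReadWriter
{
   GmatCommand *cmd = CreateCommandFromBlock(block);  // via TextParser

   if (fcs == NULL)
      fcs = cmd;                          // first node of the sequence
   else
      fcs->Append(cmd);                   // append to the sequence
}

return fcs;    // the Function Control Sequence handed back to the
               // Moderator, and from there to the Sandbox
\end{verbatim}
\end{quote}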
The only difference for this phase of initialization is this: when GMAT is building a Mission Control Sequence, the sequence that receives the new commands is the Mission Control Sequence associated with the current Sandbox. For a GmatFunction, the sequence that receives the new commands is the Function Control Sequence associated with the current function.

\subsubsection{GmatFunction Execution}

Once the Mission Control Sequence and all referenced Function Control Sequences have been initialized, they are ready for execution in the Sandbox. The Moderator launches execution by calling the Sandbox::Execute() method. When this method is called, the Sandbox sets an internal pointer to the first command in the Mission Control Sequence, and then enters a loop that walks through the Mission Control Sequence one command at a time. For each command in the Mission Control Sequence, the Sandbox performs the following actions:

\begin{enumerate}
\item Check to see if a user interrupt has occurred, and if so, respond to it.
\item Call the Execute() method on the current command.
\item Set the current command pointer to the command returned by calling GetNext() on the command that just executed.
\item If the new current command pointer is not NULL, loop to step 1; otherwise, the Mission Control Sequence is finished executing and control returns to the Moderator.
\end{enumerate}

In this section, we will examine only the execution behavior of the commands that reference GmatFunctions. Readers interested in the general execution of the Mission Control Sequence are referred to Chapters~\ref{chapter:TopLevel} through~\ref{chapter:Sandbox} and Chapter~\ref{chapter:Commands} of the GMAT Architectural Specification.

The first command that references a GmatFunction is the command near the top of the While loop, which was generated from this script line:

\begin{quote}
\begin{verbatim}
[rv, vv, r, v] = LoadCartState(Sat);
\end{verbatim}
\end{quote}

This script line generates a CallFunction command. That CallFunction has a FunctionManager that references the LoadCartState GmatFunction. The first time Execute() is called for this CallFunction, these objects have the attributes shown in Table~\ref{table:LCFBeforeFirstExec}. (For the CallFunction, only the pointers needed in this discussion are shown in the object stores. The example used here does not use any global objects, so for the Global Object Store only its status is indicated, not its contents.)
\begin{center} \tablecaption{\label{table:LCFBeforeFirstExec}CallFunction Attributes Prior to First Execution} \tablefirsthead{ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.5in}|p{0.5in}|p{0.4in}|p{0.4in}|p{0.5in}|p{0.68in}|p{0.67in}|p{0.5in}| p{0.5in}|} \hline \begin{small} Sat rv vv r v \end{small} & \begin{small} set \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} empty \end{small} & \begin{small} Sat->NULL \end{small} & \begin{small} rv->NULL vv->NULL r->NULL v->NULL \end{small} & \begin{small} NULL \end{small} & \begin{small} NULL \end{small} \\ \end{supertabular} \end{center} The first time a CallFunction or FunctionRunner is executed, the final piece of initialization is performed so that all of the data structures used for the execution are set and the Function Control Sequence is fully initialized. Subsequent calls into the same CallFunction or FunctionRunner updates the data used in the function calls by copying the data into the Function Object Store using the object's assignment operator. Both of these processes are described below, and illustrated using our sample functions. \paragraph{Steps Performed on the First Execution} The first time a CallFunction or FunctionRunner executes, the following processes are performed: \begin{enumerate} \item The CallFunction tells the FunctionManager to build the Function Object Store. 
The FunctionManager performs the following actions in response:
\begin{itemize}
\item First the input arguments are set up:
\begin{itemize}
\item The FunctionManager looks first in the Local Object Store, then in the Global Object Store, and finds each input object listed in the inputNames StringArray
\item The input object is cloned, using its Clone() method, and wrapped in an ObjectWrapper\footnote{For the examples shown here, the function arguments are all objects, so they use ObjectWrappers. Other data types -- real numbers, for example -- use wrappers compatible with their type.}
\item The Function is queried for the name of the matching input argument
\item The clone is set in the Function Object Store, using the function's argument name as the map key
\item An ElementWrapper is built for the clone
\item The ElementWrapper is passed to the Function as the input argument
\end{itemize}
\item Then the output arguments are set up:
\begin{itemize}
\item The FunctionManager finds each output object listed in the outputNames StringArray
\item The output object is stored in an ObjectArray for later use
\end{itemize}
\item If this process fails for any input or output object, an exception is thrown and the process terminates
\item The Function Object Store and Global Object Store are passed into the Function
\end{itemize}
\end{enumerate}

At this point, the objects managed by this CallFunction have the attributes shown in Table~\ref{table:LCFAfterFOSBuild}.

\begin{center}
\tablecaption{\label{table:LCFAfterFOSBuild}CallFunction Attributes After Building the Function Object Store}
\tablefirsthead{
\hline
%\color{blue}
\multicolumn{2}{|c|}{\textbf{CallFunction}} &
\multicolumn{3}{c|}{\textbf{FunctionManager}} &
\multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\
\hline
\multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} &
\multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} &
\multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{c|}{\centering\textit{inputs}} &
\multicolumn{1}{c|}{\centering\textit{outputs}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\
%\color{black}
\hline
}
\tablehead{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\
\hline
%\color{blue}
\multicolumn{2}{|c|}{\textbf{CallFunction}} &
\multicolumn{3}{c|}{\textbf{FunctionManager}} &
\multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\
\hline
\multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} &
\multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} &
\multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{c|}{\centering\textit{inputs}} &
\multicolumn{1}{c|}{\centering\textit{outputs}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\
%\color{black}
\hline
}
\tabletail{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\
\hline
}
\tablelasttail{\hline}
\begin{supertabular}{|p{0.5in}|p{0.5in}|p{0.4in}|p{0.4in}|p{0.5in}|p{0.68in}|p{0.67in}|p{0.5in}| p{0.5in}|}
\hline
\begin{small}
Sat rv vv r
v \end{small} & \begin{small} set \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} 'Sat'-> Sat clone \end{small} & \begin{small} Sat-> clone wrapper \end{small} & \begin{small} rv->NULL vv->NULL r->NULL v->NULL \end{small} & \begin{small} set \end{small} & \begin{small} set \end{small} \\ \end{supertabular} \end{center} \begin{enumerate} \setcounter{enumi}{1} \item Initialize the Function by calling Function-{\textgreater}Initialize(). This call makes the Function complete initialization for each command in the Function Control Sequence. Each command in the Function Control Sequence (1)~receives the pointer to the Function Object Store, Global Object Store, transient force vector, and Solar System, and then (2)~calls the Initialize() method on the command. \item Execute the Function Control Sequence by walking through the linked list of commands in the sequence, calling Execute() on each command in the sequence and using the command's GetNext() method to access the next command that is executed. Some details are provided below for the behavior of CallFunction commands and FunctionRunner MathNodes encountered during this process. Create commands encountered during this execution sequence add their objects to the Function Object Store. Global commands add the identified objects to the Global Object Store as well. At the end of the execution step, the attributes for the CallFunction example are listed in Table~\ref{table:LCFAfterCreateCmd}. Note that the pointers in the outputs attribute have not been set yet. \end{enumerate} \begin{center} \tablecaption{\label{table:LCFAfterCreateCmd}CallFunction Attributes After Executing the Create commands} \tablefirsthead{ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} 
\begin{supertabular}{|p{0.5in}|p{0.5in}|p{0.4in}|p{0.4in}|p{0.5in}|p{0.68in}|p{0.67in}|p{0.5in}| p{0.5in}|} \hline \begin{small} Sat rv vv r v \end{small} & \begin{small} set \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} 'Sat'-> Sat clone 'rv'->rv 'vv'->vv 'r'->r 'v'->v \end{small} & \begin{small} Sat-> clone wrapper \end{small} & \begin{small} rv->NULL vv->NULL r->NULL v->NULL \end{small} & \begin{small} set \end{small} & \begin{small} set \end{small} \\ \end{supertabular} \end{center} \begin{enumerate} \setcounter{enumi}{3} \item Retrieve the output data generated from the execution, and use it to set data in the output arguments that were stored in step 1. The output arguments are retrieved through a call to \begin{quote} \begin{verbatim} ElementWrapper* Function::GetOutputArgument(Integer argNumber) \end{verbatim} \end{quote} \noindent which finds the output argument at the indicated location and returns it \item Reset the Function Control Sequence so it is ready for subsequent calls to this function. The final state of the function attributes is shown in Table~\ref{table:LCFAfterExecOne}. \end{enumerate} \begin{center} \tablecaption{\label{table:LCFAfterExecOne}CallFunction Attributes After Execution} \tablefirsthead{ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.5in}|p{0.5in}|p{0.4in}|p{0.4in}|p{0.5in}|p{0.68in}|p{0.67in}|p{0.5in}| p{0.5in}|} \hline \begin{small} Sat rv vv r v \end{small} & \begin{small} set \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} 'Sat'-> Sat clone 'rv'->rv 'vv'->vv 'r'->r 'v'->v \end{small} & \begin{small} Sat-> clone wrapper \end{small} & \begin{small} rv->rv vv->vv r->r v->v \end{small} & 
\begin{small} NULL \end{small} & \begin{small} set \end{small} \\ \end{supertabular} \end{center} \paragraph{Steps Performed on the Subsequent Executions} Subsequent calls into a CallFunction or FunctionRunner that has executed once have a simplified first step, because the structures in the FunctionManager are initialized in the first call. Subsequent calls follow the following procedure: \begin{enumerate} \item The CallFunction tells the FunctionManager to refresh the Function Object Store. The FunctionManager performs the following actions in response: \begin{itemize} \item The input arguments are updated using the assignment operator to set the clones equal to the original objects. \item The Function Object Store is passed into the Function. \end{itemize} \noindent At this point, the objects managed by this CallFunction have the attributes shown in Table~\ref{table:LCFAfterExecTwo}. \end{enumerate} \begin{center} \tablecaption{\label{table:LCFAfterExecTwo}CallFunction Attributes After Execution} \tablefirsthead{ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{2}{|c|}{\textbf{CallFunction}} & \multicolumn{3}{c|}{\textbf{FunctionManager}} & \multicolumn{4}{c|}{\textbf{LoadCartState Function}} \\ \hline \multicolumn{1}{|m{0.5in}|}{\centering\textit{Sandbox Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{input\-Names}} & \multicolumn{1}{m{0.4in}|}{\centering\textit{output\-Names}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{c|}{\centering\textit{inputs}} & \multicolumn{1}{c|}{\centering\textit{outputs}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.5in}|}{\centering\textit{Global Object Store}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.5in}|p{0.5in}|p{0.4in}|p{0.4in}|p{0.5in}|p{0.68in}|p{0.67in}|p{0.5in}| p{0.5in}|} \hline \begin{small} Sat rv vv r v \end{small} & \begin{small} set \end{small} & \begin{small} 'Sat' \end{small} & \begin{small} 'rv' 'vv' 'r' 'v' \end{small} & \begin{small} 'Sat'-> Sat clone \end{small} & \begin{small} Sat-> clone wrapper \end{small} & \begin{small} rv->NULL vv->NULL r->NULL v->NULL \end{small} & \begin{small} set \end{small} & \begin{small} set \end{small} \\ \end{supertabular} \end{center} \begin{enumerate} \setcounter{enumi}{1} \item Initialize the Function by calling Function-{\textgreater}Initialize(). 
This call makes the Function complete initialization for each command in the Function Control Sequence: for each command, the Function (1)~passes in the pointers to the Function Object Store, Global Object Store, transient force vector, and Solar System, and then (2)~calls the command's Initialize() method. (This repetition of step 2 is required because the same function can be called from multiple locations, with different input objects, so the object pointers in the Function Control Sequence have to be refreshed each time a function is entered.)
\item Execute the Function Control Sequence by walking through the linked list of commands in the sequence, calling Execute() on each command in the sequence and using the command's GetNext() method to access the next command that is executed.
\item Retrieve the output data generated from the execution, and use it to set data in the output arguments.
\item Reset the Function Control Sequence so it is ready for subsequent calls to this function.
\end{enumerate}

\paragraph{Functions within Functions}

GmatFunctions can call other GmatFunctions, either in a nested manner, or by calling recursively into the same function. When a GmatFunction detects that it is about to call into a GmatFunction in this manner, it needs to preserve the current state of the function data so that, upon return from the nested call, the function can resume execution. This preservation of function data is accomplished using a call stack, implemented as the GmatFunction::\-objectStack data member.

An example of the use of the call stack can be seen in the example script that we've been working through. The first function call, made to the LoadCartState function, uses a CallFunction in the Mission Control Sequence. When the Sandbox calls this function, the steps outlined in the previous section are performed, initializing and setting the Function Object Store and Function Control Sequence, and then calling the Execute() method on each command in the Function Control Sequence to run the function. The role of the call stack becomes apparent when we examine the details of this process, as we will do in the following paragraphs.

When the Sandbox receives a message to execute the Mission Control Sequence, it sets its state to ``RUNNING'' and sets the current command pointer to the first command in the Mission Control Sequence. For our example, that means the current pointer starts out pointing to the While command generated by this line of script:

\begin{quote}
\begin{verbatim}
While Sat.ElapsedDays < 1
\end{verbatim}
\end{quote}

\noindent The command is executed, and control is returned to the Sandbox. The Sandbox then calls the GetNext() method to determine the next command to execute. The command pointer returned from that call points back to the While command again, because the While command is a branch command. The Sandbox polls for a user interrupt, and then calls the Execute() method on the While command again. The While command begins the execution of the commands inside the While loop by calling its ExecuteBranch() method. That call executes the first command in the While loop,

\begin{quote}
\begin{verbatim}
Propagate Prop(Sat)
\end{verbatim}
\end{quote}

\noindent which advances the spacecraft one step and returns control to the While command.
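The pattern being traced in this walkthrough -- poll for a user interrupt, execute the current command, then ask it for the next command to run -- can be condensed into a short sketch. This is only an illustration of the loop described earlier in this section, not the actual Sandbox code; PollForUserInterrupt() is an assumed placeholder for the interrupt check.

\begin{quote}
\begin{verbatim}
// Condensed illustration of the execution loop described above;
// not the actual Sandbox code.
GmatCommand *current = missionControlSequence;

while (current != NULL)
{
   PollForUserInterrupt();        // step 1: check for a user interrupt
   current->Execute();            // step 2: run the command; a While
                                  //   command runs its branch through
                                  //   ExecuteBranch()
   current = current->GetNext();  // step 3: a branch command returns
                                  //   itself until its branch finishes
}                                 // step 4: NULL ends the mission run
\end{verbatim}
\end{quote}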
The While command then calls GetNext() on the Propagate command that just executed, and sets its loop command pointer to the returned value -- in this case, a pointer to the CallFunction command generated by this line: \begin{quote} \begin{verbatim} [rv, vv, r, v] = LoadCartState(Sat); \end{verbatim} \end{quote} \noindent The While command then returns control to the Sandbox. The Sandbox calls GetNext() on the While command, and receives, again, a pointer back to the While command, since the While command is running the commands in the while loop. The Sandbox polls for interrupts, and then calls Execute() on the While command, which calls ExecuteBranch(), which, in turn, calls Execute() on the CallFunction command. The CallFunction command and FunctionManager have completed initialization of the GmatFunction as described above, and the CallFunction has made a call into the FunctionManager::\-Execute() method to run the function. The following discussion picks up at that point. I'll refer to this long sequence of calls as the ``Sandbox call chain'' for the rest of this section -- in other words, the Sandbox call chain is the sequence \begin{quote} \begin{verbatim} Sandbox::Execute() --> While::Execute() --> While::ExecuteBranch() --> CallFunction::Execute() --> FunctionManager::Execute() \end{verbatim} \end{quote} The function that is executing at this point is the LoadCartState GmatFunction, which has the Function Control Sequence, Function Object Store, and call stack shown in Table~\ref{table:FunSubfunStart}. The functions called during execution of this function are also listed in this table, along with their attributes. The pointer in the FCS column shows the next command that will be executed; for example, the first Create command in the LoadCartState will be executed at the point where we resume discussion of the actual process in the next paragraph. 
\begin{center} \tablecaption{\label{table:FunSubfunStart}Attributes of the LoadCartState GmatFunction and Subfunctions} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} >Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction \end{small} & \begin{small} Sat clone \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The first call on the Sandbox call chain at this point executes the Create command \begin{quote} \begin{verbatim} Create Variable r v \end{verbatim} \end{quote} placing the variables r and v into the function object store, as is shown in Table\ref{table:FunSubfunCreate1}. 
\begin{center} \tablecaption{\label{table:FunSubfunCreate1}Attributes of the LoadCartState GmatFunction After the Executing the First Create Command} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create >Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction \end{small} & \begin{small} Sat clone r v \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The next call executes the second Create command \begin{quote}\begin{verbatim} Create Array rv[3,1] vv[3,1] \end{verbatim} \end{quote} \noindent adding the rv and vv arrays to the Function Object Store. The next six calls execute the six assignment commands that are used to set the elements of the rv and vv arrays: \begin{quote} \begin{verbatim} rv(1,1) = Sat.X; rv(1,2) = Sat.Y; rv(1,3) = Sat.Z; vv(1,1) = Sat.VX; vv(1,2) = Sat.VY; vv(1,3) = Sat.VZ; \end{verbatim} \end{quote} Once all of these commands have executed, the attributes contain the data shown in Table~\ref{table:FunSubfunSixAssigns}, the next command to be executed is the first CallFunction command, and the function is ready to call the first nested function. 
\begin{center} \tablecaption{\label{table:FunSubfunSixAssigns}Attributes of the LoadCartState Function After the Executing the Six Assignment Commands} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} & \begin{small} Sat clone r v rv vv \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} \noindent The CallFunction that is about to be invoked was generated from the script line \begin{quote} \begin{verbatim} [r] = magnitude(rv); \end{verbatim} \end{quote} Whenever the Sandbox call chain invokes a command, the following actions occur in the FunctionManager\-::Execute() method: \begin{enumerate} \item The FunctionManager::Execute() method checks to see if the command that needs to be executed makes a function call. If it does: \begin{itemize} \item\label{enum:SetNestedRunExecutingFlag} A flag is set indicating that a nested function is being run. (This flag is used to prevent repetition of the following bullets when the FunctionManager::Execute() method is reentered after polling for a user interrupt.) \item\label{enum:MakeFOSClone} The Function Object Store is cloned. \item The Function Object Store is placed on the call stack. \item The nested function (or functions, if more than one function call is made) is initialized. 
The clone of the Function Object Store made in step~\ref{enum:MakeFOSClone} is used as the Local Object Map that supplies the arguments that are set in the new Function Object Store, which is then passed to the nested function during this initialization. \end{itemize} \item The Execute() method is called for the command. \item The GetNext() method is called for the command. If the pointer returned from this call is NULL, the flag set in step~\ref{enum:SetNestedRunExecutingFlag} is cleared. \item Control is returned to the caller so that interrupt polling can occur. \end{enumerate} \noindent Once this process is started, calls from the Sandbox call chain into the FunctionManager::Execute() method as the result of polling for user interrupts skip the first step. For the CallFunction command under discussion here, the attribute table shown in Table~\ref{table:FunSubfunCall1Init} describe the internal state of the data immediately following the initialization in step one. \begin{center} \tablecaption{\label{table:FunSubfunCall1Init}The LoadCartState Function after Initializing the First CallFunction} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} & \begin{small} Clones of: Sat clone r v rv vv \end{small} & \begin{small} Original FOS: Sat clone r v rv vv \end{small} & \begin{small} >Create Assignment \end{small} & \begin{small} 'vec1'-> clone of vv clone \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The magnitude GmatFunction is now ready to be run through the LoadCartState function. 
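The clone-and-push bookkeeping described in the steps above can be illustrated with a small, self-contained model. The sketch below is not GMAT code: the type and member names (NestedCallHelper, beginNestedCall, endNestedCall) and the string-valued object store are invented for the illustration; only the pattern -- clone the Function Object Store, push the original onto a call stack, seed the nested call with the clone, and restore the original when the nested run completes -- follows the description above.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <map>
#include <stack>
#include <string>

// Simplified stand-ins: a GMAT object is reduced to a string, and an object
// store is a name-to-object map.  The type and member names here are invented
// for this sketch and are not taken from the GMAT source code.
using Object      = std::string;
using ObjectStore = std::map<std::string, Object>;

struct NestedCallHelper {
    ObjectStore             functionObjectStore;   // the caller's FOS
    std::stack<ObjectStore> callStack;             // plays the role of objectStack
    bool                    nestedRunExecuting = false;

    // When the command about to execute makes a function call: set the
    // "nested run" flag, clone the FOS, push the original onto the call
    // stack, and hand the clone to the nested function as its seed map.
    ObjectStore beginNestedCall() {
        nestedRunExecuting = true;
        ObjectStore clone = functionObjectStore;   // the "clone" step
        callStack.push(functionObjectStore);
        return clone;
    }

    // When GetNext() on the command returns NULL, the nested run is over:
    // restore the caller's FOS from the call stack and clear the flag.
    void endNestedCall() {
        functionObjectStore = callStack.top();
        callStack.pop();
        nestedRunExecuting = false;
    }
};

int main() {
    NestedCallHelper loadCartState;
    loadCartState.functionObjectStore = { {"rv", "Array"}, {"r", "Variable"} };

    ObjectStore magnitudeSeed = loadCartState.beginNestedCall();
    magnitudeSeed["vec1"] = magnitudeSeed["rv"];   // bind the input argument

    loadCartState.endNestedCall();
    std::cout << "caller objects: " << loadCartState.functionObjectStore.size()
              << ", stack depth: " << loadCartState.callStack.size() << "\n";
    return 0;
}
\end{verbatim}
\end{quote}

\noindent In GMAT the stores hold configured objects rather than strings, but the stack discipline shown here is the point of the sketch.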
The next call through the Sandbox call chain invokes a call to the magnitude function's Create() command, which builds a variable named val. Table~\ref{table:FunSubfunCall1cmd1} shows the attributes after running this command. \begin{center} \tablecaption{\label{table:FunSubfunCall1cmd1}Attributes of the Function After Running the First magnitude Command} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} & \begin{small} Clones of: Sat clone r v rv vv \end{small} & \begin{small} Original FOS: Sat clone r v rv vv \end{small} & \begin{small} Create >Assignment \end{small} & \begin{small} 'vec1'-> clone of vv clone val \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The next call through the Sandbox call chain invokes the magnitude function's Assignment command, built off of this line of script: \begin{quote} \begin{verbatim} val = sqrt(dot(vec1, vec1)); \end{verbatim} \end{quote} \noindent The right side of this equation generates a MathTree. One node of that MathTree is a FunctionRunner, constructed to run the dot GmatFunction. Hence the check performed by the FunctionManager that is running the magnitude function detects that there is a nested function call in its Assignment command. Accordingly, when it is time to evaluate the MathTree, the controlling FunctionManager passes a pointer to itself, through the Assignment command, into the MathTree, which passes that pointer to each FunctionRunner node in the tree. 
Then when the MathTree makes the call to evaluate the FunctionRunner node, the FunctionRunner starts by calling the controlling FunctionManager::\-PushToStack() method, which clones its local Function Object Store, places the original on its call stack, and build the Function Object Store for the nested function. It then sets the clone as the Function Object Store for the FunctionManager inside of the FunctionRunner, and calls that FunctionManager's Evaluate() method. The Evaluate method starts by initializing the function, using the newly cloned Function Object Store as the source for the objects needed for initialization. The resulting attributes are shown in Table~\ref{table:FunSubfunCall1FunRunPrep}. \begin{center} \tablecaption{\label{table:FunSubfunCall1FunRunPrep}LoadCartState Attributes After Running the First magnitude Command} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} & \begin{small} Clones of: Sat clone r v rv vv \end{small} & \begin{small} Original FOS: Sat clone r v rv vv \end{small} & \begin{small} Create >Assignment \end{small} & \begin{small} Clones of: 'vec1'-> clone of vv clone val \end{small} & \begin{small} Original FOS: 'vec1'-> clone of vv clone val \end{small} & \begin{small} >Create Assignment \end{small} & \begin{small} 'vec1'-> clone of clone of vv clone 'vec2'-> clone of clone of vv clone \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The dot function can now be run. This execution is made by calling the Evaluate() method on the FunctionRunner. In turn, the FunctionRunner executes the function. Fortunately, this function does not call another. 
Upon completion of the execution of the dot function, the attributes have the values shown in Table~\ref{table:FunSubfunCall1FunRundotDone}.

\begin{center}
\tablecaption{\label{table:FunSubfunCall1FunRundotDone}LoadCartState Attributes After Evaluating the dot Function in the magnitude Function}
\tablefirsthead{
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{LoadCartState}} &
\multicolumn{3}{c|}{\textbf{magnitude}} &
\multicolumn{3}{c|}{\textbf{dot}} \\
\hline
\multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\
%\color{black}
\hline
}
\tablehead{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{LoadCartState}} &
\multicolumn{3}{c|}{\textbf{magnitude}} &
\multicolumn{3}{c|}{\textbf{dot}} \\
\hline
\multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\
%\color{black}
\hline
}
\tabletail{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\
\hline
}
\tablelasttail{\hline}
\begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}|
p{0.44in}|p{0.44in}|}
\hline
\begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} &
\begin{small} Clones of: Sat clone r v rv vv \end{small} &
\begin{small} Original FOS: Sat clone r v rv vv \end{small} &
\begin{small} Create >Assignment \end{small} &
\begin{small} Clones of: 'vec1'-> clone of vv clone val \end{small} &
\begin{small} Original FOS: 'vec1'-> clone of vv clone val \end{small} &
\begin{small} Create Assignment \end{small} &
\begin{small} 'vec1'-> clone of clone of vv clone 'vec2'-> clone of clone of vv clone val \end{small} &
\begin{small} empty \end{small} \\
\end{supertabular}
\end{center}

At this point we can start unwinding the call stack. The Function Object Store for the dot function includes a Variable, val, that has the scalar product of the vv Array with itself. Once the dot function has completed execution, the FunctionManager retrieves this value, and saves it so that it can be passed to the MathTree as the result of the Evaluate() call on the FunctionRunner node. The FunctionManager then finalizes the dot function, clearing the Function Object Store pointer in the dot function. The FunctionRunner then calls the controlling FunctionManager's PopFromStack() method, which deletes the cloned Function Object Store and restores the Function Object Store that was on the call stack.
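For the MathTree path, the same pattern shows up in the order of operations the FunctionRunner drives: push and clone, bind the inputs, execute, read back the output Variable, finalize, and pop. The toy program below walks through that order with name-to-number maps standing in for object stores and a single multiplication standing in for the dot function; none of it is taken from the GMAT source.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <map>
#include <stack>
#include <string>

// Toy model of the evaluate/unwind order for a FunctionRunner node.  Object
// stores are reduced to name-to-number maps and the nested dot function to a
// single multiplication; nothing here comes from the GMAT source.
using Store = std::map<std::string, double>;

int main() {
    Store magnitudeFos = { {"vec1", 3.0}, {"val", 0.0} };   // caller's FOS
    std::stack<Store> callStack;

    // PushToStack(): clone the caller's FOS and keep the original on the stack.
    callStack.push(magnitudeFos);
    Store dotFos = magnitudeFos;               // the clone seeds the dot function
    dotFos["vec2"] = dotFos["vec1"];           // bind the two input arguments

    // Execute the nested function, then read back its output Variable.
    dotFos["val"] = dotFos["vec1"] * dotFos["vec2"];
    double result = dotFos["val"];             // value handed to the MathTree

    // Finalize the nested function, then PopFromStack(): discard the clone
    // and restore the caller's FOS.
    dotFos.clear();
    magnitudeFos = callStack.top();
    callStack.pop();

    std::cout << "dot(vec1, vec1) = " << result << "\n";   // prints 9
    return 0;
}
\end{verbatim}
\end{quote}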
The MathTree completes its evaluation, retrieving the values obtained from the dot function, and using that value to build the resultant needed by the Assignment command that contains the MathTree. The attributes at this point are shown in Table~\ref{table:FunSubfunCall1FunRunmagDone}. \begin{center} \tablecaption{\label{table:FunSubfunCall1FunRunmagDone}LoadCartState Attributes After Evaluating the magnitude Assignment Command} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment >CallFunction CallFunction \end{small} & \begin{small} Clones of: Sat clone r v rv vv \end{small} & \begin{small} Original FOS: Sat clone r v rv vv \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} 'vec1'-> clone of vv clone val \end{small} & \begin{small} empty\end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} %%% Try to make clearer %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The Assignment command that called into the dot function used the results of that function to set the value of the val Variable in the magnitude function's Function Object Store. That Assignment command was the last command in the magnitude function's Function Control Sequence, so the call to the magnitude function made from the LoadCartState function has completed execution. The FunctionManager for the LoadCartState function retrieves the output argument -- in this case, the val Variable -- from the magnitude function. 
It then deletes the cloned function object store, pops the Function Object Store off of the call stack, locates the object set to contain the output -- that is, the r Variable -- in this Function Object Store, and calls the assignment operator to set these two objects equal. That process is followed for all of the output arguments in the function call, and then the FunctionManager clears the magnitude function, completing the execution of the CallFunction command. These steps result in the attributes tabulated in Table~\ref{table:FunSubfunCall1FunRunmagClear}. %%% To here %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{center} \tablecaption{\label{table:FunSubfunCall1FunRunmagClear}LoadCartState Attributes After Clearing the magnitude Function} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{LoadCartState}} & \multicolumn{3}{c|}{\textbf{magnitude}} & \multicolumn{3}{c|}{\textbf{dot}} \\ \hline \multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} & \multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} & \multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}| p{0.44in}|p{0.44in}|} \hline \begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction >CallFunction \end{small} & \begin{small} Sat clone r v rv vv \end{small} & \begin{small} empty \end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty\end{small} & \begin{small} Create Assignment \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} This process is repeated for the last CallFunction in the LoadCartState Function Control Sequence, resulting in calls that set the value of the v Variable in the LoadCartState Function Object Store. 
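The hand-off of an output argument can be pictured with a short sketch. The Variable type and store layout below are stand-ins rather than GMAT classes; the point being illustrated is that the caller's object keeps its pointer and its slot in the caller's object store, and only its contents are overwritten by the assignment operator.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Stand-in types: a "Variable" is a wrapped double and object stores map
// names to shared pointers.  The caller's object keeps its pointer (and its
// slot in the caller's store); only its contents are overwritten.
struct Variable { double value = 0.0; };

using Store = std::map<std::string, std::shared_ptr<Variable>>;

int main() {
    Store callerStore;                         // e.g. LoadCartState's FOS
    Store magnitudeFos;                        // the called function's FOS

    callerStore["r"]    = std::make_shared<Variable>();
    magnitudeFos["val"] = std::make_shared<Variable>();
    magnitudeFos["val"]->value = 7071.5;       // made-up output of magnitude()

    // Output mapping for  [r] = magnitude(rv):  callee "val" -> caller "r".
    *callerStore["r"] = *magnitudeFos["val"];  // copy via the assignment operator

    std::cout << "r = " << callerStore["r"]->value << "\n";   // prints 7071.5
    return 0;
}
\end{verbatim}
\end{quote}

\noindent This value-copy design is what lets the callee's clones be deleted after the call without invalidating anything the caller still references.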
Once this final CallFunction has been evaluated, the FunctionManager in the Mission Control Sequence CallFunction command that started this process -- that is, the FunctionManager that is running the LoadCartState function -- retrieves the output objects, one at a time, and sets the objects in the Sandbox Object Map referenced by the CallFunction command equal to the objects found in the LoadCartState Function Object Store using the corresponding assignment operators. This completes the LoadCartState function execution, so the CallFunction FunctionManager finalizes the LoadCartState function, resulting in the attributes shown in Table~\ref{table:FunSubfunCall1FunRunAllClear}. The LoadCartState function is now ready for a new call, should one be encountered later in the mission.

\begin{center}
\tablecaption{\label{table:FunSubfunCall1FunRunAllClear}Attributes After Running the LoadCartState Function}
\tablefirsthead{
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{LoadCartState}} &
\multicolumn{3}{c|}{\textbf{magnitude}} &
\multicolumn{3}{c|}{\textbf{dot}} \\
\hline
\multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\
%\color{black}
\hline
}
\tablehead{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{LoadCartState}} &
\multicolumn{3}{c|}{\textbf{magnitude}} &
\multicolumn{3}{c|}{\textbf{dot}} \\
\hline
\multicolumn{1}{|m{0.75in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} &
\multicolumn{1}{m{0.69in}|}{\centering\textit{FCS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{FOS}} &
\multicolumn{1}{m{0.45in}|}{\centering\textit{Call Stack}} \\
%\color{black}
\hline
}
\tabletail{
\hline
\multicolumn{9}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\
\hline
}
\tablelasttail{\hline}
\begin{supertabular}{|p{0.75in}|p{0.45in}|p{0.45in}|p{0.69in}|p{0.45in}|p{0.45in}|p{0.69in}|
p{0.44in}|p{0.44in}|}
\hline
\begin{small} Create Create Assignment Assignment Assignment Assignment Assignment Assignment CallFunction CallFunction \end{small} &
\begin{small} NULL \end{small} &
\begin{small} empty \end{small} &
\begin{small} Create Assignment \end{small} &
\begin{small} NULL \end{small} &
\begin{small} empty \end{small} &
\begin{small} Create Assignment \end{small} &
\begin{small} NULL \end{small} &
\begin{small} empty \end{small} \\
\end{supertabular}
\end{center}

\subsubsection{Finalization}

The final step in running scripts that use GMAT functions is the cleanup after the function has been run. The normal procedure followed in the Sandbox is to call RunComplete() on the Mission Control Sequence, which gives each command the opportunity to reset itself for a subsequent run.
The CallFunction and Assignment commands that access GmatFunctions use this call to execute the RunComplete() method in the Function Control Sequences contained in those functions.

The Sandbox Object Map and Global Object Store are left intact when GMAT finishes a run. Subsequent runs in GMAT start by clearing and reloading these object stores. The preservation of the final states of the objects in the Sandbox makes it possible to query these objects for final state data after a run completes execution.

\subsection{Global Data Handling: Another Short Example}

In this section, we will examine another short sample to show how global data is managed in GMAT when functions are present. The main script that drives this example is shown here:

\begin{quote}
\begin{verbatim}
Create ImpulsiveBurn globalBurn;
Create Spacecraft globalSat;
Create Variable index;
Create ForceModel fm
fm.PrimaryBodies = {Earth}
Create Propagator prop
prop.FM = fm
Create OpenGLPlot OGLPlot1;
GMAT OGLPlot1.Add = {globalSat, Earth};
Global globalBurn globalSat
Propagate prop(globalSat) {globalSat.Earth.Periapsis}
For index = 1 : 4
RaiseApogee(index);
Propagate prop(globalSat) {globalSat.Earth.Periapsis}
EndFor
\end{verbatim}
\end{quote}

The function called here, RaiseApogee, applies a maneuver to the spacecraft so that subsequent propagation moves the spacecraft on a different trajectory. The function is defined like this:

\begin{quote}
\begin{verbatim}
function [] = RaiseApogee(burnSize)
Global globalBurn globalSat
globalBurn.Element1 = burnSize / 10.0;
Maneuver globalBurn(globalSat);
\end{verbatim}
\end{quote}

This function uses two objects that are not defined in the function and that are not passed in as arguments to the function. These objects are placed in the Sandbox's Global Object Store. In the next few pages we will examine this object repository during initialization, execution, and finalization.

\subsubsection{Globals During Initialization}

At the start of initialization in the Sandbox, the Global Object Store is empty, the Sandbox Object Map contains the objects from the Configuration, and the Mission Control Sequence has been built by parsing the script. The state of the objects in the Sandbox immediately before the start of Mission Control Sequence initialization is shown in Table~\ref{table:GlobalExampleStart}.
\begin{center} \tablecaption{\label{table:GlobalExampleStart}The Objects in the Globals Example at the Start of Initialization} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|} \hline \begin{small} globalBurn globalSat index fm prop EarthMJ2000Eq EarthMJ2000Ec EarthFixed OGLPlot1 RaiseApogee \end{small} & \begin{small} empty \end{small} & \begin{small} Global Propagate For CallFunction Propagate EndFor \end{small} & \begin{small} NULL \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} The first thing the Sandbox does after initializing the objects in the Sandbox Object Map is to collect all objects in the Sandbox Object Store that are marked as globals via the isGlobal flag, and moves those objects into the Global Object Store. This includes the objects that are automatically set as global in scope. Other objects are set as globals using a check box on the GUI, or using the ``MakeGlobal'' object property in the script file. For this example, neither case is met, so the only global objects are the automatic globals -- the Propagator and the Function found in the script, along with the three coordinate systems that GMAT automatically creates. Table~\ref{table:GlobalExampleMoved} shows the resulting rearrangement of objects. Note that the objects marked by the Global command in the script are not put into the Global Object Store at this point. They are moved when the Global command is executed. 
\begin{center} \tablecaption{\label{table:GlobalExampleMoved}The Objects in the Globals Example after moving the Globals} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|} \hline \begin{small} globalBurn globalSat index fm OGLPlot1 \end{small} & \begin{small} prop EarthMJ2000Eq EarthMJ2000Ec EarthFixed RaiseApogee \end{small} & \begin{small} Global Propagate For CallFunction Propagate EndFor \end{small} & \begin{small} NULL \end{small} & \begin{small} NULL \end{small} & \begin{small} empty \end{small} \\ \end{supertabular} \end{center} \noindent Note that the global objects have been moved from the Sandbox Object Map into the Global Object Store. This feature -- glossed over in the earlier discussion -- makes memory management for the objects at the Sandbox level simple. When the Sandbox is cleared, all of the objects in the Sandbox Object Map and the Global Object Store are deleted. This feature has implications for global objects created inside of functions as well. If an object created inside of a function is declared global, either explicitly using a Global command or implicitly by virtue of its type, the Create or Global command checks the Global Object Store to see if an object of that name is already stored in it. If the object already exists in the Global Object Store, the types of the objects are compared, and if they do not match, an exception is thrown. Additional discussion of the interplay between the Create command and the Global command are provided in the design specifications for those commands. Once the automatic globals have been moved into the Global Object Store, the Sandbox proceeds with initialization of the commands in the Mission Control Sequence. This process follows the procedure described in the preceding sections, so the results are summarized here, with details related to global objects discussed more completely. The first command of interest in this discussion is the Global command. At construction, this command was given the names of the global objects identified for the command. 
These names are stored in the command for use at execution time. No action is applied for this command during initialization. The next command of interest is the CallFunction command. When the CallFunction command initializes, the Global Object Store pointer is passed into the function contained in the CallFunction -- in this case, the RaiseApogee function. Then the solar system and transient force vector pointers are set in the function. The function is then retrieved by the Sandbox, and passed to the ScriptInterpreter::InterpretGmatFunction() method, which builds the Function Control Sequence. Upon return, the attributes are set as shown in Table~\ref{table:GlobalExampleIGFReturn}. \begin{center} \tablecaption{\label{table:GlobalExampleIGFReturn}The Objects in the Globals Example on return from InterpretGmatFunction} \tablefirsthead{ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tablehead{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\ \hline %\color{blue} \multicolumn{3}{|c|}{\textbf{Mission Objects}} & \multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\ \hline \multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} & \multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\ %\color{black} \hline } \tabletail{ \hline \multicolumn{6}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\ \hline } \tablelasttail{\hline} \begin{supertabular}{|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|} \hline \begin{small} globalBurn globalSat index fm OGLPlot1 \end{small} & \begin{small} prop EarthMJ2000Eq EarthMJ2000Ec EarthFixed RaiseApogee \end{small} & \begin{small} Global Propagate For CallFunction Propagate EndFor \end{small} & \begin{small} NULL \end{small} & \begin{small} set \end{small} & \begin{small} Global Assignment Maneuver \end{small} \\ \end{supertabular} \end{center} Like the Mission Control Sequence, the Function Control Sequence contains a Global command. The names of the global objects identified for this command are set in the InterpretGmatFunction() method when the GmatFunction is parsed. Nothing else happens for the Global command during the initialization that builds the Function Control Sequence. The Sandbox continues initializing the commands in the Mission Control Sequence until they are all initialized, completing the process. \subsubsection{Globals During Execution} Next we will examine the behavior of the Global commands during execution of the Mission Control Sequence. 
The first command that is executed in the Mission Control Sequence is the Global command defined by the line

\begin{quote}
\begin{verbatim}
Global globalBurn globalSat
\end{verbatim}
\end{quote}

\noindent in the main script. This command contains a list of the global objects, specified as objects named ``globalBurn'' and ``globalSat''. When the Global::\-Execute() method is called, it takes this list and, for each element in the list, performs these actions:

\begin{enumerate}
\item\label{enum:GlobalEnumStart} Check the command's object map (in this case the Sandbox Object Store) for the named object.

\item If the object was found:
\begin{enumerate}
\item Check the Global Object Store for an object with the same name.

\item\label{enum:MoveToGlobal} If no such object was found, remove the object from the object map and set it in the Global Object Store. Continue at step~\ref{enum:ContinueOrExit}.

\item If the object was found in the Global Object Store, throw an exception stating that an object was found in the Global Object Store with the same name as one that was being added, and terminate the run.
\end{enumerate}

\item If the object is not in the object map, the Global command needs to verify that it was placed in the Global Object Store by another process: look for the object and verify that it is in the Global Object Store and that its pointer is not NULL. If the pointer is NULL, throw an exception and terminate the run.

\item\label{enum:ContinueOrExit} Get the next name from the list of global objects. If the list is finished, exit; otherwise, return to step~\ref{enum:GlobalEnumStart} to process the next global object.
\end{enumerate}

The Global command in the Mission Control Sequence follows the process shown in step~\ref{enum:MoveToGlobal}, moving the declared objects into the Global Object Store, as shown in Table~\ref{table:GlobalExampleIGFRunDone}.
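The per-name procedure just listed can also be rendered as a compact pseudo-C++ sketch. The store layout, function name, and exception text below are invented for illustration and do not reflect the actual Global command implementation; only the branch structure follows the steps above.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Pseudo-C++ rendering of the per-name procedure above.  Object stores are
// reduced to name-to-pointer maps; names and exception text are invented.
using Object      = std::string;
using ObjectStore = std::map<std::string, Object*>;

void ExecuteGlobal(const std::vector<std::string>& globalNames,
                   ObjectStore& localStore, ObjectStore& globalStore) {
    for (const std::string& name : globalNames) {
        auto local = localStore.find(name);
        if (local != localStore.end()) {
            // Found locally: it must not already exist in the Global Object
            // Store; if it does, the run is terminated with an exception.
            if (globalStore.count(name) != 0)
                throw std::runtime_error(name + " already exists in the Global Object Store");
            globalStore[name] = local->second;   // move the object, do not copy it
            localStore.erase(local);
        } else {
            // Not found locally: another process must already have placed a
            // valid (non-NULL) object of this name in the Global Object Store.
            auto global = globalStore.find(name);
            if (global == globalStore.end() || global->second == nullptr)
                throw std::runtime_error(name + " is not available as a global object");
        }
    }
}

int main() {
    Object burn = "ImpulsiveBurn";
    Object sat  = "Spacecraft";
    ObjectStore sandboxMap  = { {"globalBurn", &burn}, {"globalSat", &sat} };
    ObjectStore globalStore;

    ExecuteGlobal({"globalBurn", "globalSat"}, sandboxMap, globalStore);
    std::cout << "globals: " << globalStore.size()
              << ", local objects left: " << sandboxMap.size() << "\n";
    return 0;
}
\end{verbatim}
\end{quote}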
\begin{center}
\tablecaption{\label{table:GlobalExampleIGFRunDone}The Objects in the Globals Example after Executing the Global Command in the Mission Control Sequence}
\tablefirsthead{
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{Mission Objects}} &
\multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\
\hline
\multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\
%\color{black}
\hline
}
\tablehead{
\hline
\multicolumn{6}{|r|}{\begin{small}\textit{Continued from previous page}\end{small}}\\
\hline
%\color{blue}
\multicolumn{3}{|c|}{\textbf{Mission Objects}} &
\multicolumn{3}{c|}{\textbf{RaiseApogee Function}} \\
\hline
\multicolumn{1}{|m{0.95in}|}{\centering\textit{Sandbox Object Map}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Store}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Mission Control Sequence}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Function Object Store}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Global Object Map}} &
\multicolumn{1}{m{0.95in}|}{\centering\textit{Function Control Sequence}} \\
%\color{black}
\hline
}
\tabletail{
\hline
\multicolumn{6}{|r|}{\begin{small}\textit{Continued on next page}\end{small}}\\
\hline
}
\tablelasttail{\hline}
\begin{supertabular}{|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|p{0.95in}|}
\hline
\begin{small} index fm OGLPlot1 \end{small} &
\begin{small} prop EarthMJ2000Eq EarthMJ2000Ec EarthFixed RaiseApogee globalBurn globalSat \end{small} &
\begin{small} Global Propagate For CallFunction Propagate EndFor \end{small} &
\begin{small} NULL \end{small} &
\begin{small} set \end{small} &
\begin{small} Global Assignment Maneuver \end{small} \\
\end{supertabular}
\end{center}

\noindent Execution of the Global command in the Function Control Sequence simply verifies that the global objects are set as specified.

\subsection{Additional Notes and Comments}

This section contains a few items that may need additional notes to fully explain the function design.

\subsubsection{Search Order for Reference Objects}

Previous builds of GMAT contain a single mapping for reference object names, the Sandbox Object Map. The function subsystem design requires the addition of two new mappings between names and object pointers: the Global Object Store and the Function Object Store. In context, a command only has access to two of these three possible mappings. The Global Object Store is visible to all commands. Commands that are part of the Mission Control Sequence also access the Sandbox Object Map, while commands in a Function Control Sequence access the function's Function Object Store. I'll refer to this second mapping -- either the Sandbox Object Map or the Function Object Store, depending on context -- as the Local Object Store.

It is possible that objects in the Local Object Store have identical names to objects in the Global Object Store. As an example, both the dot and cross functions described in the function example in this document use local objects named vec1 and vec2. If one of these functions declared vec1 as a global object, a call to execute that function would move the local vec1 object into the Global Object Store.
A subsequent call to the other function would result in a case where both the Local Object Store and the Global Object Store contain an object named vec1, and the commands that use this object would need a rule that specifies how to resolve the referenced object between these two object mappings.

The general rule for resolving reference objects for this type of scenario is that local objects have precedence over global objects. When reference object pointers are located prior to executing a command, the Local Object Store is searched first for the named object. If the Local Object Store does not contain the reference object, then the Global Object Store is used to resolve the reference. If the object is not found there either, an exception is thrown stating that a referenced object -- whose name is stated in the exception message -- was not available for use by the command.

\subsubsection{Identifying Global Objects using the isGlobal Flag}

The GmatBase base class gains a new attribute as part of the function design. This attribute is a boolean data member named isGlobal, which defaults to false in the base class. The isGlobal attribute can be accessed using the Get/SetBooleanParameter methods through the script identifier ``MakeGlobal''. Thus, in parameter mode, the following scripting:

\begin{quote}
\begin{verbatim}
Create Spacecraft Satellite
Satellite.MakeGlobal = true
\end{verbatim}
\end{quote}

\noindent specifies that the Spacecraft named Satellite is a global object, and should be placed in the Global Object Store when that mapping is populated -- for example, as part of the global object mapping described in Initialization Step 3 (see section~\ref{section:GlobalObjectManagement}).

The isGlobal flag is used by GMAT's GUI to determine the state of the Global checkbox on resource panels -- if the isGlobal flag is true, then a call to the Get\-Boolean\-Parameter(\-``MakeGlobal'') method returns true, and the box is checked. When a user changes the state of the checkbox, that change is passed to the object using a call to the Set\-Boolean\-Parameter(\-``MakeGlobal'', newValue) method.

In the Sandbox, the Global command moves local objects into the Global Object Store. When the command moves an object, it also sets the isGlobal flag so that queries made to the object can identify that the object is globally accessible. This state data can be used by the Create command to determine if an object that it manages has been added to the Global Object Store, and therefore does not need to be resent to an object map.
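A minimal sketch of this flag, assuming nothing about GmatBase beyond what is stated above, looks like the following. The class name ObjectBase is invented so the sketch is not mistaken for the real GmatBase interface; the ``MakeGlobal'' identifier and the Get/SetBooleanParameter method names come from the design described here.

\begin{quote}
\begin{verbatim}
#include <iostream>
#include <string>

// Minimal stand-in for the isGlobal bookkeeping.  The real base class is
// GmatBase and has a much richer parameter system; the class name ObjectBase
// is used here so the sketch is not mistaken for the actual interface.
class ObjectBase {
public:
    bool GetBooleanParameter(const std::string& id) const {
        if (id == "MakeGlobal") return isGlobal;
        return false;                 // the real code handles other identifiers
    }
    bool SetBooleanParameter(const std::string& id, bool value) {
        if (id == "MakeGlobal") { isGlobal = value; return true; }
        return false;
    }
private:
    bool isGlobal = false;            // defaults to false, as in the design
};

int main() {
    ObjectBase satellite;

    // Equivalent of the scripting:  Satellite.MakeGlobal = true
    satellite.SetBooleanParameter("MakeGlobal", true);

    // The GUI (or the Create command) can now query the flag.
    std::cout << std::boolalpha
              << satellite.GetBooleanParameter("MakeGlobal") << "\n";   // true
    return 0;
}
\end{verbatim}
\end{quote}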
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[a4paper, total={6in, 9.5in}]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{minted}
\usepackage{graphicx}
\graphicspath{{images/}}

\title{Sheet 10}
\author{Digdarshan Kunwar}
\date{November 2018}

\begin{document}

\maketitle

\section*{Problem 10.1}
\begin{minted}{C}
#include <unistd.h>

int main(int argc, char *argv[])
{
    for (; argc > 1; argc--) {
        if (0 == fork()) {
            (void) fork();
        }
    }
    return 0;
}
\end{minted}
\textbf{a)}\\\\
\textbf{Case 1} ./foo, \\
$argc = 1$ \\
$\bullet$ The parent process never enters the loop\\
$\bullet$ There will be 0 child processes as there is no loop \\
\texttt{Total Processes: 1} \texttt{Child Processes: 0}\\
\\
\textbf{Case 2} ./foo a,\\
$argc = 2$\\
$\bullet$ The main process runs the loop once\\
$\bullet$ The child of main never runs the loop again; it calls fork() one more time inside the if branch\\
$\bullet$ There will be 2 child processes in total \\
\texttt{Total Processes: 3} \texttt{Child Processes: 2}\\
\\
\textbf{Case 3} ./foo a b,\\
$argc = 3$.\\
$\bullet$ There are already 2 child processes when $argc == 2$, which gives 3 processes in total.\\
$\bullet$ The main process will also fork again when it loops.\\
$\bullet$ In total, $3 \cdot 3 = 9$ processes, which means 8 child processes. \\
\texttt{Total Processes: 9} \texttt{Child Processes: 8}\\
\\
We can observe that in every iteration each process splits into itself, a child, and a grandchild (the child's own fork). Any process other than the main parent (the first process) counts as a child process, so every process entering an iteration becomes 3 processes entering the next one.\\
So we can relate it to a formula $\text{Total Processes} = 3^{n}$ for the total number of processes including the main parent process,\\
where $n = \texttt{argc} - 1$ (\texttt{argc} is the argument count).\\
Also, the number of child processes can be written as $\text{Total Processes} - 1$.\\\\
\textbf{Case 4} ./foo a b c, \\
$argc=4$\\
$\bullet$ Here we have a total of $3^{3}$ processes.\\
$\bullet$ Excluding the main parent process we have 26 child processes.\\
\texttt{Total Processes: 27} \texttt{Child Processes: 26}\\
\\
\textbf{Case 5} ./foo a b c d, \\
$argc=5$\\
$\bullet$ Here we have a total of $3^{4}$ processes.\\
$\bullet$ Excluding the main parent process we have 80 child processes.\\
\texttt{Total Processes: 81} \texttt{Child Processes: 80}\\
\\
\subsection*{\textbf{b)}}
The code is given below:
\begin{minted}{C}
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    for (; argc > 0; argc--) {
        int pid;
        pid = fork();
        if (pid == 0) {
            // delay the child process
            sleep(10);
            exit(1);
        } else {
            // print the process id of the child
            printf("The process id is : %d\n", pid);
        }
    }
    printf("\nThe main process ended\n\n");
    return 0;
}
\end{minted}
Here the child, or zombie, process is still running even after the parent process ends with its message.\\
So we can see the zombie processes with the top utility, because they are still running the sleep() call.\\
They can be seen in the process list, and we can check their PIDs (process IDs).\\
The implementation is shown in the attached pictures as well.\\
\textbf{Creating zombies with different numbers of arguments:}\\
\includegraphics[scale=0.27]{image/1.png}\\
\includegraphics[scale=0.27]{image/2.png}\\
\includegraphics[scale=0.27]{image/3.png}\\
\includegraphics[scale=0.27]{image/4.png}\\
\includegraphics[scale=0.27]{image/5.png}\\
\noindent\rule{18cm}{0.4pt}
\section*{Problem 10.2}
The code is given below:\\
\begin{minted}{cpp}
#include <iostream>
#include <dirent.h>
#include <string>
#include <algorithm>

using namespace std;

/*
This program recursively shows the content of the current working directory
if no arguments are passed to the main() function of the program.
Otherwise, it starts a recursive listing for each of the arguments that are
passed to the main() function.
*/

void search (char* name);

int main (int charc, char *charv[])
{
    // "." denotes the current working directory
    char root[] = ".";
    if (charc == 1){
        search(root);
        return 0;
    }else{
        // if more than one argument is given, list each of them
        for (int i = 1; i < charc; i++){
            // call the search function for every argument
            search(charv[i]);
        }
    }
    return 0;
}

// This is the search function
void search (char* name){
    DIR *dir;
    char* fname;
    string loc, dirName;
    struct dirent *current;
    dir = opendir(name);
    // check whether the directory could be opened
    if (dir != NULL){
        while ((current = readdir(dir)) != NULL){
            // fname is the name of the current directory entry
            fname = current->d_name;
            // entries starting with '.' are not printed
            if (current->d_name[0] != '.'){
                if (name[0] == '.' && name[1] == '\0'){
                    cout << fname << endl;
                    // location of the contents (files) of that directory, relative to the current directory
                    loc = fname;
                }else{
                    cout << name << "/" << fname << endl;
                    if (current->d_type == DT_DIR){
                        // initial location of the file is name
                        loc = name;
                        // dirName is the name of the entry that is a directory
                        dirName = fname;
                        // loc is the location of the contents (files) of that directory, relative to the current directory
                        loc = loc + "/" + dirName;
                    }
                }
            }
            /* if the entry is a directory then we also have to read
               the files that this directory contains */
            if (current->d_type == DT_DIR && current->d_name[0] != '.'){
                // copy the string loc into a freshly allocated char buffer nloc
                char *nloc = new char[loc.size() + 1];
                copy(loc.begin(), loc.end(), nloc);
                nloc[loc.size()] = '\0';
                // recursively read the files in that directory
                search(nloc);
                delete[] nloc;
            }
        }
        // close the directory again
        closedir(dir);
    }else{
        // the directory could not be opened (dir is NULL)
        cout << "Error opening the Directory : " << name << endl;
    }
}
\end{minted}

\end{document}
{ "alphanum_fraction": 0.6021256932, "avg_line_length": 30.4788732394, "ext": "tex", "hexsha": "8be8f53c4f00ca563d6a0b6802c5a9a145780013", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d56a70c13ce3863bcf3140990fab5b634b30a2af", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Devang-25/CS-Jacobs", "max_forks_repo_path": "Jacobs_ICS/HomeWorks/Sheet 10/Sheet 10 Latex/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d56a70c13ce3863bcf3140990fab5b634b30a2af", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Devang-25/CS-Jacobs", "max_issues_repo_path": "Jacobs_ICS/HomeWorks/Sheet 10/Sheet 10 Latex/main.tex", "max_line_length": 270, "max_stars_count": null, "max_stars_repo_head_hexsha": "d56a70c13ce3863bcf3140990fab5b634b30a2af", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Devang-25/CS-Jacobs", "max_stars_repo_path": "Jacobs_ICS/HomeWorks/Sheet 10/Sheet 10 Latex/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1747, "size": 6492 }
% Education \ifswedish \section{Utbildning} \position {aug 2011 \textemdash{} okt 2016} {Doktorsexamen} {Uppsala universitet (Sverige)} {\textit{Hydrologi} \newline Doktorsavhandling: Informationsbehov inom vattenförvaltning och riskhantering: Värdet av hårda hydro-meteorologiska data och mjuk information.} \position {aug 2009 \textemdash{} jun 2011} {Masterexamen} {Uppsala universitet (Sverige)} {\textit{Geovetenskaper (Hydrologi/Hydrogeologi)} \newline Mastersavhandling: Modelling Climatic and Hydrological Variability in Lake Babati, Northern Tanzania.} \position {sep 2004 \textemdash{} jun 2009} {Kandidatexamen} {Universitat Autònoma de Barcelona (Spanien)} {\textit{Geologi}} \position {aug 2007 \textemdash{} jun 2008} {Utbytesstudent} {Université Joseph Fourier --- Grenoble I (Frankrike)} {\textit{Geovetenskap}} \else \section{Education} \position {aug 2011 \textemdash{} oct 2016} {Doctor of Philosophy (PhD)} {Uppsala University (Sweden)} {\textit{Hydrology and Water Resources Science} \newline Thesis: Information Needs for Water Resource and Risk Management: Hydro-Meteorological Data Value and Non-Traditional Information.} \position {aug 2009 \textemdash{} jun 2011} {Master of Science (MSc)} {Uppsala University (Sweden)} {\textit{Earth Sciences (Hydrology/Hydrogeology)} \newline Thesis: Modelling Climatic and Hydrological Variability in Lake Babati, Northern Tanzania.} \position {sep 2004 \textemdash{} jun 2009} {Bachelor of Science (BSc)} {Universitat Autònoma de Barcelona (Spain)} {\textit{Geology}} \position {aug 2007 \textemdash{} jun 2008} {Exchange student} {Université Joseph Fourier --- Grenoble I (France)} {\textit{Geology and Earth Sciences}} \fi
{ "alphanum_fraction": 0.6906624935, "avg_line_length": 40.7872340426, "ext": "tex", "hexsha": "79e0dea34841e7014d3ebf80412a1901442e2f7b", "lang": "TeX", "max_forks_count": 11, "max_forks_repo_forks_event_max_datetime": "2022-01-04T18:03:28.000Z", "max_forks_repo_forks_event_min_datetime": "2018-01-30T05:27:35.000Z", "max_forks_repo_head_hexsha": "a4e7f1dee9d05e047b7d62886ecc9520278d8d7c", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "GironsLopez/GironsLopez-Resume", "max_forks_repo_path": "sections/education.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a4e7f1dee9d05e047b7d62886ecc9520278d8d7c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "GironsLopez/GironsLopez-Resume", "max_issues_repo_path": "sections/education.tex", "max_line_length": 194, "max_stars_count": 21, "max_stars_repo_head_hexsha": "a4e7f1dee9d05e047b7d62886ecc9520278d8d7c", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "GironsLopez/GironsLopez-Resume", "max_stars_repo_path": "sections/education.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-08T09:47:32.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-09T15:03:18.000Z", "num_tokens": 559, "size": 1917 }
\chapter{Theory of Algebraic Diagrammatic Construction}

Essentially, ADC is a kind of many-body perturbation theory, whose basic idea is to split the Hamiltonian into an unperturbed part and a perturbation, and to sum the contributions of the different perturbation orders. If the summation runs over all contributions from all orders, the exact energy is obtained. Numerically this is obviously impossible, so a simple idea is to truncate the summation at some particular order. However, as the order increases, on the one hand the expressions of a direct perturbation expansion become very complicated, and on the other hand the size-consistency problem appears again. Although it can be shown that in the first few orders all ill-behaved terms that break size consistency eventually cancel, a general argument and proof is needed for higher orders.

Moreover, instead of being designed to solve for the ground state, as all of the methods discussed previously are, ADC is designed for ionization potentials, electron affinities and excited states. Interestingly, all of these different targets are treated within the same theoretical framework, namely that of propagators, Green functions and Feynman diagrams. Their similarity is that all of these processes involve the gain and loss of electrons, which is obvious in the ionization-potential and electron-affinity cases. An excited state can be viewed as the gain of an electron at a higher energy together with the loss of one at a lower energy. Thus the number of electrons is never conserved in ADC, which is hard to deal with in the ordinary quantum-mechanical framework, so a new tool is needed.

For these purposes a many-body field-theory approach is required, which originates from quantum field theory, originally developed for theoretical elementary-particle physics. Once quantum field theory had been constructed, its ideas were quickly transferred to many-body physics, where they became the basis of many-body perturbation theory. Many-body field theory is formulated in the language of second quantization. We already used a little second quantization, for convenience, when discussing the post-Hartree--Fock methods, but we did not give a formal definition of the notation we used. Thus, in this chapter we first formally introduce second quantization, Green functions and Feynman diagrams. Then we discuss how these concepts are applied in ADC to calculate the three kinds of energies mentioned above. Finally, we discuss the concept of intermediate states and its relation to size consistency and compactness, and prove that ADC is both a canonical and size-consistent method.
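
For concreteness, the perturbative splitting referred to at the beginning of this chapter has the familiar schematic form (the equation below is only a generic reminder of such an expansion; the particular partitioning and the working equations used in ADC are developed in the later sections)
\[
H = H_0 + H_1,
\qquad
E = E^{(0)} + E^{(1)} + E^{(2)} + \cdots,
\]
and truncating this series at a finite order yields the approximate, and in general not manifestly size-consistent, schemes mentioned above.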
{ "alphanum_fraction": 0.8167619048, "avg_line_length": 97.2222222222, "ext": "tex", "hexsha": "95543b476d46be5c16e0514874d71e2e78191550", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6ed40c7edf566436e9083f67172bba966732026c", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "SUSYUSTC/bachelor_thesis", "max_forks_repo_path": "chapters/theory.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6ed40c7edf566436e9083f67172bba966732026c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "SUSYUSTC/bachelor_thesis", "max_issues_repo_path": "chapters/theory.tex", "max_line_length": 197, "max_stars_count": null, "max_stars_repo_head_hexsha": "6ed40c7edf566436e9083f67172bba966732026c", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "SUSYUSTC/bachelor_thesis", "max_stars_repo_path": "chapters/theory.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 506, "size": 2625 }
The first tutorial shows how to create a dike with multiple layers and multiple piezometric levels. Note that this tutorial is meant to show the possibilities of KratosGeoMechanics in GiD, rather than creating a realistic dike. \section{Geometry} \begin{enumerate} \item In GiD, press the icon \includegraphics{CreateLine.png} and draw the outlines of your geometry according to Table \ref{tab:tut1_outline}. Make sure to press "Join" in the Create point procedure when connecting the last line to the first point. % \begin{table}[h!] % \centering % \caption{Outline geometry tutorial 1} % \label{tab:tut1_outline} % \begin{tabular}{llll} % Point ID & x-coordinate & y-coordinate & Escape \\ % 1 & -50, -15 & 50, -15 & False \\ % 2 & 50 & -15 & False \\ % 3 & 40 & 0 & False \\ % 4 & 10 & 0 & False \\ % 5 & 0 & 5 & False \\ % 6 & -5 & 5 & False \\ % 7 & -15 & 0 & False \\ % 8 & -50 & 0 & False \\ % 1 & -50 & -15 & True % \end{tabular} % \end{table} \begin{table}[h!] \centering \caption{Outline geometry tutorial 1} \label{tab:tut1_outline} \begin{tabular}{lll} Line ID & Point 1 (x,y)& Point 2 (x,y) \\ 1 & -50, -15 & 50, -15 \\ 2 & 50, -15 & 50, 0 \\ 3 & 50, 0 & 10, 0 \\ 4 & 10, 0 & 0, 5 \\ 5 & 0, 5 & -5, 5 \\ 6 & -5, 5 & -15, 0 \\ 7 & -15, 0 & -50, 0 \\ 8 & -50, 0 & -50, -15 \\ \end{tabular} \end{table} \item Now create lines to set the outlines of the soil layers. According to Table \ref{tab:tut1_outline_sl}. % \begin{table}[h!] % \centering % \caption{Outline soil layer tutorial 1} % \label{tab:tut1_outline_sl} % \begin{tabular}{llll} % Point ID & x-coordinate & y-coordinate & Escape \\ % 9 & -50, -10 & -10 & False \\ % 10 & 50 & -10 & True \\ % 7 & -15 & 0 & False \\ % 4 & 10 & 0 & True \\ % \end{tabular} % \end{table} \begin{table}[h!] \centering \caption{Outline soil layer tutorial 1} \label{tab:tut1_outline_sl} \begin{tabular}{lll} Line ID & Point 1 (x,y)& Point 2 (x,y) \\ 9 & (-50, -10) & (50, -10) \\ 10 & (-15, 0) & (10, 0 ) \\ \end{tabular} \end{table} \item Create vertical lines at the location of the turning points of the phreatic level which will be created in a further step, according to Table \ref{tab:tut1_verticals_water}. % \begin{table}[h!] % \centering % \caption{Verticals turning points phreatic line tutorial 1} % \label{tab:tut1_verticals_water} % \begin{tabular}{llll} % Point ID & x-coordinate & y-coordinate & Escape \\ % 11 & -7.5 & 5 & False \\ % 12 & -7.5 & -15 & True \\ % 4 & -15 & 0 & False \\ % 13 & 10 & 0 & True \\ % \end{tabular} % \end{table} \begin{table}[h!] \centering \caption{Verticals turning points phreatic line tutorial 1} \label{tab:tut1_verticals_water} \begin{tabular}{lll} Line ID & Point 1 (x,y)& Point 2 (x,y) \\ 11 & (-7.5, 5) & (-7.5, -15) \\ 12 & (-15, 0) & (10, 0) \\ \end{tabular} \end{table} \item Create all the intersections between the lines by clicking on the \includegraphics{divideLineAtIntersectLine.png} button and selecting everything. The outlines should look as displayed in Figure \ref{fig:tut1_all_outline}. Further references to line ID's are done using the line ID's as shown in the figure. \begin{figure}[h!] \includegraphics[width=0.9\textwidth]{outlinesTutorial1.png} \caption{All outlines tutorial 1} \label{fig:tut1_all_outline} \end{figure} \item Create surfaces by clicking on the \includegraphics{createNurbsSurface.png} button and selecting the lines as displayed in Table \ref{tab:tut1_surfaces}. \begin{table}[h!] 
\centering \caption{Surfaces tutorial 1} \label{tab:tut1_surfaces} \begin{tabular}{ll} Surface ID & line ID's \\ 1 & (7, 27, 35, 33, 21) \\ 2 & (28, 25, 34, 35) \\ 3 & (3, 16, 24, 25) \\ 4 & (33, 36, 13, 22) \\ 5 & (34, 26, 31, 36) \\ 6 & (24, 15, 32, 26) \\ 7 & (18, 29, 27) \\ 8 & (17, 5, 4, 28, 29) \\ \end{tabular} \end{table} \item Delete line (19) and the attached point by clicking on the \includegraphics{deleteButton.png} and the \includegraphics{deleteAllButton.png} in the Geometry toolbar. The geometry should look like as displayed in Figure \ref{fig:tut1_geometry}. Further references to surface ID's are made to the ID's as shown in the figure. \begin{figure}[h!] \includegraphics[width=0.9\textwidth]{geometryTut1.png} \caption{Geometry tutorial 1} \label{fig:tut1_geometry} \end{figure} \end{enumerate} \section{Boundary conditions} \begin{enumerate}[resume] \item The next step is setting the boundary conditions. In this tutorial, the side boundaries can move freely in vertical direction; the bottom boundary is fixed. In the top menu bar, select \textit{GeoMechanicsApplication->Dirichlet Constraints}. The window as shown in Figure \ref{fig:tut1_dirichlet_constraints_window} should appear. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{dirichletConstraintsWindow.png} \caption{Dirichlet constraints window} \label{fig:tut1_dirichlet_constraints_window} \end{figure} \item For the side boundaries, deselect "\textit{SOLID DISPLACEMENT Y}". Now in the bottom left corner of the Dirichlet Constaints window, fill in "side boundary". In this same window click on the line button \includegraphics{assignToLine.png}. And click on the create new group button, \includegraphics{createNewGroup}. Now select the lines (21, 22, 15, 16) in the geometry and press "ESC". \item For the bottom boundary, in the Dirichlet Constaints window, select all the \textit{SOLID DISPLACEMENT} buttons and keep the values at 0. Name this boundary "bottom boundary", and assign this boundary condition to the lines (13, 31 ,32). \end{enumerate} \section{Materials} \begin{enumerate}[resume] \item Now the materials have to be assigned to the surfaces. In this tutorial, 3 materials will be created. In the top menu bar, select \textit{GeoMechanicsApplication->Elements}. A new window should appear. \item In the elements window which appeared, from the top drop-down menu, select "soil-drained". And fill in the values as shown in Figure \ref{fig:tut1_clay_mat}. For the parameter, "UDSM Name", fill in the address of the "MohrCoulomb.dll", including extension. For the description of the Mohr Coulomb parameters, see @@@. Assign the material to the surfaces (1, 2, 3). \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{clayMaterialWindowTut1.png} \caption{Clay parameters tutorial 1} \label{fig:tut1_clay_mat} \end{figure} \item For the second material, select "soil-drained" and fill in the parameters as shown in Figure \ref{fig:tut1_sand_mat}. Again for the "UDSM Name" fill in the address if the "MohrCoulomb.dll", including extension. Assign the material to the surfaces (4, 5, 6) \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{sandMaterialWindowTut1.png} \caption{Sand parameters tutorial 1} \label{fig:tut1_sand_mat} \end{figure} \item For the last material, again select "soil-drained" and fill in the parameters as shown in Figure \ref{fig:tut1_dike_mat}, again referring to "MohrCoulomb.dll" for "UDSM Name". Assign the material to surfaces (7, 8). \begin{figure}[h!] 
\includegraphics[width=0.5\textwidth]{dikeMaterialWindowTut1.png} \caption{Dike parameters tutorial 1} \label{fig:tut1_dike_mat} \end{figure} \end{enumerate} \section{Water} \begin{enumerate}[resume] \item The first water level to be added is the river water level. Click on \textit{GeoMechanicsApplication -> Dirichlet Constraints}. In the Dirichlet Constraints window, select Fluid Pressure from the top drop-down menu. Fill in the parameters as shown in Figure \ref{fig:tut1_river_level}. Assign this fluid pressure to line 7. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{riverLevelTut1.png} \caption{River water level tutorial 1} \label{fig:tut1_river_level} \end{figure} \item Now rename the previous Fluid pressure condition to "\verb|river_level_surface|". And assign the condition to surface 7 \item The water pressure due to the river water level is now set, however since the water level lies above the surface, it is required to explicitly define the normal load due to the water. Click on \textit{GeoMechanicsApplication -> Loads}. From the top drop-down menu in de Loads window, select \textit{Normal Load}. Fill in the values as shown in Figure \ref{fig:tut1_river_load}. Assign the condition to the lines 17 and 18. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{riverLoadTut1.png} \caption{River load tutorial 1} \label{fig:tut1_river_load} \end{figure} \item The water level in the dike is an inclined phreatic line. Click on \textit{GeoMechanicsApplication -> Dirichlet Constraints}. Select \textit{Fluid Pressure} from the top drop-down menu in the Dirichlet Constraints table and fill in the values as shown in Figure \ref{fig:tut1_dike_level}. Assign the condition to surface 8. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{dikeLevelTut1.png} \caption{Dike water level tutorial 1} \label{fig:tut1_dike_level} \end{figure} \item For the last part of the phreatic level, select \textit{fluid pressure}, choose \textit{Hydrostatic} pressure distribution. Set the reference coordinate on -15. In the Fluid pressure table fill in 0 on time 0 and 2; fill in 15 at time 3 and 4. Name the condition \verb|"polder level"| and assign the condition to line 3. \item Now for the water level in the aquifer, select \textit{fluid pressure}, choose \textit{Hydrostatic} pressure distribution. Set the reference coordinate on -15. In the Fluid pressure table fill in 0 on time 0 and 2; fill in 18 at time 3 and 4. Name the condition \verb|"aquifer_level"| and assign the condition to surface 4, 5 and 6. \item Lastly, select \textit{Interpolate line} pressure distribution, set \textit{Imposed Pressure} on \textit{Table Interpolation} and assign the condition to surface 1, 2 and 3. \end{enumerate} \section{Loads} \begin{enumerate}[resume] \item In this tutorial, the only loads will be applied are water loads and the own weight of the soils. The water loads are already applied in the previous section. To apply the soil weight, click on \textit{GeoMechanicsApplication -> Loads} and select \textit{Body Acceleration} from the drop-down menu. \item In the Loads window, for \textit{Imposed Body Acceleration Y} select table interpolation. Fill in 0 for time 0; and fill in -9.81 for time 1 and 4. Name this load \verb|"body_surface"| and assign the condition to the surfaces: 1, 2, 3, 4, 5, 6. \item For the self weight of the dike, in the Loads window, for \textit{Imposed Body Acceleration Y} select table interpolation. Fill in 0 for time 0 and 1; and fill in -9.81 for time 2 and 4. 
Name this load \verb|"body_dike"| and assign the condition to the surfaces: 7, 8. \end{enumerate} \section{Meshing} \begin{enumerate}[resume] \item On the top menu bar, click on \textit{Mesh -> Unstructured -> Assign sizes on surfaces}. Fill in 1.0 in the dialog window which pops up. And assign this size on the surfaces 7 and 8. \item Now fill in 3.0 in the dialog window and assign this size to the surfaces: 1, 2, 3, 4, 5, 6. \item On the top menu bar, click on \textit{Mesh -> Quadratic type -> Quadratic}. Such that quadratic elements are generated. \item On the top menu bar, click on \textit{Mesh -> Generate mesh...}. A Mesh generation window pops up where the element size of the mesh can be filled in. However, since the mesh size of all the surfaces is already defined in previous steps, the number which is filled in into the mesh generation window does not matter. \end{enumerate} \section{Project Parameters} \begin{enumerate}[resume] \item Before the model can be calculated, the project parameters should be filled in correctly. Click on \textit{GeoMechanicsApplication -> Project Parameters}. The problem data Window should appear. \item In the \textit{Problem data} window on the \textit{Problem Data} tab, set the \textit{Start Time} to 0.0 and the \textit{End Time} to 4.0. \item For the first calculation in this tutorial, in the \textit{Problem data} window on the \textit{Solver Settings} tab, set \textit{Displacement Relative Tolerance} on 1e-2. Keep the rest of the settings on the default values. And press \textit{Accept}. \end{enumerate} \section{Calculate} \begin{enumerate}[resume] \item In order to calculated, on the top menu bar click on \textit{Calculate -> Calculate} \item Calculation progress can be monitored by clicking on \textit{Calculate -> View Process info...} \item Make sure to save after the calculation is done \end{enumerate} \section{Post-process part 1} \begin{enumerate}[resume] \item After calculation, check the results in the post-process window by clicking on the \includegraphics{togglePostProcess.png} icon on the Standard GiD Toolbar. \item As an intermediate check before going on to the next part of the tutorial, check if the Water Pressure, the x-displacement and the y-displacement are calculated correctly. In Post-process, click on \includegraphics{contourFillIcon.png} and select: "WATER PRESSURE", then "DISPLACEMENT -> X-DISPLACEMENT" and lastly,"DISPLACEMENT -> y-DISPLACEMENT". Check if the contours are the same as shown in Figure \ref{fig:tut1_p1_results} \begin{figure}[h!] \includegraphics[width=\textwidth]{resultsTutorial1Part1.png} \caption{Contour results tutorial 1 part 1, top to bottom: water pressure, x-displacement, y-displacement} \label{fig:tut1_p1_results} \end{figure} \item Go back to the pre-process window. If the results are equal go on to the next part of the tutorial, if not, check if all input is correctly filled in. \end{enumerate} \section{Staged construction} \begin{enumerate}[resume] \item While Kratos GeoMechanics can handle staged calculation, the GiD problemtype can not. In order to perform a staged calculation, some additional actions have to be performed within and outside the Gid pre-processor. 
The following stages will be added: \begin{itemize} \item Initial stage \item Addition of dike load \item Addition of daily water level \item High water level \end{itemize} \item Stage 1: \begin{enumerate} \item Save the existing file as \verb|"tutorial_1_stage_1"| \item Go to \textit{GeoMechanicsApplication -> Dirichlet Constraints -> Excavation}. Select the option \textit{Deactivate soil}. Name the condition \verb|dike_excavation| and assign the condition to the surfaces 7 and 8. \item Go to \textit{GeoMechanicsApplication -> Problem data}. On the Problem data tab, set start and end time from 0.0 to 1.0. \item On the Solver Settings tab, set Solution type to K0- procedure. And set the Displacement Relative Tolerance on 1e-3. \item Re-mesh. \item Click on Calculate and then cancel process. Note that clicking on Calculate is necessary to generate the input files. Results of the in between stages are not relevant, therefore it is not required to wait until the calculation is finished. \item Save the project again \end{enumerate} \item Stage 2 \begin{enumerate} \item Save the existing file as \verb|"tutorial_1_stage_2"| \item Go to \textit{GeoMechanicsApplication -> Dirichlet Constraints -> Excavation}. Deselect the option \textit{Deactivate soil} of the \verb|dike_excavation| condition. \item Go to \textit{GeoMechanicsApplication -> Problem data}. On the Problem data tab, set start and end time from 1.0 to 2.0. \item On the Solver Settings tab, set Solution type to Quasi-Static. \item Re-mesh. \item Click on Calculate and then cancel process. \item Save the project again \end{enumerate} \item Stage 3 \begin{enumerate} \item Save the existing file as \verb|"tutorial_1_stage_3"| \item Go to \textit{GeoMechanicsApplication -> Problem data}. On the Problem data tab, set start and end time from 2.0 to 3.0. \item Click on Calculate and then cancel process. \item Save the project again \end{enumerate} \item Stage 4 \begin{enumerate} \item Save the existing file as \verb|"tutorial_1_stage_4"| \item Go to \textit{GeoMechanicsApplication -> Problem data}. On the Problem data tab, set start and end time from 3.0 to 4.0. \item Go to \textit{GeoMechanicsApplication -> Dirichlet Constraints -> Fluid Pressure}, set Y 2 of the dike \verb|dike_level| condition on -19. \item Click on Calculate and then cancel process. \item Save the project again \end{enumerate} \end{enumerate} \section{Calculate staged construction} A staged calculation cannot be calculated via GiD. This has to be done via the command prompt or python. \begin{enumerate}[resume] \item Take the template file "MainKratos$\_$multiple$\_$stages$\_$template.py" from ./KratosGeoMechanics/applications/GeoMechanicsApplication. And copy it to the main KratosGeoMechanics directory. Rename the file to "MainKratos.py" \item right click on "MainKratos.py" and select "edit". \item Edit the project$\_$paths variable such that it refers to the correct *.gid files. Note that the file paths should stay within the quotation marks. \item In windows explorer, go to the KratosGeoMechanics directory. In the adress-bar, type in: "cmd" and press "enter", such that command prompt is opened while already referring to the correct directory. In the command prompt, type in: "runkratos MainKratos.py". Now the calculation should start running with as many stages as preferred. \end{enumerate} \section{Post-process part 2} when all the stages are calculated, all the stages can be viewed separately in post process. \begin{enumerate}[resume] \item In GiD go to post-processing. 
\item Select "Open PostProcess" and go to the directory of your preferred stage. Click on the *.bin file to open the postprocess file. \item Note that "DISPLACEMENT" shows the displacement of the current stage. "TOTAL$\_$DISPLACEMENT" shows the total displacement over the previously calculated stages. \end{enumerate}
{ "alphanum_fraction": 0.7016679905, "avg_line_length": 51.7397260274, "ext": "tex", "hexsha": "29dfa3bf0bc335431bdc4aa83bd929a0f71546a6", "lang": "TeX", "max_forks_count": 224, "max_forks_repo_forks_event_max_datetime": "2022-03-06T23:09:34.000Z", "max_forks_repo_forks_event_min_datetime": "2017-02-07T14:12:49.000Z", "max_forks_repo_head_hexsha": "e8072d8e24ab6f312765185b19d439f01ab7b27b", "max_forks_repo_licenses": [ "BSD-4-Clause" ], "max_forks_repo_name": "lkusch/Kratos", "max_forks_repo_path": "applications/GeoMechanicsApplication/documents/tutorials/ChapterTutorial1.tex", "max_issues_count": 6634, "max_issues_repo_head_hexsha": "e8072d8e24ab6f312765185b19d439f01ab7b27b", "max_issues_repo_issues_event_max_datetime": "2022-03-31T15:03:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-01-15T22:56:13.000Z", "max_issues_repo_licenses": [ "BSD-4-Clause" ], "max_issues_repo_name": "lkusch/Kratos", "max_issues_repo_path": "applications/GeoMechanicsApplication/documents/tutorials/ChapterTutorial1.tex", "max_line_length": 435, "max_stars_count": 778, "max_stars_repo_head_hexsha": "e8072d8e24ab6f312765185b19d439f01ab7b27b", "max_stars_repo_licenses": [ "BSD-4-Clause" ], "max_stars_repo_name": "lkusch/Kratos", "max_stars_repo_path": "applications/GeoMechanicsApplication/documents/tutorials/ChapterTutorial1.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T03:01:51.000Z", "max_stars_repo_stars_event_min_datetime": "2017-01-27T16:29:17.000Z", "num_tokens": 5541, "size": 18885 }
\documentclass{article} \usepackage[T1]{fontenc} \usepackage[margin=1in]{geometry} \usepackage{palatino} \usepackage{amsthm,amsmath,amssymb} \usepackage{xcolor} \usepackage[utf8]{inputenc} \usepackage[colorlinks=true,urlcolor=blue]{hyperref} \usepackage{listings} \newcommand{\task}{\par\noindent\emph{Task:}\ } \definecolor{darkblue}{rgb}{0,0,0.5} \definecolor{lightgray}{gray}{0.75} \lstset{basicstyle=\ttfamily\small, numbers=left, xleftmargin=3em,extendedchars=true, numberstyle=\tiny\color{lightgray}, keywordstyle=\color{darkblue}, morekeywords={and, as, begin, check, do, done, downto, else, end, effect, external, finally, for, fun, function, handle, handler, if, in, match, let, new, of, operation, perform, rec, val, while, to, type, then, with}, literate=% {→}{{$\rightarrow$}}1% {×}{{$\times$}}1% {←}{{$\leftarrow$}}1% {↦}{{$\mapsto$}}1% {↝}{{$\leadsto$}}1% {…}{{$\ldots$}}1% {⇒}{{$\Rightarrow$}}1% {∈}{{$\in$}}1% {≡}{{$\equiv$}}1% {λ}{{$\lambda$}}1% {⊢}{{$\vdash$}}1% {κ}{{$\kappa$}}1% {Σ}{{$\Sigma$}}1% {Δ}{{$\Delta$}}1% {Γ}{{$\Gamma$}}1% {Θ}{{$\Theta$}}1% {₀}{{${}_0$}}1% {₁}{{${}_1$}}1% {₂}{{${}_2$}}1% {ᵢ}{{${}_\mathtt{i}$}}1% {ⱼ}{{${}_\mathtt{j}$}}1% } {\theoremstyle{definition} \newtheorem{problem}{Problem}[section]} \begin{document} \title{Algebraic effects and handlers\\(OPLSS 2018 lecture notes)} \author{Andrej Bauer\\University of Ljubljana} \date{July 2018} \maketitle These are the notes and materials for the lectures on algebraic effects and handlers at the \href{https://www.cs.uoregon.edu/research/summerschool/summer18/index.php}{Oregon programming languages summer school 2018 (OPLSS)}. The notes were originally written in Markdown and converted to {\LaTeX} semi-automatically, please excuse strange formatting. You can find all the resources at \href{https://github.com/OPLSS/introduction-to-algebraic-effects-and-handlers}{the accompanying GitHub repository}. The lectures were recorded on video that are available at the summer school web site. \hypertarget{general-resources-reading-material}{% \subsubsection*{General resources \& reading material}\label{general-resources-reading-material}} \begin{itemize} \item \href{https://github.com/yallop/effects-bibliography}{Effects bibliography} \item \href{https://github.com/effect-handlers/effects-rosetta-stone}{Effects Rosetta Stone} \item \href{http://www.eff-lang.org}{Programming language Eff} \end{itemize} \hypertarget{what-is-algebraic-about-algebraic-effects-and-handlers}{% \section{What is algebraic about algebraic effects and handlers?}\label{what-is-algebraic-about-algebraic-effects-and-handlers}} The purpose of the first two lectures is to review algebraic theories and related concepts, and connect them with computational effects. We shall start on the mathematical side of things and gradually derive from it a programming language. \hypertarget{outline}{% \subsubsection*{Outline}\label{outline}} Pretty much everything that will be said in the first two lectures is written up in \href{https://arxiv.org/abs/1807.05923}{``What is algebraic about algebraic effects and handlers?''}, which still a bit rough around the edges, so if you see a typo please \href{https://github.com/andrejbauer/what-is-algebraic-about-algebraic-effects}{let me know}. 
Contents of the first two lectures: % \begin{itemize} \item signatures, terms, and algebraic theories \item models of an algebraic theory \item free models of an algebraic theory \item generalization to parameterized operations with arbitrary arities \item sequencing and generic operations \item handlers \item comodels and tensoring of comodels and models \end{itemize} \hypertarget{problems}{% \subsection{Problems}\label{problems}} Each section contains a list of problems, which are ordered roughly in the order of difficulty, either in terms of trickiness, the amount of work, or prerequisites. I recommend that you discuss the problems in groups, and pick whichever problems you find interesting. \begin{problem}[The theory of an associative unital operation] Consider the theory $T$ of an associative operation with a unit. It has a constant $\epsilon$ and a binary operation $\cdot$ satisfying equations % \begin{align*} (x \cdot y) \cdot z &= x \cdot (y \cdot z) \\ \epsilon \cdot x &= x \\ x \cdot \epsilon &= x \end{align*} % Give a useful description of the free model of $T$ generated by a set $X$. You can either guess an explicit construction of free models and show that it has the required universal property, or you can analyze the free model construction (equivalence classes of well-founded trees) and provide a simple description of it. \end{problem} \begin{problem}[The theory of apocalypse] We formulate an algebraic theory $\mathsf{Time}$ in it is possible to explicitly record passage of time. The theory has a single unary operation $\mathsf{tick}$ and no equations. Each application of $\mathsf{tick}$ records the passage of one time step. \task Give a useful description of the free model of the theory, generated by a set $X$. \task Let a given fixed natural number $n$ be given. Describe a theory $\mathsf{Apocalypse}$ which extends the theory $\mathsf{Time}$ so that a computation crashes (aborts, necessarily terminates) if it performs more than $n$ of ticks. Give a useful description of its free models. Advice: do \emph{not} concern yourself with any sort of operational semantics which somehow ``aborts'' after $n$ ticks. Instead, use equations and postulate that certain computations are equal to an aborted one. \end{problem} \begin{problem}[The theory of partial maps] The models of the empty theory are precisely sets and functions. Is there a theory whose models form (a category equivalent to) the category of sets and \emph{partial} functions? Recall that a partial function $f : A \hookrightarrow B$ is an ordinary function $f : S \to B$ defined on a subset $S \subseteq A$. (How do we define composition of partial functions?) \end{problem} \begin{problem}[Models in the category of models] In \href{https://arxiv.org/abs/1807.05923}{Example 1.27 of the reading material} it is calculated that a model of the theory $\mathsf{Group}$ in the category $\mathsf{Mod}(\textsf{Group})$ is an abelian group. We may generalize this idea and ask about models of theory $T_1$ in the category of models $\mathsf{Mod}(T_2)$ of theory $T_2$. The \textbf{tensor product $T_1 \otimes T_2$} of algebraic theories $T_1$ and $T_2$ is a theory such that the category of models of $T_1$ in the category $\mathsf{Mod}(T_2)$ is equivalent to the category of models of $T_1 \otimes T_2$. Hint: start by outlining what data is needed to have a $T_1$-model in $\mathsf{Mod}(T_2)$ is, and pay attention to the fact that the operations of $T_1$ must be interpreted as $T_2$-homomorphisms. 
That will tell you what the ingredients of $T_1 \otimes T_2$ should be. \end{problem} \subsubsection{Problem: Morita equivalence} It may happen that two theories $T_1$ and $T_2$ have equivalent categories of models, i.e., % \begin{equation*} \mathsf{Mod}(T_1) \simeq \mathsf{Mod}(T_2) \end{equation*} % In such a case we say that $T_1$ and $T_2$ are \textbf{Morita equivalent}. Let $T$ be an algebraic theory and $t$ a term in context $x_1, \ldots, x_i$. Define a \textbf{definitional extension $T + (\mathsf{op} {{:}{=}} t)$} to be the theory $T$ extended with an additional operation $\mathsf{op}$ and equation % \begin{equation*} x_1, \ldots, x_i \mid \mathsf{op}(x_1, \ldots, x_i) = t \end{equation*} % We say that $\mathsf{op}$ is a \textbf{defined operation}. % \task Confirm the intuitive feeling that $T + (\mathsf{op} {{:}{=}} t)$ by proving that $T$ and $T + (\mathsf{op} {{:}{=}} t)$ are Morita equivalent. \task Formulate the idea of a definitional extension so that we allow an arbitrary set of defined operations, and show that we still have Morita equivalence. \begin{problem}[The theory of a given set] Given any set $A$, define the \textbf{theory $T(A)$ of the set $A$} as follows: % \begin{itemize} \item for every $n$ and every map $f : A^n \to A$, $\mathsf{op}(f)$ is an $n$-ary operation \item for all $f : A^i \to A$, $g : A^j \to A$ and $h_1, \ldots, h_i : A^j \to A$, if % \begin{equation*} f \circ (h_1, \ldots, h_i) = g \end{equation*} % then we have an equation % \begin{equation*} x_1, \ldots, x_j \mid \mathsf{op}(f)(\mathsf{op}(h_1)(x_1,\ldots,x_j), \ldots, h_i(x_1,\ldots,x_j)) = g(x_1, \ldots, x_j) \end{equation*} \end{itemize} \task Is $T(\{0,1\})$ Morita equivalent to another, well-known algebraic theory? \end{problem} \begin{problem}[A comodel for non-determinism] In \href{https://arxiv.org/abs/1807.05923}{Example 4.6 of the reading material} it is shown that there is no comodel of non-determinism in the category of sets. Can you suggest a category in which we get a reasonable comodel of non-determinism? \end{problem} \begin{problem}[Formalization of algebraic theories] If you prefer avoiding doing Real Math, you can formalize algebraic theories and their comodels in your favorite proof assistant. A possible starting point is \href{https://gist.github.com/andrejbauer/3cc438ab38646516e5e9278fdb22022c}{this gist}, and a good goal is the construction of the free model of a theory generated by a set (or a type). Because the free model requires quotienting, you should think ahead on how you are going to do that. Some possibilities are: % \begin{itemize} \item use homotopy type theory and make sure that the types involved are h-Sets \item use setoids \item suggest your own solution \end{itemize} % It may be wiser to first show as a warm-up exercise that theories without equations have initial models, as that only requires the construction of well-founded trees (which are inductive types). \end{problem} \section{Designing a programming language} Having worked out algebraic theories in previous lectures, let us turn the equational theories into a small programming language. What we have to do: % \begin{enumerate} \item Change mathematical terminology to one that is familiar to programmers. \item Reuse existing concepts (generators, operations, trees) to set up the overall structure of the language. \item Add missing features, such as primitive types and recursion, and generally rearrange things a bit to make everything look nicer. \item Provide operational semantics. 
\item Provide typing rules. \end{enumerate} \subsection{Reading material} There are many possible ways and choices of designing a programming language around algebraic operations and handlers, but we shall mostly rely on Matija Pretnar's tutorial \href{http://www.eff-lang.org/handlers-tutorial.pdf}{An Introduction to Algebraic Effects and Handlers. Invited tutorial paper}. A more advanced treatment is available in \href{https://arxiv.org/abs/1306.6316}{An effect system for algebraic effects and handlers}. \subsection{Change of terminology} \begin{itemize} \item The elements of $\mathsf{Free}_\Sigma(V)$ are are \textbf{computations} (instead of trees). \item The elements of $V$ are \textbf{values} (instead of generators). \item We speak of \textbf{value types} (instead of sets of generators). \item We speak of \textbf{computation type} (instead of free models). \end{itemize} Henceforth we ignore equations. \subsection{Abstract syntax} We add only one primitive type, namely $\mathsf{bool}$. Other constructs (integers, products, sums) are left as exercises. \noindent Value: % \begin{lstlisting} v ::= x (variable) | false (boolean constants) | true | h (handler) | λ x . c (function) \end{lstlisting} % Handler: % \begin{lstlisting} h ::= handler { return x ↦ c_ret, ... opᵢ(x, κ) ↦ cᵢ, ... } \end{lstlisting} % Computation: % \begin{lstlisting} c ::= return v (pure computation) | if v then c₁ else c₂ (conditional) | v₁ v₂ (application) | with v handle c (handling) | do x ← c₁ in c₂ (sequencing) | op (v, λ x . c) (operation call) | fix x . c (fixed point) \end{lstlisting} % We introduce \textbf{generic operations} as syntactic abbreviation and let $\mathsf{op}\;v$ stand for $\mathsf{op}(v, \lambda x . \mathsf{return}\; x)$. \subsection{Operational semantics} We provide small-step semantics, but big step semantics can also be given (see reading material). In the rules below \lstinline{h} stands for % \begin{lstlisting} handler { return x ↦ c_ret, ... opᵢ(x,y) ↦ cᵢ, ... } \end{lstlisting} % We write \lstinline{e₁[e₂/x]} for \lstinline{e₁} with \lstinline{e₂} substituted for \lstinline{x}. The operational rules are: % \begin{lstlisting} ________________________________ (if true then c₁ else c₂) ↦ c₁ _________________________________ (if false then c₁ else c₂) ↦ c₂ ______________________ (λ x . c) v ↦ c[v/x] _____________________________________ with h handle return v ↦ c_ret[v/x] _____________________________________________________________ with h handle opᵢ(v,κ) ↦ cᵢ[v/x, (λ x . with h handle κ x)/y] _________________________________ do x ← return v in c₂ ↦ c₂[v/x] _______________________________________________________ do x ← op(v, κ) in c₂ ↦ op(v, λ y . do x ← κ y in c₂) ______________________________ fix x . c ↦ c[(fix x . c)/x] \end{lstlisting} \subsection{Effect system} \subsubsection{Value and computation types} Value type: % \begin{lstlisting} A, B := bool | A → C | C ⇒ D \end{lstlisting} % Computation type: % \begin{lstlisting} C, D := A!Δ \end{lstlisting} % Dirt: % \begin{lstlisting} Δ ::= {op₁, …, opⱼ} \end{lstlisting} % The idea is that a computation which returns values of type \lstinline{A} and \emph{may} perform operations \lstinline{op₁, …, opⱼ} has the computation type \lstinline|A!{op₁, …, opⱼ}|. 
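
For example (using the signature notation introduced just below, and a hypothetical operation \lstinline{op} that takes a boolean and returns a boolean), a computation that performs an operation call must mention that operation in its dirt, whereas a pure computation can be typed with empty dirt:
%
\begin{lstlisting}
   op : bool ↝ bool                               (assumed, for illustration)

   ⊢ (do x ← op true in return x) : bool!{op}
   ⊢ (return true)                : bool!{}
\end{lstlisting}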
\subsubsection{Signature} We presume that some way of declaring operations is given, i.e., that we have a signature \lstinline{Σ} which lists operations with their parameters and arities: % \begin{lstlisting} Σ = { …, opᵢ : Aᵢ ↝ Bᵢ, … } \end{lstlisting} % Note that the the parameter and the arity types \lstinline{Aᵢ} and \lstinline{Bᵢ} are both value types. \subsubsection{Typing rules} A typing context assigns value types to free variables: % \begin{lstlisting} Γ ::= x₁:A₁, …, xᵢ:Aᵢ \end{lstlisting} % We think of \lstinline{Γ} as a map which takes variables to their types. There are two forms of typing judgment: % \begin{enumerate} \item \lstinline{Γ ⊢ v : A} -- value \lstinline{v} has value type \lstinline{A} in context \lstinline{Γ} \item \lstinline{Γ ⊢ c : C} -- computation \lstinline{c} has computation type \lstinline{C} in context \lstinline{Γ} \end{enumerate} Rules for value typing: % \begin{lstlisting} Γ(x) = A _________ Γ ⊢ x : A ________________ Γ ⊢ false : bool ________________ Γ ⊢ true : bool Γ, x : A ⊢ c_ret : B!Θ Γ, x : Pᵢ, κ : Aᵢ → B!Θ ⊢ cᵢ : B!Θ (for each opᵢ : Pᵢ ↝ Aᵢ in Δ) _______________________________________________________________________ Γ ⊢ (handler { return x ↦ c_ret, ... opᵢ(x) κ ↦ cᵢ, ... }) : A!Δ ⇒ B!Θ Γ, x:A ⊢ c : C _____________________ Γ ⊢ (λ x . c) : A → C \end{lstlisting} % Rules for computation typing: % \begin{lstlisting} Γ ⊢ v : A __________________ Γ ⊢ return v : A!Δ Γ ⊢ v : bool Γ ⊢ c₁ : C Γ ⊢ c₂ : C ________________________________________ Γ ⊢ (if v then c₁ else c₂) : C Γ ⊢ v₁ : A → C Γ ⊢ v₂ : A ____________________________ Γ ⊢ v₁ v₂ : C Γ ⊢ v : C ⇒ D Γ ⊢ c : C ___________________________ Γ ⊢ (with v handle c) : D Γ ⊢ c₁ : A!Δ Γ, x:A ⊢ c₂ : B!Δ _________________________________ Γ ⊢ (do x ← c₁ in c₂) : B!Δ Γ ⊢ v : Aᵢ opᵢ ∈ Δ opᵢ : Aᵢ ↝ Bᵢ ___________________________________ Γ ⊢ op v : Bᵢ!Δ Γ, x:A ⊢ c : A!Δ _____________________ Γ ⊢ (fix x . c) : A!Δ \end{lstlisting} \subsection{Safety theorem} If \lstinline{⊢ c : A!Δ} then: % \begin{enumerate} \item \lstinline{c = return v} for some \lstinline{⊢ v : A} \emph{or} \item \lstinline{c = op(v, κ)} for some \lstinline{op ∈ Δ} and some value \lstinline{v} and continuation \lstinline{κ}, \emph{or} \item \lstinline{c ↦ c'} for some \lstinline{⊢ c' : A!Δ}. \end{enumerate} % For a mechanized proof see \href{https://arxiv.org/abs/1306.6316}{An effect system for algebraic effects and handlers}. \subsection{Other considerations} The effect system suffers from the so-called \emph{poisoning}, which can be resolved if we introduce \textbf{effect subtyping}. Recursion requires that we use domain-theoretic denotational semantics. Such a semantics turns out to be adequate (but not fully abstract for the same reasons that domain theory is not fully abstract for PCF). See \href{https://arxiv.org/abs/1306.6316}{An effect system for algebraic effects and handlers} where the above points are treated carefully. \subsubsection{Problems} \begin{problem}[Products] Add simple products \lstinline{A × B} to the core language: % \begin{enumerate} \item Extend the syntax of values with pairs. \item Extend the syntax of computations with an elimination of pairs, e.g., \lstinline{do (x,y) ← c₁ in c₂}. \item Extend the operational semantics. \item Extend the typing rules. \end{enumerate} \end{problem} \begin{problem}[Sums] Add simple sums \lstinline{A + B} to the core language: % \begin{enumerate} \item Extend the syntax of values with injections. 
\item Extend the syntax of computations with an elimination of sums (a suitable \lstinline{match} statement). \item Extend the operational semantics. \item Extend the typing rules. \end{enumerate} \end{problem} \begin{problem}[\lstinline{empty} and \lstinline{unit} types] Add the \lstinline{empty} and \lstinline{unit} types to the core language. Follow the same steps as in the previous exercises. \end{problem} \begin{problem}[Non-terminating program] Define a program which prints infinitely many booleans. You may assume that the \lstinline{print : bool → unit} operation is handled appropriately by the runtime environment. For extra credit, make it "funny". \end{problem} \begin{problem}[Implementation] Implement the core language from Matija Pretnar's \href{http://www.eff-lang.org/handlers-tutorial.pdf}{tutorial}. To make it interesting, augment it with recursive function definitions, integers, and product types. Consider implementing the language as part of the \href{http://plzoo.andrej.com}{Programming Languages Zoo}. \end{problem} \hypertarget{programming-with-algebraic-effects-and-handlers}{% \section{Programming with algebraic effects and handlers}\label{programming-with-algebraic-effects-and-handlers}} In the last lecture we shall explore how algebraic operations and handlers can be used in programming. \hypertarget{eff}{% \subsection{Eff}\label{eff}} There are several languages that support algebraic effects and handlers. The ones most faithful to the theory of algebraic effects are \href{http://www.eff-lang.org}{Eff} and the \href{https://github.com/ocamllabs/ocaml-multicore}{multicore OCaml}. They have very similar syntax, and we could use either, but let us use Eff, just because it was the first language with algebraic effects and handlers. You can \href{http://www.eff-lang.org/try/}{run Eff in your browser} or \href{https://github.com/matijapretnar/eff/\#installation--usage}{install it} locally. The page also has a quick overview of the syntax of Eff, which mimics the syntax of OCaml. \hypertarget{reading-material-1}{% \subsection{Reading material}\label{reading-material-1}} We shall draw on examples from \href{http://www.eff-lang.org/handlers-tutorial.pdf}{An introduction to algebraic effects and handlers} and \href{https://arxiv.org/abs/1203.1539}{Programming with algebraic effects and handlers}. Some examples can be seen also at the \href{https://github.com/effect-handlers/effects-rosetta-stone}{Effects Rosetta Stone}. Other examples, such as I/O and redirection can be seen at the \href{http://www.eff-lang.org/try/}{try Eff} page. \hypertarget{basic-examples}{% \subsection{Basic examples}\label{basic-examples}} \subsubsection*{Exceptions} \label{sec:exceptions} \lstinputlisting{./eff-examples/exception.eff} \subsubsection*{State} \label{sec:state} \lstinputlisting{./eff-examples/state.eff} \hypertarget{multi-shot-handlers}{% \subsection{Multi-shot handlers}\label{multi-shot-handlers}} A handler has access to the continuation, and it may do with it whatever it likes. 
We may distinguish handlers according to how many times the continuation is invoked: \begin{itemize} \item an \textbf{exception-like} handler does not invoke the continuation \item a \textbf{single-shot} handler invokes the continuation exactly once \item a \textbf{multi-shot} handler invokes the continuation more than once \end{itemize} Of course, combinations of these are possible, and there are handlers where it's difficult to ``count'' the number of invocations of the continuation, such as multi-threading below. An exception-like handler is, well, like an exception handler. A single-shot handler appears to the programmer as a form of dynamic-dispatch callbacks: performing the operation is like calling the callback, where the callback is determined dynamically by the enclosing handlers. The most interesting (and confusing!) are multi-shot handlers. Let us have a look at one such handler. \hypertarget{ambivalent-choice}{% \subsubsection{Ambivalent choice}\label{ambivalent-choice}} Ambivalent choice is a computational effect which works as follows. There is an exception $\mathsf{Fail} : \mathsf{unit} \to \mathsf{empty}$ which signifies failure to compute successfully, and an operation $\mathsf{Select} : \alpha\; \mathsf{list} \to \alpha$, which returns one of the elements of the list. It has to do return an element such that the subsequent computation does \emph{not} fail (if possible). With ambivalent choice, we may solve the $n$-queens problem (of placing $n$ queens on an $n \times n$ chess board so they do not attack each other): \lstinputlisting{./eff-examples/queens.eff} \hypertarget{cooperative-multi-threading}{% \subsection{Cooperative multi-threading}\label{cooperative-multi-threading}} Operations and handlers have explicit access to continuations. A handler need not invoke a continue, it may instead store it somewhere and run \emph{another} (previously stored) continuation. This way we get \emph{threads}. \lstinputlisting{./eff-examples/thread.eff} \hypertarget{tree-representation-of-a-functional}{% \subsection{Tree representation of a functional}\label{tree-representation-of-a-functional}} Suppose we have a \textbf{functional} % \begin{equation*} h : (\mathsf{int} \to \mathsf{bool}) \to \mathsf{bool} \end{equation*} When we apply it to a function $f : \mathsf{int} \to \mathsf{bool}$, we feel that $h \; f$ will proceed as follows: $h$ will \emph{ask} $f$ about the value $f \; x_0$ for some integer $x_0$. Depending on the result it gets, it will then ask some further question $f \; x_1$, and so on, until it provides an \emph{answer}~$a$. We may therefore represent such a functional $h$ as a \textbf{tree}: % \begin{itemize} \item the leaves are the answers \item a node is labeled by a question, which has two subtrees representing the two possible continuations (depending on the answer) \end{itemize} % We may encode this as the datatype: % \begin{verbatim} type tree = | Answer of bool | Question of int * tree * tree \end{verbatim} % Given such a tree, we can recreate the functional $h$: % \begin{verbatim} let rec tree2fun t f = match t with | Answer y -> y | Question (x, t1, t2) -> tree2fun (if f x then t1 else t2) f \end{verbatim} % Can we go backwards? Given $h$, how do we get the tree? It turns out this is not possible in a purely functional setting in general (but is possible for out specific case because $\mathsf{int} \to \mathsf{bool}$ is \emph{compact}, Google ``impossible functionals''), but it is with computational effects. 
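
Before looking at the handler, here is a purely functional sketch (written in plain OCaml, and under the assumption that we are allowed to change the type of the functional) of why the problem becomes easy once the questions are made explicit: if $h$ interacts with its argument only through an explicit ask operation, reified here as a datatype of computations, then the tree is obtained by exploring both answers at every question. The point of the Eff implementation included below is that no such change of type is needed: the questions are asked by performing an operation, and a handler that resumes the continuation once for each possible answer builds the tree for a functional of the original type $(\mathsf{int} \to \mathsf{bool}) \to \mathsf{bool}$.
%
\begin{verbatim}
(* OCaml sketch: reifying the questions of a functional as a tree.     *)
(* The names prog, ask, to_tree are ours, introduced for illustration. *)

type tree =
  | Answer of bool
  | Question of int * tree * tree

(* Computations that may ask questions about integers before answering. *)
type 'a prog =
  | Return of 'a
  | Ask of int * (bool -> 'a prog)

let rec bind p f =
  match p with
  | Return v -> f v
  | Ask (x, k) -> Ask (x, fun b -> bind (k b) f)

(* The argument handed to the functional: asking is an explicit operation. *)
let ask x = Ask (x, fun b -> Return b)

(* Reify a computation as a tree by exploring both possible answers. *)
let rec to_tree = function
  | Return b -> Answer b
  | Ask (x, k) -> Question (x, to_tree (k true), to_tree (k false))

(* A sample functional written against this interface: "f 0 && f 1". *)
let h ask = bind (ask 0) (fun b -> if b then ask 1 else Return false)

let example : tree = to_tree (h ask)
(* Question (0, Question (1, Answer true, Answer false), Answer false) *)
\end{verbatim}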
\lstinputlisting{./eff-examples/fun_tree.eff}

\hypertarget{problems-1}{%
\subsection{Problems}\label{problems-1}}

\begin{problem}[Breadth-first search]
Implement the \emph{breadth-first search} strategy for ambivalent choice.
\end{problem}

\begin{problem}[Monte Carlo sampling]
%
The \href{http://www.eff-lang.org/try/}{online Eff} page has an example showing a handler which transforms a probabilistic computation (one that uses randomness) into one that computes the \emph{distribution} of results. The handler computes the distribution in an exhaustive way that quickly leads to inefficiency. Improve it by implementing a \href{https://en.wikipedia.org/wiki/Monte_Carlo_method}{Monte Carlo} handler for estimating distributions of probabilistic computations.
\end{problem}

\begin{problem}[Recursive cows]
Contemplate the \href{https://github.com/effect-handlers/effects-rosetta-stone/tree/master/examples/recursive-cow}{recursive cows}.
\end{problem}

\end{document}

%%% Local Variables:
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7346387576, "avg_line_length": 32.5284237726, "ext": "tex", "hexsha": "ea4a0d7cace57a9adcedea677073fde9c656bc3f", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2020-07-09T19:47:04.000Z", "max_forks_repo_forks_event_min_datetime": "2018-09-04T15:30:19.000Z", "max_forks_repo_head_hexsha": "b8b81be200f4ba0ad49deb86c82cc9254ade0f89", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "williamdemeo/introduction-to-algebraic-effects-and-handlers", "max_forks_repo_path": "notes.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b8b81be200f4ba0ad49deb86c82cc9254ade0f89", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "williamdemeo/introduction-to-algebraic-effects-and-handlers", "max_issues_repo_path": "notes.tex", "max_line_length": 129, "max_stars_count": 71, "max_stars_repo_head_hexsha": "b8b81be200f4ba0ad49deb86c82cc9254ade0f89", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "williamdemeo/introduction-to-algebraic-effects-and-handlers", "max_stars_repo_path": "notes.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T21:50:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-07-16T20:41:16.000Z", "num_tokens": 7347, "size": 25177 }
\chapter{Rcssagent3d}
\label{cha:rcssagent3d}

This chapter introduces Rcssagent3D, the demo agent implementation available for SimSpark.

\section{Behaviors}

Rcssagent3D is a demo agent implementation for use with the SimSpark server. It serves as a testbed for agent implementations. It implements all low-level details of connecting and communicating with the server, and it further provides an abstract run loop. The behavior of this agent skeleton is implemented using plugins, one of which is configured and installed at run time.

\subsection{SoccerbotBehavior}

The \texttt{SoccerbotBehavior} is a minimal example of an agent that acts on a soccer field and controls our current humanoid soccer bot model. It demonstrates reading perceptor values and controlling the robot with the installed effectors in a control loop. In a way, the implemented behavior resembles the classic \texttt{hello world} program: it raises the arms of the robot and repeatedly waves hello.

\section{How to change Behaviors?}

The different behaviors of Rcssagent3D are encapsulated in classes that derive from the \texttt{Behavior} class. Classes of this type implement an interface with two functions, each of which returns a command string that is then sent to the SimSpark server.

The first function is \texttt{Init}. It is called once, when the agent has initially connected to the server. It is typically used to construct the agent representation in the server with the help of the \texttt{scene} effector and to move the agent to a suitable start position in the simulation.

The second function is called \texttt{Think}. It is called every simulation cycle and receives as a parameter the sensor data reported by the server in the last cycle. This function implements the main behavior run loop of each agent.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "user-manual"
%%% End:
\documentclass[a4paper, 12pt]{classycv} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage[english]{babel} \usepackage[pangram]{blindtext} \classycvusetheme{harvard} \classycvuselibrary{attachments} \classycvuselibrary{drawings} \classycvuselibrary{icons} \palette{blue}% \submitter{% , name=Peter Peterson%, , street=Rightward Way% , house=134 % , city=Portland% , zipcode=04019% , country=United States% , countrycode=USA% , phone=774-987-4009% , [email protected]% , website=myspace.com/peterpeterson% } \recipient{% name=Carol Corporate% , position=Head of human resources% , company=Evil Corp.% , street=Copernoll Hollow Dr.% , house=1% , city=Bosstown% , zipcode=WI 53581% , country=USA% } \begin{document} % % Cover letter % \begin{coverletter} \printSubmitter% \printDate{\today}% \printRecipient% \printSalutation{Dear Ms. Corporate:}% \Blindtext[2][10] \Blindtext[3][8] \printClosing{Best regards,} \printAttachments{Attachments: A, B} \end{coverletter} % % Resume % \begin{resume} % % Education % \vspace*{-4\baselineskip} \section{\MakeUppercase{Education}} \entry{Ph.D. in Social Anthropology}{Expected May 2017}{Harvard University}{Cambridge, MA} \begin{block} Secondary Field in Science, Technology and Society. Awarded Presidential Scholar Award in 2016. \Blindtext[2][5] \end{block} \entry{B.A. in Anthropology with Highest Honors}{May 2013}{University of California, Berkeley}{Berkeley, CA} \begin{block} Minors in Japanese and American Studies. Phi Beta Kappa. Awarded 2011 National Undergraduate Paper Prize. \end{block} \entry{Coursework in Japanese, Gender Studies, and Cultural Studies}{Sept 2011 - July 2012}{University of Tokyo}{Tokyo, Japan} % % Higher Education Experience % \section{\MakeUppercase{Higher Education Experience}} \entry{Office of Admissions Graduate Admissions Associate}{ Sept 2015 - Present}{Harvard University}{Cambridge, MA} \begin{itemize} \item Supported recruitment and outreach efforts, including Diversity Recruitment Program, 1 open house, 2 information sessions, and 2 interview days (for doctoral applicant finalists). \item Researched and contacted 27 new marketing opportunities to advertise graduate programs. \item Prepared comparative marketing report on higher education recruitment and outreach strategies for Assistant Director and Director of Admissions. \item Analyzed trends in applicant survey data to improve future recruitment and outreach efforts. \item Pre-screened 400+ graduate program applications. \item Evaluated 8 applications in mock admissions review session held by Assistant Directors. \item Provided assistance to 100+ prospective graduate students on application process. \item Aided Assistant Directors with research projects and administrative tasks. \end{itemize} \entry{Teaching Fellow}{Sept 2016 - Present}{Harvard University}{Cambridge, MA} \begin{itemize} \item Taught and facilitated 4 tutorial sections for undergraduates in medical anthropology, environmental policy, and gender studies. \item Advised 60 students on course material, research design, and extracurricular opportunities. \item Received excellent student evaluation scores that surpassed course benchmarks for teaching quality (4.67/5, with course benchmark of 4.07; and 4.47/5, with course benchmark of 4.17). \item Assisted faculty with administrative tasks and curriculum development.
\end{itemize} \entry{Political Ecology Working Group}{Program Coordinator Sept 2015 - Present}{Harvard University}{Cambridge, MA} \begin{itemize} \item Planned and implemented workshop program (14 workshop sessions per academic year). \item Facilitated introduction of speakers and discussion during workshop sessions. \item Trained incoming coordinator to assist with program, budget, and recruitment. \item Managed annual budget of \$3,000. \item Developed and launched recruitment campaign (increased membership by 500 and increased membership diversity by 4 academic disciplines and 2 university affiliations). \item Organized, executed, and fundraised \$1,600 for graduate student conference (90 attendees). \end{itemize} \entry{Department of East Asian Languages and Civilizations}{Senior Tutor Aug 2015 - Present}{Harvard University}{Cambridge, MA} \begin{itemize} \item Advised 2 undergraduates on senior theses concerning East Asia, and edited thesis drafts. \item Evaluated and assigned grades for theses while serving as member of faculty committee. \end{itemize} % % Higher Education Experience % \section{\MakeUppercase{Additional Experience}} \entry{Contributing Editor}{Dec 2016 - Present}{Cultural Anthropology (Journal)}{} \begin{itemize} \item Developed content for and strategized branding of journal through social media activities (Twitter, Facebook) as part of Social Media Team. \item Analyzed data (Google Analytics) to improve site content and increase site traffic. \item Edited 4 articles submitted to journal. \end{itemize} \entry{Research and Outreach Program Assistant}{July 2011 - July 2011, Jan 2013 - Aug 2013}{University of California Berkeley}{Berkeley, CA} \begin{itemize} \item Supported faculty with molecular ecology experiments and administrative tasks. \item Facilitated public education and outreach efforts, such as Biotechnology Outreach Program (21 events on 4 islands) and Genius Day Program for elementary students (4 events). \end{itemize} \entry{Director of Members and Honorary Members}{Aug 2011 - May 2013}{Golden Key International Honor Society}{Berkeley, CA} \begin{itemize} \item Planned and managed 18 volunteer opportunities, 2 blood drives, and 4 award ceremonies. \item Supervised 10 undergraduate volunteers at each event. \item Trained 2 incoming directors to use student and alumni database. \item Analyzed attendee data to improve structure and content of future award ceremonies. \item Coordinated high-profile alumni and honorary member participation at events (e.g. famous local comedian and local singer) for entertainment at 2 award ceremonies. \end{itemize} \entry{Student Health Advisory Council}{Chair (2010-2011) and Vice Chair (2009-2010)}{Berkeley, CA}{Aug 2011 - May 2013} \begin{itemize} \item Advocated for student interests on key university health policies and services, in particular on-campus HIV/AIDS testing and affordable health insurance. \item Chaired and facilitated Council meetings to discuss agenda and university health policy. \item Trained incoming Chair to plan, execute, evaluate, and lead Council events and meetings. \item Collected and summarized student survey data to identify and prioritize healthcare needs. \item Planned Council activities and managed 4+ members during events (e.g. blood drive). 
\end{itemize} % % Skills % \section{\MakeUppercase{Skills}} \begin{tabular}[\textwidth]{rp{0.7\textwidth-1\tabcolsep}} \alert{Computer} & Macintosh and Windows operating systems, Microsoft Office, Adobe Photoshop, Blackboard, and Technolutions Slate (student database system). \\\alert{Language} & Fluent in Japanese. Traveled extensively in Asia. \end{tabular} % % Publications and Conference Presentations % \section{\MakeUppercase{Publications and Conference Presentations}} \begin{tabular}[\textwidth]{rp{0.6\textwidth-1\tabcolsep}} \alert{Publications} & 4 refereed journal articles and 2 book chapters. \\\alert{Conference Presentations} & 8 refereed conference papers at national conferences. \\\alert{Invited Lectures} & 2 invited lectures at universities in Japan and Australia. \end{tabular} \end{resume} % % Attachments % \pdfattachment{Attachment A}{../assets/attachment.pdf} \blindattachment[B1]{Attachment B} \blindattachment[B2]{Attachment B} \end{document}
\chapter{Supporting Materials for Chapter \ref{ga-lottery}} %1 \begin{table}[htbp] \caption{Counties created by 1805 and 1807 lotteries. \label{counties-tab}} \begin{tabularx}{\linewidth}{l*{7}{Y}} \toprule \multicolumn{7}{l}{\textbf{Panel A: 1805}} \\ \midrule Counties & No. Districts & Lot sizes (acres)& Lot length (chains square)& Lot orientation (degrees) & Grant fee (\$) & Est. value of lot (\$)\\ \hline Baldwin & 5 & 202.5 & 45& 45 / 60 & 8.10 & 839.17\\ Wayne & 3 & 490 &70 &13 / 77 & 19.60 & 842.64 \\ Wilkinson & 5 & 202.5 & 45&45 / 60 & 8.10 & 811.25 \\ \end{tabularx} \begin{tabularx}{\linewidth}{l*{7}{Y}} \toprule \multicolumn{7}{l}{\textbf{Panel B: 1807}} \\ \midrule Counties & No. Districts & Lot sizes (acres)& Lot length (chains square)& Lot orientation (degrees) & Grant fee (\$) & Est. value of lot (\$) \\ \hline Baldwin & 15 & 202.5 & 45& 45 / 60 & 12.15 & 827.35\\ Wilkinson & 23 & 202.5 & 45&45 / 60 & 12.15 & 799.82 \\ \bottomrule \end{tabularx} \footnotesize{Notes: counties and land lots specified by Acts of 11 May 1803 and 9 June 1806. Lot orientation is degrees from the meridian. Lot values are estimated by averaging the cash value of farms minus the value of farming implements and machinery by the number of (improved and unimproved) acres of land in farms \citep{haines2004,bleakley2013up}. The 1850 values are deflated to 1805 dollars (Panel A) and 1807 dollars (Panel B) using a historical consumer price index \citep{officer2012}.} \end{table} %2 \begin{table}[htbp] \begin{center} \caption{Distribution of census wealth by officeholding status.} \label{officeholders-1820-1850} \resizebox{1\width}{!}{\input{/media/jason/Dropbox/ga-lottery-local/online-appendix/officeholders-1820-1850}}\\ \end{center} \footnotesize{Notes: slave wealth adjusted to 1850\$ values \citep{williamson2016}. $p$-value is obtained from a Mann-Whitney-Wilcoxon test under the null hypothesis that officeholder and non-officeholder distributions are equal.} \end{table} %9 \begin{table}[htbp] \begin{center} \caption{Record classification ensemble.\label{ensemble-tab-link}} \begin{tabular}{llcc} \hline Algorithm & Parameters & MSE & Weight \\ \hline \rowcolor{Gray} Super Learner & default & 0.02 & - \\ Generalized boosted regression & default & 0.02 & 0.05 \\ GLM with elasticnet regularization & $\alpha=0$ & 0.02 & 0 \\ % ridge GLM with elasticnet regularization & $\alpha=0.25$ & 0.02& 0 \\ GLM with elasticnet regularization & $\alpha=0.5$ & 0.02 & 0 \\ GLM with elasticnet regularization & $\alpha=0.75$ & 0.02 & 0 \\ GLM with elasticnet regularization & $\alpha=1$ & 0.02 & 0.52 \\ % lasso Neural network & default & 0.13 & 0 \\ Random forests & default & 0.02 & 0.32 \\ Random forests & $\# \, \text{variables sampled} =1$ & 0.04 & 0.09 \\ Random forests & $\# \, \text{variables sampled}=5$ & 0.02 & 0 \\ Random forests & $\# \, \text{variables sampled}=10$ & 0.03 & 0 \\ \hline \end{tabular} \end{center} \footnotesize{Notes: cross-validated risk and weights used for each algorithm in Super Learner prediction ensemble for record classification model. \textit{MSE} is the ten-fold cross-validated mean squared error for each algorithm. \textit{Weight} is the coefficient for the Super Learner, which is estimated using non-negative least squares based on the Lawson-Hanson algorithm. $\alpha$ is the elastic net mixing parameter, where $\alpha = 0$ is the ridge penalty and $\alpha = 1$ is the Lasso penalty. 
$\# \, \text{variables sampled}$ is the number of predictors sampled for splitting at each node.} \end{table} %12 \begin{table}[htbp] \begin{center} \caption{Robustness: ITT treatment effects on officeholding.} \label{officeholding-robust-table} \resizebox{0.8\width}{!}{\input{/media/jason/Dropbox/ga-lottery-local/online-appendix/officeholder-robust-table}} \end{center} \footnotesize{Notes: \textit{Officeholder (match prob.)} is the officeholder match probability. See notes to Table \ref{candidate-robust-table}.} \end{table} %13 \begin{table}[htbp] \begin{center} \caption{Robustness: ITT treatment effects on candidacy.} \label{candidate-robust-table} \resizebox{0.8\width}{!}{\input{/media/jason/Dropbox/ga-lottery-local/online-appendix/candidate-robust-table}} \end{center} \footnotesize{Notes: \textit{Candidate (match prob.)} is the candidate match probability. Covariates included are those that yield $p <0.10$ in Fig. \ref{balance-plot}.} \end{table} %14 \begin{table}[htbp] \begin{center} \caption{Robustness: ITT treatment effects on slave wealth (1820\$).} \label{slave-robust-table} \resizebox{0.8\width}{!}{\input{/media/jason/Dropbox/ga-lottery-local/online-appendix/slave-robust-table}} \end{center} \footnotesize{Notes: \textit{Slave wealth (weighted)} is the same measure weighted by the census match probability. See notes to Table \ref{candidate-robust-table}.} \end{table} \begin{figure}[htbp] %15 \begin{center} \caption{Power analysis by simulation for binary response variable.\label{power-plot-bin}} \includegraphics[width=1\textwidth]{/media/jason/Dropbox/ga-lottery-local/online-appendix/power-plot-bin.png} \end{center} \footnotesize{Notes: $N=21,732$ and $\mathcal{I} =100$ iterations. The horizontal line indicates the 80\% power that is normally required to justify a study.} \end{figure} \begin{figure}[htbp] % 16 \begin{center} \caption{Quantile regression treatment effect estimates on slave wealth for 1805 winners \& losers. \label{qreg-plot} } \includegraphics[width=1\linewidth]{/media/jason/Dropbox/ga-lottery-local/online-appendix/qreg-plot.png} \\ \end{center} \footnotesize{Notes: estimates from a quantile regression of the treatment effect on imputed slave wealth for participants linked to the 1820 Census ($N=5,252$). The points are quantile-specific estimates of the treatment effect and the error bars represent 95\% confidence intervals constructed from bootstrapped standard errors. Quantiles above 0.98 are omitted for display purposes. The line is a LOESS-smoothed estimate of the treatment effect.} \end{figure}
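The ensemble weights in Table \ref{ensemble-tab-link} are obtained by non-negative least squares over the base learners' cross-validated predictions. A minimal Python sketch of that weighting step is given below; it assumes a matrix \texttt{Z} of out-of-fold predictions (one column per algorithm) and an outcome vector \texttt{y}, which are illustrative names rather than objects from the actual analysis:

\begin{verbatim}
import numpy as np
from scipy.optimize import nnls  # Lawson-Hanson non-negative least squares

def super_learner_weights(Z, y):
    """Weight base learners by NNLS on their cross-validated predictions,
    then normalise the weights to sum to one (zero weights are allowed)."""
    w, _ = nnls(Z, y)                       # minimise ||Zw - y||, w >= 0
    return w / w.sum() if w.sum() > 0 else w

def cv_mse(Z, y):
    """Cross-validated mean squared error of each base learner."""
    return ((Z - y[:, None]) ** 2).mean(axis=0)
\end{verbatim}

Base learners whose predictions add little beyond the others receive a weight of zero, which is consistent with several rows of the table carrying zero weight despite similar cross-validated MSE.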
\section{Butchering}\label{perk:butchering} \textbf{Cost:} 50 CP\\ \textbf{Requirements:} -\\ \textbf{Skill, Passive, Source(50 Gold), Repeatable}\\ You can use a salvaging knife to butcher parts of a creature's corpse. Doing so requires a DE check, and you can add your crafting level to it. When doing so, the GM compares your check to the Butchering table of the slain enemy.\\ The amount of time it takes to butcher a corpse is based on the size of the creature. Depending on the condition of the corpse, your check may be reduced, or some parts of the enemy might not be able to be butchered.\\ \\ Level Progression:\\ II: 150 CP, You can also add 1d4 on butchering checks.\\ III: 350 CP, You can also add 2d4 on butchering checks.\\ IV: 750 CP, You can also add 3d4 on butchering checks.\\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %2345678901234567890123456789012345678901234567890123456789012345678901234567890 % 1 2 3 4 5 6 7 8 \documentclass[letterpaper, 10 pt, conference]{ieeeconf} % Comment this line out if you need a4paper %\documentclass[a4paper, 10pt, conference]{ieeeconf} % Use this line for a4 paper \IEEEoverridecommandlockouts % This command is only needed if % you want to use the \thanks command \overrideIEEEmargins % Needed to meet printer requirements. % See the \addtolength command later in the file to balance the column lengths % on the last page of the document % The following packages can be found on http:\\www.ctan.org \usepackage{graphics} % for pdf, bitmapped graphics files \usepackage{epsfig} % for postscript graphics files \usepackage{mathptmx} % assumes new font selection scheme installed \usepackage{times} % assumes new font selection scheme installed \usepackage{amsmath} % assumes amsmath package installed \usepackage{amssymb} % assumes amsmath package installed \usepackage{amsthm} \newtheorem{myTheo}{Theorem} \theoremstyle{remark} \newtheorem{myrem}{Remark} \title{\LARGE \bf Geometric Part* } \author{Xiao Ming Xiu$^{1}$ \\% <-this % stops a space \\ \footnotesize\textit{\today} \thanks{*This work was supported by Standard-Robots Ltd.,co.}% <-this % stops a space \thanks{$^{1}$Xiao Ming Xiu is with Visual SLAM Algorithm Group, Standard-Robots Ltd.,co. Shenzhen, China. {\tt\small [email protected]}}% } \begin{document} \maketitle \thispagestyle{empty} \pagestyle{empty} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{abstract} Given 3 2-D points' coordinates in two different Reference Systems, Barcode System (BS) and Camera System (CS), output two things: $\vec{t}$ and $\theta$, defined as the CS origin's coordinates in BS and the rotated angle of CS relative to BS. \end{abstract} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Problem Setting} \subsection{Given} Notation: $$ BS --- S $$ $$ CS --- S' $$ Three Points' coordinates in both systems are known: $$ P1: (x_1,y_1), (x_1',y_1') $$ $$ P2: (x_2,y_2), (x_2',y_2') $$ $$ P3: (x_3,y_3), (x_3',y_3') $$ \subsection{Want} $$\vec{t} = (t_x, t_y)$$ $$ \theta $$ where, \\$\vec{t}$ is defined by S' origin in S.\\ $\theta$ is defined by \begin{equation} \begin{aligned} (\epsilon_x',\epsilon_y')&= (\epsilon_x,\epsilon_y)*R\\ &= (\epsilon_x,\epsilon_y)* \begin{bmatrix} \cos{\theta}&-\sin{\theta}\\ \sin{\theta}&cos{\theta} \end{bmatrix} \end{aligned} \end{equation} ($\epsilon_x, \epsilon_y$ denotes the basis vector in S. $\epsilon_x', \epsilon_y'$ denotes the basis vector in S'.) \subsection{Relationship Equation} Denote $\vec{P}$ (column vector)as vector coordinate representation in S and $\vec{P}'$ in S'. We have \begin{equation} \vec{P} = \vec{t} + R*\vec{P}' \end{equation} Write in explicit form: \begin{equation} \begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} t_x \\ t_y \end{bmatrix} + \begin{bmatrix} \cos{\theta}&-\sin{\theta}\\ \sin{\theta}&cos{\theta} \end{bmatrix} * \begin{bmatrix} x' \\ y' \end{bmatrix} \end{equation} Eq. (3) is necessary and sufficient.\\ You have 3 unknown variables, $t_x, t_y, \theta$. Given 1 pair coordinates $\vec{P}$ and $\vec{P}'$, you will get 2 independent equations. At least, 2 points are needed to solve all the 3 variables, though 1 equation is over determined. 
\\ \section{Question: What is the minimal set of inputs sufficient to solve?} \begin{myTheo} The 6 values $x_1,y_1,x_1',y_1',x_2,x_2'$ are sufficient. \end{myTheo} Three independent equations can already be obtained from Eq. (3); only a two-fold ambiguity is left.\\ \begin{myrem} The set of 1.5 points, not 2, is the minimum input. \end{myrem} \begin{myrem} Does the following case exist: the given input format cannot exactly match the necessary and sufficient solution of the problem, so that part of the input is unavoidably wasted as an overdetermined condition? \end{myrem} \section{Solution} First eliminate $\theta$ from Eq. (3) using $R^T R=I$ (recall that Eq. (3) represents 2 equations): \begin{equation} (x-t_x)^2 + (y-t_y)^2 = {x'}^2 + {y'}^2 \end{equation} Relying on points P1 and P2, Eq. (4) becomes: \begin{equation} (x_1-t_x)^2 + (y_1-t_y)^2 = {x_1'}^2 + {y_1'}^2 \end{equation} \begin{equation} (x_2-t_x)^2 + (y_2-t_y)^2 = {x_2'}^2 + {y_2'}^2 \end{equation} Subtracting Eq. (6) from Eq. (5), we get \begin{equation} t_y=k t_x + b \end{equation} where \begin{equation} k=-\frac{x_1 - x_2}{y_1 - y_2} \end{equation} \begin{equation} \begin{aligned} b=&{\frac{1}{2}}\{ y_1 + y_2 \\ &- {\frac{1}{y_1 - y_2}}[x'_1^2 + y'_1^2 - x'_2^2 -y'_2^2 -x_1^2 + x_2^2]\} \end{aligned} \end{equation} Substituting $t_y$ into Eq. (5) gives \begin{equation} At_x^2 + Bt_x + C =0 \end{equation} where \begin{equation} \begin{aligned} A&=1+k^2 \\ B&=-2[x_1 + k(y_1 -b)]\\ C&= x_1^2 + (y_1 -b)^2 - x_1'^2 - y_1'^2 \end{aligned} \end{equation} Solving this quadratic equation, with $$\Delta = B^2 - 4AC,$$ \begin{equation} t_x= \frac{-B \pm \sqrt{\Delta} }{2A} \end{equation} The corresponding $t_y$ can be derived from Eq. (7). This leaves a two-fold ambiguity: only one of the roots is correct. Check them against the third point P3: \begin{equation} (x_3-t_x)^2 + (y_3 - t_y)^2 = x'_3^2 + y'_3^2 \end{equation} Define \begin{equation} \delta_{check} = |(x_3-t_x)^2 + (y_3 - t_y)^2 - ( x'_3^2 + y'_3^2 )| \overset{?}{=} 0 \end{equation} The correct root satisfies Eq. (13); in practice we set a threshold on $\delta_{check}$. \section{Experiments of Oct.17} On the evening of Oct. 17, a 2 cm translation experiment was conducted. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.6in]{0cm1017.jpg} \caption{0cm Case No Lean} \label{fig:side:a} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.6in]{2cm1017.jpg} \caption{2cm Case No Lean} \label{fig:side:b} \end{minipage} \end{figure} \subsection{2cm case data} Note that the CS pixel units need to be converted to cm.
\begin{table}[h] \caption{P1, P2, P3 of 2cm1017.jpg} \label{table_2cm1017} \begin{center} \begin{tabular}{|c|c|c|} \hline 2cmNoLean & BS (cm)& CS (pixel)\\ \hline P1 & (0,0)& (267,455) \\ \hline P2 & $(3\cos(30^\circ),3\sin(30^\circ))$ & (271,256) \\ \hline P3 & $(-3\cos(30^\circ),3\sin(30^\circ))$ & (466,460)\\ \hline \end{tabular} \end{center} \end{table} Output: $$t_x=7.94855$$ $$t_y=-0.215983$$ $$\theta=120^\circ$$ \subsection{0cm case data} \begin{table}[h] \caption{P1, P2, P3 of 0cm1017.jpg} \label{table_0cm1017} \begin{center} \begin{tabular}{|c|c|c|} \hline 0cmNoLean & BS (cm)& CS (pixel)\\ \hline P1 & (0,0)& (399,458) \\ \hline P2 & $(3\cos(30^\circ),3\sin(30^\circ))$ & (406,257) \\ \hline P3 & $(-3\cos(30^\circ),3\sin(30^\circ))$ & (599,463)\\ \hline \end{tabular} \end{center} \end{table} Output: $$t_x=8.82632$$ $$t_y=-2.04685$$ $$\theta=118.005^\circ$$ \subsection{Translation Calculation} \begin{equation} \begin{aligned} \text{Translation}&=\sqrt{ (7.94855-8.82632)^2 +(-0.215983 + 2.04685)^2}\\ &=2.03\ \text{cm} \end{aligned} \end{equation} The real translation is exactly 2 cm, so the error is within 0.3 mm. \section{Experiments of Oct.19} On the evening of Oct. 19, the data collected in September were reused to check the algorithm in the Lean case. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.6in]{0cm96Lean.jpg} \caption{0cm Case Lean} \label{fig:side:c} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.6in]{1cm96Lean.jpg} \caption{1cm Case Lean} \label{fig:side:d} \end{minipage} \end{figure} \end{document}
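The closed-form solution in Eqs. (7)--(14) translates directly into a short script. The following Python sketch is a transcription of those equations (function and variable names are illustrative, the degenerate case $y_1=y_2$ is not handled, and the CS coordinates are assumed to have already been converted from pixels to cm):

\begin{verbatim}
import numpy as np

def solve_translation(P1, P2, P3, P1p, P2p, P3p):
    """Recover (t_x, t_y) from two point pairs (Eqs. 7-12) and use the
    third point to resolve the two-fold ambiguity (Eqs. 13-14)."""
    (x1, y1), (x2, y2), (x3, y3) = P1, P2, P3
    (x1p, y1p), (x2p, y2p), (x3p, y3p) = P1p, P2p, P3p

    k = -(x1 - x2) / (y1 - y2)                                  # Eq. (8)
    b = 0.5 * (y1 + y2 - (x1p**2 + y1p**2 - x2p**2 - y2p**2
                          - x1**2 + x2**2) / (y1 - y2))         # Eq. (9)

    A = 1 + k**2                                                # Eq. (11)
    B = -2 * (x1 + k * (y1 - b))
    C = x1**2 + (y1 - b)**2 - x1p**2 - y1p**2

    best = None
    for tx in np.roots([A, B, C]).real:                         # Eq. (12)
        ty = k * tx + b                                         # Eq. (7)
        delta = abs((x3 - tx)**2 + (y3 - ty)**2
                    - (x3p**2 + y3p**2))                        # Eq. (14)
        if best is None or delta < best[0]:
            best = (delta, tx, ty)
    return best[1], best[2]
\end{verbatim}

The root with the smaller $\delta_{check}$ is returned, which mirrors the thresholded check on the third point described above.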
\section{Measurement and Analysis} \label{rate-limiter:sec:measurement} \subsection{Performance of Linux HTB} \input{rate_limiter/figs/exp_setup.tex} We first measure the performance of Linux HTB rate limiter. Compared with other software rate limiters, HTB ensures that a minimum amount of bandwidth is guaranteed to each traffic class and if the required minimum amount of bandwidth is not fully used, the remaining bandwidth is distributed to other classes. The distribution of spare bandwidth is in proportion to the minimum bandwidth specified to a class~\cite{linux-htb-intro}. We set up servers in CloudLab, and each server is equipped with 10 Gbps NICs. The experiment setup is shown in Figure~\ref{fig:exp-setup}. %On each server, we configure HTB (the native implementation in Linux 4.2) as software rate limiters. In the experiments, we configure a rate limiter for each sender VM; for each sender-receiver VM pair, we use iperf to send background traffic. We configure HTB to control the bandwidth for each pair, and vary the number of flows between each pair. We also control the total sending rate of all pairs (i.e., using the hierarchical feature of HTB). With these settings, we run sockperf between sender and receiver pairs to measure TCP RTT. We conducted two sets of experiments. In the first set of experiments, we have one sender VM and one receiver VM. We specify the speed of the sender side rate limiter to 1Gbps, 2Gbps, 4Gbps and 8Gbps. We vary the number of iperf flows (1, 2, 8 and 16) from the sender to receiver. In the second set of experiments, we configure two rate limiters on the sender server and set up two VMs (one rate limiter each VM). We configure the minimum rate of each rate limiter to 2Gbps, 3Gbps, 4Gbps and 5Gbps and the total rate of the two rate limiters is always 10Gbps. \input{rate_limiter/figs/htb_result.tex} \input{rate_limiter/figs/htb.tex} The experiments results are shown in Table~\ref{tbl:htb-1rec} and~\ref{tbl:htb-2rec}. In all experiment scenarios, network saturation ratio is from 91\%-96\%. Note that in Table~\ref{tbl:htb-2rec}, if the sum of individual rate limiter's rate is smaller than the configured total rate, HTB would allow all flows to utilize and compete for the spare bandwidth. Another observation is that more flows lead to lower bandwidth saturation, because there is more competition between flows, which leads to throughput oscillation. We further look into the scenario of one receiver VM. We visualize the TCP RTT results in Figure~\ref{fig:htb-1rec}; each subfigure shows the CDF of sockperf trace RTT with different rate limiter speed. Different subfigures show scenarios with a varying number of background iperf flows. We can draw three conclusions based on the measurement data. First, TCP RTT increases dramatically when packets go through a congested rate limiter. In the baseline case where no HTB is configured and no iperf background flow running, the median TCP RTT is 62us. While with one background iperf flow and rate limiter speed from 1Gbps to 8Gbps, the median RTT increases to 957us-1583us, which is 15-25X larger compared with the baseline case. Second, TCP RTT increases with more background flows running. For example, with 1Gbps rate limiting, the median RTT is 957us for one background flow and 3192us for 16 background flows. Third, TCP RTT decreases with larger rate limiter speed configured~\footnote{The only exception is the case with one background flow and 8Gbps rate limiting, which is suspected to be experiment noise.}. 
This is because the rate limiter speed determines the dequeue speed of the HTB queue; with a larger dequeue speed, the queue tends to be drained faster. In the experiments with two receiver VMs (Figure~\ref{fig:htb-2rec}), we can draw the same conclusions regarding RTT increase and the impact of the number of background flows. The difference is that TCP RTT increases with larger rate limiting speed configured. In these experiments, we did not constrain the total rate, allowing HTB to utilize spare bandwidth. Thus, the dequeue speed is constant in each figure (10Gbps/numFlow). The possible reason for the trend is that the enqueue speed is higher when the rate limiter speed is higher; for a fixed dequeue speed, a larger enqueue speed implies higher latency. \subsection{Strawman Solution: DCTCP + ECN} \input{rate_limiter/figs/dctcp.tex} Inspired by solutions to reduce queueing latency in switches~\cite{alizadeh2010data}, we test a strawman solution. In the strawman solution, we implement ECN marking in Linux HTB queues and enable DCTCP at the end-points. For ECN marking, there is a tunable parameter \textemdash\xspace the \textit{marking threshold}. When the queue length exceeds the marking threshold, packets being enqueued have their ECN bits set; otherwise, packets are not modified. DCTCP reacts to ECN marking and adjusts the sender's congestion window based on the ratio of marked packets~\cite{alizadeh2010data}. We set up experiments with one sender and one receiver. We configure the HTB rate limiter to 1Gbps and 2Gbps, and vary the ECN marking threshold. The experiment results are shown in Figure~\ref{fig:dctcp}. We observe that TCP RTT can be reduced significantly (<1ms) by extending ECN into rate limiter queues. For example, with the marking threshold set to 60KB, the median TCP RTT is 224us (Figure~\ref{fig:dctcp-1g}), which is less than 1/4 of native HTB's (957us). A smaller ECN marking threshold achieves even lower latency \textemdash\xspace as the threshold decreases from 100KB to 20KB, the median TCP RTT is reduced from 375us to 93us. While latency is improved, we observe a negative effect of DCTCP+ECN on throughput, as shown in Figure~\ref{fig:dctcp}. TCP throughput shows large oscillations, which implies that applications cannot get consistently high throughput and the bandwidth is not fully utilized. For example, with 2Gbps rate limiting (Figure~\ref{fig:dctcp-2g}), even when we set the threshold to 100KB (much larger than the best theoretical value according to~\cite{alizadeh2010data}), there is still occasional low throughput (e.g., 1000Mbps) within a 20-second experiment duration. \subsection{Throughput Oscillation Analysis} Directly applying the existing ECN marking technique to the rate limiter queue causes TCP throughput oscillation, for two reasons. First, end-host networking stacks enable optimization techniques such as TSO (TCP Segmentation Offload~\cite{tcp-segment-offload}) to improve throughput and reduce CPU overhead. Therefore, the end-host networking stack (including the software rate limiters) processes TCP segments instead of MTU-sized packets. The maximum TCP segment size is 64KB by default. A TCP segment's IP header is copied into each MTU-sized packet in the NIC using TSO, so marking one TCP segment in the rate limiter queue results in a run of consecutive MTU-sized packets being marked. For example, marking a 64KB segment means 44 consecutive Ethernet frames are marked.
Such coarse-grained segment-level marking finally causes the accuracy of congestion estimation in DCTCP to be greatly decreased. Second, ECN marking happens on the transmitting path, and it takes one RTT for congestion feedback to travel back to the sender before congestion control actions are taken. Moreover, TCP RTT can be affected by the ``in network" latency. ``In network" latency can be milliseconds or tens of milliseconds. Thus, this one RTT control loop latency would cause the ECN marks to be ``outdated'', not precisely reflecting the instantaneous queue length when the marks are used for congestion window update in DCTCP. Without congestion control based on instantaneous queue length, the one-RTT control loop latency exacerbates the incorrect segment-level ECN marking. Thus, congestion window computation in DCTCP tends to change more dramatically, leading to the throughput oscillation. \subsection{Call for Better Software Rate Limiters} \begin{table}[!tb] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline R1 & high throughput \\ R2 & low throughput oscillation \\ R3 & low latency \\ R4 & generic\\ \hline \end{tabular} \caption{Software rate limiter design requirements} \label{rate-limiter-design-requirements} \end{center} \end{table} \begin{table}[!tb] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline & Stable Tput & Low Latency & Generic \\ \hline Raw HTB & \cmark & \xmark & \cmark \\ DCTCP+ECN & \xmark & \cmark & \xmark \\ \hline \end{tabular} \caption{Raw HTB and DCTCP+ECN can not meet the design requirements} \label{cannot-meet-design-requirements} \end{center} \end{table} \iffalse Having observed the flaws of Linux native HTB and the strawman solution, it is challenging to achieve both \textit{high throughput and low latency} for rate limiters. They are as follows. \textbf{\#1. Low oscillation.} As is shown in the strawman solution, achieving requirement \#1 would lead to oscillation in throughput. Thus, the solution should overcome this flaw, i.e., achieving low oscillation. \textbf{\#2. Generic.} Another weakness of the strawman solution is that it requires the end host to support ECN compatible congestion (e.g., DCTCP). While in clouds, the TCP configuration is not always within the cloud operators' control. So the solution should be generic to be applied to any TCP invariants in tenants' VMs. Fortunately, software rate limiters are implemented on end host, which give opportunities to overcome the challenges. First, end host has enough memory to store per-flow information, so that we can store per-flow queue information and correlate the incoming an ACK with its outgoing queue length. Second, in end host kernel, we have sufficient programmability (e.g., loadable OVS module), so we can compute per-flow window size and encode it in packets before they arrive at VMs. \fi We list the design requirements for better software rate limiters, as shown in Table~\ref{rate-limiter-design-requirements}. First the rate limiter should be able to provide high network throughput. Second, network throughput should be stable with low oscillation. Third, flows traversing the rate limiter should experience low latency. Finally, the rate limiter can be able to handle various kinds of traffic \textemdash\xspace ECN-capable and non ECN-capable. Neither raw HTB nor HTB with DCTCP+ECN can meet the four design requirements (see Table~\ref{cannot-meet-design-requirements}). For Raw HTB, it can achieve high and stable throughput but with very high latency as our measurement results show earlier. 
Raw HTB is generic and can handle both ECN-capable and non ECN-capable flows. For HTB with DCTCP+ECN, the low latency requirement can be satisfied, but it cannot achieve stable high throughput and can only handle ECN-capable flows. Therefore, there is a need for better software rate limiters. Fortunately, software rate limiters are implemented on the end-host, which gives us opportunities to design and implement better software rate limiters for cloud networks. First, the end-host has enough memory to store per-flow information. Second, we have sufficient programmability (e.g., a loadable OVS module). For example, we can correlate an incoming ACK with its outgoing queue length; we can compute a per-flow window size and encode it in packets before they arrive at VMs.
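For reference, the DCTCP control law that the oscillation analysis above refers to can be written in a few lines. The sketch below is a generic Python rendering of the update rule from~\cite{alizadeh2010data}, with illustrative variable names and a typical gain of $g=1/16$; it is not code from our system:

\begin{verbatim}
G = 1.0 / 16  # DCTCP gain g for the moving average (typical value)

def dctcp_update(cwnd, alpha, acked, marked):
    """One per-window DCTCP update: estimate the marked fraction F,
    smooth it into alpha, and cut the window in proportion to alpha."""
    F = marked / max(acked, 1)               # fraction of ECN-marked packets
    alpha = (1 - G) * alpha + G * F          # alpha <- (1-g)*alpha + g*F
    if marked:
        cwnd = max(cwnd * (1 - alpha / 2), 1.0)  # multiplicative decrease
    else:
        cwnd += 1.0                              # standard additive increase
    return cwnd, alpha
\end{verbatim}

Under segment-level marking, the marked-packet count jumps in bursts of up to 44 packets per 64KB segment, and the feedback arrives one RTT late, so $\alpha$ and hence the window swing far more than the instantaneous queue length warrants \textemdash\xspace which is the oscillation observed in Figure~\ref{fig:dctcp}.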
\chapter{References} \begin{itemize} \item http://alloy.mit.edu/alloy/tutorials/online/ \item http://www.nyu.edu/classes/jcf/g22.2440-001\_sp09/handouts/UMLBasics.pdf \item Software Engineering 2 lecture slides \end{itemize}
\chapter{Image processing and tracking of all the cells in the given $\textbf{\textit{E.coli}}$ movie using u-track 2.0, and data analysis} % Main chapter title \label{Part1_chapter} % For referencing the chapter elsewhere, use \ref{Chapter1} \section{Get data from movie} For the video: 1 pixel = 0.65 $\mu m$, 1 frame = 0.1 second.\\ The data were extracted with u-track, which tracks the cells by single-particle detection with Gaussian mixture-model fitting. There are more than 14000 tracks. \newpage \section{Single bacterium trajectory} From the 14721 tracks, I chose the track with the largest number of non-NaN values and removed all of the NaN values to plot the movement of this single bacterium. %---------------------------------------------------------------------------------------- \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{Figures/P3_fig1.png} \caption{Single Bacterium Random Walk Trajectory} \label{P2_fig1} \end{figure} \section{Bacteria population trajectory} All the cells in the data were also tracked to calculate the population motion. The statistical results are shown in Figure 3.2. \begin{figure}[H] \centering \includegraphics[width=0.75\linewidth]{Figures/P3_fig2.png} \caption{The ``mean displacement'' and ``mean squared displacement'' over time for the cells in the video} \label{P2_fig2} \end{figure} \section{Fitting the Data with the Model} For a series of scatter points $(x_i, y_i)(i=1,2,...,n)$, if a direct proportional function is used to fit the data, we have:\\ \begin{equation*} \begin{aligned} \centering y &= \beta x \\ \beta &= \sum_{i=1}^n x_i y_i \Big / \sum_{i=1}^n x_i^2 \\ \end{aligned} \end{equation*} As for our model: \\ \begin{equation*} \begin{aligned} \centering \frac{1}{n} \sum_{i=1}^{n}x_i(t)^2 &= v_x^2t \\ \frac{1}{n} \sum_{i=1}^{n}y_i(t)^2 &= v_y^2t \\ \end{aligned} \end{equation*} So the fitting model is: \\ \begin{equation*} \begin{aligned} \centering \frac{1}{n} \sum_{i=1}^{n}x_i(t)^2 &= \beta_x t = v_x^2t \\ \frac{1}{n} \sum_{i=1}^{n}y_i(t)^2 &= \beta_y t = v_y^2t \\ \beta_x &= \sum_{i=1}^n t_i x_i \Big / \sum_{i=1}^n t_i^2 \\ \beta_y &= \sum_{i=1}^n t_i y_i \Big / \sum_{i=1}^n t_i^2 \\ \end{aligned} \end{equation*} We get: \\ \begin{equation*} \begin{aligned} \centering \beta_x &= 0.9862\ \mu m^2 /s^2 \\ v_x &= 0.9931\ \mu m /s \\ \beta_y &= 0.8366\ \mu m^2 /s^2 \\ v_y &= 0.9147\ \mu m /s \\ \end{aligned} \end{equation*} %betaX = 0.9862 %betaY = 0.8366 \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{Figures/P3_fig3.png} \caption{Fitting the Data with the Model} \label{P3_fig3} \end{figure} %---------------------------------------------------------------------------------------- %----------------------------------------------------------------------------------------
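The proportional fit used above is a one-parameter least-squares problem that can be reproduced in a few lines. The following Python sketch is illustrative only (the array names are assumptions, and the data here are made up rather than taken from the tracking results):

\begin{verbatim}
import numpy as np

def fit_through_origin(t, msd):
    """Least-squares slope of msd = beta * t with no intercept:
    beta = sum(t_i * msd_i) / sum(t_i^2)."""
    t, msd = np.asarray(t, float), np.asarray(msd, float)
    return float(np.sum(t * msd) / np.sum(t * t))

t = np.arange(1, 101) * 0.1        # frame times, 1 frame = 0.1 s
msd_x = 0.99 * t                   # would come from the tracked positions
beta_x = fit_through_origin(t, msd_x)
v_x = np.sqrt(beta_x)              # model: msd_x = v_x^2 * t
print(beta_x, v_x)
\end{verbatim}

The same fit applied to the $y$ component gives $\beta_y$ and $v_y$.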
%\chapter{Results} \chapter{Pairwise Interactions and Competition Outcomes} \section{$T^p$ - $T^-$} A set of initial runs was carried out to test the extent to which cell type-specific doubling times affected the competitive outcomes between $T^p$ and $T^-$, from which the following observations were made: \begin{enumerate} \item Only when $T^p$ is not severely testosterone limited ($ul_{test,T^p}$ is low) can $T^p$ coexist with or outcompete $T^-$, as shown in \autoref{fig_Tpro-Tneg_testlims}. In every other case, $T^-$ drives $T^p$ to extinction. \item These competitive outcomes are also dependent on the initial proportion of $T^p$, all the other parameters being the same, as shown in \autoref{fig_Tpro-Tneg_testlims}. \item When $T^-$ is strongly oxygen-limited ($ll_{O_2,T^-} \geq 0.6$) but $T^p$ is also limited by testosterone, $T^-$ wins out eventually as oxygen levels rise faster than testosterone through the external supply term $p_{O_2}$, as shown in \autoref{fig_Tpro-Tneg_o2lims}. \item When $T^-$ is oxygen limited but with poor oxygen production (lower $p_{O_2}$), $T^p$ is able to drive $T^-$ to extinction, as $T^p$ can grow and consume enough oxygen to keep the oxygen levels below those required for $T^-$ to grow, as shown in \autoref{fig_Tpro-Tneg_o2lims}. \end{enumerate} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Tpro-Tneg_testlims} \caption[Pairwise $T^p - T^-$ time-series, testosterone limitation]{Pairwise $T^p - T^-$ time-series, when $T^p$ is testosterone limited and not testosterone limited (columns) and at different initial proportions of $T^p$ (rows). $T^p$ is testosterone limited at $ul_{test,T^p}=0.5$ and not testosterone limited at $ul_{test,T^p}=0.1$.} \label{fig_Tpro-Tneg_testlims} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Tpro-Tneg_o2lims} \caption[Pairwise $T^p - T^-$ time-series, oxygen limitation]{Pairwise $T^p - T^-$ time-series, when $T^-$ is oxygen limited, at different levels of oxygen production (columns). $T^-$ is oxygen limited at $ll_{O_2,T^-}=0.6$ and $T^p$ is testosterone limited at $ul_{test,T^p}=0.5$. The normal and poor production rates of oxygen are 0.11 and 0.0675 min$^{-1}$, respectively.} \label{fig_Tpro-Tneg_o2lims} \end{figure} Taken together, these provide the first proof-of-concept that competitive outcomes can be controlled through the interaction of cell types with the resources in the environment. \newpage Building on these initial observations, a brute-force parameter-space exploration was done over a large set of parameter combinations. The full dataset is large; a few generalised observations collated from these data are listed below. \begin{enumerate} \item $T^-$ drives $T^p$ to extinction when $ll_{O_2,T^p} \geq 0.6$, regardless of the other parameters; in other words, $T^p$ should not be limited by oxygen if it is to compete with $T^-$. \item $T^-$ drives $T^p$ to extinction when $ll_{test,T^p} \geq 0.2$, regardless of the other parameters; in other words, $T^p$ needs to be able to grow even on the smallest amount of testosterone to compete with $T^-$. \item $T^-$ drives $T^p$ to extinction when $ul_{test,T^p} \geq 0.3$ and $ll_{O_2,T^-} \leq 0.4$ but not when $ll_{O_2,T^-} \geq 0.6$; in other words, $T^p$ should not be testosterone limited when $T^-$ is not oxygen limited if it is to compete with $T^-$.
The $ul_{test,T^p}$ required for $T^p$ to not go extinct also increases with increased $ll_{O_2,T^-}$, that is, $T^p$ can afford to be more testosterone limited as $T^-$ becomes more oxygen limited. \end{enumerate} These observations allow us to then define levels of resource limitation for each cell type, which represent the competitive strategy employed by that cell type. We fix three levels each of $T^p\ test$ limitation: no, moderate and severe corresponding to $ul_{test,T^p}=0.1, 0.3, 1$ respectively and three levels each of $T^-\ O_2$ limitation: low, high and severe corresponding to $ll_{O_2,T^-}=0, 0.6, 0.8$ respectively. As found earlier, $T^p$ goes extinct at any level of oxygen limitation and therefore offers no scope for exploration along this axis. We also explore two levels of $O_2$ production: normal and poor, corresponding to $p_{O_2}=0.11, 0.0675$ min$^{-1}$ respectively. Pairwise competitive runs were done over all combinations of these (as shown in \autoref{tab_Tpro-Tneg_cases}) with varying initial cell seeding. \begin{table} \centering \begin{tabular}{|l|l|l|} \hline \textbf{$\boldsymbol{O_2}$ production} & \textbf{$\boldsymbol{T^-\ O_2}$ limitation} & \textbf{$\boldsymbol{T^p\ test}$ limitation}\\ \hline normal & low & no \\ \hline normal & low & moderate \\ \hline normal & low & severe \\ \hline normal & high & no \\ \hline normal & high & moderate \\ \hline normal & high & severe \\ \hline normal & severe & no \\ \hline normal & severe & moderate \\ \hline normal & severe & severe \\ \hline poor & low & no \\ \hline poor & low & moderate \\ \hline poor & low & severe \\ \hline poor & high & no \\ \hline poor & high & moderate \\ \hline poor & high & severe \\ \hline poor & severe & no \\ \hline poor & severe & moderate \\ \hline poor & severe & severe \\ \hline \end{tabular} \caption{Table of cases for $T^p$ - $T^-$ pairwise} \label{tab_Tpro-Tneg_cases} \end{table} \newpage The following were observed from the cases as visualised in \autoref{fig_Tpro-Tneg_cases}: \begin{enumerate} \item Coexistence is observed only when there is no or moderate limitation of testosterone for $T^p$ and low limitation of oxygen for $T^-$. For low $T^p$ initial seeding, $T^-$ dominates over $T^p$ and causes it to go extinct, but as $T^p$ initial seeding increases the favour shifts towards $T^p$. \item $T^-$ causes $T^p$ to go extinct for all initial seedings when $T^p$ is severely testosterone limited. Even with a high initial seeding advantage, $T^-$ grows, overtakes $T^p$ and eventually causes $T^p$ to go extinct. $T^-$ also goes extinct in this case if it is limited by oxygen under poor oxygen production. Despite this, $T^p$ is weighed down by both the testosterone limitation and density-dependent competition of the remaining $T^-$ cells and goes extinct as a result. \item The outcome switches from $T^p$ going extinct to $T^-$ going extinct for higher $T^p$ initial seeding when $T^-$ is highly limited by oxygen under either poor or normal oxygen production and when $T^-$ is severely limited by oxygen under normal oxygen production. Similar to the cases with coexistence, for low $T^p$ initial seeding, $T^-$ dominates over $T^p$ and causes it to go extinct, but as $T^p$ initial seeding increases the favour shifts towards $T^p$. However, in this case the oxygen levels don’t go above the levels required for $T^-$ to grow before it goes extinct and only $T^p$ remains. \item $T^-$ goes extinct for all initial seedings when it is severely oxygen limited under poor oxygen production. 
The oxygen limitation on $T^-$ is too high and the oxygen levels never reach the levels required for a non-zero growth for $T^-$. \item Additionally, total population size has a weaker effect than initial proportion for the dynamics and outcomes for each particular case. \end{enumerate} These observations expand and broaden the scope of the earlier finding that cell-intrinsic doubling times have a relatively marginal effect on competitive outcome; with the data in \autoref{fig_Tpro-Tneg_cases}, it is possible to pinpoint which resource conditions favour each cell type based on their relative limitation strengths for that resource. \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{Tpro-Tneg_cases_normal} \caption{normal production} \label{fig_Tpro-Tneg_cases_normal} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{Tpro-Tneg_cases_poor} \caption{poor production} \label{fig_Tpro-Tneg_cases_poor} \end{subfigure} \caption[Final $T^p$ ratio of pairwise $T^p - T^-$ runs under different cases]{Final $T^p$ ratio of pairwise $T^p - T^-$ runs under different cases. Subfigures: $O_2$ Production, Rows: $T^-\ O_2$ limitation, Columns: $T^p\ test$ limitation. Note: Ratio = -0.1 is used when both cell types go extinct.} \label{fig_Tpro-Tneg_cases} \end{figure} \clearpage \section{$T^+$ - $T^p$} The initial runs for this pair involved changing both the upper and lower limits of the resource response function together as a way of sampling resource limitation conditions coarsely before a more detailed exploration. The following observations were made based on these results, represented in \autoref{fig_Tpos-Tpro_o2lims} and \autoref{fig_Tpos-Tpro_testlims}: \begin{enumerate} \item Both $T^+$ and $T^p$ are limited by both oxygen and testosterone, and compete for both resources. As with the other pair, strength of limitation for any particular resource can be modulated through the corresponding upper and lower thresholds. \item When $T^p$ is limited by testosterone more than $T^+$ ($ul_{test,T^p} > ul_{test,T^+}$), $T^+$ can consume and grow on the limited testosterone present, and this is enough for the density-dependent competition to drive $T^p$ to extinction. Without $T^p$ to provide testosterone, $T^+$ subsequently goes extinct. \item When $T^p$ is weakly limited by testosterone relative to $T^+$ ($ul_{test,T^p} \leq ul_{test,T^+}$), both cells coexist. Due to weaker testosterone limitation, $T^p$ can grow faster initially and secrete enough testosterone for $T^+$ without being negatively affected by $T^+$. This is visualised in \autoref{fig_Tpos-Tpro_testlims}. \item When both are severely testosterone limited but not oxygen limited, $T^p$ causes $T^+$ to go extinct. However, in a special scenario when both are oxygen limited with $T^+$ being more limited, coexistence is observed. A balance of sort is achieved here, where, in the initial period of low oxygen, $T^p$ can grow more than $T^+$ and secrete enough testosterone to sustain both population but doesn’t grow as much as to drive $T^+$ to extinction. This is visualised in \autoref{fig_Tpos-Tpro_o2lims}. \end{enumerate} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Tpos-Tpro_testlims} \caption[Pairwise $T^+ - T^p$ time-series, testosterone limitation]{Pairwise $T^+ - T^p$ time-series, when $T^p$ is more testosterone limited than $T^+$ and when $T^+$ is more testosterone limited than $T^p$. 
$T^p$ is more limited testosterone limited at $ul_{test,T^+}=0.3,ul_{test,T^p}=0.5$ and $T^+$ is limited more at $ul_{test,T^+}=0.5,ul_{test,T^p}=0.3$.} \label{fig_Tpos-Tpro_testlims} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Tpos-Tpro_o2lims} \caption[Pairwise $T^+ - T^p$ time-series, oxygen limitation]{Pairwise $T^+ - T^p$ time-series, when both cell types are testosterone limited and not oxygen limited at $ll_{O_2,T^+}=0.0, ll_{O_2,T^p}=0.0$ and $T^+$ is oxygen limited and $T^p$ moderately at $ll_{O_2,T^+}=0.6, ll_{O_2,T^p}=0.4$.} \label{fig_Tpos-Tpro_o2lims} \end{figure} As with $T^p-T^-$, cell type competitive strategies were encoded in terms of levels of resource limitation for each cell type. Three levels each of $T^p\ test$ and $T^+\ test$ limitation: no, moderate and severe corresponding to $ul_{test,T^i}=0.1, 0.3, 1$ respectively, Three levels each of $T^p\ O_2$ and $T^+\ O_2$ limitation: low, moderate and severe corresponding to $ll_{O_2,T^i}=0, 0.4, 0.8$ respectively were considered and pairwise competitive runs were done over some combinations of these (as shown in \autoref{tab_Tpos-Tpro_cases}) with varying initial cell seeding. Only two levels of limitations were considered for each resource when combinations of both $O_2$ or $test$ limitations were done to reduce the number of combinations for better interpretability. Similarly, different levels of $O_2$ or $test$ production is not considered in these cases for the same reason. Production terms will ultimately affect resource availability, and adjusting the response function of the cell should be qualitatively equivalent to reducing actual resource concentrations. \begin{table} \centering \begin{tabular}{|l|l|l|l|} \hline \textbf{$\boldsymbol{T^+\ O_2}$ limitation} & \textbf{$\boldsymbol{T^p\ O_2}$ limitation} & \textbf{$\boldsymbol{T^+\ test}$ limitation} & \textbf{$\boldsymbol{T^p\ test}$ limitation} \\ \hline low & low & no & no \\ \hline low & low & no & moderate \\ \hline low & low & no & severe \\ \hline low & low & moderate & no \\ \hline low & low & moderate & moderate \\ \hline low & low & moderate & severe \\ \hline low & low & severe & no \\ \hline low & low & severe & moderate \\ \hline low & low & severe & severe \\ \hline low & moderate & no & no \\ \hline low & moderate & no & moderate \\ \hline low & moderate & moderate & no \\ \hline low & moderate & moderate & moderate \\ \hline low & severe & no & no \\ \hline moderate & low & no & no \\ \hline moderate & low & no & moderate \\ \hline moderate & low & moderate & no \\ \hline moderate & low & moderate & moderate \\ \hline moderate & moderate & no & no \\ \hline moderate & moderate & no & moderate \\ \hline moderate & moderate & moderate & no \\ \hline moderate & moderate & moderate & moderate \\ \hline moderate & severe & no & no \\ \hline severe & low & no & no \\ \hline severe & moderate & no & no \\ \hline severe & severe & no & no \\ \hline \end{tabular} \caption{Table of cases for $T^+$ - $T^p$ pairwise} \label{tab_Tpos-Tpro_cases} \end{table} \newpage The following were observed from the different cases. For better visualisation, the figures are divided into testosterone limitations in \autoref{fig_Tpos-Tpro_cases_test}, oxygen limitations in \autoref{fig_Tpos-Tpro_cases_o2} and combinations of the limitations in \autoref{fig_Tpos-Tpro_cases_combi}. \begin{enumerate} \item Severe limitation of either oxygen or testosterone for $T^+$ relative to $T^p$ causes it to go extinct. 
In a special case, when $T^p$ numbers are high enough to produce an excess of testosterone, a small fraction of $T^+$ survives regardless of the strength of $T^+$ test limitation. Conversely, when neither resource is limiting, coexistence occurs at all seeding densities and proportions of $T^p$, which suggests that competitive exclusion of either cell type is strongly dependent on environmental conditions and resource limitation. When $T^+$ is moderately oxygen limited relative to $T^p$, $T^+$ can coexist at low initial density of $T^p$ but goes extinct at higher initial densities. \item $T^p$ is driven to extinction in every case where $T^p$ limitation of either oxygen or testosterone is more severe relative to $T^+$ limitation of the same resource. Extinction of $T^p$ then leads to extinction of $T^+$ trivially. Such is the case for the most part with moderate limitation of testosterone for $T^p$ relative to $T^+$. However, this $T^p$ extinction is seen to be rescued for higher initial density of $T^p$ relative to $T^+$ as this allows the former to overcome competition, leading to coexistence. \item In very broad terms, coexistence is more common when the strength of limitation of either resource is the same for both cell types-these are the main diagonals in \autoref{fig_Tpos-Tpro_cases_test} and \autoref{fig_Tpos-Tpro_cases_o2}. Generally, increasing the relative proportion of $T^p$ gives it a competitive edge presumably by increasing net availability of testosterone in the system. However, under high testosterone limitation for both cell types, a larger $T^p$ proportion is marginally detrimental to $T^p$ success possibly due to density-dependent intraspecific competition. \item With moderate limitation of oxygen for $T^p$ relative to $T^+$, $T^+$ still requires testosterone from $T^p$ for survival which could weaken the growth inhibition of $T^p$, despite the lower oxygen limitation of $T^+$ compared to $T^p$. Interestingly, this is also the case with coexistence at lower final $T^p$ proportions than any other case. Coexistence here is therefore driven by the dependence on $T^p$ by $T^+$ for testosterone, which overrides any advantage from a better oxygen use strategy. \item Coexistence is also observed when $T^+$ is moderately testosterone limited relative to $T^p$. However, in this case, a lower initial proportion of $T^p$ favours $T^p$ and leads to a dip in the plot. At a low initial proportion of $T^p$, $T^+$ being limited by testosterone dies out until sufficient testosterone is established and this might give an advantage for $T^p$ to establish a larger population before $T^+$ has the capacity to compete. \item The behaviour of the system is very similar if the resource limitation is symmetric across the two cell types. This can be seen from the first and last columns or from the first and last rows of \autoref{fig_Tpos-Tpro_cases_combi}. Although, with the higher testosterone limitations of $T^p$, a higher $T^p$ initial seeding is required to have $T^p$ overcome suppression by $T^+$. Additionally, when $T^p$ is moderately limited by oxygen relative to $T^+$, the higher testosterone limitation of $T^+$ leads to higher $T^p$ required for the testosterone. \item When both testosterone and oxygen are moderately limiting for a cell type relative to the other, the combined overall limitation is severe and that particular cell type is driven to extinction similar to when only one resource was severely limiting. 
\item When $T^p$ is moderately limited by oxygen relative to $T^+$ and $T^+$ moderately limited by testosterone relative to $T^p$, a balance is achieved. $T^+$ can outcompete $T^p$ due to the excess oxygen but soon is limited by testosterone and has to allow a sizeable population of $T^p$ to grow to maintain the required testosterone levels. However, with the inverse case where $T^p$ is testosterone limited and $T^+$ is oxygen limited, the outcomes are unstable and it switches from $T^p$ driven to extinction at low initial proportion to $T^+$ going extinct for higher initial proportion. \item Additionally, total population size has a weaker effect than initial proportion for the dynamics and outcomes for each particular case. \end{enumerate} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{Tpos-Tpro_cases_test} \caption{$test$ limitations. Columns: $T^p\ test$ limitation, Rows: $T^+\ test$ limitation.} \label{fig_Tpos-Tpro_cases_test} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{Tpos-Tpro_cases_o2} \caption{$O_2$ limitations. Columns: $T^p\ O_2$ limitation, Rows: $T^+\ O_2$ limitation.} \label{fig_Tpos-Tpro_cases_o2} \end{subfigure} \caption[]{(Continued on next page)} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{Tpos-Tpro_cases} \caption{Combination of $test$ and $O_2$ limitations. Columns: $T^+,T^p\ test$ limitations, Rows: Columns: $T^+,T^p\ O_2$ limitations.} \label{fig_Tpos-Tpro_cases_combi} \end{subfigure} \caption[Final $T^p$ ratio of pairwise $T^+ - T^p$ runs under different cases]{Final $T^p$ ratio of pairwise $T^+ - T^p$ runs under different cases. Note: Ratio = -0.1 is used when both cell types go extinct.} \label{fig_Tpos-Tpro_cases} \end{figure} \chapter{All cell types-interactions and competition outcomes} With all three cell types, the number of combinations and permutations increase combinatorially which restricts the tractability of giving different strategies to each cell type as has been done with the pairwise simulations. We are therefore starting with a simpler case of the same strategy for all the three cell types. The strategies are still defined in terms of levels of resource limitations, with the distinction that now, the level of each strategy is the same for all the three cell types. The values of lower and upper limits corresponding to each level are given in \autoref{tab_all3_limits}. \begin{table} \centering \begin{tabular}{|l|l|l|l|} \hline \textbf{Resource} & \textbf{Limitation} & \textbf{$\boldsymbol{ll_{res,i}}$} & \textbf{$\boldsymbol{ul_{res,i}}$} \\ \hline \multirow{3}{*}{Oxygen} & no & 0.0 & 1.1 \\ \cline{2-4} & low & 0.0 & 1.0 \\ \cline{2-4} & moderate & 0.4 & 0.1 \\ \hline \multirow{3}{*}{Testosterone} & no & 0 & 0.1 \\ \cline{2-4} & moderate & 0 & 0.3 \\ \hline \end{tabular} \caption[Table of resource limitations for all cell types]{Table of limits corresponding to limitations for different resources of all three cell-types} \label{tab_all3_limits} \end{table} \newpage Competitive runs were done over all the six combinations of limitations for two different ratios of seeding and three initial total seedings. The following were observed from the different cases as visualised in \autoref{fig_all3}. The time-series of the same is given in \autoref{fig_all3-time-series}. 
\begin{enumerate} \item Moderate limitation of testosterone leads to stronger interspecific competition relative to intraspecific competition as $T^p$ and $T^+$ numbers cannot increase enough to produce self-inhibition. In these cases, inhibition by $T^-$ is much stronger and they are driven to extinction, as opposed to when $T^p$ and $T^+$ coexisted when seeded without $T^-$. However, consistent with the $T^p - T^+$ results, coexistence can be recovered between all three cell types when $T^p$ is seeded at a much higher proportion than the other two, presumably because of higher net available testosterone in the system. Therefore, when testosterone is a strongly limiting resource, it also leads to strong positive dependence on $T^p$ density for coexistence. \item No limitation of testosterone leads to weaker interspecific competition relative to intraspecific competition, but only above a threshold proportion of $T^p$ that is required to produce a minimum amount of testosterone for survival. Again, we see that self-production of testosterone by $T^p$ leads to progressively lower limitation of the hormone across runs with increasing number of $T^p$ cells, as well as within a given run with increasing time. Additionally, in the moderate testosterone limitation cases, it can be seen that lower oxygen limitation reinforces this positive feedback on $T^p$ further, leading to coexistence that is further biased towards the $T^p - T^+$ pair. \item Testosterone then acts as a private resource and the primary limitation between $T^p$ and $T^+$ that serves to produce positive feedback above some minimum test concentration, thus leading to coexistence between all three cell types even though $T^-$ has a significantly shorter doubling time. \end{enumerate} Taken together, the resource limitation space can be divided into three broad zones of cell type coexistence in the system. At high limitations of testosterone, the growth of $T^p$ is so strongly inhibited that coexistence is rare, independent of the availability of oxygen; indeed, the addition of oxygen limitation only drives $T^p$ down further and makes coexistence even more remote. At the other extreme of low limitation of testosterone, $T^p$ growth inhibition is relieved and the positive feedback from $T^p$-produced testosterone is strong enough that oxygen limitations become relatively less important for coexistence. This is abundantly clear from when $T^p$ is seeded at a much higher density than the other cell types [\autoref{fig_all3_8:1:1}]. Oxygen limitations are seen to impact coexistence only when testosterone availability is above some minimum threshold required for $T^p - T^+$ growth, but not so high as to saturate all growth. In this range of resource availability, within the limits of the parameter values tested here, continuous responses to change in resource concentration are observed, and the outcomes of competition are pushed either towards coexistence or extinction of $T^p$ and $T^+$ by the oxygen limitation. Therefore, across these three zones, limitations of oxygen seem to have a subordinate impact on competition and instead supplement the competition imposed by testosterone; where oxygen limitation does affect competitive outcomes, it does so in the same direction as testosterone limitation. \begin{figure}[h!] 
\centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_1:1:1} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1 } \label{fig_all3_1:1:1} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_efficiency_8:1:1} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1} \label{fig_all3_8:1:1} \end{subfigure} \caption[Final ratio of all cell types under different limitations]{Final ratio of all cell types under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions (subfigures).} \label{fig_all3} \end{figure} \chapter{All cell types-therapy outcomes} \section{Standard of care (SOC)} Under standard-of-care, the drug is applied from the beginning of the simulation at the maximum tolerated dose for all the cases explored in the previous chapter. In our model, this is simulated as a permanently reduced rate of testosterone secretion by $T^p$. We observe from \autoref{fig_therapy-SOC} that $T^+$ and $T^p$ go extinct in all the cases, regardless of the limitations of either resource, the seeding proportion or initial total seeding. This can be rationalised based on the results from the previous chapter, where testosterone limitation had a similarly drastic effect on coexistence, or lack thereof, between the three cell types. In this case, with therapy, the nearly immediate elimination of $T^+$ and $T^p$ due to insufficient levels of testosterone leads to competitive release of $T^-$, and the total population reaches its maximum effective carrying capacity. Given that abiraterone acts by suppressing testosterone secretion, an entire population of $T^-$ cells would be fully unresponsive to abiraterone treatment, and further treatment cannot lead to any reduction of cell number. \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy-SOC_1:1:1} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1} \label{fig_therapy-SOC_1:1:1} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy-SOC_8:1:1} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1} \label{fig_therapy-SOC_8:1:1} \end{subfigure} \caption[Final ratio of all cell types under standard-of-care]{Final ratio of all cell types under standard-of-care under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions (subfigures).} \label{fig_therapy-SOC} \end{figure} \newpage \section{Adaptive Therapy (AT)} Implementation of adaptive therapy (AT) is based on thresholds of population size; the On threshold is the size above which therapy would be applied and the Off threshold is the size below which therapy would be withdrawn. A range of such size thresholds were tried out for the case where testosterone is not limiting and oxygen has low limitation. For the purposes of this model, abiraterone therapy thresholds were defined in terms of the population sizes of the hormone-dependent subpopulations alone, $T^+$ and $T^p$. There are some indications from the literature that distinguishing between hormone-dependent and -independent fractions of prostate cancer cells may be possible clinically based on levels of specific prostate-specific antigen (PSA) types secreted by each \cite{Takahashi}. The following general observations could be made based on our exploration of population size thresholds, some of which are visualised in \autoref{fig_therapy-AT_standardization}. 
\begin{enumerate} \item In the AT modelling literature, a $50\%$ rule is commonly-used where therapy is applied above the initial population and withdrawn below $50\%$ of that. In our exploration, we found that for low threshold sizes, the $T^p$ and $T^+$ populations are so low that competitive inhibition by $T^-$ drives them to extinction. This low threshold size was also found to include the population size range where the $50\%$ rule would operate, which casts some doubt on the effectiveness of this threshold. \item It follows directly from above that a higher threshold would be better for $T^p$ and $T^+$ to survive the competition by $T^-$ as well as to suppress $T^-$, while increased effectiveness of a higher threshold has also been shown elsewhere \cite{Hansen}. However, increasing the threshold close to the effective carrying capacity eventually leads to no therapy being applied for the entire simulation duration as the population sizes never cross the on threshold. While there may be some grounds for a clinical decision not to apply any treatment, such a situation is not suited for an exhaustive theoretical study of the factors affecting AT. The threshold of On:6000 and Off:4000 was therefore chosen for further exploration as a reasonable middle ground between competitive release and no therapy. \item As mentioned earlier, only $T^p$ and $T^+$ cell types were considered for the thresholds for therapy. For the sake of completeness, the case where all the three cell types are considered for the thresholds for therapy was also tried out as visualised in \autoref{fig_therapy-AT_standardization-Total}. However, all the cases led to competitive release in the initial few days, suggesting that total population size makes for a poorer cue for AT application and withdrawal. \end{enumerate} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy-standardization} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy-standardization-sw} \end{subfigure} \caption[Standardisation of threshold for adaptive therapy]{Standardisation of threshold for adaptive therapy, Columns: On-Off threshold, Rows: $T^p:T^+:T^-$ Seeding} \label{fig_therapy-AT_standardization} \end{figure} \newpage \subsection{Without delay} With fixed On and Off thresholds of 6000 and 4000 respectively, AT was then applied systematically to every combination of resource limitation studied in the previous section. The results from these different cases are summarised below and visualised in \autoref{fig_therapy-AT}. \begin{enumerate} \item Tumours with higher numbers of $T^p$ and $T^+$ would be more responsive to abiraterone and hence more treatable. Coexistence is of importance here as extinction of $T^p$ and $T^+$ would lead to no response. \item For cases when testosterone is moderately limiting and testosterone levels below the requirement of $T^p$ and $T^+$, these two cell types go extinct just by the competition from $T^-$ and produce no response from abiraterone naturally. Therefore, abiraterone efficacy depends strongly on $T^p$ seeding densities and total population, as with coexistence. \item In cases where coexistence is achieved, the $T^-$ cells quickly replace the space left by the dead $T^p$ and $T^+$ cells on applying therapy. In periods of no therapy, $T^p$ and $T^+$ cells compete with and replace the $T^-$ cells, but are soon met with therapy as they cross the on threshold. 
The total population size therefore remains high for most of the simulation, except when abiraterone is being applied where the $T^p$ population falls sharply. Despite this dip, the total population size recovers rapidly once therapy is withdrawn although there is some indication that total population size subsequently declines gradually until the next application, as $T^-$ cells are driven down by competition. Net decrease in the total population size itself, across multiple oscillations, is only seen in some combinations of resource limitation. In particular, this happens when testosterone is limiting (moderate limitation case) and/or the tumour population is still highly responsive to abiraterone (an active and considerable proportion of $T^p$ in the population), raising the possibility that even though AT, with a fixed window of treatment, outperforms SOC across the board, its effectiveness is modified strongly by the ecological state of the tumour population, in terms of cell type proportions and resource limitations. \end{enumerate} \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_1:1:1-1000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 1000} \label{fig_therapy-AT_1:1:1-1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_8:1:1-1000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 1000} \label{fig_therapy-AT_8:1:1-1000} \end{subfigure} \caption[]{(Continued on next page)} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_1:1:1-2000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT_1:1:1-2000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_8:1:1-2000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT_8:1:1-2000} \end{subfigure} \caption[]{(Continued on next page)} \end{figure} \begin{figure}[h!]\ContinuedFloat \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_1:1:1-4000} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 4000} \label{fig_therapy-AT_1:1:1-4000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_8:1:1-4000} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 4000} \label{fig_therapy-AT_8:1:1-4000} \end{subfigure} \caption[Time-series of all cell types with adaptive therapy]{Time-series of all cell types with adaptive therapy (On:6000, Off:4000) under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions, initial total seeding (subfigures).} \label{fig_therapy-AT} \end{figure} \clearpage \subsection{With delay} As noted earlier, higher the amount of available testosterone, weaker the interspecific competition relative to intraspecific competition and therefore, better the coexistence. Since coexistence is tied strongly to the effectiveness of therapeutic outcomes, it stands to reason that the window of population sizes of $T^p$ and $T^+$ used in adaptive therapy be chosen to allow for sufficient numbers of $T^p$ and $T^+$ to remain in the population. 
Applying this idea specifically to the case with no limitation for either oxygen or testosterone, $T^p$ and $T^+$ population fractions increase monotonously with time even without treatment. It is therefore possible that early treatment could be detrimental to coexistence by causing early competitive release of $T^-$, and it is tempting to speculate whether delaying the onset of treatment could improve therapeutic outcomes by allowing for a better balance of $T^p - T^+$ and $T^-$. However, the conditions that we have tested here, shown in \autoref{fig_therapy-AT-delay}, do not show such an advantage to delay, possibly because we do not see much variability in the population fractions of each cell type within the delay time periods considered here. It is also important to note that this idea of delayed treatment does not account for the physiological cost of maintaining a growing tumour population until treatment is administered, which would be very important in a clinical setting. \begin{figure}[h!] \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_200day_1:1:1} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT-delay200_1:1:1-2000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{All3_therapy_200day_8:1:1} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT-delay200_8:1:1-2000} \end{subfigure} \caption[Time-series of all cell types with delayed adaptive therapy]{Time-series of all cell types with adaptive therapy (On:6000, Off:4000) delayed by 200 days under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions (subfigures).} \label{fig_therapy-AT-delay} \end{figure} \clearpage \subsection{Combination therapy with docetaxel} Another aspect of prostate cancer therapy that could be of translational value to the model is combination therapy. Typically, hormone-specific agents like abiraterone are pairs with a general cytotoxic drug like docetaxel that causes cell death independent of cell type \cite{West}. A comprehensive exploration of combination therapy regimes would be time-consuming, but a smaller scale test-of-concept analysis was attempted here, in which AT was administered with both abiraterone and docetaxel; abiraterone was still responsive to $T^p - T^+$ alone, but docetaxel was administered based on thresholds of total population size. Earlier data from AT with abiraterone alone leads to the expectation that there are potential spaces where docetaxel could be used to alleviate some of the $T^-$ pressure on $T^p$ and $T^+$, especially where they’re driven to extinction. However, initial data visualised in \autoref{fig_therapy-AT-combi} show that the direct negative effect of docetaxel on $T^p$ and $T^+$ outweigh any positive effect from reduction of $T^-$ competition, suggesting that the scope for docetaxel application, at least based on the current modality, is highly limited in this system as it disrupts the sensitive balance of numbers between the three cell types. Nevertheless, more extensive testing is warranted. \begin{figure}[h!] 
\centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{All3_therapy-combi_1:1:1} \caption{Equal Seeding - $T^p:T^+:T^-$ :: 1:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT-combi_1:1:1-2000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.9\textwidth]{All3_therapy-combi_8:1:1} \caption{High $T^p$ seeding- $T^p:T^+:T^-$ :: 8:1:1, Initial Total seeding: 2000} \label{fig_therapy-AT_combi_8:1:1-2000} \end{subfigure} \caption[Time-series of all cell types with combination adaptive therapy]{Time-series of all cell types with combination adaptive therapy of abiraterone (On:6000, Off:4000; $T^+ + T^p$) and docetaxel (On:6000, Off:4000; $T^+ + T^p + T^-$) under different oxygen limitation (columns), testosterone limitation (rows) and initial seeding proportions (subfigures). Note: The timescale of drug holiday is too small to be visible in this resolution and hence, the dtx dosage appears as a continuum.} \label{fig_therapy-AT-combi} \end{figure}
{ "alphanum_fraction": 0.7510827388, "avg_line_length": 97.4268292683, "ext": "tex", "hexsha": "b63da39bcb50958fc07f0567c8f4670e5dcdae99", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_forks_repo_path": "writing/MSThesis/chapters/Results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_issues_repo_path": "writing/MSThesis/chapters/Results.tex", "max_line_length": 1539, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e4decacd5779e85a68c81d0ce3bedf42dea2964f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Harshavardhan-BV/Cancer-compe-strat", "max_stars_repo_path": "writing/MSThesis/chapters/Results.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-18T15:54:26.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-18T15:54:26.000Z", "num_tokens": 10638, "size": 39945 }
\chapter{Product Brief} \label{product-brief} The Roa Logic AHB-Lite Timer IP is a fully parameterized soft IP implementing a user-defined number of timers and functions as specified by the RISC-V Privileged 1.9.1 specification. The IP features an AHB-Lite Slave interface, with all signals defined in the \emph{AMBA 3 AHB-Lite v1.0} specifications fully supported, supporting a single AHB-Lite based host connection. Bus address \& data widths as well as the number of timers supported are specified via parameters. The timebase of the timers is derived from the AHB-Lite bus clock, scaled down by a programmable value. The module features a single Interrupt output which is asserted whenever an enabled timer is triggered \begin{figure}[tbh] \includegraphics{assets/img/AHB-Lite-Timer-sig.png} \caption{AHB-Lite Timer} \label{fig:ahb-lite-timer-sig} \end{figure} \section{Features}\label{features} \begin{itemize} \item AHB-Lite Interface with programmable address and data width \item User defined number of counters (Up to 32) \item Programmable time base derived from AHB-Lite bus clock \end{itemize}
{ "alphanum_fraction": 0.7887700535, "avg_line_length": 32.0571428571, "ext": "tex", "hexsha": "ab3d24601f8b72712d96314dcd988bd6b13a4bc4", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2019-10-25T06:08:40.000Z", "max_forks_repo_forks_event_min_datetime": "2018-08-22T04:45:00.000Z", "max_forks_repo_head_hexsha": "03b7e81400d51774ed5bc6fb4341bcd8af689bf5", "max_forks_repo_licenses": [ "RSA-MD" ], "max_forks_repo_name": "vfinotti/ahb3lite_timer", "max_forks_repo_path": "docs/tex/introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "03b7e81400d51774ed5bc6fb4341bcd8af689bf5", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "RSA-MD" ], "max_issues_repo_name": "vfinotti/ahb3lite_timer", "max_issues_repo_path": "docs/tex/introduction.tex", "max_line_length": 72, "max_stars_count": 8, "max_stars_repo_head_hexsha": "03b7e81400d51774ed5bc6fb4341bcd8af689bf5", "max_stars_repo_licenses": [ "RSA-MD" ], "max_stars_repo_name": "vfinotti/ahb3lite_timer", "max_stars_repo_path": "docs/tex/introduction.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-17T23:13:34.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-23T12:36:39.000Z", "num_tokens": 299, "size": 1122 }
\documentclass[12pt]{article} \usepackage[top=1in,bottom=1in,left=0.75in,right=0.75in,centering]{geometry} \usepackage{fancyhdr} \usepackage{epsfig} \usepackage[pdfborder={0 0 0}]{hyperref} \usepackage{palatino} \usepackage{wrapfig} \usepackage{lastpage} \usepackage{color} \usepackage{ifthen} \usepackage[table]{xcolor} \usepackage{graphicx,type1cm,eso-pic,color} \usepackage{hyperref} \usepackage{amsmath} \usepackage{wasysym} \usepackage{latexsym} \usepackage{amssymb} \def\course{CS 4102: Algorithms} \def\homework{Module 2 - Graphs: Basic Written HW} \def\semester{Spring 2021} \newboolean{solution} \setboolean{solution}{false} % add watermark if it's a solution exam % see http://jeanmartina.blogspot.com/2008/07/latex-goodie-how-to-watermark-things-in.html \makeatletter \AddToShipoutPicture{% \setlength{\@tempdimb}{.5\paperwidth}% \setlength{\@tempdimc}{.5\paperheight}% \setlength{\unitlength}{1pt}% \put(\strip@pt\@tempdimb,\strip@pt\@tempdimc){% \ifthenelse{\boolean{solution}}{ \makebox(0,0){\rotatebox{45}{\textcolor[gray]{0.95}% {\fontsize{5cm}{3cm}\selectfont{\textsf{Solution}}}}}% }{} }} \makeatother \pagestyle{fancy} \fancyhf{} \lhead{\course} \chead{Page \thepage\ of \pageref{LastPage}} \rhead{\semester} %\cfoot{\Large (the bubble footer is automatically inserted into this space)} \setlength{\headheight}{14.5pt} \newenvironment{itemlist}{ \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt}} {\end{itemize}} \newenvironment{numlist}{ \begin{enumerate} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt}} {\end{enumerate}} \newcounter{pagenum} \setcounter{pagenum}{1} \newcommand{\pageheader}[1]{ \clearpage\vspace*{-0.4in}\noindent{\large\bf{Page \arabic{pagenum}: {#1}}} \addtocounter{pagenum}{1} \cfoot{} } \newcounter{quesnum} \setcounter{quesnum}{1} \newcommand{\question}[2][??]{ \begin{list}{\labelitemi}{\leftmargin=2em} \item [\arabic{quesnum}.] {#2} \end{list} \addtocounter{quesnum}{1} } \definecolor{red}{rgb}{1.0,0.0,0.0} \newcommand{\answer}[2][??]{ \ifthenelse{\boolean{solution}}{ \color{red} #2 \color{black}} {\vspace*{#1}} } \definecolor{blue}{rgb}{0.0,0.0,1.0} \begin{document} \section*{\homework} %---------------------------------------------------------------------- \question[3]{ Describe an algorithm that, given a directed graph G represented as an \emph{adjacency matrix}, returns whether or not the graph contains vertex with in-degree $|V| - 1$ and out-degree $0$. In other words, does the graph have a node such that every other node points to it, but it does not point to any other node. Your algorithm must be $O(V)$. Note that there are $\Theta(V^2)$ cells in your adjacency matrix so you'll need to be clever here. } \answer[0.5in]{...} \question[1]{ Let's say a graph $G$'s {\em circumference} is the number of edges in the shortest cycle in $G$. Describe an efficient algorithm to find the circumference of a graph in $\Theta(V \times E)$. (Note: you must make use of algorithms studied in this module, and not re-invent the wheel.) } \answer[0.5in]{...} \question[3]{ Let $G$ be an undirected graph with $n$ nodes (let's assume $n$ is even). Prove or provide a counterexample for the following claim: If every node of $G$ has a degree of at least $\frac{n}{2}$, then $G$ must be connected. } \answer[0.5in]{...} \question[1]{ For a given undirected graph $G$, prove that the depth of a DFS tree cannot be smaller than the depth of the BFS tree. (Clearly state your proof strategy or technique.) 
} % ------------------------------------------ % ------------------------------------------ % ------------------------------------------ \iffalse \question[1]{ In the CLRS code for {\em DFS-Visit} at line 8, we have completed recursively visiting all then un-visited vertices adjacent to vertex $u$, so we change $u$'s color to black. For this problem, consider what happens if instead we change the color of $u$ to {\em white} at this line. Explore what this causes to happen by tracing this altered algorithm on a small connected undirected graph. After you have done that, submit answers to the following questions. \begin{enumerate} \renewcommand{\theenumi}{\Alph{enumi}} \item In no more than a few sentences, clearly describe what the ``DFS'' tree from an initial node $s$ found by this algorithm represents. (Note: we put DFS in quotes here because with this change this algorithm is no longer DFS!) \item Give a mathematical formula (not just an order class) for the number of leaves in this tree for a worst-case input graph. Describe this worst-case input graph. (Again, tracing this algorithm on small connected graphs may help you find the answer more quickly.) \end{enumerate} } \question[1]{ The textbook describes two variables that can be associated with each node in a graph $G$ during the execution of \emph{Depth-First Search}: discovery time ($v.d$) and finish time ($v.f$). These are integer values that are unique. Every time a node is discovered (i.e., DFS sees the node for the first time) that node's $v.d$ is set to the next available integer. When DFS is finished exploring ALL of this node's children, $v.f$ is set to the next available integer. For this question, consider a single edge in a graph $G$ after DFS finishes executing. You might need to reference the textbook or slides for definitions of tree edge, forward edge, back edge, and cross edge. Argue that each edge $e=(u,v)$ is: \begin{enumerate} \item A tree edge or forward edge if and only if $u.d < v.d < v.f < u.f$ \item A back edge if and only if $v.d \leq u.d < u.f \leq v.f$ \item A cross edge if and only if $v.d < v.f < u.d < u.f$ \end{enumerate} You can describe your answers intuitively, but your answers must be clearly articulated. } \question[1]{ Given an undirected graph $G$, the {\em eccentricity} of a node $v$ is the largest of the shortest possible distances from $v$ to any other node in the graph. The minimum eccentricity in the graph is called the {\em graph radius} for $G$. All the nodes in $G$ that have eccentricity equal to the graph radius form a set called the {\em graph center} of G. Describe (using pseudo-code or a very clea rtext explanation) an efficient algorithm to find the graph center of a graph $G$ and describe its complexity. (Note: you must make use of algorithms studied in this module, and not re-invent the wheel.) } \fi \end{document}
{ "alphanum_fraction": 0.711790393, "avg_line_length": 37.7176470588, "ext": "tex", "hexsha": "4f4159f805fb09a97690bfba67d71be51b3b7936", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-03-29T23:06:24.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-31T21:10:50.000Z", "max_forks_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "csonmezyucel/cs4102-f21", "max_forks_repo_path": "homeworks/module2/graphs-written-basic.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "csonmezyucel/cs4102-f21", "max_issues_repo_path": "homeworks/module2/graphs-written-basic.tex", "max_line_length": 467, "max_stars_count": null, "max_stars_repo_head_hexsha": "75f93b5801644e50cb512f9d00e0fda3e9d5dcf7", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "csonmezyucel/cs4102-f21", "max_stars_repo_path": "homeworks/module2/graphs-written-basic.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1824, "size": 6412 }
% Preamble. \documentclass[12pt]{article} \usepackage[margin=1.25in]{geometry} \usepackage[fleqn]{amsmath} \usepackage{textcomp} \usepackage{gensymb} \usepackage{amsfonts} \usepackage{enumitem} %\usepackage{tikz} % Include for figures. %\usepackage{subfiles} % Include for subfiles. %% Title macros. \newcommand{\HOMEWORKNUM}{27} \newcommand{\NAME}{D. Choi} \newcommand{\DATE}{2020-07-02} \title{\vspace{-2\baselineskip}MATH 225 - Homework \#\HOMEWORKNUM} \author{\NAME} \date{\DATE} %% Formatting options. %\pagenumbering{gobble} % Include for single-page document. % Document. \begin{document} \maketitle \section*{1.} \textit{Solve:} \begin{equation*} y^{\prime\prime} - 9y = 0, \quad y(0) = 2, \quad y^\prime(0) = -1. \end{equation*} The roots of the characteristic equation are \begin{equation*} r^2 - 9 = 0 \quad \Rightarrow \quad r_1 = 3, \quad r_2 = -3. \end{equation*} With $2$ distinct real roots, the general solution is \begin{equation*} y(t) = c_1 e^{3 t} + c_2 e^{-3 t}. \end{equation*} Using the system of equations given by the initial conditions, \begin{alignat*}{2} y(0) &=& \quad 2 &= c_1 + c_2 \\ y^\prime(0) &=& \quad -1 &= 3 c_1 - 3 c_2, \end{alignat*} the coefficients can be solved for as \begin{equation*} c_1 = \frac{5}{6}, \quad c_2 = \frac{7}{6}. \end{equation*} Thus, \begin{equation*} \boxed{ y(t) = \frac{5}{6} e^{3 t} + \frac{7}{6} e^{-3 t} }. \end{equation*} \section*{2.} \textit{Solve:} \begin{equation*} y^{\prime\prime} + 6y^\prime + 9y = 0, \quad y(0) = 1, \quad y(1) = 1. \end{equation*} The roots of the characteristic equation are \begin{equation*} r^2 + 6r + 9 = 0 \quad \Rightarrow \quad r_1 = -3, \quad r_2 = -3. \end{equation*} With a repeated real root, the general solution is \begin{equation*} y(t) = c_1 e^{-3 t} + c_2 e^{-3 t} t. \end{equation*} Using the system of equations given by the initial conditions, \begin{alignat*}{2} y(0) &=& \quad 1 &= c_1 \\ y(1) &=& \quad 1 &= c_1 e^{-3} + c_2 e^{-3}, \end{alignat*} the coefficients can be solved for as \begin{equation*} c_1 = 1, \quad c_2 = e^3 - 1. \end{equation*} Thus, \begin{equation*} \boxed{ y(t) = e^{-3 t} + (e^3 - 1) e^{-3 t} t }. \end{equation*} \section*{3.} \textit{Solve:} \begin{equation*} y^{\prime\prime} + 7y^\prime + 10y = 0, \quad y(0) = -1, \quad y^\prime(0) = 0. \end{equation*} The roots of the characteristic equation are \begin{equation*} r^2 + 7r + 10 = 0 \quad \Rightarrow \quad r_1 = -2, \quad r_2 = -5. \end{equation*} With $2$ distinct real roots, the general solution is \begin{equation*} y(t) = c_1 e^{-2 t} + c_2 e^{-5 t}. \end{equation*} Using the system of equations given by the initial conditions, \begin{alignat*}{2} y(0) &=& \quad -1 &= c_1 + c_2 \\ y^\prime(0) &=& \quad 0 &= -2 c_1 - 5 c_2, \end{alignat*} the coefficients can be solved for as \begin{equation*} c_1 = -\frac{5}{3}, \quad c_2 = \frac{2}{3}. \end{equation*} Thus, \begin{equation*} \boxed{ y(t) = -\frac{5}{3} e^{-2 t} + \frac{2}{3} e^{-5 t} }. \end{equation*} \end{document}
{ "alphanum_fraction": 0.6367831246, "avg_line_length": 24.272, "ext": "tex", "hexsha": "1eda26a2cb08c7aa1b05c1f9e6559aefe3bfcc96", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Floozutter/coursework", "max_forks_repo_path": "usc-20202-math-225-39425/hw27/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Floozutter/coursework", "max_issues_repo_path": "usc-20202-math-225-39425/hw27/main.tex", "max_line_length": 68, "max_stars_count": null, "max_stars_repo_head_hexsha": "244548f415553f058098cae84ccdd4ce3f58c245", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Floozutter/coursework", "max_stars_repo_path": "usc-20202-math-225-39425/hw27/main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1228, "size": 3034 }
\documentclass[ shortnames]{jss} \usepackage[utf8]{inputenc} \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \author{ Shannon K. Gallagher\\Biostatistics Research Branch\\ National Institute of Allergy\\ and Infectious Diseases \And Benjamin LeRoy\\Dept. of Statistics \& Data Science\\ Carnegie Mellon University } \title{Time invariant analysis of epidemics with \pkg{EpiCompare}} \Plainauthor{Shannon K. Gallagher, Benjamin LeRoy} \Plaintitle{Time invariant analysis of epidemics with EpiCompare} \Shorttitle{\pkg{EpiCompare}} \Abstract{ We present \pkg{EpiCompare}, an \proglang{R} package that suppliments and enhances current infectious disease analysis pipelines and encourages comparisons across models and epidemics. A major contribution of this work is the set of novel \textit{time-invariate} tools for model and epidemic comparisons - including time-invariate prediction bands. \pkg{EpiCompare} embraces \proglang{R}'s \textit{tidy} coding style to make adoption of the package easier and analysis faster. This paper provides an overview of both the tools in and intuition behind \pkg{EpiCompare} and a thorough demonstrating of the tools through a detailed example of a full data analysis pipeline. } \Keywords{keywords, not capitalized, \proglang{Java}} \Plainkeywords{keywords, not capitalized, Java} %% publication information %% \Volume{50} %% \Issue{9} %% \Month{June} %% \Year{2012} %% \Submitdate{} %% \Acceptdate{2012-06-04} \Address{ Shannon K. Gallagher\\ Biostatistics Research Branch\\ National Institute of Allergy\\ and Infectious Diseases\\ 5603 Fishers Lane\\ Rockville, MD 20852\\ E-mail: \email{[email protected]}\\ URL: \url{http://skgallagher.github.io}\\~\\ Benjamin LeRoy\\ Dept. of Statistics \& Data Science\\ Carnegie Mellon University\\ 5000 Forbes Ave.\\ Pittsburgh, PA 15213\\ E-mail: \email{[email protected]}\\ URL: \url{https://benjaminleroy.github.io/}\\~\\ } % Pandoc citation processing % Pandoc header \usepackage{booktabs} \usepackage{longtable} \usepackage{array} \usepackage{multirow} \usepackage{wrapfig} \usepackage{float} \usepackage{xcolor} \usepackage{rotating} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \usepackage{afterpage} \usepackage[normalem]{ulem} \begin{document} \newcommand{\shannon}[1]{\textcolor{orange}{#1}} \newcommand{\shan}[1]{\textcolor{brown}{#1}} \newcommand{\ben}[1]{\textcolor{violet}{#1}} \newtheorem{theorem}{Theorem} \section[Intro]{Introduction}\label{sec:intro} The recent (and on-going) COVID-19 global pandemic has galvanized public interest in understanding more about infectious disease modeling and has highlighted the usefulness of research in the area of infectious disease epidemiology. Infectious diseases inflict enormous burdens on the world: millions of lives lost and trillions of dollars spent yearly. Infectious disease models typically attempt to do one or more of the following: 1) predict the spread of current and future epidemics \citep[e.g. flu prediction][]{Biggerstaff2016}, 2) analyze past and current epidemics to increase scientific knowledge \citep[e.g. historical measle outbreaks][]{Neal2004}, and 3) forecast or project epidemic scenarios under pre-specified parameters \citep[e.g.][]{ferguson2020}. 
At the same time, descriptive statistics and visualizations from universities, many branches and levels of government, and news organizations are an important first step of the process \textcolor{violet}{as has been seen in the current COVID-19 epidemic}\citep{dong2020,cdc-covid-tracker2021,wp-covid-tracker2021}. \footnote{\textcolor{violet}{[Ben says: probably should have a conclusion sentence here - seems to end abruptly. *This is less so the case now.]}} With the many visualization and exploratory tools, models and modeling paradigms, and reviews and comparisons in the literature and through the MIDAS (Models of Infectious Disease Agent Study) network \citep{midasnetwork2021}, this field has a lot of devices to aid an individual practitioner decide the correct approach. For example, \proglang{R} packages such as \pkg{surveillance}, \pkg{EpiModel}, and \pkg{pomp} have all made significant steps in standardizing the flow of the data analysis pipeline for epidemic modeling through digitizing data sets, making accessible statistical models, and providing a plethora of educational material for both coding novices and experts alike \citep{surveillance2017,Jenness2018,King2016}. At the same time, analysis packages often only address a specific portion of the analysis pipeline\textcolor{violet}{\sout{, for instance focusing on certain types of models.}} \textcolor{violet}{These m}odeling tools\textcolor{violet}{\sout{, which}} usually require learning package-specific syntax\textcolor{violet}{\sout{,}} and often don't provide easy ways to compare and assess their models on new data. Moreover, exploring\textcolor{violet}{, \sout{and}} modeling \textcolor{violet}{and comparing} epidemics require transforming and \textit{tidying} data in different ways. To fill these gaps, we present our \proglang{R} package \pkg{EpiCompare}. Our package's primary focus is to aid and advance research in the area of comparison and assessment of epidemic and epidemiological models. In Figure \ref{fig:pipeline}, we illustrate the data analysis pipeline of infectious diseases as 1) data pre-processing, 2) exploratory data analysis (EDA), 3) modeling and simulating, 4) post-processing, and 5) comparison and assessment; where each previous part of the pipeline influences the next. \pkg{EpiCompare} provides tools to aids practitioners in all areas of this pipeline. \begin{figure}[!ht] \centering \includegraphics[width = 1\textwidth]{images/pipeline1.png} \caption{An idealized epidemiological data analysis pipeline.} \label{fig:pipeline} \end{figure} \pkg{EpiCompare} also emphasizes the value of analyzing epidemics in a \textit{time-invariant} way. Epidemics, despite by definition being a process that evolves over time, often need to be compared in a way not constrained to initial times or time scales to understand the processes at play. \textcolor{violet}{Time-invariant analysis can also make it easier to compare state-space models in a more global, holistic fashion. \sout{Moreover, m} M}any current \textcolor{violet}{time-dependent} comparison tools for state-space models (e.g.~SIR models) \textcolor{violet}{\sout{highlight} examine} the proportion of individuals in each state (at a given time) in a piece-wise / marginal fashion. \textcolor{violet}{These \sout{This}} approach\textcolor{violet}{es} may reduce the amount of connections that can be seen, similar to projections of a multidimensional distribution onto a single axis at a time. 
Tools in \pkg{EpiCompare} give the user the ability to extend their toolkit to evaluate epidemics within a time-invariant lens. The goal of \pkg{EpiCompare} is not to supplant existing infectious disease modeling tools and software but, rather, is a concerted effort to create standard and fair comparisons among models developed for disease outbreaks and outbreak data. This paper is broken up into the following sections; section \ref{sec:time-invariant} motivates and showcases tools of time-invariant analysis, section \ref{sec:overview} presents an outline of how \pkg{EpiCompare} aids a practitioner in every step of the pipeline and section \ref{sec:tour} provides a \textcolor{violet}{\sout{thorough}} demonstrating of the tools through a detailed example of a full data analysis pipeline. \section[Time-invariant]{Motivation and tools for time-invariant analysis}\label{sec:time-invariant} \pkg{EpiCompare} delivers \textit{time-invariant} analysis by (1) taking a global, not marginal view of how epidemics move through populations and (2) by treating full epidemics as filamental trajectories. The following section aims to highlight the strengths of \textit{time-invariant} \textcolor{orange}{analysis} and define the mathematical foundations that \pkg{EpiCompare}'s tools stand upon. Mathematically, epidemics are complex objects. They can be hard to assess and compare to one another due to the differences in the diseases, the location where the outbreak occurs, how the affected population reacts, and the time \sout{related}\textcolor{orange}{related} features (including start of the epidemic, speed of infection and more). Time-invariant analysis makes different epidemics easier to compare by removing many time dependent aspects of an epidemic. \sout{Instead,} \textcolor{orange}{Time-invariant analysis} focuses \textcolor{orange}{on the global pattern of an epidemic, via filamental trajectories, and emphasizes the number of lives affected.} \textcolor{violet}{[Ben wants to try this sentence again.]} \subsection[motivating through R0]{Motivating time-invariant analysis through the reproduction number \(R_0\)}\label{sec:r0} Time-invariant analysis, as it appears in \pkg{EpiCompare}, \textcolor{orange}{bypasses} many difficulties \textcolor{orange}{in} comparing different epidemics. With time-invariant analysis, comparing the decades-long outbreak of HIV in the US to a 10 day outbreak of norovirus on a cruise ship is \sout{still} possible. Time-dependent problems can arise when estimating epidemiological parameters, including the reproduction number \(R_0\). \sout{\textcolor{violet}{We will use $R_0$ to motivate the usefulness of time-invariant analysis in this section.}}\footnote{\textcolor{orange}{I don't think this is a necessary sentence.} \textcolor{violet}{I still think it adds value to the story and I'm not sure people really read section titles that are long.}} \(R_0\) is probably the most famous \sout{time-invariant} numerical summary of an epidemic and is often associated with the Susceptible-Infectious-Recovered (SIR) model \citep{hethcote2000}. \(R_0\) is \sout{a one-number summary of a disease and is }defined as the expected number of infections caused by a single infector who is added to a completely susceptible population \citep{anderson1992}. \textcolor{violet}{This \textcolor{orange}{definition has no mention of time and hence} means that $R_0$ is a time-invariant parameter\textcolor{orange}{. 
Yet $R_0$ is} estimated with time-\sout{based}\textcolor{orange}{dependent} data, which can make it a difficult quantity to estimate.} \textcolor{orange}{For example, \cite{Gallagher2020} demonstrate how $R_0$ can be sensitive to time-\sout{based}\textcolor{orange}{dependent} parameters such as the beginning and end of an epidemic, two quantities that generally are}\textcolor{violet}{hard to define precisely.\sout{do not have precise definitions}}. To demonstrate the difficulty of discerning \(R_0\) in \textcolor{violet}{\sout{a}other}\footnote{\textcolor{violet}{I change this so we don't confused readers that we're going show the impact in tools beyond just the estimation itself.}} time-dependent analysis, we first introduce \citet{Kermack1927}'s SIR model. This model captures the transitions from one state to the next as a system of ordinary differential equations, where \(N\) is the total number of individuals, \(\beta\) is the rate of infection, and \(\gamma\) is the rate of recovery, \begin{align}\label{eq:sir-ode} S^\prime(t) &= -\frac{\beta S(t)I(t)}{N} \\ I^\prime(t) &= \frac{\beta S(t)I(t)}{N} - \gamma I(t) \nonumber\\ R^\prime(t) &= \gamma I(t) \nonumber. \end{align} From this model,\textcolor{orange}{the reproduction number is the ratio of the infection rate to the recovery rate,} \(R_0 = \beta/\gamma\)\sout{, aka the ratio of the infection rate compared to the recovery rate.} \textcolor{violet}{From this definition, given \sout{Since}} \(\beta\) and \(\gamma\) are both rates, \textcolor{violet}{it should be clear that} the ratio of the two, \(R_0\), is a time-invariant quantity.\footnote{\textcolor{violet}{I am trying to make it look like we are not repeating ourselves by saying $R_0$ is time-invariant.}} \sout{ Once $R_0$ is estimated, practitioners can infer important epidemic quantities such as the total number of infections or the percent of a population needed to be vacccinated to stop the sustained spread of an epidemic. Moreover, $R_0$ allows us to compare different diseases and different instances of outbreaks on the same scale. }\footnote{cool facts about r0, but not the central point} \textcolor{violet}{[Ben says: It's unclear to me why we have a subtitle here - isn't it just more motivation of time-invariant anlysis with $R_0$? Also, I feel like the story is weak here. The point is to leverage $R_0$ to show the value of time-invariant analysis - this seems a bit more like just discussing properties of $R_0$. In the follow rewrite I use "[" and "]" to indicate that this is a section from your earlier draft.]} \textcolor{orange}{[Shannon says: Tried to tie this better to the previous part since it's no longer a new section. also highlighted tie to time-invariant analysis and $R_0$. I also wanted to bring the punch line (overlapping epidemics = same r0) closer to the beginning so those who don't want to slog through mathematical details can get the takeaway.]}\textcolor{orange}{ Shannon tries again in \textcolor{blue}{blue}} \sout{\textcolor{violet}{[Ben says: this paragraph needs to still be looked at. Also I'm not sure why this particular paragraph was changed so much at all. Currentlly, the way we present comparing these epidemic's $R_0$s isn't well grouped/motivated.]} \textcolor{orange}{Time-invariant analysis helps practitioners to more easily compare $R_0$ from different outbreaks.} For example, consider two epidemics generated from the Kermack and McKendrick SIR equations. 
Since \(R_0\) is such an important quantity, it would be helpful to have more intuitive ways of comparing one \(R_0\) to another. Usually numerical summaries of \(R_0\) are presented which, while helpful, can be confusing when shown alongside epidemic data that are visualized in a traditional, time-dependent manner.

For example, consider two epidemics generated from the Kermack and McKendrick SIR equations where both have the same value of \(R_0\). The first epidemic has parameters \((\beta_1, \gamma_1) = (0.8, 0.4)\) and the second has \((\beta_2, \gamma_2) = (0.64, 0.32)\). Both epidemics have populations of 1000 people with 10 individuals initially infected. An analysis may present an estimate of \(\hat{R}_0 = 2\) alongside state vs.\ time plots like those shown in Figure \ref{fig:different-scales-standard}. The paths of the epidemics in the state vs.\ time view appear to differ from one another, including having different infection peaks. From these traditional time-based plots, there is no intuitive way to conclude that the two epidemics share the same value of \(R_0\).

\begin{CodeChunk}
\begin{figure}[H]

{\centering \includegraphics{Figs/unnamed-chunk-2-1}

}

\caption{\label{fig:different-scales-standard}Example of two epidemics with different $\beta$ and $\gamma$ parameters but the same initial reproduction number $R_0$ = 2. Both epidemics are generated from models with $N = 1000$ individuals with $S(0) = 990$ and $I(0) = 10$.}\label{fig:unnamed-chunk-2}
\end{figure}
\end{CodeChunk}
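For concreteness, epidemics like those in Figure \ref{fig:different-scales-standard} can be generated by numerically integrating the system in Equation~\ref{eq:sir-ode}. The sketch below uses the general-purpose \pkg{deSolve} package with the parameter values stated above; it is only an illustration of the setup and is not necessarily how the figures in this paper were produced.

\begin{CodeChunk}
\begin{CodeInput}
R> library(deSolve)
R> 
R> ## SIR derivatives in the form expected by deSolve::ode()
R> sir_ode <- function(t, state, pars) {
+    with(as.list(c(state, pars)), {
+      N <- S + I + R
+      list(c(-beta * S * I / N,
+              beta * S * I / N - gamma * I,
+              gamma * I))
+    })
+  }
R> 
R> init  <- c(S = 990, I = 10, R = 0)
R> times <- seq(0, 15, by = 0.1)
R> epi1 <- ode(init, times, sir_ode, parms = c(beta = 0.80, gamma = 0.40))
R> epi2 <- ode(init, times, sir_ode, parms = c(beta = 0.64, gamma = 0.32))
\end{CodeInput}
\end{CodeChunk}

Both parameter sets satisfy \(\beta/\gamma = 2\), so the two simulated epidemics share \(R_0 = 2\) even though they unfold at different speeds.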
\pkg{EpiCompare} provides a time-invariant tool to visualize these epidemics in a more intuitive manner, at least with regard to comparing values of \(R_0\). For every time \(t\) we have a point \((S(t), I(t), R(t))\), so we can visualize the trajectory of the epidemic in three-dimensional space (see Figure \ref{fig:different-scales-tern}, left). For state space models like the one in our example, the constraint that \(S(t) + I(t) + R(t)\) always equals \(N\) (the total population size) means we can visualize these points in a two-dimensional \textit{ternary} plot, as seen in Figure \ref{fig:different-scales-tern} (right). In this view it is apparent that the two epidemics are on ``the same path,'' which in this case indicates that they have the same value of \(R_0\).

\begin{CodeChunk}
\begin{figure}[H]

{\centering \includegraphics[width=0.49\linewidth]{images/vis3d}
\includegraphics[width=0.49\linewidth]{Figs/unnamed-chunk-3-2}

}

\caption{\label{fig:different-scales-tern}Left: trajectory of an epidemic in three-dimensional space, plotting $(S(t), I(t), R(t))$. Right: the gray-shaded region and epidemic trajectory from the left panel shown in two-dimensional space. This representation is more commonly known as a ternary plot.}\label{fig:unnamed-chunk-3}
\end{figure}
\end{CodeChunk}

Underlying the time-invariant visualization that allowed for the comparison of \(R_0\) in Figure \ref{fig:different-scales-tern} is the treatment of the epidemic as a single filamental trajectory in the state space. A filamental trajectory can be viewed mathematically as an ordered set of points in space together with all points on the segments connecting consecutive ordered points. For an SIR epidemic, we can represent the associated filamental trajectory \(\psi\) as
\[
\psi = \left \{(S(t), I(t), R(t)): S, I, R \ge 0,\; S + I + R = N \right \}_{t\in[0,T]},
\]
where any strictly monotonically increasing mapping \(\xi : s \to \mathbb{R}\) of the time index would not change the definition of \(\psi\), i.e.~\(\psi_\xi \equiv \psi\), where
\[
\psi_\xi = \left \{(S(\xi(s)), I(\xi(s)), R(\xi(s))): S, I, R \ge 0,\; S + I + R = N \right \}_{s\in[0,T]}.
\]

Since the number of individuals in each state is non-negative and the three states sum to \(N\) at every time point, all points in \(\psi\) lie on a two-dimensional triangular plane in three-dimensional space. As a result, we can visualize the full filamental trajectory of an epidemic in a single ternary plot.
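Continuing the hypothetical \pkg{deSolve} sketch above (and reusing its \code{epi1} and \code{epi2} objects), a ternary view like the right panel of Figure \ref{fig:different-scales-tern} can be drawn with \pkg{ggplot2} and \pkg{ggtern}, the same plotting stack that \pkg{EpiCompare} builds on.

\begin{CodeChunk}
\begin{CodeInput}
R> library(ggplot2)
R> library(ggtern)
R> 
R> both <- rbind(data.frame(as.data.frame(epi1), model = "epidemic 1"),
+                data.frame(as.data.frame(epi2), model = "epidemic 2"))
R> ggplot(both, aes(x = S, y = I, z = R, color = model)) +
+    coord_tern() +
+    geom_path() +
+    labs(x = "S", y = "I", z = "R")
\end{CodeInput}
\end{CodeChunk}

Plotted this way, the two trajectories trace out the same curve, with the slower epidemic simply not as far along it; this is the visual signature of a shared \(R_0\), which we formalize next.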
The filamental trajectories in Figure \ref{fig:different-scales-tern} overlap, and we may suspect that something fundamental links these two different epidemics together. Mathematically, we can show that this fundamental link is \(R_0\). Let the two epidemics be represented as \(\{(S_1(t), I_1(t), R_1(t))\}_{t\geq0}\) and \(\{(S_2(s), I_2(s), R_2(s))\}_{s \geq 0}\), respectively. As in the example, assume both models have the same initial values \((S(0), I(0), R(0))\), and let \(R_0 =\frac{\beta_1}{\gamma_1} = \frac{\beta_2}{\gamma_2}\), where \(\beta_i\) and \(\gamma_i\) are the average infection rate and recovery rate, respectively, for SIR model \(i=1, 2\). Define \(a>0\) to be the relative scalar such that \(\beta_2 = a \beta_1\) and, equivalently, \(\gamma_2 = a \gamma_1\).

\begin{theorem}\label{thm:sir-scale}
Let there be two SIR models as described above. Then for all $t > 0$ there exists an $s>0$ such that $(S_1(t), I_1(t), R_1(t)) = (S_2(s), I_2(s), R_2(s))$. Moreover, $s = \frac{1}{a}t$.
\end{theorem}

The proof of Theorem \ref{thm:sir-scale} relies on a fairly recent result from \cite{Harko2014} and is shown in detail in Proof \ref{proof:thm}. The consequence of Theorem \ref{thm:sir-scale} is that, for two SIR models with the same initial proportion of individuals in each state and the same \(R_0\), every point on the epidemic path of the first model can be mapped to a point on the epidemic path of the second model. In other words, the two epidemics form the same filamental trajectory. For SIR models with similar initial state proportions, time-invariant analysis therefore allows practitioners to compare values of \(R_0\) at a glance.

\subsection[Beyond R0 and SIR]{Time-invariant analysis beyond \(R_0\) and SIR models}\label{sec:beyond-r0-sir}

Through the \(R_0\) example, we see that treating epidemics as filamental trajectories embedded in a lower-dimensional space allows us to more fully compare the overall structure of epidemics and to see how the population is directly impacted. Time-invariant tools can be useful even when the underlying generative model for the epidemic is unknown or has more than three epidemic states.

Viewing epidemics as filamental trajectories provides new ways to compare and examine epidemics in a time-invariant manner. For completed epidemics, one way to examine their filamental trajectories is to represent each trajectory as a finite sequence of equally spaced points.
This representation induces a natural distance between epidemics, specifically
\[
d_\text{equi-distance}(\psi_1, \psi_2) = \int_{s \in [0,1]} \left(\tilde{\psi}_1(s) - \tilde{\psi}_2(s)\right)^2 ds,
\]
where \(\tilde{\psi}_i(s)\) denotes the point along \(\psi_i\) that lies a fraction \(s\) of the total length \(|\psi_i|\) from the start of \(\psi_i\).
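In practice the integral above is approximated directly from the equally spaced representation. A minimal sketch, assuming \code{P1} and \code{P2} are matrices holding the same number of equally spaced points along two filaments (one point per row), is a simple average of squared pointwise differences; \pkg{EpiCompare}'s own implementation (see \code{dist_matrix_innersq_direction()} in Section \ref{sec:tour}) may differ in its details.

\begin{CodeChunk}
\begin{CodeInput}
R> ## discretized version of the equi-distance metric between two filaments
R> equi_distance <- function(P1, P2) {
+    mean(rowSums((P1 - P2)^2))
+  }
\end{CodeInput}
\end{CodeChunk}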
This distance is naturally time-invariant and can be plugged into multiple distance-based assessment tools to examine the overall ``extremeness'' of points, including pseudo-density estimators and depth/local depth functions \citep[for examples see][]{Ciollaro2016, Geenens2017}. These extremeness estimators can be useful when comparing a set of simulated epidemics to the true epidemic, and they do not constrain the number of states in the model.

If a practitioner is interested in understanding an epidemic through a single realization of its outbreak (before the population of individuals becomes susceptible again), then additional time-invariant tools, including prediction regions, can be leveraged. In these settings, \pkg{EpiCompare} goes a step further and treats epidemics more like geometric filaments (i.e., filamental trajectories without an ordering of points) than like filamental trajectories. In \pkg{EpiCompare}, we create prediction regions that contain the top \((1-\alpha)\) proportion of simulated curves by defining geometric regions as the union of small neighborhoods around a subset of the simulations (a subset selected by measures like the pseudo-density or depth estimates above). These regions show where in the state space we expect the epidemic to traverse. Additionally, we can compare prediction regions defined by different models using set-based distances such as the Hausdorff distance, and we can examine how well the true epidemic matches the simulations by checking whether the epidemic's trajectory lies within the prediction region. All of these geometric structures and distance notions apply to epidemics with any number of states, and at the end of Section \ref{sec:overview} we highlight how these prediction regions can aid in visual comparisons for epidemics with three states (like SIR models).

\section[Package overview]{Overview of \pkg{EpiCompare}}\label{sec:overview}

\afterpage{\clearpage}
\begin{sidewaysfigure}[!ht]
\centering
\includegraphics[width = 1\textwidth]{images/pipeline2_1.pdf}
\caption{How \pkg{EpiCompare} supplements and aids in the epidemiological data analysis pipeline.}
\label{fig:pipeline2}
\end{sidewaysfigure}

In this section, we present the tools implemented in \pkg{EpiCompare} and explain how they aid in the data analysis pipeline. In Figure \ref{fig:pipeline2}, we show how our package's functions fit into the data analysis pipeline introduced in Figure \ref{fig:pipeline}. All front-facing functions in \pkg{EpiCompare} aim to be as user-friendly as possible. We also focus on providing the user with ``tidyverse''-style functions that encourage piping objects from one function to the next and that follow clear ``verb'' naming schemes \citep{Wickham2019}.

Although users can incorporate \pkg{EpiCompare} into any step in the data analysis pipeline, there are two primary points of entry. The first is at the very beginning, with pre-processing and visualizing raw data, and the second is after modeling and simulation. Figure \ref{fig:pipeline2} captures these different paths, and we highlight how to leverage \pkg{EpiCompare} functionalities in the paragraphs below.

\textbf{Data pre-processing} The first step of most data analyses is ``cleaning'' the raw data so they can be explored. Before data can be explored, they must be collected. Sometimes individual records are collected, with the times of entry into different states of the epidemic (infection, recovery, etc.) as well as individual information like network structure, location, and sub-population membership. Other data collections focus on aggregate counts of individuals in each epidemic state. In fact, many times only the number of new infections at each time step (e.g.~weekly case counts) is observed. In this setting, compartment totals (the number of individuals in each state) are then imputed from those case counts together with other information about the disease and the population of interest.

In \pkg{EpiCompare}, we focus on understanding the overall impact of an outbreak at the aggregate/population level, which allows for streamlined examination of the overall trends of an epidemic. To help the practitioner examine epidemics through an aggregate/population lens, we provide a function called \texttt{agents\_to\_aggregate()}. This function transforms data about individual agents' initial entry into each state (e.g.~start of infection, start of recovery, etc.) into an aggregate view of how many individuals were in each state at a given time.
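As a brief illustration (worked through in full in Section \ref{sec:tour}), a minimal call passes an individual-level data set, here the \code{hagelloch_raw} measles data analyzed later, along with the columns recording each individual's entry time into the infectious and recovered states:

\begin{CodeChunk}
\begin{CodeInput}
R> library(EpiCompare)
R> library(dplyr)
R> 
R> hagelloch_raw %>%
+    agents_to_aggregate(states = c(tI, tR),
+                        min_max_time = c(0, 55))
\end{CodeInput}
\end{CodeChunk}

The result is a tidy data frame with one row per time step and columns \code{t}, \code{X0}, \code{X1}, and \code{X2}, which Section \ref{sec:tour} renames to \code{time}, \code{S}, \code{I}, and \code{R}.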
Researchers, including \citet{rvachev1985,anderson1992,worby2015}, are often interested in more granular trends that can be detected by aggregating conditional on subpopulations (e.g.~subpopulations defined by age or sex). By combining \pkg{dplyr}::\texttt{group\_by()} with \texttt{agents\_to\_aggregate()}, \pkg{EpiCompare} provides group-level aggregation.

Besides aiding subpopulation analysis, \texttt{agents\_to\_aggregate()} can accommodate a wide range of information about each individual. In fact, the function can account for an arbitrary number of states. This functionality allows the practitioner to aggregate information relative to common states (e.g.~``Susceptible'', ``Infectious'', and ``Recovered'') as well as additional states (e.g.~``Exposed'', ``Immune'', ``Hospitalized''). Additionally, \texttt{agents\_to\_aggregate()} permits indicators for death/exit and birth/entry dates. Overall, this function is a powerful tool for pre-processing data.

\textbf{Exploratory data analysis (EDA)} In the early stages of a project, familiarizing oneself with the data usually means finding useful combinations of visualizations and numerical summaries of the data at both the population and subpopulation level. An expert coder can start with \texttt{agents\_to\_aggregate()} to accomplish exploratory data analysis (EDA) in many ways. \pkg{EpiCompare} also includes tools that allow a novice coder to rapidly explore data, provided there are three unique epidemiological states (as in the SIR model). Building on the \pkg{ggplot2} and \pkg{ggtern} packages, \pkg{EpiCompare}'s \texttt{geom\_aggregate()} provides a way to explore how different subpopulations experience an epidemic \citep{Wickham2016, Hamilton2018}. The function \texttt{geom\_aggregate()} holistically examines aggregate-level information across different subpopulations by visualizing each subpopulation's epidemic trajectory in the three-dimensional state space. Visualization tools for three-state models were developed because SIR models are some of the most common and basic epidemic state-based models, and our three-dimensional simplex representation of these epidemics emphasizes a time-invariant representation of the data (for a refresher see Section \ref{sec:time-invariant}).
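For example, a compact sketch of this kind of subpopulation exploration (a fuller, faceted version appears in Section \ref{sec:tour}) maps the school-class column \code{CL} of the Hagelloch data to color, so that \code{geom_aggregate()} draws one time-invariant curve per class; the packages loaded above are assumed to be attached.

\begin{CodeChunk}
\begin{CodeInput}
R> hagelloch_raw %>%
+    ggplot(aes(y = tI, z = tR, color = CL)) +
+    geom_aggregate() +
+    coord_tern() +
+    labs(x = "S", y = "I", z = "R", color = "Class")
\end{CodeInput}
\end{CodeChunk}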
\textbf{Model fitting and simulations} After getting a sense of what a past or current epidemic looks like through EDA, the next step in the data analysis pipeline is often model fitting and/or simulation. While \pkg{EpiCompare} does not focus on fitting models to data, we do provide flexible functions for simulating basic discrete-time epidemic-state models. These functions simulate individual-level information based on practitioner-estimated transition rates between states and can be combined with \texttt{agents\_to\_aggregate()} to view the simulations through an aggregate lens. The function \texttt{simulate\_SIR\_agents()} simulates a basic SIR epidemic with user inputs for the number of simulations, the initial number in each state, the infection and recovery parameters \((\beta, \gamma)\), and the total number of discrete time steps. Beyond SIR models, the function \texttt{simulate\_agents()} takes as input a user-specified state-transition matrix and other epidemic parameters, allowing the user to create simulations for an outbreak with \textit{any} number of states and any number of transitions among them. This flexibility in states can also be used to reflect group-based dynamics. Both of these functions allow users to explore the space of models in an intuitive way without getting bogged down by too much mathematical detail. For consistency, we have made the output from \texttt{simulate\_agents()} and \texttt{simulate\_SIR\_agents()} compatible with \texttt{agents\_to\_aggregate()} so that aggregate information can easily be accessed.

\textbf{Post-processing} If practitioners wish to compare models to observations or even models to models, they need to post-process their models and simulations to disseminate the results in an easily digestible format. In \pkg{EpiCompare}, we provide (1) functions to standardize simulation and model output from external packages and (2) a function to transform standardized simulation and model output into a format amenable to time-invariant analysis.

Modeling and simulation output can be a very complex object, and as a result, a number of epidemic modeling \proglang{R} packages return a special class. These special classes often contain a plethora of information about residuals, model diagnostics, input parameters, and more. While incredibly useful, these special classes can be difficult for novice coders to handle. To this end, \pkg{EpiCompare} provides a series of fortify-style methods, called \code{fortify_aggregate()}, which transform output from infectious disease modeling and simulation packages like \pkg{pomp} and \pkg{EpiModel} into tidy-style data frames containing the total number of individuals in each state at a given time, for a given simulation. These fortify functions produce output consistent with that of \code{agents_to_aggregate()}. The standardized outputs can then be piped to summaries, tables, and plots.

Because epidemic data are stored in a temporal way, we also provide the function \code{filament_compression()}, which transforms temporally defined epidemics into their filamental representations. These filaments can then be fairly compared to one another or passed to the further time-invariant analysis tools described below.
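As a small sketch of this step (the full version appears in Section \ref{sec:tour}), suppose \code{sim_paths} is a hypothetical tidy data frame of aggregate paths with a simulation identifier \code{sim} and state counts \code{S}, \code{I}, and \code{R}; each path can then be compressed to, say, 20 equally spaced points:

\begin{CodeChunk}
\begin{CodeInput}
R> sim_paths %>%
+    group_by(sim) %>%
+    filament_compression(data_columns = c("S", "I", "R"),
+                         number_points = 20)
\end{CodeInput}
\end{CodeChunk}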
\textbf{Comparisons and assessment} The last step of the data analysis pipeline often ends with plots, tables, and summary statistics that are used to assess model performance and compare across models or simulations. In \pkg{EpiCompare} we provide a set of comparison and assessment tools for model and simulation results that extend beyond standard performance metrics (e.g.~mean squared error or AIC) and into the lens of time-invariant analysis. We have found these tools to be especially applicable when only one season or cycle of an epidemic has occurred or is the object of interest.

The first set of tools concerns the creation of prediction regions. We can create a prediction region from model simulations to examine whether the simulations capture the true epidemic trajectory. We do so in a time-invariant way, utilizing filamental representations of the model simulations and the true epidemic. For three-state epidemic models, we provide the \texttt{ggplot}/\texttt{ggtern} extension \texttt{geom\_prediction\_band()}, which creates a prediction region around the top \(1-\alpha\) proportion of the simulations. In this visual setting, comparing the prediction region to the true epidemic trajectory can be done by eye. In \pkg{EpiCompare}, we also provide prediction regions for epidemic models with more than three states. The functions \texttt{create\_convex\_hull\_structure()} and \texttt{create\_delta\_ball\_structure()} create different geometric representations of prediction regions for state-based models of any dimension. For both of these geometric structures, we provide a function to check whether a path is contained in the region (\texttt{contained()}).

We can also use these prediction regions to visually or mathematically compare how similar two sets of simulations are. \pkg{EpiCompare} provides the \texttt{hausdorff\_dist()} function to calculate the Hausdorff distance between prediction regions when visual comparison is not possible. We additionally provide functions to calculate the ``extremeness'' of a true epidemic trajectory compared to simulated epidemics via the equi-distance filamental trajectory representation mentioned in Section \ref{sec:beyond-r0-sir}. Specifically, we provide implementations of a few distance-based score functions that capture how ``reasonable'' an epidemic is relative to other epidemics, and these scores can be turned into an extremeness measure with \texttt{mean(sim\_scores\ \textgreater{}\ truth\_score)}. Functions like \texttt{distance\_pseudo\_density\_function()} calculate a pseudo-density estimate of the true epidemic relative to simulated ones, while \texttt{distance\_depth\_function()} and \texttt{local\_distance\_depth\_function()} provide depth scores that indicate how geometrically central an epidemic is relative to the simulations.
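The conversion from scores to an extremeness measure is deliberately simple. The following sketch uses hypothetical score vectors rather than a specific \pkg{EpiCompare} call; \code{truth_score} stands for the pseudo-density (or depth) of the observed epidemic and \code{sim_scores} for the corresponding scores of the simulated epidemics.

\begin{CodeChunk}
\begin{CodeInput}
R> ## hypothetical scores: larger values = more central / more "typical"
R> sim_scores  <- c(0.21, 0.35, 0.18, 0.40, 0.29)
R> truth_score <- 0.05
R> mean(sim_scores > truth_score)  ## here 1: every simulation scores higher
\end{CodeInput}
\end{CodeChunk}

A value near one indicates that nearly every simulation is scored as more typical than the observed epidemic, i.e., the observation is extreme relative to the model. Table \ref{tab:hags-extreme} in Section \ref{sec:tour} reports the complementary quantity, the proportion of simulations with a \textit{lower} estimated pseudo-density than the observed epidemic.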
\section[Tour]{A tour of \pkg{EpiCompare}}\label{sec:tour}

To conclude the paper, we demonstrate how \pkg{EpiCompare} streamlines the data analysis process with a case study of a measles outbreak in 1861-1862 Germany. With the help of \pkg{EpiCompare}, we can answer important questions such as: is an SIR model a good fit for the data, does the outbreak spread differently within the different school classes of the children, and can incorporating an underlying network structure enhance model fit? After introducing the data available for this outbreak, we show how \pkg{EpiCompare} can aid in each step of the data analysis pipeline (see Figure \ref{fig:pipeline2}) to answer these questions.

\hypertarget{background-for-1861-1862-measles-outbreak}{%
\subsection{Background for the 1861-1862 measles outbreak}\label{background-for-1861-1862-measles-outbreak}}

We analyze an outbreak of measles in the town of Hagelloch, Germany from 1861-1862, a data set organized by \cite{pfeilsticker1863}. The data were later made visible by \cite{oesterle1992} and made available in an \proglang{R} package by \cite{surveillance2017}. In this outbreak, 188 children were infected with measles over the course of three months. The data set includes a rich collection of features including day of measles rash, age, sex, school class, household and household location, and the alleged infector of each child. We show a subset of the data in Table \ref{tab:hags-people}. We are particularly interested in whether the SIR model is a good fit to the data, whether we can identify subgroups with differing infection behavior, and whether incorporating those subgroups into a network-based SIR model enhances model fit compared to the baseline SIR.

\begin{CodeChunk}
\begin{table}[!h]
\caption{\label{tab:hags-people}Subset of Hagelloch infection data. Features include the person ID, household ID (HH ID), age, sex, class level (Pre-K/1st/2nd), date of first symptoms, date of the appearance of the measles rash, and the alleged infector ID of the individual.}
\centering
\begin{tabular}[t]{rrlrllllr}
\toprule
ID & HH ID & Name & Age & Sex & Class & Symp. Start & Rash Date & Infector ID\\
\midrule
1 & 61 & Mueller & 7 & female & 1st class & 1861-11-21 & 1861-11-25 & 45\\
2 & 61 & Mueller & 6 & female & 1st class & 1861-11-23 & 1861-11-27 & 45\\
3 & 61 & Mueller & 4 & female & preschool & 1861-11-28 & 1861-12-02 & 172\\
4 & 62 & Seibold & 13 & male & 2nd class & 1861-11-27 & 1861-11-28 & 180\\
5 & 63 & Motzer & 8 & female & 1st class & 1861-11-22 & 1861-11-27 & 45\\
45 & 51 & Goehring & 7 & male & 1st class & 1861-11-11 & 1861-11-13 & 184\\
\bottomrule
\end{tabular}
\end{table}
\end{CodeChunk}

\hypertarget{pre-processing-and-eda}{%
\subsection{Pre-processing and EDA}\label{pre-processing-and-eda}}

We begin our analysis by examining and transforming the raw data (\code{hagelloch_raw}) to explore whether the SIR model is a good fit. The raw data are individual-level records and first need to be aggregated. By specifying the time of infection (\code{tI}) and the time of recovery (\code{tR}), the function \code{agents_to_aggregate()} calculates the number of susceptible, infectious, and recovered individuals at each time step. Once aggregated, we can plot the SIR values through a time-invariant lens using \pkg{ggplot2} and \pkg{ggtern} functions (as shown in Fig. \ref{fig:hag-tern-raw}) or with our custom \code{geom}, \code{geom_aggregate()}, which takes the raw agent data as input. This is shown in the code below.
\begin{CodeChunk}
\begin{CodeInput}
R> hagelloch_sir <- hagelloch_raw %>%
+    agents_to_aggregate(states = c(tI, tR),
+                        min_max_time = c(0, 55)) %>%
+    rename(time = t, S = X0, I = X1, R = X2)
R> 
R> ggplot(hagelloch_sir, aes(x = S, y = I, z = R)) +
+    coord_tern() +
+    geom_path() +
+    labs(x = "S", y = "I", z = "R",
+         title = "Time invariant view of Hagelloch measles outbreak") +
+    theme_sir(base_size = 24)
\end{CodeInput}

\begin{figure}[H]

{\centering \includegraphics{Figs/unnamed-chunk-4-1}

}

\caption{\label{fig:hag-tern-raw}Time-invariant view of the Hagelloch epidemic, where individuals are in the Susceptible, Infectious, or Recovered state. We see two peaks of infection (along the vertical axis).}\label{fig:unnamed-chunk-4}
\end{figure}
\end{CodeChunk}

In Figure \ref{fig:hag-tern-raw}, we can focus on the infections over time by examining the vertical axis. Specifically, we see two peaks of infection. This is interesting because the SIR model, which is sometimes used to model the spread of measles, generally has one defined peak of infection. We may wonder whether the two peaks in the observed data are due to random noise or whether a model more complex than the simple SIR is needed to adequately capture them.

With regard to our goal of identifying subgroups with different infection behavior, previous studies tell us that measles outbreaks are often associated with children within the same grade level, and we examine whether that is the case here. By combining the \pkg{ggplot2} function \code{facet_wrap()} with \code{geom_aggregate()} we can easily examine this scenario,

\begin{CodeChunk}
\begin{CodeInput}
R> hagelloch_raw %>%
+    ggplot(aes(y = tI, z = tR, color = CL)) +
+    geom_aggregate(size = 2) +
+    coord_tern() +
+    labs(x = "S", y = "I", z = "R",
+         color = "Class") +
+    scale_color_brewer(palette = "Dark2") +
+    facet_wrap(~CL)
\end{CodeInput}

\begin{figure}[H]

{\centering \includegraphics{Figs/unnamed-chunk-5-1}

}

\caption{\label{fig:tern-class-data}Time-invariant outbreak curves for the three class groups. The pre-school class has a distinct peak of infection, whereas the peak infection points for the other two classes are less well defined.}\label{fig:unnamed-chunk-5}
\end{figure}
\end{CodeChunk}

Immediately, in Fig. \ref{fig:tern-class-data}, we see that the time-invariant infection curve for the pre-school class differs from that of the 1st class. In the 1st class, we see about 95\% of the class infected while less than 10\% have recovered, which may be indicative of a super-spreading event. This suspicion is further supported by the fact that 26 of the 30 1st-class students were reportedly infected by the same individual. We now have some evidence that class structure may play a role in the spread of infection. We can examine this claim further with modeling and simulation.

\hypertarget{modeling-and-simulation}{%
\subsection{Modeling and simulation}\label{modeling-and-simulation}}

We now use modeling and simulation to informally test whether the baseline SIR model is a good fit for the data and whether incorporating a network-based structure dependent on the class of the children improves model fit. We first model the Hagelloch data with a baseline stochastic SIR model, which we refer to as the `simple SIR.'
In our full vignette (available online), we show how to fit this simple SIR model via maximum likelihood, a common approach for fitting parameters, and how to simulate from the model with those best-fit parameters. Our function \code{simulate_agents()} (or \code{simulate_SIR_agents()}) generates individual-level data according to discrete-time multinomial draws, which depend on the number of individuals in each state at the previous time step and a matrix of transition probabilities. For example, the code below generates 100 simulations of an outbreak of a disease with one initial infector in a population of \(n = 188\) individuals, a scenario analogous to the actual outbreak.

\begin{CodeChunk}
\begin{CodeInput}
R> trans_mat <- matrix(c("X0 * (1 - X1 * par1 / N)", "X0 * X1 * par1 / N", "0",
+                        "0", "X1 * (1 - par2)", "par2 * X1",
+                        "0", "0", "X2"), byrow = TRUE, nrow = 3)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> set.seed(2020)
R> 
R> best_params <- c("beta" = .36, "gamma" = .13)
R> ## This is the SIR representation
R> rownames(trans_mat) <- c("S", "I", "R")
R> init_vals <- c(187, 1, 0)
R> par_vals <- c(par1 = best_params[1], par2 = best_params[2])
R> max_T <- 55
R> n_sims <- 100
R> 
R> agents <- simulate_agents(trans_mat,
+                            init_vals,
+                            par_vals,
+                            max_T,
+                            n_sims,
+                            verbose = FALSE)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> agg_model <- agents %>% group_by(sim) %>%
+    agents_to_aggregate(states = c(I, R)) %>%
+    mutate(Type = "Simple SIR")
\end{CodeInput}
\end{CodeChunk}

The result of our simulation is the object \code{agents}, an 18800 \(\times\) 5 data frame, which details the time of entry into the \(S\), \(I\), and \(R\) states for a given simulation.

To fit a more complex SIR model with a network structure, we use the package \pkg{EpiModel} \citep{Jenness2018}. The code below sets up a network of individuals (which includes class as a variable) and then simulates infection and recovery over this network.

\begin{CodeChunk}
\begin{CodeInput}
R> library(EpiModel)
R> ## WARNING: Will take a minute or two
R> 
R> set.seed(42)
R> nw <- network.initialize(n = 188, directed = FALSE)
R> nw <- set.vertex.attribute(nw, "group", rep(0:2, each = 90, 30, 68))
R> formation <- ~edges + nodematch("group") + concurrent
R> target.stats <- c(200, 300, 200)
R> coef.diss <- dissolution_coefs(dissolution = ~offset(edges), duration = 5)
R> est1 <- netest(nw, formation, target.stats, coef.diss, edapprox = TRUE)
R> 
R> param <- param.net(inf.prob = 0.1, act.rate = 5, rec.rate = 0.1)
R> status.vector <- c(rep(0, 90), rep(0, 30), rep(0, 67), 1)
R> status.vector <- ifelse(status.vector == 1, "i", "s")
R> init <- init.net(status.vector = status.vector)
R> control <- control.net(type = "SIR", nsteps = 55,
+                         nsims = 100, epi.by = "group")
R> epimodel_sir <- netsim(est1, param, init, control)
\end{CodeInput}
\end{CodeChunk}

The output of this network model is \code{epimodel_sir}, an object of class \code{netsim}, which contains a plethora of modeling information. In the following section, we use the capabilities of \pkg{EpiCompare} to streamline the process of comparing the two models contained in the objects \code{agg_model} (the simple SIR) and \code{epimodel_sir} (the network SIR model).
\hypertarget{post-processing-and-comparison}{%
\subsection{Post-processing and comparison}\label{post-processing-and-comparison}}

Ultimately, we want to compare the models to the data and the models to one another to answer our questions of interest. The \pkg{EpiCompare} function \code{fortify_aggregate()} takes an object from a specialized class of modeling output (like those made by \code{netsim()}) and transforms it into a tidy-style data frame.

\begin{CodeChunk}
\begin{CodeInput}
R> fortified_net <- fortify_aggregate(epimodel_sir,
+                                     states = c("s.num", "i.num", "r.num")) %>%
+    mutate(Type = "EpiModel SIR",
+           sim = as.numeric(gsub("sim", "", sim)))
\end{CodeInput}
\end{CodeChunk}

With the two modeling objects in the same format, we can compare the models side by side. The results are shown in Figure \ref{fig:hag-simple-sir}, where a 90\% prediction region is estimated for each of the two models and the actual data are plotted as black dots. For the simple SIR model, we see that while the prediction region covers the data fairly well, it clearly misses the second peak of infection. This indicates that the simple SIR is not a good fit to our data, even after incorporating noise. We also see that the prediction region is very large, covering a large area of the ternary plot. Together, this indicates that the simple SIR produces a biased model with a large amount of variance. On the other hand, for the EpiModel network model, we see that the prediction region covers the data quite well and takes up less area than that of the simple SIR. With \pkg{EpiCompare}, we see that the model using the class structure is a better fit to the outbreak than the simple SIR model.

\begin{CodeChunk}
\begin{CodeInput}
R> both_models <- bind_rows(agg_model, fortified_net)
R> 
R> g <- ggplot() +
+    geom_prediction_band(data = both_models %>% filter(t != 0) %>%
+                           mutate(Type = factor(Type, levels = c("Simple SIR",
+                                                                 "EpiModel SIR"))),
+                         aes(x = X0, y = X1, z = X2,
+                             sim_group = sim, fill = Type),
+                         alpha = .5,
+                         conf_level = .90)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> g + geom_path(data = both_models %>% filter(t != 0) %>%
+                  mutate(Type = factor(Type, levels = c("Simple SIR",
+                                                        "EpiModel SIR"))),
+                aes(x = X0, y = X1, z = X2, group = paste(Type, sim)),
+                alpha = .3, col = "gray40") +
+    coord_tern() + theme_sir(base_size = 24) +
+    geom_point(data = hagelloch_sir,
+               aes(x = S, y = I, z = R), col = "black") +
+    labs(title = "Simple SIR model",
+         subtitle = "90% Prediction band and original data",
+         x = "S", y = "I", z = "R") +
+    scale_fill_manual(values = c("#006677", "#AA6600")) +
+    facet_wrap(~Type) +
+    theme(legend.position = "bottom")
\end{CodeInput}

\begin{figure}[H]

{\centering \includegraphics{Figs/unnamed-chunk-12-1}

}

\caption{\label{fig:hag-simple-sir} Original Hagelloch SIR data (black) along with the 90\% prediction bands and the simulation paths from the simple SIR and EpiModel SIR models.}\label{fig:unnamed-chunk-12}
\end{figure}
\end{CodeChunk}

Although the prediction region generated by the network model covers the observed data well visually, that does not mean that individual filaments generated from the network model are good fits to the observed data. We can further examine the model fits by incorporating these filaments into our visualization.
In Figure \ref{fig:hag-simple-sir} we show the individual filaments generated from the two sets of models as gray lines. Examining these, we see that the individual lines typically have only one defined peak, whereas the data appear to have two distinct peaks, a feature possibly caused by the suspected super-spreading event.

We can also examine these filaments quantitatively, using functions that compute distances between the simulated filaments and the observed data. In the code below, we transform the simulations to a more computationally friendly format with the function \code{filament_compression()}. Following that, we calculate the distance between the simulated filaments and the observed filament with the function \code{dist_matrix_innersq_direction()} and quantify how extreme the true epidemic is relative to those simulations with the function \code{compare_new_to_rest_via_distance()}. The estimated pseudo-density of the observed epidemic (relative to the simulations from either model) is lower than that of nearly every simulation (reported in Table \ref{tab:hags-extreme}). This indicates that neither of the two SIR models is a good fit to the data at the filament level.

\begin{CodeChunk}
\begin{CodeInput}
R> simple_sir <- both_models %>% filter(Type == "Simple SIR") %>%
+    rename(S = "X0", I = "X1", R = "X2") %>%
+    select(Type, sim, t, S, I, R)
R> 
R> hagelloch_sir2 <- hagelloch_sir %>%
+    rename(t = "time") %>%
+    mutate(Type = "true observation",
+           sim = 0) %>%
+    select(Type, sim, t, S, I, R)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> #-- after cleaning up and combining --
R> all_together_df <- rbind(simple_sir,
+                           hagelloch_sir2)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{table}[!h]
\caption{\label{tab:cif-all-together-df}Top and bottom 2 rows of \code{all_together_df}, combining the simulated epidemics and the true epidemic.}
\centering
\begin{tabular}[t]{lrrrrr}
\toprule
Type & sim & t & S & I & R\\
\midrule
Simple SIR & 1 & 0 & 188 & 0 & 0\\
Simple SIR & 1 & 1 & 187 & 1 & 0\\
true observation & 0 & 54 & 1 & 0 & 187\\
true observation & 0 & 55 & 1 & 0 & 187\\
\bottomrule
\end{tabular}
\end{table}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> compression_df <- all_together_df %>% group_by(Type, sim) %>%
+    filament_compression(data_columns = c("S", "I", "R"),
+                         number_points = 20)
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{CodeInput}
R> tdmat <- compression_df %>%
+    dist_matrix_innersq_direction(
+      position = c(1:length(compression_df))[
+        names(compression_df) %in% c("S", "I", "R")],
+      tdm_out = T)
R> 
R> simple_sir_true_obs_info <- tdmat %>%
+    compare_new_to_rest_via_distance(
+      new_name_id = data.frame(Type = "true observation", sim = 0),
+      distance_func = distance_psuedo_density_function,
+      sigma = "20%")
\end{CodeInput}
\end{CodeChunk}

\begin{CodeChunk}
\begin{table}[!h]
\caption{\label{tab:hags-extreme}Extremeness of the true epidemic, based on comparing pseudo-density estimates of the true curve with those of the simulated curves.}
\centering
\begin{tabular}[t]{l>{\raggedleft\arraybackslash}p{6cm}>{\raggedleft\arraybackslash}p{6cm}}
\toprule
Type & simulation-based estimated pseudo-density & proportion of simulations with lower estimated pseudo-density\\
\midrule
Simple SIR & 0.0036733 & 0.00\\
EpiModel SIR & 0.0118283 & 0.03\\
\bottomrule
\end{tabular}
\end{table}
\end{CodeChunk}
In conclusion, \pkg{EpiCompare} allows us to examine this outbreak at every step of the data analysis pipeline (see Figure \ref{fig:pipeline2}) in a streamlined fashion. With EDA, we saw evidence that class structure may be important in the spread of measles. We then compared a baseline simple SIR model to a more complicated SIR model that incorporated a network structure built on the class structure. Based on the prediction regions generated from these models, we saw that the network model fit the data better than the simple SIR model. However, when we examined the individual filaments generated by the network model, we found that the data are unlikely to have been generated from such a model. For further analysis, we would recommend looking into models that can more accurately capture super-spreading events, based on the observation that one child was allegedly responsible for nearly all of his classmates' infections. Overall, this analysis demonstrates how \pkg{EpiCompare} aids in the data analysis pipeline for both novice and expert practitioners and coders alike.
\begin{CodeChunk} \begin{CodeInput} R> both_models <- bind_rows(agg_model, fortified_net) R> R> R> g <- ggplot() + geom_prediction_band(data = both_models %>% filter(t != 0) %>% + mutate(Type = factor(Type, levels = c("Simple SIR", + "EpiModel SIR"))), + aes(x = X0, y = X1, z = X2, + sim_group = sim, fill = Type), + alpha = .5, + conf_level = .90) \end{CodeInput} \end{CodeChunk} \textcolor{violet}{[Ben says: In figure 8 I changed the order of the facets given we talk about the simple model first and its more like the "base" model. I think the title should be changed?]} \begin{CodeChunk} \begin{CodeInput} R> g + geom_path(data = both_models %>% filter(t != 0) %>% + mutate(Type = factor(Type, levels = c("Simple SIR", + "EpiModel SIR"))), + aes(x = X0, y = X1, z = X2, group = paste(Type, sim)), + alpha = .3, col = "gray40") + + coord_tern() + theme_sir(base_size = 24) + + geom_point(data = hagelloch_sir, + aes(x = S, y = I, z =R), col = "black") + + labs(title = "Simple SIR model", + subtitle = "90% Prediction band and original data", + x = "S", y = "I", z = "R") + + scale_fill_manual(values = c("#006677", "#AA6600")) + + facet_wrap(~Type) + + theme(legend.position = "bottom") \end{CodeInput} \begin{figure}[H] {\centering \includegraphics{Figs/unnamed-chunk-32-1} } \caption{\label{fig:hag-simple-sir} Original Hagelloch SIR data (black) along with 90\% prediction band and actual simulation paths from the Simple SIR and the EpiModel SIR models.}\label{fig:unnamed-chunk-32} \end{figure} \end{CodeChunk} However, both models are not a good fit to the filamental path as opposed to the individual points in \((S, I, R)\)-space. This can be\footnote{\textcolor{violet}{[Ben says: this is a very passive way to say such things. Try being more direct.]}} captured with the set of simulations both models predict (gray lines), which all generally have a single defined peak of infection whereas the data certainly looks like it has two distinct peaks, likely caused by our assumed super-spreader event. This observation is backed up\footnote{\textcolor{violet}{[Ben says: describe this?]}} by the below analysis that demonstrates that the estimated pseudo-density of the observed epidemic (relative to the simulations from either model) is much less likely then \textbf{any} of the simulations (reported in Table \ref{tab:hags-extreme})\footnote{\textcolor{orange}{Ben, do we want to add another sentence or two explaining the two columns in the table? The second one I think makes sense to me but not the first.}} In conclusion, \pkg{EpiCompare} makes it clear that, at a glance, 1) the EpiModel network model is a better fit than the Simple SIR model, and 2) the fit is only good at the \sout{geometric filamental level as opposed to the epidemic trajectory filamental level.} \textcolor{orange}{individual point level as opposed to the geometric filamental level.}\footnote{\textcolor{violet}{[Ben says: how would this look with the time plots? 
Do we add value here?]}} \begin{CodeChunk} \begin{CodeInput} R> #-- after cleaning up and combining -- R> all_together_df <- rbind(simple_sir, + hagelloch_sir2) \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{table}[!h] \caption{\label{tab:cif-all-together-df}Top and bottom 2 rows of \tt{all\_together\_df}\textnormal{, combining both simulated epidemics and the true epidemic.}} \centering \begin{tabular}[t]{lrrrrr} \toprule Type & sim & t & S & I & R\\ \midrule Simple SIR & 1 & 0 & 188 & 0 & 0\\ Simple SIR & 1 & 1 & 187 & 1 & 0\\ true observation & 0 & 54 & 1 & 0 & 187\\ true observation & 0 & 55 & 1 & 0 & 187\\ \bottomrule \end{tabular} \end{table} \end{CodeChunk} \begin{CodeChunk} \begin{CodeInput} R> compression_df <- all_together_df %>% group_by(Type, sim) %>% + filament_compression(data_columns = c("S","I","R"), + number_points = 20) \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{CodeInput} R> tdmat <- compression_df %>% + dist_matrix_innersq_direction( + position = c(1:length(compression_df))[ + names(compression_df) %in% c("S","I", "R")], + tdm_out = T) R> R> simple_sir_true_obs_info <- tdmat %>% + compare_new_to_rest_via_distance( + new_name_id = data.frame(Type = "true observation", sim = 0), + distance_func = distance_psuedo_density_function, + sigma = "20%") \end{CodeInput} \end{CodeChunk} \begin{CodeChunk} \begin{table}[!h] \caption{\label{tab:hags-extreme}The extremeness of the true simulations based on comparing pseudo-density estimates between true vs simulated curves} \centering \begin{tabular}[t]{l>{\raggedleft\arraybackslash}p{6cm}>{\raggedleft\arraybackslash}p{6cm}} \toprule Type & simulations-based estimated pseudo-density & proportion of simulations with lower estimated pseudo-density\\ \midrule Simple SIR & 0.0036733 & 0.00\\ EpiModel SIR & 0.0149686 & 0.02\\ \bottomrule \end{tabular} \end{table} \end{CodeChunk} \footnote{\textcolor{violet}{I think this paragraph captures some good goals, but I don't think we've done some of this. For example - we don't really highlight novice/expert usage, and we don't highlight side-by-side comparisons of models.}}Overall, \pkg{EpiCompare} aids in the data analysis pipeline for both novice and expert practitioners and coders alike. These tools encourage model and simulation exploration of many of the existing and well-supported packages that already exist, and side-by-side comparison thereof. Finally, we hope that practicioners will consider using time-invariant analysis when trying to assess and compare epidemics and epidemic models. \hypertarget{a.-appendix}{% \section*{A. Appendix}\label{a.-appendix}} \addcontentsline{toc}{section}{A. Appendix} \hypertarget{a.1-proof-of-theorem}{% \subsection*{\texorpdfstring{A.1 Proof of Theorem \ref{thm:sir-scale}}{A.1 Proof of Theorem }}\label{a.1-proof-of-theorem}} \addcontentsline{toc}{subsection}{A.1 Proof of Theorem \ref{thm:sir-scale}} \begin{proof}\label{proof:thm} \cite{Harko2014} provide an analytical solution for the Kermack and McKendrick equations (Eq. 
\eqref{eq:sir-ode}) by reparameterizing the ODEs so that $\mathcal{S}(u) = S(t)$, $\mathcal{I}(u) = I(t)$, and $\mathcal{R}(u) = R(t)$ for $0< u_T < 1$ with
\begin{align}\label{eq:harko-odes}
\mathcal{S}(u) &= S(0)u\\
\mathcal{I}(u) &= N - R(0) + NR_0^{-1}\log u - S(0)u \nonumber\\
\mathcal{R}(u) &= R(0) - NR_0^{-1} \log u, \nonumber
\end{align}
and $u$ and $t$ are related by the following integral,
\begin{align*}
t &= \int_{u}^1 \frac{N}{\beta \tau (N - R(0) + R_{0}^{-1} \log \tau - S(0)\tau)}d\tau \\
&= \int_{u}^1 \frac{1}{\beta f(S(0), R(0), N, R_0, \tau)} d \tau\\
&= \int_{u}^1 \frac{1}{\beta f(\tau)} d\tau,
\end{align*}
where we have made the denominator of the integral a function of $N$, the initial values, $R_0$, and $\tau$, which we further condense to $f(\tau)$ for brevity.

Then for a given $t$ we want to find $s$ such that $(S_1(t), I_1(t), R_1(t)) = (S_2(s), I_2(s), R_2(s))$. Equivalently, for a fixed $u$ we want to find $v$ such that $\mathcal{S}_1(u) = \mathcal{S}_2(v)$; the corresponding $t$ and $s$ are then given by
\begin{align*}
t & = \int_{u}^1 \frac{1}{\beta_1 f(\tau)} d\tau \\
s & = \int_{v}^1 \frac{1}{\beta_2 f(\tau)} d\tau.
\end{align*}
Note that since the equations in Eq. \eqref{eq:harko-odes} are functions of the initial values and $R_0$, we have $u = v$. We can then find a relation for $s$,
\begin{align*}
s & = \int_{u}^1 \frac{1}{\beta_2 f(\tau)} d\tau \\
& = \int_{u}^1 \frac{1}{a\beta_1 f(\tau)} d\tau \\
&= \frac{1}{a}\int_{u}^1 \frac{1}{\beta_1 f(\tau)} d\tau \\
&= \frac{1}{a}t.
\end{align*}
\end{proof}

\bibliography{EpiCompare.bib}

\end{document}
\chapter{Results}
\section{Problem 1: Simple network}
In order to use our forward propagation function $x_j=f(x_{i}*w_{ij})$ (from section 2.1), the given data (from section 1.1) were formulated as matrices, resulting in the following:
\begin{itemize}
\item The input vector $x_0=\left[ \begin{array}{rr} 0.7 & 0.5 \\ \end{array}\right]$
\item The weight matrix $w_{01}=\left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array}\right]$
\item The weight matrix $w_{12}=\left[ \begin{array}{rrr} 0.9 & 0.3 & 0.9\\ 0.1 & 0.2 & 0.4\\ \end{array}\right]$
\item The weight matrix $w_{23}=\left[ \begin{array}{rrr} 0.1 & 0.8 & 0.4\\ 0.5 & 0.1 & 0.6\\ 0.6 & 0.7 & 0.3\\ \end{array}\right]$
\item The weight matrix $w_{34}=\left[ \begin{array}{rrr} 0.5 & 0.7 & 0.3\\ \end{array}\right]$
\end{itemize}
Thus the final equation can be summarized as:\\
$\;\;\;\;\;f(\,f(\,f(\,f(x_0*w_{01})*w_{12})*w_{23})*w_{34})$\\\\
The output of the neural network is $0.7451673339899871$. \\\\
Alternatively, the input vector $x_0=\left[ \begin{array}{rr} 0.7 & 0.5 \\ \end{array}\right]$ was used, resulting in the value\\ $0.7453676512649436$.
\section{Problem 2: Backpropagation}
Using the given equations from section 1.2, the following gradients were calculated for part a:
\begin{itemize}
\item $\partial sigmoid = sigmoid*(1-sigmoid)$
\item $\frac {\partial cost}{\partial prediction} = -1 $
\item $\frac {\partial prediction}{\partial y} = \partial sigmoid(y)$
\item $\frac {\partial y}{\partial w_1}=x_1$
\item $\frac {\partial y}{\partial w_2}=x_2$
\item $\frac {\partial y}{\partial b}=1$
\item $\frac {\partial cost}{\partial w_1}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial w_1}$
\item $\frac {\partial cost}{\partial w_2}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial w_2}$
\item $\frac {\partial cost}{\partial b}=\frac {\partial cost}{\partial prediction}*\frac {\partial prediction}{\partial y}*\frac {\partial y}{\partial b}$
\end{itemize}
This results in the following weights (assuming a learning rate of $1$):
\begin{itemize}
\item $w_1 = w_1-\frac {\partial cost}{\partial w_1} = 1.0088313531066455$
\item $w_2 = w_2 - \frac {\partial cost}{\partial w_2}=1.0264940593199368$
\item $b = b - \frac {\partial cost}{\partial b}=2.017662706213291$
\end{itemize}
For part b, only the cost function (see section 1.2 b) is different, with its derivative being:
\begin{itemize}
\item $\frac {\partial cost}{\partial prediction} = -(z'-z)$
\end{itemize}
This results in the following weights (again assuming a learning rate of $1$):
\begin{itemize}
\item $w_1 = w_1-\frac {\partial cost}{\partial w_1} = 1.0001588425712256$
\item $w_2 = w_2 - \frac {\partial cost}{\partial w_2}=1.0004765277136765$
\item $b = b - \frac {\partial cost}{\partial b}=2.000317685142451$
\end{itemize}
\section{Problem 3: Artificial neural network}
For Problem 3 we defined a neural network with only one hidden layer, consisting of 11 neurons.
It was trained for 10,000 epochs with the following parameters:
\begin{itemize}
\item batch size $= 8000$
\item learning rate $= 1$
\end{itemize}
The resulting training curves are shown in Figure~\ref{problem4_imput_data}.
\begin{figure}[h]
\centering
\includegraphics[height=12cm]{img/problem3_curves.png}
\caption{Training curves for Problem 3}
\label{problem4_imput_data}
\end{figure}
Unfortunately, testing did not work out of the box and we were not able to fix the issue: \texttt{('CUDA error: an illegal memory access was encountered',)}

\section{Problem 4: Gradient Descent}
Using the given Python code and applying the gradient and weight calculation described in Section~\ref{ch:methods:sec:4} results in the line shown in Figure~\ref{problem4_result}. The original line has an $m$ of $3.30$ and a $c$ of $5.3$; the learned values are $3.28$ for $m$ and $5.27$ for $c$.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{img/problem4_result_160.png}
\caption{Result of gradient descent after 160 iterations}
\label{problem4_result_160}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{img/problem4_result.png}
\caption{Result of gradient descent}
\label{problem4_result}
\end{figure}
The loss is also plotted in Figure~\ref{problem4_result_loss}.
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{img/problem4_result_loss.png}
\caption{Loss of the gradient descent over the iterations}
\label{problem4_result_loss}
\end{figure}
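For reference, the update rule used in Problem 4 can be sketched in a few lines of Python. This is only an illustrative sketch assuming a mean squared error loss; the function name \texttt{fit\_line} and the learning rate and iteration count below are our own choices and not the exact values from the given code.

\begin{verbatim}
import numpy as np

def fit_line(x, y, lr=1e-4, iterations=2000):
    """Fit y = m*x + c by gradient descent on the mean squared error."""
    m, c = 0.0, 0.0
    n = len(x)
    for _ in range(iterations):
        y_pred = m * x + c
        grad_m = (-2.0 / n) * np.sum(x * (y - y_pred))  # dLoss/dm
        grad_c = (-2.0 / n) * np.sum(y - y_pred)        # dLoss/dc
        m -= lr * grad_m
        c -= lr * grad_c
    return m, c
\end{verbatim}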
\documentclass[12pt,letterpaper]{article} \usepackage{amsmath} \usepackage{gensymb} \usepackage{enumitem} \usepackage{graphicx} \usepackage[margin=1in]{geometry} \setlength{\parindent}{0pt} \title{Physics Club Practice Test 1} \usepackage[export]{adjustbox} \usepackage[none]{hyphenat} \usepackage{titlesec} \titlespacing{\section}{0pt}{4.0ex plus .2ex}{-3.3ex} \titlespacing{\paragraph}{0pt}{0pt}{1em} % \setenumerate[2]{label={\alph*.}} \usepackage{fancyhdr} \fancypagestyle{firstpage} { \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0.4pt} \lfoot{Physics Club} \cfoot{$F=ma$ Practice Test 1} \rfoot{\thepage} } \thispagestyle{firstpage} \pagestyle{fancy} \fancyhf{} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0.4pt} \lfoot{Physics Club} \cfoot{$F=ma$ Practice Test 1} \rfoot{\thepage} \begin{document} \section*{Physics Club: $F=ma$ Practice Test 1}\hfill Version 1.1 \vspace{-2.5pt} \begin{center} \textsc{25 Questions -- 75 Minutes} \end{center} \vspace{-5pt} Assume the acceleration due to gravity near the surface of the Earth $g = 10$ m/s$^2$. \smallskip Correct answers will be awarded one point; incorrect answers will result in a deduction of 1/4 point. There is no penalty for leaving an answer blank. \smallskip You may use a scientific calculator. Its memory must be cleared of data and programs. % \vspace{-7.5pt} \hrulefill \begin{enumerate} % --- F=ma Question --- % \item % A 0.3 kg apple falls from rest through a height of 50 cm onto a flat surface. Upon impact, the apple comes to rest in 0.2 s, and 5 cm$^2$ of the apple comes into contact with the surface during the impact. What is the average pressure exerted upon the apple during the impact? % \begin{enumerate} % \item 25 000 Pa % \item 19 000 Pa % \item 9500 Pa % \item 95 Pa % \item 4.7 Pa % \end{enumerate} % \end{enumerate} % --- F=ma Question --- % \textbf{The following information is used for questions 2 and 3.}\\ % Three blocks of identical mass are placed on a frictionless table as shown. The center block is at rest, whereas the other two blocks are moving directly toward it at identical speeds $v$. The center block is initially closer to the left block than the right one. All motion takes place along a single horizontal line. % --- F=ma Question --- % \begin{enumerate}[resume] % \item % Suppose that all collisions are instantaneous and perfectly elastic. After a long time, which of the following is true? % \begin{enumerate} % \item The center block is at rest somewhere to the left of its initial position. % \item The center block is at rest at its initial position. % \item The center block is moving to the left. % \item The center block is at rest somewhere to the right of its initial position. % \item The center block is moving to the right. % \end{enumerate} % --- F=ma Question --- % \item % Suppose, instead, that all collisions are instantaneous and perfectly inelastic. After a long time, which of the following is true? % \begin{enumerate} % \item The center block is at rest somewhere to the left of its initial position. % \item The center block is at rest at its initial position. % \item The center block is moving to the left. % \item The center block is at rest somewhere to the right of its initial position. % \item The center block is moving to the right. % \end{enumerate} % --- F=ma Question --- % \item % A spaceman of mass 80 kg is sitting in a spacecraft near the surface of the Earth. The spacecraft is accelerating downward at one-third of the acceleration due to gravity. 
What is the force on the spacecraft by the spaceman? % \begin{enumerate} % \item 4800 N % \item 4000 N % \item 3200 N % \item 800 N % \item 530 N % \end{enumerate} \item Starting from rest at time $t = 0$, a car moves in a straight line with an acceleration given by the accompanying graph. What is the speed of the car at $t = 2$ s? \begin{tabular}{l r} \begin{minipage}{0.575\textwidth} \begin{enumerate} \item 1.0 m/s \item 2.0 m/s \item 8.0 m/s \item 10.5 m/s \item 12.5 m/s \end{enumerate} \end{minipage} & \begin{minipage}{0.4\textwidth} \includegraphics[width=0.65\textwidth,center]{agraph.png} \end{minipage} \end{tabular} \end{enumerate} \textbf{The following information is used for questions 2 and 3.}\\ A flare is dropped from a plane flying over level ground at a velocity of 70 m/s in the horizontal direction. At the instant the flare is released, the plane begins to accelerate horizontally at 1.75 m/s$^2$. The flare takes 4.5 s to reach the ground. \begin{enumerate}[resume] \item Relative to a spot directly under the flare at release, the flare lands \begin{enumerate} \item directly on the spot. \item 333 m in front of the spot. \item 274 m in front of the spot. \item 280 m in front of the spot. \item 315 m in front of the spot. \end{enumerate} \item As seen by the pilot of the plane and measured relative to a spot directly under the plane when the flare lands, the flare lands \begin{enumerate} \item 333 m behind the plane. \item 18 m behind the plane. \item directly under the plane. \item 36 m in front of the plane. \item 315 m in front of the plane. \end{enumerate} \vfill \newpage \item A certain football quarterback can throw a football a maximum range of 76 meters on level ground. What is the highest point reached by the football when this maximum range is thrown? \begin{enumerate} \item 76 m \item 54 m \item 38 m \item 28 m \item 19 m \end{enumerate} \item A car of mass $m$ is slipping down a slope of inclination angle $\theta$ at a constant acceleration $a$. The coefficient of static friction between the wheels and the slope is $\mu$. What is the frictional force between the wheels and the slope? \begin{enumerate} \item $\mu mg\cos\theta$ \item $\mu mg$ \item $mg\left(\sin\theta-\mu\right)$ \item $m\left(g-a\right)$ \item $mg\sin\theta-ma$ \end{enumerate} \item Which solid vector in the accompanying figures best represents the acceleration of the pendulum mass at the intermediate point in its swing indicated? \includegraphics[width=0.9\textwidth,center]{swing.png} % \item % A force $F$ is used to hold two blocks of mass $m_1$ and $m_2$ on an incline as shown in the diagram. The plane makes an angle $\theta$ with the horizontal and $F$ is perpendicular to the plane. The coefficients of static friction between the plane and the block $m_2$ and between the two blocks are both equal to $\mu$ with $\mu < \tan\theta$. What is the minimum force $F$ necessary to keep both blocks at rest? % \begin{enumerate} % \item $\mu\left(m_1+m_2\right)$ % \item $\left(m_1+m_2\right)g\cos\theta$ % \item $\displaystyle m_1g\left(\frac{\sin\theta}{\mu}-\cos\theta\right)$ % \item $\displaystyle m_2g\left(\frac{\sin\theta}{\mu}-\cos\theta\right)$ % \item $\displaystyle \left(m_1+m_2\right)g\left(\frac{\sin\theta}{\mu}-\cos\theta\right)$ % \end{enumerate} \item A ball of mass $m$ is fastened to a string. The ball swings in a vertical circle of radius $R$ with the other end of the string held fixed. 
The difference between the string's tension at the bottom of the circle and at the top is \begin{enumerate} \item $8mg$. \item $6mg$. \item $4mg$. \item $2mg$. \item $mg$. \end{enumerate} % --- F=ma Question --- % \item % You are given a standard kilogram mass and a tuning fork that is calibrated in Hz. You are also provided with a complete collection of laboratory equipment, but none of it is calibrated in SI units. You do not know the values of any fundamental constants. Which of the following quantities could you measure in SI units? % \begin{enumerate} % \item The density of a block of ice % \item The acceleration due to gravity % \item The spring constant of a given spring % \item The speed of sound in the air % \item The air temperature in the room % \end{enumerate} % --- F=ma Question --- % \item % Satellite A of mass $2m$, B of mass $3m$, and C of mass $4m$ are in coplanar orbits around a planet as shown in the figure. Satellite A is in a circular orbit of radius $2R$ and C is in a circular orbit of radius $R$. Satellite B is in an elliptical orbit. The minimum distance between satellite B and the planet is $R$. The maximum distance between the satellite B and the planet is $2R$. The magnitudes of the angular momenta of the satellites as measured about the plant are $L_A$, $L_B$, and $L_C$. Which of the following statements is correct? % \begin{enumerate} % \item $L_C > L_A > L_B$ % \item $L_C > L_B > L_A$ % \item $L_B > L_C > L_A$ % \item $L_B > L_A > L_C$ % \item The relationship between the magnitudes is different at various instants in time. % \end{enumerate} % \end{enumerate} % --- F=ma Question --- % \textbf{The following information is used for questions 15 and 16.}\\ % Two stars orbit their common center of mass as shown in the diagram below. The masses of the two stars are $9M$ and $M$. The distance between the stars is $d$. % --- F=ma Question --- % \begin{enumerate}[resume] % \item % What is the value of the gravitational potential energy of the two star system? % \begin{enumerate} % \item $-GM^2$/$d$ % \item $-9GM^2$/$d$ % \item $-GM^2$/$d^2$ % \item $9GM^2$/$d$ % \item $-9GM^2$/$d^2$ % \end{enumerate} % --- F=ma Question --- % \item Determine the period of orbit for the star of mass $9M$. % \begin{enumerate} % \item $\displaystyle \pi\sqrt\frac{2d^3}{5GM}$ % \item $\displaystyle \frac{2\pi}{5}\sqrt\frac{d^3}{GM}$ % \item $\displaystyle \pi\sqrt\frac{2d^2}{5GM}$ % \item $\displaystyle \frac{2\pi}{3}\sqrt\frac{d^3}{GM}$ % \item $\displaystyle \pi\sqrt\frac{2d^3}{3GM}$ % \end{enumerate} % \item % As shown, a big box of mass $M$ is resting on a horizontal smooth floor. On the bottom of the box there is a small box also of mass $M$. The block is given an initial peed $v_0$ relative to the floor, and starts to bounce back and forth between the two walls of the box. Find the final speed of the box when the block has finally come to rest in the box. % \begin{enumerate} % \item 0 % \item $v_0$ % \item $v_0$/2 % \item $v_0$/3 % \item $v_0$/4 % \end{enumerate} \vfill \newpage \item Two objects of mass $m$ and $3m$ are placed at either end of a spring of spring constant $k$ and the whole system is placed on a horizontal frictionless surface. At what angular frequency $\omega$ does the system oscillate? 
\begin{enumerate} \item $\displaystyle \sqrt\frac{k}{m}$ \item $\displaystyle \sqrt\frac{3k}{m}$ \item $\displaystyle \sqrt\frac{3k}{2m}$ \item $\displaystyle \sqrt\frac{2k}{3m}$ \item $\displaystyle \sqrt\frac{4k}{3m}$ \end{enumerate} % \item % Two astronauts, A and B, both with mass of 60 kg, are moving along a straight line in the same direction in a ``weightless'' spaceship. Relative to the spaceship the speed of A is 2 m/s and that of B is 1 m/s. A is carrying a bag of mass 8 kg with him. To avoid collision with B, A throws the bag with a speed $v$ relative to the spaceship toward B and B catches it. Find the minimum value of $v$. % \begin{enumerate} % \item 7.8 m/s % \item 26.0 m/s % \item 14.0 m/s % \item 9.2 m/s % \item 5.5 m/s % \end{enumerate} % --- F=ma Question --- % \item % A mass is attached to an ideal spring. At time $t=0$ the mass is released from rest from a position a distance $x_0$ from the position where the spring is at its natural length; the period of the ensuing (one-dimensional) simple harmonic motion is $T$. At what time is the power delivered to the mass by the spring first a maximum? % \begin{enumerate} % \item $t=0$ % \item $t=T$/8 % \item $t=T$/4 % \item $t=3T$/8 % \item $t=T$/2 % \end{enumerate} \item Two uniform circular disks of the same material and thickness are adhered together at point N. Both disks lie in the same vertical plane. The rigid body is hinged at point P and it can rotate freely about P in the vertical plane. PO $\perp$ OM and ON $= 3$ NM, where point O and point M are the centers of the disks. When static equilibrium is reached, find the angle $\theta$ between PO and the vertical direction. \begin{tabular}{l r} \begin{minipage}{0.6\textwidth} \begin{enumerate} \item $16.7\degree$ \item $26.6\degree$ \item $30\degree$ \item $5.2\degree$ \item $7.6\degree$ \end{enumerate} \end{minipage} & \begin{minipage}{0.3\textwidth} \includegraphics[width=\textwidth,left]{disks.png} \end{minipage} \end{tabular} % \item % A uniform rectangular wood block of mass $M$, with length $a$ and height $b=3a$, rests on an incline as shown. The incline and the wood block have a coefficient of static friction $\mu_s$. The incline is moved upward from an angle of zero through and angle $\theta$. At some critical angle the block will either tip over or the block will tip over (and not slip) at the critical angle. % \begin{enumerate} % \item $\mu_s > 1$/3 % \item $\mu_s > 2$/2 % \item $\mu_s > 1$/2 % \item $\mu_s < 1$/3 % \item $\mu_s < 2$/3 % \end{enumerate} % --- F=ma Question --- % \item % Two disks are mounted on thin, lightweight rods oriented through their centers and normal to the disks. These axles are constrained to be vertical at all times, and the disks can pivot frictionlessly on the rods. The disks have identical thickness and are made of the same material, but have differing radii $r_1$ and $r_2$ with $r_1=3r_2$. The disks are given angular velocities of magnitudes $\omega_1$ and $\omega_2$, respectively, and brought into contact at their edges. After the disks interact via friction it is found that both disks come exactly to a halt. Which of the following must hold? Ignore effects associated with the vertical rods. 
% \begin{enumerate} % \item $27\omega_1 = \omega_2$ % \item $\omega_1 = 27\omega_2$ % \item $3\omega_1 = \omega_2$ % \item $\omega_1 = 3\omega_2$ % \item $9\omega_1 = \omega_2$ % \end{enumerate} % \end{enumerate} % --- F=ma Question --- % \textbf{The following information is used for questions 24 and 25.}\\ % A massless elastic cord (that obeys Hooke's Law) will break if the tension in the cord exceeds a maximum value $T_\text{max}$. One end of the cord is attached to a fixed point, the other is attached to an object of mass $2m$. If a second, smaller object of mass $m$ moving at an initial speed $v_0$ strikes the larger mass and the two stick together, the cord will stretch and break, but the final kinetic energy of the two masses will be zero. If instead the two collide with a perfectly elastic one-dimensional collision, the cord will still break, and the larger mass wil move off with a final speed of $v_f$ All motion occurs on a horizontal, frictionless surface. % --- F=ma Question --- % \begin{enumerate}[resume] % \item What is $v_f$/$v_0$? % \begin{enumerate} % \item 1/$\sqrt{2}$ % \item $\sqrt{11}$/10 % \item $\sqrt{10}$/6 % \item 1/$\sqrt{3}$ % \item $\sqrt{5/18}$ % \end{enumerate} % --- F=ma Question --- % \item % Find the ratio of the total kinetic energy of the system of two masses after the perfectly elastic collision and the cord has broken to the initial kinetic energy of the smaller mass prior to the collision. % \begin{enumerate} % \item 1/4 % \item 2/3 % \item 1/2 % \item 3/4 % \item 4/5 % \end{enumerate} % --- F=ma Question --- % \item % A person standing on the edge of a fire escape simultaneously launches two apples, one straight up with a speed of 6 m/s and the other straight down at the same speed. How far apart are the two apples 2.5 seconds after they were thrown, assuming that neither has hit the ground? % \begin{enumerate} % \item 14 m % \item 21 m % \item 30 m % \item 41 m % \item 54 m % \end{enumerate} % \item % A car is moving with constant speed $v=10$ m/s along a straight line. A ball is thrown out at a height of 15 m with initial speed $\sqrt{2}v$ when the car is just passing below. The angle above the horizontal at which the ball is thrown is such that the ball will hit the car. Find the horizontal distance the car travels from the time the ball is thrown to the time the car is hit. % \begin{enumerate} % \item 45 m % \item 40 m % \item 35 m % \item 30 m % \item 25 m % \end{enumerate} % --- F=ma Question --- % \item % A 2.0 kg mass undergoes an acceleration as shown below. How much work is done on the mass? % \begin{enumerate} % \item $-32$ J % \item $-16$ J % \item 5 J % \item 16 J % \item 32 J % \end{enumerate} % \item % As shown in the diagram, a man pushes forward on the compartment which is accelerating uniformly to the left relative to the ground. The man stays at rest relative to the compartment. Which of the following statements in respect to the ground reference frame is correct? % \begin{enumerate} % \item The man does positive work on the compartment. % \item The man does negative work on the compartment. % \item The man does zero work on the compartment. % \item It cannot be determined % \item Both (a) and (c) can be correct. % \end{enumerate} % --- F=ma Question --- % \item % A wooden block of mass $M$ is hung from a peg by a massless rope. A speeding bullet of mass $m$ and initial speed $v_0$ collides with the block at time $t=0$ and embeds in it. Let S be the system consisting of the block and bullet. 
Which quantities are conserved between $t=-8$ s and $t=+8$ s? % \begin{enumerate} % \item The total linear momentum of S. % \item The horizontal component of the linear momentum of S. % \item The mechanical energy of S. % \item The angular momentum of S as measured about a perpendicular axis through the peg. % \item None of the above are conserved. % \end{enumerate} % --- F=ma Question --- % \item % Lucy (mass 33.1 kg), Mary (mass 61.7 kg), and Henry (mass 24.3 kg) sit on a lightweight seesaw at evenly spaced 2.79 m intervals (in the order in which they are listed; Mary is between Lucy and Henry) so that the seesaw balances. Who exerts the most torque (in terms of magnitude) on the seesaw about the pivot point of the seesaw? % \begin{enumerate} % \item Henry % \item Lucy % \item Mary % \item They all exert the same torque. % \item There is not enough information to answer the question. % \end{enumerate} \item A uniform 2 kg cylinder rests on a laboratory trolley as shown. The coefficient of static friction between the cylinder and the trolley is 0.5. If the cylinder is 4 cm in diameter and 10 cm in height, what is the condition on the acceleration $a$ of the trolley to cause the cylinder to tip over? \begin{tabular}{l r} \begin{minipage}{0.6\textwidth} \begin{enumerate} \item $a > 2.5$ m/s$^2$ \item $a < 2.5$ m/s$^2$ \item $a > 5$ m/s$^2$ \item $a > 4$ m/s$^2$ \item The cylinder would tip over at any acceleration. \end{enumerate} \end{minipage} & \begin{minipage}{0.4\textwidth} \includegraphics[width=0.65\textwidth,center]{trolley.png} \end{minipage} \end{tabular} % --- F=ma Question --- % \item A flat uniform disk of mass 40 kg and radius 1 m rotates about an axis perpendicular to the plane of the disk and through the center of the disk. A net constant resistive torque acting on the disk between 0 and 3 seconds causes the angular velocity of the disk to vary as shown in the graph below. Find the average power dissipated due to the net resistive torque for the 3 seconds duration. % \begin{enumerate} % \item 50 W % \item 40 W % \item 30 W % \item 20 W % \item 10 W % \end{enumerate} \vfill \newpage \item Four masses $M$ are arranged at the vertices of a square of side length $a$. What is the gravitational potential energy of this arrangement? \begin{enumerate} \item $\displaystyle -\left(4+\sqrt{2}\right)\frac{GM^2}{a}$ \item $\displaystyle -\left(4-\sqrt{2}\right)\frac{GM^2}{a}$ \item $\displaystyle -4\frac{GM^2}{a}$ \item $\displaystyle -\left(2+\sqrt{2}\right)\frac{GM^2}{a}$ \item $\displaystyle -\left(2-\sqrt{2}\right)\frac{GM^2}{a}$ \end{enumerate} \item A particle of mass $m$ moving along the $x$-axis with speed $v$ collides with a particles of mass $2m$ initially at rest. After the collision, the first particle has come to rest, and the second particle has split into two equal-mass pieces that move at equal angle $\theta$ with the $x$-axis, as shown in the figure. Which of the following statements correctly describes the speeds of the two pieces? \begin{tabular}{l r} \begin{minipage}{0.5\textwidth} \begin{enumerate} \item Both pieces move with speed $v$. \item One of the pieces moves with speed $v$, the other moves with speed less than $v$. \item Both pieces move with speed $v$/2. \item One of the pieces moves with speed $v$/2, the other moves with speed greater than $v$/2. \item Both pieces move with speed greater than $v$/2. 
\end{enumerate} \end{minipage} & \begin{minipage}{0.5\textwidth} \includegraphics[width=0.75\textwidth,left]{collision.png} \end{minipage} \end{tabular} \item A spacecraft is orbiting on the surface of an unknown planet. In order to determine the density of the planet, which one of the following should be measured? \begin{enumerate} \item Radius of the planet \item The volume of the planet \item The moving speed of the spacecraft \item The period of the motion \item None of the above measurements would determine the density of the planet. \end{enumerate} \end{enumerate} \vfill \newpage \textbf{The following information is used for questions 14 and 15.}\\ Two weights, both of mass $m$, are joined by a weightless spring of natural length $L$ and force constant $k$. They are placed on a smooth surface and at rest. One weight is given an impulse and acquires an initial velocity $v$ toward the other weight. \begin{enumerate}[resume] \item What is the speed of the center of mass of the weights-spring system? \begin{enumerate} \item $0.5v$ \item $\displaystyle 0.5v - \sqrt\frac{kL^2}{2m}$ \item $\displaystyle \sqrt\frac{kL^2}{2m} - 0.5v$ \item $v$ \item $0.5v - \sqrt\frac{kL^2}{m}$ \end{enumerate} \item What is the minimum distance between the two weights? \begin{enumerate} \item $\displaystyle L - \frac{v}{2}\sqrt\frac{m}{k}$ \item $\displaystyle L - v\sqrt\frac{m}{2k}$ \item $\displaystyle L - v\sqrt\frac{m}{k}$ \item $\displaystyle v\sqrt\frac{m}{k}$ \item $\displaystyle \frac{v}{2}\sqrt\frac{m}{k}$ \end{enumerate} \item A rod of mass $M$ and length $L$ mass moment of inertia $\left(1/12\right)ML^2$ about its center of mass. A sphere of mass $m$ and radius $R$ has a moment of inertia $\left(2/5\right)mR^2$ about its center of mass. A combined system is formed by centering the sphere at one end of the rod and placing an axis at the other. What is the moment of inertia $I$ of the combined system about the axis shown? \begin{tabular}{l r} \begin{minipage}{0.4\textwidth} \begin{enumerate} \item $\displaystyle \frac{1}{12}ML^2 + \frac{2}{5}mR^2$ \item $\displaystyle \frac{1}{12}ML^2 + \frac{2}{5}mR^2 + ML^2$ \item $\displaystyle \frac{1}{3}ML^2 + \frac{2}{5}mR^2 + mL^2$ \item $\displaystyle \frac{1}{12}ML^2 + mL^2$ \item $\displaystyle \frac{1}{3}ML^2 + mL^2$ \end{enumerate} \end{minipage} & \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth,left]{rod.png} \end{minipage} \end{tabular} \item The mass in the figure below slides on a frictionless surface. When the mass is pulled out, spring 1 is stretched to a distance $x_1$ from its equilibrium position and spring 2 is stretched a distance $x_2$. The spring constants are $k_1$ and $k_2$, respectively. Find the force pulling back on the mass. \begin{tabular}{l r} \begin{minipage}{0.35\textwidth} \begin{enumerate} \item $-k_2x_1$ \item $-k_1x_2$ \item $-\left(k_1x_1+k_2x_2\right)$ \item $\displaystyle -\frac{k_1+k_2}{2}\left(x_1+x_2\right)$ \item $\displaystyle -\frac{k_1k_2}{k_1+k_2}\left(x_1+x_2\right)$ \end{enumerate} \end{minipage} & \begin{minipage}{0.75\textwidth} \includegraphics[width=0.75\textwidth,left]{spring.png} \end{minipage} \end{tabular} \item An object with a mass of 3 kilograms is accelerated from rest. The graph at right shows the magnitude of the net force in newtons as a function of time in seconds. At time $t=4$ seconds the object's velocity would have been closest to which of the following? 
\begin{tabular}{l r} \begin{minipage}{0.525\textwidth} \begin{enumerate} \item 2.3 m/s \item 3.5 m/s \item 5.8 m/s \item 7.0 m/s \item 11.5 m/s \end{enumerate} \end{minipage} & \begin{minipage}{0.75\textwidth} \includegraphics[width=0.5\textwidth,left]{fgraph.png} \end{minipage} \end{tabular} % \item % As shown in the figure, AB $= 3.5$ m, AC $=3.0$ m, AD $= 0.5$ m. The two rods AC and BC weight 200 N each. The floor is frictionless. Find the tension in the rope. % \begin{enumerate} % \item 280 N % \item 500 N % \item 150 N % \item 302 N % \item 180 N % \end{enumerate} \item Three small objects, all of mass $M$, are released simultaneously from the top of three inclined planes of the same height $H$. The objects and incline angles of the corresponding incline planes are described as follows: \begin{enumerate}[label=\Roman*.] \item a cube of side $R$ from an incline plane with an incline angle of $30\degree$ \item a solid cylinder of radius $R$ from an incline plane with an incline angle of $45\degree$ \item a hollow cylinder of radius $R$ from an incline plane with an incline angle of $60\degree$ \end{enumerate} Assume that the cylinders roll down their corresponding incline planes without slipping and the cube slides down the plane without friction. Which objects reach the bottom of the plane first? \begin{enumerate} \item I \item II \item III \item I \& II \item II \& III \end{enumerate} \vfill \newpage \item Two monkeys of the same weight are holding tightly the two end of a rope with negligible mass. The rope passes through a smooth pulley, as shown in the figure. The two monkeys are initially at rest. Now the right monkey starts to climb up the rope at an average speed $v$ relative to the pulley to get closer to the left monkey. Let the initially vertical separation of the monkeys be $h$. Find the average velocity of the right monkey relative to the left monkey. \begin{tabular}{l r} \begin{minipage}{0.7\textwidth} \begin{enumerate} \item $2v$ upward \item $2v$ downward \item $v + \sqrt{2gh}$ upward \item $v + \sqrt{2gh}$ downward \item 0 \end{enumerate} \end{minipage} & \begin{minipage}{0.25\textwidth} \includegraphics[width=0.75\textwidth,left]{monkey2.png} \end{minipage} \end{tabular} \item Someone is using scissors to cut a wire of circular cross section and negligible weight. The wire slides in the direction away from the hinge until the angle between the scissors blades becomes $2\alpha$. The coefficient of kinetic friction between the blades and the wire is closest to \begin{tabular}{l r} \begin{minipage}{0.65\textwidth} \begin{enumerate} \item $\sqrt{1-\tan\alpha}$. \item $\cos{2\alpha}$. \item $\tan{2\alpha}$. \item $\tan\alpha$. \item $\sqrt{2\cos^2\alpha - 1}$. \end{enumerate} \end{minipage} & \begin{minipage}{0.25\textwidth} \includegraphics[width=\textwidth,left]{scissors.png} \end{minipage} \end{tabular} \item As shown in the figure, a wedge of mass $M$ is placed on a smooth inclined ramp that makes an angle $\theta$ with the horizontal. An object of mass $m$ rests on top of the wedge. The system is sliding down the ramp at acceleration $a$. Determine the apparent weight of the object $m$ as it slides down. Note that there is friction between the object and the wedge so that the object remains relatively at rest on the wedge. 
\begin{tabular}{l r} \begin{minipage}{0.6\textwidth} \begin{enumerate} \item $mg\cos\theta$ \item $mg\cos^2\theta$ \item $mg\sin\theta\cos\theta$ \item $mg\tan\theta$ \item $mg$ \end{enumerate} \end{minipage} & \begin{minipage}{0.3\textwidth} \includegraphics[width=\textwidth,left]{incline.png} \end{minipage} \end{tabular} \vfill \newpage \item A cup of water is placed in a car under constant acceleration to the left, as shown. Inside the water is a small air bubble. The following figures show five situations of the shape of the water surface and the direction of motion of the bubble as indicated by the arrow on the bubble. Which one is correct? \includegraphics[width=0.75\textwidth,center]{fluid.png} \item A uniform sphere of mass $m$ and radius $R$ has moment of inertia $\left(2/5\right)mR^2$ about its center of mass. A uniform spherical ball is observed to roll without slipping down a plane inclined at an angle $\theta$ with the horizontal. It follows that the coefficient of static friction between the ball and the plane $\mu_s$ must satisfy the relation: \begin{enumerate} \item $\displaystyle \mu_s \geq \frac{2}{7}\tan\theta$ \item $\displaystyle \mu_s = \frac{2}{7}\tan\theta$ \item $\displaystyle \mu_s \leq \frac{2}{7}\tan\theta$ \item $\displaystyle \mu_s \geq \frac{5}{7}\tan\theta$ \item $\displaystyle \mu_S \leq \frac{5}{7}\tan\theta$ \end{enumerate} % ------------------------------------------------------------------------------- % \item % A small weight of mass $2M$ is attached to the bottom of a hollow sphere of mass $M$ and radius $R$. Half of the sphere is submerged when floating in water. Find the period $T$ of the simple harmonic oscillation of the sphere in the vertical direction. % \begin{enumerate} % \item $\displaystyle 2\pi\sqrt\frac{R}{2g}$ % \item $\displaystyle 2\pi\sqrt\frac{3R}{2g}$ % \item $\displaystyle 2\pi\sqrt\frac{R}{g}$ % \item $\displaystyle 2\pi\sqrt\frac{2R}{3g}$ % \item $\displaystyle 2\pi\sqrt\frac{R}{3g}$ % \end{enumerate} % ------------------------------------------------------------------------------- \item A ball is released vertically from a height $H$ above an incline and makes several bounces. The angle of the incline is $\theta$. Assume the ball bounces elastically in each hit. Calculate the distance along the incline from the first hit to the fifth hit. \begin{tabular}{l r} \begin{minipage}{0.6\textwidth} \begin{enumerate} \item $4H\cos\theta$ \item $24H\cos\theta$ \item $48H\sin\theta$ \item $64H\sin\theta$ \item $80H\sin\theta$ \end{enumerate} \end{minipage} & \begin{minipage}{0.3\textwidth} \includegraphics[width=\textwidth,left]{bounce2.png} \end{minipage} \end{tabular} \end{enumerate} \end{document}
% BLAST Extension % % Basic description of the extension stage of the BLAST algorithm \subsection{BLAST Extension} \begin{frame} \frametitle{Extension} \begin{itemize} \item The best-scoring seeds are extended in each direction \item BLAST does not explore the complete search space, so a rule (heuristic) to stop extension is needed \item Two-stage process: \begin{itemize} \item Extend, keeping alignment score, and \emph{drop-off} score \item When drop-of score reaches a threshold $X$, trim alignment back to top score \end{itemize} \end{itemize} \begin{center} \includegraphics[width=0.5\textwidth]{images/extension} \end{center} \end{frame} \begin{frame} \frametitle{Example} \begin{itemize} \item<1-> Consider two sentences (match=+1, mismatch=-1) \begin{itemize} \item \texttt{The quick brown fox jumps over the lazy dog.} \item \texttt{The quiet brown cat purrs when she sees him.} \end{itemize} \item<2-> Extend to the right from the seed \texttt{T} \begin{itemize} \item \texttt{The quic} \item \texttt{The quie} \item \texttt{123 4565 <- score} \item \texttt{000 0001 <- drop-off score} \end{itemize} \end{itemize} \end{frame} \begin{frame} \frametitle{Example} \begin{itemize} \item Consider two sentences (match=+1, mismatch=-1) \begin{itemize} \item \texttt{The quick brown fox jumps over the lazy dog.} \item \texttt{The quiet brown cat purrs when she sees him.} \end{itemize} \item Extend to drop-off threshold \begin{itemize} \item \texttt{The quick brown fox jump} \item \texttt{The quiet brown cat purr} \item \texttt{123 45654 56789 876 5654 <- score} \item \texttt{000 00012 10000 123 4345 <- drop-off score} \end{itemize} \end{itemize} \end{frame} \begin{frame} \frametitle{Example} \begin{itemize} \item Consider two sentences (match=+1, mismatch=-1) \begin{itemize} \item \texttt{The quick brown fox jumps over the lazy dog.} \item \texttt{The quiet brown cat purrs when she sees him.} \end{itemize} \item Trim back from drop-off threshold to get optimal alignment \begin{itemize} \item \texttt{The quick brown} \item \texttt{The quiet brown} \item \texttt{123 45654 56789 <- score} \item \texttt{000 00012 10000 <- drop-off score} \end{itemize} \end{itemize} \end{frame} \begin{frame} \frametitle{Notes on implementation} \begin{itemize} % \item This example represents ungapped BLAST; gapped BLAST is more similar to dynamic programming \item $X$ controls termination of alignment extension, but dependent on: \begin{itemize} \item substitution matrix \item gap opening and extension parameters \end{itemize} \end{itemize} \begin{center} \includegraphics[width=0.5\textwidth]{images/extension} \end{center} \end{frame}
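% Illustrative addition (not part of the original slides): a minimal Python
% sketch of the two-stage extension rule described above. The function name,
% the +1/-1 scores and the parameter names are assumptions for illustration only.
\begin{frame}[fragile]
\frametitle{Extension heuristic as code (illustrative sketch)}
A minimal sketch of the two-stage rule, assuming match $= +1$ and mismatch $= -1$:
{\scriptsize
\begin{verbatim}
def extend_right(query, subject, start, x_drop):
    score, best_score, best_end = 0, 0, start
    for i in range(start, min(len(query), len(subject))):
        score += 1 if query[i] == subject[i] else -1
        if score > best_score:               # keep the alignment score...
            best_score, best_end = score, i
        if best_score - score >= x_drop:     # ...and drop-off score; stop at X
            break
    return best_score, best_end              # trim back to the top score
\end{verbatim}
}
\end{frame}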
% ********************************************************************** % Author: Ajahn Chah % Translator: % Title: Epilogue % First published: Taste of Freedom % Comment: Taken from a talk given in England to a Western Dhamma student in 1977 % Copyright: Permission granted by Wat Pah Nanachat to reprint for free distribution % ********************************************************************** \chapterFootnote{\textit{Note:} This talk has been previously published as `\textit{Epilogue}' in `\textit{A Taste of Freedom}'} \chapter{Just This Much} \index[general]{internal and external} \index[general]{practice!vs. study} \index[general]{six senses} \dropcaps{D}{o you know where} it will end? Or will you just keep on studying like this? Or is there an end to it? That's okay but it's external study, not internal study. For internal study you have to study these eyes, these ears, this nose, this tongue, this body and this mind. This is the real study. The study of books is just external study, it's really hard to get it finished. \index[general]{contact} \index[general]{defilements} \index[general]{completion} When the eye sees form what sort of thing happens? When ear, nose and tongue experience sounds, smells and tastes, what takes place? When the body and mind come into contact with touches and mental states, what reactions take place ? Are greed, aversion and delusion still there? Do we get lost in forms, sounds, smells, tastes, textures and moods? This is the internal study. It has a point of completion. \index[similes]{man raising cows!practice vs. study} If we study but don't practise we won't get any results. It's like a man who raises cows. In the morning he takes the cow out to pasture, in the evening he brings it back to its pen -- but he never drinks the cow's milk. Study is all right, but don't let it be like this. You should raise the cow and drink its milk too. You must study and practise as well to get the best results. \index[similes]{man raising chickens!practice vs. study} Here, I'll explain it further. It's like a man who raises chickens, but doesn't collect the eggs. All he gets is the chicken dung! This is what I tell the people who raise chickens back home. Watch out you don't become like that! This means we study the scriptures but we don't know how to let go of defilements, we don't know how to `push' greed, aversion and delusion from our mind. Study without practice, without this `giving up', brings no results. This is why I compare it to someone who raises chickens but doesn't collect the eggs, he just collects the dung. It's the same thing. \index[general]{restraint!body, speech and mind} \index[general]{goodness!development of} Because of this, the Buddha wanted us to study the scriptures, and then to give up evil actions through body, speech and mind; to develop goodness in our deeds, speech and thoughts. The real worth of mankind will come to fruition through our deeds, speech and thoughts. If we only talk, without acting accordingly, it's not yet complete. Or if we do good deeds but the mind is still not good, this is still not complete. The Buddha taught to develop goodness in body, speech and mind; to develop fine deeds, fine speech and fine thoughts. This is the treasure of mankind. The study and the practice must both be good. \index[general]{Noble Eightfold Path!as body, speech and mind} The \glsdisp{eightfold-path}{eightfold path} of the Buddha, the path of practice, has eight factors. 
These eight factors are nothing other than this very body: two eyes, two ears, two nostrils, one tongue and one body. This is the path. And the mind is the one who follows the path. Therefore both the study and the practice exist in our body, speech and mind. \index[general]{actions!body, speech and mind} \index[similes]{ladle in a pot!practice vs. study} Have you ever seen scriptures which teach about anything other than the body, the speech and the mind? The scriptures only teach about this, nothing else. Defilements are born right here. If you know them, they die right here. So you should understand that practice and study both exist right here. If we study just this much we can know everything. It's like our speech: to speak one word of truth is better than a lifetime of wrong speech. Do you understand? One who studies and doesn't practise is like a ladle in a soup pot. It's in the pot every day but it doesn't know the flavour of the soup. If you don't practise, even if you study till the day you die, you'll never know the taste of freedom!
{ "alphanum_fraction": 0.7485187623, "avg_line_length": 111.1463414634, "ext": "tex", "hexsha": "1f0f7384cad4d79b1c7dc97d5325ffc1372a3723", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-12-15T15:03:46.000Z", "max_forks_repo_forks_event_min_datetime": "2018-12-15T15:03:46.000Z", "max_forks_repo_head_hexsha": "b12017212627a50c5c635d43acf0739a325a4fa7", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "profound-labs/ajahn-chah-collected-singlevol", "max_forks_repo_path": "manuscript/tex/taste_epilogue.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b12017212627a50c5c635d43acf0739a325a4fa7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "profound-labs/ajahn-chah-collected-singlevol", "max_issues_repo_path": "manuscript/tex/taste_epilogue.tex", "max_line_length": 702, "max_stars_count": null, "max_stars_repo_head_hexsha": "b12017212627a50c5c635d43acf0739a325a4fa7", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "profound-labs/ajahn-chah-collected-singlevol", "max_stars_repo_path": "manuscript/tex/taste_epilogue.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1056, "size": 4557 }
%
% The MIT License (MIT)
%
% Copyright (c) 2017 Paul Batty
%
% Permission is hereby granted, free of charge, to any person obtaining a copy
% of this software and associated documentation files (the "Software"), to deal
% in the Software without restriction, including without limitation the rights
% to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
% copies of the Software, and to permit persons to whom the Software is
% furnished to do so, subject to the following conditions:
%
% The above copyright notice and this permission notice shall be included in
% all copies or substantial portions of the Software.
%
% THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
% IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
% FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
% AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
% LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
% OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
% THE SOFTWARE.
%

\section{Background}

The following chapter covers two areas. Firstly, it looks at where the idea of continuous deployment came from and how the field ended up where it is today. The second part looks at some of the tools, ideas and concepts that are needed when building or using such a system.

\subsection{History of Continuous deployment}

Continuous deployment is in a group of methodologies under the name of extreme programming (XP), which in turn is part of the Agile process \citep{XP}. The core principle of extreme programming is to be adaptive to change and to give quick feedback to everyone involved. Developers get feedback on the code, bugs and features. Clients get the features they need, and managers can make decisions about the direction of the project without bringing the whole system down. \citep{mf}
\\\\
This movement was started in March 1996 by Kent Beck \citep{XPH}, with continuous integration going further back to 1991 and Grady Booch \citep{CIF}. The main difference from Booch's design is that Booch placed a limit of one integration per day, whereas extreme programming favours many more.
\\\\
The core concept behind Booch's initial design is to avoid problems when a new release is integrated into an old system. It could achieve this goal via automated unit tests. Each test would run through a single public method and make sure that it is performing as it should. For example, if a method takes two numbers and returns their sum, a unit test would check that \textit{1+1} returns \textit{2}, and would try edge cases such as passing in letters. In total there would be a group of tests for every public function. \citep{unit_tests}
\\\\
After the developer has made a change to the code base they would run the tests; if they all passed then the code was OK to be checked in and used in the next release. This was enhanced with the idea of test-driven development, where the tests are written first, then the change.
\\\\
This all started to kick off around 1997 with continuous integration being placed inside of the extreme programming movement. This continued until 1999 through various books and publications by the movement, notably by Kent Beck. \citep{CIF}
\\\\
Up to this point continuous integration just consisted of developers writing unit tests and running them locally to make sure that everything passed.
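\\\\
To make the earlier description of a unit test concrete, the following is a minimal sketch of such a test. It is written in Python with the standard \texttt{unittest} module purely for illustration (the approach is tool-agnostic); the \texttt{add} function and the test class names are hypothetical, and the same structure applies to any xUnit-style framework.
\begin{verbatim}
import unittest

def add(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_sum_of_two_numbers(self):
        # The happy path: 1 + 1 should give 2.
        self.assertEqual(add(1, 1), 2)

    def test_letters_are_rejected(self):
        # An edge case: mixing a letter with a number
        # should raise an error rather than succeed.
        with self.assertRaises(TypeError):
            add("a", 1)

if __name__ == "__main__":
    unittest.main()
\end{verbatim}
Running the suite locally (for example with \texttt{python -m unittest}) either passes, signalling that the change is safe to check in, or fails and points the developer at the broken behaviour.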
When all the tests pass, the developer would then check the changes into the version control system (VCS). Other developers working on the same code base will then be able to get the latest code and know that it works.
\\\\
This started to change around 2001 with the release of CruiseControl \citep{cc}, because in the previous systems, if a developer did not run the unit tests, forgot to check in some files or had incorrectly formatted their code, it would work fine on their local set-up but nowhere else. Therefore, rather than leaving it up to the developer, it could be automated. This introduced the idea of build servers.
\\\\
A build server will sit there and, depending on the particular set-up and workflow of the project, take the changes, run the tests against them and then send out a report to the developer, or anyone who is interested. Now if the developer forgets something it will be caught before anyone else starts working on top of the changes.
\\\\
So far most of the work is performed by developers for developers, in order to ensure that the current state of the code base is always in a working condition. This continued until 2008, when Patrick Debois and Andrew Shafer met up and discussed bridging the gap between development, system administrators and other roles within the agile infrastructure. For example, the developer environment is different to the test environment, which in turn is different to the QA and production environments.
\\\\
This then sparked the next stage in the movement, the creation of devops. This in turn created a whole host of new tools such as Jenkins (Hudson), Puppet and Chef, just to name a few. These new tools made continuous integration easier than ever, and as they gained maturity they started to see a lot of use in industry. \citep{devop_history}
\\\\
As these tools started to gain popularity, and with the internet being widespread, there was a shift to not only test, but, as the code is in an always working condition, to push it out to the user so they can always have the latest version, features and so on. This goes under the name of continuous deployment. It allows bugs to be fixed almost as quickly as they are found, due to the reproduction of the user's environment back on the developer's workstation.
\\\\
Today, the transition over to continuous deployment is still being made, with more tools arriving, such as the idea of serverless servers and tools such as Docker, in order to reproduce environments faster and with better accuracy.
\\\\
In short, the automation of the pipeline has been around since 1991, using a variety of tools to get to today. In addition, the entire pipeline is beneficial to everyone and comes in three different levels: automated builds, continuous integration and continuous deployment, each being a step up from the one before it.

\subsection{Tools}

The previous part of this section named various tools that are used in the pipeline; they fall into three large categories: version control systems, testing and infrastructure. This next part of the paper will look at each of these categories in turn to understand where they fit into the big picture and how they are used.

\subsubsection{Version control systems}

\paragraph{What are version control systems}
A version control system (VCS), or version control, goes under two other names: revision control and source control.
A VCS has a simple premise: to manage the changes performed on a document. For example, every time a document is edited, the changes are stored alongside the timestamp, user and other metadata. Then, if at some point down the line a user wanted to see who made a change, or undo a change because it broke the system, they should be able to do so easily.
\\\\
In short, it is a system to manage a document's versions over long periods of time, even when the system is closed, restarted or moved. This lays the foundations for more complex operations.

\paragraph{Merging and pull requests}
As the changes are stored, comparisons of different versions can be made, allowing them to be merged together. If \textit{user A} spots an error in a document that the owner has not fixed, they can show the changes to the owner; if the owner agrees and likes the changes they can merge them in. This applies the changes from \textit{user A} into the main document. This is also known as a pull request.
\\\\
If the owner makes a change to the document they do not need to send a pull request, as they own the original, and therefore can merge right away. This merging is also called checking in, as the person is checking in their changes to the VCS.
\\\\
The VCS, however, does not record changes to just a single document, but to everything inside of a folder. Therefore it will track adding new files, deleting files and renaming files. Each VCS handles this slightly differently, which is irrelevant to this paper. The folder, in the case of software development, will be the project root.
\\\\
If a VCS is tracking all files, then when checking in changes it would be a waste to do this per file; instead changes are grouped together under a commit. This commit is then checked in in one go. Rather than a single file at a time, this allows relevant changes to be grouped together, creating a nicer timeline / history for the project.

\paragraph{Branching}
VCSs have one more main function to cover in respect to this paper, called branching. Similar to a tree, VCSs will have a main or master branch called the trunk. A branch is similar to a workspace, and represents the changes on that workspace that got it to its current point.
\\\\
Branching in the VCS creates a copy of the current branch, allowing it to take off in a different direction. Therefore development can go in two different directions without the issues of people working on the same file. Then, if needed, the branch can be merged back. This is better represented graphically, as seen below in figure \ref{fig:vcs_branching}:

\begin{figure}[H]
\centering
\includegraphics[scale=0.30]{images/branching.jpg}
\caption{VCS Branching}
\label{fig:vcs_branching}
\end{figure}

Each circle on the figure represents a commit (a group of changes). The figure also shows that branch one splits off from master, using it as its base, then is merged back in, while branch two uses branch one as its base but has not yet been merged. There are different workflows around this feature, covered more in depth in section \ref{sec:vcs}.

\paragraph{VCS Systems}
So far the paper has talked about a folder that exists; this folder goes under the name of a repository. The next part will look at where the repository is located, such that multiple people or a single person can work on the project taking advantage of VCSs, and how the approaches differ. There are two main ways that this is achieved; both use a client server architecture.
\\\\
The first has the server contain the repository; the developer then creates a local copy of the repository according to the branch they are on. They then work on the local copy, editing files; however, anything else, such as creating a branch, merging or checking out files, is performed by the server, therefore a connection is required.
\\\\
The second way is distributed and has the developer create a local repository that mimics the server version, so that everything can be performed locally. When a connection to the server is available they can push their changes onto the server repository. This can be seen in figure \ref{fig:vcs_systems}:

\begin{figure}[H]
\centering
\includegraphics[scale=0.30]{images/systems.jpg}
\caption{VCS systems modified from \cite{VCSSYSTEMS}}
\label{fig:vcs_systems}
\end{figure}

There are a lot of VCSs out there; however, the most popular are Git, TFS and Subversion (\cite{vcspop}). Git follows the distributed system, whereas TFS and Subversion use the repository server system.
\\\\
There are a lot more intricate details to VCSs, however they will not be covered in this paper.

\subsubsection{Testing}

Testing tools refers to the frameworks used to create the mentioned unit tests and other forms of tests. In line with this paper, the frameworks will be based around code testing code so it can be automated, rather than testing of a manual nature.

\paragraph{Unit Testing}
Unit tests are generally the first form of tests that are run on a project, as they are fast, lightweight and give good coverage of the system. If the unit tests pass then the developer should be safe in knowing that the program is in a baseline state of working order.
\\\\
As mentioned earlier, unit tests test all public functions to make sure that each one can handle invalid and valid input correctly. The individual tests are then packed into test suites, allowing the developer to run only the tests relevant to the change. This also helps with test organisation, as there are normally as many if not more lines of code in the tests as in the project itself.
\\\\
The most popular testing framework is the \textit{xUnit} family, where \textit{x} is the first few letters of the language of choice, for example, JUnit for Java, CUnit for C and CPPUnit for C++. The \textit{xUnit} family is upheld as the standard for unit testing.
\\\\
Before going any further, a quick tangent must be made into build systems. A build system takes the project and produces the final output that can then be placed where required. The build system in a basic set-up is a script that will call commands to build the project. This may involve copying files, deleting, renaming or checking that the environment has what it needs.
\\\\
Therefore, when building the project, the build system can build the test suites as well. In a full extreme programming set-up these tests will be run automatically when the project is built. However, this does not prevent the tests from being run without re-building, built separately or not run at all. This is covered again later in the paper, in section \ref{sec:testing}.

\paragraph{Integration and Acceptance tests}
There are other forms of testing that can be performed on a project, such as usability, regression, and security. However, the main focus will be on integration and acceptance testing, with the rest earning a brief mention later on (section \ref{sec:testing}).
\\\\
Unit tests test each method individually to make sure that it is performing up to scratch; however, unseen side effects may occur when these parts are put together. For example, given a phone case, tests that could be performed might check that it can hold an object of a certain size, that the object does not fall out, or that it will not break when dropped. But when the phone is placed inside the case, it is the wrong size. This is integration testing: making sure each part, while tested individually, works as a whole.
\\\\
These tests are the next step up from unit tests, as in order to function they will need to exercise some of the running functionality, whereas unit tests will avoid this at all costs. For example, integration tests may require the use of a database, whereas a unit test may fake a database just to see if the correct command is being sent.
\\\\
Similarly to unit tests, and as will be a theme throughout, there are frameworks that are designed to support and run integration tests, with varying levels of sophistication.
\\\\
Following on from integration tests are acceptance tests. Acceptance tests are performed on a fully set up system, to decide whether the system is acceptable for use by the users. This will include making sure that the system has all of the functionality available, performs well and completes each task successfully, from the user's perspective.
\\\\
Acceptance tests will normally be performed through the user interface (UI) that a normal user would go through. For example, for a web application the user would use their browser, whereas for a command line application the testing would go through the command line.
\\\\
For web applications there are frameworks such as Selenium that allow the tests to interact through a mock browser. Others will need to have a browser on the system and available to run rather than mocking. For desktop applications, TestFX can be used to test JavaFX applications, and similar tools exist for other user interfaces.
\\\\
Overall, integration and acceptance tests are designed to make sure the system can be put together and, when it is, will work as expected. This makes sure that when the users interact with the system it works and does not collapse because of a glaring issue such as the phone case being the wrong size.

\subsubsection{Infrastructure}

So far the paper has looked at where the code and other project files are stored, including some of the main types of tests that are run on a project. However, continuous deployment aims to automate everything, therefore a pipeline is needed. This is where infrastructure tools step in. They act as the conveyor belt moving the project between the steps.

\paragraph{Automation tool}
At the start of the pipeline there are standard tools such as Jenkins, TeamCity and Bamboo. Each provides a system to create steps that are then run in order. To start running the steps the tools provide several triggers.
\\\\
Triggers come in four main types. The first is manual, which requires someone to press a button to start the run. Sometimes this will require typing in arguments that are needed, such as the version number.
\\\\
Time based triggers come in two forms: when a certain time is met, such as 6PM, or on a recurring frequency, such as every two hours.
\\\\
The third type ties into the version control system and can trigger on a commit, every five commits, or when a commit is sent to a certain branch.
\\\\
The final type is not so much a way to start the entire process but rather a way to keep it continuous, allowing the pipeline to be triggered whenever another step is completed.
\\\\
Each step in the pipeline will often consist of calls to other programs, with the logs stored in a central place, keeping an eye on the entire process even across machines. The support for programs such as testing frameworks and other programming languages will vary from tool to tool. Most modern tools allow several pipelines to be running at once, and they do this in the same way that they track the pipeline across several machines.
\\\\
Firstly, the tool provides a central web server install; this is often the part that the user will interact with to set up the system. This will often require the creation of user accounts and access settings.
\\\\
Alongside the web server they provide agents (the name varies depending on the tool). An agent is a program that will run the steps. When installed, the agent will register itself with the central server, allowing it to take on jobs. This allows there to be several agents on different or the same machines, so several steps can be run at once.
\\\\
Some steps, however, may require a specific set-up; for example, running a Linux build of the project will not work on Windows. Therefore the central server will factor this into job assignment to make sure that the agent can run the job successfully. This may lead to the situation where the first three steps are run on one agent and the next three on another, due to the different requirements.

\paragraph{Automated Deployment}
Now that an automation tool is set up and can be triggered, the end of the pipeline requires deploying the project onto the servers. In order to successfully deploy, the servers in question need to be set up correctly. This will require making sure libraries and programs are installed, the operating system is correct and the configuration files are correct. Depending on the project this may require more or less. In addition to this, the project may be located on one server or on twenty thousand servers. Therefore the tool will need to be able to set up and manage all the servers. The server set-up may depend on the project version, user or location.
\\\\
This is where tools such as Chef, Puppet and Octopus come into use. There are a lot more available, but for this next part Chef will be used as an example due to its easier to understand terminology; however, they all work in the same way with different names.
\\\\
First of all, they run in a client server architecture where there is a central server and nodes that report to it and take jobs from it, as seen in the automation tools. Similarly, the server has an interface that allows management of the software.
\\\\
Once the central server is installed, the chefs are placed on the servers where the project will be deployed to. The configuration of how to handle the software needed is set up in ``recipes''. A recipe will be made for each of the programs used and needed by the project. As it is a recipe, there can be variations for different versions or other reasons.
\\\\
Once the recipes are made, they are grouped together in a ``cookbook''. The start of the cookbook will list the programs that need to be installed and in what order. Following the installation list will be the recipes. When a deployment is issued, the correct cookbook is passed to the chef(s), who then set up the server by running through the cookbook front to back.
\\\\
This then allows the user to set up cookbooks and recipes for each configuration needed, and with a single click they can start up a new server or deploy over an old one easily.
\\\\
This section has heavily focused on the deployment of servers and other web applications. For user-downloaded programs, such as those for desktop and mobile, rather than dealing with all the server set-up, the binaries and other needed files should be packaged and sent to the user. More on this in section \ref{sec:deployment}.

\paragraph{Deployment}
The previous section of the paper went over how deployments are achieved but not what is being deployed; this has been loosely referred to as the software, library or the project. This part aims to clear this up.
\\\\
When talking about servers there are two parts: firstly, the hardware that the server is made out of, and secondly, the part the project uses. Generally, when using a server, rather than running directly on the hardware via the OS, a virtual machine (VM) is installed, with several instances. The VM is what people are normally referring to when they say server. This will be the case throughout this paper.
\\\\
This brings about how to deploy. A VM with nothing extra will act like a normal server and allows the users to deploy how they like; this may involve using VM snapshots, VM resetting or re-installing the software. This comes down to how the software works and the standards around that.
\\\\
However, more recently other tools have come to light, such as Docker. Docker replaces the VMs, and so rather than creating a new Linux, Windows or other environment, it instead acts as another layer of abstraction, allowing the developers to deploy to Docker rather than to the OS. Then, providing the OS has Docker installed, the software should run on it without any problems. This also allows the sharing of libraries and other software. In order to understand this, the image in figure \ref{fig:docker-vm} shows the difference:

\begin{figure}[H]
\centering
\includegraphics[scale=0.45]{images/docker-vm.png}
\caption{Docker vs VMs from \cite{docker-vm}}
\label{fig:docker-vm}
\end{figure}

Not only this, but Docker utilises a form of VCS: each container, as they are called, is spawned from a Docker image. An image is the final product of the project and the libraries that it needs. These images can then be placed into Docker's VCS.
\\\\
Then, when it comes to deploying, just one image needs to be made, no matter the OS, pushed into the VCS and pulled onto the servers. This not only eases deployment but also bug reproduction, as the image is guaranteed to be the same no matter the machine, allowing developers to reproduce the production environment in a matter of minutes.
\\\\
Another new technology that is emerging is server-less servers. This does not mean there are no servers; servers are still used. However, it switches the developer from thinking about servers to thinking about tasks.
\\\\
Rather than building one project, it is instead built out of small tasks; each task is then spun up when requested and subsequently removed when the task is complete, meaning it only exists while the task is needed. This can be combined with Docker to spin up an image and then remove it afterwards. In this way updates are pushed as new images into the VCS.

\subsection{Background Final words}

This section has given a brief overview of where automated testing and continuous deployment have come from and where they sit at the time of writing this paper.
It has gone over the various tools to a certain level; there are certainly more complex and deeper details than those covered here. However, for the purposes of this paper they are not needed.
\\\\
The software named throughout this section is some of the most well known in the industry, but there is certainly more, each tool bringing its own advantages and disadvantages to the table.
\\\\
From this point on the paper will shift over from what each part is to how the parts fit together in the best way possible.
{ "alphanum_fraction": 0.7923092491, "avg_line_length": 116.5330188679, "ext": "tex", "hexsha": "3e359e6f93861709733bb4a519b7577c51a5ac48", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "85757cb6675aa7d63362710b1bce611d3e75c957", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Paulb23/continuous_deployment_dissertation", "max_forks_repo_path": "report/chapters/background.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "85757cb6675aa7d63362710b1bce611d3e75c957", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Paulb23/continuous_deployment_dissertation", "max_issues_repo_path": "report/chapters/background.tex", "max_line_length": 630, "max_stars_count": null, "max_stars_repo_head_hexsha": "85757cb6675aa7d63362710b1bce611d3e75c957", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Paulb23/continuous_deployment_dissertation", "max_stars_repo_path": "report/chapters/background.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5131, "size": 24705 }
\section{TxInfo Construction}
\label{sec:txinfo}

The context of $\PlutusVII$ needs to be adjusted to contain the new features. Additionally, the redeemers are provided to the context, but without the execution units budget.

\begin{figure*}[htb]
  \emph{Conversion Functions}
  \begin{align*}
    & \fun{toPlutusType}_{\Script} \in \Script \to \type{P.ScriptHash} \\
    & \fun{toPlutusType}_{\Script}~ s = \fun{hash}~s
    \nextdef
    & \fun{toPlutusType}_{\TxOut} \in \TxOut \to \type{P.TxOut} \\
    & \fun{toPlutusType}_{\TxOut}~ (a, v, d, s) = (a_P, v_P, d_P, s_P)
  \end{align*}
  \caption{TxInfo Constituent Type Translation Functions}
  \label{fig:txinfo-translations}
\end{figure*}

\begin{figure}
  \emph{Ledger Functions}
  %
  \begin{align*}
    &\fun{txInfo} : \Language \to \PParams \to \EpochInfo \to \SystemStart \to \UTxO \to \Tx \to \TxInfo \\
    &\fun{txInfo}~\PlutusVII~pp~ ei~ sysS~ utxo~tx = \\
    & ~~~~ (\{~(\var{txin}_P, \var{txout}_P) \mid \var{txin}\in\fun{spendInputs}~tx,~\var{txin}\mapsto\var{txout}\in\var{utxo}~\}, \\
    & ~~~~ \hldiff{\{~(\var{txin}_P, \var{txout}_P) \mid \var{txin}\in\fun{refInputs}~tx,~\var{txin}\mapsto\var{txout}\in\var{utxo}~\}}, \\
    & ~~~~ \{~\var{tout}_P\mid\var{tout}\in\fun{txouts}~{tx}~\} , \\
    & ~~~~ (\fun{inject}~(\fun{txfee}~{tx}))_P, \\
    & ~~~~ (\fun{mint}~{tx})_P , \\
    & ~~~~ [~ c_P \mid c \in \fun{txcerts}~{tx} ~] , \\
    & ~~~~ \{~(s_P,~c_P)\mid s\mapsto c \in \fun{txwdrls}~{tx}~\} , \\
    & ~~~~ \fun{transVITime} ~pp ~ei~ sysS~ (\fun{txvldt}~tx) , \\
    & ~~~~ \{~k_P\mid k \in \dom \fun{txwitsVKey}~{tx}~\} , \\
    & ~~~~ \hldiff{\{ (sp_P, d_P) \mid sp \mapsto (d, \_) \in \fun{indexedRdmrs}~tx \}}, \\
    & ~~~~ \{~(h_P,~d_P)\mid h\mapsto d \in \fun{txdats}~{tx}~\} , \\
    & ~~~~ (\fun{txid}~{tx})_P)
  \end{align*}
  \caption{Transaction Summarization Functions}
  \label{fig:txinfo-funcs}
\end{figure}
{ "alphanum_fraction": 0.5667851701, "avg_line_length": 45.7906976744, "ext": "tex", "hexsha": "006a7d292fe72a28eccdf8ec0e56fcbd29c06dae", "lang": "TeX", "max_forks_count": 29, "max_forks_repo_forks_event_max_datetime": "2022-03-29T12:10:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-03-25T11:13:24.000Z", "max_forks_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "RoyLL/cardano-ledger", "max_forks_repo_path": "eras/babbage/formal-spec/txinfo.tex", "max_issues_count": 545, "max_issues_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_issues_repo_issues_event_max_datetime": "2022-03-31T21:41:28.000Z", "max_issues_repo_issues_event_min_datetime": "2019-03-19T17:23:38.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "RoyLL/cardano-ledger", "max_issues_repo_path": "eras/babbage/formal-spec/txinfo.tex", "max_line_length": 143, "max_stars_count": 67, "max_stars_repo_head_hexsha": "c5f3e9db1c22af5d284885ddb1785f1bd7755c67", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "RoyLL/cardano-ledger", "max_stars_repo_path": "eras/babbage/formal-spec/txinfo.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-29T01:57:29.000Z", "max_stars_repo_stars_event_min_datetime": "2019-03-20T21:30:17.000Z", "num_tokens": 833, "size": 1969 }
\subsection{Energy Storage}

Energy storage is a large component of Integrated Energy Systems. The two principal energy storage models in the repository are Electric Battery Storage, characterized largely as Li-ion battery technology, and two-tank sensible heat thermal energy storage that uses Therminol-66 as the working fluid; additional storage models are described in the subsections that follow.

\subsubsection{Electric Battery Storage}

Electric Battery Storage, shown in Figure \ref{Top View Logical Battery}, is largely characterized as fast and expensive. Due to the speed with which battery storage systems operate, on the order of milliseconds, the battery within the HYBRID repository has been modeled as a simple logical battery system. The battery can both charge and discharge based upon the direction of electricity flow through the port. It is assumed to be a ``perfect'' battery and, due to the speed of the system, subcomponents have not been modeled, simply because they would operate faster than would be useful for the types of analysis utilized with the system. The battery has user-based inputs that control how fast or slow the system can charge and discharge as well as how much energy can be stored within the battery before it is considered full.

\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.4]{pics/Battery_Storage.png}
\caption{Top Level Depiction of the Logical Battery in the NHES package}
\label{Top View Logical Battery}
\end{figure}

\subsubsection{Two-Tank Thermal Energy Storage}

Sensible heat storage involves the heating of a solid or liquid without phase change and can be deconstructed into two operating modes: charging and discharging. A two-tank TES system, shown in Figure \ref{Top View Two Tank Sensible Storage}, is a common configuration for liquid sensible heat systems. In the charging mode cold fluid is pumped from a cold tank through an Intermediate Heat Exchanger (IHX), heated, and stored in a hot tank, while the opposite occurs in the discharge mode. Such systems have been successfully demonstrated in the solar energy field as a load management strategy.

The configuration of the TES system held within the repository involves an outer loop that interfaces with the energy manifold. Bypass steam is directed through an IHX and ultimately discharged to the main condenser of an integrated system. An inner loop containing a TES fluid consists of two large storage tanks along with several pumps to transport the TES fluid between the tanks, the IHX and a steam generator. Flow Bypass Valves (FBVs) are included in the discharge lines of both the ``hot'' and ``cold'' tanks to prevent deadheading the pumps when the Flow Control Valves (FCVs) are closed. Therminol-66 is chosen as the TES fluid as it is readily available, can be pumped at low temperatures, and offers thermal stability over the range -3°C–343°C, which covers the anticipated operating range of the light water reactor systems (203°C–260°C). Molten salts (e.g., 48 percent NaNO3 – 52 percent KNO3) were not considered, as the anticipated operating temperatures fall below their 222°C freezing temperature.

The TES system is designed to allow the power plant to run continuously at approximately 100 percent power over a wide range of operating conditions. During periods of excess capacity, bypass steam is directed to the TES unit through the auxiliary bypass valves, where it condenses on the shell side of the IHX. TES fluid is pumped from the cold tank to the hot tank through the tube side of the IHX at a rate sufficient to raise the temperature of the TES fluid to some set point.
The TES fluid is then stored in the hot tank at constant temperature. Condensate is collected in a hot well below the IHX and drains back to the main condenser, or can be used for some other low pressure application such as chilled water production, desalination or feed-water heating. The system is discharged during periods of peak demand, or when process steam is desired, by pumping the TES fluid from the hot tank through a boiler (steam generator) to the cold tank. This process steam can then be reintroduced into the power conversion cycle for electricity production or directed to some other application through the PCV. A nitrogen cover gas dictates the tank pressures during charging and discharging operation. Full details of the model and its use within integrated energy systems can be found in the reports \cite{2018ThermalStorage} and \cite{FrickThermalStorage}.

The model itself is coded in a non-conventional manner compared to the rest of the Modelica models: it is coded in an input/output sense rather than in a fluid-port, electric-port based modeling style. This is because the model was transferred over from a FORTRAN-style code rather than initially coded in Modelica. To modify the two-tank thermal storage system the user will need to look at each individual model within the charging mode and the discharge mode. Base components within the models are fully commented within the code. Like the HTSE, the thermal storage unit is finely tuned, and thus use outside of its current state will take a bit of work. To help with this, the thermal storage unit has been sized to be compatible with varying sizes of offtake from a power unit: one configuration is sized to take 20 percent of nominal steam from a standard 3400MWt Westinghouse plant, and one is designed for 5 percent offtake. Both are designed to provide energy back as a peaking unit. The peaking unit is held within the discharge side of the model and is assumed to have its own turbine or to send steam back to the low-pressure turbine. Explicit modeling of the coupling back with the low-pressure turbine has not been done. Future updating of the two-tank thermal storage unit to be consistent with the other models is planned.

\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.3]{pics/Sensible_Heat_System.png}
\caption{Top Level Depiction of the Two-Tank Sensible Heat Storage Unit in the NHES package}
\label{Top View Two Tank Sensible Storage}
\end{figure}

\subsubsection{Thermocline Packed Bed Thermal Energy Storage}

A thermocline storage system, shown in Figure \ref{Top View Thermocline}, stores heat via hot and cold fluid separated by a thin thermocline region that arises due to the density differential between the hot and cold fluid. Assuming low mixing via internal flow characteristics and structural design, this thermocline region can be kept relatively small in comparison with the size of the tank. Additionally, large buoyancy changes and low internal thermal conductivity are also extremely useful in maintaining a small relative thermocline thickness. To increase the cost-effective nature of these designs, it is common to fill the tank with a low-cost filler material, such as concrete or quartzite. These filler materials are cheap and have high density and high thermal conductivity. By using such material, a reduction in the amount of high cost thermal fluid can be achieved, thereby increasing the economic competitiveness of such designs. The thermocline system was modeled from a modified set of Schumann equations that were originally introduced in 1927 \cite{Schumann}.
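For orientation, one common textbook form of the Schumann equations (written here from the general packed-bed literature; the modified form implemented in HYBRID adds further terms and may use different notation) consists of separate energy balances for the fluid and the solid filler:
\[
\varepsilon \rho_f c_{p,f} \left( \frac{\partial T_f}{\partial t} + u \frac{\partial T_f}{\partial x} \right) = h_v \left( T_s - T_f \right)
\]
\[
(1 - \varepsilon) \rho_s c_{p,s} \frac{\partial T_s}{\partial t} = h_v \left( T_f - T_s \right)
\]
where $\varepsilon$ is the bed porosity, $T_f$ and $T_s$ are the fluid and solid temperatures, $u$ is the interstitial fluid velocity, and $h_v$ is the volumetric convective heat-transfer coefficient between the phases.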
The equation set governs energy conservation of fluid flow through porous media. His equation set has been widely adopted in the analysis of thermocline storage tanks. The modified equations adopt the convective heat-transfer coefficient of Gunn (1978) \cite{specialHeattransfer} to incorporate low-flow and no-flow conditions. Additionally, a conductive heat-transfer term was added for the heat conduction through the walls of the tank. Self-degradation of the thermocline in the axial direction is neglected due to its low relative magnitude during standard operation; this is a known limitation of the model during times of no flow.

\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.3]{pics/Thermocline_Test.png}
\caption{Top Level Depiction of the Single Tank Packed Bed Heat Storage Unit in the NHES package}
\label{Top View Thermocline}
\end{figure}

\subsubsection{Concrete Solid Media Storage}

Three different models exist for concrete solid media storage, shown in Figure \ref{CTES}. Concrete solid media storage uses inexpensive materials to charge and discharge heat from a fluid system. The models within the HYBRID system use HeatCrete as the concrete material, the properties of which are found in the literature. The initial development has been done with energy arbitrage for a Rankine system in mind. However, there are no limitations on the heat transfer fluid that can be used in conjunction with the concrete system.

Two design modalities exist across the three developed models: one is a single pipe model and the other is a dual pipe model. All models use simplified flow models: a single pressure and mass flow rate is imposed within a pipe (which allows for significantly improved performance at low and no flow rate conditions), leaving the dynamics of the system to be described primarily by the conservation of energy equation. Within the concrete model, heat conducts both radially and axially across a 2-D nodalization. All models calculate values for an average pipe, and the system behavior is then scaled by the number of pipes.

The single pipe model is so called because it only models one fluid pipe within a concrete system. This modeling choice imposes a restriction that heat transfer fluid only flows in one direction or the other (note that no error signals are raised by this; m\textunderscore flow\textunderscore internal = m\textunderscore flow\textunderscore ch-m\textunderscore flow\textunderscore dis). The behavior of this system lends itself better to batch energy applications. The power level during quenching (HTF quenching during charging or concrete quenching during discharging) is an order of magnitude higher than the steady power level. The pressure of the system is taken at the cold end, and defaults to the charging pressure. This model can operate in charging, discharging, or standby mode. The single pipe model operates with an established axial thermocline and a reversing radial thermal gradient depending on operation.

The dual pipe and dual pipe two-HTF models contain separate pipes for the charging and discharging flow. These models operate more steadily over long periods of time. Heat is always conducted within the CTES and between the concrete and the HTFs. The dual pipe CTES can operate in charging, discharging, standby, or thermal buffer (both charging and discharging) modes. The dual pipe system operates with mono-directional thermal gradients in both the axial and radial directions in all operating modes.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.8]{pics/CTES.png}
\caption{Top Level Depiction of Concrete Thermal Energy Storage System}
\label{CTES}
\end{figure}

%\input{Installation/clone.tex}

\subsubsection{Latent Heat Thermal Energy Storage}

Latent heat energy storage stores energy in materials undergoing phase change. These phase change materials (PCMs) can undergo melting and solidification, boiling and condensation, or hydration and dehydration. PCMs have high volumetric and mass-specific energy storage densities. They can theoretically operate isothermally, as the phase change occurs at a single temperature. The ideal PCM would have a large latent heat associated with its phase change, little to no density change between phases, indefinite cyclability between phases, and high thermal conductivity.

Research continues in PCMs to identify enhancements that would allow lab-scale experiments to be scaled up to demonstrate grid-scale applications. A significant challenge is enhancing the thermal conductivity of the PCMs. As the charging or discharging HTF drives the phase change condition at the heat transfer surface, propagating that process into the PCM mass depends on the thermal conductivity of the material. Conductivity enhancement via geometry specification, micro-encapsulation, and material impregnation has been investigated over time. The most common PCMs identified operate between the solid and liquid phases, where the density change is minimal. Because the melting or solidification front is key to moving heat between the HTF and the PCMs, many high-fidelity models have been developed across the research field.

The model available in the HYBRID repository is based on a paraffin wax experiment. The experiment used water as the HTF to melt and solidify paraffin wax. The original model was built in Star-CCM+ and then converted to Matlab. The Matlab version has subsequently been converted into a Modelica model within HYBRID. The generalized low-fidelity model has been built to accommodate new geometric designs, materials, and HTFs. The model is a two-dimensional radial conduction model across three materials: the HTF, a pipe wall, and the PCM, as shown in Figure \ref{PCM}. Within the fluid, the following equation set applies. The equations are also applied within the tube, but the velocity term disappears without internal mass movement. The term $\alpha$ is the thermal diffusivity, equal to the thermal conductivity divided by the product of density and specific heat capacity.
\[\frac{dT}{dt} = \alpha\nabla^{2}T - (\overrightarrow{v}\cdot\nabla T)\]
Effectively, the equation set is applied across the material interfaces, with an effective interface thermal conductivity
\[k_{eff}=\frac{2k_{1}k_{2}}{k_{1}+k_{2}}\]
Because there is phase change within the PCM, the equation is written in terms of enthalpy instead of temperature, and the PCM temperature is calculated as a function of the enthalpy ($h$).
\[\frac{dh}{dt} = \alpha\nabla^{2}h\]

\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.5]{pics/PCM.png}
\caption{PCM Nodalization diagram}
\label{PCM}
\end{figure}

The initial models built in Matlab and Star-CCM+ used constant fluid properties and an assumed inlet velocity. The Modelica model has a fluid inlet and exit to allow for replacement of the HTF in the model, and for outer conditions to dictate to the PCM model the fluid conditions at the interfaces.
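As a quick worked illustration of the interface conductivity above, consider round-number properties assumed here purely for the sketch (not values taken from the HYBRID model): a paraffin-like PCM with $k_{1} \approx 0.2~\mathrm{W/(m\,K)}$ against a steel tube wall with $k_{2} \approx 16~\mathrm{W/(m\,K)}$ gives
\[
k_{eff} = \frac{2(0.2)(16)}{0.2 + 16} \approx 0.4~\mathrm{W/(m\,K)},
\]
so the harmonic-mean interface conductivity is dominated by the low-conductivity PCM, which is consistent with the emphasis above on enhancing the thermal conductivity of the PCM itself.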
{ "alphanum_fraction": 0.8109417296, "avg_line_length": 146.2291666667, "ext": "tex", "hexsha": "6e3d493eb757f6dabd75ea156f8f6f80573d3e46", "lang": "TeX", "max_forks_count": 8, "max_forks_repo_forks_event_max_datetime": "2022-03-14T21:47:39.000Z", "max_forks_repo_forks_event_min_datetime": "2021-06-03T00:22:26.000Z", "max_forks_repo_head_hexsha": "d8a82bcdb9d0516a22205eed0de75f63764fa004", "max_forks_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_forks_repo_name": "klfrick2/HYBRID", "max_forks_repo_path": "doc/user_manual/Model_Description/ES.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "d8a82bcdb9d0516a22205eed0de75f63764fa004", "max_issues_repo_issues_event_max_datetime": "2022-02-15T19:05:55.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-16T14:33:34.000Z", "max_issues_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_issues_repo_name": "klfrick2/HYBRID", "max_issues_repo_path": "doc/user_manual/Model_Description/ES.tex", "max_line_length": 2942, "max_stars_count": 16, "max_stars_repo_head_hexsha": "d8a82bcdb9d0516a22205eed0de75f63764fa004", "max_stars_repo_licenses": [ "ECL-2.0", "Apache-2.0" ], "max_stars_repo_name": "klfrick2/HYBRID", "max_stars_repo_path": "doc/user_manual/Model_Description/ES.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-20T13:25:11.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-10T21:37:01.000Z", "num_tokens": 2927, "size": 14038 }
%%% Intro.tex ---
%%
%% Filename: Intro.tex
%% Description:
%% Author: Ola Leifler
%% Maintainer:
%% Created: Thu Oct 14 12:54:47 2010 (CEST)
%% Version: $Id$
%% Version:
%% Last-Updated: Thu May 19 14:12:31 2016 (+0200)
%% By: Ola Leifler
%% Update #: 5
%% URL:
%% Keywords:
%% Compatibility:
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%%% Commentary:
%%
%%
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%%% Change log:
%% Completed Langauge Reading MBS
%%
%% RCS $Log$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%
%%% Code:

\chapter{TLM-Based Co-Modeling Editor and Co-Simulation Framework}
\label{cha:tlm}

This chapter is based on the following paper:

\begin{itemize}
\item \textbf{Alachew Mengist}, Adeel Asghar, Adrian Pop, Peter Fritzson, Willi Braun, Alexander Siemers, and Dag Fritzson. \textbf{An Open-Source Graphical Composite Modeling Editor and Simulation Tool Based on FMI and TLM Co-Simulation.} In Proceedings of the 11th International Modelica Conference, Versailles, France, September 21-23, 2015.
\end{itemize}

\section{Introduction}
\label{sec:tlmintroduction}

Industrial products often consist of many components that have been developed by different suppliers using different modeling and simulation tools. Integrated modeling and simulation support is needed in order to integrate all the parts of a complex product model. TLM-based modeling and co-simulation is an important technique for modeling, connecting, and simulating mechanical systems. It is simple, numerically stable, and efficient. A number of tool-specific simulation models, such as Modelica models, Simulink models, Adams models, BEAST models, etc., have been successfully connected and simulated using TLM-based co-simulation. This has been successfully demonstrated by integrating and connecting several different simulation models, especially for mechanical applications. Such an integrated model, consisting of several model parts, is referred to here as a composite model, since it is composed of several sub-models. Another name that is used for such a model is meta-model, since it is a model of models.

In earlier work \cite{tlmalexander05,tlmsiemers06}, Modelica, with its object-oriented modeling capabilities and its standardized graphical notations, has been used to demonstrate the potential benefits of meta-modeling/composite modeling of mechanical systems using TLM. The availability of a general XML-based composite modeling language \cite{tlmalexander05} is an important element of our TLM-based modeling and co-simulation framework. However, modelers developing composite models are likely to take advantage of the additional availability of tools that assist them with respect to the composite modeling process (i.e., the process of creating and/or editing a composite model, here represented and stored as XML).

We introduce a graphical composite model editor that is an extension and specialization of the OpenModelica connection editor, OMEdit. In the context of this work, a composite model is composed of several sub-models, including the interconnections between these sub-models. The editor supports creating, viewing, and editing a composite model both in textual and graphical representation. The system supports simulation of composite models consisting of sub-models created using different tools. It is also integrated with the SKF TLM-based co-simulation framework.
\section{TLM-Based Co-simulation Framework}
\label{sec:tlmframework}

As men\-tioned, a gen\-eral frame\-work for com\-pos\-ite model-based co-simulation has pre\-vi\-ously been de\-signed and im\-ple\-mented in \cite{tlmalexander05}. The design goals for the simulation part of that framework were portability, computational efficiency, and simplicity of incorporation for additional simulation tools. It is also the framework that is used for the \acrshort{tlm}-based composite model co-simulation described in this chapter.

TLM composite model co-simulation is primarily handled by the central simulation engine of the framework, called the \acrshort{tlm} simulation manager. It is a stand-alone program that reads an \acrshort{xml} definition of the coupled simulation as defined in \cite{tlmalexander05}. It then starts the external model simulations and provides a communication bridge between the running simulations using the TLM \cite{tlmlakov06} method. The external models only communicate with the TLM simulation manager, which acts as a broker and performs communication and marshalling of information between the external models. The simulation manager sees every external model as a black box with one or more external interfaces. The information is then communicated between the external interfaces belonging to the different external models. Additionally, the simulation manager opens a network port for monitoring all communicated data.

The \acrshort{tlm} simulation monitor is another stand-alone program that connects to the \acrshort{tlm} simulation manager via the network port. The TLM simulation manager sends the co-simulation status and progress to the \acrshort{tlm} simulation monitor via TCP/IP. The simulation monitor receives the data and writes it to an \acrshort{xml} file.

\section{Composite Model XML Schema}
\label{sec:tlmschema}

The composite model \acrshort{xml} schema for validating the co-simulation composite model is designed according to its specification described in \cite{tlmalexander05}. A sample composite model \acrshort{xml} file consists of a \textit{Model} root element containing the \textit{SubModel}, \textit{Connection}, and \textit{SimulationParams} elements described in the remainder of this section.

In order to use graphical notations in the composite model editor, the composite model \acrshort{xml} file needs to describe annotations for each sub-model and the connections between them. We propose to extend the composite model specification by including the \textit{Annotation} element in the \textit{SubModel} and \textit{Connection} elements:

\begin{verbatim}
<Annotation Origin="{-50,54}" Extent="{-10,-10,10,10}"
            Rotation="0" Visible="true"/>
\end{verbatim}

The contents of our composite model \acrshort{xml} root element, namely \textit{Model}, are depicted in Figure \ref{fig:tlmerootelement}.

\begin{figure} [!h]
\includegraphics[width=\linewidth]{tlm_root_element.png}
\caption{The Model (root) element of the Composite Model Schema.}
\label{fig:tlmerootelement}
\end{figure}

The root element can contain a list of connected \textit{SubModels} and TLM \textit{Connections}, and also contains the \textit{SimulationParams} element. The root element has an attribute \textit{Name} representing the name of the composite model. The \textit{SimulationParams} element specifies the start time and end time for the co-simulation.

The \textit{SubModel} element, presented in Figure \ref{fig:tlmsubmodelelement}, represents the simulation model component that participates in the co-simulation.
\begin{figure} [!h] \includegraphics[width=\linewidth]{tlm_submodel_element.png} \caption{The SubModel element from the Composite Model Schema.} \label{fig:tlmsubmodelelement} \end{figure} The required attributes for a \textit{SubModel} are \textit{Name} of the sub-model, \textit{ModelFile} (file name of the submodel), and \textit{StartCommand} (the start method command to participate in the co-simulation). Each \textit{SubModel} also contains a list of interface points. \textit{InterfacePoint} elements are used to specify the \acrshort{tlm} interfaces of each simulation component (sub-model). The \textit{Connection} element of the composite model \acrshort{xml} schema is shown in Figure \ref{fig:tlmconnectionelement}. \begin{figure} [!h] \includegraphics[width=\linewidth]{tlm_connection_element.png} \caption{The Connection element from the Composite Model Schema.} \label{fig:tlmconnectionelement} \end{figure} The \textit{Connection} element defines connections between two connected interface points, that is, a connection between two TLM interfaces. Its attributes \textit{From} and \textit{To} define which interfaces of which submodels are connected. Other attributes of the \textit{Connection} element specify the delay and maximum step size. \section{Composite Model Graphical Editor} \label{sec:tlmeditor} One of the primary contributions of the composite model editor is interoperability in modeling and simulation. Our effort leverages OpenModelica for graphical composite model editing as well as SKF’s co-simulation framework for TLM-based co-simulation. The implementation is an extension of OMEdit, which is implemented in C++ using the Qt graphical user interface library. An overview of the different components that the graphical composite model editor relies on is shown in Figure \ref{fig:tlmeditorinteraction}. \begin{figure} [!h] \includegraphics[width=\linewidth]{tlm_editor_interaction.png} \caption{An overview of the interaction between the composite model (meta-model) graphic editor and the other components.} \label{fig:tlmeditorinteraction} \end{figure} The graphical composite model editor communicates with the OpenModelica compiler to retrieve the interface points for the external model and SKF’s co-simulation framework to run the \acrshort{tlm} simulation manager and simulation monitor. The full graphical functionality of the composite modeling process can be expressed in the following steps: \begin{enumerate} \item Import and add the external models to the composite model editor, \item Specify startup methods and interfaces of the external model, \item Build the composite models by connecting the external models, \item Set the co-simulation and TLM parameters in the composite model. \end{enumerate} In the graphic composite model editor, the modeling page area is used for visual composite modeling or text composite modeling. This allows users to create, modify, and delete sub-models in a user-friendly manner. Each tool component is described in the following subsections. \subsection{Visual Modeling} \label{sec:tlmvisual} Each composite model has two views: a Text view and a Diagram view. In the diagram view, each simulation model component (sub-model) of the \acrshort{tlm} co-simulation can be dragged and dropped from the library browser to this view, causing the sub-model to be automatically translated into a textual form by fetching the interface name for the \acrshort{tlm} based co-simulation. 
The user can complete the composite model (see Figure \ref{fig:tlmtest}) by graphically connecting components (sub-models). \begin{landscape} \begin{figure} \includegraphics[width=\linewidth,height=11cm]{test_tlm.png} \caption{A screenshot of visual composite modeling of a double pendulum.} \label{fig:tlmtest} \end{figure} \end{landscape} The test model (see Figure \ref{fig:tlmtest}) is a multibody system that consists of three sub-models: Two OpenModelica \textit{Shaft} sub-models (\textit{Shaft1} and \textit{Shaft2}) and one SKF/BEAST bearing sub-model that are assembled to build a \textit{double pendulum}. The SKF/BEAST bearing submodel is a simplified model with only three balls to speed up the simulation. \textit{Shaft1} is connected with a spherical joint to the world coordinate system. The end of \textit{Shaft1} is connected via a \acrshort{tlm} interface to the outer ring of the BEAST bearing model. The inner ring of the bearing model is connected via another \acrshort{tlm} interface to \textit{Shaft2}. Together they build the double pendulum with two shafts, one spherical OpenModelica joint, and one BEAST bearing. \subsection{Textual Modeling and Viewing} \label{sec:tlmtextual} The text view (see Figure \ref{fig:tlmtextview}) allows users to view the contents (sub-models, connections, and simulation parameters) of any loaded composite model. It also enables users to edit a composite model textually as part of the composite modeling construction process. To facilitate the process of textual composite modeling and to provide users with a starting point, the text view (see Figure \ref{fig:tlmtextview}) includes the composite model \acrshort{xml} schema elements and the default simulation parameters. \begin{landscape} \begin{figure} \includegraphics[width=\linewidth]{tlm_textview.png} \caption{A screenshot of textual composite modeling.} \label{fig:tlmtextview} \end{figure} \end{landscape} \subsection{Composite Model Validation} \label{sec:tlmvalidation} Since model validation is part of the composite modeling process, the composite model editor supports users by validating the composite model to ensure that it follows the structure and content rules specified in the composite model schema described in Section 4. In general, the composite model editor validation mechanism permits users to verify that: \begin{itemize} \item The basic structure of the elements and attributes in the composite model matches the composite model schema. \item All information required by the composite model schema is present in the composite model. \item The data conforms to the rules of the composite model schema. \end{itemize} \subsection{OpenModelica Runtime Enhancement} \label{sec:tlmruntime} To support \acrshort{tlm}-based co-simulation, the OpenModelica runtime has been enhanced. The added functionality supports single solver step simulation so that the executed simulation model can work together with the \acrshort{tlm} manager. New flags to enable this functionality in the simulation executable are now available: \begin{itemize} \item -noEquidistantOutputFrequency \item -noEquidistantOutputTime \end{itemize} The new flags control the output, e.g., the frequency of steps and the time increment. \clearpage \subsection{Communication with the SKF TLM-Based Co-Simulation Framework} \label{sec:tlmskf} The graphic composite model editor in OpenModelica provides a graphical user interface for co-simulation of composite models. 
It can be launched by clicking the \acrshort{tlm} co-simulation icon from the toolbar.
%%% , see
%%% Figure \ref{fig:tlmsetup}.
%%% \begin{figure} [!h]
%%% \centering
%%% \includegraphics[width=\linewidth]{tlm_setup.png}
%%% \caption{TLM co-simulation setup.}
%%% \label{fig:tlmsetup}
%%%\end{figure}
The editor runs the \acrshort{tlm} simulation manager and simulation monitor. The simulation manager reads the composite model from the editor, starts the co-simulation, and provides the communication bridge between the running simulations. Figure \ref{fig:tlmcosimulationprogress} shows the running status of the \acrshort{tlm} co-simulation.
\begin{figure} [!h]
\includegraphics[width=\linewidth]{tlm_cosimulation_progress.png}
\caption{TLM co-simulation progress.}
\label{fig:tlmcosimulationprogress}
\end{figure}
The simulation monitor communicates with the simulation manager and writes the status and progress of the co-simulation to a file. This file is read by the editor, which uses the data to show the co-simulation progress bar to the user. The editor also provides the means of reading the log files generated by the simulation manager and monitor.
\clearpage
During the post-processing stage, simulation results are collected and visualized in the OMEdit plotting perspective, as shown in Figure \ref{fig:tlmsimulationresults}.
\begin{figure} [!h]
\includegraphics[width=\linewidth]{tlm_simulation_results.png}
\caption{Results of TLM co-simulation.}
\label{fig:tlmsimulationresults}
\end{figure}
\subsection{Industrial Application of Composite Modeling with TLM Co-Simulation}
\label{sec:tlmapplication}
SKF has successfully used the \acrshort{tlm} co-simulation framework to simulate composite models. For example, Figure \ref{fig:tlmapplication} shows one such application with an MSC.ADAMS \cite{adams} car model containing an integrated SKF BEAST \cite{beast} hub-unit sub-model connected via \acrshort{tlm} connections.
\clearpage
\begin{figure} [!h]
\includegraphics[width=\linewidth]{tlm_application.png}
\caption{A composite model of an MSC.ADAMS car model with an integrated SKF BEAST hub-unit sub-model (green), connected via TLM connections for co-simulation.}
\label{fig:tlmapplication}
\end{figure}
\section{Summary}
\label{sec:tlmsummary}
In this chapter, we introduced a general open-source graphical editor and simulation tool for composite modeling, together with its integration with SKF’s TLM-based co-simulation framework. The graphical editor combines a number of features to support end-users in creating composite models and running co-simulations. These include adding, removing, and connecting components (sub-models) both textually and graphically, as well as integrated co-simulation and visualization of simulation results. The composite model editor supports external non-Modelica models represented in XML form (essentially black boxes with interfaces) inside the component tree, which can be used for composite model composition. A number of tool-specific simulation models, such as Modelica models, Simulink models, Adams models, BEAST models, etc., have been successfully connected and simulated using TLM-based co-simulation. A schema for validating composite models has been developed as part of this work.
\subsection*{Acknowledgements}
\label{sec:tlmAcknowledgements}
The work has been supported by Vinnova in the ITEA2 MODRIO project, by the EU in the INTO-CPS project, and by the Swedish Government in the ELLIIT project.
The open-source Modelica Consortium supports the OpenModelica work. The TLM-based co-simulation framework is provided by SKF. %\nocite{scigen} %We have included Paper \ref{art:scigen} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Intro.tex ends here %%% Local Variables: %%% mode: latex %%% TeX-master: "demothesis" %%% End:
\documentclass[aps,pra,notitlepage,amsmath,amssymb,letterpaper,12pt]{revtex4-1} \usepackage{amsthm} \usepackage{graphicx} % Above uses the Americal Physical Society template for Physical Review A % as a reasonable and fully-featured default template % Below define helpful commands to set up problem environments easily \newenvironment{problem}[2][Problem]{\begin{trivlist} \item[\hskip \labelsep {\bfseries #1}\hskip \labelsep {\bfseries #2.}]}{\end{trivlist}} \newenvironment{solution}{\begin{proof}[Solution]}{\end{proof}} % -------------------------------------------------------------- % Document Begins Here % -------------------------------------------------------------- \begin{document} \title{Classwork 01} \author{Sharon Kim, Kynan Barton, Kristalee Lio} \affiliation{CS 510, Schmid College of Science and Technology, Chapman University} \date{\today} \maketitle \section{Abstract} \noindent The derivative is one of the core concepts of calculus. After mastering derivatives, students can apply them to various fields such as statistics, engineering, economics, medicine, etc. This paper provides the first step in understanding derivatives by giving you the definition of a derivative in algebraic and graphical terms. A step by step example is also provided to illustrate how to use the definition of a derivative to calculate the derivative of a function. \section{The Derivative} \noindent \textbf{Definition.} Let \textbf{\textit{y = f(x)}} be a function. The derivative of f is the function whose value at \textbf{\textit{x}} is the limit provided this limit exists. \begin{align*} f'(x) &= \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}. \end{align*} \noindent If this limit exists for each \textbf{\textit{x}} in an open interval \textbf{\textit{I}}, then we say that \textit{f} is differentiable on \textbf{\textit{I}}. \bigskip \noindent \textbf{Example.} Find the derivative of the following function using the definition of the derivative. \begin{align*} f(x) &= 2x^2-16x+35 \end{align*} \noindent \textbf{Solution.} Plug this function into the definition of the derivative, and do some algebra. First plug the function into the definition of the derivative. Then multiply everything out. Notice that every term in the numerator that did not have an h in it cancelled out. We can now factor an h out of the numerator which we will cancel against the h in the denominator. After that, we can compute the limit. \begin{align*} f'(x) &= \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\ &= \lim_{h \to 0} \frac{2(x+h)^2 -16(x+h) +35 -(2x^2-16x+35)}{h} \\ &= \lim_{h \to 0} \frac{2x^2 +4xh +2h^2 -16x -16h +35 -2x^2+16x-35}{h}\\ &= \lim_{h \to 0} \frac{4xh +2h^2 -16h}{h} \\ &= \lim_{h \to 0} \frac{h(4x+2h-16)}{h} \\ &= \lim_{h \to 0} 4x+2h-16 \\ f'(x) &= 4x-16 \end{align*} \noindent \textbf{Figure.} \begin{figure}[h!] \includegraphics[width=0.6\textwidth]{derivativedefinition.JPG} \caption{} \label{fig:figlabel} \end{figure} \noindent Figure 1 shows a graphical representation of the definition of a derivative. f'(x) represents the slope of f(x) at an arbitrary point in it's domain. \section{References} \noindent Lawrence S. Husch. \textit{Husch: Definition of Derivative.} \\ URL: http://archives.math.utk.edu/visual.calculus/2/definition.12/ \\ \\ \noindent Paul Dawkins. 
\textit{Dawkins: The Definition of the Derivative.} \\ URL: http://tutorial.math.lamar.edu/Classes/CalcI/DefnOfDerivative.aspx \\ \\
\noindent \textit{Limits and Derivatives.} \\ URL: https://learnmathforfree.jimdo.com/calculus/limits-and-derivatives/ \\ \\
\noindent Jeff Cruzan. \textit{Cruzan: The Derivative.} \\ URL: http://www.drcruzan.com/TheDerivative.html
\end{document}
\documentclass[11pt,titlepage,dvipdfmx,twoside]{article} \linespread{1.1} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsmath} \usepackage{amsthm} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \usepackage{enumitem} \usepackage{geometry} \geometry{left=2.5cm,right=2.5cm,top=2.5cm,bottom=2.5cm} \usepackage{algorithm} \usepackage{algorithmic} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \renewcommand{\algorithmicforall}{\textbf{for each}} \usepackage{mathtools} \usepackage{comment} \usepackage[dvipdfmx]{graphicx} \usepackage{float} \usepackage{framed} \usepackage{graphicx} \usepackage{subcaption} \usepackage{color} \usepackage{url} \title{\Huge{ Program Manual: \\ Listing Chemical Isomers of a Given Chemical Graph}} \begin{document} % % The following makeatletter must be after begin{document} % \makeatletter % \let\c@lstlisting\c@figure % \makeatother % \西暦 \date{\today} \maketitle % \cleardoublepage \thispagestyle{empty} \tableofcontents \clearpage \pagenumbering{arabic} \section{Introduction} \label{sec:intro} This text explains how to use the program for listing chemical isomers of a given 2-lean cyclic chemical graph~\cite{branch}. %In addition to this text, a {\tt README.txt} and a {\tt License.txt} file, %you will find accompanying the following files and folders. % \begin{itemize} \item Folder{\tt isomers}\\ Containing source code files of the program for generating chemical isomers of a cyclic chemical graph. The program is written in the C++ programming language. \begin{itemize} \item Folder{\tt include}\\ A folder that contains related header files. \begin{itemize} \item{chemical\_graph.hpp}\\ A header file that contains functions for manipulating chemical graphs. \item{cross\_timer.h}\\ A header file that contains functions for measuring execution time. \item{data\_structures.hpp}\\ Data structures implemented for storing chemical graphs. \item{fringe\_tree.hpp}\\ A header file with functions for enumerating 2-fringe trees with fictitious degree~\cite{branch}. \item{tools.hpp}\\ Various functions used in the implementation. \item{tree\_signature}\\ A header file with functions to compute and read canonical representation of fringe trees. \end{itemize} \item Folder{\tt instance}\\ A folder containing sample input instance. \begin{itemize} \item{\tt sample.sdf}\\ A cyclic chemical graph with 45 vertices (non-Hydrogen atoms). \item{\tt sample\_partition.txt}\\ A file containing partition information into acyclic subgraphs of the cyclic chemical graph given in {\tt sample.sdf}. \item{\tt sample\_fringe\_trees.sdf}\\ A family of 2-fringe trees attached with the cyclic chemical graph given in {\tt sample.sdf}. \end{itemize} \item Folder{\tt main}\\ Folder containing source files. \begin{itemize} \item{generate\_isomers.cpp}\\ Implements an algorithm for listing chemical isomers of a 2-lean cyclic chemical graph. \item{Pseudocode\_Graph\_Generation.pdf}\\ A pdf files showing Pseudo-codes for Graph Search Algorithms. \end{itemize} \end{itemize} \end{itemize} The remainder of this text is organized as follows. Section~\ref{sec:term} explains some of the terminology used throughout the text. Section~\ref{sec: main} gives an explanation of the program for generating chemical isomers of a given 2-lean cyclic chemical graph, explaining the input, output, and presenting a computational example. 
% \newpage \section{Terminology} \label{sec:term} This section gives an overview of the terminology used in the text. % \begin{itemize} % \item Chemical Graph\\ A graph-theoretical description of a chemical compound, where the graph's vertices correspond to atoms, and its (multi) edges to chemical bonds. Each vertex is colored with the chemical element of the atom it corresponds to, and edges have multiplicity according to the corresponding bond order. We deal with ``hydrogen-suppressed'' graphs, where none of the graph's vertices is colored as hydrogen. This can be done without loss of generality, since there is a unique way to saturate a hydrogen-suppressed chemical graph with hydrogen atoms subject to a fixed valence of each chemical element. \item Feature vector\\ A numerical vector giving information such as the count of each chemical element in a chemical graph. For a complete information on the descriptors used in feature vectors for this project, please see~\cite{branch}. \item Partition Information\\ Information necessary to specify the base vertices and edges, as well as the vertex and edge components of a chemical graph and fringe trees for each component. For more details, please check~\cite{branch}. \item Fringe Tree Information\\ A list of 2-fringe trees to be used to construct isomer. For more details, please check~\cite{branch}. \end{itemize} \section{Program for Generating Chemical Isomers} \label{sec: main} \subsection{Input and Output of the Program} \label{sec:InOut_m} This section gives an explanation of the input and output of the program that generates chemical isomers of a given 2-lean chemical graph. We call the program {\tt Generate Isomers}. Section~\ref{sec:Input_m} gives an explanation of the program's input, and Sec.~\ref{sec:Output_m} of the program's output. \subsubsection{The Program's Input} \label{sec:Input_m} The input to the {\tt Generate Isomers} program consists of ten necessary items. \\ First, comes information of a 2-lean chemical graph (in SDF format).\\ Second, comes a time limit in seconds on each of the stages of the program (for details, check the accompanying file with pseudo-codes of the program).\\ Third is an upper bound on the number of {\em partial} feature vectors, that the program stores during its computation. \\ Fourth is the number of graphs that are stored per one base vertex or edge.\\ Fifth is a global time limit for enumeration of paths.\\ Sixth is a global upper bound on the number of paths enumerated from the DAGs.\\ Seventh is an upper limit on the number of generated output chemical graphs. \\ Eighth is a filename (SDF) where the output graphs will be written.\\ Ninth is a file that contains partition information of the chemical graph given as the first parameter.\\ Tenth is a file that contains list of fringe trees. % \bigskip\\ A cyclic chemical graph, given as a ``structured data file,'' in SDF format. This is a standard format for representing chemical graphs. For more details, please check the documentation at \\ \url{ http://help.accelrysonline.com/ulm/onelab/1.0/content/ulm_pdfs/direct/reference/} \\ \phantom{\url{ http://help.accelrysonline.com}}\url{/ctfileformats2016.pdf} %% \bigskip\\ The partition information is stored as a text file and an output of Stage 4. 
\\ The {\tt sample\_partition.txt} looks like: \begin{oframed} {\bf Partition information file}\\\\ {\tt 2 \\ 14 \\ 0 45 \\ 1 2 3 4 5 6 7 8 \\ 15 \\ 0 45 \\ 1 2 3 4 5 6 7 8 \\ 2 \\ 14 1 2 3 5 6 15 \\ 0 45 \\ 1 2 3 4 5 6 7 8 \\ 14 10 15 \\ 0 45 \\ 1 2 3 4 5 6 7 8 \\ } \end{oframed} Following, Table~\ref{tab:PartitionFormat} gives a row-by-row explanation of the numerical information in the above partition file. \bigskip \begin{table}[H] \begin{center} \caption{Structure of a Partition Information} \label{tab:PartitionFormat} \begin{tabular}{l|l} Value in the file & Explanation \\ \hline \hline {\tt 2} & Number of base vertices \\ \hline {\tt 14} & The index of a base vertex in the input SDF \\ {\tt0 45 } & Core height lower and upper bound \\ {\tt 1 2 3 4 5 6 7 8} & Indices of fringe trees in the fringe tree file\\ {\tt15} &\\ {\tt 0 45} &\\ {\tt1 2 3 4 5 6 7 8} \\ \hline {\tt2} & Number of base edges \\ \hline {\tt 14 1 2 3 5 6 15} & Indices of vertices in the base edge from the input SDF \\ {\tt 0 45} &Core height lower and upper bound \\ {\tt 1 2 3 4 5 6 7 8} & Indices of fringe trees in the fringe tree file\\ {\tt 14 10 15} & \\ {\tt 0 45} &\\ {\tt 1 2 3 4 5 6 7 8} & \\ \hline \end{tabular} \end{center} \end{table} %%%% The list of fringe trees are stored in a text file. The {\tt sample\_fringe\_trees.sdf} is given below: \begin{oframed} {\bf Fringe tree file}\\\\ {\tt1, C 0 C 1, 1 \\ 2, C 0 O 1, 2 \\ 3, N 0 C 1, 1 \\ 4, C 0 C 1 C 2, 1 1 \\ 5, C 0 C 1 C 2 C 2, 1 1 1\\ 6, C 0 C 1 O 2 O 2, 1 1 2\\ 7, N 0 C 1 N 2 N 2, 2 1 1\\ 8, C 0 C 1 C 2 C 1, 1 1 1}\\ \end{oframed} %% Following, Table~\ref{tab:fringeFormat} gives a row-by-row explanation of the numerical information in the above fringe tree file. \bigskip \begin{table}[H] \begin{center} \caption{Structure of a Fringe Tree File} \label{tab:fringeFormat} \begin{tabular}{l|l} Value in the file & Explanation \\ \hline \hline {\tt 1, C 0 C 1, 1} & Index of the fringe tree, (color, depth)-sequence, weight sequence \\ & For details please check \\ & Pseudocode\_Graph\_Generation.pdf\\ \hline {\tt 2, C 0 O 1, 2 } &\\ {\tt 3, N 0 C 1, 1 } &\\ {\tt 4, C 0 C 1 C 2, 1 1 } &\\ {\tt 5, C 0 C 1 C 2 C 2, 1 1 1} &\\ {\tt 6, C 0 C 1 O 2 O 2, 1 1 2} &\\ {\tt 7, N 0 C 1 N 2 N 2, 2 1 1} &\\ {\tt 8, C 0 C 1 C 2 C 1, 1 1 1} &\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{The Program's Output} \label{sec:Output_m} After executing the {\tt Generate Isomers} program, the chemical isomers of the input graph will be written in the specified SDF, and some information on the execution will be output on the terminal. The information printed on the terminal includes:\\ - a lower bound on the number of chemical isomers of the given input chemical graph, \\ - the number of graphs that the program generated under the given parameters, and \\ - the program's execution time. \subsection{Executing the Program and a Computational Example} \label{sec:Example_m} This section gives a concrete computational example of the {\tt Generate Isomers} program. \subsubsection{Compiling and Executing the {\tt Generate} Program} \label{sec:compile_m} \begin{itemize} \item {\em Computation environment}\\ There should not be any problems when using a ISO C++ compatible compiler. % There program has been tested on Ubuntu Linux 20.04, with the g++ compiler ver 9.3. 
If the compiler is not installed on the system, it can be installed with the following command.\\
\verb|$ sudo apt install g++|
\item {\em Compiling the program}\\
Please run the following command in the terminal.\\
\verb|$ g++ -o generate_isomers generate_isomers.cpp -O3 -std=c++11|\\
\item {\em Executing the program}\\
The program can be executed by running the following command in the terminal.\\
\verb|$ ./generate_isomers instance.sdf a b c d e f output.sdf instance_partition.txt instance_fringe_trees.txt|\\
Above, {\tt generate\_isomers} is the name of the program's executable file, and the remaining command-line parameters are as follows: \\
\verb|instance.sdf| an SDF file containing the input chemical graph, \\
\verb|a| upper bound (in seconds) on the computation time of each stage of the program, \\
\verb|b| upper bound on the number of stored partial feature vectors, \\
\verb|c| upper bound on the number of graphs stored per base vertex or edge, \\
\verb|d| upper bound (in seconds) on the time for enumeration of paths,\\
\verb|e| upper bound on the number of total paths stored during the computation,\\
\verb|f| upper bound on the number of output graphs, \\
\verb|output.sdf| filename to store the output chemical graphs (SDF format), \\
\verb|instance_partition.txt| partition information of the input chemical graph, \\
\verb|instance_fringe_trees.txt| list of fringe trees.\\
\end{itemize}

\subsubsection{Computational Example}
\label{sec:instance_p}
We execute the {\tt Generate Isomers} program with the following parameters.
\begin{itemize}
\item Input graph: File {\tt sample.sdf} from the folder {\tt instance}
\item Time limit: 10 seconds
\item Upper limit on the number of partial feature vectors: 10000
\item Number of graphs per base vertex or edge: 5
\item Global time limit for enumeration of paths: 10 seconds
\item Global upper bound on number of paths: 10000
\item Upper limit on the number of output graphs: 2
\item Filename to store the output graphs: {\tt output.sdf}
\item Partition information of the input graph: File {\tt sample\_partition.txt} from the folder {\tt instance}.
\item Fringe tree information of the input and output graphs: File {\tt sample\_fringe\_tree.txt} from the folder {\tt instance}.
\end{itemize}
Execute the program by typing the following command into the terminal (without a line break).
\bigskip

{\tt ./generate\_isomers ../instance/sample.sdf 10 10000 5 10 10000 2} \\
\phantom{{\tt ./generate\_isomers ~~\,}} {\tt output.sdf ../instance/sample\_partition.txt }\\
\phantom{{\tt ./generate\_isomers ~~\,}} {\tt ../instance/sample\_fringe\_tree.txt}
%
\bigskip

Upon successful execution of the program, the following text should appear on the terminal.
\begin{oframed} {\bf Output Written on the Terminal}\\\\ {\tt A lower bound on the number of graphs = 10512\\ Number of generated graphs = 2\\ Total time : 34.1s.} \end{oframed} \begin{oframed} {\bf Contents of the file output.sdf}\\\\ {1 \\ BH-2L \\ BH-2L \\ 45 45 0 0 0 0 0 0 0 0999 V2000 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 1 9 1 0 0 0 0 \\ 1 27 1 0 0 0 0 \\ 2 3 1 0 0 0 0 \\ 2 9 1 0 0 0 0 \\ 2 13 1 0 0 0 0 \\ 3 4 1 0 0 0 0 \\ 4 5 1 0 0 0 0 \\ 5 6 1 0 0 0 0 \\ 6 7 1 0 0 0 0 \\ 6 8 2 0 0 0 0 \\ 9 10 1 0 0 0 0 \\ 10 11 1 0 0 0 0 \\ 10 12 1 0 0 0 0 \\ 13 14 1 0 0 0 0 \\ 13 21 1 0 0 0 0 \\ 14 15 2 0 0 0 0 \\ 14 16 1 0 0 0 0 \\ 16 17 2 0 0 0 0 \\ 16 18 1 0 0 0 0 \\ 18 19 1 0 0 0 0 \\ 19 20 1 0 0 0 0 \\ 21 22 1 0 0 0 0 \\ 21 23 1 0 0 0 0 \\ 23 24 1 0 0 0 0 \\ 23 25 1 0 0 0 0 \\ 25 26 1 0 0 0 0 \\ 25 27 1 0 0 0 0 \\ 27 28 1 0 0 0 0 \\ 28 29 1 0 0 0 0 \\ 29 30 1 0 0 0 0 \\ 29 33 1 0 0 0 0 \\ 30 31 1 0 0 0 0 \\ 30 32 1 0 0 0 0 \\ 33 34 1 0 0 0 0 \\ 33 35 1 0 0 0 0 \\ 35 36 2 0 0 0 0 \\ 35 37 1 0 0 0 0 \\ 37 38 1 0 0 0 0 \\ 38 39 2 0 0 0 0 \\ 38 40 1 0 0 0 0 \\ 40 41 1 0 0 0 0 \\ 41 42 1 0 0 0 0 \\ 42 43 1 0 0 0 0 \\ 43 44 1 0 0 0 0 \\ 43 45 2 0 0 0 0 \\ M END \\ \$\$\$\$ \\ 2 \\ BH-2L \\ BH-2L \\ 46 46 0 0 0 0 0 0 0 0999 V2000 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 
\\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 N 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0.0000 0.0000 0.0000 O 0 0 0 0 0 0 0 0 0 0 0 0 \\ 1 9 1 0 0 0 0 \\ 1 28 1 0 0 0 0 \\ 2 3 1 0 0 0 0 \\ 2 9 1 0 0 0 0 \\ 2 13 1 0 0 0 0 \\ 3 4 1 0 0 0 0 \\ 4 5 1 0 0 0 0 \\ 5 6 1 0 0 0 0 \\ 6 7 1 0 0 0 0 \\ 6 8 2 0 0 0 0 \\ 9 10 1 0 0 0 0 \\ 10 11 1 0 0 0 0 \\ 10 12 1 0 0 0 0 \\ 13 14 1 0 0 0 0 \\ 13 21 1 0 0 0 0 \\ 14 15 2 0 0 0 0 \\ 14 16 1 0 0 0 0 \\ 16 17 2 0 0 0 0 \\ 16 18 1 0 0 0 0 \\ 18 19 1 0 0 0 0 \\ 19 20 1 0 0 0 0 \\ 21 22 1 0 0 0 0 \\ 21 23 1 0 0 0 0 \\ 23 24 1 0 0 0 0 \\ 23 25 1 0 0 0 0 \\ 25 26 1 0 0 0 0 \\ 25 28 1 0 0 0 0 \\ 26 27 1 0 0 0 0 \\ 28 29 1 0 0 0 0 \\ 29 30 1 0 0 0 0 \\ 30 31 1 0 0 0 0 \\ 30 32 1 0 0 0 0 \\ 32 33 1 0 0 0 0 \\ 32 34 1 0 0 0 0 \\ 34 35 1 0 0 0 0 \\ 34 38 1 0 0 0 0 \\ 35 36 1 0 0 0 0 \\ 35 37 1 0 0 0 0 \\ 38 39 1 0 0 0 0 \\ 39 40 2 0 0 0 0 \\ 39 41 1 0 0 0 0 \\ 41 42 1 0 0 0 0 \\ 42 43 1 0 0 0 0 \\ 43 44 1 0 0 0 0 \\ 44 45 1 0 0 0 0 \\ 44 46 2 0 0 0 0 \\ M END \\ \$\$\$\$ \\ } \end{oframed} \addcontentsline{toc}{section}{\refname} \begin{thebibliography}{10} \bibitem{branch} Y.~Shi, J.~Zhu, N.~ A.~Azam, K.~Haraguchi, L.~Zhao, H.~Nagamochi, and T.~Akutsu. A two-layered model for inferring chemical compounds with integer programming, J.~Mol. Sci. [submitted]. \end{thebibliography} \end{document}
\documentclass[10pt]{article} \usepackage{../inc/style} \begin{document} % --------------------------------------------------------------------- \title{Calculus extended} \subsection*{Language specifications} \subsubsection*{Syntax} File format is similar to M3. There are 4 sections: sources, maps declarations, queries and explicit triggers. The source queries and trigger definitions are the same as in M3. The maps are defined as:\ul \item {\tt\small DECLARE MAP map[indices:types] := expr;} where expression is a calculus expression. Map references can appear in the expression (including self-reference). Maps are intentional by default. \item {\tt\small DECLARE MAP map[indices:types] : type;} declares an unmanaged map, that has no associated triggers. The user can the define extra triggers to compute this map. \ule We extend the expressions language with:\ul \item {\tt\small AggMin([indices], expr)}, {\tt\small AggMax([indices], expr)} that return the minimal resp. maximal value of expression or 0 if there is no such expression exists. \ule We add new statements (in triggers):\ul \item {\tt\small CALL trigger(values) IN expression;} where the IN expression part can be omitted if it is 1. values are constants or variables bound by the right hand-side expression. \ule The incremental computation of each map is then computed (M3) before it is transformed to imperative version with explicit loops in the target language. \subsubsection*{Semantics} Maps support recursive declaration. Trigger for different maps will be aggregated in a single trigger for a particular event. Triggers calls are passed through a queue (to keep consistent behavior with distributed version). This implies that whenever an event happen all rule are traversed once, matching rules are called and recursive events are added to the queue. Then the next event from the queue is being processed until the queue is empty. Desirable properties we want to achieve (give strong intuition/proof):\ul \item Avoid synchronization between add operations on maps (in particular for computing fixpoints) \item Avoid stack overflows due to recursion (this is dealt with the events/calls queue) \item Avoid recursion if the value is not changed (or changed below some relative threshold) \ule \subsection*{Examples} \subsubsection*{Problem 1: betweenness centrality} Let our fact table be $edge(x,y)$ we want to compute the \href{http://en.wikipedia.org/wiki/Betweenness_centrality}{betweenness centrality} $g(v)$ of graph nodes $v$, we apply the following rules: \[\begin{array}{rcl} edge[x,y] & := & \text{\it storage of input stream} \\ path[x,y,len] & := & \exists(edge[x,y]) \times (len\hat= 1) + \exists(edge[x,z]) \times \exists(path[z,y,l]) \times (len\hat= 1+l) \times (x\ne y)\\ path_v[x,y,v,len] & := & \exists(path[x,v,l_1]) \times \exists(path[v,y,l_2]) \times (len\hat= l_1+l_2) \times (x\ne y) \\ centrality[v] & := & \exists(path[x,y,m]) \times (\exists(path[x,y,s] \times (s<m)) = 0) \times \left(\dfrac{{\rm AggSum}([], path_v[x,y,v,m])}{path[x,y,m]}\right) \end{array}\] By adding the condition $(x\ne y )$ in the $path$ definition, we prevent cycles. \subsubsection*{Problem 2: PageRank} The iterative method of \href{http://en.wikipedia.org/wiki/PageRank#Iterative}{PageRank} is defined for $N$ pages with a matrix $M=(K^{-1} A)^T$ where $A$ is the adjacency matrix of the graph, $K$ the diagonal matrix with out-degrees in diagonal. 
We compute
\[R(t+1)=d\cdot M\cdot R(t)+\dfrac{1-d}{N}{\bf 1} \qquad \text{with {\bf 1} a column vector of ones and }R(0)={\bf 1}\cdot \frac{1}{N}\]
The issue with the matrix model is that all computations are done synchronously.

\textbf{Naive version:} To avoid explicit synchronization, we have two solutions: maintaining versions in the database or normalizing $|R|=1$ after every computation.
\[\begin{array}{rrl}
node[x] &:=& \text{\it input stream} \in\{0,1\} \\
edge[x,y] &:=& \text{\it input stream, adjacency matrix} \\
weight[x] &:=& node[x] \times (w_0\hat=weight[x]) \times \left( w_0 + (w_1\hat=nw) \times \left( \dfrac{|w_1 - w_0|}{w_1+w_0} > {\rm threshold}\right) \times (w_1-w_0) \right) \\ \\
&& \text{where } nw= {\rm AggSum}([], \dfrac{edge[y,x] \times weight[y]}{{\rm AggSum}([],edge[y,z])}) \times d + \dfrac{1-d}{{\rm AggSum}([],node[v])}\\
\end{array}\]
The shortcomings of this approach are that the global weight is not preserved and that the initial coefficient has to be injected. The $d$ weighting should mitigate this issue (but this is not certain at all!). We must also ensure that we do not call recursively if the value did not change (encode this with a threshold in the map declaration or in another function?).

\textbf{Clock join:} One possible idea to virtually execute the program synchronously is to have a clock stream and a timestamp attribute for all maps. When a clock tick is inserted, the next iteration can be computed (join $data \times tick$); when the tick is removed, the associated data can be removed from the map. Issues:\ul
\item Deletion trigger of weight must be \underline{overridden} to only remove data (and avoid recursion).
\item How to know that all the data at a tick has been fully computed?
\ule
\[\begin{array}{rrl}
tick[t] &:=& \text{\it logical clock} \in \{0,1\} \\
node[x] &:=& \text{\it input stream} \in\{0,1\} \\
edge[x,y] &:=& \text{\it input stream} \\
weight[x,t] &:=& node[x] \times {\rm AggSum}([x], \dfrac{edge[y,x] \times weight[y,t-1]}{{\rm AggSum}([],edge[y,z])}) \times d + \dfrac{1-d}{{\rm AggSum}([],node[z])}
\end{array}\]
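For concreteness, the naive variant above could be written with the map declarations introduced in the language specification roughly as follows; the stream and map names are illustrative, $d$ is treated as a constant, and the relative-change threshold guard is deliberately omitted, so this sketches only the shape of the rule rather than a complete program:
\begin{verbatim}
CREATE STREAM snode(x:int) FROM ...
CREATE STREAM sedge(x:int, y:int) FROM ...

DECLARE MAP NODE[x:int]        := snode(x);
DECLARE MAP EDGE[x:int, y:int] := sedge(x,y);
DECLARE MAP OUTDEG[y:int]      := AggSum([y], EDGE[y,z]);

DECLARE MAP WEIGHT[x:int] :=
  NODE[x] *
  ( d * AggSum([x], (EDGE[y,x] * WEIGHT[y]) / OUTDEG[y])
    + (1 - d) / AggSum([], NODE[z]) );
\end{verbatim}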
======> write a roadmap look at orchestra (datalog + incrementally) [not very interesting] - clean simple locality (orthogonalized) - distribution - default locality - simulate timestamps - iteration number Goals: - explicit deltas, use cases, triggers, active dbs - what make it easier to do incrementalization easier \end{verbatim} for maxflow mincut have sets of flows and union them to construct recursively instead of hashing tuples to host id, hash to a list of host id => redundancy, then this id can be made "mostly" coherent by doing modulo on group size to get id in group, groups are tagged and a node decides to join to one or more group and join/notifies master at connect \newpage The main concern in this problem definition is that the $path$ map is defined recursively, so we need to bind its definition. The proposal is to distinguish two types of maps: extensional and intentional. Only $edge$ would be extensional here.\ul \item Extensional (regular) map are modified by triggers \item Intentional maps are iteratively converted into: 1) a map with 2) associated triggers, 3) additional statements in the existing triggers and possibly 4) materialize streams it depends on into maps. \ule The key insight is to see intentional maps as SQL queries over extensional dataset. This means:\ul \item We can reuse DBToaster machinery to maintain incrementally intentional maps, thereby having <<front-end M3>> where intentional maps declarations are allowed, and <<back-end M3>> where they are converted into additional maps and triggers. \item The concern of having multiple declaration (for the same map) is solved (as DBToaster will automatically break maintenance into multiple triggers) \item We do not have to worry about the set and counting semantics of our maps or use deltas. \item Updates do not need need a specific order because it is implicitly defined by the triggers call chain (that is we will update $path$ before $centrality$ because $centrality$ depends on $path$ in its definition). \item The logical clock of the system would correspond to the events of external tuples arriving. \ule Practically, we could write this program as follows, let the nodes be identified by integers: \begin{verbatim} -------------------- SOURCES -------------------- CREATE STREAM sedge(x:int,y:int) FROM ... --------------------- MAPS ---------------------- DECLARE MAP EDGE[][x:int,y:int] := sedge(x,y); DECLARE INTENTIONAL MAP PATH[][x:int,y:int,len:int] := EXISTS(EDGE(x,y)) * (len^=1) + EXISTS(EDGE(x,z)) * EXISTS(PATH(x,y,l)) * (len^=1+l); DECLARE INTENTIONAL MAP PATHV[][x:int,y:int,v:int,len:int] := ... DECLARE INTENTIONAL MAP CENTRALITY[][v:int] := ... -------------------- QUERIES -------------------- DECLARE QUERY g := CENTRALITY(double)[][v]; ------------------- TRIGGERS -------------------- ON + sedge(x,y) { EDGE(x,y) += 1; } ON - sedge(x,y) { EDGE(x,y) += -1; } \end{verbatim} This declaration has no triggers because they will be generated during an M3 intentional$\to$triggers phase. Its result would be the addition of new regular maps and triggers to replace each intentional map. \newpage We will expand one by one intentional maps into extensional ones. Intuition: a map is convertible only if its right hand-side contains only extensional maps or itself, and we always append additional statements to existing triggers. 
Practically, we would obtain: % CALL + PATH(x,z,len2) IN PATH(y,z,len) * (len2 ^= len+1) \begin{verbatim} --------------------- MAPS ---------------------- DECLARE MAP EDGE[][x:int,y:int] := edge(x,y); DECLARE MAP PATH[][x:int,y:int,len:int] := ... -- expand first DECLARE MAP PATHV[][x:int,y:int,v:int,len:int] := ... -- expand second DECLARE MAP CENTRALITY[][v:int] := ... -- expand third ------------------- TRIGGERS -------------------- ON + sedge(x,y) { EDGE(x,y) += 1; -- first CALL + PATH(x,y,1) -- PATH, base step } ON - sedge(x,y) { EDGE(x,y) -= 1; -- first CALL - PATH(x,y,1) -- PATH, base step } ON + PATH(x,y,len) { PATH[x,y,len] += 1; -- first CALL + PATH(z,y,len2) IN EDGE(z,x) * (len2 ^= len+1) -- PATH inductive step ... -- call + PATHV ... -- call + CENTRALITY } ON - PATH(x,y,len) { PATH[] := 0; -- clear all CALL + PATH(x,y,len) IN EDGE(x,y) * (len ^= 1) -- rebuild all PATH base steps ... -- call - PATHV ... -- call - CENTRALITY } ON + PATHV(x,y,v,len) { ... -- call + CENTRALITY } \end{verbatim} Notes:\ul \item A trigger always appends to its associated map first and foremost. \item CALL + ... IN has the same semantics as += but on a trigger instead of a map. If the IN ... part is omitted, it is assumed to be true/1. \item We need to assume there is no cycles in the graph, otherwise there is no termination. If we have edges (a,b), (b,a) the set of paths is infinite. It should be up to the programmer that make sure that the path does not contain cycles. \item By expanding the intentional maps, we duplicate the logic of <<base step>> in each of the right hand-side extensional map trigger and the recursive <<inductive step>> in its own trigger. For correctness, we only need to make sure no <<inductive step>> is written outside of the own trigger. \item Because of incrementality, the idea of <<generations>> mentioned in the implementation details of DataLog is implicit because all previous generations are in the map, whereas next generation is passed through trigger. \item In the deletion trigger for paths, we need to replay all the construction steps unless we want to tag each tuple with all the base steps it depends on (which might be even more complex). \ule \subsubsection*{Discussion} \ol \item One concern was the non-termination of programs. If we let the programmer write his own triggers, we can not prevent him from writing: \begin{verbatim} ON + EDGE(x,y) { EDGE[x,y] += 1; CALL + EDGE(x,y); } \end{verbatim} The set semantics is quite straightforward: we place existential guards to avoid multiple insertions. \begin{verbatim} ON + EDGE(x,y) { EDGE[x,y] += 1; CALL + EDGE(x,y) IN EXISTS(EDGE[x,y]) = 0; } \end{verbatim} But again, this would not prevent the programmer to shoot himself in the foot by writing recursive non-terminating code. (Unless he is not allowed to edit triggers, but that's maybe not desirable). \item We might bound the recursion with each trigger call carrying a counter of <<how far from the original stream event>> it is (where far is counter in \# of calls), and avoid recursion above some threshold. \item We need to have an order between maps, which prevents mutual recursion. One workaround for this is to rewrite the program such that there is only self-recursion in the map definition. An insightful programmer might also write his own trigger to close the loop of mutual recursion. \item Does order matter? We have two different ways of calling other triggers: we can immediately invoke it or add the next trigger call in a queue. 
The first method produces DFS-like calls whereas the second strategy behave like BFS. I would argue for the latter one as this would give a coherent behavior to what is achievable with message passing for distributed system. \item Batching streams? If we only have map add/delete operations in \underline{all} triggers, we may be able to relax the constraints about streams ordering (I think in particular we do not need to wait for a fix-point to be reached before we can send the next external tuple). \ole \subsubsection*{Problem 2: metro and busses} This problem is to test how difficult it is to convert mutual recursion to self-recursion. Say we have a city with locations (as integers) and metro $m(l_1,l_2)$ and busses $b(l_1,l_2)$ connecting these locations. We want to know all locations reachable from $l_0$ and how many changes (to forbid graph union) are required. The maps contain the minimal number of alternated rides. Original statement would be: \[\begin{array}{rclllll} RB[l] &:=& {\rm MIN}\big(&b(l_0,l),&RB[l_1] \times b(l_1,l),&RM[l_1] \times b(l_1,l) +1&\big) \\ RM[l] &:=& {\rm MIN}\big(&m(l_0,l),&RM[l_1] \times m(l_1,l),&RB[l_1] \times m(l_1,l) +1&\big) \\ R[l] &:=& {\rm MIN}\big(&RB[l],&RM[l] \big) \end{array}\] In this case, we can add an extra field to the tuples to know whether they are in the map RB (ending with a bus) or RM (ending with metro). Hence the transformation look quite general and the problem can be transformed: let $x\in\{B, M, 0\}$ in \[\begin{array}{rclllll} R[l,x] &:=& {\rm MIN}\big(&b(l_0,l),&R[l_1,B] \times b(l_1,l),&R[l_1,M] \times b(l_1,l) +1&\big) \times(x\hat=B) \\ && {\rm MIN}\big(&m(l_0,l),&R[l_1,M] \times m(l_1,l),&R[l_1,B] \times m(l_1,l) +1&\big) \times(x\hat=M) \\ && {\rm MIN}\big(&R[l,B],&R[l,M] \big)\times(x\hat=0) \end{array}\] and our query now become $rides(l):=R[l,0]$ with self-recursion. Conversion seems quite straightforward but this is due to the fact that $RB$, $RM$ and $R$ have the same domain; this may not be true in general. Can we have mutual recursion for functions that do not operate over the same domain ? %\begin{verbatim} %for both problems: %- write mathemal definition %- write pseudo M3 code %- write naive implementation using := to recreate the views at every event %- write expanded M3 code (using call to incremental view maintenance on intentionally defined maps) %- give ideas how we could achieve this using existing architecture %\end{verbatim} \end{document}
\documentclass{article} \usepackage[hidelinks]{hyperref} \usepackage[utf8]{inputenc} \usepackage{float} \usepackage{graphicx} \graphicspath{ {assets/} } \usepackage[backend=biber, style=authoryear-icomp]{biblatex} \addbibresource{my.bib} \newcommand{\comment}[1]{} \usepackage[acronym]{glossaries} \loadglsentries{glossary} \makeglossaries \begin{document} \title{ Software Requirement Specification\\ for \\ Bug-Tracker \\ Version 1.0 approved } \author{Jesse Wood} \maketitle \newpage \tableofcontents \label{contents} \newpage \section{Introduction} \subsection{Purpose} \comment{ Identify the product whose software requirements are specified in this document, including the revision or release number. Describe the scope of the product that is covered by the SRS. Particularly if this SRS describes only part of the system or a single subsytem. } The product whose software requirements are specified in this document is a \Gls{bug}-tracker. \Gls{bug} tracking is a process of logging or monitoring bugs during software development \parencite{ibm20}. The software is a stand-alone system to be used as a bug-tracker while developing a project. \subsection{Document Conventions} \comment{ Decribe any standards of typographical conventions that were followed when writing this \acrlong{srs}, such as fonts or highlighting that have special significance. For exmaple, state whether priorites for higher-level requirements are assumed to be inheritied by detailed requirements, or whether every requirement statement is to have its own priorities. } This document follows the \acrshort{ieee} \acrshort{srs} convention \parencite{ieee20}. Fundamental requirements for this software will be explored in more detail than others.Each of the requirements will be listed in order of descending priority. All references can be found below in the \nameref{references}. Acronyms and technical terms can be found in the \nameref{glossary}. All of the UML diagrams used throughout the document were made using PlantUML \parencite{plantuml20}. \subsection{Intended Audience and Reading Suggestions} \comment{ Descibe the different types of reader that the document is inteded for, such as developers, project managers, marketing staff, users, testers, and documentation writers. Describe what the rest of the STS contains and how it is organised. Suggest a sequenc for reading the document, beginning with the overview sections and proceeding throught the sections that are most pertinent to each reader type. } The intended audience for this document is academics, developers and users. This document is set out sections defined in the table of contents \nameref{contents}, which explore different aspects of the systems requirements. Users of this software could find themselves only interested in a few sections, such as the \nameref{userDocumentation} or \nameref{systemFeatures}. Where as academics and developers may find the contents of the entire document to be relevant if they are looking into a similar software. \subsection{Product Scope}\label{scope} \comment{ Provide a short decription of the software being specified and its purpose, including relevant benefits, objectives and goals. Relate the software to corporate goals or business strategies. If a separate vision and scope document is available, refer to it rather than duplicating its contents here. } This software is designed to track bugs in projects. Bug-tracking is a useful way to develop large scale software and/or develop software in a team. 
A goal of this project will be to follow the \acrlong{mvc} design pattern \parencite{designpatterns97}. Using a design pattern will make the software reusable and easier to develop further in the future. Expanding on the \acrshort{mvc} model, maximising the modularity of this software will help make the project adaptable and promote further re-use in the future.
\newpage
\subsection{References}\label{references}
\comment{
If a separate reader could access a copy of each reference, including title, author, version number, date, and source or location.
}
Each of the following references is available in the form of an online PDF. Note that the availability of website references may change over time.
\printbibliography
\newpage
\section{Overall Description}
\subsection{Product Perspective}
\comment{
Describe the context and origin of the product being specified in this \acrshort{srs}. For example, state whether this product is a follow-on member of a product family, a replacement for existing systems, or a new, self-contained product. If the \acrshort{srs} defines a component of a larger system, relate the requirements of the larger system to the functionality of this software and identify interfaces between the two. A simple diagram that shows the major components of the overall system, subsystem interconnections, and external interfaces can be helpful.
}
The development of this software originated out of necessity. One of the most crucial and time-consuming tasks of any software development is the \gls{debugging} process. As software developers we plan to minimise the prevalence of bugs within our systems, but however hard we may try, they will still percolate throughout our code bases. Bug-tracking is a process of organising the bugs in a system into a priority hierarchy, such that each bug can be addressed with respect to the severity of its impact on the system. Another important part of a bug-tracker is the ability to change the status of a bug (e.g., when solved); this allows the workflow of a project to be monitored so that duplicate resources aren't assigned to fixing the same issue.
\subsection{Product Functions}
\comment{
Summarize the major functions the product must perform or must let the user perform. Details will be provided in Section 3, so only a high level summary (such as a bullet list) is needed here. Organize the functions to make them understandable to any reader of the SRS. A picture of the major groups of related requirements and how they relate, such as a top level data flow diagram or object class diagram, is often effective.
}
\begin{figure}[h]
\caption{Class Diagram}
\centering
\includegraphics[width=\textwidth]{class-diagram.png}
\end{figure}
\subsection{User Classes and Characteristics}\label{userClass}
\comment{
Identify the various user classes that you anticipate will use this product. User classes may be differentiated based on the frequency of use, subset of product functions used, technical expertise, security or privilege levels, educational level, or experience. Describe the pertinent characteristics of each user class. Certain requirements may pertain only to certain user classes. Distinguish the most important user classes for this product from those who are less important to satisfy.
}
This software has three main user classes: the \emph{Author}, the \emph{Collaborator} and the \emph{Audience}. Each of these classes will have different privileges related to a project.
\\ \\
An \emph{Author} has permission to delete and edit a project at will.
They can also contribute bug reports and change the scope of a project from public to private or vice versa. An author is able to add/remove collaborators from a project, and remove any bug reports on their own projects.
\\ \\
A \emph{Collaborator} is an actor that has been granted permissions by the author. These privileges allow the user to contribute bug reports to a project. However, they cannot edit or delete the project itself.
\\ \\
The \emph{Audience} includes users that do not come under the aforementioned actors. They are able to view public projects. However, they cannot contribute bug reports to these projects, nor can they view private projects.
\begin{figure}[h]
\caption{ Use Case Diagram}
\centering
\includegraphics[width=\textwidth]{use-case-diagram.png}
\end{figure}
\subsection{Operating Environment}
\comment{ Describe the environment in which the software will operate, including the hardware platform, operating systems and versions, and any other software components or applications with which it must peacefully coexist. }
The software will be deployed in the form of a web app that relies on an \acrshort{html}/\acrshort{css}/\acrshort{js} framework (\acrfull{tbd}). These frameworks are constantly updated, so the software must be maintained over time to ensure that future releases of the framework used do not introduce bugs. As long as the user's browser supports the framework of the software, this system will be able to run across multiple operating systems. Therefore, only one version will have to be released. A more technical version of the software could be implemented for a terminal-based environment for advanced use later if needed (\acrshort{tbd}).
\subsection{Design and Implementation Constraints}\label{designConstraints}
\comment{ Describe any items or issues that will limit the options available to developers. These might include: corporate or regulatory policies; hardware limitations (timing requirements, memory requirements); interfaces to other applications; specific technologies, tools, and databases to be used; parallel operations; language requirements; communication protocols; security considerations; design conventions or programming standards (for example, if the customer's organization will be responsible for maintaining the delivered software). }
The \acrshort{tbd} \gls{database} system will be used to store the user information. This introduces constraints on the storage capacity for user information. The amount of information stored cannot exceed the free-tier limit provided by the \acrshort{tbd} \gls{database} provider, or else the software will no longer be able to store new information.
\\ \\
Using a \gls{database} makes the software vulnerable to security exploits such as \acrshort{sql} Injection \parencite{sql20} and \acrshort{xss} \parencite{xss20}. The projects, bugs and user account information will have to be stored in a \gls{database}. This is likely to hold sensitive information, especially the user information; therefore it is of paramount importance to ensure the software is secure against possible security exploits such as these.
\\ \\
The software will follow the design convention of \acrshort{mvc}. The \acrlong{mvc} is a very popular design pattern aimed at increasing flexibility and reuse \parencite{designpatterns97}. Developers intending to contribute to this project will have to follow the same design convention in any of their additions to the code base.
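\\ \\
As a rough illustration of this convention only (the implementation language and framework are \acrshort{tbd}, and the class, method and field names below are hypothetical rather than part of the specification; Python is used purely for readability), the \acrshort{mvc} split for the bug-tracker might look as follows:
\begin{verbatim}
class BugModel:                      # Model: owns bug data and persistence
    def __init__(self):
        self.bugs = []               # stand-in for the TBD database

    def add_bug(self, description, priority, state="Active"):
        self.bugs.append({"description": description,
                          "priority": priority, "state": state})

class BugView:                       # View: renders data for the user
    def render(self, bugs):
        return "\n".join(f"[{b['state']}] {b['description']}" for b in bugs)

class BugController:                 # Controller: mediates user actions
    def __init__(self, model, view):
        self.model, self.view = model, view

    def report_bug(self, description, priority):
        self.model.add_bug(description, priority)
        return self.view.render(self.model.bugs)
\end{verbatim}
Whatever structure is chosen, it is this separation of concerns that contributions to the code base are expected to preserve.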
\\ \\
It would be expected that the software also follows the important principles of writing clean code \parencite{cleancode08}. Crafting software using these principles is pivotal to maintaining a usable codebase.
\\ \\
Test-driven development is a programming standard that is required during the development of this software. The increasingly popular DevOps approach is crucial in software development. Keeping the test suite for this software up-to-date and all-encompassing is of paramount importance to proving the functionality of the software.
\subsection{User Documentation}\label{userDocumentation}
\comment{ List the user documentation components (such as user manuals, on-line help, and tutorials) that will be delivered along with the software. Identify any known user documentation delivery formats or standards. }
The high-level components of the software will be documented in the source code. Following the documentation standards mentioned in the previous section, \nameref{designConstraints}, the code-base of the software is well-documented for other developers.
\\ \\
\acrshort{tbd} Wiki. The code will have a Wiki available on the GitHub VCS for the software. This will explain the purpose of each of the individual components of the software at a high level, with the in-line documentation providing further explanation when required.
\\ \\
\acrshort{tbd} Help section. On the web-page there is a help section. This includes an FAQ with links to the Wiki for the relevant information a user and/or developer may require.
\subsection{Assumptions and Dependencies}
\comment{ List any assumed factors (as opposed to known facts) that could affect the requirements stated in the SRS. These could include third-party or commercial components that you plan to use, issues around the development environment, or constraints. The project could be affected if these assumptions are incorrect, are not shared, or change. Also identify any dependencies the project has on external factors, such as software components that you intend to reuse from another project, unless they are already documented elsewhere (for example, in the vision and scope document or the project plan). }
The software depends on JavaScript frameworks; the specific framework is \acrshort{tbd}.
\\ \\
Vim is the text editor used to develop the software. Vim is free and open-source software. With the needed technical expertise, anyone can continue to develop this software, with no commercial software needed.
\\ \\
The project will be released under an MIT license. Therefore the reuse of the code and further development will follow the guidelines stipulated by this license \parencite{mit20}. It is assumed that developers and users of this software will adhere to the rules of the license it was released under.
\\ \\
Git is the Version Control System used to document the development of the project. It is assumed that any further development will follow Git etiquette. Improvements and re-works to the software can be forked off the master branch. Given the previously mentioned license, further development of this software by other developers is welcome and encouraged.
\newpage
\section{External Interface Requirements}
\subsection{User Interfaces}
\comment{ Describe the logical characteristics of each interface between the software product and users.
This may include sample screen images, any GUI standards or product family style guides that are to be followed, screen layout constraints, standard buttons and functions (e.g., help) that will appear on every screen, keyboard shortcuts, error message display standards, and so on. Define the software components for which a user interface is needed. Details of the user interface should be documented in a separate user interface specification. }
The user interfaces for this software will be designed to work in a browser. The web app interfaces will be designed with mobile use in mind. Therefore all of the following interfaces will also have cross-compatibility with browsers on mobile devices. The user interface will be designed with usability as the primary focus. It will implement some common UI design conventions discussed in the literature \parencite{aboutface14}.
\\ \\
\begin{figure}[H]
\caption{Sign In}\label{sign-in}
\centering
\includegraphics[width=0.5\textwidth]{sign-in.png}
\end{figure}
\begin{figure}[H]
\caption{Register}\label{register}
\centering
\includegraphics[width=0.5\textwidth]{register.png}
\end{figure}
The sign-in screen is the first interface the user will see. This will allow the user to log in to their account. Google verification will be provided should a user not want to create their own account (\nameref{sign-in}).
\\ \\
\begin{figure}[H]
\caption{Add a Project}\label{project}
\centering
\includegraphics[width=0.5\textwidth]{project.png}
\end{figure}
\begin{figure}[H]
\caption{Projects Menu}\label{projects}
\centering
\includegraphics[width=0.75\textwidth]{projects.png}
\end{figure}
Another interface is the projects menu. This allows the user to navigate to a particular project, with the possibility to create a new project or find an existing one (\nameref{projects}, \nameref{project}).
\\ \\
\begin{figure}[H]
\caption{Add a Bug}\label{bug}
\centering
\includegraphics[width=0.5\textwidth]{bug.png}
\end{figure}
\begin{figure}[H]
\caption{Project Menu}\label{bugs}
\centering
\includegraphics[width=0.75\textwidth]{bugs.png}
\end{figure}
Each project has its own dashboard. On this interface a user can add bugs or change their status, view the history of a project, and search for a bug within a project (\nameref{bugs}, \nameref{bug}).
\\ \\
There will be a settings interface for the authors of projects. Given the correct log-in credentials, a user will be able to edit information about a project, such as its name and whether it is public or private, or remove the project entirely.
\subsection{Hardware Interfaces}
\comment{ Describe the logical and physical characteristics of each interface between the software product and the hardware components of the system. This may include the supported device types and the communication protocols to be used. }
The software supports any device that supports a \acrshort{tbd} browser. It can run on desktop, laptop and mobile devices so long as they support the \acrshort{tbd} framework.
\subsection{Software Interfaces}
\comment{ Describe the connections between this product and other specific software components (name and version) including databases, operating systems, tools, libraries, and integrated commercial components. Identify the data items or messages coming into the system and going out and describe the purpose of each. Describe the services needed and the nature of communications. Refer to documents that describe detailed application programming interface protocols. Identify data that will be shared across software components.
If the data sharing mechanism must be implemented in a specific way (for example, use of a global data area in a multitasking operating system), specify this as an implementation constraint. }
\acrshort{tbd} The software requires a connection to an external \gls{database}. This stores all the information regarding the projects and their bugs. Using this software interface requires the use of queries as well as read and write access to the \gls{database}. The \gls{database} is able to be accessed concurrently by multiple users. Using a robust \acrshort{dbms} ensures that such a \gls{database} can be implemented.
\\ \\
Another software interface used is the \acrshort{tbd} framework. Libraries from this framework are used to implement the \acrshort{gui} and to query, read and write to the \gls{database}. The software is implemented in such a way that all communication between the \gls{database} and the framework uses an interface. This allows the code to be refactored easily should the choice of \gls{database} change in the future. Using the \acrshort{mvc} pattern helps here, as the interactions with the \gls{database} can be compartmentalized into the model component.
\subsection{Communication Interfaces}
\comment{ Describe the requirements associated with any communication functions required by this product, including e-mail, web browser, network server communication protocols, electronic forms, and so on. Define any pertinent message formatting. Identify any communication standards that will be used, such as FTP or HTTP. Specify any communication security or encryption issues, data transfer rates, and synchronization mechanisms. }
Email (\acrshort{tbd}) is the communication interface used to send notifications for projects. This includes letting the user know when new bugs are contributed or if the status of a bug changes. The user can edit the thresholds for certain notifications in the settings \acrshort{gui}.
\newpage
\section{System Features}\label{systemFeatures}
\comment{ E.g. Description and Priority: <Provide a short description of the feature and indicate whether it is of High, Medium or Low priority. You could also include specific priority component ratings, such as benefit, penalty, cost, and risk (each rated on a relative scale from low of 1 to high of 9).> Stimulus/Response Sequences: <List the sequences of user actions and system responses that stimulate the behaviour defined for this feature. These will correspond to the dialog elements associated with use cases.> Functional Requirements: <Itemize the detailed functional requirements associated with this feature. These are the software capabilities that must be present in order for the user to carry out the services provided by the feature, or to execute the use case. Include how the product should respond to anticipated error conditions or invalid inputs. Requirements should be concise, complete, unambiguous, verifiable, and necessary. Use \acrshort{tbd} as a placeholder to indicate when necessary information is not yet available.> <Each requirement should be uniquely identified with a sequence number or a meaningful tag of some kind.> }
\subsection{Bugs}
A bug represents a software error. It is a brief description of a problem accompanied by a state and a priority.
\\ \\
Stimulus/Response Sequences:
\begin{itemize}
\item create a bug with a priority and state
\item change the state of a bug
\item record the bug contributor, time, etc.
\end{itemize}
Functional Requirements:
\begin{itemize}
\item Each bug has one state and one priority
\item Each bug is unique
\item Bugs store information about state changes
\item Bugs have a contributor
\end{itemize}
States for software bugs include:
\begin{itemize}
\item Active
\item Test
\item Verified
\item Closed
\item Reopened
\end{itemize}
Priorities for software bugs include:
\begin{itemize}
\item Catastrophic
\item Impaired Functionality
\item Failure of non-critical systems
\item Very Minor
\end{itemize}
\parencite{ibm20}
\subsection{Project}
A project represents a code base that a user intends to monitor bugs on. Projects can be added, removed or edited on the \gls{database}.
\\ \\
Stimulus/Response Sequences:
\begin{itemize}
\item create a project
\item add/remove collaborators
\item add bugs to the project
\item view the history of bugs
\item view project information (collaborators/author/date initiated)
\end{itemize}
Functional Requirements:
\begin{itemize}
\item Collaborators/Author can edit the project
\item Author can add/remove collaborators on a project
\item Author can edit details of the project
\item Bugs can be added/removed from a project
\end{itemize}
\subsection{Sign In}
This feature allows the user to sign in and access their projects. This includes Google authentication as well as the ability for a user to register their own account and sign in. This feature requires \gls{encryption}, such that the user's sensitive information, such as emails and passwords, is not leaked.
\\ \\
Stimulus/Response Sequences:
\begin{itemize}
\item A user can enter a username and password
\item Google Authentication
\item Register an account
\item A user can sign in
\end{itemize}
Functional Requirements:
\begin{itemize}
\item No duplicate accounts
\item Strong passwords
\item Store the account information securely
\item Account is linked to its own and collaboration projects
\end{itemize}
\begin{figure}[H]
\caption{ Sign In Sequence Diagram }
\centering
\includegraphics[width=\textwidth]{sequence-diagram.png}
\end{figure}
\newpage
\section{Other Nonfunctional Requirements}
\subsection{Performance Requirements}
\comment{ If there are performance requirements for the product under various circumstances, state them here and explain their rationale, to help developers understand the intent and make suitable design choices. Specify the timing relationships for real-time systems. Make such requirements as specific as possible. You may need to state performance requirements for individual functional requirements or features. }
The \gls{database} must be able to handle \gls{con}, since there will be multiple users attempting to access the information at the same time. Modern \acrshort{rdbms} like \gls{mysql} or \gls{postgresql} handle this. The rationale behind this is to allow multiple users to access the software at once. These users need to obtain the same results if performing the same actions.
\subsection{Safety Requirements}
\comment{ Specify those requirements that are concerned with possible loss, damage, or harm that could result from use of the product. Define any safeguards or actions that must be taken, as well as actions that must be prevented. Refer to any external policies or regulations that state safety issues that affect the product's design or use. Define any safety certifications that must be satisfied. }
Secure log-in authentication is a safety requirement for this software. The software will have to meet the safety certifications Google requires to include Google Authentication.
This requires the software to meet the \gls{totp} algorithm specifications \parencite{totp11}. This can be implemented using \acrfull{js} frameworks and is a requirement for this software. Users find software with Google Authentication more trustworthy and reputable.
\subsection{Security Requirements}
\comment{ Specify any requirements regarding security or privacy issues surrounding use of the product or protection of the data used or created by the product. Define any user identity authentication requirements. Refer to any external policies or regulations containing security issues that affect the product. Define any security or privacy certifications that must be satisfied. }
End-to-end data \gls{encryption} is required for users' private projects. If a user sets a project's scope to private, it means that for whatever reason they do not wish to make the contents of this project publicly available. It is in the best interest of this software that all the information regarding private projects remains encrypted. It will require a \gls{key} only available to the author of the project to decrypt the project's contents. This means that, should the private projects on the \gls{database} leak, they will remain encrypted in a non-human-readable format.
\\ \\
\Gls{encryption} of user log-in information. It is important that the user's sensitive information, such as their passwords, is stored on the \gls{database} in an encrypted form. The authentication service can apply a \gls{hash} function \parencite{hash20}. This ensures that the password does not have to be stored as a \gls{string} on the \gls{database}. Instead it can be stored as a \gls{hash}. Should the sensitive user information on the \gls{database} be released, the passwords will still remain encrypted.
\\ \\
The \gls{database} must be secure from \acrshort{sql} Injection and \acrshort{xss}. To prevent \gls{database} leaks from happening in the first place, the program is required to be secure. For example, to prevent \acrshort{sql} injection any user input that interacts with the \gls{database} must be sanitized. This involves checking for hidden \acrshort{sql} code within user input, and sanitizing the input before allowing it to be entered into the \gls{database}.
\subsection{Software Quality Attributes}
\comment{ Specify any additional quality characteristics for the product that will be important to either the customer or the developer. Some to consider are adaptability, availability, correctness, flexibility, interoperability, maintainability, portability, reliability, reusability, robustness, testability, and usability. Write these to be specific, quantitative and verifiable where possible. At the least, clarify the relative preferences for various attributes, such as ease of use over ease of learning. }
The software demonstrates \gls{mod} through the use of the \acrshort{mvc} design pattern \parencite{designpatterns97}. This design pattern is an excellent example of \gls{mod} in a code base. It uses abstraction to separate the software into three distinct modules. The \acrfull{mvc} modules make future development of this software easier, even for new developers, so long as they are familiar with this common design pattern.
\\ \\
\Gls{test} is perhaps one of the most important aspects of developing reliable software. Through the use of \acrshort{tdd}, this software creates its own test suite during development. This test suite helps to demonstrate the functionality of the software.
It makes integration testing new features for unintended side effects straightforward, since the test suite is already written. Using \acrshort{tdd} is one of the most effective ways to demonstrate that the software meets its own requirements.
\\ \\
\Gls{main} is a quality attribute that the software inherits through \gls{mod} and \gls{test}. Focusing on \gls{mod} by using \gls{mvc} means that each module has low coupling. Using \gls{tdd}, the test is written first, then the code to pass it. This means that extra functionality or dead code will not be prevalent in the code base. This, in combination with the \gls{mod}, will ensure high cohesion.
\subsection{Business Rules}
\comment{ List any principles about the product, such as which individuals or roles can perform which functions under specific circumstances. These are not functional requirements in themselves, but they may imply certain functional requirements to enforce the rules. }
The individuals who can perform specific functions are explained in detail in a previous section, \nameref{userClass}. To reiterate the business rules for this software, the reasoning behind them is explained below.
\\ \\
An \emph{Author} can delete and rename projects and bugs. This rule applies only to the projects they have created. This privilege is extended to the user such that they can manage their own projects. Collaborators do not have these permissions because an author is the only actor that should be able to decide whether or not a project is needed.
\\ \\
\emph{Authors} and \emph{Collaborators} can view private projects, add bugs and change bug statuses. Some projects may have more than one developer. The author is able to add one or more collaborators to their own project. This allows a team to track bugs for a project. This functionality is not extended to the audience, because an author may not welcome the input of random contributors.
\\ \\
Lastly, the \emph{Audience} cannot edit or post bugs without the relevant permissions. They still have the ability to view the history of public projects. This adds a community aspect to the software. Users can see if other contributors have solved the same or similar bugs to those they are currently experiencing.
\newpage
\section{Other Requirements}
\comment{ Define any other requirements not covered elsewhere in the \acrshort{srs}. This might include database requirements, internationalization requirements, legal requirements, reuse objectives for the project, and so on. Add any new sections that are pertinent to the project. }
\subsection{Glossary}\label{glossary}
\comment{ Define all the terms necessary to properly interpret the \acrshort{srs}, including acronyms and abbreviations. You may wish to build a separate glossary that spans multiple projects or the entire organization, and just include terms specific to a single project in each \acrshort{srs}. }
\printglossary[type=\acronymtype]
\printglossary
% \subsection{Analysis Models}
\comment{ Optionally, include any pertinent analysis models, such as data flow diagrams, class diagrams, state-transition diagrams, or entity-relationship diagrams. }
\subsection{To Be Determined List}
\comment{ Collect a numbered list of the \acrshort{tbd} (to be determined) references in the \acrshort{srs} so they can be tracked for closure. }
TBD (To Be Determined): 6, 7, 9, 10
\end{document}
%% LyX 2.3.1-1 created this file. For more info, see http://www.lyx.org/. %% Do not edit unless you really know what you are doing. \documentclass[11pt]{article} \usepackage[LGR,T1]{fontenc} \usepackage[latin9]{inputenc} \usepackage{geometry} \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \usepackage{color} \usepackage{amsmath} \usepackage{amssymb} \usepackage{graphicx} \usepackage{pgfplots} \usepackage{setspace} \doublespacing \usepackage[unicode=true, bookmarks=false, breaklinks=false,pdfborder={0 0 1},backref=section,colorlinks=true] {hyperref} \hypersetup{pdftitle={Terra Money White Paper}, linkcolor=black,citecolor=blue,filecolor=magenta,urlcolor=blue} \makeatletter %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands. \DeclareRobustCommand{\greektext}{% \fontencoding{LGR}\selectfont\def\encodingdefault{LGR}} \DeclareRobustCommand{\textgreek}[1]{\leavevmode{\greektext #1}} \ProvideTextCommand{\~}{LGR}[1]{\char126#1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands. \usepackage{color,titling,titlesec} \usepackage{eurosym} \usepackage{harvard}\usepackage{amsfonts} \usepackage{graphicx,pdflscape} \usepackage{chicago}\setcounter{MaxMatrixCols}{30} \usepackage[bottom]{footmisc} % This shrinks the space before and after display formulas \usepackage{etoolbox} \apptocmd\normalsize{% \abovedisplayskip=5pt %\abovedisplayshortskip=6pt % plus 3pt \belowdisplayskip=6pt %\belowdisplayshortskip=7pt plus 3pt }{}{} % slim white space before title, section/subsection headers \setlength{\droptitle}{-.75in} \titlespacing*{\section}{0pt}{.2cm}{0pt} \titlespacing*{\subsection}{0pt}{.2cm}{0pt} \makeatother \begin{document} \title{Terra Money:\\ Stability and Adoption} \author{Evan Kereiakes, Do Kwon, Marco Di Maggio, Nicholas Platias} \date{February 2019} \maketitle \begin{center} {\large{}\vspace{-1.5cm} }{\large\par} \par\end{center} \begin{abstract} \begin{singlespace} While many see the benefits of a price-stable cryptocurrency that combines the best of both fiat and Bitcoin, not many have a clear plan to get such a currency adopted. Since the value of a currency as medium of exchange is mainly driven by its network effects, a successful new digital currency needs to maximize adoption in order to become useful. We propose a cryptocurrency, Terra, which is both price-stable and growth-driven. It achieves price-stability via an elastic money supply enabled by countercyclical mining incentives and transaction fees. It also uses seigniorage created by its minting operations as transaction stimulus, thereby facilitating adoption. There is demand for a decentralized, price-stable money protocol in both fiat and blockchain economies. If such a protocol succeeds, then it will have a significant impact as the best use case for cryptocurrencies. \end{singlespace} \end{abstract} \thispagestyle{empty} \newpage\setcounter{page}{1} \section{Introduction} The price-volatility of cryptocurrencies is a well-studied problem by both academics and market observers (see for instance, Liu and Tsyvinski, 2018, Makarov and Schoar, 2018). Most cryptocurrencies, including Bitcoin, have a predetermined issuance schedule that, together with a strong speculative demand, contributes to wild fluctuations in price. Bitcoin\textquoteright s extreme price volatility is a major roadblock towards its adoption as a medium of exchange or store of value. 
Intuitively, nobody wants to pay with a currency that has the potential to double in value in a few days, or wants to be paid in the currency if its value can significantly decline before the transaction is settled. The problems are aggravated when the transaction requires more time, e.g. for deferred payments such as mortgages or employment contracts, as volatility would severely disadvantage one side of the contract, making the usage of existing digital currencies in these settings prohibitively expensive. At the core of how the Terra Protocol solves these issues is the idea that a cryptocurrency with an elastic monetary policy would stabilize its price, retaining all the censorship resistance of Bitcoin, and making it viable for use in everyday transactions. However, price-stability is not sufficient to get a new currency widely adopted. Currencies are inherently characterized by strong network effects: a customer is unlikely to switch over to a new currency unless a critical mass of merchants are ready to accept it, but at the same time, merchants have no reason to invest resources and educate staff to accept a new currency unless there is significant customer demand for it. For this reason, Bitcoin\textquoteright s adoption in the payments space has been limited to small businesses whose owners are personally invested in cryptocurrencies. Our belief is that while an elastic monetary policy is the solution to the stability problem, an efficient fiscal policy can drive adoption. Then, the Terra Protocol also offers strong incentives for users to join the network with an efficient fiscal spending regime, managed by a Treasury, where multiple stimulus programs compete for financing. That is, proposals from community participants will be vetted by the rest of the ecosystem and, when approved, they will be financed with the objective to increase adoption and expand the potential use cases. The Terra Protocol with its balance between fostering stability and adoption represents a meaningful complement to fiat currencies as means of payment, and store of value. The rest of the paper is organized as follows. We first discuss the protocol and how stability is achieved and maintained, through the calibration of miners\textquoteright{} demand and the use of the native mining Luna token. We then dig deeper in how countercyclical incentives and fees are adopted to smooth fluctuations. Second, we discuss how Terra\textquoteright s fiscal policy can be used as an efficient stimulus to drive adoption. \section{Multi-fiat peg monetary policy } A stable-coin mechanism must answer three key questions: \begin{itemize} \item \textbf{How is price-stability defined?} Stability is a relative concept; which asset should a stable-coin be pegged to in order to appeal to the broadest possible audience? \item \textbf{How is price-stability measured?} Coin price is exogenous to the Terra blockchain, and an efficient, corruption-resistant price feed is necessary for the system to function properly. \item \textbf{How is price-stability achieved?} When coin price has deviated from the target, the system needs a way to apply pressures to the market to bring price back to the target. \end{itemize} This section will specify Terra\textquoteright s answers to the above questions in detail. \subsection{Defining stability against regional fiat currencies } The existential objective of a stable-coin is to retain its purchasing power. 
Given that most goods and services are consumed domestically, it is important to create crypto-currencies that track the value of local fiat currencies. Though the US Dollar dominates international trade and forex operations, to the average consumer the dollar exhibits unacceptable volatility against her chosen unit of account. Recognizing strong regionalities in money, Terra aims to be a family of cryptocurrencies that are each pegged to the world's major currencies. Close to genesis, the protocol will issue Terra currencies pegged to USD, EUR, CNY, JPY, GBP, KRW, and the IMF SDR. Over time, more currencies will be added to the list by user voting. TerraSDR will be the flagship currency of this family, given that it exhibits the lowest volatility against any one fiat currency (Kereiakes, 2018). TerraSDR will be the currency in which transaction fees, miner rewards and stimulus grants are denominated. It is important, however, for Terra currencies to have access to shared liquidity. For this reason, the system supports atomic swaps among Terra currencies at their market exchange rates. A user can swap TerraKRW for TerraUSD instantly at the effective KRW/USD exchange rate. This allows all Terra currencies to share liquidity and macroeconomic fluctuations; a fall in demand for one currency can quickly be absorbed by the others. We can therefore reason about the stability of Terra currencies as a group; we will be referring to Terra loosely as a single currency for the remainder of this paper. As Terra's ecosystem adds more currencies, its atomic swap functionality can be an instant solution to cross-border transactions, international trade settlements and usurious capital controls.
\subsection{Measuring stability with miner oracles }
Since the price of Terra currencies in secondary markets is exogenous to the blockchain, the system must rely on a decentralized price oracle to estimate the true exchange rate. We define the mechanism for the price oracle as the following:
\begin{itemize}
\item For any Terra sub-currency in the set of currencies C = \{TerraKRW, TerraUSD, TerraSDR, ...\}, miners submit a vote for what they believe to be the current exchange rate against the target fiat asset.
\item Every $n$ blocks the votes are tallied by taking the weighted medians as the true rates.
\item Some amount of Terra is rewarded to those who voted within 1 standard deviation of the elected median. Those who voted outside may be punished via slashing of their stakes. The ratio of those that are punished and rewarded may be calibrated by the system every vote to ensure that a sufficiently large portion of the miners vote.
\end{itemize}
Several issues have been raised in implementing decentralized oracles, but chief among them is the possibility for voters to profit by coordinating on a false price vote. Limiting the vote to a specific subset of users with a strong vested interest in the system, the miners, can vastly decrease the odds of such coordination. A successful coordination event on the price oracle would result in a much higher loss in the value of the miner stakes than any potential gains, as Luna stakes are time-bound to the system. The oracle can also play a role in adding and deprecating Terra currencies. The protocol may start supporting a new Terra currency when oracle votes for it satisfy a submission threshold. Similarly, the failure to receive a sufficient number of oracle votes for several periods could trigger the deprecation of a Terra currency.
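To make the tally concrete, the sketch below illustrates the mechanism described above: votes are weighted by Luna stake, the weighted median is elected as the true rate, and voters within one standard deviation of the elected median are rewarded while the rest may be slashed. This is an illustration only; the function names are ours, and the use of an unweighted standard deviation (and of Python) is a simplification rather than part of the protocol specification.
\begin{verbatim}
from statistics import pstdev

def weighted_median(votes):
    """votes: list of (price, luna_stake) pairs."""
    total = sum(stake for _, stake in votes)
    running = 0.0
    for price, stake in sorted(votes):
        running += stake
        if running >= total / 2:
            return price

def tally(votes):
    elected = weighted_median(votes)
    sigma = pstdev([price for price, _ in votes])  # simplification: unweighted
    rewarded = [v for v in votes if abs(v[0] - elected) <= sigma]
    slashed = [v for v in votes if abs(v[0] - elected) > sigma]
    return elected, rewarded, slashed

# Example: three miners vote on the TerraKRW exchange rate.
rate, rewarded, slashed = tally([(1.00, 40.0), (1.02, 35.0), (0.80, 25.0)])
\end{verbatim}
In the protocol itself the reward and slashing ratios would additionally be calibrated at every vote, as noted above.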
\subsection{Achieving stability through countercyclical mining } Once the system has detected that the price of a Terra currency has deviated from its peg, it must apply pressures to normalize the price. Like any other market, the Terra money market follows simple rules of supply and demand for a pegged currency. That is: \begin{itemize} \item Contracting money supply, all conditions held equal, will result in higher relative price levels. That is, when price levels are falling below the target, reducing money supply sufficiently will return price levels to normalcy. \item Expanding money supply, all conditions held equal, will result in lower relative price levels. That is, when price levels are rising above the target, increasing money supply sufficiently will return price levels to normalcy. \end{itemize} Of course, contracting the supply of money isn\textquoteright t free; like any other asset, money needs to be bought from the market. Central banks and governments shoulder contractionary costs for pegged fiat systems through a variety of mechanisms including intervention, the issuance of bonds and short-term instruments thus incurring costs of interest, and hiking of money market rates and reserve ratio requirements thus losing revenue. Put in a different way, central banks and governments absorb the volatility of the pegged currencies they issue. Analogously, Terra miners absorb volatility in Terra supply. \begin{itemize} \item \textbf{In the short term, miners absorb Terra contraction costs} through mining power dilution. During a contraction, the system mints and auctions more mining power to buy back and burn Terra. This effectively contracts the supply of Terra until its price has returned to the peg, and temporarily results in mining power dilution. \item \textbf{In the mid to long term, miners are compensated with increased mining rewards}. First, the system continues to buy back mining power until a fixed target supply is reached, thereby creating long-run dependability in available mining power. Second, the system increases mining rewards, which will be explained in more detail in a later section. \end{itemize} In summary, miners bear the costs of Terra volatility in the short term, while being compensated for it in the long-term. Compared to ordinary users, miners have a long-term vested interest in the stability of the system, with invested infrastructure, trained staff and business models with high switching cost. The remainder of this section will discuss how the system forwards short-term volatility and creates long-term incentives for Terra miners. \subsection{Miners absorb short-term Terra volatility } The Terra Protocol runs on a Proof of Stake (PoS) blockchain, where miners need to stake a native cryptocurrency Luna to mine Terra transactions. At every block period, the protocol elects among the set of staked miners a block producer, which is entrusted with the work required to produce the next block by aggregating transactions, achieving consensus among miners, and ensuring that messages are distributed properly in a short timeframe with high fault tolerance. The block producer election is weighted by the size of the active miner\textquoteright s Luna stake. Therefore, \textbf{Luna represents mining power in the Terra network.} Similar to how a Bitcoin miner\textquoteright s hash power represents a pro-rata odds of generating Bitcoin blocks, the Luna stake represents pro-rata odds of generating Terra blocks. 
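As a rough sketch of the pro-rata relationship just described (illustrative only: the election procedure and its source of randomness are consensus-level details that this paper does not specify, and the miner names and stake figures below are made up), a miner's probability of producing the next block is proportional to its share of staked Luna:
\begin{verbatim}
import random

def elect_block_producer(stakes, rng=random):
    """stakes: dict mapping miner id -> staked Luna; odds proportional to stake."""
    miners = list(stakes)
    weights = [stakes[m] for m in miners]
    return rng.choices(miners, weights=weights, k=1)[0]

# A miner holding 20% of staked Luna wins roughly 20% of blocks over time.
producer = elect_block_producer({"miner_a": 200.0,
                                 "miner_b": 500.0,
                                 "miner_c": 300.0})
\end{verbatim}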
Luna also serves as the most immediate defense against Terra price fluctuations. The system uses Luna to make the price for Terra by agreeing to be counter-party to anyone looking to swap Terra and Luna at Terra's target exchange rate. More concretely:
\begin{itemize}
\item When TerraSDR's price < 1 SDR, users and arbitragers can send 1 TerraSDR to the system and receive 1 SDR's worth of Luna.
\item When TerraSDR's price > 1 SDR, users and arbitragers can send 1 SDR's worth of Luna to the system and receive 1 TerraSDR.
\end{itemize}
The system's willingness to respect the target exchange rate irrespective of market conditions keeps the market exchange rate of Terra at a tight band around the target exchange rate. An arbitrager can benefit when 1 TerraSDR = 0.9 SDR by trading TerraSDR for 1 SDR's worth of Luna from the system, as opposed to 0.9 SDR's worth of assets she could get from the open market. Similarly, she can also benefit when 1 TerraSDR = 1.1 SDR by trading in 1 SDR's worth of Luna to the system to get 1.1 SDR's worth of TerraSDR, once again beating the price of the open market. The system finances Terra price making via Luna:
\begin{itemize}
\item To buy 1 TerraSDR, the protocol mints and sells Luna worth 1 SDR
\item By selling 1 TerraSDR, the protocol earns Luna worth 1 SDR
\end{itemize}
As Luna is minted to match Terra offers, volatility is moved from Terra price to Luna supply. If unmitigated, this Luna dilution presents a problem for miners; their Luna stakes are worth a smaller portion of total available mining power post-contraction. The system burns a portion of the Luna it has earned during expansions until Luna supply has reached its 1 billion equilibrium issuance. Therefore, Luna can have steady demand as a token with pro-rata rights to Terra mining over the long term. The next section discusses how the system offers countercyclical mining incentives to keep the market for mining and demand for Luna long-term stable through volatile macroeconomic cycles.
\subsection{Countercyclical Mining Rewards }
Our objective is to counteract fluctuations in the value of mining Terra by calibrating mining rewards to be countercyclical. The main intuition behind a countercyclical policy is that it attempts to counteract economic cycles by increasing mining rewards during recessions and decreasing mining rewards during booms. The protocol has two levers at its disposal to calibrate mining rewards: transaction fees, and the proportion of seigniorage that gets allocated to miners.
\begin{itemize}
\item Transaction fees: The protocol levies a small fee from every Terra transaction to reward miners. Fees default to 0.1\% but may vary over time to smooth out mining rewards. If mining rewards are declining, an increase in fees can reverse that trend. Conversely, high mining rewards give the protocol leeway to bring fees down. Fees are capped at 2\%, so they are restricted to a range that does not exceed the fees paid to traditional payment processors.
\item Seigniorage: Users can mint Terra by paying the system Luna. This Luna earned by the system is seigniorage, the value of newly minted currency minus the cost of issuance. The system burns a portion of seigniorage, which makes mining power scarcer and reduces mining competition. The remaining portion of seigniorage goes to the Treasury to finance fiscal stimulus. The system can calibrate the allocation of seigniorage between those two destinations to impact mining reward profiles.
\end{itemize} We use two key macroeconomic indicators as inputs for controlling mining rewards: money supply and transaction volume. Those two indicators are important signals for the performance of the economy. Looking at the equation for mining rewards: when the economy underperforms relative to recent history (money supply and transaction volume have decreased) a higher proportion of seigniorage is allocated to mining rewards, and transaction fees increase; conversely, when the economy outperforms recent history (money supply and transaction volume have increased) a lower proportion of seigniorage is allocated to mining rewards, and transaction fees decrease. Phrasing the above more formally, both the proportion of seigniorage that gets allocated to mining rewards $w_t$ and the adjustment in transaction fees $f_t$ are controlled over time as follows: \[ w_{t+1}=w_{t}+\beta\left(\frac{M_{t}}{M_{t}^{*}}-1\right)+\gamma\left(\frac{TV_{t}}{TV_{t}^{*}}-1\right) \] \[ f_{t+1}=f_{t}+\kappa\left(\frac{M_{t}}{M_{t}^{*}}-1\right)+\lambda\left(\frac{TV_{t}}{TV_{t}^{*}}-1\right) \] In the above $M_{t}$ is Terra money supply at time $t$, $M_{t}^{*}$ is the historical moving average of money supply over the previous quarter, $TV_{t}$ is Terra transaction volume at time $t$ and $TV_{t}^{*}$ is correspondingly the historical moving average of transaction volume over the previous quarter. The parameters \textgreek{b}, \textgreek{g}, \textgreek{k} and \textgreek{l} are all negative real numbers in the range {[}-1, 0) and will be calibrated to produce responses that are gradual but effective. Indicative values that have worked well in our simulations are between -0.5 and -1 for \textgreek{b}, \textgreek{g} and between -0.005 and -0.01 for \textgreek{k}, \textgreek{l} respectively. Both $w_t$ and $f_t$ are restricted within the range imposed by the protocol (between 10\% and 90\% for $w_t$, and between 0.1\% and 2\% for $f_t$). To see how this might work in practice: say that money supply is 10\% higher than quarterly average and transaction volume is 20\% higher than quarterly average. Let \textgreek{b}, \textgreek{g} be -0.5 and \textgreek{k}, \textgreek{l} be -0.01 respectively. The seigniorage allocation weight to mining rewards would decrease by 15\% and transaction fees would decrease by 0.3\% (both on an absolute basis). These are reasonable adjustments that allocate proportionally more capital to the Treasury and ease the fee burden on users in response to strong performance of the economy. \begin{center} \includegraphics[scale=0.42]{graph_tvgap} \par\end{center} Alternatively, as the graph shows, when the gap widens, i.e. as the economy shrinks, the fees will increase from the example starting point of 0.1\% to the maximum of 2\% that will still make Terra more convenient than Visa and Mastercard. The rule we have outlined for making mining rewards countercyclical is simple, intuitive and easily programmable. It takes inspiration from Taylor\textquoteright s Rule (Taylor, 1993), utilized by monetary authorities banks to help frame the level of nominal interest rates as a function of inflation and output. Similarly, exactly as a central bank, the protocol observes the health of the economy, in our case the money supply and transaction volume, and adjusts its main levers to ensure the sustainability of the economy. \section{Terra Platform} Smart contracts have enormous potential, but their use cases are limited by the price volatility of their underlying currency. 
The canonical function of a smart contract is to hold an escrow of tokens to be distributed when some set of conditions is triggered. Such a scheme is quite simply a futures contract, where all involved parties are forced to speculate on the price movement of the funds held by the contract. Price volatility makes smart contracts unusable for most mainstream financial applications, as most users are used to value-determinate payouts in insurance, credit, mortgages, and payroll. The introduction of a stable dApp platform will allow smart contracts to mature into a useful infrastructure for mainstream businesses. Though most dApps today issue native tokens with custom token economics, in the vast majority of cases such tokens have limited use and fragment the overall user experience, as users today need to sell token A and buy tokens B and C to interact with dApps. Instead, the Terra Platform will be oriented to building financial applications that use Terra as their underlying currency. Terra Platform dApps will help to drive growth and stabilize Terra by diversifying its use cases. The protocol may therefore subsidize the growth of the more successful applications through its growth-driven fiscal policy, which we discuss in the next section.
\section{Growth-driven fiscal policy}
National governments use expansionary fiscal spending with the objective of stimulating growth. On balance, the hope of fiscal spending is that the economic activity instigated by the original spending results in a feedback loop that grows the economy more than the amount of money spent in the initial stimulus. This concept is captured by the spending multiplier \textemdash{} how many dollars of economic activity does one dollar of fiscal spending generate? The spending multiplier increases with the marginal propensity to consume, meaning that the effectiveness of the expansionary stimulus is directly related to how likely economic agents are to increase their spending. In a previous section, we discussed how Terra seigniorage is directed to both miner rewards and the Treasury. At this point, it is worth describing how exactly the Treasury implements Terra's fiscal spending policy, with its core mandate being to stimulate Terra's growth while ensuring its stability. In this manner, Terra achieves greater efficiency by returning seigniorage not allocated for stability back to its users. The Treasury's main focus is the allocation of resources derived from seigniorage to decentralized applications (dApps). To receive seigniorage from the Treasury, a dApp needs to register for consideration as an entity that operates on the Terra network. dApps are eligible for funding depending on their economic activity and use of funding. A dApp registers a wallet with the network that is used to track economic activity. Transactions that go through the wallet count towards the dApp's transaction volume. The funding procedure for a dApp works as follows:
\begin{itemize}
\item A dApp applies for an account with the Treasury; the application includes metadata such as the title, a URL leading to a detailed page regarding the use of funding, and the wallet address of the applicant, as well as auditing and governance procedures.
\item At regular voting intervals, Luna validators vote to accept or reject new dApp applications for Treasury accounts. The net number of votes (yes votes minus no votes) needs to exceed 1/3 of total available validator power for an application to be accepted.
\item Luna validators may only exercise control over which dApps can open accounts with the Treasury. The funding itself is determined programmatically for each funding period by a weight that is assigned to each dApp. This allows the Treasury to prioritize the dApps that merit the most funding.
\item At each voting session, Luna validators have the right to request that a dApp be blacklisted, for example because it behaves dishonestly or fails to account for its use of Treasury funds. Again, the net number of votes (yes votes minus no votes) needs to exceed 1/3 of total available validator power for the blacklist to be enforced. A blacklisted dApp loses access to its Treasury account and is no longer eligible for funding.
\end{itemize}
The motivation behind assigning funding weights to dApps is to maximize the impact of the stimulus by rewarding the dApps that are more likely to have a positive effect on the economy. The Treasury uses two criteria for allocating spending: (1) \textbf{robust economic activity} and (2) \textbf{efficient use of funding}. dApps with a strong track record of adoption receive support for their continued success, and dApps that have grown relative to their funding are rewarded with more seigniorage, as they have a successful track record of efficiently using their resources. Those two criteria are combined into a single weight which determines the relative funding that dApps receive from the aggregate funding pool. For instance, a dApp with a weight of 2 would receive twice the amount of funding of a dApp with a weight of 1. We lay out the funding weight equation, followed by a detailed explanation of all the parts. For a time period $t$, let $TV_{t}$ be a dApp's transaction volume and $F_{t}$ be the Treasury funding received. Then we determine the funding weight $w_{t}$ for the period as follows:
\[
w_{t}=\left(1-\lambda\right)TV_{t}^{*}+\lambda\frac{\Delta TV_{t}^{*}}{F_{t-1}^{*}}
\]
The superscript ${}^{*}$ denotes a moving average, so $TV_{t}^{*}$ is the moving average of transaction volume leading up to time period $t$, while $\Delta TV_{t}^{*}$ is a difference of moving averages of different lengths leading up to time period $t$. One might make the averaging window quarterly, for example. Finally, the funding weights among all dApps are scaled to sum to 1.
\begin{itemize}
\item \textbf{The first term} is proportional to $TV_{t}^{*}$, the average transaction volume generated by the dApp in the recent past. This is an indicator of the dApp's \textbf{economic activity}, or more simply the size of its micro-economy.
\item \textbf{The second term} is proportional to $\Delta TV_{t}^{*}/F_{t-1}^{*}$. The numerator describes the trend in transaction volume \textemdash{} it is the difference between a more and a less recent average. When positive, it means that the transaction volume is following an upward trajectory, and vice versa. The denominator is the average funding amount received by the dApp in the recent past, up to and including the previous period. So the second term describes how economic activity is changing relative to past funding. Overall, larger values of this ratio capture instances where the dApp is fast-growing for each dollar of funding it has received. This is in fact the spending multiplier of the funding program, a prime indicator of \textbf{funding efficiency}.
\item The parameter \textgreek{l} is used to determine the relative importance of economic activity and funding efficiency.
If it isset equal to 1/2 then the two terms would have equal contribution. By decreasing the value of \textgreek{l}, the protocol can favor more heavily dApps with larger economies. Conversely, by increasing the value of \textgreek{l} the protocol can favor dApps that are using funding with high efficiency, for example by growing fast with little funding, even if they are smaller in size. \end{itemize} The votes on registering and blacklisting a dApp serve to minimize the risk that the above system is gamed during its infancy. It is the responsibility of Luna validators to hold dApps accountable for dishonest behavior and blacklist them if necessary. As the economy grows and becomes more decentralized, the bar to register and blacklist an App can be adjusted. An important advantage of distributing funding in a programmatic way is that it is simpler, objective, transparent and streamlined compared to open-ended voting systems. In fact, compared to decentralized voting systems, it is more predictable, because the inputs used to compute the funding weights are transparent and slow moving. Furthermore, this system requires less trust in Luna validators, given that the only authority they are vested with is determining whether or not a dApp is honest and makes legitimate use of funding. Overall, the objective of Terra governance is simple: fund the organizations and proposals with the highest net impact on the economy. This will include dApps solving real problems for users, increasing Terra's adoption and as a result increasing the GDP of the Terra economy. \section{Conclusion} We have presented Terra, a stable digital currency that is designed to complement both existing fiat and cryptocurrencies as a way to transact and store value. The protocol adjusts the supply of Terra in response to changes in demand to keep its price stable. This is achieved using Luna, the mining token whose countercyclical rewards are designed to absorb volatility from changing economic cycles. Terra also achieves efficient adoption by returning seigniorage not invested in stability back to its users. Its transparent and democratic distribution mechanism gives dApps the power to attract and retain users by tapping into Terra's economic growth. If Bitcoin\textquoteright s contribution to cryptocurrency was immutability, and Ethereum expressivity, our value-add will be usability. The potential applications of Terra are immense. Immediately, we foresee Terra being used as a medium-of-exchange in online payments, allowing people to transact freely at a fraction of the fees charged by other payment methods. As the world starts to become more and more decentralized, we see Terra being used as a dApp platform where price-stable token economies are built on Terra. Terra is looking to become the first usable currency and stability platform on the blockchain, unlocking the power of decentralization for mainstream users, merchants, and developers. \noindent {\small{}\newpage}\textbf{\small{}References }{\small\par} \noindent {\small{}Liu, Yukun and Tsyvinski, Aleh, Risks and Returns of Cryptocurrency (August 2018). NBER Working Paper No. w24877. Available at https://ssrn.com/abstract=3226806. }{\small\par} \noindent {\small{}Makarov, Igor and Schoar, Antoinette, Trading and Arbitrage in Cryptocurrency Markets (April 30, 2018). Available at SSRN: https://ssrn.com/abstract=3171204. }{\small\par} \noindent {\small{}Kereiakes, Evan, Rationale for Including Multiple Fiat Currencies in Terra\textquoteright s Peg (November 2018). 
Available at https://medium.com/terra-money/rationale-for-including-multiple-fiat-currencies-in-terras-peg-1ea9eae9de2a }{\small\par}
\noindent {\small{}Taylor, John B. (1993). \textquotedbl Discretion versus Policy Rules in Practice.\textquotedbl{} Carnegie-Rochester Conference Series on Public Policy. 39: 195\textendash 214. }{\small\par}
\end{document}
Most cryptocurrencies, including Bitcoin, have a predetermined issuance schedule that, together with a strong speculative demand, contributes to wild fluctuations in price.
Bitcoin\textquoteright s extreme price volatility is a major roadblock towards its adoption as a medium of exchange or store of value.
Intuitively, nobody wants to pay with a currency that has the potential to double in value in a few days, or wants to be paid in the currency if its value can significantly decline before the transaction is settled.
The problems are aggravated when the transaction requires more time, e.g. for deferred payments such as mortgages or employment contracts, as volatility would severely disadvantage one side of the contract, making the use of existing digital currencies in these settings prohibitively expensive.
At the core of how the Terra Protocol solves these issues is the idea that a cryptocurrency with an elastic monetary policy would stabilize its price while retaining all the censorship resistance of Bitcoin, making it viable for use in everyday transactions.
However, price-stability is not sufficient to get a new currency widely adopted.
Currencies are inherently characterized by strong network effects: a customer is unlikely to switch over to a new currency unless a critical mass of merchants are ready to accept it, but at the same time, merchants have no reason to invest resources and educate staff to accept a new currency unless there is significant customer demand for it.
For this reason, Bitcoin\textquoteright s adoption in the payments space has been limited to small businesses whose owners are personally invested in cryptocurrencies.
Our belief is that while an elastic monetary policy is the solution to the stability problem, an efficient fiscal policy can drive adoption.
To that end, the Terra Protocol also offers strong incentives for users to join the network through an efficient fiscal spending regime, managed by a Treasury, where multiple stimulus programs compete for financing.
That is, proposals from community participants will be vetted by the rest of the ecosystem and, when approved, they will be financed with the objective of increasing adoption and expanding the potential use cases.
The Terra Protocol, with its balance between fostering stability and adoption, represents a meaningful complement to fiat currencies as a means of payment and store of value.
The rest of the paper is organized as follows.
We first discuss the protocol and how stability is achieved and maintained, through the calibration of miners\textquoteright{} demand and the use of the native mining Luna token.
We then dig deeper into how countercyclical incentives and fees are used to smooth fluctuations.
Second, we discuss how Terra\textquoteright s fiscal policy can be used as an efficient stimulus to drive adoption.
\section{Multi-fiat peg monetary policy }
A stable-coin mechanism must answer three key questions:
\begin{itemize}
\item \textbf{How is price-stability defined?} Stability is a relative concept; which asset should a stable-coin be pegged to in order to appeal to the broadest possible audience?
\item \textbf{How is price-stability measured?} Coin price is exogenous to the Terra blockchain, and an efficient, corruption-resistant price feed is necessary for the system to function properly.
\item \textbf{How is price-stability achieved?} When coin price has deviated from the target, the system needs a way to apply pressures to the market to bring price back to the target.
\end{itemize}
This section will specify Terra\textquoteright s answers to the above questions in detail.
\subsection{Defining stability against regional fiat currencies }
The existential objective of a stable-coin is to retain its purchasing power.
Given that most goods and services are consumed domestically, it is important to create crypto-currencies that track the value of local fiat currencies.
Though the US Dollar dominates international trade and forex operations, to the average consumer the dollar exhibits unacceptable volatility against her chosen unit of account.
Recognizing strong regionalities in money, Terra aims to be a family of cryptocurrencies, each pegged to one of the world's major currencies.
Close to genesis, the protocol will issue Terra currencies pegged to USD, EUR, CNY, JPY, GBP, KRW, and the IMF SDR.
Over time, more currencies will be added to the list by user voting.
TerraSDR will be the flagship currency of this family, given that it exhibits the lowest volatility against any one fiat currency (Kereiakes, 2018).
TerraSDR will also be the currency in which transaction fees, miner reward disbursements, and fiscal spending are denominated.
\subsection{Measuring stability with miner oracles }
Since the price of Terra currencies in secondary markets is exogenous to the blockchain, the system must rely on a decentralized price oracle to estimate the true exchange rate.
We define the mechanism for the price oracle as the following:
\begin{itemize}
\item For any Terra sub-currency in the set of currencies C = \{TerraKRW, TerraUSD, TerraSDR... \} miners submit a vote for what they believe to be the current exchange rate between the sub-currency and its target fiat asset.
To prevent front-running, voters submit a hash of the price value instead of the price itself.
\item Every $n$ blocks the vote is tallied by having voters submit a solution to the vote hash.
The weighted median of the votes is taken for both the target and observed exchange rates as the true rates.
\item Some amount of Terra is rewarded to those who voted within 1 standard deviation of the elected median.
Those who voted outside may be punished via slashing of their stakes.
The ratio of those that are punished and rewarded may be calibrated by the system every vote to ensure that a sufficiently large portion of the miners vote.
\end{itemize}
Several issues have been raised in implementing decentralized oracles, but chief among them is the possibility for voters to profit by coordinating on a false price vote.
Limiting the vote to a specific subset of users with a strong vested interest in the system, the miners, can vastly decrease the odds of such a coordination.
A successful coordination event on the price oracle would result in a much higher loss in the value of the miner stakes than any potential gains from the coordination itself.
\subsection{Achieving stability through countercyclical mining }
Once the system has detected that the price of a Terra currency has deviated from its peg, it must apply pressures to normalize the price.
Like any other market, the Terra money market follows simple rules of supply and demand for a pegged currency.
That is:
\begin{itemize}
\item Contracting money supply, all conditions held equal, will result in higher relative price levels.
That is, when price levels are falling below the target, reducing money supply sufficiently will return price levels to normalcy.
\item Expanding money supply, all conditions held equal, will result in lower relative price levels.
That is, when price levels are rising above the target, increasing money supply sufficiently will return price levels to normalcy. \end{itemize} Of course, contracting the supply of money isn\textquoteright t free; like any other asset, money needs to be bought from the market. Central banks and governments shoulder contractionary costs for pegged fiat systems through a variety of mechanisms including intervention, the issuance of bonds and short-term instruments thus incurring costs of interest, and hiking of money market rates and reserve ratio requirements thus losing revenue. Put in a different way, central banks and governments absorb the volatility of the pegged currencies they issue. Analogously, Terra miners absorb volatility in Terra supply. \begin{itemize} \item \textbf{In the short term, miners absorb Terra contraction costs} through mining power dilution. During a contraction, the system mints and auctions more mining power to buy back and burn Terra. This effectively contracts the supply of Terra until its price has returned to the peg, and temporarily results in mining power dilution. \item \textbf{In the mid to long term, miners are compensated with increased mining rewards}. First, the system continues to buy back mining power until a fixed target supply is reached, thereby creating long-run dependability in available mining power. Second, the system increases mining rewards, which will be explained in more detail in a later section. \end{itemize} In summary, miners bear the costs of Terra volatility in the short term, while being compensated for it in the long-term. Compared to ordinary users, miners have a long-term vested interest in the stability of the system, with invested infrastructure, trained staff and business models with high switching cost. The remainder of this section will discuss how the system forwards short-term volatility and creates long-term incentives for Terra miners. \subsection{Miners absorb short-term Terra volatility } The Terra Protocol runs on a Proof of Stake (PoS) blockchain, where miners need to stake a native cryptocurrency Luna to mine Terra transactions. At every block period, the protocol elects among the set of staked miners a block producer, which is entrusted with the work required to produce the next block by aggregating transactions, achieving consensus among miners, and ensuring that messages are distributed properly in a short timeframe with high fault tolerance. The block producer election is weighted by the size of the active miner\textquoteright s Luna stake. Therefore, \textbf{Luna represents mining power in the Terra network.} Similar to how a Bitcoin miner\textquoteright s hash power represents a pro-rata odds of generating Bitcoin blocks, the Luna stake represents pro-rata odds of generating Terra blocks. Luna also serves as the most immediate defense against Terra price fluctuations. The system uses Luna to make the price for Terra by agreeing to be counter-party to anyone looking to swap Terra and Luna at Terra's target exchange rate. More concretely: \begin{itemize} \item When TerraSDR's price < 1 SDR, users and arbitragers can send 1 TerraSDR to the system and receive 1 SDR's worth of Luna. \item When TerraSDR's price > 1 SDR, users and arbitragers can send 1 SDR's worth of Luna to the system and receive 1 TerraSDR. \end{itemize} The system's willingness to respect the target exchange rate irrespective of market conditions keeps the market exchange rate of Terra at a tight band around the target exchange rate. 
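To make the swap mechanics above concrete, the following minimal Python sketch spells out the exchange arithmetic behind the two swap directions.
It is an illustration only: the function and variable names are ours and are not part of the protocol specification.
\begin{verbatim}
# Illustrative sketch of the Terra<->Luna swap at the target exchange rate.
# All names are hypothetical; only the exchange arithmetic follows the text.

TARGET_SDR = 1.0  # target price of 1 TerraSDR, in SDR

def terra_to_luna(terra_in, luna_price_sdr):
    # User sends TerraSDR; the protocol burns it and mints Luna of equal SDR value.
    sdr_value = terra_in * TARGET_SDR   # redeemed at the target, not the market, price
    return sdr_value / luna_price_sdr   # newly minted Luna (dilutes mining power)

def luna_to_terra(luna_in, luna_price_sdr):
    # User sends Luna; the protocol mints TerraSDR of equal SDR value (seigniorage).
    sdr_value = luna_in * luna_price_sdr
    return sdr_value / TARGET_SDR       # newly minted TerraSDR

def arbitrage_gain_per_terra(market_price_sdr):
    # When TerraSDR trades below 1 SDR, buying it on the market and redeeming it
    # through the swap pays the gap; the burn contracts supply and lifts the price.
    return max(0.0, TARGET_SDR - market_price_sdr)
\end{verbatim}
The analogous trade in the opposite direction caps the price from above, completing the two-sided peg described above.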
The system finances Terra price making via Luna: \begin{itemize} \item To buy 1 TerraSDR, the protocol mints and sells Luna worth 1 SDR \item By selling 1 TerraSDR, the protocol earns Luna worth 1 SDR \end{itemize} As Luna is minted to match Terra offers, volatility is moved from Terra price to Luna supply. If unmitigated, this Luna dilution presents a problem for miners; their Luna stakes are worth a smaller portion of total available mining power post-contraction. Therefore, the system burns a portion of the Luna it has earned during expansions until Luna supply has reached its 1 billion equilibrium issuance. Therefore, Luna can have steady demand as a token with pro-rata rights to Terra mining over the long term. The next section discusses how the system offers countercyclical mining incentives to keep the market for mining and demand for Luna long-term stable through volatile macroeconomic cycles. \subsection{Countercyclical Mining Rewards } Our objective is to counteract fluctuations in the value of mining Terra by calibrating mining rewards to be countercyclical. The main intuition behind a countercyclical policy is that it attempts to counteract economic cycles by increasing mining rewards during recessions and decreasing mining rewards during booms. The protocol has two levers at its disposal to calibrate mining rewards: transaction fees, and the proportion of seigniorage that gets allocated to miners. \begin{itemize} \item Transaction fees: The protocol levies a small fee from every Terra transaction to reward miners. Fees default to 0.1\% but may vary over time to smooth out mining rewards. If mining rewards are declining, an increase in fees can reverse that trend. Conversely, high mining rewards give the protocol leeway to bring fees down. Fees are capped at 2\%, so they are restricted to a range that does not exceed the fees paid to traditional payment processors. \item Seigniorage: Users can mint Terra by paying the system Luna. This Luna earned by the system is seigniorage, the value of newly minted currency minus the cost of issuance. The system burns a portion of seigniorage, which makes mining power scarcer and reduces mining competition. The remaining portion of seigniorage goes to the Treasury to finance fiscal stimulus. The system can calibrate the allocation of seigniorage between those two destinations to impact mining reward profiles. \end{itemize} We use two key macroeconomic indicators as inputs for controlling mining rewards: money supply and transaction volume. Those two indicators are important signals for the performance of the economy. Looking at the equation for mining rewards: when the economy underperforms relative to recent history (money supply and transaction volume have decreased) a higher proportion of seigniorage is allocated to mining rewards, and transaction fees increase; conversely, when the economy outperforms recent history (money supply and transaction volume have increased) a lower proportion of seigniorage is allocated to mining rewards, and transaction fees decrease. 
Phrasing the above more formally, both the proportion of seigniorage that gets allocated to mining rewards $w_{t}$ and the adjustment in transaction fees $f_{t}$ are controlled over time as follows:
\[
w_{t+1}=w_{t}+\beta\left(\frac{M_{t}}{M_{t}^{*}}-1\right)+\gamma\left(\frac{TV_{t}}{TV_{t}^{*}}-1\right)
\]
\[
f_{t+1}=f_{t}+\kappa\left(\frac{M_{t}}{M_{t}^{*}}-1\right)+\lambda\left(\frac{TV_{t}}{TV_{t}^{*}}-1\right)
\]
In the above $M_{t}$ is Terra money supply at time $t$, $M_{t}^{*}$ is the historical moving average of money supply over the previous quarter, $TV_{t}$ is Terra transaction volume at time $t$ and $TV_{t}^{*}$ is correspondingly the historical moving average of transaction volume over the previous quarter.
The parameters \textgreek{b}, \textgreek{g}, \textgreek{k} and \textgreek{l} are all negative real numbers in the range $[-1, 0)$ and will be calibrated to produce responses that are gradual but effective.
Indicative values that have worked well in our simulations are between -0.5 and -1 for \textgreek{b}, \textgreek{g} and between -0.005 and -0.01 for \textgreek{k}, \textgreek{l} respectively.
Both $w_{t}$ and $f_{t}$ are restricted within the range imposed by the protocol (between 0\% and 90\% for $w_{t}$, and between 0.01\% and 2\% for $f_{t}$).
To see how this might work in practice: say that money supply is 10\% higher than quarterly average and transaction volume is 20\% higher than quarterly average.
Let \textgreek{b}, \textgreek{g} be -0.5 and \textgreek{k}, \textgreek{l} be -0.01 respectively.
The seigniorage allocation weight to mining rewards would decrease by 15\% and transaction fees would decrease by 0.3\% (both on an absolute basis).
These are reasonable adjustments that allocate proportionally more capital to the Treasury and ease the fee burden on users in response to strong performance of the economy.
\begin{center}
\includegraphics[scale=1.25]{graph_tvgap}
\par\end{center}
Alternatively, as the graph shows, when the gap widens, i.e. as the economy shrinks, the fees will increase from the example starting point of 0.1\% to the maximum of 2\%, which still makes Terra more convenient than Visa and Mastercard.
The rule we have outlined for making mining rewards countercyclical is simple, intuitive and easily programmable.
It takes inspiration from Taylor\textquoteright s Rule (Taylor, 1993), utilized by monetary authorities to help frame the level of nominal interest rates as a function of inflation and output.
Much like a central bank, the protocol observes the health of the economy, in our case the money supply and transaction volume, and adjusts its main levers to ensure the sustainability of the economy.
\section{Growth-driven fiscal policy}
National governments use expansionary fiscal spending with the objective of stimulating growth.
On balance, the hope of fiscal spending is that the economic activity instigated by the original spending results in a feedback loop that grows the economy more than the amount of money spent in the initial stimulus.
This concept is captured by the spending multiplier \textemdash{} how many dollars of economic activity does one dollar of fiscal spending generate?
The spending multiplier increases with the marginal propensity to consume, meaning that the effectiveness of the expansionary stimulus is directly related to how likely economic agents are to increase their spending.
In a previous section, we discussed how Terra seigniorage is directed to both miner rewards and the Treasury.
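Before describing the Treasury in more detail, it may help to restate how that split is determined.
The following Python fragment is purely illustrative: it uses the indicative parameter values and protocol ranges quoted in the previous section, and it assumes that fees are expressed as fractions (so 0.1\% corresponds to 0.001), a convention of ours rather than part of the specification.
\begin{verbatim}
# Illustrative sketch of the countercyclical updates for the seigniorage
# weight w_t and the transaction fee f_t from the previous section.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

def update_mining_levers(w, f, M, M_avg, TV, TV_avg,
                         beta=-0.5, gamma=-0.5, kappa=-0.01, lam=-0.01):
    # M, TV: current money supply and transaction volume;
    # M_avg, TV_avg: their moving averages over the previous quarter.
    money_gap = M / M_avg - 1.0       # positive when the economy is expanding
    volume_gap = TV / TV_avg - 1.0
    w_next = clip(w + beta * money_gap + gamma * volume_gap, 0.0, 0.90)
    f_next = clip(f + kappa * money_gap + lam * volume_gap, 0.0001, 0.02)
    return w_next, f_next

# Worked example from the text: supply 10% and volume 20% above the quarterly
# average lower w by 0.15 (absolute); the raw fee adjustment of -0.003 is then
# clipped to the protocol's 0.01% floor.
w_next, f_next = update_mining_levers(w=0.5, f=0.001,
                                      M=1.10, M_avg=1.0, TV=1.20, TV_avg=1.0)
\end{verbatim}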
At this point, it is worth describing how exactly the Treasury implements Terra's fiscal spending policy, its core mandate being to stimulate Terra's growth while ensuring its stability.
In this manner, Terra achieves greater efficiency by returning seigniorage not allocated for stability back to its users.
The Treasury's main focus is the allocation of resources derived from seigniorage to decentralized applications (dApps).
To receive seigniorage from the Treasury, a dApp needs to register for consideration as an entity that operates on the Terra network.
dApps are eligible for funding depending on their economic activity and use of funding.
A dApp registers a wallet with the network that is used to track economic activity.
Transactions that go through the wallet count towards the dApp's transaction volume.
The funding procedure for a dApp works as follows:
\begin{itemize}
\item A dApp applies for an account with the Treasury; the application includes metadata such as the title, a URL leading to a detailed page regarding the use of funding, the wallet address of the applicant, as well as auditing and governance procedures.
\item At regular voting intervals, Luna validators vote to accept or reject new dApp applications for Treasury accounts.
The net number of votes (yes votes minus no votes) needs to exceed 1/3 of total available validator power for an application to be accepted.
\item Luna validators may only exercise control over which dApps can open accounts with the Treasury.
The funding itself is determined programmatically for each funding period by a weight that is assigned to each dApp.
This allows the Treasury to prioritize dApps that earn the most funding.
\item At each voting session, Luna validators have the right to request that a dApp be blacklisted, for example because it behaves dishonestly or fails to account for its use of Treasury funds.
Again, the net number of votes (yes votes minus no votes) needs to exceed 1/3 of total available validator power for the blacklist to be enforced.
A blacklisted dApp loses access to its Treasury account and is no longer eligible for funding.
\end{itemize}
The motivation behind assigning funding weights to dApps is to maximize the impact of the stimulus on the economy by rewarding the dApps that are most likely to have a positive effect on it.
The Treasury uses two criteria for allocating spending: (1) \textbf{robust economic activity} and (2) \textbf{efficient use of funding}.
dApps with a strong track record of adoption receive support for their continued success, and dApps that have grown relative to their funding are rewarded with more seigniorage, as they have a successful track record of efficiently using their resources.
Those two criteria are combined into a single weight which determines the relative funding that dApps receive from the aggregate funding pool.
For instance, a dApp with a weight of 2 would receive twice the amount of funding of a dApp with a weight of 1.
We lay out the funding weight equation, followed by a detailed explanation of all the parts:
For a time period $t$, let $TV_{t}$ be a dApp's transaction volume and $F_{t}$ be the Treasury funding received.
Then we determine the funding weight $w_{t}$ for the period as follows:
\[
w_{t}=\left(1-\lambda\right)TV_{t}^{*}+\lambda\frac{\Delta TV_{t}^{*}}{F_{t-1}^{*}}
\]
The notation ${}^{*}$ denotes a moving average, so $TV_{t}^{*}$ is the moving average of transaction volume leading up to time period $t$, while $\Delta TV_{t}^{*}$ is a difference of moving averages of different lengths leading up to time period $t$.
One might make the averaging window quarterly for example.
The parameter \textgreek{l} is between 0 and 1 and determines the relative importance of the two terms being summed.
Finally, the funding weights among all dApps are scaled to sum to 1.
\begin{itemize}
\item \textbf{The first term} is proportional to $TV_{t}^{*}$, the average transaction volume generated by the dApp in the recent past.
This is an indicator of the dApp's \textbf{economic activity}, or more simply the size of its micro-economy.
\item \textbf{The second term} is proportional to $\Delta TV_{t}^{*}/F_{t-1}^{*}$.
The numerator describes the trend in transaction volume \textemdash{} it is the difference between a more and a less recent average.
When positive, it means that the transaction volume is following an upward trajectory and vice versa.
The denominator is the average funding amount received by the dApp in the recent past, up to and including the previous period.
So the second term describes how economic activity is changing relative to past funding.
Overall, larger values of this ratio capture instances where the dApp is fast-growing for each dollar of funding it has received.
This is in fact the spending multiplier of the funding program, a prime indicator of \textbf{funding efficiency}.
\item The parameter \textgreek{l} is used to determine the relative importance of economic activity and funding efficiency.
If it is set equal to 1/2 then the two terms have equal contribution.
By decreasing the value of \textgreek{l}, the protocol can more heavily favor dApps with larger economies.
Conversely, by increasing the value of \textgreek{l} the protocol can favor dApps that are using funding with high efficiency, for example by growing fast with little funding, even if they are smaller in size.
\end{itemize}
The votes on registering and blacklisting a dApp serve to minimize the risk that the above system is gamed during its infancy.
It is the responsibility of Luna validators to hold dApps accountable for dishonest behavior and blacklist them if necessary.
As the economy grows and becomes more decentralized, the bar to register and blacklist a dApp can be adjusted.
An important advantage of distributing funding in a programmatic way is that it is simpler, more objective, more transparent and more streamlined than open-ended voting systems.
In fact, compared to decentralized voting systems, it is more predictable, because the inputs used to compute the funding weights are transparent and slow moving.
Furthermore, this system requires less trust in Luna validators, given that the only authority they are vested with is determining whether or not a dApp is honest and makes legitimate use of funding.
Overall, the objective of Terra governance is simple: fund the organizations and proposals with the highest net impact on the economy.
This will include dApps solving real problems for users, increasing Terra's adoption and as a result increasing the GDP of the Terra economy.
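To make the funding-weight rule concrete, the sketch below shows how the weights for a single funding period could be computed and normalized.
It is illustrative only: the data layout, names, and input numbers are hypothetical and carry no particular units.
\begin{verbatim}
# Illustrative computation of per-dApp funding weights from the formula above.
# For each dApp: TV_avg is the moving average of its transaction volume,
# TV_trend is the difference of a shorter and a longer moving average
# (Delta TV*), and F_avg is the moving average of Treasury funding received
# up to and including the previous period.

def raw_weight(TV_avg, TV_trend, F_avg, lam=0.5):
    # (1 - lambda) * economic activity + lambda * funding efficiency
    return (1.0 - lam) * TV_avg + lam * (TV_trend / F_avg)

def funding_weights(dapps, lam=0.5):
    # Scale the raw weights so that they sum to 1 across all registered dApps.
    raw = {name: raw_weight(*stats, lam=lam) for name, stats in dapps.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Hypothetical inputs: (TV_avg, TV_trend, F_avg) for two registered dApps.
example = {"dapp_a": (1.2, 0.10, 0.05), "dapp_b": (0.4, 0.12, 0.03)}
weights = funding_weights(example, lam=0.5)  # normalized weights, summing to 1
\end{verbatim}
A dApp's share of the funding pool for the period is then its normalized weight, so a dApp with twice the weight of another receives twice the funding, as described above.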
\section{Conclusion} We have presented Terra, a stable digital currency that is designed to complement both existing fiat and cryptocurrencies as a way to transact and store value. The protocol adjusts the supply of Terra in response to changes in demand to keep its price stable. This is achieved using Luna, the mining token whose countercyclical rewards are designed to absorb volatility from changing economic cycles. Terra also achieves efficient adoption by returning seigniorage not invested in stability back to its users. Its transparent and democratic distribution mechanism gives dApps the power to attract and retain users by tapping into Terra's economic growth. If Bitcoin\textquoteright s contribution to cryptocurrency was immutability, and Ethereum expressivity, our value-add will be usability. The potential applications of Terra are immense. Immediately, we foresee Terra being used as a medium-of-exchange in online payments, allowing people to transact freely at a fraction of the fees charged by other payment methods. As the world starts to become more and more decentralized, we see Terra being used as a dApp platform where price-stable token economies are built on Terra. Terra is looking to become the first usable currency and stability platform on the blockchain, unlocking the power of decentralization for mainstream users, merchants, and developers. \noindent {\small{}\newpage}\textbf{\small{}References }{\small\par} \noindent {\small{}Liu, Yukun and Tsyvinski, Aleh, Risks and Returns of Cryptocurrency (August 2018). NBER Working Paper No. w24877. Available at https://ssrn.com/abstract=3226806. }{\small\par} \noindent {\small{}Makarov, Igor and Schoar, Antoinette, Trading and Arbitrage in Cryptocurrency Markets (April 30, 2018). Available at SSRN: https://ssrn.com/abstract=3171204. }{\small\par} \noindent {\small{}Kereiakes, Evan, Rationale for Including Multiple Fiat Currencies in Terra\textquoteright s Peg (November 2018). Available at https://medium.com/terra-money/rationale-for-including-multiple-fiat-currencies-in-terras-peg-1ea9eae9de2a }{\small\par} \noindent {\small{}Taylor, John B. (1993). \textquotedbl Discretion versus Policy Rules in Practice.\textquotedbl{} Carnegie-Rochester Conference Series on Public Policy. 39: 195\textendash 214. }{\small\par} \end{document}
{ "alphanum_fraction": 0.802510392, "avg_line_length": 55.9717153285, "ext": "tex", "hexsha": "805269aa724fe11dd0330a6a65b4b4b7e2240615", "lang": "TeX", "max_forks_count": 2, "max_forks_repo_forks_event_max_datetime": "2021-02-11T00:55:57.000Z", "max_forks_repo_forks_event_min_datetime": "2019-04-02T09:42:52.000Z", "max_forks_repo_head_hexsha": "83229775f77d572078b83f991b8f1e6687c15f87", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "terra-project/white-paper", "max_forks_repo_path": "white-paper/terra-v1.0.tex", "max_issues_count": 7, "max_issues_repo_head_hexsha": "83229775f77d572078b83f991b8f1e6687c15f87", "max_issues_repo_issues_event_max_datetime": "2018-09-20T02:44:12.000Z", "max_issues_repo_issues_event_min_datetime": "2018-09-13T17:31:53.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "terra-money/documentation", "max_issues_repo_path": "white-paper/terra-v1.0.tex", "max_line_length": 793, "max_stars_count": 17, "max_stars_repo_head_hexsha": "ff6d4640982c3afda1c9291e1d3908247e9a0c72", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xiaoai99/terraproject", "max_stars_repo_path": "terra-v1.0.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-05T05:42:56.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-01T08:07:25.000Z", "num_tokens": 14187, "size": 61345 }
% This LaTeX was auto-generated from an M-file by MATLAB.
% To make changes, update the M-file and republish this document.
\subsection{gp2.m}

\begin{par}
\textbf{Summary:} Compute joint predictions and derivatives for multiple GPs with uncertain inputs. Does not consider the uncertainty about the underlying function (in prediction), hence, only the GP mean function is considered. Therefore, this representation is equivalent to a regularized RBF network. If gpmodel.nigp exists, individual noise contributions are added.
\end{par} \vspace{1em}

\begin{verbatim}function [M, S, V] = gp2(gpmodel, m, s)\end{verbatim}

\begin{par}
\textbf{Input arguments:}
\end{par} \vspace{1em}

\begin{verbatim}
gpmodel   GP model struct
  hyp     log-hyper-parameters                         [D+2 x E ]
  inputs  training inputs                              [ n x D ]
  targets training targets                             [ n x E ]
  nigp    (optional) individual noise variance terms   [ n x E ]
m         mean of the test distribution                [ D x 1 ]
s         covariance matrix of the test distribution   [ D x D ]
\end{verbatim}

\begin{par}
\textbf{Output arguments:}
\end{par} \vspace{1em}

\begin{verbatim}
M         mean of pred. distribution                         [ E x 1 ]
S         covariance of the pred. distribution               [ E x E ]
V         inv(s) times covariance between input and output   [ D x E ]
\end{verbatim}

\begin{par}
Copyright (C) 2008-2013 by Marc Deisenroth, Andrew McHutchon, Joe Hall, and Carl Edward Rasmussen.
\end{par} \vspace{1em}

\begin{par}
Last modified: 2013-03-05
\end{par} \vspace{1em}

\subsection*{High-Level Steps}
\begin{enumerate}
\setlength{\itemsep}{-1ex}
\item If necessary, re-compute cached variables
\item Compute predicted mean and inv(s) times input-output covariance
\item Compute predictive covariance matrix, non-central moments
\item Centralize moments
\end{enumerate}

\begin{lstlisting}
function [M, S, V] = gp2(gpmodel, m, s)
\end{lstlisting}

\subsection*{Code}

\begin{lstlisting}
persistent iK oldX oldIn oldOut beta oldn;
D = size(gpmodel.inputs,2);       % dimension of training inputs
[n, E] = size(gpmodel.targets);   % number of examples and number of outputs
input = gpmodel.inputs;  target = gpmodel.targets;  X = gpmodel.hyp;

% 1) if necessary: re-compute cached variables
if numel(X) ~= numel(oldX) || isempty(iK) || n ~= oldn || ...
    sum(any(X ~= oldX)) || sum(any(oldIn ~= input)) || ...
    sum(any(oldOut ~= target))
  oldX = X; oldIn = input; oldOut = target; oldn = n;
  K = zeros(n,n,E); iK = K; beta = zeros(n,E);

  for i=1:E                                        % compute K and inv(K)
    inp = bsxfun(@rdivide,gpmodel.inputs,exp(X(1:D,i)'));
    K(:,:,i) = exp(2*X(D+1,i)-maha(inp,inp)/2);
    if isfield(gpmodel,'nigp')
      L = chol(K(:,:,i) + exp(2*X(D+2,i))*eye(n) + diag(gpmodel.nigp(:,i)))';
    else
      L = chol(K(:,:,i) + exp(2*X(D+2,i))*eye(n))';
    end
    iK(:,:,i) = L'\(L\eye(n));
    beta(:,i) = L'\(L\gpmodel.targets(:,i));
  end
end

k = zeros(n,E); M = zeros(E,1); V = zeros(D,E); S = zeros(E);

inp = bsxfun(@minus,gpmodel.inputs,m');            % centralize inputs

% 2) Compute predicted mean and inv(s) times input-output covariance
for i=1:E
  iL = diag(exp(-X(1:D,i)));                       % inverse length-scales
  in = inp*iL;
  B = iL*s*iL+eye(D);

  t = in/B;
  l = exp(-sum(in.*t,2)/2);
  lb = l.*beta(:,i);
  tL = t*iL;
  c = exp(2*X(D+1,i))/sqrt(det(B));

  M(i) = sum(lb)*c;                                % predicted mean
  V(:,i) = tL'*lb*c;                 % inv(s) times input-output covariance
  k(:,i) = 2*X(D+1,i)-sum(in.*in,2)/2;
end

% 3) Compute predictive covariance, non-central moments
for i=1:E
  ii = bsxfun(@rdivide,inp,exp(2*X(1:D,i)'));

  for j=1:i
    R = s*diag(exp(-2*X(1:D,i))+exp(-2*X(1:D,j)))+eye(D);
    t = 1/sqrt(det(R));
    ij = bsxfun(@rdivide,inp,exp(2*X(1:D,j)'));
    L = exp(bsxfun(@plus,k(:,i),k(:,j)')+maha(ii,-ij,R\s/2));
    S(i,j) = t*beta(:,i)'*L*beta(:,j);
    S(j,i) = S(i,j);
  end

  S(i,i) = S(i,i) + 1e-6;        % add small jitter for numerical reasons
end

% 4) Centralize moments
S = S - M*M';
\end{lstlisting}
{ "alphanum_fraction": 0.585005767, "avg_line_length": 35.243902439, "ext": "tex", "hexsha": "02829c48be3cf95ac584d4f858794d75725735bc", "lang": "TeX", "max_forks_count": 36, "max_forks_repo_forks_event_max_datetime": "2021-05-19T10:19:12.000Z", "max_forks_repo_forks_event_min_datetime": "2017-04-19T06:55:25.000Z", "max_forks_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sahandrez/quad_pilco", "max_forks_repo_path": "doc/tex/gp2.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "2c99152e3a910d147cd0a52822da306063e6a834", "max_issues_repo_issues_event_max_datetime": "2020-04-24T11:09:45.000Z", "max_issues_repo_issues_event_min_datetime": "2020-04-24T11:02:23.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sahandrez/quad_pilco", "max_issues_repo_path": "doc/tex/gp2.tex", "max_line_length": 369, "max_stars_count": 53, "max_stars_repo_head_hexsha": "a0b48b7831911837d060617903c76c22e4180d0b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "SJTUGuofei/pilco-matlab", "max_stars_repo_path": "doc/tex/gp2.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-09T16:59:27.000Z", "max_stars_repo_stars_event_min_datetime": "2016-12-17T15:15:48.000Z", "num_tokens": 1346, "size": 4335 }
% arara: indent: {overwrite: true, trace: on} % A sample chapter file- it contains a lot of % environments, including tabulars, align, etc % % Don't try and compile this file using pdflatex etc, just % compare the *format* of it to the format of the % sampleAFTER.tex % % In particular, compare the tabular and align-type % environments before and after running the script \section{Polynomial functions} \reformatstepslist{P} % the steps list should be P1, P2, \ldots In your previous mathematics classes you have studied \emph{linear} and \emph{quadratic} functions. The most general forms of these types of functions can be represented (respectively) by the functions $f$ and $g$ that have formulas \begin{equation}\label{poly:eq:linquad} f(x)=mx+b, \qquad g(x)=ax^2+bx+c \end{equation} We know that $m$ is the slope of $f$, and that $a$ is the \emph{leading coefficient} of $g$. We also know that the \emph{signs} of $m$ and $a$ completely determine the behavior of the functions $f$ and $g$. For example, if $m>0$ then $f$ is an \emph{increasing} function, and if $m<0$ then $f$ is a \emph{decreasing} function. Similarly, if $a>0$ then $g$ is \emph{concave up} and if $a<0$ then $g$ is \emph{concave down}. Graphical representations of these statements are given in \cref{poly:fig:linquad}. \begin{figure}[!htb] \setlength{\figurewidth}{.2\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-10:8]{(x+2)}; \end{axis} \end{tikzpicture} \caption{$m>0$} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-10:8]{-(x+2)}; \end{axis} \end{tikzpicture} \caption{$m<0$} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-4:4]{(x^2-6)}; \end{axis} \end{tikzpicture} \caption{$a>0$} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-4:4]{-(x^2-6)}; \end{axis} \end{tikzpicture} \caption{$a<0$} \end{subfigure} \caption{Typical graphs of linear and quadratic functions.} \label{poly:fig:linquad} \end{figure} Let's look a little more closely at the formulas for $f$ and $g$ in \cref{poly:eq:linquad}. Note that the \emph{degree} of $f$ is $1$ since the highest power of $x$ that is present in the formula for $f(x)$ is $1$. Similarly, the degree of $g$ is $2$ since the highest power of $x$ that is present in the formula for $g(x)$ is $2$. In this section we will build upon our knowledge of these elementary functions. In particular, we will generalize the functions $f$ and $g$ to a function $p$ that has any degree that we wish. %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{essentialskills} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Quadratic functions] Every quadratic function has the form $y=ax^2+bx+c$; state the value of $a$ for each of the following functions, and hence decide if the parabola that represents the function opens upward or downward. 
\begin{multicols}{2}
\begin{subproblem}
$F(x)=x^2+3$
\begin{shortsolution}
$a=1$; the parabola opens upward.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$G(t)=4-5t^2$
\begin{shortsolution}
$a=-5$; the parabola opens downward.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$H(y)=4y^2-96y+8$
\begin{shortsolution}
$a=4$; the parabola opens upward.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$K(z)=-19z^2$
\begin{shortsolution}
$a=-19$; the parabola opens downward.
\end{shortsolution}
\end{subproblem}
\end{multicols}
Now let's generalize our findings for the most general quadratic function $g$ that has formula $g(x)=a_2x^2+a_1x+a_0$.
Complete the following sentences.
\begin{subproblem}
When $a_2>0$, the parabola that represents $y=g(x)$ opens $\ldots$
\begin{shortsolution}
When $a_2>0$, the parabola that represents the function opens upward.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
When $a_2<0$, the parabola that represents $y=g(x)$ opens $\ldots$
\begin{shortsolution}
When $a_2<0$, the parabola that represents the function opens downward.
\end{shortsolution}
\end{subproblem}
\end{problem}
\end{essentialskills}
\subsection*{Power functions with positive exponents}
The study of polynomials will rely upon a good knowledge of power functions| you may reasonably ask, what is a power function?
\begin{pccdefinition}[Power functions]
Power functions have the form
\[
f(x) = a_n x^n
\]
where $n$ can be any real number.
Note that for this section we will only be concerned with the case when $n$ is a positive integer.
\end{pccdefinition}
You may find assurance in the fact that you are already very comfortable with power functions that have $n=1$ (linear) and $n=2$ (quadratic).
Let's explore some power functions that you might not be so familiar with.
As you read \cref{poly:ex:oddpow,poly:ex:evenpow}, try and spot as many patterns and similarities as you can.
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{pccexample}[Power functions with odd positive exponents]
\label{poly:ex:oddpow}
Graph each of the following functions, state their domain, and their long-run behavior as $x\rightarrow\pm\infty$
\[
f(x)=x^3, \qquad g(x)=x^5, \qquad h(x)=x^7
\]
\begin{pccsolution}
The functions $f$, $g$, and $h$ are plotted in \cref{poly:fig:oddpow}.
The domain of each of the functions $f$, $g$, and $h$ is $(-\infty,\infty)$.
Note that the long-run behavior of each of the functions is the same, and in particular
\begin{align*}
f(x)\rightarrow\infty & \text{ as } x\rightarrow\infty \\
\mathllap{\text{and }} f(x)\rightarrow-\infty & \text{ as } x\rightarrow-\infty
\end{align*}
The same results hold for $g$ and $h$.
\end{pccsolution} \end{pccexample} \begin{figure}[!htb] \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-1.5,xmax=1.5, ymin=-5,ymax=5, xtick={-1.0,-0.5,...,1.0}, minor ytick={-3,-1,...,3}, grid=both, width=\textwidth, legend pos=north west, ] \addplot expression[domain=-1.5:1.5]{x^3}; \addplot expression[domain=-1.379:1.379]{x^5}; \addplot expression[domain=-1.258:1.258]{x^7}; \addplot[soldot]coordinates{(-1,-1)} node[axisnode,anchor=north west]{$(-1,-1)$}; \addplot[soldot]coordinates{(1,1)} node[axisnode,anchor=south east]{$(1,1)$}; \legend{$f$,$g$,$h$} \end{axis} \end{tikzpicture} \caption{Odd power functions} \label{poly:fig:oddpow} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-2.5,xmax=2.5, ymin=-5,ymax=5, xtick={-2.0,-1.5,...,2.0}, minor ytick={-3,-1,...,3}, grid=both, width=\textwidth, legend pos=south east, ] \addplot expression[domain=-2.236:2.236]{x^2}; \addplot expression[domain=-1.495:1.495]{x^4}; \addplot expression[domain=-1.307:1.307]{x^6}; \addplot[soldot]coordinates{(-1,1)} node[axisnode,anchor=east]{$(-1,1)$}; \addplot[soldot]coordinates{(1,1)} node[axisnode,anchor=west]{$(1,1)$}; \legend{$F$,$G$,$H$} \end{axis} \end{tikzpicture} \caption{Even power functions} \label{poly:fig:evenpow} \end{minipage}% \end{figure} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{pccexample}[Power functions with even positive exponents]\label{poly:ex:evenpow}% Graph each of the following functions, state their domain, and their long-run behavior as $x\rightarrow\pm\infty$ \[ F(x)=x^2, \qquad G(x)=x^4, \qquad H(x)=x^6 \] \begin{pccsolution} The functions $F$, $G$, and $H$ are plotted in \cref{poly:fig:evenpow}. The domain of each of the functions is $(-\infty,\infty)$. Note that the long-run behavior of each of the functions is the same, and in particular \begin{align*} F(x)\rightarrow\infty & \text{ as } x\rightarrow\infty \\ \mathllap{\text{and }} F(x)\rightarrow\infty & \text{ as } x\rightarrow-\infty \end{align*} The same result holds for $G$ and $H$. \end{pccsolution} \end{pccexample} \begin{doyouunderstand} \begin{problem} Repeat \cref{poly:ex:oddpow,poly:ex:evenpow} using (respectively) \begin{subproblem} $f(x)=-x^3, \qquad g(x)=-x^5, \qquad h(x)=-x^7$ \begin{shortsolution} The functions $f$, $g$, and $h$ have domain $(-\infty,\infty)$ and are graphed below. \begin{tikzpicture} \begin{axis}[ framed, xmin=-1.5,xmax=1.5, ymin=-5,ymax=5, xtick={-1.0,-0.5,...,0.5}, minor ytick={-3,-1,...,3}, grid=both, width=\solutionfigurewidth, legend pos=north east, ] \addplot expression[domain=-1.5:1.5]{-x^3}; \addplot expression[domain=-1.379:1.379]{-x^5}; \addplot expression[domain=-1.258:1.258]{-x^7}; \legend{$f$,$g$,$h$} \end{axis} \end{tikzpicture} Note that \begin{align*} f(x)\rightarrow-\infty & \text{ as } x\rightarrow\infty \\ \mathllap{\text{and }} f(x)\rightarrow\infty & \text{ as } x\rightarrow-\infty \end{align*} The same is true for $g$ and $h$. \end{shortsolution} \end{subproblem} \begin{subproblem} $F(x)=-x^2, \qquad G(x)=-x^4, \qquad H(x)=-x^6$ \begin{shortsolution} The functions $F$, $G$, and $H$ have domain $(-\infty,\infty)$ and are graphed below. 
\begin{tikzpicture} \begin{axis}[ framed, xmin=-2.5,xmax=2.5, ymin=-5,ymax=5, xtick={-1.0,-0.5,...,0.5}, minor ytick={-3,-1,...,3}, grid=both, width=\solutionfigurewidth, legend pos=north east, ] \addplot expression[domain=-2.236:2.236]{-x^2}; \addplot expression[domain=-1.495:1.495]{-x^4}; \addplot expression[domain=-1.307:1.307]{-x^6}; \legend{$F$,$G$,$H$} \end{axis} \end{tikzpicture} Note that \begin{align*} F(x)\rightarrow-\infty & \text{ as } x\rightarrow\infty \\ \mathllap{\text{and }} F(x)\rightarrow-\infty & \text{ as } x\rightarrow-\infty \end{align*} The same is true for $G$ and $H$. \end{shortsolution} \end{subproblem} \end{problem} \end{doyouunderstand} \subsection*{Polynomial functions} Now that we have a little more familiarity with power functions, we can define polynomial functions. Provided that you were comfortable with our opening discussion about linear and quadratic functions (see $f$ and $g$ in \cref{poly:eq:linquad}) then there is every chance that you'll be able to master polynomial functions as well; just remember that polynomial functions are a natural generalization of linear and quadratic functions. Once you've studied the examples and problems in this section, you'll hopefully agree that polynomial functions are remarkably predictable. %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccdefinition}[Polynomial functions] Polynomial functions have the form \[ p(x)=a_nx^n+a_{n-1}x^{n-1}+\ldots+a_1x+a_0 \] where $a_n$, $a_{n-1}$, $a_{n-2}$, \ldots, $a_0$ are real numbers. \begin{itemize} \item We call $n$ the degree of the polynomial, and require that $n$ is a non-negative integer; \item $a_n$, $a_{n-1}$, $a_{n-2}$, \ldots, $a_0$ are called the coefficients; \item We typically write polynomial functions in descending powers of $x$. \end{itemize} In particular, we call $a_n$ the \emph{leading} coefficient, and $a_nx^n$ the \emph{leading term}. Note that if a polynomial is given in factored form, then the degree can be found by counting the number of linear factors. \end{pccdefinition} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{pccexample}[Polynomial or not] Identify the following functions as polynomial or not; if the function is a polynomial, state its degree. \begin{multicols}{3} \begin{enumerate} \item $p(x)=x^2-3$ \item $q(x)=-4x^{\nicefrac{1}{2}}+10$ \item $r(x)=10x^5$ \item $s(x)=x^{-2}+x^{23}$ \item $f(x)=-8$ \item $g(x)=3^x$ \item $h(x)=\sqrt[3]{x^7}-x^2+x$ \item $k(x)=4x(x+2)(x-3)$ \item $j(x)=x^2(x-4)(5-x)$ \end{enumerate} \end{multicols} \begin{pccsolution} \begin{enumerate} \item $p$ is a polynomial, and its degree is $2$. \item $q$ is \emph{not} a polynomial, because $\frac{1}{2}$ is not an integer. \item $r$ is a polynomial, and its degree is $5$. \item $s$ is \emph{not} a polynomial, because $-2$ is not a positive integer. \item $f$ is a polynomial, and its degree is $0$. \item $g$ is \emph{not} a polynomial, because the independent variable, $x$, is in the exponent. \item $h$ is \emph{not} a polynomial, because $\frac{7}{3}$ is not an integer. \item $k$ is a polynomial, and its degree is $3$. \item $j$ is a polynomial, and its degree is $4$. 
\end{enumerate} \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{pccexample}[Typical graphs]\label{poly:ex:typical} \Cref{poly:fig:typical} shows graphs of some polynomial functions; the ticks have deliberately been left off the axis to allow us to concentrate on the features of each graph. Note in particular that: \begin{itemize} \item \cref{poly:fig:typical1} shows a degree-$1$ polynomial (you might also classify the function as linear) whose leading coefficient, $a_1$, is positive. \item \cref{poly:fig:typical2} shows a degree-$2$ polynomial (you might also classify the function as quadratic) whose leading coefficient, $a_2$, is positive. \item \cref{poly:fig:typical3} shows a degree-$3$ polynomial whose leading coefficient, $a_3$, is positive| compare its overall shape and long-run behavior to the functions described in \cref{poly:ex:oddpow}. \item \cref{poly:fig:typical4} shows a degree-$4$ polynomial whose leading coefficient, $a_4$, is positive|compare its overall shape and long-run behavior to the functions described in \cref{poly:ex:evenpow}. \item \cref{poly:fig:typical5} shows a degree-$5$ polynomial whose leading coefficient, $a_5$, is positive| compare its overall shape and long-run behavior to the functions described in \cref{poly:ex:oddpow}. \end{itemize} \end{pccexample} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{figure}[!htb] \begin{widepage} \setlength{\figurewidth}{\textwidth/6} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-10:8]{(x+2)}; \end{axis} \end{tikzpicture} \caption{$a_1>0$} \label{poly:fig:typical1} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-4:4]{(x^2-6)}; \end{axis} \end{tikzpicture} \caption{$a_2>0$} \label{poly:fig:typical2} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-7.5:7.5]{0.05*(x+6)*x*(x-6)}; \end{axis} \end{tikzpicture} \caption{$a_3>0$} \label{poly:fig:typical3} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-2.35:5.35,samples=100]{0.2*(x-5)*x*(x-3)*(x+2)}; \end{axis} \end{tikzpicture} \caption{$a_4>0$} \label{poly:fig:typical4} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, width=\textwidth, xtick={-11}, ytick={-11}, ] \addplot expression[domain=-5.5:6.3,samples=100]{0.01*(x+2)*x*(x-3)*(x+5)*(x-6)}; \end{axis} \end{tikzpicture} \caption{$a_5>0$} \label{poly:fig:typical5} \end{subfigure} \end{widepage} \caption{Graphs to illustrate typical curves of polynomial functions.} \label{poly:fig:typical} \end{figure} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{doyouunderstand} \begin{problem} Use \cref{poly:ex:typical} and \cref{poly:fig:typical} to help you sketch the graphs of polynomial functions that have negative 
leading coefficients| note that there are many ways to do this!
The intention with this problem is to use your knowledge of transformations- in particular, \emph{reflections}- to guide you.
\begin{shortsolution}
$a_1<0$:
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
xtick={-11},
ytick={-11},
]
\addplot expression[domain=-10:8]{-(x+2)};
\end{axis}
\end{tikzpicture}
$a_2<0$
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
xtick={-11},
ytick={-11},
]
\addplot expression[domain=-4:4]{-(x^2-6)};
\end{axis}
\end{tikzpicture}
$a_3<0$
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
xtick={-11},
ytick={-11},
]
\addplot expression[domain=-7.5:7.5]{-0.05*(x+6)*x*(x-6)};
\end{axis}
\end{tikzpicture}
$a_4<0$
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
xtick={-11},
ytick={-11},
]
\addplot expression[domain=-2.35:5.35,samples=100]{-0.2*(x-5)*x*(x-3)*(x+2)};
\end{axis}
\end{tikzpicture}
$a_5<0$
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
xtick={-11},
ytick={-11},
]
\addplot expression[domain=-5.5:6.3,samples=100]{-0.01*(x+2)*x*(x-3)*(x+5)*(x-6)};
\end{axis}
\end{tikzpicture}
\end{shortsolution}
\end{problem}
\end{doyouunderstand}
\fixthis{poly: Need a more basic example here- it can have a similar format to the multiple zeros example, but just keep it simple; it should be halfway between the 2 examples surrounding it}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{pccexample}[Multiple zeros]
Consider the polynomial functions $p$, $q$, and $r$ which are graphed in \cref{poly:fig:moremultiple}.
The formulas for $p$, $q$, and $r$ are as follows
\begin{align*}
p(x) & =(x-3)^2(x+4)^2 \\
q(x) & =x(x+2)^2(x-1)^2(x-3) \\
r(x) & =x(x-3)^3(x+1)^2
\end{align*}
Find the degree of $p$, $q$, and $r$, and decide if the functions bounce off or cut through the horizontal axis at each of their zeros.
\begin{pccsolution}
The degree of $p$ is 4.
Referring to \cref{poly:fig:bouncep}, the curve bounces off the horizontal axis at both zeros, $3$ and $-4$.
The degree of $q$ is 6.
Referring to \cref{poly:fig:bounceq}, the curve bounces off the horizontal axis at $-2$ and $1$, and cuts through the horizontal axis at $0$ and $3$.
The degree of $r$ is 6.
Referring to \cref{poly:fig:bouncer}, the curve bounces off the horizontal axis at $-1$, and cuts through the horizontal axis at $0$ and at $3$, although it is flattened immediately to the left and right of $3$.
\end{pccsolution} \end{pccexample} \setlength{\figurewidth}{0.25\textwidth} \begin{figure}[!htb] \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-6,xmax=5, ymin=-30,ymax=200, xtick={-4,-2,...,4}, width=\textwidth, ] \addplot expression[domain=-5.63733:4.63733,samples=50]{(x-3)^2*(x+4)^2}; \addplot[soldot]coordinates{(3,0)(-4,0)}; \end{axis} \end{tikzpicture} \caption{$y=p(x)$} \label{poly:fig:bouncep} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-3,xmax=4, xtick={-2,...,3}, ymin=-60,ymax=40, width=\textwidth, ] \addplot+[samples=50] expression[domain=-2.49011:3.11054]{x*(x+2)^2*(x-1)^2*(x-3)}; \addplot[soldot]coordinates{(-2,0)(0,0)(1,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$y=q(x)$} \label{poly:fig:bounceq} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-2,xmax=4, xtick={-1,...,3}, ymin=-40,ymax=40, width=\textwidth, ] \addplot expression[domain=-1.53024:3.77464,samples=50]{x*(x-3)^3*(x+1)^2}; \addplot[soldot]coordinates{(-1,0)(0,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$y=r(x)$} \label{poly:fig:bouncer} \end{subfigure} \caption{} \label{poly:fig:moremultiple} \end{figure} \begin{pccdefinition}[Multiple zeros]\label{poly:def:multzero} Let $p$ be a polynomial that has a repeated linear factor $(x-a)^n$. Then we say that $p$ has a multiple zero at $a$ of multiplicity $n$ and \begin{itemize} \item if the factor $(x-a)$ is repeated an even number of times, the graph of $y=p(x)$ does not cross the $x$ axis at $a$, but `bounces' off the horizontal axis at $a$. \item if the factor $(x-a)$ is repeated an odd number of times, the graph of $y=p(x)$ crosses the horizontal axis at $a$, but it looks `flattened' there \end{itemize} If $n=1$, then we say that $p$ has a \emph{simple} zero at $a$. \end{pccdefinition} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccexample}[Find a formula] Find formulas for the polynomial functions, $p$ and $q$, graphed in \cref{poly:fig:findformulademoboth}. \begin{figure}[!htb] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[framed, xmin=-5,xmax=5, ymin=-10,ymax=10, xtick={-4,-2,...,4}, minor xtick={-3,-1,...,3}, ytick={-8,-6,...,8}, width=\textwidth, grid=both] \addplot expression[domain=-3.25842:2.25842,samples=50]{-x*(x-2)*(x+3)*(x+1)}; \addplot[soldot]coordinates{(1,8)}node[axisnode,inner sep=.35cm,anchor=west]{$(1,8)$}; \addplot[soldot]coordinates{(-3,0)(-1,0)(0,0)(2,0)}; \end{axis} \end{tikzpicture} \caption{$p$} \label{poly:fig:findformulademo} \end{subfigure} \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[framed, xmin=-5,xmax=5, ymin=-10,ymax=10, xtick={-4,-2,...,4}, minor xtick={-3,-1,...,3}, ytick={-8,-6,...,8}, width=\textwidth, grid=both] \addplot expression[domain=-4.33:4.08152]{-.25*(x+2)^2*(x-3)}; \addplot[soldot]coordinates{(2,4)}node[axisnode,anchor=south west]{$(2,4)$}; \addplot[soldot]coordinates{(-2,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$q$} \label{poly:fig:findformulademo1} \end{subfigure} \caption{} \label{poly:fig:findformulademoboth} \end{figure} \begin{pccsolution} \begin{enumerate} \item We begin by noting that the horizontal intercepts of $p$ are $(-3,0)$, $(-1,0)$, $(0,0)$ and $(2,0)$. We also note that each zero is simple (multiplicity $1$). 
If we assume that $p$ has no other zeros, then we can start by writing
\begin{align*}
p(x) & =(x+3)(x+1)(x-0)(x-2) \\
& =x(x+3)(x+1)(x-2)
\end{align*}
According to \cref{poly:fig:findformulademo}, the point $(1,8)$ lies on the curve $y=p(x)$.
Let's check if the formula we have written satisfies this requirement
\begin{align*}
p(1) & = (1)(4)(2)(-1) \\
& = -8
\end{align*}
which is clearly not correct| it is close though.
We can correct this by multiplying $p$ by a constant $k$; so let's assume that
\[
p(x)=kx(x+3)(x+1)(x-2)
\]
Then $p(1)=-8k$, and if this is to equal $8$, then $k=-1$.
Therefore the formula for $p(x)$ is
\[
p(x)=-x(x+3)(x+1)(x-2)
\]
\item The function $q$ has a zero at $-2$ of multiplicity $2$, and a zero of multiplicity $1$ at $3$ (so $3$ is a simple zero of $q$); we can therefore assume that $q$ has the form
\[
q(x)=k(x+2)^2(x-3)
\]
where $k$ is some real number.
In order to find $k$, we use the given ordered pair, $(2,4)$, and evaluate $q(2)$
\begin{align*}
q(2) & =k(4)^2(-1) \\
& =-16k
\end{align*}
We solve the equation $4=-16k$ and obtain $k=-\frac{1}{4}$ and conclude that the formula for $q(x)$ is
\[
q(x)=-\frac{1}{4}(x+2)^2(x-3)
\]
\end{enumerate}
\end{pccsolution}
\end{pccexample}
\fixthis{Chris: need sketching polynomial problems}
\begin{pccspecialcomment}[Steps to follow when sketching polynomial functions]
\begin{steps}
\item \label{poly:step:first} Determine the degree of the polynomial, its leading term and leading coefficient, and hence determine the long-run behavior of the polynomial| does it behave like $\pm x^2$ or $\pm x^3$ as $x\rightarrow\pm\infty$?
\item Determine the zeros and their multiplicity.
Mark all zeros and the vertical intercept on the graph using solid circles $\bullet$.
\item \label{poly:step:last} Deduce the overall shape of the curve, and sketch it.
If there isn't enough information from the previous steps, then construct a table of values.
\end{steps}
Remember that until we have the tools of calculus, we won't be able to find the exact coordinates of local minimums, local maximums, and points of inflection.
\end{pccspecialcomment}
Before we demonstrate some examples, it is important to remember the following:
\begin{itemize}
\item our sketches will give a good representation of the overall shape of the graph, but until we have the tools of calculus (from MTH 251) we can not find local minimums, local maximums, and inflection points algebraically.
This means that we will make our best guess as to where these points are.
\item we will not concern ourselves too much with the vertical scale (because of our previous point)| we will, however, mark the vertical intercept (assuming there is one), and any horizontal asymptotes.
\end{itemize}
%===================================
% Author: Hughes
% Date: May 2012
%===================================
\begin{pccexample}\label{poly:ex:simplecubic}
Use \crefrange{poly:step:first}{poly:step:last} to sketch a graph of the function $p$ that has formula
\[
p(x)=\frac{1}{2}(x-4)(x-1)(x+3)
\]
\begin{pccsolution}
\begin{steps}
\item $p$ has degree $3$.
The leading term of $p$ is $\frac{1}{2}x^3$, so the leading coefficient of $p$ is $\frac{1}{2}$.
The long-run behavior of $p$ is therefore similar to that of $x^3$.
\item The zeros of $p$ are $-3$, $1$, and $4$; each zero is simple (i.e., it has multiplicity $1$).
This means that the curve of $p$ cuts the horizontal axis at each zero.
The vertical intercept of $p$ is $(0,6)$.
\item We draw the details we have obtained so far on \cref{poly:fig:simplecubicp1}.
Given that the curve of $p$ looks like the curve of $x^3$ in the long-run, we are able to complete a sketch of the graph of $p$ in \cref{poly:fig:simplecubicp2}. Note that we cannot find the coordinates of the local minimums, local maximums, and inflection points| for the moment we make reasonable guesses as to where these points are (you'll find how to do this in calculus).
\end{steps}
\begin{figure}[!htbp]
\begin{subfigure}{.45\textwidth}
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=15,
xtick={-8,-6,...,8},
ytick={-5,5},
width=\textwidth,
]
\addplot[soldot] coordinates{(-3,0)(1,0)(4,0)(0,6)}node[axisnode,anchor=south west]{$(0,6)$};
\end{axis}
\end{tikzpicture}
\caption{}
\label{poly:fig:simplecubicp1}
\end{subfigure}%
\hfill
\begin{subfigure}{.45\textwidth}
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=15,
xtick={-8,-6,...,8},
ytick={-5,5},
width=\textwidth,
]
\addplot[soldot] coordinates{(-3,0)(1,0)(4,0)(0,6)}node[axisnode,anchor=south west]{$(0,6)$};
\addplot[pccplot] expression[domain=-3.57675:4.95392,samples=100]{.5*(x-4)*(x-1)*(x+3)};
\end{axis}
\end{tikzpicture}
\caption{}
\label{poly:fig:simplecubicp2}
\end{subfigure}%
\caption{$y=\dfrac{1}{2}(x-4)(x-1)(x+3)$}
\label{poly:fig:simplecubic}
\end{figure}
\end{pccsolution}
\end{pccexample}
%===================================
% Author: Hughes
% Date: May 2012
%===================================
\begin{pccexample}\label{poly:ex:degree5}
Use \crefrange{poly:step:first}{poly:step:last} to sketch a graph of the function $q$ that has formula
\[
q(x)=\frac{1}{200}(x+7)^2(2-x)(x-6)^2
\]
\begin{pccsolution}
\begin{steps}
\item $q$ has degree $5$. The leading term of $q$ is
\[
-\frac{1}{200}x^5
\]
so the leading coefficient of $q$ is $-\frac{1}{200}$. The long-run behavior of $q$ is therefore similar to that of $-x^5$.
\item The zeros of $q$ are $-7$ (multiplicity $2$), $2$ (simple), and $6$ (multiplicity $2$). The curve of $q$ bounces off the horizontal axis at the zeros with multiplicity $2$ and cuts the horizontal axis at the simple zeros. The vertical intercept of $q$ is $\left( 0,\frac{441}{25} \right)$.
\item We mark the details we have found so far on \cref{poly:fig:degree5p1}. Given that the curve of $q$ looks like the curve of $-x^5$ in the long-run, we can complete \cref{poly:fig:degree5p2}.
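As a quick arithmetic check (this is optional), the vertical intercept used in the previous step can be verified directly:
\[
q(0)=\frac{1}{200}(7)^2(2)(-6)^2=\frac{49\cdot 2\cdot 36}{200}=\frac{3528}{200}=\frac{441}{25}
\]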
\end{steps} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=40, xtick={-8,-6,...,8}, ytick={-5,0,...,35}, width=\textwidth, ] \addplot[soldot] coordinates{(-7,0)(2,0)(6,0)(0,441/25)}node[axisnode,anchor=south west]{$\left( 0, \frac{441}{25} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:degree5p1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=40, xtick={-8,-6,...,8}, ytick={-5,0,...,35}, width=\textwidth, ] \addplot[soldot] coordinates{(-7,0)(2,0)(6,0)(0,441/25)}node[axisnode,anchor=south west]{$\left( 0, \frac{441}{25} \right)$}; \addplot[pccplot] expression[domain=-8.83223:7.34784,samples=50]{1/200*(x+7)^2*(2-x)*(x-6)^2}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:degree5p2} \end{subfigure}% \caption{$y=\dfrac{1}{200}(x+7)^2(2-x)(x-6)^2$} \label{poly:fig:degree5} \end{figure} \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample} Use \crefrange{poly:step:first}{poly:step:last} to sketch a graph of the function $r$ that has formula \[ r(x)=\frac{1}{100}x^3(x+4)(x-4)(x-6) \] \begin{pccsolution} \begin{steps} \item $r$ has degree $6$. The leading term of $r$ is \[ \frac{1}{100}x^6 \] so the leading coefficient of $r$ is $\frac{1}{100}$. The long-run behavior of $r$ is therefore similar to that of $x^6$. \item The zeros of $r$ are $-4$ (simple), $0$ (multiplicity $3$), $4$ (simple), and $6$ (simple). The vertical intercept of $r$ is $(0,0)$. The curve of $r$ cuts the horizontal axis at the simple zeros, and goes through the axis at $(0,0)$, but does so in a flattened way. \item We mark the zeros and vertical intercept on \cref{poly:fig:degree6p1}. Given that the curve of $r$ looks like the curve of $x^6$ in the long-run, we complete the graph of $r$ in \cref{poly:fig:degree6p2}. \end{steps} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=10, ymin=-20,ymax=10, xtick={-4,-2,...,8}, ytick={-15,-10,...,5}, width=\textwidth, ] \addplot[soldot] coordinates{(-4,0)(0,0)(4,0)(6,0)}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:degree6p1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=10, ymin=-20,ymax=10, xtick={-4,-2,...,8}, ytick={-15,-10,...,5}, width=\textwidth, ] \addplot[soldot] coordinates{(-4,0)(0,0)(4,0)(6,0)}; \addplot[pccplot] expression[domain=-4.16652:6.18911,samples=100]{1/100*(x+4)*x^3*(x-4)*(x-6)}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:degree6p2} \end{subfigure}% \caption{$y=\dfrac{1}{100}(x+4)x^3(x-4)(x-6)$} \end{figure} \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{pccexample}[An open-topped box] A cardboard company makes open-topped boxes for their clients. The specifications dictate that the box must have a square base, and that it must be open-topped. The company uses sheets of cardboard that are $\unit[1200]{cm^2}$. Assuming that the base of each box has side $x$ (measured in cm), it can be shown that the volume of each box, $V(x)$, has formula \[ V(x)=\frac{x}{4}(1200-x^2) \] Find the dimensions of the box that maximize the volume. \begin{pccsolution} We graph $y=V(x)$ in \cref{poly:fig:opentoppedbox}. 
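Before reading values from the graph, it is worth seeing where the formula for $V$ comes from (here we assume that the whole $\unit[1200]{cm^2}$ sheet is used for the square base and the four rectangular sides, with nothing wasted). If $h$ denotes the height of the box, then the amount of cardboard used is $x^2+4xh=1200$, so that $h=\frac{1200-x^2}{4x}$, and therefore
\[
V(x)=x^2h=x^2\cdot\frac{1200-x^2}{4x}=\frac{x}{4}(1200-x^2)
\]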
Note that because $x$ represents the length of a side, and $V(x)$ represents the volume of the box, we necessarily require both values to be positive; we illustrate the part of the curve that applies to this problem using a solid line.
\begin{figure}[!htb]
\centering
\begin{tikzpicture}
\begin{axis}[framed,
xmin=-50,xmax=50,
ymin=-5000,ymax=5000,
xtick={-40,-30,...,40},
minor xtick={-45,-35,...,45},
minor ytick={-3000,-1000,1000,3000},
width=.75\textwidth,
height=.5\textwidth,
grid=both]
\addplot[pccplot,dashed,<-] expression[domain=-40:0,samples=50]{x/4*(1200-x^2)};
\addplot[pccplot,-] expression[domain=0:34.64,samples=50]{x/4*(1200-x^2)};
\addplot[pccplot,dashed,->] expression[domain=34.64:40,samples=50]{x/4*(1200-x^2)};
\addplot[soldot] coordinates{(20,4000)};
\end{axis}
\end{tikzpicture}
\caption{$y=V(x)$}
\label{poly:fig:opentoppedbox}
\end{figure}
According to \cref{poly:fig:opentoppedbox}, the maximum volume of such a box is approximately $\unit[4000]{cm^3}$, and we achieve it using a base of length approximately $\unit[20]{cm}$. Since the base is square and each sheet of cardboard is $\unit[1200]{cm^2}$, the corresponding height is $\unit[10]{cm}$, and we conclude that the dimensions of each box are $\unit[20]{cm}\times\unit[20]{cm}\times\unit[10]{cm}$.
\end{pccsolution}
\end{pccexample}
\subsection*{Complex zeros}
There has been a pattern to all of the examples that we have seen so far| the degree of the polynomial has dictated the number of \emph{real} zeros that the polynomial has. For example, the function $p$ in \cref{poly:ex:simplecubic} has degree $3$, and $p$ has $3$ real zeros; the function $q$ in \cref{poly:ex:degree5} has degree $5$ and $q$ has $5$ real zeros (counting each zero according to its multiplicity). You may wonder if this result can be generalized| does every polynomial that has degree $n$ have $n$ real zeros? Before we tackle the general result, let's consider an example that may help motivate it.
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{pccexample}\label{poly:ex:complx}
Consider the polynomial function $c$ that has formula
\[
c(x)=x(x^2+1)
\]
It is clear that $c$ has degree $3$, and that $c$ has a (simple) zero at $0$. Does $c$ have any other zeros, i.e.\ can we find any values of $x$ that satisfy the equation
\begin{equation}\label{poly:eq:complx}
x^2+1=0
\end{equation}
The solutions to \cref{poly:eq:complx} are $\pm i$. We conclude that $c$ has $3$ zeros: $0$ and $\pm i$; we note that \emph{not all of them are real}.
\end{pccexample}
\Cref{poly:ex:complx} shows that not every degree-$3$ polynomial has $3$ \emph{real} zeros; however, if we are prepared to venture into the complex numbers, then we can state the following theorem.
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{pccspecialcomment}[The fundamental theorem of algebra]
Every polynomial function of degree $n$ has $n$ roots, some of which may be complex, and some may be repeated.
\end{pccspecialcomment}
\fixthis{Fundamental theorem of algebra: is this wording ok? do we want it as a theorem?}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{pccexample}
Find all the zeros of the polynomial function $p$ that has formula
\[
p(x)=x^4-2x^3+5x^2
\]
\begin{pccsolution}
We begin by factoring $p$
\begin{align*}
p(x) & =x^4-2x^3+5x^2 \\
& =x^2(x^2-2x+5)
\end{align*}
We note that $0$ is a zero of $p$ with multiplicity $2$ (the factor $x$ appears twice in the factored form).
The other zeros of $p$ can be found by solving the equation
\[
x^2-2x+5=0
\]
This equation cannot be factored, so we use the quadratic formula
\begin{align*}
x & =\frac{2\pm\sqrt{(-2)^2-4(1)(5)}}{2(1)} \\
& =\frac{2\pm\sqrt{-16}}{2} \\
& =1\pm 2i
\end{align*}
We conclude that $p$ has $4$ zeros: $0$ (multiplicity $2$), and $1\pm 2i$ (simple).
\end{pccsolution}
\end{pccexample}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{pccexample}
Find a polynomial that has zeros at $2\pm i\sqrt{2}$.
\begin{pccsolution}
We know that the zeros of a polynomial can be found by analyzing the linear factors. We are given the zeros, and have to work backwards to find the linear factors. We begin by assuming that $p$ has the form
\begin{align*}
p(x) & =(x-(2-i\sqrt{2}))(x-(2+i\sqrt{2})) \\
& =x^2-x(2+i\sqrt{2})-x(2-i\sqrt{2})+(2-i\sqrt{2})(2+i\sqrt{2}) \\
& =x^2-4x+(4-2i^2) \\
& =x^2-4x+6
\end{align*}
We conclude that a possible formula for a polynomial function, $p$, that has zeros at $2\pm i\sqrt{2}$ is
\[
p(x)=x^2-4x+6
\]
Note that we could multiply $p$ by any real number and still ensure that $p$ has the same zeros.
\end{pccsolution}
\end{pccexample}
\investigation*{}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Find a formula from a graph]
For each of the polynomials in \cref{poly:fig:findformula}
\begin{enumerate}
\item count the number of times the curve turns around, and cuts/bounces off the $x$ axis;
\item approximate the degree of the polynomial;
\item use your information to find the linear factors of each polynomial, and therefore write a possible formula for each;
\item make sure your polynomial goes through the given ordered pair.
\end{enumerate} \begin{shortsolution} \Vref{poly:fig:findformdeg2}: \begin{enumerate} \item the curve turns round once; \item the degree could be 2; \item based on the zeros, the linear factors are $(x+5)$ and $(x-3)$; since the graph opens downwards, we will assume the leading coefficient is negative: $p(x)=-k(x+5)(x-3)$; \item $p$ goes through $(2,2)$, so we need to solve $2=-k(7)(-1)$ and therefore $k=\nicefrac{2}{7}$, so \[ p(x)=-\frac{2}{7}(x+5)(x-3) \] \end{enumerate} \Vref{poly:fig:findformdeg3}: \begin{enumerate} \item the curve turns around twice; \item the degree could be 3; \item based on the zeros, the linear factors are $(x+2)^2$, and $(x-1)$; based on the behavior of $p$, we assume that the leading coefficient is positive, and try $p(x)=k(x+2)^2(x-1)$; \item $p$ goes through $(0,-2)$, so we need to solve $-2=k(4)(-1)$ and therefore $k=\nicefrac{1}{2}$, so \[ p(x)=\frac{1}{2}(x+2)^2(x-1) \] \end{enumerate} \Vref{poly:fig:findformdeg5}: \begin{enumerate} \item the curve turns around 4 times; \item the degree could be 5; \item based on the zeros, the linear factors are $(x+5)^2$, $(x+1)$, $(x-2)$, $(x-3)$; based on the behavior of $p$, we assume that the leading coefficient is positive, and try $p(x)=k(x+5)^2(x+1)(x-2)(x-3)$; \item $p$ goes through $(-3,-50)$, so we need to solve $-50=k(64)(-2)(-5)(-6)$ and therefore $k=\nicefrac{5}{384}$, so \[ p(x)=\frac{5}{384}(x+5)^2(x+1)(x-2)(x-3) \] \end{enumerate} \end{shortsolution} \end{problem} \begin{figure}[!htb] \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-2,ymax=5, width=\textwidth, ] \addplot expression[domain=-4.5:3.75]{-1/3*(x+4)*(x-3)}; \addplot[soldot] coordinates{(-4,0)(3,0)(2,2)} node[axisnode,above right]{$(2,2)$}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:findformdeg2} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-3,xmax=2, ymin=-2,ymax=4, xtick={-2,...,1}, width=\textwidth, ] \addplot expression[domain=-2.95:1.75]{1/3*(x+2)^2*(x-1)}; \addplot[soldot]coordinates{(-2,0)(1,0)(0,-1.33)}node[axisnode,anchor=north west]{$(0,-2)$}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:findformdeg3} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-100,ymax=150, width=\textwidth, ] \addplot expression[domain=-4.5:3.4,samples=50]{(x+4)^2*(x+1)*(x-2)*(x-3)}; \addplot[soldot]coordinates{(-4,0)(-1,0)(2,0)(3,0)(-3,-60)}node[axisnode,anchor=north]{$(-3,-50)$}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:findformdeg5} \end{subfigure} \caption{} \label{poly:fig:findformula} \end{figure} \begin{exercises} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Prerequisite classifacation skills] Decide if each of the following functions are linear or quadratic. \begin{multicols}{3} \begin{subproblem} $f(x)=2x+3$ \begin{shortsolution} $f$ is linear. \end{shortsolution} \end{subproblem} \begin{subproblem} $g(x)=10-7x$ \begin{shortsolution} $g$ is linear \end{shortsolution} \end{subproblem} \begin{subproblem} $h(x)=-x^2+3x-9$ \begin{shortsolution} $h$ is quadratic. \end{shortsolution} \end{subproblem} \begin{subproblem} $k(x)=-17$ \begin{shortsolution} $k$ is linear. 
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$l(x)=-82x^2-4$
\begin{shortsolution}
$l$ is quadratic.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$m(x)=6^2x-8$
\begin{shortsolution}
$m$ is linear.
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{problem}[Prerequisite slope identification]
State the slope of each of the following linear functions, and hence decide if each function is increasing or decreasing.
\begin{multicols}{4}
\begin{subproblem}
$\alpha(x)=4x+1$
\begin{shortsolution}
$m=4$; $\alpha$ is increasing.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$\beta(x)=-9x$
\begin{shortsolution}
$m=-9$; $\beta$ is decreasing.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$\gamma(t)=18t+100$
\begin{shortsolution}
$m=18$; $\gamma$ is increasing.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$\delta(y)=23-y$
\begin{shortsolution}
$m=-1$; $\delta$ is decreasing.
\end{shortsolution}
\end{subproblem}
\end{multicols}
Now let's generalize our findings for the most general linear function $f$ that has formula $f(x)=mx+b$. Complete the following sentences.
\begin{subproblem}
When $m>0$, the function $f$ is $\ldots$
\begin{shortsolution}
When $m>0$, the function $f$ is $\ldots$ \emph{increasing}.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
When $m<0$, the function $f$ is $\ldots$
\begin{shortsolution}
When $m<0$, the function $f$ is $\ldots$ \emph{decreasing}.
\end{shortsolution}
\end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Polynomial or not?]
Identify whether each of the following functions is a polynomial or not. If the function is a polynomial, state its degree.
\begin{multicols}{3}
\begin{subproblem}
$p(x)=2x+1$
\begin{shortsolution}
$p$ is a polynomial (you might also describe $p$ as linear). The degree of $p$ is 1.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=7x^2+4x$
\begin{shortsolution}
$p$ is a polynomial (you might also describe $p$ as quadratic). The degree of $p$ is 2.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=\sqrt{x}+2x+1$
\begin{shortsolution}
$p$ is not a polynomial; we require the powers of $x$ to be non-negative integers, and $\sqrt{x}=x^{1/2}$.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=2^x-45$
\begin{shortsolution}
$p$ is not a polynomial; the $2^x$ term is exponential.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=6x^4-5x^3+9$
\begin{shortsolution}
$p$ is a polynomial, and the degree of $p$ is $4$.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=-5x^{17}+9x+2$
\begin{shortsolution}
$p$ is a polynomial, and the degree of $p$ is 17.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=4x(x+7)^2(x-3)^3$
\begin{shortsolution}
$p$ is a polynomial, and the degree of $p$ is $6$ (add the degrees of the factors: $1+2+3=6$).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=4x^{-5}-x^2+x$
\begin{shortsolution}
$p$ is not a polynomial because the exponent $-5$ is not a non-negative integer.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$p(x)=-x^6(x^2+1)(x^3-2)$
\begin{shortsolution}
$p$ is a polynomial, and the degree of $p$ is $11$ (add the degrees of the factors: $6+2+3=11$).
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Polynomial graphs]
Three polynomial functions $p$, $m$, and $n$ are shown in \crefrange{poly:fig:functionp}{poly:fig:functionn}. The functions have the following formulas
\begin{align*}
p(x) & = (x-1)(x+2)(x-3) \\
m(x) & = -(x-1)(x+2)(x-3) \\
n(x) & = (x-1)(x+2)(x-3)(x+1)(x+4)
\end{align*}
Note that for our present purposes we are not concerned with the vertical scale of the graphs.
\begin{subproblem}
Identify, both on the graph {\em and} algebraically, the zeros of each polynomial.
\begin{shortsolution}
$y=p(x)$ is shown below.
\begin{tikzpicture}
\begin{axis}[
xmin=-5,xmax=5,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
]
\addplot expression[domain=-2.5:3.5,samples=50]{(x-1)*(x+2)*(x-3)};
\addplot[soldot] coordinates{(-2,0)(1,0)(3,0)};
\end{axis}
\end{tikzpicture}
$y=m(x)$ is shown below.
\begin{tikzpicture}
\begin{axis}[
xmin=-5,xmax=5,
ymin=-10,ymax=10,
width=\solutionfigurewidth,
]
\addplot expression[domain=-2.5:3.5,samples=50]{-1*(x-1)*(x+2)*(x-3)};
\addplot[soldot] coordinates{(-2,0)(1,0)(3,0)};
\end{axis}
\end{tikzpicture}
$y=n(x)$ is shown below.
\begin{tikzpicture}
\begin{axis}[
xmin=-5,xmax=5,
ymin=-90,ymax=70,
width=\solutionfigurewidth,
]
\addplot expression[domain=-4.15:3.15,samples=50]{(x-1)*(x+2)*(x-3)*(x+1)*(x+4)};
\addplot[soldot] coordinates{(-4,0)(-2,0)(-1,0)(1,0)(3,0)};
\end{axis}
\end{tikzpicture}
The zeros of $p$ are $-2$, $1$, and $3$; the zeros of $m$ are $-2$, $1$, and $3$; the zeros of $n$ are $-4$, $-2$, $-1$, $1$, and $3$.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Write down the degree, how many times the curve of each function `turns around', and how many zeros it has.
\begin{shortsolution}
\begin{itemize}
\item The degree of $p$ is 3, the curve $y=p(x)$ turns around twice, and $p$ has $3$ zeros.
\item The degree of $m$ is also 3, the curve $y=m(x)$ turns around twice, and $m$ has $3$ zeros.
\item The degree of $n$ is $5$, the curve $y=n(x)$ turns around 4 times, and $n$ has $5$ zeros.
\end{itemize} \end{shortsolution} \end{subproblem} \end{problem} \begin{figure}[!htb] \begin{widepage} \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-10,ymax=10, ytick={-5,5}, width=\textwidth, ] \addplot expression[domain=-2.5:3.5,samples=50]{(x-1)*(x+2)*(x-3)}; \addplot[soldot]coordinates{(-2,0)(1,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$y=p(x)$} \label{poly:fig:functionp} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-10,ymax=10, ytick={-5,5}, width=\textwidth, ] \addplot expression[domain=-2.5:3.5,samples=50]{-1*(x-1)*(x+2)*(x-3)}; \addplot[soldot]coordinates{(-2,0)(1,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$y=m(x)$} \label{poly:fig:functionm} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-90,ymax=70, width=\textwidth, ] \addplot expression[domain=-4.15:3.15,samples=100]{(x-1)*(x+2)*(x-3)*(x+1)*(x+4)}; \addplot[soldot]coordinates{(-4,0)(-2,0)(-1,0)(1,0)(3,0)}; \end{axis} \end{tikzpicture} \caption{$y=n(x)$} \label{poly:fig:functionn} \end{subfigure} \caption{} \end{widepage} \end{figure} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{problem}[Horizontal intercepts]\label{poly:prob:matchpolys}% State the horizontal intercepts (as ordered pairs) of the following polynomials. \begin{multicols}{2} \begin{subproblem}\label{poly:prob:degree5} $p(x)=(x-1)(x+2)(x-3)(x+1)(x+4)$ \begin{shortsolution} $(-4,0)$, $(-2,0)$, $(-1,0)$, $(1,0)$, $(3,0)$ \end{shortsolution} \end{subproblem} \begin{subproblem} $q(x)=-(x-1)(x+2)(x-3)$ \begin{shortsolution} $(-2,0)$, $(1,0)$, $(3,0)$ \end{shortsolution} \end{subproblem} \begin{subproblem} $r(x)=(x-1)(x+2)(x-3)$ \begin{shortsolution} $(-2,0)$, $(1,0)$, $(3,0)$ \end{shortsolution} \end{subproblem} \begin{subproblem}\label{poly:prob:degree2} $s(x)=(x-2)(x+2)$ \begin{shortsolution} $(-2,0)$, $(2,0)$ \end{shortsolution} \end{subproblem} \end{multicols} \end{problem} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Minimums, maximums, and concavity]\label{poly:prob:incdec} Four polynomial functions are graphed in \cref{poly:fig:incdec}. 
The formulas for these functions are (not respectively) \begin{gather*} p(x)=\frac{x^3}{6}-\frac{x^2}{4}-3x, \qquad q(x)=\frac{x^4}{20}+\frac{x^3}{15}-\frac{6}{5}x^2+1\\ r(x)=-\frac{x^5}{50}-\frac{x^4}{40}+\frac{2x^3}{5}+6, \qquad s(x)=-\frac{x^6}{6000}-\frac{x^5}{2500}+\frac{67x^4}{4000}+\frac{17x^3}{750}-\frac{42x^2}{125} \end{gather*} \begin{figure}[!htb] \begin{widepage} \setlength{\figurewidth}{.23\textwidth} \centering \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, width=\textwidth, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-8,-6,...,8}, grid=major, ] \addplot expression[domain=-5.28:4.68,samples=50]{-x^5/50-x^4/40+2*x^3/5+6}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:incdec3} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, width=\textwidth, xmin=-10,xmax=10,ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-8,-6,...,8}, grid=major, ] \addplot expression[domain=-6.08:4.967,samples=50]{x^4/20+x^3/15-6/5*x^2+1}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:incdec2} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, width=\textwidth, xmin=-6,xmax=8,ymin=-10,ymax=10, xtick={-4,-2,...,6}, ytick={-8,-4,4,8}, minor ytick={-6,-2,...,6}, grid=both, ] \addplot expression[domain=-4.818:6.081,samples=50]{x^3/6-x^2/4-3*x}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:incdec1} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ framed, width=\textwidth, xmin=-10,xmax=10,ymin=-10,ymax=10, xtick={-8,-4,4,8}, ytick={-8,-4,4,8}, minor xtick={-6,-2,...,6}, minor ytick={-6,-2,...,6}, grid=both, ] \addplot expression[domain=-9.77:8.866,samples=50]{-x^6/6000-x^5/2500+67*x^4/4000+17/750*x^3-42/125*x^2}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:incdec4} \end{subfigure} \caption{Graphs for \cref{poly:prob:incdec}.} \label{poly:fig:incdec} \end{widepage} \end{figure} \begin{subproblem} Match each of the formulas with one of the given graphs. \begin{shortsolution} \begin{itemize} \item $p$ is graphed in \vref{poly:fig:incdec1}; \item $q$ is graphed in \vref{poly:fig:incdec2}; \item $r$ is graphed in \vref{poly:fig:incdec3}; \item $s$ is graphed in \vref{poly:fig:incdec4}. \end{itemize} \end{shortsolution} \end{subproblem} \begin{subproblem} Approximate the zeros of each function using the appropriate graph. \begin{shortsolution} \begin{itemize} \item $p$ has simple zeros at about $-3.8$, $0$, and $5$. \item $q$ has simple zeros at about $-5.9$, $-1$, $1$, and $4$. \item $r$ has simple zeros at about $-5$, $-2.9$, and $4.1$. \item $s$ has simple zeros at about $-9$, $-6$, $4.2$, $8.1$, and a zero of multiplicity $2$ at $0$. \end{itemize} \end{shortsolution} \end{subproblem} \begin{subproblem} Approximate the local maximums and minimums of each of the functions. \begin{shortsolution} \begin{itemize} \item $p$ has a local maximum of approximately $3.9$ at $-2$, and a local minimum of approximately $-6.5$ at $3$. \item $q$ has a local minimum of approximately $-10$ at $-4$, and $-4$ at $3$; $q$ has a local maximum of approximately $1$ at $0$. \item $r$ has a local minimum of approximately $-5.5$ at $-4$, and a local maximum of approximately $10$ at $3$. \item $s$ has a local maximum of approximately $5$ at $-8$, $0$ at $0$, and $5$ at $7$; $s$ has local minimums of approximately $-3$ at $-4$, and $-1$ at $3$. 
\end{itemize}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Approximate the global maximums and minimums of each of the functions.
\begin{shortsolution}
\begin{itemize}
\item $p$ has neither a global maximum nor a global minimum.
\item $q$ has a global minimum of approximately $-10$; it does not have a global maximum.
\item $r$ has neither a global maximum nor a global minimum.
\item $s$ has a global maximum of approximately $5$; it does not have a global minimum.
\end{itemize}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Approximate the intervals on which each function is increasing and decreasing.
\begin{shortsolution}
\begin{itemize}
\item $p$ is increasing on $(-\infty,-2)\cup (3,\infty)$, and decreasing on $(-2,3)$.
\item $q$ is increasing on $(-4,0)\cup (3,\infty)$, and decreasing on $(-\infty,-4)\cup (0,3)$.
\item $r$ is increasing on $(-4,3)$, and decreasing on $(-\infty,-4)\cup (3,\infty)$.
\item $s$ is increasing on $(-\infty,-8)\cup (-4,0)\cup (3,5)$, and decreasing on $(-8,-4)\cup (0,3)\cup (5,\infty)$.
\end{itemize}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Approximate the intervals on which each function is concave up and concave down.
\begin{shortsolution}
\begin{itemize}
\item $p$ is concave up on $(1,\infty)$, and concave down on $(-\infty,1)$.
\item $q$ is concave up on $(-\infty,-1)\cup (1,\infty)$, and concave down on $(-1,1)$.
\item $r$ is concave up on $(-\infty,-3)\cup (0,2)$, and concave down on $(-3,0)\cup (2,\infty)$.
\item $s$ is concave up on $(-6,-2)\cup (2,5)$, and concave down on $(-\infty,-6)\cup (-2,2)\cup (5,\infty)$.
\end{itemize}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
The degree of $r$ is $5$. Assuming that all of the real zeros of $r$ are shown in its graph, how many complex zeros does $r$ have?
\begin{shortsolution}
\Vref{poly:fig:incdec3} shows that $r$ has $3$ real zeros since the curve of $r$ cuts the horizontal axis $3$ times. Since $r$ has degree $5$, $r$ must have $2$ complex zeros.
\end{shortsolution}
\end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Long-run behavior of polynomials]
Describe the long-run behavior of each of the polynomial functions in \crefrange{poly:prob:degree5}{poly:prob:degree2}.
\begin{shortsolution}
$\dd\lim_{x\rightarrow-\infty}p(x)=-\infty$, $\dd\lim_{x\rightarrow\infty}p(x)=\infty$,
$\dd\lim_{x\rightarrow-\infty}q(x)=\infty$, $\dd\lim_{x\rightarrow\infty}q(x)=-\infty$,
$\dd\lim_{x\rightarrow-\infty}r(x)=-\infty$, $\dd\lim_{x\rightarrow\infty}r(x)=\infty$,
$\dd\lim_{x\rightarrow-\infty}s(x)=\infty$, $\dd\lim_{x\rightarrow\infty}s(x)=\infty$.
\end{shortsolution}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[True or false?]
Let $p$ be a polynomial function. Label each of the following statements as true (T) or false (F); if they are false, provide an example that supports your answer.
\begin{subproblem}
If $p$ has degree $3$, then $p$ has $3$ distinct zeros.
\begin{shortsolution}
False. Consider $p(x)=x^2(x+1)$, which has only $2$ distinct zeros.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
If $p$ has degree $4$, then $\dd\lim_{x\rightarrow-\infty}p(x)=\infty$ and $\dd\lim_{x\rightarrow\infty}p(x)=\infty$.
\begin{shortsolution}
False. Consider $p(x)=-x^4$, for which both long-run limits are $-\infty$.
\end{shortsolution} \end{subproblem} \begin{subproblem} If $p$ has even degree, then it is possible that $p$ can have no real zeros. \begin{shortsolution} True. \end{shortsolution} \end{subproblem} \begin{subproblem} If $p$ has odd degree, then it is possible that $p$ can have no real zeros. \begin{shortsolution} False. All odd degree polynomials will cut the horizontal axis at least once. \end{shortsolution} \end{subproblem} \end{problem} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{problem}[Find a formula from a description] In each of the following problems, give a possible formula for a polynomial function that has the specified properties. \begin{subproblem} Degree 2 and has zeros at $4$ and $5$. \begin{shortsolution} Possible option: $p(x)=(x-4)(x-5)$. Note we could multiply $p$ by any real number, and still meet the requirements. \end{shortsolution} \end{subproblem} \begin{subproblem} Degree 3 and has zeros at $4$,$5$ and $-3$. \begin{shortsolution} Possible option: $p(x)=(x-4)(x-5)(x+3)$. Note we could multiply $p$ by any real number, and still meet the requirements. \end{shortsolution} \end{subproblem} \begin{subproblem} Degree 4 and has zeros at $0$, $4$, $5$, $-3$. \begin{shortsolution} Possible option: $p(x)=x(x-4)(x-5)(x+3)$. Note we could multiply $p$ by any real number, and still meet the requirements. \end{shortsolution} \end{subproblem} \begin{subproblem} Degree 4, with zeros that make the graph cut at $2$, $-5$, and a zero that makes the graph touch at $-2$; \begin{shortsolution} Possible option: $p(x)=(x-2)(x+5)(x+2)^2$. Note we could multiply $p$ by any real number, and still meet the requirements. \end{shortsolution} \end{subproblem} \begin{subproblem} Degree 3, with only one zero at $-1$. \begin{shortsolution} Possible option: $p(x)=(x+1)^3$. Note we could multiply $p$ by any real number, and still meet the requirements. \end{shortsolution} \end{subproblem} \end{problem} %=================================== % Author: Hughes % Date: June 2012 %=================================== \begin{problem}[\Cref{poly:step:last}] \pccname{Saheed} is graphing a polynomial function, $p$. He is following \crefrange{poly:step:first}{poly:step:last} and has so far marked the zeros of $p$ on \cref{poly:fig:optionsp1}. Saheed tells you that $p$ has degree $3$, but does \emph{not} say if the leading coefficient of $p$ is positive or negative. \begin{figure}[!htbp] \begin{widepage} \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-15}, width=\textwidth, height=.5\textwidth, ] \addplot[soldot] coordinates{(-5,0)(2,0)(6,0)}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:optionsp1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-15}, width=\textwidth, height=.5\textwidth, ] \addplot[soldot] coordinates{(-5,0)(6,0)}; \end{axis} \end{tikzpicture} \caption{} \label{poly:fig:optionsp2} \end{subfigure}% \caption{} \end{widepage} \end{figure} \begin{subproblem} Use the information in \cref{poly:fig:optionsp1} to help sketch $p$, assuming that the leading coefficient is positive. 
\begin{shortsolution}
Assuming that $a_3>0$:
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=10,
xtick={-8,-6,...,8},
ytick={-15},
width=\solutionfigurewidth,
]
\addplot expression[domain=-6.78179:8.35598,samples=50]{1/20*(x+5)*(x-2)*(x-6)};
\addplot[soldot] coordinates{(-5,0)(2,0)(6,0)};
\end{axis}
\end{tikzpicture}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Use the information in \cref{poly:fig:optionsp1} to help sketch $p$, assuming that the leading coefficient is negative.
\begin{shortsolution}
Assuming that $a_3<0$:
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=10,
xtick={-8,-6,...,8},
ytick={-15},
width=\solutionfigurewidth,
]
\addplot expression[domain=-6.78179:8.35598,samples=50]{-1/20*(x+5)*(x-2)*(x-6)};
\addplot[soldot] coordinates{(-5,0)(2,0)(6,0)};
\end{axis}
\end{tikzpicture}
\end{shortsolution}
\end{subproblem}
Saheed now turns his attention to another polynomial function, $q$. He finds the zeros of $q$ (there are only $2$) and marks them on \cref{poly:fig:optionsp2}. Saheed knows that $q$ has degree $3$, but doesn't know if the leading coefficient is positive or negative.
\begin{subproblem}
Use the information in \cref{poly:fig:optionsp2} to help sketch $q$, assuming that the leading coefficient of $q$ is positive. Hint: only one of the zeros is simple.
\begin{shortsolution}
Assuming that $a_3>0$ there are $2$ different options:
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=10,
xtick={-8,-6,...,8},
ytick={-15},
width=\solutionfigurewidth,
]
\addplot expression[domain=-8.68983:7.31809,samples=50]{1/20*(x+5)^2*(x-6)};
\addplot expression[domain=-6.31809:9.68893,samples=50]{1/20*(x+5)*(x-6)^2};
\addplot[soldot] coordinates{(-5,0)(6,0)};
\end{axis}
\end{tikzpicture}
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Use the information in \cref{poly:fig:optionsp2} to help sketch $q$, assuming that the leading coefficient of $q$ is negative.
\begin{shortsolution}
Assuming that $a_3<0$ there are $2$ different options:
\begin{tikzpicture}
\begin{axis}[
xmin=-10,xmax=10,
ymin=-10,ymax=10,
xtick={-8,-6,...,8},
ytick={-15},
width=\solutionfigurewidth,
]
\addplot expression[domain=-8.68983:7.31809,samples=50]{-1/20*(x+5)^2*(x-6)};
\addplot expression[domain=-6.31809:9.68893,samples=50]{-1/20*(x+5)*(x-6)^2};
\addplot[soldot] coordinates{(-5,0)(6,0)};
\end{axis}
\end{tikzpicture}
\end{shortsolution}
\end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{problem}[Zeros]
Find all zeros of each of the following polynomial functions, making sure to detail their multiplicity. Note that you may need to use factoring, or the quadratic formula, or both! Also note that some zeros may be repeated, and some may be complex.
\begin{multicols}{3}
\begin{subproblem}
$p(x)=x^2+1$
\begin{shortsolution}
$\pm i$ (simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$q(y)=(y^2-9)(y^2-7)$
\begin{shortsolution}
$\pm 3$, $\pm \sqrt{7}$ (all are simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$r(z)=-4z^3(z^2+3)(z^2+64)$
\begin{shortsolution}
$0$ (multiplicity $3$), $\pm i\sqrt{3}$ (simple), $\pm 8i$ (simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$a(x)=x^4-81$
\begin{shortsolution}
$\pm 3$, $\pm 3i$ (all are simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$b(y)=y^3-8$
\begin{shortsolution}
$2$, $-1\pm i\sqrt{3}$ (all are simple).
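One route to the last pair of values, in case it is not clear: factor the difference of cubes, $b(y)=(y-2)(y^2+2y+4)$, and then apply the quadratic formula to $y^2+2y+4=0$, which gives
\[
y=\frac{-2\pm\sqrt{4-16}}{2}=-1\pm i\sqrt{3}
\]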
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$c(m)=m^3-m^2$
\begin{shortsolution}
$0$ (multiplicity $2$), $1$ (simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$h(n)=(n+1)(n^2+4)$
\begin{shortsolution}
$-1$, $\pm 2i$ (all are simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$f(\alpha)=(\alpha^2-16)(\alpha^2-5\alpha+4)$
\begin{shortsolution}
$-4$ (simple), $4$ (multiplicity $2$), $1$ (simple).
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$g(\beta)=(\beta^2-25)(\beta^2-5\beta-4)$
\begin{shortsolution}
$\pm 5$, $\dfrac{5\pm\sqrt{41}}{2}$ (all are simple).
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{problem}[Given zeros, find a formula]
In each of the following problems you are given the zeros of a polynomial. Write a possible formula for each polynomial| you may leave your answer in factored form, but it may not contain complex numbers. Unless otherwise stated, assume that the zeros are simple.
\begin{multicols}{3}
\begin{subproblem}
$1$, $2$
\begin{shortsolution}
$p(x)=(x-1)(x-2)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$0$, $5$, $13$
\begin{shortsolution}
$p(x)=x(x-5)(x-13)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$-7$, $2$ (multiplicity $3$), $5$
\begin{shortsolution}
$p(x)=(x+7)(x-2)^3(x-5)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$0$, $\pm i$
\begin{shortsolution}
$p(x)=x(x^2+1)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$\pm 2i$, $\pm 7$
\begin{shortsolution}
$p(x)=(x^2+4)(x^2-49)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$-2\pm i\sqrt{6}$
\begin{shortsolution}
$p(x)=(x-(-2+i\sqrt{6}))(x-(-2-i\sqrt{6}))=(x+2)^2+6=x^2+4x+10$
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{problem}[Composition of polynomials]
Let $p$ and $q$ be polynomial functions that have formulas
\[
p(x)=(x+1)(x+2)(x+5), \qquad q(x)=3-x^4
\]
Evaluate each of the following.
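(If a reminder of how function composition works would help before starting, here is one evaluation at an input that does not appear below: $(p\circ q)(2)$ means $p(q(2))$; since $q(2)=3-2^4=-13$, we have $(p\circ q)(2)=p(-13)=(-12)(-11)(-8)=-1056$.)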
\begin{multicols}{4}
\begin{subproblem}
$(p\circ q)(0)$
\begin{shortsolution}
$160$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(q\circ p)(0)$
\begin{shortsolution}
$-9997$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(p\circ q)(1)$
\begin{shortsolution}
$84$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(p\circ p)(0)$
\begin{shortsolution}
$1980$
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: June 2012
%===================================
\begin{problem}[Piecewise polynomial functions]
Let $P$ be the piecewise-defined function with formula
\[
P(x)=\begin{cases}
(1-x)(2x+5)(x^2+1), & x\leq -3 \\
4-x^2, & -3<x<4 \\
x^3, & x\geq 4
\end{cases}
\]
Evaluate each of the following.
\begin{multicols}{5}
\begin{subproblem}
$P(-4)$
\begin{shortsolution}
$-255$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$P(0)$
\begin{shortsolution}
$4$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$P(4)$
\begin{shortsolution}
$64$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$P(-3)$
\begin{shortsolution}
$-40$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(P\circ P)(0)$
\begin{shortsolution}
$64$
\end{shortsolution}
\end{subproblem}
\end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: July 2012
%===================================
\begin{problem}[Function algebra]
Let $p$ and $q$ be the polynomial functions that have formulas
\[
p(x)=x(x+1)(x-3)^2, \qquad q(x)=7-x^2
\]
Evaluate each of the following (if possible).
\begin{multicols}{4}
\begin{subproblem}
$(p+q)(1)$
\begin{shortsolution}
$14$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(p-q)(0)$
\begin{shortsolution}
$-7$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$(p\cdot q)(\sqrt{7})$
\begin{shortsolution}
$0$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$\left( \frac{q}{p} \right)(1)$
\begin{shortsolution}
$\frac{3}{4}$
\end{shortsolution}
\end{subproblem}
\end{multicols}
\begin{subproblem}
What is the domain of the function $\frac{q}{p}$?
\begin{shortsolution}
$(-\infty,-1)\cup (-1,0)\cup (0,3)\cup (3,\infty)$
\end{shortsolution}
\end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: July 2012
%===================================
\begin{problem}[Transformations: given the transformation, find the formula]
Let $p$ be the polynomial function that has formula
\[
p(x)=4x(x^2-1)(x+3)
\]
In each of the following problems apply the given transformation to the function $p$ and write a formula for the transformed version of $p$.
\begin{multicols}{2}
\begin{subproblem}
Shift $p$ to the right by $5$ units.
\begin{shortsolution}
$p(x-5)=4(x-5)(x-2)(x^2-10x+24)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Shift $p$ to the left by $6$ units.
\begin{shortsolution}
$p(x+6)=4(x+6)(x+9)(x^2+12x+35)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Shift $p$ up by $12$ units.
\begin{shortsolution}
$p(x)+12=4x(x^2-1)(x+3)+12$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Shift $p$ down by $2$ units.
\begin{shortsolution}
$p(x)-2=4x(x^2-1)(x+3)-2$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Reflect $p$ over the horizontal axis.
\begin{shortsolution}
$-p(x)=-4x(x^2-1)(x+3)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Reflect $p$ over the vertical axis.
\begin{shortsolution} $p(-x)=-4x(x^2-1)(3-x)$ \end{shortsolution} \end{subproblem} \end{multicols} \end{problem} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{problem}[Find a formula from a table]\label{poly:prob:findformula} \Crefrange{poly:tab:findformulap}{poly:tab:findformulas} show values of polynomial functions, $p$, $q$, $r$, and $s$. \begin{table}[!htb] \centering \begin{widepage} \caption{Tables for \cref{poly:prob:findformula}} \label{poly:tab:findformula} \begin{subtable}{.2\textwidth} \centering \caption{$y=p(x)$} \label{poly:tab:findformulap} \begin{tabular}{rr} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $-4$ & $-56$ \\\normalline $-3$ & $-18$ \\\normalline $-2$ & $0$ \\\normalline $-1$ & $4$ \\\normalline $0$ & $0$ \\\normalline $1$ & $-6$ \\\normalline $2$ & $-8$ \\\normalline $3$ & $0$ \\\normalline $4$ & $24$ \\\lastline \end{tabular} \end{subtable} \hfill \begin{subtable}{.2\textwidth} \centering \caption{$y=q(x)$} \label{poly:tab:findformulaq} \begin{tabular}{rr} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $-4$ & $-16$ \\\normalline $-3$ & $-3$ \\\normalline $-2$ & $0$ \\\normalline $-1$ & $-1$ \\\normalline $0$ & $0$ \\\normalline $1$ & $9$ \\\normalline $2$ & $32$ \\\normalline $3$ & $75$ \\\normalline $4$ & $144$ \\\lastline \end{tabular} \end{subtable} \hfill \begin{subtable}{.2\textwidth} \centering \caption{$y=r(x)$} \label{poly:tab:findformular} \begin{tabular}{rr} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $-4$ & $105$ \\\normalline $-3$ & $0$ \\\normalline $-2$ & $-15$ \\\normalline $-1$ & $0$ \\\normalline $0$ & $9$ \\\normalline $1$ & $0$ \\\normalline $2$ & $-15$ \\\normalline $3$ & $0$ \\\normalline $4$ & $105$ \\\lastline \end{tabular} \end{subtable} \hfill \begin{subtable}{.2\textwidth} \centering \caption{$y=s(x)$} \label{poly:tab:findformulas} \begin{tabular}{rr} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $-4$ & $75$ \\\normalline $-3$ & $0$ \\\normalline $-2$ & $-9$ \\\normalline $-1$ & $0$ \\\normalline $0$ & $3$ \\\normalline $1$ & $0$ \\\normalline $2$ & $15$ \\\normalline $3$ & $96$ \\\normalline $4$ & $760$ \\\lastline \end{tabular} \end{subtable} \end{widepage} \end{table} \begin{subproblem} Assuming that all of the zeros of $p$ are shown (in \cref{poly:tab:findformulap}), how many zeros does $p$ have? \begin{shortsolution} $p$ has 3 zeros. \end{shortsolution} \end{subproblem} \begin{subproblem} What is the degree of $p$? \begin{shortsolution} $p$ is degree 3. \end{shortsolution} \end{subproblem} \begin{subproblem} Write a formula for $p(x)$. \begin{shortsolution} $p(x)=x(x+2)(x-3)$ \end{shortsolution} \end{subproblem} \begin{subproblem} Assuming that all of the zeros of $q$ are shown (in \cref{poly:tab:findformulaq}), how many zeros does $q$ have? \begin{shortsolution} $q$ has 2 zeros. \end{shortsolution} \end{subproblem} \begin{subproblem} Describe the difference in behavior of $p$ and $q$ at $-2$. \begin{shortsolution} $p$ changes sign at $-2$, and $q$ does not change sign at $-2$. \end{shortsolution} \end{subproblem} \begin{subproblem} Given that $q$ is a degree-$3$ polynomial, write a formula for $q(x)$. \begin{shortsolution} $q(x)=x(x+2)^2$ \end{shortsolution} \end{subproblem} \begin{subproblem} Assuming that all of the zeros of $r$ are shown (in \cref{poly:tab:findformular}), find a formula for $r(x)$. 
\begin{shortsolution}
$r(x)=(x+3)(x+1)(x-1)(x-3)$
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
Assuming that all of the zeros of $s$ are shown (in \cref{poly:tab:findformulas}), find a formula for $s(x)$.
\begin{shortsolution}
$s(x)=(x+3)(x+1)(x-1)^2$
\end{shortsolution}
\end{subproblem}
\end{problem}
\end{exercises}
\section{Rational functions}
\subsection*{Power functions with negative exponents}
The study of rational functions will rely upon a good knowledge of power functions with negative exponents. \Cref{rat:ex:oddpow,rat:ex:evenpow} are simple but fundamental to understanding the behavior of rational functions.
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{pccexample}[Power functions with odd negative exponents]\label{rat:ex:oddpow}
Graph each of the following functions on your calculator, state their domain in interval notation, and their behavior as $x\rightarrow 0^-$ and $x\rightarrow 0^+$.
\[
f(x)=\frac{1}{x},\qquad g(x)=\dfrac{1}{x^3},\qquad h(x)=\dfrac{1}{x^5}
\]
\begin{pccsolution}
The functions $f$, $g$, and $h$ are plotted in \cref{rat:fig:oddpow}. The domain of each of the functions $f$, $g$, and $h$ is $(-\infty,0)\cup (0,\infty)$. Note that the long-run behavior of each of the functions is the same, and in particular
\begin{align*}
f(x)\rightarrow 0 & \text{ as } x\rightarrow\infty \\
\mathllap{\text{and }} f(x)\rightarrow 0 & \text{ as } x\rightarrow-\infty
\end{align*}
The same results hold for $g$ and $h$; in other words, each of the functions has a \emph{horizontal asymptote} that has equation $y=0$. Note also that each of the functions has a \emph{vertical asymptote} at $0$. We see that
\begin{align*}
f(x)\rightarrow -\infty & \text{ as } x\rightarrow 0^- \\
\mathllap{\text{and }} f(x)\rightarrow \infty & \text{ as } x\rightarrow 0^+
\end{align*}
The same results hold for $g$ and $h$. The curve of a function that has a vertical asymptote is necessarily separated into \emph{branches}| each of the functions $f$, $g$, and $h$ has $2$ branches.
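A few sample evaluations (the inputs here are chosen purely for illustration) show how quickly the size of the outputs grows as the inputs approach $0$: $f(0.1)=10$, $g(0.1)=1000$, and $h(0.1)=100000$, whereas $f(-0.1)=-10$, $g(-0.1)=-1000$, and $h(-0.1)=-100000$.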
\end{pccsolution} \end{pccexample} \begin{figure}[!htb] \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-3,xmax=3, ymin=-5,ymax=5, xtick={-2,-1,...,2}, minor ytick={-3,-1,...,3}, grid=both, width=\textwidth, legend pos=north west, ] \addplot expression[domain=-3:-0.2]{1/x}; \addplot expression[domain=-3:-0.584]{1/x^3}; \addplot expression[domain=-3:-0.724]{1/x^5}; \addplot expression[domain=0.2:3]{1/x}; \addplot expression[domain=0.584:3]{1/x^3}; \addplot expression[domain=0.724:3]{1/x^5}; \addplot[soldot]coordinates{(-1,-1)}node[axisnode,anchor=north east]{$(-1,-1)$}; \addplot[soldot]coordinates{(1,1)}node[axisnode,anchor=south west]{$(1,1)$}; \legend{$f$,$g$,$h$} \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:oddpow} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-3,xmax=3, ymin=-5,ymax=5, xtick={-2,-1,...,2}, minor ytick={-3,-1,...,3}, grid=both, width=\textwidth, legend pos=south east, ] \addplot expression[domain=-3:-0.447]{1/x^2}; \addplot expression[domain=-3:-0.668]{1/x^4}; \addplot expression[domain=-3:-0.764]{1/x^6}; \addplot expression[domain=0.447:3]{1/x^2}; \addplot expression[domain=0.668:3]{1/x^4}; \addplot expression[domain=0.764:3]{1/x^6}; \addplot[soldot]coordinates{(-1,1)}node[axisnode,anchor=south east]{$(-1,1)$}; \addplot[soldot]coordinates{(1,1)}node[axisnode,anchor=south west]{$(1,1)$}; \legend{$F$,$G$,$H$} \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:evenpow} \end{minipage}% \end{figure} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccexample}[Power functions with even negative exponents]\label{rat:ex:evenpow}% Graph each of the following functions, state their domain, and their behavior as $x\rightarrow 0^-$ and $x\rightarrow 0^+$. \[ f(x)=\frac{1}{x^2},\qquad g(x)=\frac{1}{x^4},\qquad h(x)=\frac{1}{x^6} \] \begin{pccsolution} The functions $F$, $G$, and $H$ are plotted in \cref{rat:fig:evenpow}. The domain of each of the functions $F$, $G$, and $H$ is $(-\infty,0)\cup (0,\infty)$. Note that the long-run behavior of each of the functions is the same, and in particular \begin{align*} F(x)\rightarrow 0 & \text{ as } x\rightarrow\infty \\ \mathllap{\text{and }} f(x)\rightarrow 0 & \text{ as } x\rightarrow-\infty \end{align*} As in \cref{rat:ex:oddpow}, $F$ has a horizontal asymptote that has equation $y=0$. The same results hold for $G$ and $H$. Note also that each of the functions has a \emph{vertical asymptote} at $0$. We see that \begin{align*} F(x)\rightarrow \infty & \text{ as } x\rightarrow 0^- \\ \mathllap{\text{and }} F(x)\rightarrow \infty & \text{ as } x\rightarrow 0^+ \end{align*} The same results hold for $G$ and $H$. Each of the functions $F$, $G$, and $H$ have $2$ branches. \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{doyouunderstand} \begin{problem} Repeat \cref{rat:ex:oddpow,rat:ex:evenpow} using (respectively) \begin{subproblem} $k(x)=-\dfrac{1}{x}$, $ m(x)=-\dfrac{1}{x^3}$, $ n(x)=-\dfrac{1}{x^5}$ \begin{shortsolution} The functions $k$, $m$, and $n$ have domain $(-\infty,0)\cup (0,\infty)$, and are graphed below. 
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-3,xmax=3,
ymin=-5,ymax=5,
xtick={-2,-1,...,2},
minor ytick={-3,-1,...,3},
grid=both,
width=\solutionfigurewidth,
legend pos=north east,
]
\addplot expression[domain=-3:-0.2]{-1/x};
\addplot expression[domain=-3:-0.584]{-1/x^3};
\addplot expression[domain=-3:-0.724]{-1/x^5};
\addplot expression[domain=0.2:3]{-1/x};
\addplot expression[domain=0.584:3]{-1/x^3};
\addplot expression[domain=0.724:3]{-1/x^5};
\legend{$k$,$m$,$n$}
\end{axis}
\end{tikzpicture}
Note that
\begin{align*}
k(x)\rightarrow 0 & \text{ as } x\rightarrow\infty \\
\mathllap{\text{and }} k(x)\rightarrow 0 & \text{ as } x\rightarrow-\infty \\
\intertext{and also}
k(x)\rightarrow \infty & \text{ as } x\rightarrow 0^- \\
\mathllap{\text{and }} k(x)\rightarrow -\infty & \text{ as } x\rightarrow 0^+
\end{align*}
The same results hold for $m$ and $n$.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}
$ K(x)=-\dfrac{1}{x^2}$, $ M(x)=-\dfrac{1}{x^4}$, $ N(x)=-\dfrac{1}{x^6}$
\begin{shortsolution}
The functions $K$, $M$, and $N$ have domain $(-\infty,0)\cup (0,\infty)$, and are graphed below.
\begin{tikzpicture}
\begin{axis}[
framed,
xmin=-3,xmax=3,
ymin=-5,ymax=5,
xtick={-2,-1,...,2},
minor ytick={-3,-1,...,3},
grid=both,
width=\solutionfigurewidth,
legend pos=north east,
]
\addplot expression[domain=-3:-0.447]{-1/x^2};
\addplot expression[domain=-3:-0.668]{-1/x^4};
\addplot expression[domain=-3:-0.764]{-1/x^6};
\addplot expression[domain=0.447:3]{-1/x^2};
\addplot expression[domain=0.668:3]{-1/x^4};
\addplot expression[domain=0.764:3]{-1/x^6};
\legend{$K$,$M$,$N$}
\end{axis}
\end{tikzpicture}
Note that
\begin{align*}
K(x)\rightarrow 0 & \text{ as } x\rightarrow\infty \\
\mathllap{\text{and }} K(x)\rightarrow 0 & \text{ as } x\rightarrow-\infty \\
\intertext{and also}
K(x)\rightarrow -\infty & \text{ as } x\rightarrow 0^- \\
\mathllap{\text{and }} K(x)\rightarrow -\infty & \text{ as } x\rightarrow 0^+
\end{align*}
The same results hold for $M$ and $N$.
\end{shortsolution}
\end{subproblem}
\end{problem}
\end{doyouunderstand}
\subsection*{Rational functions}
\begin{pccdefinition}[Rational functions]\label{rat:def:function}
Rational functions have the form
\[
r(x) = \frac{p(x)}{q(x)}
\]
where both $p$ and $q$ are polynomials. Note that
\begin{itemize}
\item the domain of $r$ will be all real numbers, except those that make the \emph{denominator}, $q(x)$, equal to $0$;
\item the zeros of $r$ are the zeros of $p$, i.e.\ the real numbers that make the \emph{numerator}, $p(x)$, equal to $0$.
\end{itemize}
\Cref{rat:ex:oddpow,rat:ex:evenpow} are particularly important because $r$ will behave like $\frac{1}{x}$, or $\frac{1}{x^2}$ around its vertical asymptotes, depending on the power that the relevant term is raised to| we will demonstrate this in what follows.
\end{pccdefinition}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{pccexample}[Rational or not]
Identify whether each of the following functions is rational or not. If the function is rational, state the domain.
\begin{multicols}{3}
\begin{enumerate}
\item $r(x)=\dfrac{1}{x}$
\item $f(x)=2^x+3$
\item $g(x)=19$
\item $h(x)=\dfrac{3+x}{4-x}$
\item $k(x)=\dfrac{x^3+2x}{x-15}$
\item $l(x)=9-4x$
\item $m(x)=\dfrac{x+5}{(x-7)(x+9)}$
\item $n(x)=x^2+6x+7$
\item $q(x)=1-\dfrac{3}{x+1}$
\end{enumerate}
\end{multicols}
\begin{pccsolution}
\begin{enumerate}
\item $r$ is rational; the domain of $r$ is $(-\infty,0)\cup(0,\infty)$.
\item $f$ is not rational.
\item $g$ is not rational; $g$ is constant.
\item $h$ is rational; the domain of $h$ is $(-\infty,4)\cup(4,\infty)$.
\item $k$ is rational; the domain of $k$ is $(-\infty,15)\cup(15,\infty)$.
\item $l$ is not rational; $l$ is linear.
\item $m$ is rational; the domain of $m$ is $(-\infty,-9)\cup(-9,7)\cup(7,\infty)$.
\item $n$ is not rational; $n$ is quadratic (or you might describe $n$ as a polynomial).
\item $q$ is rational; the domain of $q$ is $(-\infty,-1)\cup (-1,\infty)$.
\end{enumerate}
\end{pccsolution}
\end{pccexample}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{pccexample}[Match formula to graph]
Each of the following functions is graphed in \cref{rat:fig:whichiswhich}. Which is which?
\[
r(x)=\frac{1}{x-3}, \qquad q(x)=\frac{x-2}{x+5}, \qquad k(x)=\frac{1}{(x+2)(x-3)}
\]
\begin{figure}[!htb]
\setlength{\figurewidth}{0.3\textwidth}
\begin{subfigure}{\figurewidth}
\begin{tikzpicture}[/pgf/declare function={f=(x-2)/(x+5);}]
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-6,ymax=6,
xtick={-8,-6,...,8},
minor ytick={-4,-3,...,4},
grid=both,
width=\textwidth,
]
\addplot[pccplot] expression[domain=-10:-6.37]{f};
\addplot[pccplot] expression[domain=-3.97:10]{f};
\addplot[soldot] coordinates{(2,0)};
\addplot[asymptote,domain=-6:6]({-5},{x});
\end{axis}
\end{tikzpicture}
\caption{}
\label{rat:fig:which1}
\end{subfigure}
\hfill
\begin{subfigure}{\figurewidth}
\begin{tikzpicture}[/pgf/declare function={f=1/(x-3);}]
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-5,ymax=6,
xtick={-8,-6,...,8},
ytick={-4,4},
minor ytick={-3,...,5},
grid=both,
width=\textwidth,
]
\addplot[pccplot] expression[domain=-10:2.8]{f};
\addplot[pccplot] expression[domain=3.17:10]{f};
\addplot[asymptote,domain=-6:6]({3},{x});
\end{axis}
\end{tikzpicture}
\caption{}
\label{rat:fig:which2}
\end{subfigure}
\hfill
\begin{subfigure}{\figurewidth}
\begin{tikzpicture}[/pgf/declare function={f=1/((x-3)*(x+2));}]
\begin{axis}[
framed,
xmin=-10,xmax=10,
ymin=-5,ymax=5,
xtick={-8,-6,...,8},
ytick={-4,4},
minor ytick={-3,...,3},
grid=both,
width=\textwidth,
]
\addplot[pccplot] expression[domain=-10:-2.03969]{f};
\addplot[pccplot] expression[domain=-1.95967:2.95967]{f};
\addplot[pccplot] expression[domain=3.03969:10]{f};
\addplot[asymptote,domain=-5:5]({-2},{x});
\addplot[asymptote,domain=-5:5]({3},{x});
\end{axis}
\end{tikzpicture}
\caption{}
\label{rat:fig:which3}
\end{subfigure}
\caption{}
\label{rat:fig:whichiswhich}
\end{figure}
\begin{pccsolution}
Let's start with the function $r$. Note that the domain of $r$ is $(-\infty,3)\cup(3,\infty)$, so we search for a function that has a vertical asymptote at $3$. There are two possible choices: the functions graphed in \cref{rat:fig:which2,rat:fig:which3}, but note that the function in \cref{rat:fig:which3} also has a vertical asymptote at $-2$, which is not consistent with the formula for $r(x)$. Therefore, $y=r(x)$ is graphed in \cref{rat:fig:which2}.
The function $q$ has domain $(-\infty,-5)\cup(-5,\infty)$, so we search for a function that has a vertical asymptote at $-5$. The only candidate is the curve shown in \cref{rat:fig:which1}; note that the curve also goes through $(2,0)$, which is consistent with the formula for $q(x)$, since $q(2)=0$, i.e.\ $q$ has a zero at $2$.
The function $k$ has domain $(-\infty,-2)\cup(-2,3)\cup(3,\infty)$, and has vertical asymptotes at $-2$ and $3$. This is consistent with the graph in \cref{rat:fig:which3} (and is the only curve that has $3$ branches).
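As an optional extra check (not needed to complete the matching), the vertical intercepts computed from the formulas are also consistent with the three graphs: $r(0)=-\frac{1}{3}$, $q(0)=-\frac{2}{5}$, and $k(0)=-\frac{1}{6}$.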
We note that each function behaves like $\frac{1}{x}$ around its vertical asymptotes, because each linear factor in each denominator is raised to the power $1$; if (for example) the definition of $r$ was instead \[ r(x)=\frac{1}{(x-3)^2} \] then we would see that $r$ behaves like $\frac{1}{x^2}$ around its vertical asymptote, and the graph of $r$ would be very different. We will deal with these cases in the examples that follow. \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccexample}[Repeated factors in the denominator] Consider the functions $f$, $g$, and $h$ that have formulas \[ f(x)=\frac{x-2}{(x-3)(x+2)}, \qquad g(x)=\frac{x-2}{(x-3)^2(x+2)}, \qquad h(x)=\frac{x-2}{(x-3)(x+2)^2} \] which are graphed in \cref{rat:fig:repfactd}. Note that each function has $2$ vertical asymptotes, and the domain of each function is \[ (-\infty,-2)\cup(-2,3)\cup(3,\infty) \] so we are not surprised to see that each curve has $3$ branches. We also note that the numerator of each function is the same, which tells us that each function has only $1$ zero at $2$. The functions $g$ and $h$ are different from those that we have considered previously, because they have a repeated factor in the denominator. Notice in particular the way that the functions behave around their asymptotes: \begin{itemize} \item $f$ behaves like $\frac{1}{x}$ around both of its asymptotes; \item $g$ behaves like $\frac{1}{x}$ around $-2$, and like $\frac{1}{x^2}$ around $3$; \item $h$ behaves like $\frac{1}{x^2}$ around $-2$, and like $\frac{1}{x}$ around $3$. \end{itemize} \end{pccexample} \begin{figure}[!htb] \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=(x-2)/((x+2)*(x-3));}] \begin{axis}[ % framed, xmin=-5,xmax=5, ymin=-4,ymax=4, xtick={-4,-2,...,4}, ytick={-2,2}, % grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-5:-2.201]{f}; \addplot[pccplot] expression[domain=-1.802:2.951]{f}; \addplot[pccplot] expression[domain=3.052:5]{f}; \addplot[soldot] coordinates{(2,0)}; % \addplot[asymptote,domain=-6:6]({-2},{x}); % \addplot[asymptote,domain=-6:6]({3},{x}); \end{axis} \end{tikzpicture} \caption{$y=\dfrac{x-2}{(x+2)(x-3)}$} \label{rat:fig:repfactd1} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=(x-2)/((x+2)*(x-3)^2);}] \begin{axis}[ % framed, xmin=-5,xmax=5, ymin=-4,ymax=4, xtick={-4,-2,...,4}, ytick={-2,2}, % grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-5:-2.039]{f}; \addplot[pccplot] expression[domain=-1.959:2.796]{f}; \addplot[pccplot] expression[domain=3.243:5]{f}; \addplot[soldot] coordinates{(2,0)}; % \addplot[asymptote,domain=-4:4]({-2},{x}); % \addplot[asymptote,domain=-4:4]({3},{x}); \end{axis} \end{tikzpicture} \caption{$y=\dfrac{x-2}{(x+2)(x-3)^2}$} \label{rat:fig:repfactd2} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=(x-2)/((x+2)^2*(x-3));}] \begin{axis}[ % framed, xmin=-5,xmax=5, ymin=-4,ymax=4, xtick={-4,-2,...,2}, ytick={-2,2}, % grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-5:-2.451]{f}; \addplot[pccplot] expression[domain=-1.558:2.990]{f}; \addplot[pccplot] expression[domain=3.010:6]{f}; \addplot[soldot] coordinates{(2,0)}; % \addplot[asymptote,domain=-4:4]({-2},{x}); % \addplot[asymptote,domain=-4:4]({3},{x}); \end{axis} \end{tikzpicture} 
  \caption{$y=\dfrac{x-2}{(x+2)^2(x-3)}$}
  \label{rat:fig:repfactd3}
  \end{subfigure}
  \caption{}
  \label{rat:fig:repfactd}
\end{figure}

\Cref{rat:def:function} says that the zeros of the rational function $r$ that has formula $r(x)=\frac{p(x)}{q(x)}$ are the zeros of $p$. Let's explore this a little more.
%===================================
% Author: Hughes
% Date: May 2012
%===================================
\begin{pccexample}[Zeros]
  Find the zeros of each of the following functions
  \[
    \alpha(x)=\frac{x+5}{3x-7}, \qquad \beta(x)=\frac{9-x}{x+1}, \qquad \gamma(x)=\frac{17x^2-10}{2x+1}
  \]
  \begin{pccsolution}
    We find the zeros of each function in turn by setting the numerator equal to $0$. The zeros of $\alpha$ are found by solving
    \[
      x+5=0
    \]
    The zero of $\alpha$ is $-5$.

    Similarly, we may solve $9-x=0$ to find the zero of $\beta$, which is clearly $9$.

    The zeros of $\gamma$ satisfy the equation
    \[
      17x^2-10=0
    \]
    which we can solve using the square root property to obtain
    \[
      x=\pm\sqrt{\frac{10}{17}}
    \]
    The zeros of $\gamma$ are $\pm\sqrt{\dfrac{10}{17}}$.
  \end{pccsolution}
\end{pccexample}

\subsection*{Long-run behavior}
Our focus so far has been on the behavior of rational functions around their \emph{vertical} asymptotes. In fact, rational functions also have interesting long-run behavior around their \emph{horizontal} or \emph{oblique} asymptotes. A rational function will always have either a horizontal or an oblique asymptote| the case is determined by the degree of the numerator and the degree of the denominator.

\begin{pccdefinition}[Long-run behavior]\label{rat:def:longrun}
  Let $r$ be the rational function that has formula
  \[
    r(x) = \frac{a_n x^n + a_{n-1}x^{n-1}+\ldots + a_0}{b_m x^m + b_{m-1}x^{m-1}+\ldots+b_0}
  \]
  We can classify the long-run behavior of the rational function $r$ according to the following criteria:
  \begin{itemize}
    \item if $n<m$ then $r$ has a horizontal asymptote with equation $y=0$;
    \item if $n=m$ then $r$ has a horizontal asymptote with equation $y=\dfrac{a_n}{b_m}$;
    \item if $n>m$ then $r$ will have an oblique asymptote as $x\rightarrow\pm\infty$ (more on this in \cref{rat:sec:oblique})
  \end{itemize}
\end{pccdefinition}
We will concentrate on functions that have horizontal asymptotes until we reach \cref{rat:sec:oblique}.
%===================================
% Author: Hughes
% Date: May 2012
%===================================
\begin{pccexample}[Long-run behavior graphically]\label{rat:ex:horizasymp}
  \pccname{Kebede} has graphed the following functions in his graphing calculator
  \[
    r(x)=\frac{x+1}{x-3}, \qquad s(x)=\frac{2(x+1)}{x-3}, \qquad t(x)=\frac{3(x+1)}{x-3}
  \]
  and obtained the curves shown in \cref{rat:fig:horizasymp}. Kebede decides to test his knowledgeable friend \pccname{Oscar}, and asks him to match the formulas to the graphs.
\begin{figure}[!htb] \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=2*(x+1)/(x-3);}] \begin{axis}[ framed, xmin=-15,xmax=15, ymin=-6,ymax=6, xtick={-12,-8,...,12}, minor ytick={-4,-3,...,4}, grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-15:2]{f}; \addplot[pccplot] expression[domain=5:15]{f}; \addplot[soldot] coordinates{(-1,0)}; \addplot[asymptote,domain=-6:6]({3},{x}); \addplot[asymptote,domain=-15:15]({x},{2}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:horizasymp1} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=(x+1)/(x-3);}] \begin{axis}[ framed, xmin=-15,xmax=15, ymin=-6,ymax=6, xtick={-12,-8,...,12}, minor ytick={-4,-3,...,4}, grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-15:2.42857,samples=50]{f}; \addplot[pccplot] expression[domain=3.8:15,samples=50]{f}; \addplot[soldot] coordinates{(-1,0)}; \addplot[asymptote,domain=-6:6]({3},{x}); \addplot[asymptote,domain=-15:15]({x},{1}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:horizasymp2} \end{subfigure} \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture}[/pgf/declare function={f=3*(x+1)/(x-3);}] \begin{axis}[ framed, xmin=-15,xmax=15, ymin=-6,ymax=6, xtick={-12,-8,...,12}, minor ytick={-4,-3,...,4}, grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-15:1.6666,samples=50]{f}; \addplot[pccplot] expression[domain=7:15]{f}; \addplot[soldot] coordinates{(-1,0)}; \addplot[asymptote,domain=-6:6]({3},{x}); \addplot[asymptote,domain=-15:15]({x},{3}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:horizasymp3} \end{subfigure} \caption{Horizontal asymptotes} \label{rat:fig:horizasymp} \end{figure} Oscar notices that each function has a vertical asymptote at $3$ and a zero at $-1$. The main thing that catches Oscar's eye is that each function has a different coefficient in the numerator, and that each curve has a different horizontal asymptote. In particular, Oscar notes that \begin{itemize} \item the curve shown in \cref{rat:fig:horizasymp1} has a horizontal asymptote with equation $y=2$; \item the curve shown in \cref{rat:fig:horizasymp2} has a horizontal asymptote with equation $y=1$; \item the curve shown in \cref{rat:fig:horizasymp3} has a horizontal asymptote with equation $y=3$. \end{itemize} Oscar is able to tie it all together for Kebede by referencing \cref{rat:def:longrun}. He says that since the degree of the numerator and the degree of the denominator is the same for each of the functions $r$, $s$, and $t$, the horizontal asymptote will be determined by evaluating the ratio of their leading coefficients. Oscar therefore says that $r$ should have a horizontal asymptote $y=\frac{1}{1}=1$, $s$ should have a horizontal asymptote $y=\frac{2}{1}=2$, and $t$ should have a horizontal asymptote $y=\frac{3}{1}=3$. Kebede is able to finish the problem from here, and says that $r$ is shown in \cref{rat:fig:horizasymp2}, $s$ is shown in \cref{rat:fig:horizasymp1}, and $t$ is shown in \cref{rat:fig:horizasymp3}. \end{pccexample} %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}[Long-run behavior numerically] \pccname{Xiao} and \pccname{Dwayne} saw \cref{rat:ex:horizasymp} but are a little confused about horizontal asymptotes. What does it mean to say that a function $r$ has a horizontal asymptote? 
They decide to explore the concept by constructing a table of values for the rational functions $R$ and $S$ that have formulas \[ R(x)=\frac{-5(x+1)}{x-3}, \qquad S(x)=\frac{7(x-5)}{2(x+1)} \] In \cref{rat:tab:plusinfty} they model the behavior of $R$ and $S$ as $x\rightarrow\infty$, and in \cref{rat:tab:minusinfty} they model the behavior of $R$ and $S$ as $x\rightarrow-\infty$ by substituting very large values of $|x|$ into each function. \begin{table}[!htb] \begin{minipage}{.5\textwidth} \centering \caption{$R$ and $S$ as $x\rightarrow\infty$} \label{rat:tab:plusinfty} \begin{tabular}{crr} \beforeheading $x$ & $R(x)$ & $S(x)$ \\ \afterheading $1\times 10^2$ & $-5.20619$ & $3.29208$ \\\normalline $1\times 10^3$ & $-5.02006$ & $3.47902$ \\\normalline $1\times 10^4$ & $-5.00200$ & $3.49790$ \\\normalline $1\times 10^5$ & $-5.00020$ & $3.49979$ \\\normalline $1\times 10^6$ & $-5.00002$ & $3.49998$ \\\lastline \end{tabular} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \caption{$R$ and $S$ as $x\rightarrow-\infty$} \label{rat:tab:minusinfty} \begin{tabular}{crr} \beforeheading $x$ & $R(x)$ & $S(x)$ \\ \afterheading $-1\times 10^2$ & $-4.80583$ & $3.71212$ \\\normalline $-1\times 10^3$ & $-4.98006$ & $3.52102$ \\\normalline $-1\times 10^4$ & $-4.99800$ & $3.50210$ \\\normalline $-1\times 10^5$ & $-4.99980$ & $3.50021$ \\\normalline $-1\times 10^6$ & $-4.99998$ & $3.50002$ \\\lastline \end{tabular} \end{minipage} \end{table} Xiao and Dwayne study \cref{rat:tab:plusinfty,rat:tab:minusinfty} and decide that the functions $R$ and $S$ never actually touch their horizontal asymptotes, but they do get infinitely close. They also feel as if they have a better understanding of what it means to study the behavior of a function as $x\rightarrow\pm\infty$. \end{pccexample} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccexample}[Repeated factors in the numerator] Consider the functions $f$, $g$, and $h$ that have formulas \[ f(x)=\frac{(x-2)^2}{(x-3)(x+1)}, \qquad g(x)=\frac{x-2}{(x-3)(x+1)}, \qquad h(x)=\frac{(x-2)^3}{(x-3)(x+1)} \] which are graphed in \cref{rat:fig:repfactn}. We note that each function has vertical asymptotes at $-1$ and $3$, and so the domain of each function is \[ (-\infty,-1)\cup(-1,3)\cup(3,\infty) \] We also notice that the numerators of each function are quite similar| indeed, each function has a zero at $2$, but how does each function behave around their zero? Using \cref{rat:fig:repfactn} to guide us, we note that \begin{itemize} \item $f$ has a horizontal intercept $(2,0)$, but the curve of $f$ does not cut the horizontal axis| it bounces off it; \item $g$ also has a horizontal intercept $(2,0)$, and the curve of $g$ \emph{does} cut the horizontal axis; \item $h$ has a horizontal intercept $(2,0)$, and the curve of $h$ also cuts the axis, but appears flattened as it does so. \end{itemize} We can further enrich our study by discussing the long-run behavior of each function. Using the tools of \cref{rat:def:longrun}, we can deduce that \begin{itemize} \item $f$ has a horizontal asymptote with equation $y=1$; \item $g$ has a horizontal asymptote with equation $y=0$; \item $h$ does \emph{not} have a horizontal asymptote| it has an oblique asymptote (we'll study this more in \cref{rat:sec:oblique}). 
\end{itemize}
\end{pccexample}

\begin{figure}[!htb]
  \setlength{\figurewidth}{0.3\textwidth}
  \begin{subfigure}{\figurewidth}
    \begin{tikzpicture}[/pgf/declare function={f=(x-2)^2/((x+1)*(x-3));}]
      \begin{axis}[
          % framed,
          xmin=-5,xmax=5,
          ymin=-10,ymax=10,
          xtick={-4,-2,...,4},
          ytick={-8,-4,...,8},
          % grid=both,
          width=\figurewidth,
        ]
        \addplot[pccplot] expression[domain=-5:-1.248,samples=50]{f};
        \addplot[pccplot] expression[domain=-0.794:2.976,samples=50]{f};
        \addplot[pccplot] expression[domain=3.026:5,samples=50]{f};
        \addplot[soldot] coordinates{(2,0)};
        % \addplot[asymptote,domain=-6:6]({-1},{x});
        % \addplot[asymptote,domain=-6:6]({3},{x});
      \end{axis}
    \end{tikzpicture}
    \caption{$y=\dfrac{(x-2)^2}{(x+1)(x-3)}$}
    \label{rat:fig:repfactn1}
  \end{subfigure}
  \hfill
  \begin{subfigure}{\figurewidth}
    \begin{tikzpicture}[/pgf/declare function={f=(x-2)/((x+1)*(x-3));}]
      \begin{axis}[
          % framed,
          xmin=-5,xmax=5,
          ymin=-10,ymax=10,
          xtick={-4,-2,...,4},
          ytick={-8,-4,...,8},
          % grid=both,
          width=\figurewidth,
        ]
        \addplot[pccplot] expression[domain=-5:-1.075]{f};
        \addplot[pccplot] expression[domain=-0.925:2.975]{f};
        \addplot[pccplot] expression[domain=3.025:5]{f};
        \addplot[soldot] coordinates{(2,0)};
        % \addplot[asymptote,domain=-6:6]({-1},{x});
        % \addplot[asymptote,domain=-6:6]({3},{x});
      \end{axis}
    \end{tikzpicture}
    \caption{$y=\dfrac{x-2}{(x+1)(x-3)}$}
    \label{rat:fig:repfactn2}
  \end{subfigure}
  \hfill
  \begin{subfigure}{\figurewidth}
    \begin{tikzpicture}[/pgf/declare function={f=(x-2)^3/((x+1)*(x-3));}]
      \begin{axis}[
          % framed,
          xmin=-5,xmax=5,
          xtick={-8,-6,...,8},
          % grid=both,
          ymin=-30,ymax=30,
          width=\figurewidth,
        ]
        \addplot[pccplot] expression[domain=-5:-1.27]{f};
        \addplot[pccplot] expression[domain=-0.806:2.99185]{f};
        \addplot[pccplot] expression[domain=3.0085:5]{f};
        \addplot[soldot] coordinates{(2,0)};
        % \addplot[asymptote,domain=-30:30]({-1},{x});
        % \addplot[asymptote,domain=-30:30]({3},{x});
      \end{axis}
    \end{tikzpicture}
    \caption{$y=\dfrac{(x-2)^3}{(x+1)(x-3)}$}
    \label{rat:fig:repfactn3}
  \end{subfigure}
  \caption{}
  \label{rat:fig:repfactn}
\end{figure}

\subsection*{Holes}
Rational functions have a vertical asymptote at $a$ if the denominator is $0$ at $a$. What happens if the numerator is $0$ at the same place? In this case, we say that the rational function has a \emph{hole} at $a$.

\begin{pccdefinition}[Holes]
  The rational function
  \[
    r(x)=\frac{p(x)}{q(x)}
  \]
  has a hole at $a$ if $p(a)=q(a)=0$. Note that holes are different from vertical asymptotes. We represent that $r$ has a hole at the point $(a,r(a))$ on the curve $y=r(x)$ by using a hollow circle, $\circ$.
\end{pccdefinition}
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{pccexample}
  \pccname{Mohammed} and \pccname{Sue} have graphed the function $r$ that has formula
  \[
    r(x)=\frac{x^2+x-6}{(x-2)}
  \]
  in their calculators, and can not decide if the correct graph is \cref{rat:fig:hole} or \cref{rat:fig:hole1}. Luckily for them, Oscar is nearby, and can help them settle the debate. Oscar demonstrates that
  \begin{align*}
    r(x) & =\frac{(x+3)(x-2)}{(x-2)} \\
         & = x+3
  \end{align*}
  but only when $x\ne 2$, because the function is undefined at $2$. Oscar says that this necessarily means that the domain of $r$ is
  \[
    (-\infty,2)\cup(2,\infty)
  \]
  and that $r$ must have a hole at $2$. Mohammed and Sue are very grateful for the clarification, and conclude that the graph of $r$ is shown in \cref{rat:fig:hole1}.
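  To see exactly where the hole is, we can use the simplified formula: near $2$ the curve follows $y=x+3$, so the $y$-value associated with the hole is
  \[
    2+3=5
  \]
  which is why the hole in \cref{rat:fig:hole1} is drawn at the point $(2,5)$.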
\begin{figure}[!htb] \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-4,...,8}, ytick={-8,-4,...,8}, grid=both, width=\textwidth, ] \addplot expression[domain=-10:7]{x+3}; \addplot[soldot] coordinates{(-3,0)}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:hole} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-4,...,8}, ytick={-8,-4,...,8}, grid=both, width=\textwidth, ] \addplot expression[domain=-10:7]{x+3}; \addplot[holdot] coordinates{(2,5)}; \addplot[soldot] coordinates{(-3,0)}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:hole1} \end{minipage}% \end{figure} \end{pccexample} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{pccexample} Consider the function $f$ that has formula \[ f(x)=\frac{x(x+3)}{x^2-4x} \] The domain of $f$ is $(-\infty,0)\cup(0,4)\cup(4,\infty)$ because both $0$ and $4$ make the denominator equal to $0$. Notice that \begin{align*} f(x) & = \frac{x(x+3)}{x(x-4)} \\ & = \frac{x+3}{x-4} \end{align*} provided that $x\ne 0$. Since $0$ makes the numerator and the denominator 0 at the same time, we say that $f$ has a hole at $(0,-\nicefrac{3}{4})$. Note that this necessarily means that $f$ does not have a vertical intercept. We also note $f$ has a vertical asymptote at $4$; the function is graphed in \cref{rat:fig:holeex}. \begin{figure}[!htb] \centering \begin{tikzpicture}[/pgf/declare function={f=(x+3)/(x-4);}] \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-8,-6,...,8}, grid=both, ] \addplot[pccplot] expression[domain=-10:3.36364,samples=50]{f}; \addplot[pccplot] expression[domain=4.77:10]{f}; \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[holdot]coordinates{(0,-0.75)}; \addplot[soldot] coordinates{(-3,0)}; \end{axis} \end{tikzpicture} \caption{$y=\dfrac{x(x+3)}{x^2-4x}$} \label{rat:fig:holeex} \end{figure} \end{pccexample} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{pccexample}[Minimums and maximums] \pccname{Seamus} and \pccname{Trang} are discussing rational functions. Seamus says that if a rational function has a vertical asymptote, then it can not possibly have local minimums and maximums, nor can it have global minimums and maximums. Trang says this statement is not always true. She plots the functions $f$ and $g$ that have formulas \[ f(x)=-\frac{32(x-1)(x+1)}{(x-2)^2(x+2)^2}, \qquad g(x)=\frac{32(x-1)(x+1)}{(x-2)^2(x+2)^2} \] in \cref{rat:fig:minmax1,rat:fig:minmax2} and shows them to Seamus. On seeing the graphs, Seamus quickly corrects himself, and says that $f$ has a local (and global) maximum of $2$ at $0$, and that $g$ has a local (and global) minimum of $-2$ at $0$. 
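We can verify the value that Seamus quotes for $f$ by a direct computation (a similar computation works for $g$):
\begin{align*}
  f(0) & = -\frac{32(0-1)(0+1)}{(0-2)^2(0+2)^2} \\
       & = -\frac{32(-1)(1)}{(4)(4)} \\
       & = 2
\end{align*}
and, in the same way, $g(0)=-2$.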
\begin{figure}[!htb] \begin{minipage}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=-32*(x-1)*(x+1)/(( x-2)^2*(x+2)^2);}] \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-8,-6,...,8}, grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-10:-3.01]{f}; \addplot[pccplot] expression[domain=-1.45:1.45]{f}; \addplot[pccplot] expression[domain=3.01:10]{f}; \addplot[soldot] coordinates{(-1,0)(1,0)}; \end{axis} \end{tikzpicture} \caption{$y=f(x)$} \label{rat:fig:minmax1} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=32*(x-1)*(x+1)/(( x-2)^2*(x+2)^2);}] \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-8,-6,...,8}, grid=both, width=\textwidth, ] \addplot[pccplot] expression[domain=-10:-3.01]{f}; \addplot[pccplot] expression[domain=-1.45:1.45]{f}; \addplot[pccplot] expression[domain=3.01:10]{f}; \addplot[soldot] coordinates{(-1,0)(1,0)}; \end{axis} \end{tikzpicture} \caption{$y=g(x)$} \label{rat:fig:minmax2} \end{minipage}% \end{figure} Seamus also notes that (in its domain) the function $f$ is always concave down, and that (in its domain) the function $g$ is always concave up. Furthermore, Trang observes that each function behaves like $\frac{1}{x^2}$ around each of its vertical asymptotes, because each linear factor in the denominator is raised to the power $2$. \pccname{Oscar} stops by and reminds both students about the long-run behavior; according to \cref{rat:def:longrun} since the degree of the denominator is greater than the degree of the numerator (in both functions), each function has a horizontal asymptote at $y=0$. \end{pccexample} \investigation*{} %=================================== % Author: Pettit/Hughes % Date: March 2012 %=================================== \begin{problem}[The spaghetti incident] The same Queen from \vref{exp:prob:queenschessboard} has recovered from the rice experiments, and has called her loyal jester for another challenge. The jester has an $11-$inch piece of uncooked spaghetti that he puts on a table; he uses a book to cover $\unit[1]{inch}$ of it so that $\unit[10]{inches}$ hang over the edge. The jester then produces a box of $\unit{mg}$ weights that can be hung from the spaghetti. The jester says it will take $\unit[y]{mg}$ to break the spaghetti when hung $\unit[x]{inches}$ from the edge, according to the rule $y=\frac{100}{x}$. \begin{margintable} \centering \captionof{table}{} \label{rat:tab:spaghetti} \begin{tabular}{cc} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $1$ & \\\normalline $2$ & \\\normalline $3$ & \\\normalline $4$ & \\\normalline $5$ & \\\normalline $6$ & \\\normalline $7$ & \\\normalline $8$ & \\\normalline $9$ & \\\normalline $10$ & \\\lastline \end{tabular} \end{margintable} \begin{subproblem}\label{rat:prob:spaggt1} Help the Queen complete \cref{rat:tab:spaghetti}, and use $2$ digits after the decimal where appropriate. \begin{shortsolution} \begin{tabular}[t]{ld{2}} \beforeheading \heading{$x$} & \heading{$y$} \\ \afterheading $1$ & 100 \\\normalline $2$ & 50 \\\normalline $3$ & 33.33 \\\normalline $4$ & 25 \\\normalline $5$ & 20 \\\normalline $6$ & 16.67 \\\normalline $7$ & 14.29 \\\normalline $8$ & 12.50 \\\normalline $9$ & 11.11 \\\normalline $10$ & 10 \\\lastline \end{tabular} \end{shortsolution} \end{subproblem} \begin{subproblem} What do you notice about the number of $\unit{mg}$ that it takes to break the spaghetti as $x$ increases? 
\begin{shortsolution}
  It seems that the number of $\unit{mg}$ that it takes to break the spaghetti decreases as $x$ increases.
\end{shortsolution}
\end{subproblem}
\begin{subproblem}\label{rat:prob:spaglt1}
  The Queen wonders what happens when $x$ gets very small| help the Queen construct a table of values for $x$ and $y$ when $x=0.0001, 0.001, 0.01, 0.1, 0.5, 1$.
  \begin{shortsolution}
    \begin{tabular}[t]{d{2}l}
      \beforeheading
      \heading{$x$} & \heading{$y$} \\
      \afterheading
      0.0001 & $1000000$ \\\normalline
      0.001 & $100000$ \\\normalline
      0.01 & $10000$ \\\normalline
      0.1 & $1000$ \\\normalline
      0.5 & $200$ \\\normalline
      1 & $100$ \\\lastline
    \end{tabular}
  \end{shortsolution}
\end{subproblem}
\begin{subproblem}
  What do you notice about the number of $\unit{mg}$ that it takes to break the spaghetti as $x\rightarrow 0$? Would it ever make sense to let $x=0$?
  \begin{shortsolution}
    The number of $\unit{mg}$ required to break the spaghetti increases as $x\rightarrow 0$. We can not allow $x$ to be $0$, as we can not divide by $0$, and we can not be $0$ inches from the edge of the table.
  \end{shortsolution}
\end{subproblem}
\begin{subproblem}
  Plot your results from \cref{rat:prob:spaggt1,rat:prob:spaglt1} on the same graph, and join the points using a smooth curve| set the maximum value of $y$ as $200$, and note that this necessarily means that you will not be able to plot all of the points.
  \begin{shortsolution}
    The graph of $y=\frac{100}{x}$ is shown below.
    \begin{tikzpicture}
      \begin{axis}[
          framed,
          xmin=-2,xmax=11,
          ymin=-20,ymax=200,
          xtick={2,4,...,10},
          ytick={20,40,...,180},
          grid=major,
          width=\solutionfigurewidth,
        ]
        \addplot+[-] expression[domain=0.5:10]{100/x};
        \addplot[soldot] coordinates{(0.5,200)(1,100)(2,50)(3,33.33)
          (4,25)(5,20)(6,16.67)(7,14.29)(8,12.50)(9,11.11)(10,10)};
      \end{axis}
    \end{tikzpicture}
  \end{shortsolution}
\end{subproblem}
\begin{subproblem}
  Using your graph, observe what happens to $y$ as $x$ increases. If we could somehow construct a piece of uncooked spaghetti that was $\unit[101]{inches}$ long, how many $\unit{mg}$ would it take to break the spaghetti?
  \begin{shortsolution}
    As $x$ increases, $y\rightarrow 0$. If we could construct a piece of spaghetti $\unit[101]{inches}$ long, it would only take $\unit[1]{mg}$ to break it $\left(\frac{100}{100}=1\right)$. Of course, the weight of the spaghetti itself would probably cause it to break before any weight was added.
  \end{shortsolution}
\end{subproblem}
The Queen looks forward to more food-related investigations from her jester.
\end{problem}
%===================================
% Author: Adams (Hughes)
% Date: March 2012
%===================================
\begin{problem}[Debt Amortization]
  To amortize a debt means to pay it off in a given length of time using equal periodic payments. The payments include interest on the unpaid balance. The following formula gives the monthly payment, $M$, in dollars that is necessary to amortize a debt of $P$ dollars in $n$ months at a monthly interest rate of $i$
  \[
    M=\frac{P\cdot i}{1-(1+i)^{-n}}
  \]
  Use this formula in each of the following problems.
  \begin{subproblem}
    What monthly payments are necessary on a credit card debt of \$2000 at $\unit[1.5]{\%}$ monthly if you want to pay off the debt in $2$ years? In one year? How much money will you save by paying off the debt in the shorter amount of time?
    \begin{shortsolution}
      Paying off the debt in $2$ years, we use
      \begin{align*}
        M & = \frac{2000\cdot 0.015}{1-(1+0.015)^{-24}} \\
          & \approx 99.85
      \end{align*}
      The monthly payments are \$99.85.
      Paying off the debt in $1$ year, we use
      \begin{align*}
        M & = \frac{2000\cdot 0.015}{1-(1+0.015)^{-12}} \\
          & \approx 183.36
      \end{align*}
      The monthly payments are \$183.36.

      In the $2$-year model we would pay a total of $\$99.85\cdot 24=\$2396.40$. In the $1$-year model we would pay a total of $\$183.36\cdot 12=\$2200.32$. We would therefore save $\$196.08$ if we went with the $1$-year model instead of the $2$-year model.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    To purchase a home, a family needs a loan of \$300,000 at $\unit[5.2]{\%}$ annual interest. Compare a $20$ year loan to a $30$ year loan and make a recommendation for the family. (Note: when given an annual interest rate, it is a common business practice to divide by $12$ to get a monthly rate.)
    \begin{shortsolution}
      For the $20$-year loan we use
      \begin{align*}
        M & = \frac{300000\cdot \frac{0.052}{12}}{1-\left( 1+\frac{0.052}{12} \right)^{-12\cdot 20}} \\
          & \approx 2013.16
      \end{align*}
      The monthly payments are \$2013.16.

      For the $30$-year loan we use
      \begin{align*}
        M & = \frac{300000\cdot \frac{0.052}{12}}{1-\left( 1+\frac{0.052}{12} \right)^{-12\cdot 30}} \\
          & \approx 1647.33
      \end{align*}
      The monthly payments are \$1647.33.

      The total amount paid during the $20$-year loan is $\$2013.16\cdot 12\cdot 20=\$483,158.40$. The total amount paid during the $30$-year loan is $\$1647.33\cdot 12\cdot 30=\$593,038.80$. Recommendation: if you can afford the payments, choose the $20$-year loan.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    \pccname{Ellen} wants to make monthly payments of \$100 to pay off a debt of \$3000 at \unit[12]{\%} annual interest. How long will it take her to pay off the debt?
    \begin{shortsolution}
      We are given $M=100$, $P=3000$, $i=0.01$, and we need to find $n$ in the equation
      \[
        100 = \frac{3000\cdot 0.01}{1-(1+0.01)^{-n}}
      \]
      Using logarithms, we find that $n\approx 36$. It will take Ellen about $3$ years to pay off the debt.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    \pccname{Jake} is going to buy a new car. He puts \$2000 down and wants to finance the remaining \$14,000. The dealer will offer him \unit[4]{\%} annual interest for $5$ years, or a \$2000 rebate which he can use to reduce the amount of the loan and \unit[8]{\%} annual interest for 5 years. Which should he choose?
    \begin{shortsolution}
      \begin{description}
        \item[Option 1:] $\unit[4]{\%}$ annual interest for $5$ years on \$14,000. This means that the monthly payments will be calculated using
          \begin{align*}
            M & = \frac{14000\cdot \frac{0.04}{12}}{1-\left( 1+\frac{0.04}{12} \right)^{-12\cdot 5}} \\
              & \approx 257.83
          \end{align*}
          The monthly payments will be $\$257.83$. The total amount paid will be $\$257.83\cdot 5\cdot 12=\$15,469.80$, of which $\$1469.80$ is interest.
        \item[Option 2:] $\unit[8]{\%}$ annual interest for $5$ years on \$12,000. This means that the monthly payments will be calculated using
          \begin{align*}
            M & = \frac{12000\cdot \frac{0.08}{12}}{1-\left( 1+\frac{0.08}{12} \right)^{-12\cdot 5}} \\
              & \approx 243.32
          \end{align*}
          The monthly payments will be $\$243.32$. The total amount paid will be $\$243.32\cdot 5\cdot 12 =\$14,599.20$, of which $\$2599.20$ is interest.
      \end{description}
      Jake should choose option 1 to minimize the amount of interest he has to pay.
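      Comparing the interest for the two options directly, choosing option 1 saves
      \[
        \$2599.20-\$1469.80=\$1129.40
      \]
      in interest over the life of the loan.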
    \end{shortsolution}
  \end{subproblem}
\end{problem}

\begin{exercises}
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{problem}[Rational or not]
  Decide if each of the following functions is rational or not. If it is rational, state its domain.
  \begin{multicols}{3}
    \begin{subproblem}
      $r(x)=\dfrac{3}{x}$
      \begin{shortsolution}
        $r$ is rational; the domain of $r$ is $(-\infty,0)\cup (0,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $s(y)=\dfrac{y}{6}$
      \begin{shortsolution}
        $s$ is not rational ($s$ is linear).
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $t(z)=\dfrac{4-z}{7-8z}$
      \begin{shortsolution}
        $t$ is rational; the domain of $t$ is $\left( -\infty,\dfrac{7}{8} \right)\cup \left( \dfrac{7}{8},\infty \right)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $u(w)=\dfrac{w^2}{(w-3)(w+4)}$
      \begin{shortsolution}
        $u$ is rational; the domain of $u$ is $(-\infty,-4)\cup(-4,3)\cup(3,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $v(x)=\dfrac{4}{(x-2)^2}$
      \begin{shortsolution}
        $v$ is rational; the domain of $v$ is $(-\infty,2)\cup(2,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $w(x)=\dfrac{9-x}{x+17}$
      \begin{shortsolution}
        $w$ is rational; the domain of $w$ is $(-\infty,-17)\cup(-17,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $a(x)=x^2+4$
      \begin{shortsolution}
        $a$ is not rational ($a$ is quadratic, or a polynomial of degree $2$).
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $b(y)=3^y$
      \begin{shortsolution}
        $b$ is not rational ($b$ is exponential).
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $c(z)=\dfrac{z^2}{z^3}$
      \begin{shortsolution}
        $c$ is rational; the domain of $c$ is $(-\infty,0)\cup (0,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $d(x)=x^2(x+3)(5x-7)$
      \begin{shortsolution}
        $d$ is not rational ($d$ is a polynomial).
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $e(\alpha)=\dfrac{\alpha^2}{\alpha^2-1}$
      \begin{shortsolution}
        $e$ is rational; the domain of $e$ is $(-\infty,-1)\cup(-1,1)\cup(1,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $f(\beta)=\dfrac{3}{4}$
      \begin{shortsolution}
        $f$ is not rational ($f$ is constant).
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{problem}[Function evaluation]
  Let $r$ be the function that has formula
  \[
    r(x)=\frac{(x-2)(x+3)}{(x+5)(x-7)}
  \]
  Evaluate each of the following (if possible); if the value is undefined, then state so.
  \begin{multicols}{4}
    \begin{subproblem}
      $r(0)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(0) & =\frac{(0-2)(0+3)}{(0+5)(0-7)} \\
               & =\frac{-6}{-35} \\
               & =\frac{6}{35}
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(1)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(1) & =\frac{(1-2)(1+3)}{(1+5)(1-7)} \\
               & =\frac{-4}{-36} \\
               & =\frac{1}{9}
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(2)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(2) & =\frac{(2-2)(2+3)}{(2+5)(2-7)} \\
               & = \frac{0}{-50} \\
               & =0
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(4)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(4) & =\frac{(4-2)(4+3)}{(4+5)(4-7)} \\
               & =\frac{14}{-27} \\
               & =-\frac{14}{27}
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(7)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(7) & =\frac{(7-2)(7+3)}{(7+5)(7-7)} \\
               & =\frac{50}{0}
        \end{aligned}$

        $r(7)$ is undefined.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(-3)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(-3) & =\frac{(-3-2)(-3+3)}{(-3+5)(-3-7)} \\
                & =\frac{0}{-20} \\
                & =0
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r(-5)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r(-5) & =\frac{(-5-2)(-5+3)}{(-5+5)(-5-7)} \\
                & =\frac{14}{0}
        \end{aligned}$

        $r(-5)$ is undefined.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $r\left( \frac{1}{2} \right)$
      \begin{shortsolution}
        $\begin{aligned}[t]
          r\left( \frac{1}{2} \right) & = \frac{\left( \frac{1}{2}-2 \right)\left( \frac{1}{2}+3 \right)}{\left( \frac{1}{2}+5 \right)\left( \frac{1}{2}-7 \right)} \\
                                      & =\frac{-\frac{3}{2}\cdot\frac{7}{2}}{\frac{11}{2}\left( -\frac{13}{2} \right)} \\
                                      & =\frac{-\frac{21}{4}}{-\frac{143}{4}} \\
                                      & =\frac{21}{143}
        \end{aligned}$
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: March 2012
%===================================
\begin{problem}[Holes or asymptotes?]
  State the domain of each of the following rational functions. Identify any holes or asymptotes.
  \begin{multicols}{3}
    \begin{subproblem}
      $f(x)=\dfrac{12}{x-2}$
      \begin{shortsolution}
        $f$ has a vertical asymptote at $2$; the domain of $f$ is $(-\infty,2)\cup (2,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $g(x)=\dfrac{x^2+x}{(x+1)(x-2)}$
      \begin{shortsolution}
        $g$ has a vertical asymptote at $2$, and a hole at $-1$; the domain of $g$ is $(-\infty,-1)\cup(-1,2)\cup(2,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $h(x)=\dfrac{x^2+5x+4}{x^2+x-12}$
      \begin{shortsolution}
        $h$ has a vertical asymptote at $3$, and a hole at $-4$; the domain of $h$ is $(-\infty,-4)\cup(-4,3)\cup(3,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $k(z)=\dfrac{z+2}{2z-3}$
      \begin{shortsolution}
        $k$ has a vertical asymptote at $\dfrac{3}{2}$; the domain of $k$ is $\left( -\infty,\dfrac{3}{2} \right)\cup\left( \dfrac{3}{2},\infty \right)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $l(w)=\dfrac{w}{w^2+1}$
      \begin{shortsolution}
        $l$ does not have any vertical asymptotes or holes; the domain of $l$ is $(-\infty,\infty)$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $m(t)=\dfrac{14}{13-t^2}$
      \begin{shortsolution}
        $m$ has vertical asymptotes at $\pm\sqrt{13}$; the domain of $m$ is $(-\infty,-\sqrt{13})\cup(-\sqrt{13},\sqrt{13})\cup(\sqrt{13},\infty)$.
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Find a formula from a graph]
  Consider the rational functions graphed in \cref{rat:fig:findformula}. Find the vertical asymptotes for each function, together with any zeros, and give a possible formula for each.
  \begin{shortsolution}
    \begin{itemize}
      \item \Vref{rat:fig:formula1}: possible formula is $r(x)=\dfrac{1}{x+4}$
      \item \Vref{rat:fig:formula2}: possible formula is $r(x)=\dfrac{(x+3)}{(x-5)}$
      \item \Vref{rat:fig:formula3}: possible formula is $r(x)=\dfrac{1}{(x-4)(x+3)}$.
    \end{itemize}
  \end{shortsolution}
\end{problem}
\begin{figure}[!htb]
  \begin{widepage}
    \setlength{\figurewidth}{0.3\textwidth}
    \begin{subfigure}{\figurewidth}
      \begin{tikzpicture}[/pgf/declare function={f=1/(x+4);}]
        \begin{axis}[
            framed,
            xmin=-10,xmax=10,
            ymin=-6,ymax=6,
            xtick={-8,-6,...,8},
            minor ytick={-4,-3,...,4},
            grid=both,
            width=\textwidth,
          ]
          \addplot[pccplot] expression[domain=-10:-4.16667,samples=50]{f};
          \addplot[pccplot] expression[domain=-3.83333:10,samples=50]{f};
          \addplot[asymptote,domain=-6:6]({-4},{x});
        \end{axis}
      \end{tikzpicture}
      \caption{}
      \label{rat:fig:formula1}
    \end{subfigure}
    \hfill
    \begin{subfigure}{\figurewidth}
      \begin{tikzpicture}[/pgf/declare function={f=(x+3)/(x-5);}]
        \begin{axis}[
            framed,
            xmin=-10,xmax=10,
            ymin=-6,ymax=6,
            xtick={-8,-6,...,8},
            minor ytick={-4,-3,...,4},
            grid=both,
            width=\textwidth,
          ]
          \addplot[pccplot] expression[domain=-10:3.85714]{f};
          \addplot[pccplot] expression[domain=6.6:10]{f};
          \addplot[soldot] coordinates{(-3,0)};
          \addplot[asymptote,domain=-6:6]({5},{x});
          \addplot[asymptote,domain=-10:10]({x},{1});
        \end{axis}
      \end{tikzpicture}
      \caption{}
      \label{rat:fig:formula2}
    \end{subfigure}
    \hfill
    \begin{subfigure}{\figurewidth}
      \begin{tikzpicture}[/pgf/declare function={f=1/((x-4)*(x+3));}]
        \begin{axis}[
            framed,
            xmin=-10,xmax=10,
            ymin=-3,ymax=3,
            xtick={-8,-6,...,8},
            minor ytick={-4,-3,...,4},
            grid=both,
            width=\textwidth,
          ]
          \addplot[pccplot] expression[domain=-10:-3.0473]{f};
          \addplot[pccplot] expression[domain=-2.95205:3.95205]{f};
          \addplot[pccplot] expression[domain=4.0473:10]{f};
          \addplot[asymptote,domain=-3:3]({-3},{x});
          \addplot[asymptote,domain=-3:3]({4},{x});
          \addplot[asymptote,domain=-10:10]({x},{0});
        \end{axis}
      \end{tikzpicture}
      \caption{}
      \label{rat:fig:formula3}
    \end{subfigure}
    \caption{}
    \label{rat:fig:findformula}
  \end{widepage}
\end{figure}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Find a formula from a description]
  In each of the following problems, give a formula for a rational function that has the listed properties.
  \begin{subproblem}
    Vertical asymptote at $2$.
    \begin{shortsolution}
      Possible option: $r(x)=\dfrac{1}{x-2}$. Note that we could multiply the numerator or denominator by any nonzero real number and still have the desired properties.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Vertical asymptote at $5$.
    \begin{shortsolution}
      Possible option: $r(x)=\dfrac{1}{x-5}$. Note that we could multiply the numerator or denominator by any nonzero real number and still have the desired properties.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Vertical asymptote at $-2$, and zero at $6$.
    \begin{shortsolution}
      Possible option: $r(x)=\dfrac{x-6}{x+2}$. Note that we could multiply the numerator or denominator by any nonzero real number and still have the desired properties.
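      As a quick check of this last formula (the same kind of check works for the earlier ones),
      \[
        r(6)=\frac{6-6}{6+2}=0
      \]
      so $r$ has a zero at $6$, and the denominator $x+2$ is $0$ only at $-2$, which gives the required vertical asymptote.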
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Zeros at $2$ and $-5$ and vertical asymptotes at $1$ and $-7$.
    \begin{shortsolution}
      Possible option: $r(x)=\dfrac{(x-2)(x+5)}{(x-1)(x+7)}$. Note that we could multiply the numerator or denominator by any nonzero real number and still have the desired properties.
    \end{shortsolution}
  \end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Given formula, find horizontal asymptotes]
  Each of the following functions has a horizontal asymptote. Write the equation of the horizontal asymptote for each function.
  \begin{multicols}{3}
    \begin{subproblem}
      $f(x) = \dfrac{1}{x}$
      \begin{shortsolution}
        $y=0$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $g(x) = \dfrac{2x+3}{x}$
      \begin{shortsolution}
        $y=2$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $h(x) = \dfrac{x^2+2x}{x^2+3}$
      \begin{shortsolution}
        $y=1$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $k(x) = \dfrac{x^2+7}{x^2}$
      \begin{shortsolution}
        $y=1$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $l(x)=\dfrac{3x-2}{5x+8}$
      \begin{shortsolution}
        $y=\dfrac{3}{5}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $m(x)=\dfrac{3x-2}{5x^2+8}$
      \begin{shortsolution}
        $y=0$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $n(x)=\dfrac{(6x+1)(x-7)}{(11x-8)(x-5)}$
      \begin{shortsolution}
        $y=\dfrac{6}{11}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $p(x)=\dfrac{19x^3}{5-x^4}$
      \begin{shortsolution}
        $y=0$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $q(x)=\dfrac{14x^2+x}{1-7x^2}$
      \begin{shortsolution}
        $y=-2$
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2012
%===================================
\begin{problem}[Given horizontal asymptotes, find formula]
  In each of the following problems, give a formula for a function that has the given horizontal asymptote. Note that there may be more than one option.
  \begin{multicols}{4}
    \begin{subproblem}
      $y=7$
      \begin{shortsolution}
        Possible option: $f(x)=\dfrac{7(x-2)}{x+1}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $7$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $y=-1$
      \begin{shortsolution}
        Possible option: $f(x)=\dfrac{5-x^2}{x^2+10}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $-1$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $y=53$
      \begin{shortsolution}
        Possible option: $f(x)=\dfrac{53x^3}{x^3+4x^2-7}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $53$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $y=-17$
      \begin{shortsolution}
        Possible option: $f(x)=\dfrac{34(x+2)}{7-2x}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $-17$.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $y=\dfrac{3}{2}$
      \begin{shortsolution}
        Possible option: $f(x)=\dfrac{3x+4}{2(x+1)}$.
Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $\dfrac{3}{2}$. \end{shortsolution} \end{subproblem} \begin{subproblem} $y=0$ \begin{shortsolution} Possible option: $f(x)=\dfrac{4}{x}$. Note that there are other options, provided that the degree of the numerator is less than the degree of the denominator. \end{shortsolution} \end{subproblem} \begin{subproblem} $y=-1$ \begin{shortsolution} Possible option: $f(x)=\dfrac{10x}{5-10x}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $-1$. \end{shortsolution} \end{subproblem} \begin{subproblem} $y=2$ \begin{shortsolution} Possible option: $f(x)=\dfrac{8x-3}{4x+1}$. Note that there are other options, provided that the degree of the numerator is the same as the degree of the denominator, and that the ratio of the leading coefficients is $2$. \end{shortsolution} \end{subproblem} \end{multicols} \end{problem} %=================================== % Author: Hughes % Date: May 2011 %=================================== \begin{problem}[Find a formula from a description] In each of the following problems, give a formula for a function that has the prescribed properties. Note that there may be more than one option. \begin{subproblem} $f(x)\rightarrow 3$ as $x\rightarrow\pm\infty$. \begin{shortsolution} Possible option: $f(x)=\dfrac{3(x-2)}{x+7}$. Note that the zero and asymptote of $f$ could be changed, and $f$ would still have the desired properties. \end{shortsolution} \end{subproblem} \begin{subproblem} $r(x)\rightarrow -4$ as $x\rightarrow\pm\infty$. \begin{shortsolution} Possible option: $r(x)=\dfrac{-4(x-2)}{x+7}$. Note that the zero and asymptote of $r$ could be changed, and $r$ would still have the desired properties. \end{shortsolution} \end{subproblem} \begin{subproblem} $k(x)\rightarrow 2$ as $x\rightarrow\pm\infty$, and $k$ has vertical asymptotes at $-3$ and $5$. \begin{shortsolution} Possible option: $k(x)=\dfrac{2x^2}{(x+3)(x-5)}$. Note that the denominator must have the given factors; the numerator could be any degree $2$ polynomial, provided the leading coefficient is $2$. \end{shortsolution} \end{subproblem} \end{problem} %=================================== % Author: Hughes % Date: Feb 2011 %=================================== \begin{problem} Let $r$ be the rational function that has \[ r(x) = \frac{(x+2)(x-1)}{(x+3)(x-4)} \] Each of the following questions are in relation to this function. \begin{subproblem} What is the vertical intercept of this function? State your answer as an ordered pair. \index{rational functions!vertical intercept} \begin{shortsolution} $\left(0,\frac{1}{6}\right)$ \end{shortsolution} \end{subproblem} \begin{subproblem}\label{rat:prob:rational} What values of $x$ make the denominator equal to $0$? \begin{shortsolution} $-3,4$ \end{shortsolution} \end{subproblem} \begin{subproblem} Use your answer to \cref{rat:prob:rational} to write the domain of the function in both interval, and set builder notation. %\index{rational functions!domain}\index{domain!rational functions} \begin{shortsolution} Interval notation: $(-\infty,-3)\cup (-3,4)\cup (4,\infty)$. Set builder: $\{x|x\ne -3, \mathrm{and}\, x\ne 4\}$ \end{shortsolution} \end{subproblem} \begin{subproblem} What are the vertical asymptotes of the function? 
  State your answers in the form $x=$
    \begin{shortsolution}
      $x=-3$ and $x=4$
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}\label{rat:prob:zeroes}
    What values of $x$ make the numerator equal to $0$?
    \begin{shortsolution}
      $-2,1$
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Use your answer to \cref{rat:prob:zeroes} to write the horizontal intercepts of $r$ as ordered pairs.
    \begin{shortsolution}
      $(-2,0)$ and $(1,0)$
    \end{shortsolution}
  \end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Holes]
  \pccname{Josh} and \pccname{Pedro} are discussing the function
  \[
    r(x)=\frac{x^2-1}{(x+3)(x-1)}
  \]
  \begin{subproblem}
    What is the domain of $r$?
    \begin{shortsolution}
      The domain of $r$ is $(-\infty,-3)\cup(-3,1)\cup(1,\infty)$.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Josh notices that the numerator can be factored| can you see how?
    \begin{shortsolution}
      $(x^2-1)=(x-1)(x+1)$
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Pedro asks, `Doesn't that just mean that
    \[
      r(x)=\frac{x+1}{x+3}
    \]
    for all values of $x$?' Josh says, `Nearly\ldots but not for all values of $x$'. What does Josh mean?
    \begin{shortsolution}
      $r(x)=\dfrac{x+1}{x+3}$ provided that $x\ne 1$.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Where does $r$ have vertical asymptotes, and where does it have holes?
    \begin{shortsolution}
      The function $r$ has a vertical asymptote at $-3$, and a hole at $1$.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Sketch a graph of $r$.
    \begin{shortsolution}
      A graph of $r$ is shown below.
      \begin{tikzpicture}
        \begin{axis}[
            framed,
            xmin=-10,xmax=10,
            ymin=-10,ymax=10,
            xtick={-8,-6,...,8},
            ytick={-8,-6,...,8},
            grid=both,
            width=\solutionfigurewidth,
          ]
          \addplot[pccplot] expression[domain=-10:-3.25]{(x+1)/(x+3)};
          \addplot[pccplot] expression[domain=-2.75:10]{(x+1)/(x+3)};
          \addplot[asymptote,domain=-10:10]({-3},{x});
          \addplot[holdot]coordinates{(1,0.5)};
        \end{axis}
      \end{tikzpicture}
    \end{shortsolution}
  \end{subproblem}
\end{problem}
%===================================
% Author: Hughes
% Date: July 2012
%===================================
\begin{problem}[Function algebra]
  Let $r$ and $s$ be the rational functions that have formulas
  \[
    r(x)=\frac{2-x}{x+3}, \qquad s(x)=\frac{x^2}{x-4}
  \]
  Evaluate each of the following (if possible).
  \begin{multicols}{4}
    \begin{subproblem}
      $(r+s)(5)$
      \begin{shortsolution}
        $\frac{197}{8}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $(r-s)(3)$
      \begin{shortsolution}
        $\frac{53}{6}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $(r\cdot s)(4)$
      \begin{shortsolution}
        Undefined.
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      $\left( \frac{r}{s} \right)(1)$
      \begin{shortsolution}
        $-\frac{3}{4}$
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: July 2012
%===================================
\begin{problem}[Transformations: given the transformation, find the formula]
  Let $r$ be the rational function that has formula
  \[
    r(x)=\frac{x+5}{2x-3}
  \]
  In each of the following problems, apply the given transformation to the function $r$ and write a formula for the transformed version of $r$.
  \begin{multicols}{2}
    \begin{subproblem}
      Shift $r$ to the right by $3$ units.
      \begin{shortsolution}
        $r(x-3)=\frac{x+2}{2x-9}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      Shift $r$ to the left by $4$ units.
      \begin{shortsolution}
        $r(x+4)=\frac{x+9}{2x+5}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      Shift $r$ up by $\pi$ units.
      \begin{shortsolution}
        $r(x)+\pi=\frac{x+5}{2x-3}+\pi$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      Shift $r$ down by $17$ units.
      \begin{shortsolution}
        $r(x)-17=\frac{x+5}{2x-3}-17$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      Reflect $r$ over the horizontal axis.
      \begin{shortsolution}
        $-r(x)=-\frac{x+5}{2x-3}$
      \end{shortsolution}
    \end{subproblem}
    \begin{subproblem}
      Reflect $r$ over the vertical axis.
      \begin{shortsolution}
        $r(-x)=\frac{x-5}{2x+3}$
      \end{shortsolution}
    \end{subproblem}
  \end{multicols}
\end{problem}
%===================================
% Author: Hughes
% Date: May 2011
%===================================
\begin{problem}[Find a formula from a table]\label{rat:prob:findformula}
  \Crefrange{rat:tab:findformular}{rat:tab:findformulau} show values of rational functions $r$, $s$, $t$, and $u$. Assume that any values marked with an X are undefined.
  \begin{table}[!htb]
    \begin{widepage}
      \centering
      \caption{Tables for \cref{rat:prob:findformula}}
      \label{rat:tab:findformula}
      \begin{subtable}{.2\textwidth}
        \centering
        \caption{$y=r(x)$}
        \label{rat:tab:findformular}
        \begin{tabular}{rr}
          \beforeheading
          $x$ & $y$ \\
          \afterheading
          $-4$ & $\nicefrac{7}{2}$ \\\normalline
          $-3$ & $6$ \\\normalline
          $-2$ & X \\\normalline
          $-1$ & $-4$ \\\normalline
          $0$ & $\nicefrac{-3}{2}$ \\\normalline
          $1$ & $\nicefrac{-2}{3}$ \\\normalline
          $2$ & $\nicefrac{-1}{4}$ \\\normalline
          $3$ & $0$ \\\normalline
          $4$ & $\nicefrac{1}{6}$ \\\lastline
        \end{tabular}
      \end{subtable}
      \hfill
      \begin{subtable}{.2\textwidth}
        \centering
        \caption{$y=s(x)$}
        \label{rat:tab:findformulas}
        \begin{tabular}{rr}
          \beforeheading
          $x$ & $y$ \\
          \afterheading
          $-4$ & $\nicefrac{-2}{21}$ \\\normalline
          $-3$ & $\nicefrac{-1}{12}$ \\\normalline
          $-2$ & $0$ \\\normalline
          $-1$ & X \\\normalline
          $0$ & $\nicefrac{-2}{3}$ \\\normalline
          $1$ & $\nicefrac{-3}{4}$ \\\normalline
          $2$ & $\nicefrac{-4}{3}$ \\\normalline
          $3$ & X \\\normalline
          $4$ & $\nicefrac{6}{5}$ \\\lastline
        \end{tabular}
      \end{subtable}
      \hfill
      \begin{subtable}{.2\textwidth}
        \centering
        \caption{$y=t(x)$}
        \label{rat:tab:findformulat}
        \begin{tabular}{rr}
          \beforeheading
          $x$ & $y$ \\
          \afterheading
          $-4$ & $\nicefrac{3}{5}$ \\\normalline
          $-3$ & $0$ \\\normalline
          $-2$ & X \\\normalline
          $-1$ & $3$ \\\normalline
          $0$ & $3$ \\\normalline
          $1$ & X \\\normalline
          $2$ & $0$ \\\normalline
          $3$ & $\nicefrac{3}{5}$ \\\normalline
          $4$ & $\nicefrac{7}{9}$ \\\lastline
        \end{tabular}
      \end{subtable}
      \hfill
      \begin{subtable}{.2\textwidth}
        \centering
        \caption{$y=u(x)$}
        \label{rat:tab:findformulau}
        \begin{tabular}{rr}
          \beforeheading
          $x$ & $y$ \\
          \afterheading
          $-4$ & $\nicefrac{16}{7}$ \\\normalline
          $-3$ & X \\\normalline
          $-2$ & $-\nicefrac{4}{5}$ \\\normalline
          $-1$ & $-\nicefrac{1}{8}$ \\\normalline
          $0$ & $0$ \\\normalline
          $1$ & $-\nicefrac{1}{8}$ \\\normalline
          $2$ & $-\nicefrac{4}{5}$ \\\normalline
          $3$ & X \\\normalline
          $4$ & $\nicefrac{16}{7}$ \\\lastline
        \end{tabular}
      \end{subtable}
    \end{widepage}
  \end{table}
  \begin{subproblem}
    Given that the formula for $r(x)$ has the form $r(x)=\dfrac{x-A}{x-B}$, use \cref{rat:tab:findformular} to find values of $A$ and $B$.
    \begin{shortsolution}
      $A=3$ and $B=-2$, so $r(x)=\dfrac{x-3}{x+2}$.
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Check your formula by computing $r(x)$ at the values specified in the table.
    \begin{shortsolution}
      $\begin{aligned}[t]
        r(-4) & = \frac{-4-3}{-4+2} \\
              & = \frac{7}{2} \\
      \end{aligned}$

      $r(-3)=\ldots$ etc
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    The function $s$ in \cref{rat:tab:findformulas} has two vertical asymptotes and one zero. Can you find a formula for $s(x)$?
    \begin{shortsolution}
      $s(x)=\dfrac{x+2}{(x-3)(x+1)}$
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Check your formula by computing $s(x)$ at the values specified in the table.
    \begin{shortsolution}
      $\begin{aligned}[t]
        s(-4) & =\frac{-4+2}{(-4-3)(-4+1)} \\
              & =-\frac{2}{21}
      \end{aligned}$

      $s(-3)=\ldots$ etc
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Given that the formula for $t(x)$ has the form $t(x)=\dfrac{(x-A)(x-B)}{(x-C)(x-D)}$, use \cref{rat:tab:findformulat} to find the values of $A$, $B$, $C$, and $D$; hence write a formula for $t(x)$.
    \begin{shortsolution}
      $t(x)=\dfrac{(x+3)(x-2)}{(x+2)(x-1)}$
    \end{shortsolution}
  \end{subproblem}
  \begin{subproblem}
    Given that the formula for $u(x)$ has the form $u(x)=\dfrac{(x-A)^2}{(x-B)(x-C)}$, use \cref{rat:tab:findformulau} to find the values of $A$, $B$, and $C$; hence write a formula for $u(x)$.
    \begin{shortsolution}
      $u(x)=\dfrac{x^2}{(x+3)(x-3)}$
    \end{shortsolution}
  \end{subproblem}
\end{problem}
\end{exercises}

\section{Graphing rational functions (horizontal asymptotes)}
\reformatstepslist{R} % the steps list should be R1, R2, \ldots

We studied rational functions in the previous section, but were not asked to graph them; in this section we will demonstrate the steps to be followed in order to sketch graphs of the functions.

Remember from \vref{rat:def:function} that rational functions have the form
\[
  r(x)=\frac{p(x)}{q(x)}
\]
In this section we will restrict attention to the case when
\[
  \text{degree of }p\leq \text{degree of }q
\]
Note that this necessarily means that each function that we consider in this section \emph{will have a horizontal asymptote} (see \vref{rat:def:longrun}). The case in which the degree of $p$ is greater than the degree of $q$ is covered in the next section.

Before we begin, it is important to remember the following:
\begin{itemize}
  \item Our sketches will give a good representation of the overall shape of the graph, but until we have the tools of calculus (from MTH 251) we can not find local minimums, local maximums, and inflection points algebraically. This means that we will make our best guess as to where these points are.
  \item We will not concern ourselves too much with the vertical scale (because of our previous point)| we will, however, mark the vertical intercept (assuming there is one), and any horizontal asymptotes.
\end{itemize}

\begin{pccspecialcomment}[Steps to follow when sketching rational functions]\label{rat:def:stepsforsketch}
  \begin{steps}
    \item \label{rat:step:first} Find all vertical asymptotes and holes, and mark them on the graph using dashed vertical lines and open circles $\circ$ respectively.
    \item Find any intercepts, and mark them using solid circles $\bullet$; determine if the curve cuts the axis, or bounces off it at each zero.
    \item Determine the behavior of the function around each asymptote| does it behave like $\frac{1}{x}$ or $\frac{1}{x^2}$?
    \item \label{rat:step:penultimate} Determine the long-run behavior of the function, and mark the horizontal asymptote using a dashed horizontal line.
    \item \label{rat:step:last} Deduce the overall shape of the curve, and sketch it.
If there isn't enough information from the previous steps, then construct a table of values including sample points from each branch. \end{steps} Remember that until we have the tools of calculus, we won't be able to find the exact coordinates of local minimums, local maximums, and points of inflection. \end{pccspecialcomment} The examples that follow show how \crefrange{rat:step:first}{rat:step:last} can be applied to a variety of different rational functions. %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:1overxminus2p2} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $r$ that has formula \[ r(x)=\frac{1}{x-2} \] \begin{pccsolution} \begin{steps} \item $r$ has a vertical asymptote at $2$; $r$ does not have any holes. The curve of $r$ will have $2$ branches. \item $r$ does not have any zeros since the numerator is never equal to $0$. The vertical intercept of $r$ is $\left( 0,-\frac{1}{2} \right)$. \item $r$ behaves like $\frac{1}{x}$ around its vertical asymptote since $(x-2)$ is raised to the power $1$. \item Since the degree of the numerator is less than the degree of the denominator, according to \vref{rat:def:longrun} the horizontal asymptote of $r$ has equation $y=0$. \item We put the details we have obtained so far on \cref{rat:fig:1overxminus2p1}. Notice that there is only one way to complete the graph, which we have done in \cref{rat:fig:1overxminus2p2}. \end{steps} \end{pccsolution} \end{pccexample} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-5,xmax=5, ymin=-5,ymax=5, width=\textwidth, ] \addplot[asymptote,domain=-5:5]({2},{x}); \addplot[asymptote,domain=-5:5]({x},{0}); \addplot[soldot] coordinates{(0,-0.5)}node[axisnode,anchor=north east]{$\left( 0,-\frac{1}{2} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:1overxminus2p1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=1/(x-2);}] \begin{axis}[ xmin=-5,xmax=5, ymin=-5,ymax=5, width=\textwidth, ] \addplot[pccplot] expression[domain=-5:1.8,samples=50]{f}; \addplot[pccplot] expression[domain=2.2:5]{f}; \addplot[asymptote,domain=-5:5]({2},{x}); \addplot[asymptote,domain=-5:5]({x},{0}); \addplot[soldot] coordinates{(0,-0.5)}node[axisnode,anchor=north east]{$\left( 0,-\frac{1}{2} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:1overxminus2p2} \end{subfigure}% \caption{$y=\dfrac{1}{x-2}$} \end{figure} The function $r$ in \cref{rat:ex:1overxminus2p2} has a horizontal asymptote which has equation $y=0$. This asymptote lies on the horizontal axis, and you might (understandably) find it hard to distinguish between the two lines (\cref{rat:fig:1overxminus2p2}). When faced with such a situation, it is perfectly acceptable to draw the horizontal axis as a dashed line| just make sure to label it correctly. We will demonstrate this in the next example. %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:1overxp1} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $v$ that has formula \[ v(x)=\frac{10}{x} \] \begin{pccsolution} \begin{steps} \item $v$ has a vertical asymptote at $0$. $v$ does not have any holes. The curve of $v$ will have $2$ branches. \item $v$ does not have any zeros (since $10\ne 0$). 
Furthermore, $v$ does not have a vertical intercept since $v(0)$ is undefined. \item $v$ behaves like $\frac{1}{x}$ around its vertical asymptote. \item $v$ has a horizontal asymptote with equation $y=0$. \item We put the details we have obtained so far in \cref{rat:fig:1overxp1}. We do not have enough information to sketch $v$ yet (because $v$ does not have any intercepts), so let's pick a sample point in either of the $2$ branches| it doesn't matter where our sample point is, because we know what the overall shape will be. Let's compute $v(2)$ \begin{align*} v(2) & =\dfrac{10}{2} \\ & = 5 \end{align*} We therefore mark the point $(2,5)$ on \cref{rat:fig:1overxp2}, and then complete the sketch using the details we found in the previous steps. \end{steps} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-5,5}, ytick={-5,5}, axis line style={color=white}, width=\textwidth, ] \addplot[asymptote,<->,domain=-10:10]({0},{x}); \addplot[asymptote,<->,domain=-10:10]({x},{0}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:1overxp1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=10/x;}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-5,5}, ytick={-5,5}, axis line style={color=white}, width=\textwidth, ] \addplot[pccplot] expression[domain=-10:-1]{f}; \addplot[pccplot] expression[domain=1:10]{f}; \addplot[soldot] coordinates{(2,5)}node[axisnode,anchor=south west]{$(2,5)$}; \addplot[asymptote,<->,domain=-10:10]({0},{x}); \addplot[asymptote,<->,domain=-10:10]({x},{0}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:1overxp2} \end{subfigure}% \caption{$y=\dfrac{10}{x}$} \end{figure} \end{pccsolution} \end{pccexample} %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:asympandholep1} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $u$ that has formula \[ u(x)=\frac{-4(x^2-9)}{x^2-8x+15} \] \begin{pccsolution} \begin{steps} \item We begin by factoring both the numerator and denominator of $u$ to help us find any vertical asymptotes or holes \begin{align*} u(x) & =\frac{-4(x^2-9)}{x^2-8x+15} \\ & =\frac{-4(x+3)(x-3)}{(x-5)(x-3)} \\ & =\frac{-4(x+3)}{x-5} \end{align*} provided that $x\ne 3$. Therefore $u$ has a vertical asymptote at $5$ and a hole at $3$. The curve of $u$ has $2$ branches. \item $u$ has a simple zero at $-3$. The vertical intercept of $u$ is $\left( 0,\frac{12}{5} \right)$. \item $u$ behaves like $\frac{1}{x}$ around its vertical asymptote at $5$. \item Using \vref{rat:def:longrun} the equation of the horizontal asymptote of $u$ is $y=-4$. \item We put the details we have obtained so far on \cref{rat:fig:asympandholep1}. Notice that there is only one way to complete the graph, which we have done in \cref{rat:fig:asympandholep2}.
\end{steps} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-20,ymax=20, xtick={-8,-6,...,8}, ytick={-10,10}, width=\textwidth, ] \addplot[asymptote,domain=-20:20]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{-4}); \addplot[soldot] coordinates{(-3,0)(0,2.4)}node[axisnode,anchor=south east]{$\left( 0,\frac{12}{5} \right)$}; \addplot[holdot] coordinates{(3,12)}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:asympandholep1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=-4*(x+3)/(x-5);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-20,ymax=20, xtick={-8,-6,...,8}, ytick={-10,10}, width=\textwidth, ] \addplot[pccplot] expression[domain=-10:3.6666,samples=50]{f}; \addplot[pccplot] expression[domain=7:10]{f}; \addplot[asymptote,domain=-20:20]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{-4}); \addplot[soldot] coordinates{(-3,0)(0,2.4)}node[axisnode,anchor=south east]{$\left( 0,\frac{12}{5} \right)$}; \addplot[holdot] coordinates{(3,12)}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:asympandholep2} \end{subfigure}% \caption{$y=\dfrac{-4(x+3)}{x-5}$} \end{figure} \end{pccsolution} \end{pccexample} \Cref{rat:ex:1overxminus2p2,rat:ex:1overxp1,rat:ex:asympandholep1} have focused on functions that only have one vertical asymptote; the remaining examples in this section concern functions that have more than one vertical asymptote. We will demonstrate that \crefrange{rat:step:first}{rat:step:last} still apply. %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:sketchtwoasymp} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $w$ that has formula \[ w(x)=\frac{2(x+3)(x-5)}{(x+5)(x-4)} \] \begin{pccsolution} \begin{steps} \item $w$ has vertical asymptotes at $-5$ and $4$. $w$ does not have any holes. The curve of $w$ will have $3$ branches. \item $w$ has simple zeros at $-3$ and $5$. The vertical intercept of $w$ is $\left( 0,\frac{3}{2} \right)$. \item $w$ behaves like $\frac{1}{x}$ around both of its vertical asymptotes. \item The degree of the numerator of $w$ is $2$ and the degree of the denominator of $w$ is also $2$. Using the ratio of the leading coefficients of the numerator and denominator, we say that $w$ has a horizontal asymptote with equation $y=\frac{2}{1}=2$. \item We put the details we have obtained so far on \cref{rat:fig:sketchtwoasymptp1}. The function $w$ is a little more complicated than the functions that we have considered in the previous examples because the curve has $3$ branches. When graphing such functions, it is generally a good idea to start with the branch for which you have the most information| in this case, that is the \emph{middle} branch on the interval $(-5,4)$. Once we have drawn the middle branch, there is only one way to complete the graph (because of our observations about the behavior of $w$ around its vertical asymptotes), which we have done in \cref{rat:fig:sketchtwoasymptp2}.
\end{steps} \end{pccsolution} \end{pccexample} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[soldot] coordinates{(-3,0)(5,0)}; \addplot[soldot] coordinates{(0,1.5)}node[axisnode,anchor=north west]{$\left( 0,\frac{3}{2} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:sketchtwoasymptp1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=2*(x+3)*(x-5)/( (x+5)*(x-4));}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[soldot] coordinates{(-3,0)(5,0)}; \addplot[soldot] coordinates{(0,1.5)}node[axisnode,anchor=north west]{$\left( 0,\frac{3}{2} \right)$}; \addplot[pccplot] expression[domain=-10:-5.56708]{f}; \addplot[pccplot] expression[domain=-4.63511:3.81708]{f}; \addplot[pccplot] expression[domain=4.13511:10]{f}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:sketchtwoasymptp2} \end{subfigure}% \caption{$y=\dfrac{2(x+3)(x-5)}{(x+5)(x-4)}$} \end{figure} The rational functions that we have considered so far have had simple factors in the denominator; each function has behaved like $\frac{1}{x}$ around each of its vertical asymptotes. \Cref{rat:ex:2asympnozeros,rat:ex:2squaredasymp} consider functions that have a repeated factor in the denominator. %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:2asympnozeros} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $f$ that has formula \[ f(x)=\frac{100}{(x+5)(x-4)^2} \] \begin{pccsolution} \begin{steps} \item $f$ has vertical asymptotes at $-5$ and $4$. $f$ does not have any holes. The curve of $f$ will have $3$ branches. \item $f$ does not have any zeros (since $100\ne 0$). The vertical intercept of $f$ is $\left( 0,\frac{5}{4} \right)$. \item $f$ behaves like $\frac{1}{x}$ around $-5$ and behaves like $\frac{1}{x^2}$ around $4$. \item The degree of the numerator of $f$ is $0$ and the degree of the denominator of $f$ is $2$. $f$ has a horizontal asymptote with equation $y=0$. \item We put the details we have obtained so far on \cref{rat:fig:2asympnozerosp1}. The function $f$ is similar to the function $w$ that we considered in \cref{rat:ex:sketchtwoasymp}| it has two vertical asymptotes and $3$ branches, but in contrast to $w$ it does not have any zeros. We sketch $f$ in \cref{rat:fig:2asympnozerosp2}, using the middle branch as our guide because we have the most information about the function on the interval $(-5,4)$. Once we have drawn the middle branch, there is only one way to complete the graph because of our observations about the behavior of $f$ around its vertical asymptotes (it behaves like $\frac{1}{x}$), which we have done in \cref{rat:fig:2asympnozerosp2}. Note that we are not yet able to find the local minimum of $f$ algebraically on the interval $(-5,4)$, so we make a reasonable guess as to where it is| we can be confident that it is above the horizontal axis since $f$ has no zeros. 
You may think that this is unsatisfactory, but once we have the tools of calculus, we will be able to find local minimums more precisely. \end{steps} \end{pccsolution} \end{pccexample} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{0}); \addplot[soldot] coordinates{(0,1.25)}node[axisnode,anchor=south east]{$\left( 0,\frac{5}{4} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:2asympnozerosp1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=100/( (x+5)*(x-4)^2);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{0}); \addplot[soldot] coordinates{(0,1.25)}node[axisnode,anchor=south east]{$\left( 0,\frac{5}{4} \right)$}; \addplot[pccplot] expression[domain=-10:-5.12022]{f}; \addplot[pccplot] expression[domain=-4.87298:2.87298,samples=50]{f}; \addplot[pccplot] expression[domain=5:10]{f}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:2asympnozerosp2} \end{subfigure}% \caption{$y=\dfrac{100}{(x+5)(x-4)^2}$} \end{figure} %=================================== % Author: Hughes % Date: May 2012 %=================================== \begin{pccexample}\label{rat:ex:2squaredasymp} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $g$ that has formula \[ g(x)=\frac{50(2-x)}{(x+3)^2(x-5)^2} \] \begin{pccsolution} \begin{steps} \item $g$ has vertical asymptotes at $-3$ and $5$. $g$ does not have any holes. The curve of $g$ will have $3$ branches. \item $g$ has a simple zero at $2$. The vertical intercept of $g$ is $\left( 0,\frac{4}{9} \right)$. \item $g$ behaves like $\frac{1}{x^2}$ around both of its vertical asymptotes. \item The degree of the numerator of $g$ is $1$ and the degree of the denominator of $g$ is $4$. Using \vref{rat:def:longrun}, we calculate that the horizontal asymptote of $g$ has equation $y=0$. \item The details that we have found so far have been drawn in \cref{rat:fig:2squaredasymp1}. The function $g$ is similar to the functions we considered in \cref{rat:ex:sketchtwoasymp,rat:ex:2asympnozeros} because it has $2$ vertical asymptotes and $3$ branches. We sketch $g$ using the middle branch as our guide because we have the most information about $g$ on the interval $(-3,5)$. Note that there is no other way to draw this branch without introducing other zeros which $g$ does not have. Once we have drawn the middle branch, there is only one way to complete the graph because of our observations about the behavior of $g$ around its vertical asymptotes| it behaves like $\frac{1}{x^2}$. 
\end{steps} \end{pccsolution} \end{pccexample} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-3},{x}); \addplot[asymptote,domain=-10:10]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{0}); \addplot[soldot] coordinates{(2,0)(0,4/9)}node[axisnode,anchor=south west]{$\left( 0,\frac{4}{9} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:2squaredasymp1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=50*(2-x)/( (x+3)^2*(x-5)^2);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, ytick={-5,5}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-3},{x}); \addplot[asymptote,domain=-10:10]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{0}); \addplot[soldot] coordinates{(2,0)(0,4/9)}node[axisnode,anchor=south west]{$\left( 0,\frac{4}{9} \right)$}; \addplot[pccplot] expression[domain=-10:-3.61504]{f}; \addplot[pccplot] expression[domain=-2.3657:4.52773]{f}; \addplot[pccplot] expression[domain=5.49205:10]{f}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:2squaredasymp2} \end{subfigure}% \caption{$y=\dfrac{50(2-x)}{(x+3)^2(x-5)^2}$} \end{figure} Each of the rational functions that we have considered so far has had either a \emph{simple} zero, or no zeros at all. Remember from our work on polynomial functions, and particularly \vref{poly:def:multzero}, that a \emph{repeated} zero corresponds to the curve of the function behaving differently at the zero when compared to how the curve behaves at a simple zero. \Cref{rat:ex:doublezero} details a function that has a non-simple zero. %=================================== % Author: Hughes % Date: June 2012 %=================================== \begin{pccexample}\label{rat:ex:doublezero} Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of the function $h$ that has formula \[ h(x)=\frac{(x-3)^2}{(x+4)(x-6)} \] \begin{pccsolution} \begin{steps} \item $h$ has vertical asymptotes at $-4$ and $6$. $h$ does not have any holes. The curve of $h$ will have $3$ branches. \item $h$ has a zero at $3$ that has \emph{multiplicity $2$}. The vertical intercept of $h$ is $\left( 0,-\frac{3}{8} \right)$. \item $h$ behaves like $\frac{1}{x}$ around both of its vertical asymptotes. \item The degree of the numerator of $h$ is $2$ and the degree of the denominator of $h$ is $2$. Using \vref{rat:def:longrun}, we calculate that the horizontal asymptote of $h$ has equation $y=1$. \item The details that we have found so far have been drawn in \cref{rat:fig:doublezerop1}. The function $h$ is different from the functions that we have considered in previous examples because of the multiplicity of the zero at $3$. We sketch $h$ using the middle branch as our guide because we have the most information about $h$ on the interval $(-4,6)$. Note that there is no other way to draw this branch without introducing other zeros which $h$ does not have| also note how the curve bounces off the horizontal axis at $3$. Once we have drawn the middle branch, there is only one way to complete the graph because of our observations about the behavior of $h$ around its vertical asymptotes| it behaves like $\frac{1}{x}$.
\end{steps} \end{pccsolution} \end{pccexample} \begin{figure}[!htbp] \begin{subfigure}{.45\textwidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-5,ymax=5, xtick={-8,-6,...,8}, ytick={-3,3}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-4},{x}); \addplot[asymptote,domain=-10:10]({6},{x}); \addplot[asymptote,domain=-10:10]({x},{1}); \addplot[soldot] coordinates{(3,0)(0,-3/8)}node[axisnode,anchor=north west]{$\left( 0,-\frac{3}{8} \right)$}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:doublezerop1} \end{subfigure}% \hfill \begin{subfigure}{.45\textwidth} \begin{tikzpicture}[/pgf/declare function={f=(x-3)^2/((x+4)*(x-6));}] \begin{axis}[ xmin=-10,xmax=10, ymin=-5,ymax=5, xtick={-8,-6,...,8}, ytick={-3,3}, width=\textwidth, ] \addplot[asymptote,domain=-10:10]({-4},{x}); \addplot[asymptote,domain=-10:10]({6},{x}); \addplot[asymptote,domain=-10:10]({x},{1}); \addplot[soldot] coordinates{(3,0)(0,-3/8)}node[axisnode,anchor=north west]{$\left( 0,-\frac{3}{8} \right)$}; \addplot[pccplot] expression[domain=-10:-5.20088]{f}; \addplot[pccplot] expression[domain=-3.16975:5.83642,samples=50]{f}; \addplot[pccplot] expression[domain=6.20088:10]{f}; \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:doublezerop2} \end{subfigure}% \caption{$y=\dfrac{(x-3)^2}{(x+4)(x-6)}$} \end{figure} \begin{exercises} %=================================== % Author: Hughes % Date: June 2012 %=================================== \begin{problem}[\Cref{rat:step:last}]\label{rat:prob:deduce} \pccname{Katie} is working on graphing rational functions. She has been concentrating on functions that have the form \begin{equation}\label{rat:eq:deducecurve} f(x)=\frac{a(x-b)}{x-c} \end{equation} Katie notes that functions with this type of formula have a zero at $b$, and a vertical asymptote at $c$. Furthermore, these functions behave like $\frac{1}{x}$ around their vertical asymptote, and the curve of each function will have $2$ branches. Katie has been working with $3$ functions that have the form given in \cref{rat:eq:deducecurve}, and has followed \crefrange{rat:step:first}{rat:step:penultimate}; her results are shown in \cref{rat:fig:deducecurve}. There is just one more thing to do to complete the graphs| follow \cref{rat:step:last}. Help Katie finish each graph by deducing the curve of each function. 
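(It may help to note that, because the numerator and denominator in \cref{rat:eq:deducecurve} have the same degree, each of these functions has a horizontal asymptote whose equation is given by the ratio of the leading coefficients, \[ y=\frac{a}{1}=a, \] so the value of $a$ can be read directly from the dashed horizontal line in each graph.)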
\begin{shortsolution} \Vref{rat:fig:deducecurve1} \begin{tikzpicture}[/pgf/declare function={f=3*(x+4)/(x+5);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(-4,0)(0,12/5)}; \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({x},{3}); \addplot[pccplot] expression[domain=-10:-5.42857]{f}; \addplot[pccplot] expression[domain=-4.76923:10,samples=50]{f}; \end{axis} \end{tikzpicture} \Vref{rat:fig:deducecurve2} \begin{tikzpicture}[/pgf/declare function={f=-3*(x-2)/(x-4);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(2,0)(0,-3/2)}; \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{-3}); \addplot[pccplot] expression[domain=-10:3.53846,samples=50]{f}; \addplot[pccplot] expression[domain=4.85714:10]{f}; \end{axis} \end{tikzpicture} \Vref{rat:fig:deducecurve4} \begin{tikzpicture}[/pgf/declare function={f=2*(x-6)/(x-4);}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(6,0)(0,3)}; \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[pccplot] expression[domain=-10:3.5,samples=50]{f}; \addplot[pccplot] expression[domain=4.3333:10]{f}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{problem} \begin{figure}[!htb] \begin{widepage} \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(-4,0)(0,12/5)}; \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({x},{3}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducecurve1} \end{subfigure}% \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(2,0)(0,-3/2)}; \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{-3}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducecurve2} \end{subfigure}% \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(6,0)(0,3)}; \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[asymptote,domain=-10:10]({4},{x}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducecurve4} \end{subfigure} \caption{Graphs for \cref{rat:prob:deduce}} \label{rat:fig:deducecurve} \end{widepage} \end{figure} %=================================== % Author: Hughes % Date: June 2012 %=================================== \begin{problem}[\Cref{rat:step:last} for more complicated rational functions]\label{rat:prob:deducehard} \pccname{David} is also working on graphing rational functions, and has been concentrating on functions that have the form \[ r(x)=\frac{a(x-b)(x-c)}{(x-d)(x-e)} \] David notices that functions with this type of formula have simple zeros at $b$ and $c$, and vertical asymptotes at $d$ and $e$. Furthermore, these functions behave like $\frac{1}{x}$ around both vertical asymptotes, and the curve of the function will have $3$ branches. David has followed \crefrange{rat:step:first}{rat:step:penultimate} for $3$ separate functions, and drawn the results in \cref{rat:fig:deducehard}. 
Help David finish each graph by deducing the curve of each function. \begin{shortsolution} \Vref{rat:fig:deducehard1} \begin{tikzpicture}[/pgf/declare function={f=(x-6)*(x+3)/( (x-4)*(x+1));}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(-3,0)(6,0)(0,9/2)}; \addplot[asymptote,domain=-10:10]({-1},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[pccplot] expression[domain=-10:-1.24276]{f}; \addplot[pccplot] expression[domain=-0.6666:3.66667]{f}; \addplot[pccplot] expression[domain=4.24276:10]{f}; \end{axis} \end{tikzpicture} \Vref{rat:fig:deducehard2} \begin{tikzpicture}[/pgf/declare function={f=3*(x-2)*(x+3)/( (x-6)*(x+5));}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(-3,0)(2,0)(0,3/5)}; \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({6},{x}); \addplot[asymptote,domain=-10:10]({x},{3}); \addplot[pccplot] expression[domain=-10:-5.4861]{f}; \addplot[pccplot] expression[domain=-4.68395:5.22241]{f}; \addplot[pccplot] expression[domain=7.34324:10]{f}; \end{axis} \end{tikzpicture} \Vref{rat:fig:deducehard3} \begin{tikzpicture}[/pgf/declare function={f=2*(x-7)*(x+3)/( (x+6)*(x-5));}] \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\solutionfigurewidth, ] \addplot[soldot] coordinates{(-3,0)(7,0)(0,1.4)}; \addplot[asymptote,domain=-10:10]({-6},{x}); \addplot[asymptote,domain=-10:10]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \addplot[pccplot] expression[domain=-10:-6.91427]{f}; \addplot[pccplot] expression[domain=-5.42252:4.66427]{f}; \addplot[pccplot] expression[domain=5.25586:10]{f}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{problem} \begin{figure}[!htb] \begin{widepage} \setlength{\figurewidth}{0.3\textwidth} \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(-3,0)(6,0)(0,9/2)}; \addplot[asymptote,domain=-10:10]({-1},{x}); \addplot[asymptote,domain=-10:10]({4},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducehard1} \end{subfigure}% \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(-3,0)(2,0)(0,3/5)}; \addplot[asymptote,domain=-10:10]({-5},{x}); \addplot[asymptote,domain=-10:10]({6},{x}); \addplot[asymptote,domain=-10:10]({x},{3}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducehard2} \end{subfigure}% \hfill \begin{subfigure}{\figurewidth} \begin{tikzpicture} \begin{axis}[ xmin=-10,xmax=10, ymin=-10,ymax=10, xtick={-8,-6,...,8}, width=\textwidth, ] \addplot[soldot] coordinates{(-3,0)(7,0)(0,1.4)}; \addplot[asymptote,domain=-10:10]({-6},{x}); \addplot[asymptote,domain=-10:10]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{2}); \end{axis} \end{tikzpicture} \caption{} \label{rat:fig:deducehard3} \end{subfigure}% \hfill \caption{Graphs for \cref{rat:prob:deducehard}} \label{rat:fig:deducehard} \end{widepage} \end{figure} %=================================== % Author: Adams (Hughes) % Date: March 2012 %=================================== \begin{problem}[\Crefrange{rat:step:first}{rat:step:last}] Use \crefrange{rat:step:first}{rat:step:last} to sketch a graph of each of the 
following functions \fixthis{need 2 more subproblems here} \begin{multicols}{4} \begin{subproblem} $y=\dfrac{4}{x+2}$ \begin{shortsolution} Vertical intercept: $(0,2)$; vertical asymptote: $x=-2$, horizontal asymptote: $y=0$. \begin{tikzpicture} \begin{axis}[ framed, xmin=-5,xmax=5, ymin=-5,ymax=5, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-5:-2.8]{4/(x+2)}; \addplot[pccplot] expression[domain=-1.2:5]{4/(x+2)}; \addplot[soldot]coordinates{(0,2)}; \addplot[asymptote,domain=-5:5]({-2},{x}); \addplot[asymptote,domain=-5:5]({x},{0}); \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{2x-1}{x^2-9}$ \begin{shortsolution} Vertical intercept:$\left( 0,\frac{1}{9} \right)$; horizontal intercept: $\left( \frac{1}{2},0 \right)$; vertical asymptotes: $x=-3$, $x=3$, horizontal asymptote: $y=0$. \begin{tikzpicture} \begin{axis}[ framed, xmin=-5,xmax=5, ymin=-5,ymax=5, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-5:-3.23974]{(2*x-1)/(x^2-9)}; \addplot[pccplot,samples=50] expression[domain=-2.77321:2.83974]{(2*x-1)/(x^2-9)}; \addplot[pccplot] expression[domain=3.17321:5]{(2*x-1)/(x^2-9)}; \addplot[soldot]coordinates{(0,1/9)(1/2,0)}; \addplot[asymptote,domain=-5:5]({-3},{x}); \addplot[asymptote,domain=-5:5]({3},{x}); \addplot[asymptote,domain=-5:5]({x},{0}); \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{x+3}{x-5}$ \begin{shortsolution} Vertical intercept $\left( 0,-\frac{3}{5} \right)$; horizontal intercept: $(-3,0)$; vertical asymptote: $x=5$; horizontal asymptote: $y=1$. \begin{tikzpicture} \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-5,ymax=5, xtick={-8,-6,...,8}, minor ytick={-3,-1,...,3}, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-10:3.666]{(x+3)/(x-5)}; \addplot[pccplot] expression[domain=7:10]{(x+3)/(x-5)}; \addplot[asymptote,domain=-5:5]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{1}); \addplot[soldot]coordinates{(0,-3/5)(-3,0)}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{2x+3}{3x-1}$ \begin{shortsolution} Vertical intercept: $(0,-3)$; horizontal intercept: $\left( -\frac{3}{2},0 \right)$; vertical asymptote: $x=\frac{1}{3}$, horizontal asymptote: $y=\frac{2}{3}$. \begin{tikzpicture}[/pgf/declare function={f=(2*x+3)/(3*x-1);}] \begin{axis}[ framed, xmin=-5,xmax=5, ymin=-5,ymax=5, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-5:0.1176]{f}; \addplot[pccplot] expression[domain=0.6153:5]{f}; \addplot[asymptote,domain=-5:5]({1/3},{x}); \addplot[asymptote,domain=-5:5]({x},{2/3}); \addplot[soldot]coordinates{(0,-3)(-3/2,0)}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{4-x^2}{x^2-9}$ \begin{shortsolution} Vertical intercept: $\left( 0,-\frac{4}{9} \right)$; horizontal intercepts: $(2,0)$, $(-2,0)$; vertical asymptotes: $x=-3$, $x=3$; horizontal asymptote: $y=-1$. 
\begin{tikzpicture}[/pgf/declare function={f=(4-x^2)/(x^2-9);}] \begin{axis}[ framed, xmin=-5,xmax=5, ymin=-5,ymax=5, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-5:-3.20156]{f}; \addplot[pccplot,samples=50] expression[domain=-2.85774:2.85774]{f}; \addplot[pccplot] expression[domain=3.20156:5]{f}; \addplot[asymptote,domain=-5:5]({-3},{x}); \addplot[asymptote,domain=-5:5]({3},{x}); \addplot[asymptote,domain=-5:5]({x},{-1}); \addplot[soldot] coordinates{(-2,0)(2,0)(0,-4/9)}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{(4x+5)(3x-4)}{(2x+5)(x-5)}$ \begin{shortsolution} Vertical intercept: $\left( 0,\frac{4}{5} \right)$; horizontal intercepts: $\left( -\frac{5}{4},0 \right)$, $\left( \frac{4}{3},0 \right)$; vertical asymptotes: $x=-\frac{5}{2}$, $x=5$; horizontal asymptote: $y=6$. \begin{tikzpicture}[/pgf/declare function={f=(4*x+5)*(3*x-4)/((2*x+5)*(x-5));}] \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-20,ymax=20, xtick={-8,-6,...,8}, ytick={-10,0,...,10}, minor ytick={-15,-5,...,15}, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot] expression[domain=-10:-2.73416]{f}; \addplot[pccplot] expression[domain=-2.33689:4.2792]{f}; \addplot[pccplot] expression[domain=6.26988:10]{f}; \addplot[asymptote,domain=-20:20]({-5/2},{x}); \addplot[asymptote,domain=-20:20]({5},{x}); \addplot[asymptote,domain=-10:10]({x},{6}); \addplot[soldot]coordinates{(0,4/5)(-5/4,0)(4/3,0)}; \end{axis} \end{tikzpicture} \end{shortsolution} \end{subproblem} \end{multicols} \end{problem} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Inverse functions] Each of the following rational functions is invertible \[ F(x)=\frac{2x+1}{x-3}, \qquad G(x)= \frac{1-4x}{x+3} \] \begin{subproblem} State the domain of each function. \begin{shortsolution} \begin{itemize} \item The domain of $F$ is $(-\infty,3)\cup(3,\infty)$. \item The domain of $G$ is $(-\infty,-3)\cup(-3,\infty)$. \end{itemize} \end{shortsolution} \end{subproblem} \begin{subproblem} Find the inverse of each function, and state its domain. \begin{shortsolution} \begin{itemize} \item $F^{-1}(x)=\frac{3x+1}{x-2}$; the domain of $F^{-1}$ is $(-\infty,2)\cup(2,\infty)$. \item $G^{-1}(x)=\frac{1-3x}{x+4}$; the domain of $G^{-1}$ is $(-\infty,-4)\cup(-4,\infty)$. \end{itemize} \end{shortsolution} \end{subproblem} \begin{subproblem} Hence state the range of the original functions. \begin{shortsolution} \begin{itemize} \item The range of $F$ is the domain of $F^{-1}$, which is $(-\infty,2)\cup(2,\infty)$. \item The range of $G$ is the domain of $G^{-1}$, which is $(-\infty,-4)\cup(-4,\infty)$. \end{itemize} \end{shortsolution} \end{subproblem} \begin{subproblem} State the range of each inverse function. \begin{shortsolution} \begin{itemize} \item The range of $F^{-1}$ is the domain of $F$, which is $(-\infty,3)\cup(3,\infty)$. \item The range of $G^{-1}$ is the domain of $G$, which is $(-\infty,-3)\cup(-3,\infty)$. \end{itemize} \end{shortsolution} \end{subproblem} \end{problem} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Composition] Let $r$ and $s$ be the rational functions that have formulas \[ r(x)=\frac{3}{x^2},\qquad s(x)=\frac{4-x}{x+5} \] Evaluate each of the following.
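Throughout, recall that the notation $(r\circ s)(a)$ means $r\bigl(s(a)\bigr)$: evaluate the inner function first, and then the outer function. For instance, using the value $x=-1$ (which is not one of the values asked for below), \[ (r\circ s)(-1)=r\bigl(s(-1)\bigr)=r\left(\tfrac{5}{4}\right)=\frac{3}{{(5/4)}^2}=\frac{48}{25}. \]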
\begin{multicols}{3} \begin{subproblem} $(r\circ s)(0)$ \begin{shortsolution} $\frac{75}{16}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $(s\circ r)(0)$ \begin{shortsolution} $(s\circ r)(0)$ is undefined. \end{shortsolution} \end{subproblem} \begin{subproblem} $(r\circ s)(2)$ \begin{shortsolution} $\frac{147}{4}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $(s\circ r)(3)$ \begin{shortsolution} $\frac{11}{16}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $(s\circ r)(4)$ \begin{shortsolution} $\frac{61}{83}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $(s\circ r)(x)$ \begin{shortsolution} $\dfrac{4x^2-3}{5x^2+3}$ \end{shortsolution} \end{subproblem} \end{multicols} \end{problem} %=================================== % Author: Hughes % Date: March 2012 %=================================== \begin{problem}[Piecewise rational functions] The function $R$ has formula \[ R(x)= \begin{dcases} \frac{2}{x+3}, & x<-5 \\ \frac{x-4}{x-10}, & x\geq -5 \end{dcases} \] Evaluate each of the following. \begin{multicols}{4} \begin{subproblem} $R(-6)$ \begin{shortsolution} $-\frac{2}{3}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $R(-5)$ \begin{shortsolution} $\frac{3}{5}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $R(-3)$ \begin{shortsolution} $\frac{7}{13}$ \end{shortsolution} \end{subproblem} \begin{subproblem} $R(5)$ \begin{shortsolution} $-\frac{1}{5}$ \end{shortsolution} \end{subproblem} \end{multicols} \begin{subproblem} What is the domain of $R$? \begin{shortsolution} $(-\infty,10)\cup(10,\infty)$ \end{shortsolution} \end{subproblem} \end{problem} \end{exercises} \section{Graphing rational functions (oblique asymptotes)}\label{rat:sec:oblique} \begin{subproblem} $y=\dfrac{x^2+1}{x-4}$ \begin{shortsolution} \begin{enumerate} \item $\left( 0,-\frac{1}{4} \right)$ \item Vertical asymptote: $x=4$. \item A graph of the function is shown below \begin{tikzpicture}[/pgf/declare function={f=(x^2+1)/(x-4);}] \begin{axis}[ framed, xmin=-20,xmax=20, ymin=-30,ymax=30, xtick={-10,10}, minor xtick={-15,-5,...,15}, minor ytick={-10,10}, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot,samples=50] expression[domain=-20:3.54724]{f}; \addplot[pccplot,samples=50] expression[domain=4.80196:20]{f}; \addplot[asymptote,domain=-30:30]({4},{x}); \end{axis} \end{tikzpicture} \end{enumerate} \end{shortsolution} \end{subproblem} \begin{subproblem} $y=\dfrac{x^3(x+3)}{x-5}$ \begin{shortsolution} \begin{enumerate} \item $(0,0)$, $(-3,0)$ \item Vertical asymptote: $x=5$, horizontal asymptote: none. \item A graph of the function is shown below \begin{tikzpicture}[/pgf/declare function={f=x^3*(x+3)/(x-5);}] \begin{axis}[ framed, xmin=-10,xmax=10, ymin=-500,ymax=2500, xtick={-8,-6,...,8}, ytick={500,1000,1500,2000}, grid=both, width=\solutionfigurewidth, ] \addplot[pccplot,samples=50] expression[domain=-10:4]{f}; \addplot[pccplot] expression[domain=5.6068:9.777]{f}; \addplot[asymptote,domain=-500:2500]({5},{x}); \end{axis} \end{tikzpicture} \end{enumerate} \end{shortsolution} \end{subproblem}
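When the degree of the numerator exceeds the degree of the denominator by exactly $1$, the function has an \emph{oblique} asymptote, which can be found by polynomial division. For the first function above, for example, \[ \frac{x^2+1}{x-4}=x+4+\frac{17}{x-4}, \] so the curve approaches the line $y=x+4$ in the long run. For the second function the degree of the numerator exceeds the degree of the denominator by $3$, so the long-run behavior is that of a cubic rather than a straight line.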
\section{\scshape Preliminary work}\label{sec:preliminary-work} \begin{frame}{Preliminary work} \begin{figure}[!ht] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[height=.33\textheight]{projection-dlp-white} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[height=.33\textheight]{projection-dlp} \end{minipage} \caption{Projection mapping system for DLP projectors} \end{figure}% \begin{figure}[!ht] \centering \begin{minipage}{.5\textwidth} \centering \includegraphics[height=.33\textheight]{projection-object} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[height=.33\textheight]{projection-side} \end{minipage} \caption{Projection mapping system for laser projectors} \end{figure} \end{frame}
\documentclass[letterpaper,12pt]{scrartcl} \usepackage{epsfig,latexsym,amsmath,amssymb,epic,eepic,psfrag,subfigure,float,euscript,array} \usepackage[latin1]{inputenc} \usepackage[margin=24mm]{geometry} \usepackage{enumitem} \usepackage{tikz,pgf,pgfplots} \usepgfplotslibrary{fillbetween} \usetikzlibrary{decorations, arrows, patterns} \usepackage{adjustbox} \usepackage[amssymb]{SIunits} \usepackage{array} \newcolumntype{L}{>{\[}l<{\]}} % math-mode version of "l" column type \newenvironment{exercise}[1][Problem]{\begin{trivlist} \item[\hskip \labelsep {\stepcounter{exerctr}\bfseries #1 \arabic{exerctr}}]}{\end{trivlist}\vspace{10mm}} \newcounter{exerctr} \newcounter{abcctr}[exerctr] \newcommand{\abc}{\noindent\vspace{1mm}\\ {\bf \stepcounter{abcctr}(\alph{abcctr})\ }} \newcommand{\bbm}{\begin{bmatrix}} \newcommand{\ebm}{\end{bmatrix}} \newcommand{\point}[1]{\hfill {\bf (#1p)}\\ \vspace{-5mm}} \newcommand{\ctrb}{\EuScript{S}} \newcommand{\Lap}{\mathcal{L}} \newcommand{\obsv}{\EuScript{O}} \newcommand{\realdel}{\text{Re}} \newcommand{\imagdel}{\text{Im}} \newcommand{\bC}{\mathbb{C}} \newcommand{\bR}{\mathbb{R}} \newcommand{\bmpv}{\begin{minipage}[t]} \newcommand{\bmps}{\begin{minipage}[t]{45mm}} \newcommand{\bmpm}{\begin{minipage}[t]{90mm}} \newcommand{\bmpl}{\begin{minipage}[t]{\textwidth}} \newcommand{\emp}{\end{minipage}} \newcommand{\mexp}[1]{\ensuremath{\mathrm{e}^{#1}}} \newcommand*{\laplaceinv}[1]{\ensuremath{\mathcal{L}^{-1} \left\{#1\right\}}} \newcommand*{\ztrf}[1]{\ensuremath{\mathcal{Z} \left\{#1\right\}}} \newcommand*{\shift}{\ensuremath{\operatorname{q}}} \newcommand*{\diff}{\ensuremath{\operatorname{p}}} \newcommand\tikzmark[2]{\tikz[overlay, remember picture, anchor=base] \node (#1) {#2};} \newcommand\tikzmathmark[2]{\tikz[remember picture, inner sep=0pt, outer sep=0pt, baseline, anchor=base] \node (#1) {\ensuremath{#2}};} \addtolength{\topmargin}{-8mm} %\textheight 22.5cm %\oddsidemargin 1.3cm %\evensidemargin 1.3cm \makeatletter \newcommand*{\rom}[1]{\expandafter\@slowromancap\romannumeral #1@} \makeatother \newcommand*\circled[1]{\tikz[baseline=(char.base)]{ \node[shape=circle,draw,inner sep=2pt] (char) {#1};}} \title{Computerized Control Final exam (28\%)} \author{Kjartan Halvorsen} \date{} \begin{document} \maketitle \begin{description} \item[Time] Wednesday November 28 19:10 --- 21.55 \item[Place] 4206 \item[Permitted aids] The single colored page with your own notes, table of Laplace transforms, calculator \end{description} All answers should be readable and well motivated (if nothing else is written). Solutions/motivations should be written on the provided spaces in this exam. Use the last page if more space is needed. \begin{center} {\Large Good luck!} \\ \end{center} \noindent \fbox{ \bmpl% {\bf Matricula and name:}\\ \vspace*{18mm} \emp} \clearpage %----------------------------------------------------------------- % Exercise 1. Block-diagram calculations %----------------------------------------------------------------- \begin{exercise} The figure below shows a block-diagram of a two-degrees-of-freedom control system. \begin{center} \includegraphics[width=0.6\linewidth]{../../homework/2dof-block-complete} \end{center} Which of the following pulse transfer operators describes the \textbf{closed-loop response}, $y(kh)$, to the measurement noise sequence $n(kh)$? No motivation required. 
\begin{enumerate} \item \( H_n(q) = \frac{1}{1 + H(q)F_b(q)} \) \item \( H_n(q) = -\frac{H(q)F_f(q)}{1 + H(q)F_f(q)} \) \item \( H_n(q) = -\frac{H(q)F_b(q)}{1 + H(q)F_b(q)} \) \item \( H_n(q) = \frac{H(q)F_b(q)}{1 + H(q)F_b(q)} \) \end{enumerate} \end{exercise} % ----------------------------------------------------------------- % Exercise 2. Match pulse-transfer function with step-response %----------------------------------------------------------------- \begin{exercise} For each of the four pulse-transfer functions below, determine which of the step-responses in figure \ref{fig:steps} it corresponds to. \begin{align*} G_1(z) &= \frac{0.026z + 0.024}{z(z-0.95)}\\ G_2(z) &= \frac{0.13z + 0.12}{z^2 - z + 0.25}\\ G_3(z) &= \frac{0.54z + 0.52}{{(z-0.5)}^2 + 0.81}\\ G_4(z) &= \frac{0.025z + 0.025}{{(z-0.8)}^2 - 0.09} \end{align*} %\begin{center} %\includegraphics[height=0.3\textheight]{../../figures/zgrid-crop}\\ %\end{center} \begin{figure}[hbtp] \begin{center} \includegraphics[width=\linewidth]{step-responses} \caption{Exercise \arabic{exerctr} Match the step-responses with the pulse-transfer functions.}\label{fig:steps} \end{center} \end{figure} \noindent \fbox{ \bmpl% {\bf Motivation:}\\ \vspace*{120mm} \emp} \end{exercise} % -------------------------------------------------------------------------------- % Match pulse-transfer function with difference equations %-------------------------------------------------------------------------------- \begin{exercise} A discrete-time LTI with input $u(k)$ and output $y(k)$ can be represented as a linear difference equation, or with a transfer operator $H(\shift)$, where $y(k) = H(\shift)u(k)$. In the table that follows, connect the pulse-transfer operator with the corresponding difference equation. No motivation required. \def\tsep{5mm} \begin{center} \begin{tabular}{|p{5cm}p{8cm}|} \hline $H(\shift)$ & Difference equation\\\hline\hline \vspace*{1mm}$\frac{b_0\shift+b_1}{\shift + a_1}$ & \vspace*{1mm}$y(k+1) = -a_1y(k) + b_0u(k) + b_1u(k-1)$\\[\tsep] $\frac{b_0\shift+b_1}{\shift + a_1}\shift^{-1}$ & $y(k+1) = -a_1y(k-1) + b_0u(k-1) + b_1u(k-2)$\\[\tsep] $\frac{b_0\shift+b_1}{\shift^2 + a_1}$ & $y(k+1) = -a_1y(k) + b_0u(k+1) + b_1u(k)$\\[\tsep] $\frac{b_0\shift+b_1}{\shift^2 + a_1}\shift^{-1}$ & $y(k+1) = -a_1y(k-1) + b_0u(k) + b_1u(k-1)$\\[\tsep]\hline \end{tabular} \end{center} \end{exercise} \begin{exercise} A continuous-time lead-compensator \begin{equation} F(s) = 10\frac{s+1}{s+4} \label{eq:lead} \end{equation} has been designed to control a plant $G(s)$. 
\begin{center} \begin{tikzpicture}[scale = 0.8, node distance=22mm, block/.style={rectangle, draw, minimum width=15mm}, sumnode/.style={circle, draw, inner sep=2pt}] \node[coordinate] (input) {}; \node[sumnode, right of=input, node distance=12mm] (sum) {\tiny $\sum$}; \node[block, right of=sum, node distance=28mm] (controller) {$F(s)$}; \node[block, right of=controller, node distance=38mm] (plant) {$G(s)$}; \node[coordinate, right of=plant, node distance=35mm] (output) {}; \draw[->] (input) -- node[above, pos=0.3] {$y_{ref}$} (sum); \draw[->] (sum) -- node[above, pos=0.3] {$$} (controller); \draw[->] (controller) -- node[above, pos=0.3] {$$} (plant); \draw[->] (plant) -- node[coordinate] (measure) {} node[above, very near end] {$y$} (output); \draw[->] (measure) -- ++(0, -2) -| node[pos=0.95, left] {$-$} (sum); \end{tikzpicture} \end{center} The continuous-time lead-compensator is discretized using the backward Euler approximation \[\frac{d}{dt} \approx \frac{\shift -1}{h\shift}.\] \abc Which of the below pulse-transfer functions is the correct one (circle)? No motivation required. \begin{align*} F_d(z) &= 10\frac{z(1+h)-1}{z(1+4h)-1}\\[2mm] F_d(z) &= 10\frac{z(1+4h)-1}{z(1+h)-1}\\[2mm] F_d(z) &= 10\frac{z+1}{z+4}\\[2mm] F_d(z) &= 10\frac{z(1+h)-1}{z(1+h)-4}\\[2mm] \end{align*} \pgfmathsetmacro\omegac{2} \pgfmathsetmacro\hh{0.2} \pgfmathsetmacro\argF{atan2(\omegac, 1) - atan2(\omegac, 4)} \pgfmathsetmacro\argFd{atan2( (1+\hh)*sin(deg(2*\hh)), (1+\hh)*cos(deg(2*\hh))-1) - atan2( (1+4*\hh)*sin(deg(2*\hh)), (1+4*\hh)*cos(deg(2*\hh))-1 )} \abc [Bonus problem worth 4p] The purpose of using a lead-compensator, as compared to a simpler proportional controller, is to improve the stability of the closed-loop system. The frequency response $F(i\omega)$ of the lead-compensator has positive phase, which improves the phase-margin of the system. Assuming that the cross-over frequency is $\omega_c = \unit{2}{\radian\per\second}$, the proposed lead-compensator \eqref{eq:lead} has a phase of \[ \arg F(i\omegac) = \arg 10 + \arg (i\omegac + 1) - \arg(i\omegac + 4) = \unit{\argF}{\degree}\] at the cross-over frequency. If the sampling period is $h=0.2$. What is the phase of the discretized filter $F_d(z)$ at the cross-over frequency? [You get 2p for answering correctly whether the phase of the discretized controller is larger or smaller than that of the continuous-time controller.] \noindent \fbox{ \bmpl% {\bf Calculations and answer:}\\ \vspace*{70mm} %\argFd \emp} \end{exercise} %------------------------------------------------------------------------------------------------- % State-space %------------------------------------------------------------------------------------------------- \begin{exercise} \abc% Consider the first-order difference equation \(x(k+1) = ax(k) + bu(k)\). Which of the following is the correct expression for this discrete-time system in the z-domain? No motivation required. 
\begin{align} X(z) &= (z-a)z^{-1}bU(z)\\ X(z) &= z{(z-a)}^{-1}bU(z)\\ X(z) &= {(z-a)}^{-1}bU(z)\\ X(z) &= (z-a)bU(z)\\ \end{align} \abc% Consider the discrete-time DC-motor model \pgfmathsetmacro\hh{0.2} \pgfmathsetmacro\expminh{exp(-\hh)} \pgfmathsetmacro\oneminexpminh{1-\expminh} \pgfmathsetmacro\aoneone{\expminh} \pgfmathsetmacro\atwoone{\oneminexpminh} \pgfmathsetmacro\bone{\oneminexpminh} \pgfmathsetmacro\btwo{\hh - 1+\expminh} \begin{equation*} \begin{aligned} x(k+1) &= \bbm \pgfmathprintnumber[fixed,precision=1]{\aoneone} & 0\\ \pgfmathprintnumber[fixed,precision=1]{\atwoone} & 1\ebm x(k) + \bbm \pgfmathprintnumber[fixed,precision=1]{\bone}\\ \pgfmathprintnumber[fixed,precision=2]{\btwo}\ebm u(k)\\ y(k) &= \bbm 0 & 1 \ebm x(k), \end{aligned} \end{equation*} where \(x_1(k)\) is the angular velocity and \(x_2(k)\) is the angular position of the motor shaft. Which of the below is the correct characteristic equation of the model? Motivate! \begin{enumerate} \item \((z-0.8)(z+1) = 0\) \item \((z-0.8)(z-1) = 0\) \item \((z+0.8)(z+1) = 0\) \item \((z+0.8)(z-1) = 0\) \end{enumerate} \noindent \fbox{ \bmpl% {\bf Motivation:}\\ \vspace*{50mm} \emp} \abc% If we attempt to control the DC-motor with the control law \begin{center} \begin{adjustbox}{margin=1cm} \begin{tikzpicture} \node {$u(k) = -0.6\tikzmathmark{angvel}{\overbrace{x_1(k)}} - 13\tikzmathmark{angpos}{\underbrace{x_2(k)}} + y_{ref}(k)$}; \end{tikzpicture} \end{adjustbox} \end{center} \tikz[overlay, remember picture] \draw[<-, thin] (angvel) -- ++(1cm,1cm) node[right] {angular velocity}; \tikz[overlay, remember picture] \draw[<-, thin] (angpos) -- ++(1cm,-1cm) node[right] {angle}; which poles will the closed-loop system have? No motivation required. \begin{enumerate} \item \( z = 0.71 \pm 0.70 i\) \item \( z_1 = 0.7, \; z_2 = 0.7\) \item \( z_1 = 0, \; z_2 = 0\) \item \( z_1 = 0, \; z_2 = 0.7\) \end{enumerate} \abc% Discuss in 2-3 sentences the properties of the closed-loop system obtained in \textbf{(c)}. \noindent \fbox{ \bmpl% {\bf Properties of the closed-loop system:}\\ \vspace*{50mm} \emp} \end{exercise} %\cleardoublepage %\end{document} \section*{Solutions} \setcounter{exerctr}{0} \begin{exercise} Using Mason's rule, which says that the numerator of the closed-loop pulse-transfer function equals the gain in the direct path from the input signal to the output signal, it is clear that the answer must be \( H_n(q) = -\frac{H(q)F_b(q)}{1 + H(q)F_b(q)} \). \end{exercise} \begin{exercise} \pgfmathsetmacro{\omegan}{sqrt(1+pow(0.6, 2))} \pgfmathsetmacro{\omeganhertz}{\omegan * 1000/2/pi} \pgfmathsetmacro{\hh}{0.2} \pgfmathsetmacro{\pdreal}{exp(-\hh)*cos(deg(0.6*\hh))} \pgfmathsetmacro{\pdim}{exp(-\hh)*sin(deg(0.6*\hh))} The idea is to figure out the poles of the four systems, and from this determine how the system should respond. The system $G_1(z)$ has poles in $z=0$ and $z=0.95$, the latter of which is dominating. This gives a slow first-order response. The system $G_2(z)$ is critically damped with two poles in $z=0.5$. This has a stable, fast response with no overshoot. System $G_3(z)$ has two poles in $z = 0.5 \pm i0.9$ which are outside the unit circle. This gives an oscillating, unstable response. Finally, $G_4(z)$ has poles in $z=0.5$ and $z=1.1$. This gives an unstable response without oscillations. The analysis leads to the pairing: \textbf{$G_1(z)$ --- \rom{1}}, \textbf{$G_2(z)$ --- \rom{3}}, \textbf{$G_3(z)$ --- \rom{4}}, \textbf{$G_4(z)$ --- \rom{2}}.
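As a check, the pole locations follow directly from the denominators; for $G_3$ and $G_4$, for example, \begin{align*} {(z-0.5)}^2+0.81=0 \quad &\Rightarrow \quad z=0.5\pm 0.9i, \qquad |z|=\sqrt{0.25+0.81}\approx 1.03>1,\\ {(z-0.8)}^2-0.09=0 \quad &\Rightarrow \quad z=0.8\pm 0.3, \qquad \text{i.e.\ } z_1=1.1,\; z_2=0.5, \end{align*} which confirms that both of these systems have at least one pole outside the unit circle, and hence unstable responses.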
\end{exercise} \begin{exercise} The z-transform can be used in this exercise to find the correct answer, or more directly to just write the difference equation using the shift operator. For instance, the first difference equation becomes \[ y(k+1) = -a_1y(k) + b_0u(k) + b_1u(k-1)\] \[ (\shift + a_1)y(k) = (b_0 + b_1\shift^{-1}) u(k)\] \[ y(k) = \frac{b_0\shift + b_1}{\shift + a_1}\shift^{-1} u(k).\] The complete answer is %\end{exercise} %\end{document} \def\tsep{5mm} \begin{center} \begin{tabular}{|p{5cm}p{8cm}|} \hline $H(\shift)$ & Difference equation\\\hline\hline \vspace*{1mm}$\frac{b_0\shift+b_1}{\shift + a_1}$\tikzmark{H1}{} & \vspace*{1mm}\tikzmark{sys1}{}{$y(k+1) = -a_1y(k) + b_0u(k) + b_1u(k-1)$}\\[\tsep] $\frac{b_0\shift+b_1}{\shift + a_1}\shift^{-1}$\tikzmark{H2}{} & \tikzmark{sys2}{}{$y(k+1) = -a_1y(k-1) + b_0u(k-1) + b_1u(k-2)$}\\[\tsep] $\frac{b_0\shift+b_1}{\shift^2 + a_1}$\tikzmark{H3}{} & \tikzmark{sys3}{}{$y(k+1) = -a_1y(k) + b_0u(k+1) + b_1u(k)$}\\[\tsep] $\frac{b_0\shift+b_1}{\shift^2 + a_1}\shift^{-1}$\tikzmark{H4}{} & \tikzmark{sys4}{}{$y(k+1) = -a_1y(k-1) + b_0u(k) + b_1u(k-1)$}\\[\tsep]\hline \end{tabular} \end{center} \tikz[remember picture, overlay] \draw[<->, blue!60!black] (H1) to[out=0, in=180] (sys3); \tikz[remember picture, overlay] \draw[<->, orange!60!black] (H2) to[out=0, in=180] (sys1); \tikz[remember picture, overlay] \draw[<->, red!60!black] (H3) to[out=0, in=180] (sys4); \tikz[remember picture, overlay] \draw[<->, green!60!black] (H4) to[out=0, in=180] (sys2); \end{exercise} \begin{exercise} \abc% To find the discretized controller using the backward Euler approximation, substitute $s = \frac{z-1}{zh}$ in the transfer function. This gives \begin{equation*} \begin{aligned} F_d(z) &= F(s)|_{s=\frac{z-1}{zh}} = 10 \frac{\frac{z-1}{zh} + 1}{\frac{z-1}{zh} + 4}\\ &= 10 \frac{z-1 + zh}{z-1 + 4zh} = 10\frac{z(1+h) - 1}{z(1+4h) - 1}. \end{aligned} \end{equation*} \abc% In discrete-time, the frequency response of a pulse-transfer function is obtained by evaluating the pulse-transfer function on the upper half of the unit circle, i.e.~\(F_d(z=\mathrm{e}^{i\omega h})\), where $\omega$ is the continuous-time frequency and $h$ is the sampling period. The Bode-plot of a discrete-time system ends at the frequency corresponding to $z=\mathrm{e}^{i\pi}$, which is the Nyquist frequency $\omega_N = \frac{\pi}{h}$. We are asked to determine the phase (argument) of the discrete-time pulse-transfer function at the frequency $\omega_c = \unit{2}{\radian\per\second}$, which means to determine $\arg F_d(\mathrm{e}^{i\omega_c h})$, which is \pgfmathsetmacro\omegac{2} \pgfmathsetmacro\hh{0.2} \pgfmathsetmacro\argF{atan2(\omegac, 1) - atan2(\omegac, 4)} \pgfmathsetmacro\argFd{atan2( (1+\hh)*sin(deg(2*\hh)), (1+\hh)*cos(deg(2*\hh))-1) - atan2( (1+4*\hh)*sin(deg(2*\hh)), (1+4*\hh)*cos(deg(2*\hh))-1 )} \begin{equation*} \begin{aligned} \arg F_d(\mathrm{e}^{i\omega_c h}) &= \arg 10 \frac{\mathrm{e}^{i\omega_c h}(1+h) - 1}{\mathrm{e}^{i\omega_c h}(1+4h) - 1}\\ &= \arg 10 + \arg \big(\mathrm{e}^{i\omega_c h}(1+h) - 1\big) - \arg \big(\mathrm{e}^{i\omega_c h}(1+4h) - 1\big)\\ &= 0 + \arg \big(\mathrm{e}^{i0.4}1.2 - 1\big) - \arg \big(\mathrm{e}^{i0.4}1.8 - 1\big)\\ &= \arg\big( (1.2\cos(0.4)-1) + i1.2\sin(0.4) \big) - \arg\big( (1.8\cos(0.4)-1) + i1.8\sin(0.4)\big)\\ &= \tan^{-1}\left(\frac{1.2\sin(0.4)}{1.2\cos(0.4)-1}\right) - \tan^{-1}\left(\frac{1.8\sin(0.4)}{1.8\cos(0.4)-1}\right) = \unit{\argFd}{\degree} < \unit{\argF}{\degree}.
\end{aligned}
\end{equation*}
\end{exercise}
\begin{exercise}
\abc%
Applying the z-transform to the difference equation $x(k+1) = ax(k) + bu(k)$ (assuming $x(0)=0$) gives
\[ zX(z) = aX(z) + bU(z)\]
\[ zX(z) - aX(z) = bU(z)\]
\[ (z-a)X(z) = bU(z)\]
\[ X(z) = {(z-a)}^{-1}bU(z) = \frac{b}{z-a}U(z).\]
\abc%
The poles of the system are the eigenvalues of the state-transition matrix
\[ \Phi = \bbm 0.8 & 0\\0.2 & 1\ebm.\]
The eigenvalues are easy to determine here, since the matrix is triangular. They are found on the diagonal as $z=0.8$ and $z=1$. This means that the characteristic equation must be
\[ (z-0.8)(z-1) = 0. \]
\abc%
The suggested control law is $u(k) = -0.6x_1(k) - 13x_2(k) + y_{ref}(k)$. Note the large gain in the feedback for the second state element, which is the angular position. The feedback gain for the first element, which is the angular velocity, is much smaller. We should therefore expect a system with small damping. Only Case 1 fits: $z=0.71 \pm 0.70i$.
You can of course also just calculate what the poles are from the characteristic equation for the closed-loop system $\det \left( zI - (\Phi - \Gamma L)\right) = 0$:
\begin{equation*}
\begin{aligned}
\det\left( zI - (\Phi - \Gamma L) \right) &= \det\left( \bbm z & 0\\ 0 & z\ebm - \Big( \bbm 0.8 & 0\\0.2 & 1 \ebm - \bbm 0.2\cdot 0.6 & 0.2\cdot 13\\ 0.02\cdot 0.6 & 0.02\cdot 13\ebm\Big)\right)\\
&= \det \bbm z-0.8+0.12 & 2.6\\-0.2 + 0.012 & z-1+0.26 \ebm\\
&= (z-0.68)(z-0.74) + 2.6\cdot{}0.188 = z^2 -1.42z + 0.992\\
&= (z-0.71-i0.70)(z-0.71+i0.70).
\end{aligned}
\end{equation*}
\abc%
The poles have magnitude \(|0.71+0.70i| = 0.997\), so the system is almost undamped. Any excitation of the system will give oscillations that die out very slowly. The frequency of the oscillations is given by
\pgfmathsetmacro\argP{rad(atan2(0.70, 0.71))}
\pgfmathsetmacro\omegar{\argP/0.2}
\[ \omega_r h = \arg (0.71+0.70i) = \argP\]
which gives
\[ \omega_r = \unit{\omegar}{\radian\per\second}.\]
\end{exercise}
\end{document}
{ "alphanum_fraction": 0.6375315691, "avg_line_length": 42.3581395349, "ext": "tex", "hexsha": "06ac60a9e5966aca4250f775f112058cee8c7ab1", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-14T03:55:27.000Z", "max_forks_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_forks_repo_path": "exams/final-exam/MR2007-final-exam-fall18.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_issues_repo_issues_event_max_datetime": "2020-06-12T20:49:00.000Z", "max_issues_repo_issues_event_min_datetime": "2020-06-12T20:44:41.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_issues_repo_path": "exams/final-exam/MR2007-final-exam-fall18.tex", "max_line_length": 645, "max_stars_count": 2, "max_stars_repo_head_hexsha": "16e35f5007f53870eaf344eea1165507505ab4aa", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kjartan-at-tec/mr2007-computerized-control", "max_stars_repo_path": "exams/final-exam/MR2007-final-exam-fall18.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-22T09:46:13.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-07T05:20:37.000Z", "num_tokens": 6773, "size": 18214 }
\addcontentsline{toc}{chapter}{Contact Information}
\begin{normalsize}
\begin{center}
\section*{Contact Information}
%\addcontentsline{toc}{chapter}{DECLARATION}
%\vspace*{1.0cm}
%\centerline{\Large{\bf DECLARATION}}
\end{center}
\doublespacing
\vspace{1.0cm}
\noindent
This project report is submitted to the School of Computing, Software Engineering Academic Program at Debre Markos University in partial fulfillment of the requirements for the degree of Bachelor of Science in Software Engineering. The authors' contact information is given below:
\begin{tabular}{p{.2in}p{1.4in}p{1.7in}p{1.5in} }
\\ \\
& Name & ID & E-mail \\ \\
1. & Simon Belete & TER\_0087\_09 & [email protected] \\ \\
2. & Hailemariem Lema & TER\_0104\_09 & \hrulefill \\ \\
3. & Mulat Shimelash & TER\_0082\_09 & \hrulefill \\ \\
\end{tabular}
\end{normalsize}
{ "alphanum_fraction": 0.72520908, "avg_line_length": 27, "ext": "tex", "hexsha": "3b76eb376c7b6d7cdaf0c17922d01ed4909accf7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c7bee21024207c7611f851ddd991b2eacd14171a", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Guya-LTD/srs", "max_forks_repo_path": "CONTACT_INFO.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c7bee21024207c7611f851ddd991b2eacd14171a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Guya-LTD/srs", "max_issues_repo_path": "CONTACT_INFO.tex", "max_line_length": 260, "max_stars_count": 1, "max_stars_repo_head_hexsha": "c7bee21024207c7611f851ddd991b2eacd14171a", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Guya-LTD/srs", "max_stars_repo_path": "CONTACT_INFO.tex", "max_stars_repo_stars_event_max_datetime": "2021-01-14T16:20:34.000Z", "max_stars_repo_stars_event_min_datetime": "2021-01-14T16:20:34.000Z", "num_tokens": 260, "size": 837 }
\documentclass[12pt]{cdblatex} \usepackage{exercises} \usepackage{fancyhdr} \usepackage{footer} \begin{document} % -------------------------------------------------------------------------------------------- \section*{Exercise 2.4 Combining rules -- a problem} \begin{cadabra} {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u#}::Indices(position=independent). \nabla{#}::Derivative. \partial{#}::PartialDerivative. # rules for covariant derivatives of v deriv1 := \nabla_{a}{v^{b}} -> \partial_{a}{v^{b}} + \Gamma^{b}_{d a} v^{d}. deriv2 := \nabla_{a}{\nabla_{b}{v^{c}}} -> \partial_{a}{\nabla_{b}{v^{c}}} + \Gamma^{c}_{d a} \nabla_{b}{v^{d}} - \Gamma^{d}_{b a} \nabla_{d}{v^{c}}. # attempt to combine both rules for second covariant derivative of v substitute (deriv2,deriv1) # cdb (ex-0204.101,deriv2) \end{cadabra} \vskip 1cm Note that the call to \verb|substitute| has made changes to both sides of the rule for \verb|deriv2|. This is not ideal and a better method is developed in the following exercise. \begin{align*} \Cdb{ex-0204.101} \end{align*} \end{document}
{ "alphanum_fraction": 0.5426482535, "avg_line_length": 29.3095238095, "ext": "tex", "hexsha": "9b7be61b25cdbfb36c4d8e0697979ee4ca1514ac", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_forks_event_min_datetime": "2019-12-22T13:52:19.000Z", "max_forks_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "leo-brewin/cadabra-tutorial", "max_forks_repo_path": "source/cadabra/exercises/ex-0204.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "leo-brewin/cadabra-tutorial", "max_issues_repo_path": "source/cadabra/exercises/ex-0204.tex", "max_line_length": 94, "max_stars_count": 20, "max_stars_repo_head_hexsha": "5b428ae158b5346315ab6c975dee9de933e5c3d7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "leo-brewin/cadabra-tutorial", "max_stars_repo_path": "source/cadabra/exercises/ex-0204.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-27T22:55:47.000Z", "max_stars_repo_stars_event_min_datetime": "2019-12-20T07:49:47.000Z", "num_tokens": 372, "size": 1231 }
\chapter{Participant Generated Paths} \label{user_generated_paths} \begin{figure}[!b] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sue_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_tin_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_coy_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_find_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_mans_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_core_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_bale_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_tales_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shier_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_falls_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sires_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_dubbed_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sicker_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_dipped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_hailed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Touch Screen Keyboard]{The participant-generated word-gesture paths using the Touch Screen Keyboard for the dictionary words: `sue', `tin', `coy', `find', `mans', `core', `bale', `tales', `shier', `falls', `sires', `dubbed', `sicker', `dipped', `hailed'.} \end{figure} \clearpage \begin{figure}[t] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_hot_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_tub_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_dot_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_rubs_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_owns_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_fire_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_vale_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_bales_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shies_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_galls_paths} \end{minipage} \end{minipage} 
\begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_fores_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_rubbed_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_docket_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_ripped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_mailed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Leap Surface Keyboard]{The participant-generated word-gesture paths using the Leap Surface Keyboard for the dictionary words: `hot', `tub', `dot', `rubs', `owns', `fire', `vale', `bales', `shies', `galls', `fores', `rubbed', `docket', `ripped', `mailed'.} \end{figure} \clearpage \begin{figure}[t] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_got_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_rub_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sir_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_rind_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_lane_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_sire_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_gape_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_hales_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shirt_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_balls_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sores_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_subbed_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_socket_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_yipped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_bailed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Leap Static-air Keyboard]{The participant-generated word-gesture paths using the Leap Static-air Keyboard for the dictionary words: `got', `rub', `sir', `rind', `lane', `sire', `gape', `hales', `shirt', `balls', `sores', `subbed', `socket', `yipped', `bailed'.} \end{figure} \clearpage \begin{figure}[t] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_due_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_gin_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} 
\includegraphics[width=3.3in]{Figures/fig_zit_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_fibs_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_pens_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_fore_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_hale_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_gales_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shire_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_calls_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_fires_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_tinned_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_rocker_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_topped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_nailed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Leap Predictive-air Keyboard]{The participant-generated word-gesture paths using the Leap Predictive-air Keyboard for the dictionary words: `due', `gin', `zit', `fibs', `pens', `fore', `hale', `gales', `shire', `calls', `fires', `tinned', `rocker', `topped', `nailed'.} \end{figure} \clearpage \begin{figure}[t] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_fir_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_run_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sit_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_fine_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_pans_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_dire_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_tale_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_dales_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shied_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_halls_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_sites_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_sinned_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_wicker_paths} \end{minipage} 
\begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_hopped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_failed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Leap Bimodal-air Keyboard]{The participant-generated word-gesture paths using the Leap Bimodal-air Keyboard for the dictionary words: `fir', `run', `sit', `fine', `pans', `dire', `tale', `dales', `shied', `halls', `sites', `sinned', `wicker', `hopped', `failed'.} \end{figure} \clearpage \begin{figure}[t] \centering \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_fit_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_rim_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_soy_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_fins_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_mane_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_sure_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_gale_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_games_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_shift_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_gamma_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_cores_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_finned_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_rocket_paths} \end{minipage} \begin{minipage}[t]{3in} \includegraphics[width=3.3in]{Figures/fig_tipped_paths} \end{minipage} \end{minipage} \begin{minipage}[t]{8in} \hspace{-20pt} \begin{minipage}[t]{3.1in} \includegraphics[width=3.3in]{Figures/fig_jailed_paths} \end{minipage} \end{minipage} \caption[User Generated Paths for the Leap Pinch-air Keyboard]{The participant-generated word-gesture paths using the Leap Pinch-air Keyboard for the dictionary words: `fit', `rim', `soy', `fins', `mane', `sure', `gale', `games', `shift', `gamma', `cores', `finned', `rocket', `tipped', `jailed'.} \end{figure}
{ "alphanum_fraction": 0.7144992526, "avg_line_length": 28.2676056338, "ext": "tex", "hexsha": "71f13510b5c7feb995b956f3896214ba90a3c790", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Malificiece/Leap-Motion-Thesis", "max_forks_repo_path": "docs/Thesis/appendixC.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Malificiece/Leap-Motion-Thesis", "max_issues_repo_path": "docs/Thesis/appendixC.tex", "max_line_length": 308, "max_stars_count": null, "max_stars_repo_head_hexsha": "a87684c7e9c1d7250922d00f37f31ae242dcc363", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Malificiece/Leap-Motion-Thesis", "max_stars_repo_path": "docs/Thesis/appendixC.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 5958, "size": 14049 }
%&LaTeX \section{The Z-Transform and Convolution} This lab covers the z-transform, used to convert arbitrary digital signals to the frequency domain. It also exercises the relationship between a filter's transfer function and impulse response and how the operations of multiplication and convolution, respectively, can be used to compute a filter's output. \subsection{The z-transform, Transfer Function, \& Impulse Response} A discrete signal $x[n]$ has a z-transform $X(z)$ defined by the following equation: \[ X(z)=\sum_{n=0}^{\infty}x[n]z^{-n} \] With this definition lets investigate a feed forward filter with ten coefficients, $\{b_0, b_1,\cdots, b_9\}$. Recall that the \block{Filter} and \block{Coeff.} blocks of J-DSP allow us to specify a filter in terms of its \emph{coefficients}, but we can also define it in terms of its \emph{transfer function}. Considering the $b_k$ coefficients of the feed forward filter, the \block{Filter} block implements the transfer function: \begin{equation} H(z) = \sum_{k=0}^{9} b_k z^{-k} \end{equation} In previous labs we have computed the transfer function using the delays of the \emph{defining function}. Mathematically, we were actually taking the z-transform of the \emph{impulse response}! In this example, the impulse response is: \begin{equation} h[n] = \sum_{k=0}^{9} b_k \delta[k] \end{equation} where $\delta[k]$ is the unit impulse and only has value at $k$. $H(z)$ and $h[n]$ form a z-transform pair, $h[n]\xleftrightarrow{z} H(z)$. It should now be obvious why feedforward filters are also known as finite impulse response filters -- their impulse response only has a \emph{finite} number of values. To compute the output, $y[n]$, using the impulse response we use \emph{convolution}. Namely, we \emph{convolve} the input, $x[n]$, by the impulse response, $h[n]$, \begin{equation} y[n] = x[n] \ast h[n] = \sum_{k=0}^{9}x[k]h[n-k] \end{equation} Alternatively, we can compute a filter's output by multiplying the transfer function by the z-transform of the input to yield the z-transform of the output: \begin{equation} Y(z) = H(z) X(z) \end{equation} From a practical point of view, of course, it makes more sense to implement a filter in terms of its impulse response. However, for filters with long impulse responses, it is sometimes more convenient to represent them mathematically using the transfer function (which we now know is just the z-transform of the impulse response!). So, in effect, the J-DSP blocks allow us to invert the z-transform of various signals. \subsection{Z-Transforms} \paragraph{Step 1.1} On paper, compute the z-transform, $X(z)$, of \begin{equation} x[n] = \left\{ \begin{array}{ll} (-1)^n & n \ge 0 \\ 0 & n < 0 \end{array}\right. \end{equation} Note that this is an infinite geometric series. % Hand in hard copy of your work. What are the locations of any pole(s) (roots of the denominator polynomial) or zero(s) (roots of the numerator polynomial)? \paragraph{Step 1.2} Evaluate the frequency response of $X(z)$ from step~1.1, $X(z)\big{|}_{z=e^{j\hat{\omega}}}$. % Hand in hard copy of your work. What kind of filter is this? \paragraph{Step 1.3} Consider the z-transform: \begin{equation} X(z) = 1 - 2z^{-1} + 3z^{-3} - z^{-5} \end{equation} Write the inverse z-transform, $x[n]$, as a table of values for corresponding $n$ values. 
\subsection{Impulse Response} \paragraph{Step 2.1} Consider a filter with a transfer function \begin{equation} H(z) = 1 + 5z^{-1} - 3z^{-2} + 2.5z^{-3} + 4z^{-8} \end{equation} What is the defining equation for this filter, $y[n] = F(x[n])$? \paragraph{Step 2.2} What is the output sequence of the filter of Step~2.1 when the input is $x[n] = \delta[n]$? \paragraph{Step 2.3} The impulse response of a filter is $h[n] = x[n] + 2x[n-1] + x[n-2] - x[n-3]$, or equivalently, $h[n]=\{1,2,1,-1\}$, $n=\{0, 1, 2, 3\}$. Determine the response of the system to the input signal $x[n]=\{1,2,3,1\}$, $n=\{0, 1,2,3\}$. Use J-DSP to check your results. Include an image of the J-DSP block diagram with plots of the \block{Sig Gen} signal and the filter output in your report. Note that the \block{Sig Gen} block has a \emph{user-defined} \option{signal} option; use \button{reset} to reset all the signal values to zero before entering your own. Additionally, you can view the values in the \block{Plot} block. \paragraph{Step 2.4} Change the input to the filter of Step~2.3 to be $\delta[n]$, using the \option{signal} set to ``Delta''. What are the output values? How do they compare to the impulse response? Include a plot of the filter output values in your report. \begin{figure}[t] \centering \includegraphics[width=4in]{lab6/threeinputproblem} \caption{An example diagram for use in problem 2.5} \label{fg:threeinput} \end{figure} \paragraph{Step 2.5} Use J-DSP to determine the output of the filter \{1/3,1/3,1/3\}, $n=\{0,1,2\}$ for the input: \begin{equation} x[n] = 4 + \sin[0.25\pi(n-1)] - 3 \sin[(2\pi/3)n] \end{equation} An example diagram can be seen in Figure \ref{fg:threeinput}. Include an image of the J-DSP block diagram with plots of the filter input and output in your report. Is the result expected? Why or why not? \paragraph{Step 2.6} Create your own \block{UserDefinedFun} block to implement convolution in J-DSP. To test your function, make sure it works exactly like the \block{Filter} block in J-DSP. Use the diagram in Figure \ref{fg:userdefined} to plot the difference between your function output and the \block{Filter} block output. Use the filter \{1/3,1/3,1/3\}, $n=\{0,1,2\}$ from step 2.5. Note: refer to section \ref{sec:example} for how to use the \block{UserDefinedFun} block in J-DSP. \begin{figure}[t] \begin{center} \includegraphics[width=4in]{lab6/userdefinedtest} \caption{A diagram to plot the difference between \block{UserDefinedFun} function output and the \block{Filter} block output for use in step 2.6.} \label{fg:userdefined} \end{center} \end{figure} \subsection{Canceling Sinusoidal Components} Filters can be designed to cancel sinusoids. Construct a filter in J-DSP with the following impulse response: \begin{equation} h[n] = \delta[n] - 2\cos(\pi/4) \delta[n-1] + \delta[n-2] \end{equation} \paragraph{Step 3.1} Plot the frequency response for this filter. What are the zero locations? \paragraph{Step 3.2} Use as an input to this filter the signal $x[n] = \sin\hat{\omega}n$, using the two frequencies $\hat{\omega} = \pi/2$ and $\hat{\omega} = \pi/4$. Simulate the filter in J-DSP for each of these two inputs. Plot the filter output for each. When do we get cancellation? \paragraph{Step 3.3} Can you modify the filter coefficients to cancel the other sinusoid? If so, show your work. \subsection{Example User Defined J-DSP Function}\label{sec:example} Open J-DSP and navigate to the \option{Advanced} function set. There is one block called \block{UserDefinedFun}. 
Place it now and double click to open the dialog. You will see a dialog with example text for a prototype java class. In particular the class will look like: \begin{lstlisting} public void myCode(double[]x1,double[]x2, double[]y1, double[]y2, double[]b1,double[]a1, double[]b2, double[]a2, double para1, double para2, double para3) { // /\(3) // -------------------- // (0)<| |>(4) // | BLOCK | // (1)<| |>(5) // -------------------- // \/(2) // x1, x2 - input at pin 0 and pin1 // y1, y2 - output at pin 4 and pin5 // b1 - FIR Coefficients of the filter at input pin 2, // a1 - IIR Coefficients of the filter at input pin 2 // b2 - FIR Coefficients of the filter at output pin 3, // a2 - IIR Coefficients of the filter at output pin 3 // Para1, Para2 and Para3 are the variables that could be used // in the code and can be controlled from the block Dialog. // Paste your code here // compile the file // upload the .class file } \end{lstlisting} You can copy this code as a prototype and create your own signal processing algorithms! For now, you will mainly deal with the $x$ and $y$ arrays. Each of the arrays is of length 256. You can access this like a any regular array in java, {\it x1.length=256}. We also have access to four variable length arrays, {\it a1 a2 b1 b2} which contain any filter coefficients as inputs or outputs to the function. Lets start by creating our own .java file. Copy the example code on the next page and save it as a .java file using your favorite java text editor or plain text editor. The example code creates a weighted sum of the entries in each index of {\it x1}. Namely, it goes through each value of {\it x1} and uses the {\it b1} filter coefficients to weight and sum the value into {\it y1}. Save the file as ``MyFunction1.java'' and then compile the file using your favorite compiler tool for java (javac from the command line works fine for linux and mac or The Java SE Development Kit 6 (JDK 6) can be used in windows). This creates ``MyFunction1.class.'' After compiling, remember where you saved the ``MyFunction1.class'' compiled file. You will need to tell J-DSP where the file is located. Go back to the J-DSP block diagram and press \button{open}. Navigate to the ``MyFunction1.class'' compiled file and press \option{open}. That's it! J-DSP will automatically use the function when you attach inputs and outputs. To test the function we just made you can connect a \block{SigGen} block to the top left pin and a \block{Coeff} pin to the bottom as shown on page \pageref{fg:userdefexample}. You should see a weighted sum of the input when you plot the output. You should be able to change the output by changing the weighting coefficients from \block{Coeff}. In the example on page \pageref{fg:userdefexample} we are taking a weighted sum (using the weights in {\it b1}) of each coefficient in {\it x1} three times. You are now ready to start developing your own functions in J-DSP! 
\begin{lstlisting} public class MyFunction1 { public void myCode(double[]x1,double[]x2, double[]y1, double[]y2, double[]b1,double[]a1, double[]b2, double[]a2, double para1, double para2, double para3) { // /\(3) // -------------------- // (0)<| |>(4) // | BLOCK | // (1)<| |>(5) // -------------------- // \/(2) // x1, x2 - input at pin 0 and pin1 // y1, y2 - output at pin 4 and pin5 // b1 - FIR Coefficients of the filter at input pin 2, // a1 - IIR Coefficients of the filter at input pin 2 // b2 - FIR Coefficients of the filter at output pin 3, // a2 - IIR Coefficients of the filter at output pin 3 // Para1, Para2 and Para3 are the variables that could be used // in the code and can be controlled from the block Dialog. for( int i = 0 ; i < 256 ; i++) { y2[i] = 0; for(int j=0 ; j < b1.length ; j++) { y2[i] += x1[i]*b1[j]; } } // Paste your code here // compile the file // upload the .class file } } \end{lstlisting} \begin{figure}[h] \begin{center} \includegraphics[width=6in]{lab6/function1examplewithplot} \caption{The result of using the weighted sum example code in J-DSP.} \end{center} \label{fg:userdefexample} \end{figure} \subsection{Bonus: More $z$-Transforms and Convolutions} The following properties of the $z$-transform may be useful here: \begin{center} \begin{tabular}{|l|c|c|} \hline Property & Time Domain, $Z^{-1}\{\cdot\}$ & z-Domain, $Z\{\cdot\}$ \\ \hline\hline Linearity & $a_1x[n]+a_2y[n]$ & $a_1X(z)+a_2Y(z)$\\ Time shift & $x[n-k]$ & $z^{-k}X(z)$\\ Scaling in the z-domain & $a^nx[n]$ & $X(a^{-1}z)$ \\ Time reversal & $x[-n]$ & $X(z^{-1})$\\ Differentiation in the z-domain & $nx[n]$ & $-z \deriv{X(z)}{z}$ \\ Convolution & $x[n] \ast y[n]$ & $X(z)Y(z)$ \\ \hline \end{tabular} \end{center} \paragraph{$z$-Transform of arbitrary sequences} Give the $z$-transform of the following sequences, $x[n]$ (assume $x[n]=0$ for all $n$ not stated): \begin{enumerate} \item $x[n]=\{2, 4, 6, 4, 2\}$, $n = \{0, 1, 2, 3, 4\}$ \item $x[n] = \delta[n]$ \item $x[n] = \delta[n-1]$ \item $x[n] = 2\delta[n] - 3\delta[n-1] +4\delta[n-3]$ \item $x[n] = 2\delta[n] + 4\delta[n-1] + 6\delta[n-2] + 4\delta[n-3] + 2\delta[n-4]$ \end{enumerate} \paragraph{Inverse $z$-transforms} Give the inverse $z$-transform of $H(z) = 1 + 5z^{-1} - 3z^{-2} + 2.5z^{-3} +4z^{-8}$. \paragraph{Transfer function \& impulse response} Give the transfer function and impulse response for a filter with zeros at $(r, \hat{\omega}) = \{(1, 0), (1, \pm \pi/2), (0.9, \pm \pi/3)\}$. % LocalWords: WebQ MATLAB DSP
{ "alphanum_fraction": 0.6584278156, "avg_line_length": 38.7976539589, "ext": "tex", "hexsha": "b326fc7f058b8a09b815e5b9f0a4392a1def17bb", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "stiber/Signal-Computing", "max_forks_repo_path": "J-DSP Labs/lab6/lab6.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_issues_repo_issues_event_max_datetime": "2021-01-29T17:19:16.000Z", "max_issues_repo_issues_event_min_datetime": "2015-08-18T18:16:55.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "stiber/Signal-Computing", "max_issues_repo_path": "J-DSP Labs/lab6/lab6.tex", "max_line_length": 136, "max_stars_count": 11, "max_stars_repo_head_hexsha": "cb5c7825e0cc80ca2ecd3e324fcf6231c320a721", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "stiber/Signal-Computing", "max_stars_repo_path": "J-DSP Labs/lab6/lab6.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-16T15:48:26.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-10T16:54:45.000Z", "num_tokens": 4032, "size": 13230 }
\documentclass{tufte-handout} \usepackage{amsmath} \usepackage[utf8]{inputenc} \usepackage{mathpazo} \usepackage{booktabs} \usepackage{microtype} \usepackage{tikz} \usepackage{pgfplots} \pgfplotsset{width=9cm,compat=1.13} \title{Foursum Report} \author{Alice Cooper and Bob Marley} \begin{document} \maketitle \thispagestyle{empty} \section{Exhaustive search} \sidenote{Complete the report by filling in your names, filling in the parts marked $[\ldots]$ and changing other parts wherever necessary. (For instance, the numbers is the tables are nonsense right now.) Remove the sidenotes in your final hand-in.} Our program [\texttt{Simple.java} / \texttt{simple.py}]\sidenote{Choose one.} solve the Four-sum problem using four nested loops. The index variables $i$, $j$, $k$, $l$ run from [$\ldots$]\sidenote{Explain. Do you start in $0$? One sentence.} Thus, we can bound the number of array accesses by $\sim\frac{1}{10}(\cos N)\binom{N}{5}$.\sidenote{Replace by an expression that corresponds to your code.} \subsection{Experiments} The following table summarises the empirical performance data on the input files in the \texttt{data} directory. We have run each file once, and report the minimum, maximum, and average running time over the files for each input size. \bigskip\noindent { \small \begin{tabular}{cccccccc} \toprule & \multicolumn{3}{c}{Java} & $\quad$ & \multicolumn{3}{c}{Python} \\\cmidrule{2-4} \cmidrule{6-8} $N$ & $\min$ & $\max$ & avg. & & $\min$ & $\max$ & avg. \\\midrule $100 $ & 1 sec & 5 days & 23.5 hours \\ $[\ldots]$ \\ \bottomrule \end{tabular} } \medskip We can plot the maximum running time as a function of input size, as a standard plot and as a log--log plot.\sidenote{Draw both graphs, as two separate figures. Use any software you want, or draw it hand.} In the latter, note that the points fall nicely on a line of slope [$\dots$]\sidenote{replace by an integer}. 
\medskip \begin{tikzpicture} \begin{axis}[ title={Running time for search, one planted quadruple}, xlabel={$N$}, xmode = log, log ticks with fixed point, ymode = log, ylabel={Time (s)}, xmin=30, xmax=3200, ymin=.03, ymax=8, xtick={30,50,100,200,400, 800, 1600,3200}, %ytick={0,40,80,160}, ytick={.05,.1,.5,1,2}, legend style={at={(1.1,0)}, anchor=south west}, % legend pos=north east, %north west, %ymajorgrids=true, %grid style=dashed, ] \addplot[color=blue, mark=x,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1Java.table}; % coordinates { (100,32)(200,37.8)(400,44.6)(800,61.8)(1600,83.8)(3200,114) }; \addplot[ color=red, mark=*,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1PythSimp.table}; % coordinates { (100,132)(200,72.8)(400,144.6)(800,161.8)(1600,133.8)(3200,224) }; \addplot[ color=green, mark=o,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1JavaHash.table}; \addplot[ color=brown, mark=x,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1PythDict.table}; % \addplot[color=yellow] expression[domain=8:3200] {.00009*x+.08}; % \addplot[color=yellow] expression[domain=8:3200] {.0009*x}; % \addplot[color=yellow] expression[domain=8:3200] {.0002*x*ln(x)}; \addplot[color=yellow] expression[domain=8:3200] {.0000000041*x*x*x*x}; % \addplot[color=yellow] expression[domain=8:3200] {.00000000031*x*x*x*x}; \addplot[color=yellow] expression[domain=8:3200] {.00000038*x*x}; \legend{Java, Python, Fast Java, Fast Python} \end{axis} \end{tikzpicture} \begin{tikzpicture} \begin{axis}[ title={Running time for exhaustive search}, xlabel={$N$}, xmode = log, log ticks with fixed point, ymode = log, ylabel={Time (s)}, xmin=30, xmax=3200, ymin=.03, ymax=8, xtick={30,50,100,200,400, 800, 1600,3200}, %ytick={0,40,80,160}, ytick={.05,.1,.5,1,2}, legend style={at={(1.1,0)}, anchor=south west}, % legend pos=north east, %north west, %ymajorgrids=true, %grid style=dashed, ] \addplot[color=blue, mark=x,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/TripleJava.table}; % coordinates { (100,32)(200,37.8)(400,44.6)(800,61.8)(1600,83.8)(3200,114) }; \addplot[ color=red, mark=*,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/TriplePythSimp.table}; % coordinates { (100,132)(200,72.8)(400,144.6)(800,161.8)(1600,133.8)(3200,224) }; \addplot[ color=green, mark=o,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/TripleJavaHash.table}; \addplot[ color=brown, mark=x,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/TriplePythDict.table}; % \addplot[color=yellow] expression[domain=8:3200] {.00009*x+.08}; \addplot[color=yellow] expression[domain=8:3200] {.02*exp(ln(x)/3)}; \addplot[color=yellow] expression[domain=8:3200] {.00000018*x*x}; \addplot[color=yellow] expression[domain=8:3200] {.0000000081*x*x*x*x}; \addplot[color=yellow] expression[domain=8:3200] {.000000000031*x*x*x*x}; \legend{Java, Python, Fast Java, Fast Python} \end{axis} \end{tikzpicture} \begin{tikzpicture} \begin{axis}[ title={Running time for simple search}, xlabel={$N$}, xmode = log, log ticks with fixed point, ymode = log, error bars/y dir=both, ylabel={Time (s)}, xmin=30, xmax=850, ymin=.04, ymax=1.6, xtick={30,50,100,200,400, 800, 1600,3200}, 
%ytick={0,40,80,160}, ytick={.05,.1,.2,.5,1,2}, legend pos=north west, %ymajorgrids=true, %grid style=dashed, ] \addplot[ color=blue, mark=x, error bars/.cd,y dir=both,y explicit] table [x index=0, y index=1, y error index=2] {Tables_done/TripleJava.table}; % coordinates { (100,32)(200,37.8)(400,44.6)(800,61.8)(1600,83.8)(3200,114) }; \addplot[ color=red, mark=*,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed0Java.table}; % coordinates { (100,132)(200,72.8)(400,144.6)(800,161.8)(1600,133.8)(3200,224) }; \addplot[ color=green, mark=o, error bars/.cd,y dir=both,y explicit] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1Java.table}; \addplot[ color=brown, mark=x,error bars/.cd,y dir=both,y explicit ] table [x index=0, y index=1, y error index=2] {Tables_done/Weed1JavaHash.table}; \legend{Triple, Weed 0, Weed 1, Weed1JavaHash} \end{axis} \end{tikzpicture} \subsection{Improvements} Using the binary search-based idea sketeched in [SW, 1.4] for the Three-sum problem, we can improve our running time to $\sim N^3\log N$. \sidenote{Correct as necessary. If instead you’ve figured out how to do it even faster, rewrite the whole sentence so as to explain what you do.} The following table reports our maximum running times on the test inputs. [$\cdots$] The following log--log plot shows these values graphically, note that the points are on a line of slope [$\cdots$]. \end{document}
{ "alphanum_fraction": 0.6670708704, "avg_line_length": 46.9746835443, "ext": "tex", "hexsha": "092a13bee782f2f1688fa9ce973b1b285c9449b8", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-01-21T05:12:49.000Z", "max_forks_repo_forks_event_min_datetime": "2019-01-21T05:12:49.000Z", "max_forks_repo_head_hexsha": "c97c4a61b19c1e75dae5f6c815bf12bbd22e1e7d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NoticeMeDan/algo-exercises", "max_forks_repo_path": "Foursum/Magnus/docs/report-foursum.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c97c4a61b19c1e75dae5f6c815bf12bbd22e1e7d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NoticeMeDan/algo-exercises", "max_issues_repo_path": "Foursum/Magnus/docs/report-foursum.tex", "max_line_length": 283, "max_stars_count": 2, "max_stars_repo_head_hexsha": "c97c4a61b19c1e75dae5f6c815bf12bbd22e1e7d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NoticeMeDan/algo-exercises", "max_stars_repo_path": "Foursum/Magnus/docs/report-foursum.tex", "max_stars_repo_stars_event_max_datetime": "2018-11-08T00:20:09.000Z", "max_stars_repo_stars_event_min_datetime": "2018-02-04T11:27:16.000Z", "num_tokens": 2502, "size": 7422 }
%!TEX root = ../../PhD_thesis__Edouard_Leurent.tex
\graphicspath{{2-Chapters/5-Chapter/}}

\chapter{Complements on \Cref{chapter:5}}

\section{Proofs}
\label{sec:proofs}

\subsection{Proof of \Cref{prop:bellman-expectation}}
\label{sec:proof-bell-expect}

\begin{proof}
Thanks to the introduction of the augmented spaces $\ocS, \ocA$ and dynamics $\augmentedtransition$, this proof is the same as that in classical \glspl{MOMDP}.
\begin{align*}
\oV^{\budgetedpolicy}(\os) &\eqdef \expectedvalue\left[ \augmentedreturn^{\budgetedpolicy} \condbar \ov{s_0} = \os\right] \\
&=\sum_{\oa\in\ocA} \probability{\oa_0 = \oa \condbar\ov{s_0} = \os} \expectedvalue\left[ \augmentedreturn^{\budgetedpolicy} \condbar \ov{s_0} = \os, \oa_0 = \oa\right]\\
&= \sum_{\oa\in\ocA} \budgetedpolicy(\oa | \os) \oQ^{\budgetedpolicy}(\os,\oa).
\end{align*}
\begin{align*}
\oQ^{\budgetedpolicy}(\os, \oa) &\eqdef \expectedvalue\left[\sum_{t=0}^\infty \discountfactor^t \augmentedreward(\os_t, \oa_t)\condbar \ov{s_0} = \os, \ov{a_0} = \oa\right] \\
&= \augmentedreward(\os, \oa) + \sum_{\os'\in\ocS}\probability{\os_1 = \os' \condbar\ov{s_0} = \os, \ov{a_0} = \oa}\cdot \expectedvalue\left[\sum_{t=1}^\infty \discountfactor^t \augmentedreward(\os_t, \oa_t)\condbar \ov{s_1} = \os'\right] \\
&= \augmentedreward(\os, \oa) + \discountfactor\sum_{\os'\in\ocS}\augmentedtransition\left(\os' \condbar\os, \oa\right) \expectedvalue\left[\sum_{t=0}^\infty \discountfactor^t \augmentedreward(\os_t, \oa_t) \condbar \ov{s_0} = \os'\right] \\
&= \augmentedreward(\os, \oa) + \discountfactor\sum_{\os'\in\ocS}\augmentedtransition\left(\os' \condbar\os, \oa\right) \oV^{\budgetedpolicy}(\os').
\end{align*}
\paragraph{Contraction of $\abo^{\budgetedpolicy}$.}
Let $\budgetedpolicy\in\policies, \oQ_1, \oQ_2\in(\Real^2)^{\ocS\ocA}$.
\begin{align*}
\forall \os\in\ocS, \oa\in\ocA,\quad \left|\abo^{\budgetedpolicy} \oQ_1(\os,\oa) - \abo^{\budgetedpolicy} \oQ_2(\os,\oa)\right| &= \left|\discountfactor\expectedvalueover{\substack{\os'\sim\augmentedtransition(\os'|\os,\oa) \\ \oa'\sim\budgetedpolicy(\oa'|\os')}} \oQ_1(\os',\oa') - \oQ_2(\os',\oa')\right|\\
&\leq \discountfactor\left\|\oQ_1-\oQ_2\right\|_\infty.
\end{align*}
Hence, $\left\|\abo^{\budgetedpolicy} \oQ_1 - \abo^{\budgetedpolicy} \oQ_2 \right\|_\infty \leq \discountfactor\left\|\oQ_1-\oQ_2\right\|_\infty$. According to the Banach fixed point theorem \citep{Banach1922}, $\abo^{\budgetedpolicy}$ admits a unique fixed point. It can be easily verified that $\oQ^{\budgetedpolicy}$ is indeed this fixed point by combining the two Bellman Expectation equations~\eqref{eq:bellman_expectation}.
\end{proof}

\subsection{Proof of \Cref{thm:bellman-optimality}}
\label{sec:proof-bell-optim}

\begin{proof}
Let $\os\in\ocS, \oa\in\ocA$. For this proof, we consider potentially non-stationary policies $\budgetedpolicy=(\rho, \budgetedpolicy')$, with $\rho\in\cM(\ocA)$, $\budgetedpolicy'\in\cM(\ocA)^\Natural$. The results will apply to the particular case of stationary optimal policies, when they exist.
\begin{align} \Qr[^\star](\os, \oa) &= \max_{\rho, \budgetedpolicy'} \Qr[^{\rho, \budgetedpolicy'}](\os', \oa') \label{eq:pthm_def}\\ &= \max_{\rho, \budgetedpolicy'} \reward(s, a) + \discountfactor \sum_{\os'\in\ocS} \augmentedtransition(\os' | \os, \oa) \Vr[^{\rho, \budgetedpolicy'}](\os') \label{eq:pthm_exp}\\ &= \reward(s, a) + \discountfactor \sum_{\os'\in\ocS} \augmentedtransition(\os' | \os, \oa) \max_{\rho, \budgetedpolicy'} \sum_{\oa'\in\ocA} \rho(\oa' | \os')\Qr[^{\budgetedpolicy'}](\os', \oa') \label{eq:pthm_marg}\\ &= \reward(s, a) + \discountfactor \sum_{\os'\in\ocS} \augmentedtransition(\os' | \os, \oa) \max_\rho\sum_{\oa'\in\ocA}\rho(\oa' | \os')\max_{\budgetedpolicy'\in\policies_a(\os')}\Qr[^{\budgetedpolicy'}](\os', \oa') \label{eq:pthm_max}\\ &= \reward(s, a) + \discountfactor \sum_{\os'\in\ocS} \augmentedtransition(\os' | \os, \oa) \max_\rho\expectedvalueover{\oa'\sim\rho}\Qr[^\star](\os', \oa') \label{eq:pthm_marg_def2} \end{align} where $\budgetedpolicy = (\rho, \budgetedpolicy')\in\policies_a(\os)$ and $\budgetedpolicy'\in\policies_a(\os')$. This follows from: \begin{enumerate} \item[\eqref{eq:pthm_def}.] Definition of $\oQ^{\star}$. \item[\eqref{eq:pthm_exp}.] Bellman Expectation expansion from \Cref{prop:bellman-expectation}. \item[\eqref{eq:pthm_marg}.] Marginalisation on $\oa'$. \item[\eqref{eq:pthm_max}.] \begin{itemize} \item Trivially $\max_{\budgetedpolicy'\in\policies_a(\os')} \sum_{\oa'\in\ocA} \cdot \leq \sum_{\oa'\in\ocA} \max_{ \budgetedpolicy'\in\policies_a(\os)} \cdot$. \item Let $\ov{\budgetedpolicy}\in\argmax_{\budgetedpolicy'\in\policies_a(\os')} \Qr[^{\budgetedpolicy'}](\os', \oa')$, then: \begin{align*} \sum_{\oa'\in\ocA}\rho(\oa'|\os')\max_{\budgetedpolicy'\in\policies_a(\os')}\Qr[^{\budgetedpolicy'}](\os', \oa') &= \sum_{\oa'\in\ocA}\rho(\oa'|\os')\Qr[^{\budgetedpolicy'}](\os', \oa') \\ &\leq \max_{\budgetedpolicy'\in\policies_a(\os')} \sum_{\oa'\in\ocA}\rho(\oa'|\os')\Qr[^{\budgetedpolicy'}](\os', \oa'). \end{align*} \end{itemize} \item[\eqref{eq:pthm_marg_def2}.] Definition of $\oQ^{\star}$. \end{enumerate} Moreover, the condition $\budgetedpolicy=(\rho, \budgetedpolicy')\in\policies_a(\os)$ gives \begin{equation*} \expectedvalueover{\oa'\sim\rho} \Qc[^{\star}](\os, \oa) = \expectedvalueover{\oa'\sim\rho} \Qc[^{\budgetedpolicy'}](\os, \oa) = \Vc[^{\budgetedpolicy}](\os) \leq \budget. \end{equation*} Consequently, $\budgetedpolicy_\text{greedy}(\cdot; \oQ^{\star})$ belongs to the $\argmax$ of \eqref{eq:pthm_marg_def2}, and in particular: \begin{equation*} \Qr[^{\star}](\os, \oa) = r(\os, \oa) + \discountfactor \sum_{\os'\in\ocS} P(\os' | \os, \oa) \expectedvalueover{\oa'\sim\budgetedpolicy_\text{greedy}(\os', \oQ^{\star})} \Qr[^{\star}](\os', \oa'). \end{equation*} The same reasoning can be made for $\Qc[^\star]$ by replacing $\max$ operators by $\min$, and $\policies_a$ by $\policies_r$. \end{proof} \subsection{Proof of \Cref{prop:greedy_optimal}} \label{sec:proof-greedy-optim} \begin{proof} Notice from the definitions of $\abo^{\star}$ and $\abo^{\budgetedpolicy}$ in \eqref{eq:bellman-optimality} and \eqref{eq:bellman_expectation_operator} that $\abo^{\star}$ and $\abo^{\budgetedpolicy_\text{greedy}(\cdot;\oQ^{\star})}$ coincide on $\oQ^{\star}$. Moreover, since $\oQ^{\star} = \abo^{\star}\oQ^{\star}$ by \Cref{thm:bellman-optimality}, we have: $\abo^{\budgetedpolicy_\text{greedy}(\cdot;\oQ^{\star})} \oQ^{\star} = \abo^{\star} \oQ^{\star} = \oQ^{\star}$. 
Hence, $\oQ^{\star}$ is a fixed point of $\abo^{\budgetedpolicy_\text{greedy}(\cdot;\oQ^{\star})}$, and by \Cref{prop:bellman-expectation} it must be equal to $\oQ^{\budgetedpolicy_\text{greedy}(\cdot;\oQ^{\star})}$.

To show the same result for $\oV^{\star}$, notice that
\begin{equation*}
\oV^{\budgetedpolicy_\text{greedy}(\oQ^{\star})}(\os) = \expectedvalueover{\oa\sim\budgetedpolicy_\text{greedy}(\oQ^{\star})}\oQ^{\budgetedpolicy_\text{greedy}(\oQ^{\star})}(\os,\oa) = \expectedvalueover{\oa\sim\budgetedpolicy_\text{greedy}(\oQ^{\star})}\oQ^{\star}(\os,\oa).
\end{equation*}
By applying the definitions of $\oQ^{\star}$ and $\budgetedpolicy_\text{greedy}$, we recover the definition of $\oV^{\star}$.
\end{proof}

\subsection{Proof of \Cref{thm:contraction}}
\label{sec:proof-contraction}

\begin{proof}
In the trivial case $|\cA| = 1$, there exists only one policy $\budgetedpolicy$ and $\abo = \abo^\budgetedpolicy$, which is a contraction by \Cref{prop:bellman-expectation}.

In the general case $|\cA| \geq 2$, we can build the following counter-example. Let $(\cS, \cA, P, R_r, R_c)$ be a \gls{BMDP}. For any $\epsilon > 0$, we define $\oQ_\epsilon^1$ and $\oQ_\epsilon^2$ as
\begin{align*}
\oQ_\epsilon^1(\os,\oa) = \begin{cases} (0, 0), & \text{if } a = a_0 \\ \left(\frac{1}{\discount}, \epsilon\right), & \text{if } a \neq a_0 \end{cases}\\
\oQ_\epsilon^2(\os,\oa) = \begin{cases} (0, \epsilon), & \text{if } a = a_0 \\ \left(\frac{1}{\discount}, 2\epsilon\right), & \text{if } a \neq a_0 \end{cases}
\end{align*}
Then, $\|\oQ_\epsilon^1-\oQ_\epsilon^2\|_\infty = \epsilon$. $\oQ_\epsilon^1$ and $\oQ_\epsilon^2$ are represented in \Cref{fig:concavity_example}.
\begin{figure}[tp]
\centering
\includegraphics[width=0.4\textwidth]{img/concavity_example.pdf}
\caption{Representation of $\oQ_\epsilon^1$ (blue) and $\oQ_\epsilon^2$ (yellow)}
\label{fig:concavity_example}
\end{figure}
But for $\oa=(a,\budgetaction)$ with $\budgetaction = \epsilon$, we have
\begin{align*}
\|\abo \oQ_\epsilon^1(\os, \oa) - \abo \oQ_\epsilon^2(\os, \oa)\|_\infty &= \discount\left\|\expectedvalueover{\os'\sim\augmentedtransition(\os'|\os,\oa)} \expectedvalueover{\oa'\sim\budgetedpolicy_\text{greedy}(\oQ^1_\epsilon)}\oQ^1_\epsilon(\os',\oa') - \expectedvalueover{\oa'\sim\budgetedpolicy_\text{greedy}(\oQ^2_\epsilon)}\oQ^2_\epsilon(\os',\oa')\right\|_\infty \\
&= \discount\left\|\expectedvalueover{\os'\sim\augmentedtransition(\os'|\os,\oa)}\left(\frac{1}{\discount}, \epsilon\right) - (0, \epsilon)\right\|_\infty \\
&= \discount\frac{1}{\discount} = 1
\end{align*}
Hence,
\begin{align*}
\|\abo \oQ_\epsilon^1 - \abo \oQ_\epsilon^2\|_\infty &\geq 1 = \frac{1}{\epsilon} \|\oQ_\epsilon^1-\oQ_\epsilon^2\|_\infty
\end{align*}
In particular, there does not exist $L>0$ such that
$$\forall \oQ^1,\oQ^2\in(\Real^2)^{\ocS\ocA}, \|\abo \oQ^1 - \abo \oQ^2\|_\infty \leq L \|\oQ^1 - \oQ^2\|_\infty.$$
In other words, $\abo$ is not a contraction for $\|\cdot\|_\infty$.
\end{proof}

\subsection{Proof of \Cref{thm:contractivity-smooth}}
\label{sec:contraction-with-smooth}

\begin{remark}
\begin{leftbar}[remarkbar]
This proof makes use of insights detailed in the proof of \Cref{prop:bftq_pi_hull} (\Cref{sec:proof_pi_hull}), which we recommend the reader to consult first.
\end{leftbar}
\end{remark}

\begin{proof}
We now study the contractivity of $\abo^{\star}$ when restricted to the functions of $\cL_{\discountfactor}$ defined as follows:
\begin{equation}
\cL_{\discountfactor} = \left\{\begin{array}{cc} \oQ\in(\Real^2)^{\ocS\ocA}\text{ s.t.
}\exists L<\frac{1}{\discountfactor}-1: \forall \os\in\ocS,\oa_1,\oa_2\in\ocA, \\ |\Qr(\os,\oa_1) - \Qr(\os,\oa_2)| \leq L|\Qc(\os,\oa_1) - \Qc(\os,\oa_2)| \end{array}\right\}.
\end{equation}
That is, for every state $\os$, the set $\oQ(\os, \ocA)$ plotted in the $(\Qc,\Qr)$ plane must be the \emph{graph} of an $L$-Lipschitz function, with $L<1/\discountfactor-1$.

We impose such a structure for the following reason: the counter-example presented above prevented contraction because it was a pathological case in which the slope of $\oQ$ can be arbitrarily large. As a consequence, when solving for $\Qr[^\star]$ such that $\Qc[^\star]=\budget$, a vertical slice of a $\|\cdot\|_\infty$ ball around $\oQ_1$ (which must contain $\oQ_2$) can be arbitrarily large as well.

We denote $\text{Ball}(\oQ,R)$ the ball of centre $\oQ$ and radius $R$ for the $\|\cdot\|_\infty$-norm:
\begin{equation*}
\text{Ball}(\oQ,R) = \{\oQ'\in(R^2)^{\ocS\ocA}: \|\oQ-\oQ'\|_\infty \leq R\}.
\end{equation*}

We give the three main steps required to show that $\abo^{\star}$ restricted to $\cL_{\discountfactor}$ is a contraction. Given $\oQ^1, \oQ^2\in\cL_{\discountfactor}$, show that:
\begin{enumerate}
\item $\oQ^2\in\text{Ball}(\oQ^1,R)\implies\cF^2\in\text{Ball}(\cF^1, R), \forall\os\in\ocS$, where $\cF$ is the top frontier of the convex hull of undominated points, as defined in~\Cref{sec:proof_pi_hull}.
\item $\oQ\in\cL_{\discountfactor} \implies \cF$ is the graph of an $L$-Lipschitz function, $\forall\os\in\ocS$.
\item taking the slice $\Qc=\budget$ of a ball $\text{Ball}(\cF,R)$ with $\cF$ $L$-Lipschitz results in an interval on $\Qr$ of range at most $(L+1)R$.
\end{enumerate}
These three steps will allow us to control $\Qr[^{2\star}] - \Qr[^{1\star}]$ as a function of $R = \|\oQ^2-\oQ^1\|_\infty$.

\paragraph{Step 1}
We want to show that if $\oQ^1$ and $\oQ^2$ are close, then $\cF^1$ and $\cF^2$ are close as well in the following sense:
\begin{align}
\cF^2\in\text{Ball}(\cF^1, R) &\iff d(\cF^1, \cF^2) \leq R \iff \max_{q^2\in\cF^2}\min_{q^1\in\cF^1}\|q^2-q^1\|_\infty \leq R. \label{eq:ball-set}
\end{align}
Assume $\oQ^2\in\text{Ball}(\oQ^1,R)$; we show by contradiction that $\cF^2 \in \text{Ball}(\cF^1, R)$. Indeed, assume there exists $q^1\in \cF^1$ such that $\cF^2 \cap \text{Ball}(q^1, R) = \emptyset$. Denote by $q^2$ the unique point of $\cF^2$ such that $q^2_c = q^1_c$. By construction of $q^1$, we know that $\|q^1-q^2\|_\infty > R$. There are two possible cases:
\begin{itemize}
\item $q^2_r > q^1_r$: this also directly implies that $q^2_r > q^1_r + R$. But $q^2\in\cF^2$, so there exist $q^2_1, q^2_2\in Q^{2},\lambda\in\Real$ such that $q^2 = (1-\lambda)q^2_1 + \lambda q^2_2$. But since $\oQ^2\in \text{Ball}(\oQ^1, R)$, there also exist $q_1^1, q^1_2\in \oQ^1$ such that $\|q^1_1-q^2_1\|_\infty \leq R$ and $\|q^1_2-q^2_2\|_\infty \leq R$, and in particular $q^1_{1r}\geq q^2_{1r}-R$ and $q^1_{2r}\geq q^2_{2r}-R$. But then, the point $q^{1'}=(1-\mu)q^1_1 + \mu q^1_2$ with $\mu=(q^2_c-q^1_{1c})/(q^2_{2c}-q^1_{1c})$ verifies $q^{1'}_c = q^1_c$ and $q^{1'}_r \geq q^2_r - R > q^1_r$, which contradicts the definition of $q^1\in\cF^1$ as defined in \eqref{eq:top-frontier}.
\item $q^2_r < q^1_r$: then the same reasoning can be applied by simply swapping the indexes 1 and 2.
\end{itemize}
% We start by showing this result for $\cC^2(Q^{1-})$ and $\cC^2(Q^{2-})$ as defined in \Cref{sec:proof_pi_hull}:
% Let $\os\in\ocS$ and $q^2\in\cC^2(Q^{2-})$, $\exists\lambda\in[0,1], \oa_1,\oa_2\in\ocA: q^2 = (1-\lambda)Q^2(\os,\oa_1) + \lambda Q^2(\os,\oa_2)$.
% Define $q^1 = (1-\lambda)Q^1(\os,\oa_1) + \lambda Q^1(\os,\oa_2)$. Then
% \begin{align*}
% \|q^2-q^1\|_\infty &= \|(1-\lambda)(Q^2(\os,\oa_1) - Q^1(\os,\oa_1)) + \lambda (Q^2(\os,\oa_2) - Q^1(\os,\oa_2))\|_\infty\\
% &\leq (1-\lambda)\|Q^2(\os,\oa_1) - Q^1(\os,\oa_1)\|_\infty + \lambda \|Q^2(\os,\oa_2) - Q^1(\os,\oa_2)\|_\infty\\
% &\leq (1-\lambda)R+\lambda R = R
% \end{align*}
% It remains to show that when taking the top frontiers of the convex sets $\cC^2(Q^{1-})$ and $\cC^2(Q^{2-})$, they remain at a distance of at most $R$.
We have shown that $\cF^2 \in \text{Ball}(\cF^1, R)$. This is illustrated in \Cref{fig:contraction_lips_hull}: given a function $\oQ^1$, we show the locus $\text{Ball}(\oQ^1,R)$ of $\oQ^2$. We then draw $\cF^1$, the top frontier of the convex hull of $\oQ^1$, alongside the locus of all possible $\cF^2$, which belong to a ball $\text{Ball}(\cF^1, R)$.
\begin{figure}[ht]
\centering
\includegraphics[trim=7cm 4cm 7cm 4cm, clip, width=0.7\textwidth]{img/contraction_lipschitz.pdf}
\caption{We represent the range of possible solutions $\Qr[^{2\star}]$ for any $\oQ^2\in\text{Ball}(\oQ^1, R)$, given $\oQ^1\in\cL_{\discountfactor}$.}
\label{fig:contraction_lips_hull}
\end{figure}
\paragraph{Step 2}
We want to show that if $\oQ\in\cL_{\discountfactor}$, $\cF$ is the graph of an $L$-Lipschitz function:
\begin{equation} \label{eq:L-lip-set} \forall q^1,q^2\in\cF, |q_r^2-q_r^1| \leq L|q_c^2-q_c^1|. \end{equation}
Let $\oQ\in\cL_{\discountfactor}$ and $\os\in\ocS$, $\cF$ the corresponding top frontier of the convex hull. For all $q^1,q^2\in\cF, \exists \lambda,\mu\in[0,1], q^{11},q^{12},q^{21},q^{22}\in \oQ(\os,\ocA)$ such that $q^1 = (1-\lambda)q^{11} + \lambda q^{12}$ and $q^2 = (1-\mu)q^{21} + \mu q^{22}$. Without loss of generality, we can assume $q_c^{11}\leq q_c^{12}$ and $q_c^{21}\leq q_c^{22}$. We also consider the worst case in terms of maximum $q_r$ deviation: $q_c^{12} \leq q_c^{21}$. Then the maximum increment $q_r^2-q_r^{1}$ is:
\begin{align*} \|q^2_r-q^{1}_r\| &\leq \|q^{12}_r-q^{1}_r\| + \|q^{21}_r-q^{12}_r\| + \|q^{2}_r-q^{21}_r\| \\ &= (1-\lambda)\|q^{12}_r-q^{11}_r\| + \|q^{21}_r-q^{12}_r\| + \mu\|q^{22}_r-q^{21}_r\| \\ &\leq (1-\lambda)L\|q^{12}_c-q^{11}_c\| + L\|q^{21}_c-q^{12}_c\| + \mu L\|q^{22}_c-q^{21}_c\| \\ &= L\|q^{12}_c-q^{1}_c\| + L\|q^{21}_c-q^{12}_c\| + L\|q^{2}_c-q^{21}_c\|\\ &= L\|q^{2}_c-q^{1}_c\|. \end{align*}
This can also be seen in \Cref{fig:contraction_lips_hull}: the maximum slope of $\cF^1$ is lower than the maximum slope between two points of $\oQ^1$.
\paragraph{Step 3}
Let $\cF^1$ be an $L$-Lipschitz set as defined in \eqref{eq:L-lip-set}, and consider a ball $\text{Ball}(\cF^1,R)$ around it as defined in \eqref{eq:ball-set}. We want to bound the optimal reward value $\Qr[^{2\star}]$ under the constraint $\Qc[^{2\star}] = \budget$ (regular case in \Cref{sec:proof_pi_hull} where the constraint is saturated), for any $\cF^2\in\text{Ball}(\cF^1,R)$. This quantity is represented as a red double-ended arrow in \Cref{fig:contraction_lips_hull}. Because we are only interested in what happens locally at $\Qc=\budget$, we can zoom in on \Cref{fig:contraction_lips_hull} and only consider a thin $\epsilon$-section around $\budget$. In the limit $\epsilon\rightarrow 0$, this section becomes the tangent to $\cF^1$ at $\Qc[^1]=\budget$.
It is represented in \Cref{fig:contraction_lips_hull_slope}, from which we derive a geometrical proof: \begin{figure}[ht] \centering \includegraphics[trim=2cm 1cm 2cm 1cm, clip, width=0.7\textwidth]{img/contraction_lipschitz_slope.pdf} \caption{We represent a section $[\budget-\epsilon, \budget+\epsilon]$ of $\cF^1$ and $\text{Ball}(\cF^1, R)$. We want to bound the range of $\Qr[^{2\star}].$} \label{fig:contraction_lips_hull_slope} \end{figure} \begin{align*} \Delta \Qr[^{2\star}] &= b + c &\\ & \leq La + c & \text{($\cF^1$ $L$-Lipschitz)}\\ &= 2LR+2R = 2R(L+1). \end{align*} Hence, \begin{equation*} | \Qr[^{2\star}] - \Qr[^{1\star}]| \leq \frac{\Delta \Qr[^{2\star}]}{2} = R(L+1) \end{equation*} and $\Qc[^{1\star}] = \Qc[^{2\star}] = \budget$. Consequently, $ \|\oQ[^{2\star}] - \oQ[^{1\star}]\|_\infty \leq (L+1)R$. Finally, consider the edge case in \Cref{sec:proof_pi_hull}: the constraint is not active, and the optimal value is simply $\argmax_{q\in\cF} q^r$. In particular, since we showed that $\cF^2\in \text{Ball}(\cF^1, R)$, and since $\oQ[^{2\star}]\in \cF^2$, there exist $q^1\in \cF^1: \|\oQ[^{2\star}]-q^1\|_\infty\leq R$ and in particular $\oQ[^{1\star}]_r \geq q^1_r \geq \oQ[^{2\star}]_r - R$. Reciprocally, by the same reasoning, $\Qr[^{2\star}] \geq \Qr[^{1\star}] - R$. Hence, we have that $| \Qr[^{2\star}] - \Qr[^{1\star}]| \leq R \leq R(L+1).$ \paragraph{Wrapping it up} We have shown that for any $\oQ^1,\oQ^2\in\cL_{\discountfactor}$, and all $\os\in\ocS$, $\cF^2\in\text{Ball}(\cF^1,\|\oQ^2-\oQ^1\|_\infty)$ and $\cF^1$ is the graph of a $L$-Lipschitz function with $L<1/\discountfactor - 1$. Moreover, the solutions of $\budgetedpolicy_\text{greedy}(\oQ^1)$ and $\budgetedpolicy_\text{greedy}(\oQ^2)$ at $\os$ are such that $ \|\oQ[^{2\star}] - \oQ[^{1\star}]\|_\infty \leq (L+1)\|\oQ^2-\oQ^1\|_\infty$. Hence, for all $\oa$, \begin{align*} \|\abo^{\star}\oQ^1(\os, \oa) - &\abo^{\star}\oQ^2(\os, \oa)\|_\infty \\ &= \discountfactor\left\|\expectedvalueover{\os'\sim\augmentedtransition(\os'|\os,\oa)} \expectedvalueover{\oa'\sim\budgetedpolicy_\text{greedy}(\oQ^1)}\oQ^1(\os',\oa') - \expectedvalueover{\oa'\sim\budgetedpolicy_\text{greedy}(\oQ^2)}\oQ^2(\os',\oa')\right\|_\infty \\ &= \discountfactor\left\|\oQ[^{2\star}] - \oQ[^{1\star}]\right\|_\infty \\ &\leq \discountfactor(L+1)\|\oQ^2-\oQ^1\|_\infty. \end{align*} Taking the sup on $\ocS\ocA$, \begin{equation*} \|\abo^{\star}\oQ^1 - \abo^{\star}\oQ^2\|_\infty \leq \discountfactor(L+1)\|\oQ^1-\oQ^2\|_\infty \end{equation*} with $\discountfactor(L+1) < 1$. As a conclusion, $\abo^{\star}$ is a $\discountfactor(L+1)$-contraction on $\cL_{\discountfactor}$. \end{proof} %\subsection{\Cref{lemma:concavity}} %\begin{proof}. Let $s,s'\in\cS, a\in\cA$. %We first prove those results for $V_r^{\star}(s', \cdot)$ %\textbf{Non-decreasing} %Consider $\beta_a^1 \leq \beta_a^2 \in \cB$. %Any policy that satisfies the budget $\beta_a^1$ in $s'$ also satisfies $\beta_a^2$, so $\Pi_c(s', \beta_a^1) \subset \Pi_c(s', \beta_a^2)$. Hence, by taking the max over policies, $V_r^{\star}(s', \beta_a^1) \leq V_r^{\star}(s', \beta_a^2)$. %Hence, $V_r^{\star}(s', \cdot)$ is non-decreasing. %\textbf{Concave} %By contradiction: assume that $V_r^{\star}(s', \cdot)$ is not concave, \ie there exist $\beta^1 < \beta^2\in \cB$ and $p\in(0, 1)$ such that $\beta^3 = (1-p)\beta^1 + p\beta^2$ verifies: $V_r^{\star}(s', \beta^3) < (1-p)V_r^{\star}(s', \beta^1) + pV_r^{\star}(s',\beta^2)$. 
By definition of $V^{\star}$, there must be $\pi_1,\pi_2\in\Pi^{\star}$ such that $V^{\star}(s', \beta^1) = V^{\pi_1}(s', \beta^1)$ and $V^{\star}(s', \beta^2) = V^{\pi_2}(s', \beta^2)$. %Define $\pi = (1-p)(\pi_1(\cdot, \beta^1), \pi_1) + p(\pi_2(\cdot, \beta^2), \pi_2)$. By linearity of $V^\pi$ with respect to $\pi$, we have that $V_c^\pi(s', \beta^3) = (1-p)V_c^{\pi_1}(s', \beta^1) + pV_c^{\pi_2}(s', \beta^2) \leq (1-p)\beta^1 + p\beta^2 = \beta^3$ since $\pi_1, \pi_2\in\Pi^{\star}(s')\subset\Pi_a(s')$, so $\pi$ respects the budget $\beta^3$. Moreover, we also have $V_r^\pi(s', \beta^3) = (1-p)V_r^{\pi_1}(s', \beta^1) + pV_r^{\pi_2}(s', \beta^2) > V_r^{\star}(s', \beta^3)$, which contradicts the definition of $V_r^{\star}$. %Consequently, $V_r^{\star}(s', \cdot)$ is non-decreasing and concave. By \eqref{eq:bellman_expectation_Q} we see that $Q_r^{\star}(s,a,\cdot) = R_r(s,a) + \discount\expectedvalueover{s'}V_r^{\star}(s', \cdot)$ is too. %\end{proof} %\subsection{\Cref{lemma:tau_concavity}} %\subsection{\Cref{lemma:pi_hull}} %\td %\begin{proof} %If the estimates $q^c_0, q^c_1$ are accurate, then by construction and linearity of the expectation, the returned mixture policy has an expected total cost of $\expectedvalueover{a, \beta_a \sim\pi_\text{greedy}}Q_c(s, a, \beta_a) = \beta$ as desired in \eqref{eq:pi_greedy_constraint}. Because the $Q_r(s,a,\cdot)$ is concave and under its tangents, this mixture must have the largest $Q_r$ possible as required in \eqref{eq:pi_greedy_reward}. The special case of a tie $q_r^0 = q_r^1$ is considered, where we do minimise $Q_c$ as required in \eqref{eq:pi_greedy_cost}. %\end{proof} \subsection{Proof of \Cref{prop:bftq_pi_hull}} \label{sec:proof_pi_hull} \begin{definition} \begin{leftbar}[defnbar] Let $A$ be a set, and $f$ a function defined on $A$. We define \begin{itemize} \item the convex hull of $A$: $\cC(A) = \{\sum_{i=1}^p \lambda_i a_i: a_i\in A, \lambda_i\in\Real^+, \sum_{i=1}^p \lambda_i = 1, p\in\Natural\}$; \item the convex edges of $A$: $\cC^2(A) = \{\lambda a_1 + (1-\lambda)a_2: a_1, a_2\in A, \lambda\in[0, 1]\}$; \item Dirac distributions of $A$: $\dirac(A) = \{\dirac(a-a_0): a_0\in A\}$; \item the image of $A$ by $f$: $f(A) = \{f(a): a\in A\}$. \end{itemize} \end{leftbar} \end{definition} \begin{proof} Let $\os=(s,\budget)\in\ocS$ and $\oQ\in(\Real^2)^{\ocS\ocA}$. We recall the definition of $\budgetedpolicy_\text{greedy}$: \begin{subequations} \begin{align} \budgetedpolicy_\text{greedy}(\oa|\os; \oQ) &\in \argmin_{\rho\in\policies_r^{\oQ}} \expectedvalueover{\oa\sim\rho}\Qc(\os, \oa) \tag{\ref{eq:pi_greedy_cost}}\\ \text{where }\quad\policies_r^{\oQ} = &\argmax_{\rho\in\cM(\ocA)} \expectedvalueover{\oa\sim\rho} \Qr(\os, \oa) \tag{\ref{eq:pi_greedy_reward}}\\ & \text{ s.t. } \expectedvalueover{\oa\sim\rho} \Qc(\os, \oa) \leq \budget \tag{\ref{eq:pi_greedy_constraint}} \end{align} \end{subequations} Note that any policy in the $\argmin$ in \eqref{eq:pi_greedy_cost} is suitable to compute $\abo^{\star}$. We first reduce the set of candidate optimal policies. Consider the problem described in \eqref{eq:pi_greedy_reward},\eqref{eq:pi_greedy_constraint}: it can be seen as a single-step \gls{CMDP} problem with reward $\reward=\Qr$ and cost $\constraint=\Qc$. By \citep[Theorem 4.4][]{Beutler1985}, we know that the solutions are mixtures of two deterministic policies. Hence, we can replace $\cM(\cA)$ by $\cC^2(\dirac(\ocA))$ in \eqref{eq:pi_greedy_reward}. 
Moreover, remark that:
\begin{align*} \{\expectedvalueover{\oa\sim\rho} \oQ(\os,\oa):& \rho\in \cC^2(\dirac(\ocA))\} \\ &= \{\expectedvalueover{\oa\sim\rho} \oQ(\os,\oa): \rho=(1-\lambda)\dirac(\oa-\oa_1)+\lambda\dirac(\oa-\oa_2), \oa_1,\oa_2\in\ocA, \lambda\in[0,1]\} \\ &= \{(1-\lambda)\oQ(\os, \oa_1)+\lambda \oQ(\os, \oa_2), \oa_1,\oa_2\in\ocA, \lambda\in[0,1]\} \\ &= \cC^2(\oQ(\os,\ocA)). \end{align*}
Hence, the problem \eqref{eq:pi_greedy_reward}, \eqref{eq:pi_greedy_constraint} has become:
\begin{equation*} \tilde{\policies}^{\Qr} = \argmax_{(q_r, q_c)\in\cC^2(\oQ(\os, \ocA))} q_r \quad\text{ s.t. }\quad q_c \leq \budget \end{equation*}
and the solution of $\budgetedpolicy_\text{greedy}$ is $q^{\star}=\argmin_{q\in\tilde{\policies}^{\Qr}} q_c$. The original problem in the space of actions $\ocA$ is now expressed in the space of values $\oQ(\os, \ocA)$ (which is why we use $=$ instead of $\in$ before $\argmin$ here).
We further restrict the search space of $q^{\star}$ following two observations:
\begin{enumerate}
\item $q^{\star}$ belongs to the \emph{undominated} points $\cC^2(\oQ^-)$:
\begin{align} \label{eq:q_minus_undominated} \oQ^+ &= \{(q_c, q_r): q_c > q_c^{\pm} = \min_{q^+} q_c^+\text{ s.t. }q^+\in\argmax_{q\in \oQ(\os,\ocA)} q_r\}\\ \oQ^- &= \oQ(\os,\ocA) \setminus \oQ^+. \end{align}
Denote $q^{\star} = (1-\lambda) q^1 + \lambda q^2$, with $q^1, q^2\in \oQ(\os,\ocA)$. There are three possible cases:
\begin{enumerate}
\item $q^1, q^2 \not\in \oQ^-$. Then $q_c^{\star} = (1-\lambda) q^1_c + \lambda q^2_c > q_c^{\pm}$. But then $q_c^{\pm} < q_c^{\star} \leq \budget$ so $q^{\pm}\in\tilde{\policies}^{\Qr}$ with a strictly lower $q_c$ than $q^{\star}$, which contradicts the $\argmin$.
\item $q^1\in \oQ^-, q^2 \not\in \oQ^-$. But then consider the mixture $q^\top = (1-\lambda) q^1 + \lambda q^\pm$. Since $q_r^{\pm} \geq q_r^{2}$ and $q_c^{\pm} < q_c^{2}$, we also have $q^\top_r \geq q_r^{\star}$ and $q^\top_c < q_c^{\star}$, which also contradicts the $\argmin$.
\item $q^1,q^2\in \oQ^-$ is the only remaining possibility.
\end{enumerate}
\item $q^{\star}$ belongs to the \emph{top frontier} $\cF$:
\begin{equation} \label{eq:top-frontier} \cF = \{q\in \cC^2(\oQ^-): \not\exists q'\in \cC^2(\oQ^-): q_c=q_c'\text{ and }q_r<q_r'\}. \end{equation}
Trivially, otherwise $q'$ would be a better candidate than $q^{\star}$.
\end{enumerate}
Let us characterise this frontier $\cF$. It is both:
\begin{enumerate}
\item the \emph{graph of a non-decreasing function}: $\forall q^1, q^2\in\cF$ such that $q_c^1\leq q_c^2$, then $q_r^1\leq q_r^2$.\\ By contradiction, if we had $q_r^1 > q_r^2$, we could define $q^\top = (1-\lambda)q^1 + \lambda q^\pm$ where $q^\pm$ is the dominant point as defined in \eqref{eq:q_minus_undominated}. By choosing $\lambda=(q^2_c-q^1_c)/(q^\pm_c-q^1_c)$ such that $q^\top_c = q_c^2$, then since $q_r^\pm \geq q_r^1 > q_r^2$ we also have $q^\top_r > q_r^2$, which contradicts $q^2\in\cF$.
\item the \emph{graph of a concave function}: $\forall q^1, q^2, q^3\in\cF$ such that $q_c^1\leq q_c^2 \leq q_c^3$ with $\lambda$ such that $q^2_c = (1-\lambda)q^1_c + \lambda q^3_c$, then $q_r^2 \geq (1-\lambda)q_r^1 + \lambda q_r^3$.\\ Trivially, otherwise the point $q^\top = (1-\lambda)q^1 + \lambda q^3$ would verify $q^\top_c=q^2_c$ and $q^\top_r > q^2_r$, which would contradict $q^2 \in\cF$.
\end{enumerate}
We denote $\cF_{\oQ} = \cF \cap \oQ(\os,\ocA)$. Clearly, $q^{\star}\in\cC^2(\cF_{\oQ})$: let $q^1, q^2\in \oQ^-$ such that $q^{\star} = (1-\lambda)q^1 + \lambda q^2$.
First, $q^1, q^2\in \oQ^-\subset\cC^2(\oQ^-)$. Then, by contradiction, if there existed $q^{1'}$ or $q^{2'}$ with equal $q_c$ and strictly higher $q_r$, again we could build an admissible mixture $q^{\top}=(1-\lambda)q^{1'} + \lambda q^{2'}$ strictly better than $q^{\star}$.
$q^{\star}$ can be written as $q^{\star} = (1-\lambda)q^1 + \lambda q^2$ with $q^1, q^2\in\cF_{\oQ}$ and, without loss of generality, $q^1_c \leq q^2_c$.
\paragraph{Regular case}
There exists $q^0\in\cF_{\oQ}$ such that $q^0_c \geq \budget$. Then $q^1$ and $q^2$ must flank the budget: $q_c^1 \leq \budget \leq q_c^2$. Indeed, by contradiction, if $q_c^2 \geq q_c^1 > \budget$ then $q_c^{\star} > \budget$, which contradicts $\policies_r^{\oQ}$. Conversely, if $q_c^1 \leq q_c^2 < \budget$ then $q_c^{\star} < \budget \leq q^0_c$, which would make $q^{\star}$ a worse candidate than $q^\top=(1-\lambda)q^{\star} + \lambda q^0$ when $\lambda$ is chosen such that $q_c^\top=\budget$, and contradict $\policies_r^{\oQ}$ again.
Because $\cF$ is the graph of a non-decreasing function, $\lambda$ should be as high as possible, as long as the budget $q_c^{\star}\leq\budget$ is respected. We reach the highest $q_r^{\star}$ when $q^{\star}_c=\budget$, that is: $\lambda=(\budget-q_c^1)/(q_c^2-q_c^1)$.
It remains to show that $q^1$ and $q^2$ are two successive points in $\cF_{\oQ}$: $\not\exists q\in\cF_{\oQ}\setminus\{q^1, q^2\}: q^1_c \leq q_c \leq q^2_c$. Otherwise, as $\cF$ is the graph of a concave function, we would have $q_r \geq (1-\mu)q_r^1 + \mu q_r^2$. $q_r$ cannot be strictly greater than $(1-\mu)q_r^1 + \mu q_r^2$, which would contradict the optimality of $q^{\star}$, but it can still be equal, which means the three points $q, q^1, q^2$ are aligned. In fact, every point aligned with $q^1$ and $q^2$ can also be used to construct mixtures resulting in $q^{\star}$, but among these solutions we can still choose $q^1$ and $q^2$ as the two points in $\cF_{\oQ}$ closest to $q^{\star}$.
\paragraph{Edge case}
$\forall q\in\cF_{\oQ}, q_c < \budget$. Then $q^{\star} = \argmax_{q\in\cF} q_r = q^\pm = \argmax_{q\in \oQ^-} q_r$.
\end{proof}
%\begin{proof}
%First, a straightforward proof by induction shows that for all $k\in\Natural$, $Q_k$ computed at iteration $k$ of either \Cref{alg:bvi} or \Cref{alg:bftq} is concave non-decreasing with respect to $\beta_a$: the initialisation is trivial from $Q_0 = 0$, and the heredity stems from \Cref{lemma:tau_concavity}.
%\end{proof}
% \subsection{Decomposition Lemma}
% \begin{lemma}
% For any sequence real valued functions $f_1,\ldots,f_n$ and any real number $c$, we have
% \[
% \begin{array}{lcl}
% \underbrace{\max\limits_{\sum_i x_i \leq c}\sum_j f_j(x_j)}_{(a)} & \quad{}=\quad{} & \underbrace{\max\limits_{\sum_i c_i \leq c}\left(\sum_j\max\limits_{x\leq c_j} f_j(x)\right)}_{(b)}\\
% \end{array}
% \]
% \end{lemma}
% \begin{proof}
% Let us first show that $(a)\leq(b)$.
% By definition of the maximum on a set, for any $f_j$ and any $c_j$ we have
% $\max\limits_{x\leq c_j} f_j(x) \geq f_j(c_j)$.
% Hence, by replacing these terms in $(b)$ we get:
% \[
% \begin{array}{lcl}
% \max\limits_{\sum_i c_i \leq c} \sum_j f_j(c_j) & \quad{}\leq\quad{} & \max\limits_{\sum_i c_i\leq c}\left(\sum_j \max\limits_{x_j\leq c_j} f_j(x_j)\right)\\
% \end{array}
% \]
% The left hand side of this inequality is just a rewriting of $(a)$ with different dummy variables names.
% Let us show now that $(a) \geq (b)$.
% Let $\hat{x}_1,\ldots,\hat{x}_n, \hat{c}_1, \ldots \hat{c}_n$ be a realisation (argmax) of $(b)$.
% By definition of $(b)$'s feasible set, we have $\sum_i\hat{c}_i \leq c$ and for any $i$: $\hat{x}_i\leq \hat{c}_i$. % Because $\sum_i\hat{x}_i\leq \sum_i\hat{c}_i \leq c$, the tuple $(\hat{x}_1, \ldots \hat{x}_n)$ is also a feasible value for $(a)$. And, by definition of the maximum on a set: $(a) = \max\limits_{\sum_i x_i \leq c} \sum_j f_j(x_j) \geq \sum_j f_j(\hat{x}_j) = (b)$. % \end{proof} %\section{Risk-Sensitive Exploration} %\label{sec:risk-sensitive-supp} %We recall the Risk-Sensitive Exploration in %\Cref{alg:risk-sensitive-exploration}: %\input{source/risk-sensitive-explo-pseudo-code.tex}
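\paragraph{Illustration of the greedy policy computation}
To make the geometric construction above more concrete, the following short Python sketch computes the undominated points, the top frontier of their convex hull and the two-point mixture that saturates the budget, following the proof of \Cref{prop:bftq_pi_hull}. It is only an illustration written for this appendix, not the reference implementation: the function names, the brute-force frontier test and the tie-breaking are our own choices.
\begin{verbatim}
import itertools

def greedy_budgeted_mixture(q_values, beta):
    """q_values: list of (q_c, q_r) pairs, one per action; beta: budget.
    Returns ((i, j), lam): play action j with prob. lam, action i with 1 - lam."""
    # Undominated points: discard points costlier than the cheapest reward-maximiser.
    top_r = max(r for c, r in q_values)
    c_pm = min(c for c, r in q_values if r == top_r)
    undominated = [(i, (c, r)) for i, (c, r) in enumerate(q_values) if c <= c_pm]

    # Top frontier: undominated points not strictly below any chord between
    # two undominated points (brute force, fine for small action sets).
    def below_some_chord(c, r):
        for (_, (c1, r1)), (_, (c2, r2)) in itertools.combinations(undominated, 2):
            if c1 != c2 and min(c1, c2) <= c <= max(c1, c2):
                lam = (c - c1) / (c2 - c1)
                if (1 - lam) * r1 + lam * r2 > r + 1e-12:
                    return True
        return False

    frontier = sorted((p for p in undominated if not below_some_chord(*p[1])),
                      key=lambda p: p[1][0])
    # Edge case: the whole frontier respects the budget -> take the argmax reward.
    if frontier[-1][1][0] <= beta:
        i = frontier[-1][0]
        return (i, i), 0.0
    # Regular case: mix the two successive frontier points flanking the budget.
    for (i, (c1, _)), (j, (c2, _)) in zip(frontier, frontier[1:]):
        if c1 <= beta <= c2:
            lam = 0.0 if c2 == c1 else (beta - c1) / (c2 - c1)
            return (i, j), lam
    # Budget below every frontier point: fall back to the cheapest point.
    return (frontier[0][0], frontier[0][0]), 0.0
\end{verbatim}
For instance, with $\oQ(\os,\ocA)=\{(0,0),(1,2),(3,3)\}$ (pairs $(\Qc,\Qr)$) and $\budget=2$, the sketch mixes the second and third actions with $\lambda=1/2$, which is exactly the mixture described in the regular case above.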
{ "alphanum_fraction": 0.6296801576, "avg_line_length": 72.9756637168, "ext": "tex", "hexsha": "1a6bbd06b4c4a90b81487466b78011f220634d58", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eleurent/phd-thesis", "max_forks_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eleurent/phd-thesis", "max_issues_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_line_length": 705, "max_stars_count": 8, "max_stars_repo_head_hexsha": "c1464cd3a219715e5218a866a822a34af581aac7", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eleurent/phd-thesis", "max_stars_repo_path": "2-Chapters/5-Chapter/appendix5.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-03T08:19:45.000Z", "max_stars_repo_stars_event_min_datetime": "2020-12-04T13:16:56.000Z", "num_tokens": 12986, "size": 32985 }
\section{The Basics}
\subsection{Exercises}
\subsubsection{Exercise 2.1}
Let $a = nb + k$. If $n \geq 2$, then $\frac{a}{2} \geq b > a \mod b$ since $a \mod b \leq b-1$. For $n = 1$ we have
\begin{align*} k < b \implies \frac{k}{2} < \frac{b}{2} \implies k < \frac{b}{2} + \frac{k}{2} = \frac{a}{2} \end{align*}
Since Euclid's algorithm ``alternates'' between performing divisions on $a$ and $b$ (since $a$ is replaced by $b$ after a step), the number of divisions it performs is bounded by $2\log_2 a$.
\subsubsection{Exercise 2.2}
Every prime factor of $a$, with at most one exception, is bounded by $\sqrt{a}$; any cofactor larger than $\sqrt{a}$ that remains after dividing out the small primes must itself be prime. We can find all primes less than or equal to $\sqrt{a}$ in time polynomial in $a$ by using a prime number sieve approach (e.g. Sieve of Eratosthenes). Once we have the primes, the number of operations it takes to compute the prime factorization of $a$ is bounded by $\sqrt{a} \log_2 a$, since the number of candidate primes is bounded by $\sqrt{a}$ and each prime can divide $a$ at most $\log_2 a$ times (as 2 is the smallest prime). A short computational sketch of this procedure is given at the end of this chapter.
\subsubsection{Exercise 2.3}
Changing logarithm base amounts to multiplying by a constant.
\subsubsection{Exercise 2.4}
We have the following:
\begin{itemize}
\item $4\log_2 n = \log_2 n^4 \implies n^4$
\item $4\sqrt{n} = \sqrt{16n} \implies 16n$
\item $4n \implies 4n$
\item $4n^2 = (2n)^2 \implies 2n$
\item $4 * 2^n = 2^{n+2} \implies n + 2$
\item $4 * 4^n = 4^{n+1} \implies n + 1$
\end{itemize}
\subsection{Problems}
\subsubsection{Problem 2.18}
An adjacency matrix requires a single bit for each vertex pair, so we need $n^2$ bits to specify it. A list of edges consists of $m$ tuples of the form $(a, b)$ for $1 \leq a, b \leq n$, so it requires $2m\log n$ bits. Thus, we are better off using the edge list format for sparse graphs and the adjacency matrix format for dense graphs. If, however, we consider multigraphs, then the adjacency matrix requires $n^2 \log n$ bits (as we need to store the number of edges from $a$ to $b$). Thus, in the multigraph case, we can always choose to use edge lists.
\subsubsection{Problem 2.19}
Suppose $f(n) = O(n^a)$ and $g(n) = O(n^b)$. Then we have that
\begin{align*} \exists c_1, c_2, N \: | \: f(n) &\leq c_1 n^a, \: g(n) \leq c_2 n^b \quad \forall n \geq N\\ \implies g(f(n)) &\leq g(c_1 n^a) \leq c_2 (c_1 n^a)^b = O(n^{ab}) \end{align*}
so $\text{poly}(n)$ is closed under composition. Thus, any polynomial time algorithm that calls another polynomial time algorithm remains polynomial.
\subsubsection{Problem 2.20}
Since $f(n) = 2^{\Theta(\log^k n)}$ and $g(n) = O(n^a)$, there exists $N$ such that for all $n \geq N$ we have $f(n) \geq 2^{c_1 \log^k n}$ and $g(n) \leq c_2 n^a$ for some constants $c_1, c_2$. Thus,
\begin{align*} \lim_{n \to \infty} \frac{f(n)}{g(n)} &\geq \lim_{n \to \infty} \frac{2^{c_1 \log^k n}}{c_2 n^a} \\ &= \lim_{n \to \infty} \frac{(n^{c_1})^{\log^{k - 1} n}}{c_2 n^a} \end{align*}
Since $\lim_{n \to \infty}\log^{k - 1} n = \infty$, we have that $f(n) = \omega(g(n))$. To see that $f(n) = o(h(n))$, we substitute $n = 2^x$ to get
\begin{align*} \lim_{x \to \infty} \frac{f(2^x)}{h(2^x)} &\leq \lim_{x \to \infty} \frac{2^{a_1 x^k}}{2^{a_2 2^{c x}}} \\ &= 0 \end{align*}
where we used the fact that $\lim_{x \to \infty} \frac{a_1 x^k}{a_2 2^{c x}} = 0$. Finally, to see that $\text{QuasiP}$ is closed under composition, we can check that
\begin{align*} 2^{\log^{k_1} 2^{\log^{k_2} n}} = 2^{\log^{k_1 k_2} n} \end{align*}
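\subsection{A Computational Check}
The following short Python sketch (our own illustration, not part of the book) makes the bounds from Exercises 2.1 and 2.2 concrete: it counts the divisions performed by Euclid's algorithm and factors $a$ by trial division over the primes up to $\sqrt{a}$ obtained from a sieve.
\begin{verbatim}
def euclid_division_count(a, b):
    """Return gcd(a, b) and the number of divisions performed."""
    divisions = 0
    while b > 0:
        a, b = b, a % b   # one division with remainder per step
        divisions += 1
    return a, divisions   # divisions <= 2 * log2(a) by Exercise 2.1

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def factorize(a):
    """Prime factorization by trial division over primes <= sqrt(a)."""
    factors = []
    for p in primes_up_to(int(a ** 0.5) + 1):
        while a % p == 0:   # each prime divides a at most log2(a) times
            factors.append(p)
            a //= p
    if a > 1:               # any leftover cofactor must itself be prime
        factors.append(a)
    return factors
\end{verbatim}
For example, \texttt{factorize(360)} returns \texttt{[2, 2, 2, 3, 3, 5]}, and \texttt{euclid\_division\_count(360, 63)} returns \texttt{(9, 4)}.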
{ "alphanum_fraction": 0.6186345831, "avg_line_length": 49.6388888889, "ext": "tex", "hexsha": "93e52f39e8638986e0ce5a95a6525cb44743f951", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_forks_repo_path": "Nature_of_Computation_Moore_Mertens/chapter_2.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_issues_repo_path": "Nature_of_Computation_Moore_Mertens/chapter_2.tex", "max_line_length": 114, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5ea4efe2267053695123a2c2d2c5171a672f61d4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "2014mchidamb/Math-Exercise-Guides", "max_stars_repo_path": "Nature_of_Computation_Moore_Mertens/chapter_2.tex", "max_stars_repo_stars_event_max_datetime": "2019-09-19T07:33:25.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-19T07:33:25.000Z", "num_tokens": 1330, "size": 3574 }
% !TEX encoding = UTF-8 Unicode % !TEX spellcheck = en_US % !TEX root = ../../../ICMA2020.tex \subsection{Experiment Scenario} \label{subsec:ExperimentScenario} As described in Sec.\,\ref{subsec:Introduction}, a parameter identification in the process environment is rarely feasible without restrictions regarding the available workspace. For this reason, the approach of the method presented in this paper is based on performing the necessary parameter identification directly in the running process. To illustrate this situation, the proposed method is applied to two different tasks widely used in industry. These processes are presented in more detail in the following paragraph. \begin{itemize} \item Process 1: The robot performs a highly dynamic \textit{pick and place} motion between two pre-defined areas in the workspace (see Fig.\,\ref{fig:Process1}.a). To create a set of \textit{pick-and-place} cycles the points on both areas are generated randomly. \item Process 2: The end-effector of the robot has to trace the edges of a horizontal square (see Fig.\,\ref{fig:Process1}.b). \end{itemize} To validate the identified parameters, different trajectories of the same type are used for each process. For process 1 a new set of points is randomly generated using a uniform distribution. For process 2 the trajectory is given an offset in the $x$-$y$-plane. %\begin{figure*}[tb] % \vspace{-0.2cm} % \centering % \includegraphics[width = 18cm]{Chapters/Experiments/Experimental_Design/SzenarienKompakt.pdf} % \caption{Representation of process 1 and 2 and the restricted robot workspace.} % \label{fig:Process1} % \vspace{-0.1cm} %\end{figure*} %\begin{figure}[tb] % \vspace{-0.2cm} % \centering % \includegraphics[width = 7.5cm]{Chapters/Experiments/Experimental_Design/Szenario_PnP.pdf} % \caption{Representation of process 1 in the robot workspace.} % \label{fig:Process1} % \vspace{-0.1cm} %\end{figure} %\begin{figure}[tb] % \vspace{-0.2cm} % \centering % \includegraphics[width = 7.5cm]{Chapters/Experiments/Experimental_Design/Szenario_Square.pdf} % \caption{Representation of process 2 in the robot workspace.} % \label{fig:Process2} % \vspace{-0.1cm} %\end{figure} To evaluate the results of the process-based parameter identification a classical parameter identification with three different optimized excitation trajectories is also carried out. The first excitation trajectory uses the full workspace of the robot to get the best possible identification results. For the second trajectory the available robot workspace is restrained to simulate the identification process under practical conditions in the industrial environment. A representation of the used workspace is given in Fig.\,\ref{fig:Process1}.c. The complete half-sphere represents the full workspace. The marked area denotes the restrained workspace. The optimized excitation trajectories are generated according to Sec.\,\ref{subsec:OptimalExcitation}. The trajectory which uses the full workspace has a condition number of 6.76 and will be called \textit{trajectory A}. The trajectory in the restrained workspace has a condition number of 7.71 and will be called \textit{trajectory B}. Restraining the available workspace generally leads to higher condition numbers of the generated excitation trajectory. The third trajectory is used to validate both models derived from the optimal excitation and will be called \textit{trajectory C}. 
Like \textit{trajectory A}, \textit{trajectory C} is generated in the full workspace of the robot but uses a different set of \textsc{Fourier}-coefficients, due to the heuristic nature of the genetic algorithm. The identification was performed without model-based feedforward control. %\begin{figure}[tb] % \vspace{-0.2cm} % \centering % \includegraphics[width = 7.5cm]{Chapters/Experiments/Experimental_Design/TaskSpace.pdf} % \caption{Representation of the used workspace during optimized excitation.} % \label{fig:TaskSpace} % \vspace{-0.1cm} %\end{figure} %\begin{figure*}[tb] % \centering % \begin{subfigure}{0.3\textwidth} % %\vspace{-0.2cm} % \centering % \includegraphics[width = \textwidth]{Chapters/Experiments/Experimental_Design/Szenario_PnP.pdf} % \caption{Representation of process 1 in the robot workspace.} % \label{fig:Process1} % %\vspace{-0.1cm} % \end{subfigure}% % % \begin{subfigure}{0.3\textwidth} % %\vspace{-0.2cm} % \centering % \includegraphics[width = \textwidth]{Chapters/Experiments/Experimental_Design/Szenario_Square.pdf} % \caption{Representation of process 2 in the robot workspace.} % \label{fig:Process2} % %\vspace{-0.1cm} % \end{subfigure}% % % \begin{subfigure}{0.3\textwidth} % %\vspace{-0.2cm} % \centering % \includegraphics[width = \textwidth]{Chapters/Experiments/Experimental_Design/TaskSpace.pdf} % \caption{Representation of the used workspace during optimized excitation.} % \label{fig:TaskSpace} % %\vspace{-0.1cm} % \end{subfigure}% % \caption{Representation of process 1 and 2 and the restricted robot workspace.} %\end{figure*} \begin{figure*}[tb] \vspace{-0.2cm} \centering %\subfigure[Representation of process 1 in the robot workspace.]{\includegraphics[width = 5.5cm]{Chapters/Experiments/Experimental_Design/Szenario_PnP.pdf}} %\subfigure[Representation of process 2 in the robot workspace.]{\includegraphics[width = 5.5cm]{Chapters/Experiments/Experimental_Design/Szenario_Square.pdf}} %\subfigure[Representation of the used workspace during optimized excitation.]{\includegraphics[width = 5.5cm]{Chapters/Experiments/Experimental_Design/TaskSpace.pdf}} \includegraphics{Chapters/Experiments/Experiment_Scenario/SzenarienKompakt.pdf} \vspace{-0.2cm} \caption{Representation of process 1 and 2 and the workspace used for optimization.} \label{fig:Process1} \vspace{-0.3cm} \end{figure*}
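To illustrate how such excitation trajectories can be evaluated, the following Python sketch samples a finite \textsc{Fourier} series for one joint and computes the condition number of an information matrix stacked from a user-supplied regressor function. It is only a schematic example under our own assumptions: the regressor, the coefficient values and all names are placeholders and do not correspond to the actual implementation used for this paper.
\begin{verbatim}
import numpy as np

def fourier_joint_trajectory(t, a, b, q0, omega):
    """Position, velocity and acceleration of one joint described by a
    finite Fourier series with harmonics k = 1..K (a, b: coefficient arrays)."""
    k = np.arange(1, len(a) + 1)
    wk = omega * k
    s, c = np.sin(np.outer(t, wk)), np.cos(np.outer(t, wk))
    q   = q0 + s @ (a / wk) - c @ (b / wk)
    qd  = c @ a + s @ b
    qdd = -s @ (a * wk) + c @ (b * wk)
    return q, qd, qdd

def trajectory_condition_number(regressor, t, a, b, q0, omega):
    """Stack the regressor over all time samples and return its condition
    number, the quantity the excitation trajectory optimization minimizes."""
    q, qd, qdd = fourier_joint_trajectory(t, a, b, q0, omega)
    W = np.vstack([regressor(qi, qdi, qddi) for qi, qdi, qddi in zip(q, qd, qdd)])
    return np.linalg.cond(W)
\end{verbatim}
In this picture, the genetic algorithm mentioned above searches over the \textsc{Fourier} coefficients \texttt{a} and \texttt{b}, subject to the workspace restrictions of Fig.\,\ref{fig:Process1}.c, so that the resulting condition number becomes as small as possible.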
{ "alphanum_fraction": 0.7546666667, "avg_line_length": 62.5, "ext": "tex", "hexsha": "3939c6a3cd50eebd4eccc62bdaf14da2d54d0c73", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SchapplM/robotics-paper_icma2020", "max_forks_repo_path": "paper/Chapters/Experiments/Experiment_Scenario/Experiment_Scenario.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SchapplM/robotics-paper_icma2020", "max_issues_repo_path": "paper/Chapters/Experiments/Experiment_Scenario/Experiment_Scenario.tex", "max_line_length": 1525, "max_stars_count": null, "max_stars_repo_head_hexsha": "f81c6599ec7a9341e6a467a4ff9b31091aa50e75", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SchapplM/robotics-paper_icma2020", "max_stars_repo_path": "paper/Chapters/Experiments/Experiment_Scenario/Experiment_Scenario.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1582, "size": 6000 }
\chapter{Methodology} \section{Planning} Before commencing the project, I previously made an e-commerce website using Python Django and using SQLite as the database, as I mentioned in the Introduction. While I was not happy with the outcome of that project, it was a valuable learning experience as Python was quite new to me at the time. Before I started, I had to choose what features I should implement and how they should be implemented in relation to the languages, frameworks and database. I navigated through many e-commerce websites, such as Amazon and eBay, to find features that should be implemented in this project. I did not want to use the same language and framework I used previously, so I decided to use React as the main frontend library. \section{Agile Software Development} \subsection{What Is Agile?} I felt that using agile software development would be best suited for this type of project. Agile was developed by 17 software developers who met together in 2001. As a result of their meeting, they jointly released the Manifesto for Agile Software Development which was signed by everyone involved \cite{manifesto2001agile}. Agile is a catchall term for frameworks such as scrum, extreme programming (XP) and dynamic systems development method (DSDM). Agile is an iterative process for project management. The four core agile values are: "Individuals and interactions over processes and tools, Working software over comprehensive documentation, Customer collaboration over contract negotiation, Responding to change over following a plan \cite{manifesto2001agile}". \begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{images/agile.jpg} \caption{Agile Software Development} \label{fig:Agile Software Development} \end{figure} \section[Selection Criteria for Languages, Platforms and Technologies]{\texorpdfstring{Selection Criteria for Languages,\\ Platforms and Technologies}{Selection Criteria for Languages, Platforms and Technologies}} Here I talk about why I chose some of the main technologies used for the development of this project. I go into more detail about the languages, frameworks and technologies used in Technology Review. \subsection{Languages} \paragraph{JavaScript} As React is a JavaScript library, I wanted to use JavaScript for this project. I also could have used TypeScript, which is a superset of JavaScript, but I chose to use JavaScript due to its immense popularity and my previous experience with it. \subsection{Platforms} \paragraph{GitHub} There would be no question that I would use GitHub throughout the development of this project. Even though it was mandatory to use GitHub for this project, I would have used it even if it were optional. I made regular commits to my GitHub repository. GitHub was widely used during my time studying at the Galway-Mayo Institute of Technology. GitHub is also widely used by millions all around the world and it is used extensively in the software industry as well. \subsection{Database} \paragraph{MongoDB} As MongoDB makes up the MERN stack, MongoDB was used as the database of this project. MongoDB is also one of the most popular databases in the world and it is used by many companies. I also had previous using MongoDB, and I enjoyed using it and prefer it to SQL. \section{Runtime Environment} \paragraph{Node.js} Like with MongoDB, Node.js was used as it is part of the MERN stack. Node.js is also a great choice as it is very scalable, and it is also a good choice as it reduces the technologies someone needs to know. 
\section{Time Management}
I only started working on the project in February as I was unable to find time for it in the first semester. Dividing my time among my different modules proved to be quite a task. There were a few projects and assignments on which I spent more time than was necessary. Fortunately, this did not have a major effect on my other projects and assignments, but I would have liked to spend more time on a few of them.
{ "alphanum_fraction": 0.8058397804, "avg_line_length": 102.7435897436, "ext": "tex", "hexsha": "2cb5e1f49bd9e91774423ac174b54926d6e9e1b5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "61541c9b1d524029e16d4249d50c28b81c852dd6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "WilliamVida/Applied-Project-and-Minor-Dissertation", "max_forks_repo_path": "Dissertation/LaTeX/methodology.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "61541c9b1d524029e16d4249d50c28b81c852dd6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "WilliamVida/Applied-Project-and-Minor-Dissertation", "max_issues_repo_path": "Dissertation/LaTeX/methodology.tex", "max_line_length": 766, "max_stars_count": null, "max_stars_repo_head_hexsha": "61541c9b1d524029e16d4249d50c28b81c852dd6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "WilliamVida/Applied-Project-and-Minor-Dissertation", "max_stars_repo_path": "Dissertation/LaTeX/methodology.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 830, "size": 4007 }
\section{Reconfiguration Mechanism}
\label{mechanism}
In this section we describe the reconfiguration protocol and the changes required to the \LBFT protocol. One of the design goals is to decouple these two protocols as much as possible to provide clean abstractions.
\begin{figure}[h]
\centering
\includegraphics[scale=.45]{figures/reconfig1.png}
\caption{Commit reconfig transaction}
\end{figure}
\subsection{Reconfiguration Protocol}
An \emph{epoch} is a monotonically increasing number used to refer to a \LBFT instance running with a specific configuration. A \LBFT instance is agnostic to epoch numbers, so we introduce \module{EpochManager} to process epoch-related information.
\subsubsection{Message types}
First we add \emph{epoch} to every message type of the \LBFT protocol, including proposals, votes, timeout messages, etc. Messages with the same \emph{epoch} as the local \LBFT instance go to \LBFT for processing. Messages that have a different \emph{epoch} go to \rust{EpochManager.process\_different\_epoch\_messages}. Additionally we introduce two message types, \rust{EpochChange} and \rust{EpochRetrieval}:
\begin{algorithm}[H]
\myalgorithm
{\SetAlgoNoLine
\tcp{The commit proof signed by \LBFT}
\Iblock{\rust{LedgerInfo}}{ \rust{state\_id} ;\; \rust{signatures} ;\; ... \; \rust{next\_configuration}: \rust{Option<Configuration>} ; \tcp{Added field to support reconfiguration} }
\BlankLine
\tcp{Signals an \emph{epoch} change from $i$ to $j$}
\Iblock{\rust{EpochChange}}{ \rust{start\_epoch} ; \tcp{The epoch of the first ledger info} \rust{end\_epoch} ; \tcp{The epoch after the last ledger info} \rust{proof}: \rust{Vec<LedgerInfo>} ; }
\BlankLine
\tcp{Request an EpochChange from start\_epoch}
\Iblock{\rust{EpochRetrieval}}{ \rust{start\_epoch} ; }
}
\end{algorithm}
\subsubsection{EpochManager}
We have three different handlers for \module{EpochManager} to process those messages.
\paragraph{Handling different epoch, \rust{process\_different\_epoch\_msg}:}
When a message from a different \LBFT epoch comes in, we'll compare our local \emph{epoch} with the message's. If we're behind, we'll issue an \rust{EpochRetrieval} to the peer who sent the message. If we're ahead, we'll provide an \rust{EpochChange} to the peer.
\paragraph{Handling epoch retrieval, \rust{process\_epoch\_retrieval}:}
Similar to how we process messages from a lower \emph{epoch}, we'll provide an \rust{EpochChange} to help the peer join our \emph{epoch}.
\paragraph{Handling epoch change, \rust{start\_new\_epoch}:}
When we receive an \rust{EpochChange}, we first verify that we're at \emph{epoch} $i$ and that the \rust{EpochChange} is correct, by iterating through the ledger infos and verifying that the signatures correspond to the configuration; then we ensure we're ready for the configuration of \emph{epoch} $j$, e.g. sync to the ledger state at the beginning of \emph{epoch} $j$. Finally we spawn a new \LBFT instance with the configuration of \emph{epoch} $j$.
\subsection{Changes to \LBFT}
Although \LBFT is agnostic to \emph{epoch}, it has to support additional rules for the \textbf{reconfiguration protocol} in order to maintain the safety property across \emph{epochs}.
\subsubsection{Commit Flow}
As a reminder, \LBFT has a three-phase commit rule: once we collect a QC for three contiguous rounds k, k+1, k+2, we commit the block at round k and all its prefix. To support the \rust{EpochChange} mentioned above, we need to add an optional field \rust{next\_configuration} to the \rust{ExecutedState}, which is only output when executing a reconfiguration transaction.
Remember that \rust{LedgerInfo} carries the \rust{ExecutedState}, so it contains such a field and can serve as an end-of-epoch proof. Besides the normal commit flow in \LBFT, which persists transactions in the ledger state, we have an additional step to broadcast \rust{EpochChange} to all the current validators if the \rust{LedgerInfo} carries \rust{next\_configuration}. \module{EpochManager} would then kick in, handle the \rust{EpochChange} message, spawn a new \LBFT instance with \rust{next\_configuration} and shut down the current one.
\subsubsection{Descendants of Reconfig Block - Safety}
\label{safety}
In this section, we explain the need to keep all the descendants of a reconfig block empty of transactions until it is committed; otherwise we might have a safety violation, e.g. a double spend.
The fundamental problem here is that \LBFT spreads the phases of the protocol (for every proposal) over 3 rounds, i.e. pipelining. When it comes to reconfiguration, a proposal after the reconfiguration transaction is under a different configuration than its parent and the pipelining may not work anymore; we'll discuss more about this in section \ref{correctness}.
We use the validator configuration as an example, demonstrated in Figure~\ref{fig:safety}. Imagine we're at \emph{epoch} $1$ with \rust{validators\{a1, a2, a3, a4\}}, and there's a reconfiguration transaction that changes us to \rust{validators\{c1, c2, c3, c4\}}. The reconfiguration transaction is included in B1, and a4 collects a QC for B3 which carries a ledger info (commit proof) of B1, but only unveils it to \rust{c1, c2, c3} and not to \rust{a1, a2, a3}. \rust{c1, c2, c3} will start the new epoch and use B1 as its genesis, while \rust{a1, a2, a3} go through another three rounds B4, B5, B6, finally commit B4 (which has B1 as its ancestor) and unveil it to \rust{c4}. Now \rust{c4} thinks B4 is the genesis. If we have a double spend txn1, txn2 such that txn1 is committed with B4 and txn2 is committed in B10 in the new \emph{epoch} $2$ with \rust{c1, c2, c3}, clients who query \rust{c1, c2, c3} would see the double spend compared with \rust{c4}.
\begin{figure}[ht]
\centering
\includegraphics[scale=.45]{figures/reconfig-safety.png}
\caption{Double spend in B4 and B10 (green box)}
\label{fig:safety}
\end{figure}
Keeping all descendants of the reconfiguration block empty solves this problem; semantically it is equivalent to running the 3 phases without pipelining.
\subsubsection{Epoch Boundary - Liveness}
In this section, we explain how to specify the boundary of each \emph{epoch} and the risk of losing liveness if we don't do it correctly.
\paragraph{Start}
In \LBFT we assume everyone starts from a known genesis, which we can consider as part of the configuration; otherwise we'll lose liveness even if we preserve safety with empty blocks. Intuitively we could use the block that includes the reconfiguration transaction, but we may not commit that block directly but instead one of its descendants. Even worse, we may have multiple commits of its descendants, since Byzantine nodes could hide commits, as Figure~\ref{fig:liveness} demonstrates.
\begin{figure}[ht]
\centering
\includegraphics[scale=.45]{figures/reconfig-liveness.png}
\caption{If genesis is not equal to genesis', we lose liveness}
\label{fig:liveness}
\end{figure}
With the empty suffix blocks change, we ensure the same ledger state when we commit the reconfiguration transaction regardless of which descendant is committed, and we leverage that to derive a genesis block, so that every ledger info that ends the previous \emph{epoch} results in the same genesis deterministically.
\paragraph{End}
After a node sees a \rust{LedgerInfo} that has \rust{next\_configuration}, it stops participating in the current \LBFT instance and spawns the next \LBFT instance. If it is not part of the next configuration, counterintuitively it cannot shut itself down immediately, otherwise we'll lose liveness. As an example, suppose we have \rust{validators\{a1, a2, a3, a4\}} at \emph{epoch} $i$, and malicious \rust{a4} collects a \rust{LedgerInfo} and sends it to \rust{a1}, which is not part of \emph{epoch} $i+1$, and \rust{a1} decides to shut itself down. If Byzantine node \rust{a4} then also stops participating, \rust{a2, a3} would suffer a liveness problem and would never be able to make progress. To address such a problem, \rust{a1} needs to keep the \rust{EpochManager} running until it sees a QuorumCert from the new \emph{epoch}, so that it knows at least f+1 honest nodes in the new \emph{epoch} have bootstrapped.
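To summarise the behaviour described in this section, a simplified Python-style sketch of the \module{EpochManager} is given below. It is an illustration of the rules above, not the actual implementation: apart from the handler names taken from this section, all types, helper functions and signatures are assumptions made for the example.
\begin{verbatim}
from dataclasses import dataclass
from typing import List

@dataclass
class EpochRetrieval:
    start_epoch: int

@dataclass
class EpochChange:
    start_epoch: int
    end_epoch: int
    proof: List["LedgerInfo"]  # end-of-epoch LedgerInfos, oldest first

class EpochManager:
    def __init__(self, epoch, configuration, ledger):
        self.epoch = epoch                  # current local epoch
        self.configuration = configuration  # validator set of this epoch
        self.ledger = ledger                # access to past LedgerInfos

    def process_different_epoch_msg(self, msg, peer):
        if msg.epoch > self.epoch:    # we are behind: ask for a proof
            peer.send(EpochRetrieval(start_epoch=self.epoch))
        elif msg.epoch < self.epoch:  # peer is behind: help it catch up
            peer.send(self.make_epoch_change(msg.epoch))

    def process_epoch_retrieval(self, request, peer):
        peer.send(self.make_epoch_change(request.start_epoch))

    def make_epoch_change(self, start_epoch):
        proof = self.ledger.epoch_ending_ledger_infos(start_epoch, self.epoch)
        return EpochChange(start_epoch, self.epoch, proof)

    def start_new_epoch(self, epoch_change):
        # Verify we are at the starting epoch and check the chain of
        # end-of-epoch LedgerInfos, epoch by epoch.
        assert epoch_change.start_epoch == self.epoch
        config = self.configuration
        for ledger_info in epoch_change.proof:
            assert ledger_info.next_configuration is not None
            assert config.verify_signatures(ledger_info)
            config = ledger_info.next_configuration
        # Sync to the ledger state at the beginning of the new epoch and
        # spawn a fresh LBFT instance; a node that is not in `config` keeps
        # this EpochManager alive until it sees a QC from the new epoch.
        self.sync_to(epoch_change.proof[-1])
        self.spawn_lbft(epoch_change.end_epoch, config)
\end{verbatim}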
{ "alphanum_fraction": 0.7770312306, "avg_line_length": 56.5985915493, "ext": "tex", "hexsha": "f033d3a7621dbb6239c0fd06b811be5f366a619a", "lang": "TeX", "max_forks_count": 2671, "max_forks_repo_forks_event_max_datetime": "2020-12-07T19:35:21.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-18T08:47:03.000Z", "max_forks_repo_head_hexsha": "27c125594ee82c8edb361f0c4cf79c9fa101dd68", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "BlockSuite/libra", "max_forks_repo_path": "documentation/tech-papers/lbft-reconfig/mechanism.tex", "max_issues_count": 6316, "max_issues_repo_head_hexsha": "27c125594ee82c8edb361f0c4cf79c9fa101dd68", "max_issues_repo_issues_event_max_datetime": "2020-12-07T21:27:46.000Z", "max_issues_repo_issues_event_min_datetime": "2019-06-18T09:02:03.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "BlockSuite/libra", "max_issues_repo_path": "documentation/tech-papers/lbft-reconfig/mechanism.tex", "max_line_length": 159, "max_stars_count": 16705, "max_stars_repo_head_hexsha": "27c125594ee82c8edb361f0c4cf79c9fa101dd68", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "BlockSuite/libra", "max_stars_repo_path": "documentation/tech-papers/lbft-reconfig/mechanism.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-07T17:25:27.000Z", "max_stars_repo_stars_event_min_datetime": "2019-06-18T08:46:59.000Z", "num_tokens": 2125, "size": 8037 }
% !TeX spellcheck = en_US
\documentclass[compsoc,onecolumn,11pt,a4paper,final]{IEEEtran}

%%% Packages
\usepackage[utf8]{inputenc} % for umlauts
%\usepackage[ngerman]{babel} % for German
\usepackage[T1]{fontenc} % adjust font encoding in output
\usepackage{amsmath} % enable mathematical symbols
\usepackage[cmintegrals]{newtxmath} % ieee style for integrals
\usepackage{xargs} % use more than one optional parameter in new commands
\usepackage[pdftex,dvipsnames]{xcolor} % coloured text etc.
\usepackage{graphicx}
\usepackage{caption}
\usepackage{float} % forcing figure here (H option)
\usepackage{ifdraft} % \ifdraft{do in draft}{do in final}
\usepackage[hidelinks]{hyperref} % PDF-Links and Bookmarks
\usepackage{bookmark}
\usepackage{mathtools}
\usepackage{soul} % \st for strike through
\usepackage{lipsum} % dummy text

%%% Tikz
\usepackage{pgf,tikz}
\usepackage{mathrsfs}
\usetikzlibrary{arrows}

%%% Colors
\definecolor{red}{rgb}{1,0,0}
\definecolor{green}{rgb}{0,1,0}
\definecolor{blue}{rgb}{0,0,1}

%%% TODO notes
\usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes}
\newcommandx{\unsure}[2][1=]{\todo[linecolor=red,backgroundcolor=red!25,bordercolor=red,#1]{#2}}
\newcommandx{\change}[2][1=]{\todo[linecolor=blue,backgroundcolor=blue!25,bordercolor=blue,#1]{#2}}
\newcommandx{\info}[2][1=]{\todo[linecolor=OliveGreen,backgroundcolor=OliveGreen!25,bordercolor=OliveGreen,#1]{#2}}
\newcommandx{\improvement}[2][1=]{\todo[linecolor=Plum,backgroundcolor=Plum!25,bordercolor=Plum,#1]{#2}}
\newcommandx{\thiswillnotshow}[2][1=]{\todo[disable,#1]{#2}}

%%% Title and author
\title{CSP and SAT Encoding}
\author{
\IEEEauthorblockN{Mazen Bouchur}\\
\IEEEauthorblockA{TU Clausthal, Germany}
}

\begin{document}
\maketitle

\begin{abstract}\label{abstract}
This paper gives a brief overview of different encodings between the constraint satisfaction problem (CSP) and the Boolean satisfiability problem (SAT). Each encoding is analyzed and evaluated based on various aspects and characteristics such as performance and complexity. Most importantly, each encoding is compared performance-wise to the unencoded counterpart with respect to the standard algorithm of each problem (DPLL in the case of SAT and an arc-consistency-based algorithm in the case of CSP). We try to summarize multiple papers discussing several encodings and highlight the remarkable properties of each of them, especially the SAT-based constraint solver ``sugar'' and its underlying order encoding.
\end{abstract}

\input{introduction}
\input{preliminaries}
\input{csp_to_sat}
\input{sat_to_csp}

\section{Conclusion}\label{sec:conclusion}
We discussed some mappings between the constraint satisfaction problem (CSP) and the Boolean satisfiability problem (SAT). Some theoretical background has been introduced and, based on general comparison criteria, the encodings have been analyzed and compared against each other. It has been shown that the choice of an encoding has a large impact on the efficiency and solvability of the generated problem. Encoding CSP into the simpler SAT problem has led to efficient instances of the original problem that can be solved quickly. This claim was supported with theoretical arguments and proofs, especially in the case of some tractable CSP classes encoded using the order encoding. Based on similar encoding techniques, various new SAT-based constraint solvers have been developed. On the other hand, a variety of SAT to CSP encodings have been introduced.
Although no SAT to CSP encoding has proven itself to be superior, a clearer picture of the relation between CSP and SAT has emerged and further studies can build upon these results. %%% Bibliography \newpage \nocite{*} \bibliographystyle{ieeetran} \bibliography{bibliography} \end{document}
{ "alphanum_fraction": 0.7802772692, "avg_line_length": 53.8450704225, "ext": "tex", "hexsha": "bf6f3a6e4e60f29220c74c8090cb3b6ae4fe9918", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ba73dda02acc2ecfc66a66530e54e3940b82384d", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mazenbesher/csp_and_sat", "max_forks_repo_path": "csp_and_sat.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ba73dda02acc2ecfc66a66530e54e3940b82384d", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mazenbesher/csp_and_sat", "max_issues_repo_path": "csp_and_sat.tex", "max_line_length": 1030, "max_stars_count": null, "max_stars_repo_head_hexsha": "ba73dda02acc2ecfc66a66530e54e3940b82384d", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mazenbesher/csp_and_sat", "max_stars_repo_path": "csp_and_sat.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1012, "size": 3823 }
%!TEX root = Funktionalanalysis - Vorlesung.tex
\chapter*{Bibliography}
\begin{itemize}
\item Prof.~Dr.~Szech's lecture about Economics and Behaviour in 2015
\item Berninghaus, Ehrhart, Güth: Strategische Spiele, 2002
\item Kahneman, Daniel: Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
\item Ariely, Dan: Predictably Irrational. New York: HarperCollins, 2008.
\item Ariely, Dan: The Upside of Irrationality. New York: HarperCollins, 2011.
\item Güth, W.; Levati, M.V.; Nardi, C.; Soraperra, I.: An ultimatum game with multidimensional response strategies, 2014.
\end{itemize}
{ "alphanum_fraction": 0.7554076539, "avg_line_length": 50.0833333333, "ext": "tex", "hexsha": "06267b161259809306cf6d7c9d3e7c07a2dadfa1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b3dd3405522c9bdb8fcd3bdeb7543ac4ad74257f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "MBelica/Social-Choice-Theory-SS2016", "max_forks_repo_path": "source/credit.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b3dd3405522c9bdb8fcd3bdeb7543ac4ad74257f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "MBelica/Social-Choice-Theory-SS2016", "max_issues_repo_path": "source/credit.tex", "max_line_length": 123, "max_stars_count": null, "max_stars_repo_head_hexsha": "b3dd3405522c9bdb8fcd3bdeb7543ac4ad74257f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "MBelica/Social-Choice-Theory-SS2016", "max_stars_repo_path": "source/credit.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 195, "size": 601 }
\chapter{Additional Experiment and Statistics}
\label{chap:additional_exp}
In this appendix chapter we present a series of additional experiments as well as all numeric measurements not previously shown in the results chapter.
\section{Additional Experiment: Varying CC/EV}
\label{sec:additional_cc_ev_exp}
In this experiment we examine the behaviour of the LDOF PED MC method for a varying eigenvector and cluster count. For 2 to 6 clusters and for 2 to 6 eigenvectors we generate segmentations on the Waving Hand dataset and plot the resulting performance (Fig. $\ref{fig:wh_altering_ev_cc_app}$). This experiment will give us some further insight into choosing the appropriate eigenvector (EV) and cluster count (CC) we want to solve for in MinCut (MC) and Spectral clustering (SC). The corresponding statistics are listed in Table $\ref{tab:wh_ev_c}$.
To give the reader an easier way to understand the results we visualized the resulting statistics. For fixed cluster counts, we plotted the recall-precision and the eigenvector-F1 score graphs. Moreover, for fixed eigenvector counts we plotted the resulting recall-precision and clusters-F1 score graphs.
\begin{figure}[H]
\begin{center}
\subfigure[Varying Clusters Recall/Precision Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/wh1/perf_ev_c/clusters_rec_prec}
}
\subfigure[Varying Clusters EV/F1 Score Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/wh1/perf_ev_c/clusters_ev_f1}
}
\subfigure[Varying EV Recall/Precision Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/wh1/perf_ev_c/ev_rec_prec}
}
\subfigure[Varying EV Clusters/F1 Score Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/wh1/perf_ev_c/ev_c_f1}
}
\end{center}
\caption[Plot Performance Varying Clusters/Eigenvectors]{Visualizing the experiment \enquote{Varying CC/EV} via graphs: For a fixed number of eigenvectors and a varying cluster count (first row) we show the recall/precision and eigenvector/F1-Score plots. Similarly, we show recall/precision and clusters/F1-Score plots for a fixed number of clusters and a varying number of eigenvectors (second row).
The underlying measurements for these plots are listed in Table $\ref{tab:wh_ev_c}$.}
\label{fig:wh_altering_ev_cc_app}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{Performance Varying Eigenvectors/Clusters PED MC} \\ \hline
\multicolumn{6}{|c|}{Precision} \\ \hline
\textbf{Eigenvectors / Clusters} & 2 & 3 & 4 & 5 & 6 \\ \hline
2 & 7.03\% & 29.28\% & 33.80\% & 59.61\% & 48.63\% \\ \hline
3 & 6.95\% & 28.62\% & 40.45\% & 57.17\% & 45.59\% \\ \hline
4 & 6.99\% & 30.28\% & 46.90\% & 43.53\% & 65.60\% \\ \hline
5 & 7.05\% & 30.28\% & 49.35\% & 53.55\% & 70.23\% \\ \hline
6 & 7.05\% & 30.28\% & 50.11\% & 53.81\% & 71.47\% \\ \hline
\multicolumn{6}{|c|}{Recall} \\ \hline
2 & 23.64\% & 40.45\% & 36.28\% & 79.73\% & 50.27\% \\ \hline
3 & 23.64\% & 41.17\% & 40.45\% & 57.17\% & 61.04\% \\ \hline
4 & 23.64\% & 41.17\% & 57.20\% & 60.50\% & 79.28\% \\ \hline
5 & 24.09\% & 41.17\% & 52.55\% & 54.29\% & 71.71\% \\ \hline
6 & 24.09\% & 41.17\% & 52.55\% & 53.42\% & 67.39\% \\ \hline
\multicolumn{6}{|c|}{F1 Score} \\ \hline
2 & 10.83\% & 33.97\% & 35.00\% & 68.22\% & 49.43\% \\ \hline
3 & 10.74\% & 33.77\% & 32.97\% & 50.94\% & 52.20\% \\ \hline
4 & 10.79\% & 34.90\% & 51.54\% & 50.63\% & 71.79\% \\ \hline
5 & 10.91\% & 34.90\% & 50.90\% & 53.92\% & 70.97\% \\ \hline
6 & 10.91\% & 34.90\% & 51.30\% & 53.62\% & 69.37\% \\ \hline
\end{tabular}
\caption[Performance Varying Eigenvector-Cluster]{Measurements of the experiment \enquote{Varying CC/EV} on the Waving Hand dataset (Fig. $\ref{fig:wh_altering_ev_cc_app}$).}
\label{tab:wh_ev_c}
\end{table}
\section{Varying Cluster Count on SC and MC}
This experiment examines the behaviour of the modes PD SC, PD MC, PED SC, PED MC when using LDOF flow fields for a varying number of clusters on the One Chair dataset (Fig. $\ref{fig:chair_3_cast_dataset_appendix}$). The quantitative results are visualized in Figure $\ref{fig:chair_3_cast_plot_avg_stat}$. Additionally, qualitative results of the best resulting segmentations (Fig. $\ref{fig:chair_3_cast_best_f_score_results}$) and the worst result (Fig. $\ref{fig:chair_3_cast_gt_worst_best}$) are visualized below. The corresponding measurements of these experiments are listed in Table $\ref{tab:chair_3_cast_avg_performance}$.
\begin{figure}[H]
\begin{center}
\subfigure[Frame 45]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/ds/45}
}
\subfigure[Frame 60]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/ds/60}
}
\subfigure[Frame 75]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/ds/75}
}
~
\subfigure[GT Frame 45]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/gt/45}
}
\subfigure[GT Frame 60]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/gt/60_amb}
}
\subfigure[GT Frame 75]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/gt/75_amb}
}
\end{center}
\caption[Chair 3 Cast Dataset]{Frames of the One Chair dataset (top row) and their corresponding ground-truth annotations (bottom row).}
\label{fig:chair_3_cast_dataset_appendix}
\end{figure}

\begin{figure}[H]
\begin{center}
\subfigure[Raw 5 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/best/ped_mc_c_5}
}
\subfigure[5 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/merged/ped_mc_c_5}
}
\subfigure[Raw 10 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/best/ped_mc_c_10}
}
\subfigure[10 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/merged/ped_mc_c_10}
}
~
\subfigure[Raw 15 Clusters PD SC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/best/pd_sc_c_15}
}
\subfigure[15 Clusters PD SC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/merged/pd_sc_c_15}
}
\subfigure[Raw 20 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/best/ped_mc_c_20}
}
\subfigure[20 Clusters PED MC]{
\includegraphics[width=0.22\linewidth]
{evaluation/chairs_3_cast/merged/ped_mc_c_20}
}
\end{center}
\caption[Chair 3 Cast Winner]{The four best segmentation results for frame 60 according to their F1 score (Tab. $\ref{tab:chair_3_cast_avg_performance}$).
For each pair, the unmerged segmentation is shown on the left and the merged segmentation on the right.}
\label{fig:chair_3_cast_best_f_score_results}
\end{figure}

\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{Average Precision} \\ \hline
\textbf{Method / \#Cluster} & 5 & 10 & 15 & 20 \\ \hline
PD SC & 18.01\% & \textbf{55.17}\% & \textbf{55.12}\% & \textbf{56.23}\% \\ \hline
PD MC & \textbf{38.70}\% & 38.14\% & 45.65\% & 49.64\% \\ \hline
PED SC & 13.14\% & 35.29\% & 50.75\% & 52.14\% \\ \hline
PED MC & 26.81\% & 42.94\% & 38.70\% & \textbf{56.23}\% \\ \hline
\multicolumn{5}{|c|}{Average Recall} \\ \hline
PD SC & 12.34\% & 19.22\% & 38.13\% & 47.82\% \\ \hline
PD MC & 13.10\% & 24.51\% & \textbf{44.07}\% & \textbf{53.50}\% \\ \hline
PED SC & 10.20\% & 28.29\% & 39.24\% & 33.75\% \\ \hline
PED MC & \textbf{19.37}\% & \textbf{38.86}\% & 41.88\% & 50.07\% \\ \hline
\multicolumn{5}{|c|}{Average F1 Score} \\ \hline
PD SC & 14.02\% & 28.47\% & \textbf{45.04}\% & 51.59\% \\ \hline
PD MC & 19.50\% & 29.79\% & 44.63\% & 51.40\% \\ \hline
PED SC & 11.48\% & 30.89\% & 43.88\% & 40.69\% \\ \hline
PED MC & \textbf{21.97}\% & \textbf{40.63}\% & 40.12\% & \textbf{52.58}\% \\ \hline
\end{tabular}
\caption[Chair 3 Cast: Average Performance Scores]{Average results of the experiment \enquote{varying number of clusters} on the One Chair dataset.}
\label{tab:chair_3_cast_avg_performance}
\end{table}

\begin{figure}[H]
\begin{center}
\subfigure[Recall / Precision Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/chairs_3_cast/avg/avg_prec_rec}
\label{fig:chair_3_cast_plot_avg_stat_a}
}
\subfigure[Cluster Count / F1 Score Plot]{
\includegraphics[width=0.47\linewidth]
{evaluation/chairs_3_cast/avg/avg_clust_f1}
\label{fig:chair_3_cast_plot_avg_stat_b}
}
\end{center}
\caption[Chair 3 Cast avg statistic plots]{Plots of the average performance of the four methods PD SC, PD MC, PED SC, and PED MC for a varying number of clusters. The plot on the left shows recall against precision, and the plot on the right shows the F1 score against the number of clusters. The actual measurements are listed in Table $\ref{tab:chair_3_cast_avg_performance}$.}
\label{fig:chair_3_cast_plot_avg_stat}
\end{figure}

\begin{figure}[H]
\begin{center}
\subfigure[Ground truth frame 60]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/worst_best/60_amb}
\label{fig:chair_3_cast_gt_worst_best_a}
}
\subfigure[Worst F1 Score Frame 60]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/worst_best/ped_sc_c_5_f_60}
\label{fig:chair_3_cast_gt_worst_best_b}
}
\subfigure[Best F1 Score Frame 60]{
\includegraphics[width=0.31\linewidth]
{evaluation/chairs_3_cast/worst_best/ped_mc_c_20_f_60}
\label{fig:chair_3_cast_gt_worst_best_c}
}
\end{center}
\caption[Chair 3 Cast Worst/Best Result]{Comparison of the worst (center) and the best (right) segmentation results according to the F1 measure (see Tab. $\ref{tab:chair_3_cast_avg_performance}$), using the ground-truth mask shown on the left.}
\label{fig:chair_3_cast_gt_worst_best}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% NOTE: WORK IN PROGRESS
%
% Modification to ACM sample.
% Based on:
% acmsmall-sample.tex, dated 15th July 2010
% This is a sample file for ACM small trim journals
%
% Compilation using 'acmsmall.cls' - version 1.1, Aptara Inc.
% (c) 2010 Association for Computing Machinery (ACM)
%
% Questions/Suggestions/Feedback should be addressed to => "[email protected]".
% Users can also go through the FAQs available on the journal's submission webpage.
%
% Steps to compile: latex, bibtex, latex latex
%
% For tracking purposes => this is v1.1 - July 2010
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[prodmode,acmtecs]{acmsmall}

\usepackage{draftcopy}
\usepackage{graphicx}
\usepackage{type1cm}
\usepackage{eso-pic}
\usepackage{color}

% Package to generate and customize Algorithm as per ACM style
\usepackage[ruled]{algorithm2e}
\renewcommand{\algorithmcfname}{ALGORITHM}
\SetAlFnt{\small}
\SetAlCapFnt{\small}
\SetAlCapNameFnt{\small}
\SetAlCapHSkip{0pt}
\IncMargin{-\parindent}

% Metadata Information
\acmVolume{1}
\acmNumber{1}
\acmArticle{1}
\acmYear{2011}
\acmMonth{3}

% Document starts
\begin{document}

% Page heads
\markboth{Berlin Brown}{A Bottom-Up Approach for Artificial Life Simulations}

\makeatletter
\AddToShipoutPicture{%
\setlength{\@tempdimb}{.5\paperwidth}%
\setlength{\@tempdimc}{.5\paperheight}%
\setlength{\unitlength}{1pt}%
\put(\strip@pt\@tempdimb,\strip@pt\@tempdimc){%
\makebox(0,0){\rotatebox{45}{\textcolor[gray]{0.75}%
{\fontsize{6cm}{6cm}\selectfont{DRAFT}}}}%
}%
}
\makeatother

% Title portion
\title{A Bottom-Up Approach for Artificial Life Simulations}
\author{Berlin Brown, [email protected] \affil{Berlin Research}}

\begin{abstract}
The field of artificial intelligence in computer science focuses on many different areas of computing, from computer vision to natural language processing. These top-down approaches typically concentrate on human behavior or other animal functions. In this article we look at a bottom-up approach to artificial life and how emergent cell behavior can produce interesting results. With this bottom-up alife approach, we are not interested in solving any particular task; instead we are interested in observing the adaptive nature of the entities in our simulation. We also want to introduce readers more familiar with software engineering to biological systems and evolutionary theory concepts.
\end{abstract}

\category{C.2.2}{Artificial Intelligence}{Artificial Life}
\terms{Evolution, Artificial Life, ALife, Artificial Intelligence}
\keywords{Evolution, artificial life, alife, scala, java, bottom-up}

\begin{bottomstuff}
This work is supported by Berlin Research.
Author's addresses: B. Brown, Atlanta Georgia
\end{bottomstuff}

\maketitle

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% General Structure Notes:
%%
%% In my articles, I will question, what is artificial intelligence?
%% what is intelligence? Why is human intelligence more interesting than
%% animal intelligence?
%%
%% Outline:
%% 1. Overview
%% 2. History of AI and Computing, Turing, VonNue, etc
%%    History with ALife, Langton, GameOfLife
%%
%%    Charles Darwin
%%    1937 Alan Turing
%%    1945 John von Neumann
%%    Wolfram
%%    1970 Cellular automaton, John Conway
%%    1986 Christopher Langton
%%    Marvin Minsky
%%
%% 3. Basic biology and concepts
%%    What is DNA?
%%    What is the mitochondira?
%%    What is RNA?
%%    What is life? Why is it interesting?
%%
%% 4.
%%    Modeling the biology (artificial life, etc), biology computing, alife
%%    Overview of the demo application, object model
%%    DNA
%%    More detail and code.
%%    Analyzing results
%% 7. Java and Scala Swing /
%% 9. Summary and Future Direction
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\tableofcontents

% Basic overview and introduction
\include{overview}

% Basic biology concepts
\include{basic_biology}

% History of AI and ALife other key ai evolution people
\include{history}

% ALife, AI
\include{biology_computing}

% Our experiment
\include{object_model}

% Appendix, running the simulation.
% Configuration, building, etc
\include{java_scala}

\include{summary}

% Figure
\begin{figure}
\centerline{\includegraphics{moreScreenShotThruDemo}}
\caption{Code before preprocessing.}
\label{fig:one}
\end{figure}

\section{MMSN Protocol}

\subsection{Frequency Assignment}

We propose a suboptimal distribution to be used by each node, which is easy to compute and does not depend on the number of competing nodes. A natural candidate is an increasing geometric sequence, in which
\begin{equation}
\label{eqn:01}
P(t)=\frac{b^{\frac{t+1}{T+1}}-b^{\frac{t}{T+1}}}{b-1},
\end{equation}
where $t=0,{\ldots}\,,T$, $b$ is a number greater than $1$, and $P(t)$ denotes the probability of choosing back-off time slice $t$.

In our algorithm, we use the suboptimal approach for simplicity and generality. We need to make the distribution of the selected back-off time slice at each node conform to what is shown in Equation (\ref{eqn:01}). It is implemented as follows: first, a random variable $\alpha$ with a uniform distribution within the interval $(0, 1)$ is generated on each node; then time slice $i$ is selected according to the following equation:
\begin{equation}
i=\left\lfloor (T+1)\log_{b}[\alpha(b-1)+1]\right\rfloor ,
\end{equation}
which inverts the cumulative distribution of Equation (\ref{eqn:01}) at $\alpha$.

Protocols [\citeNP{Bahl-02,Culler-01,Zhou-06,Adya-01,Culler-01}; \citeNP{Tzamaloukas-01}; \citeNP{Akyildiz-01}] that use RTS/CTS controls\footnote{RTS/CTS controls are required to be implemented by 802.11-compliant devices. They can be used as an optional mechanism to avoid Hidden Terminal Problems in the 802.11 standard and protocols based on those similar to \citeN{Akyildiz-01} and \citeN{Adya-01}.} for frequency negotiation and reservation are not suitable for WSN applications, even though they exhibit good performance in general wireless ad hoc networks.

% Head 3
\subsubsection{Exclusive Frequency Assignment}

In exclusive frequency assignment, nodes first exchange their IDs among two communication hops so that each node knows its two-hop neighbors' IDs.

% Head 4
\paragraph{Eavesdropping}

Even though the even selection scheme leads to even sharing of available frequencies among any two-hop neighborhood, it involves a number of two-hop broadcasts.

\subsection{Basic Notations}

As Algorithm~\ref{alg:one} states, for each frequency number, each node calculates a random number (${\textit{Rnd}}_{\alpha}$) for itself and a random number (${\textit{Rnd}}_{\beta}$) for each of its two-hop neighbors with the same pseudorandom number generator.

Bus masters are divided into two disjoint sets, $\mathcal{M}_{RT}$ and $\mathcal{M}_{NRT}$.

% description
\begin{description}
\item[RT Masters] $\mathcal{M}_{RT}=\{ \vec{m}_{1},\dots,\vec{m}_{n}\}$ denotes the $n$ RT masters issuing real-time constrained requests. To model the current request issued by an $\vec{m}_{i}$ in $\mathcal{M}_{RT}$, three parameters---the recurrence time $(r_i)$, the service cycle $(c_i)$, and the relative deadline $(d_i)$---are used.
\item[NRT Masters] $\mathcal{M}_{NRT}=\{ \vec{m}_{n+1},\dots,\vec{m}_{n+m}\}$ is a set of $m$ masters issuing nonreal-time constrained requests.
In our model, each $\vec{m}_{j}$ in $\mathcal{M}_{NRT}$ needs only one parameter, the service cycle, to model the current request it issues.
\end{description}

Here a question may arise: since each node has a global ID, why don't we just map nodes' IDs within two hops into a group of frequency numbers and assign those numbers to all nodes within two hops?

\section{Simulator}
\label{sec:sim}

If the model checker requests successors of a state which are not created yet, the state space uses the simulator to create the successors on-the-fly.

\subsection{Problem Formulation}

The objective of variable coalescence-based offset assignment is to find both the coalescence scheme and the MWPC on the coalesced graph. We start with a few definitions and lemmas for variable coalescence.

% Enunciations
\begin{definition}[Coalesced Node (C-Node)]A C-node is a set of live ranges (webs) in the AG or IG that are coalesced. Nodes within the same C-node cannot interfere with each other on the IG. Before any coalescing is done, each live range is a C-node by itself.
\end{definition}

\begin{definition}[C-AG (Coalesced Access Graph)]The C-AG is the access graph after node coalescence, which is composed of all C-nodes and C-edges.
\end{definition}

\begin{lemma}
The C-MWPC problem is NP-complete.
\end{lemma}

\begin{proof}
C-MWPC can be easily reduced to the MWPC problem assuming a coalescence graph without any edge or a fully connected interference graph. Therefore, each C-node is an uncoalesced live range after value separation and C-PC is equivalent to PC. A fully connected interference graph is made possible when all live ranges interfere with each other. Thus, the C-MWPC problem is NP-complete.
\end{proof}

\begin{lemma}[Lemma Subhead]
The solution to the C-MWPC problem is no worse than the solution to the MWPC.
\end{lemma}

\begin{proof}
Simply, any solution to the MWPC is also a solution to the C-MWPC. But some solutions to C-MWPC may not apply to the MWPC (if any coalescing were made).
\end{proof}

\section{Performance Evaluation}

During all the experiments, the Geographic Forwarding (GF) \cite{Akyildiz-01} routing protocol is used. GF exploits geographic information of nodes and conducts local data-forwarding to achieve end-to-end routing. Our simulation is configured according to the settings in Table~\ref{tab:one}. Each run lasts for 2 minutes and is repeated 100 times. For each data value we present in the results, we also give its 90\% confidence interval.

\section{Conclusions}

In this article, we develop the first multifrequency MAC protocol for WSN applications in which each device adopts a single radio transceiver. The different MAC design requirements for WSNs and general wireless ad-hoc networks are compared, and a complete WSN multifrequency MAC design (MMSN) is put forth. During the MMSN design, we analyze and evaluate different choices for frequency assignments and also discuss the nonuniform back-off algorithms for the slotted media access design.

% Appendix
\appendix
\section*{APPENDIX}
\setcounter{section}{1}

In this appendix, we measure the channel switching time of Micaz \cite{CROSSBOW} sensor devices. In our experiments, one mote alternately switches between Channels 11 and 12. Every time the node switches to a channel, it sends out a packet immediately and then changes to a new channel as soon as the transmission is finished. We measure the number of packets the test mote can send in 10 seconds, denoted as $N_{1}$.
In contrast, we also measure the same value of the test mote without switching channels, denoted as $N_{2}$. We calculate the channel-switching time $s$ as \begin{eqnarray}% s=\frac{10}{N_{1}}-\frac{10}{N_{2}}. \nonumber \end{eqnarray}% By repeating the experiments 100 times, we get the average channel-switching time of Micaz motes: 24.3$\mu$s. \appendixhead{ZHOU} % Acknowledgments \begin{acks} The authors would like to thank Dr. Maura Turolla of Telecom Italia for providing specifications about the application scenario. \end{acks} % Bibliography \bibliographystyle{acmsmall} \bibliography{acmsmall-sam} % History dates \received{March 2011}{March 2011}{March 2011} % Electronic Appendix \elecappendix \medskip \section{This is an example of Appendix section head} By repeating experiments 100 times, we get the average channel-switching time of Micaz motes: 24.3 $\mu$s. We then conduct the same experiments with different Micaz motes, as well as experiments with the transmitter switching from Channel 11 to other channels. In both scenarios, the channel-switching time does not have obvious changes. (In our experiments, all values are in the range of 23.6 $\mu$s to 24.9 $\mu$s.) \section{Appendix section head} The primary consumer of energy in WSNs is idle listening. The key to reduce idle listening is executing low duty-cycle on nodes. Two primary approaches are considered in controlling duty-cycles in the MAC layer. \end{document}
\section*{Part 1: Question (g)}

\textbf{Question:} Next, go to \verb|lab_09_script.m| and create a vector named \verb|n|: \verb|n = [9, 11, 13, 15]|. Use a for-loop to call \verb|lab_09_function|, passing each entry of \verb|n| as the input argument. What do you notice about the results?

\textbf{Answer:}
% PUT YOUR ANSWER HERE
As we can see from the output, the condition number increases as $n$ increases. In other words, as the linear system becomes more ill-conditioned (i.e., the condition number grows), it becomes harder to solve accurately and the error in the computed solution grows.

\newpage
\section*{Script and Output}
\subsection*{Output file: \lstinline[style=Plain]{lab_09_output.txt}}
\lstinputlisting[style=Plain]{../src/lab_09_output.txt}
\newpage
\subsection*{Script file: \lstinline[style=Plain]{lab_09_script.m}}
\lstinputlisting[style=MATLAB]{../src/lab_09_script.m}
\newpage
\subsection*{Function file: \lstinline[style=Plain]{lab_09_function.m}}
\lstinputlisting[style=MATLAB]{../src/lab_09_function.m}
% Sidebar about panic:
% panic is the kernel's last resort: the impossible has happened and the
% kernel does not know how to proceed. In xv6, panic does ...

\chapter{Traps and system calls}
\label{CH:TRAP}

There are three kinds of event which cause the CPU to set aside ordinary execution of instructions and force a transfer of control to special code that handles the event. One situation is a system call, when a user program executes the {\tt ecall} instruction to ask the kernel to do something for it. Another situation is an \indextext{exception}: an instruction (user or kernel) does something illegal, such as divide by zero or use an invalid virtual address. The third situation is a device \indextext{interrupt}, when a device signals that it needs attention, for example when the disk hardware finishes a read or write request.

This book uses \indextext{trap} as a generic term for these situations. Typically whatever code was executing at the time of the trap will later need to resume, and shouldn't need to be aware that anything special happened. That is, we often want traps to be transparent; this is particularly important for device interrupts, which the interrupted code typically doesn't expect. The usual sequence is that a trap forces a transfer of control into the kernel; the kernel saves registers and other state so that execution can be resumed; the kernel executes appropriate handler code (e.g., a system call implementation or device driver); the kernel restores the saved state and returns from the trap; and the original code resumes where it left off.

Xv6 handles all traps in the kernel; traps are not delivered to user code. Handling traps in the kernel is natural for system calls. It makes sense for interrupts since isolation demands that only the kernel be allowed to use devices, and because the kernel is a convenient mechanism with which to share devices among multiple processes. It also makes sense for exceptions since xv6 responds to all exceptions from user space by killing the offending program.

Xv6 trap handling proceeds in four stages: hardware actions taken by the RISC-V CPU, some assembly instructions that prepare the way for kernel C code, a C function that decides what to do with the trap, and the system call or device-driver service routine. While commonality among the three trap types suggests that a kernel could handle all traps with a single code path, it turns out to be convenient to have separate code for three distinct cases: traps from user space, traps from kernel space, and timer interrupts. Kernel code (assembler or C) that processes a trap is often called a \indextext{handler}; the first handler instructions are usually written in assembler (rather than C) and are sometimes called a \indextext{vector}.

\section{RISC-V trap machinery}

Each RISC-V CPU has a set of control registers that the kernel writes to tell the CPU how to handle traps, and that the kernel can read to find out about a trap that has occurred. The RISC-V documents contain the full story~\cite{riscv:priv}. {\tt riscv.h} \lineref{kernel/riscv.h:1} contains definitions that xv6 uses. Here's an outline of the most important registers:

\begin{itemize}

\item \indexcode{stvec}: The kernel writes the address of its trap handler here; the RISC-V jumps to the address in {\tt stvec} to handle a trap.

\item \indexcode{sepc}: When a trap occurs, RISC-V saves the program counter here (since the {\tt pc} is then overwritten with the value in {\tt stvec}).
The {\tt sret} (return from trap) instruction copies {\tt sepc} to the {\tt pc}. The kernel can write {\tt sepc} to control where {\tt sret} goes. \item \indexcode{scause}: RISC-V puts a number here that describes the reason for the trap. \item \indexcode{sscratch}: The kernel places a value here that comes in handy at the very start of a trap handler. \item \indexcode{sstatus}: The SIE bit in {\tt sstatus} controls whether device interrupts are enabled. If the kernel clears SIE, the RISC-V will defer device interrupts until the kernel sets SIE. The SPP bit indicates whether a trap came from user mode or supervisor mode, and controls to what mode {\tt sret} returns. \end{itemize} The above registers relate to traps handled in supervisor mode, and they cannot be read or written in user mode. There is a similar set of control registers for traps handled in machine mode; xv6 uses them only for the special case of timer interrupts. Each CPU on a multi-core chip has its own set of these registers, and more than one CPU may be handling a trap at any given time. When it needs to force a trap, the RISC-V hardware does the following for all trap types (other than timer interrupts): \begin{enumerate} \item If the trap is a device interrupt, and the {\tt sstatus} SIE bit is clear, don't do any of the following. \item Disable interrupts by clearing the SIE bit in {\tt sstatus}. \item Copy the {\tt pc} to {\tt sepc}. \item Save the current mode (user or supervisor) in the SPP bit in {\tt sstatus}. \item Set {\tt scause} to reflect the trap's cause. \item Set the mode to supervisor. \item Copy {\tt stvec} to the {\tt pc}. \item Start executing at the new {\tt pc}. \end{enumerate} Note that the CPU doesn't switch to the kernel page table, doesn't switch to a stack in the kernel, and doesn't save any registers other than the {\tt pc}. Kernel software must perform these tasks. One reason that the CPU does minimal work during a trap is to provide flexibility to software; for example, some operating systems omit a page table switch in some situations to increase trap performance. It's worth thinking about whether any of the steps listed above could be omitted, perhaps in search of faster traps. Though there are situations in which a simpler sequence can work, many of the steps would be dangerous to omit in general. For example, suppose that the CPU didn't switch program counters. Then a trap from user space could switch to supervisor mode while still running user instructions. Those user instructions could break user/kernel isolation, for example by modifying the {\tt satp} register to point to a page table that allowed accessing all of physical memory. It is thus important that the CPU switch to a kernel-specified instruction address, namely {\tt stvec}. \section{Traps from user space} Xv6 handles traps differently depending on whether it is executing in the kernel or in user code. Here is the story for traps from user code; Section~\ref{s:ktraps} describes traps from kernel code. A trap may occur while executing in user space if the user program makes a system call ({\tt ecall} instruction), or does something illegal, or if a device interrupts. The high-level path of a trap from user space is {\tt uservec} \lineref{kernel/trampoline.S:/^uservec/}, then {\tt usertrap} \lineref{kernel/trap.c:/^usertrap/}; and when returning, {\tt usertrapret} \lineref{kernel/trap.c:/^usertrapret/} and then {\tt userret} \lineref{kernel/trampoline.S:/^userret/}. 
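In kernel C code, these control registers are read and written with the RISC-V {\tt csrr} and {\tt csrw} instructions, typically wrapped in small inline-assembly helper functions in the style of the definitions in {\tt riscv.h}. The sketch below is illustrative rather than a verbatim copy of xv6's headers; the helper names and the local {\tt uint64} typedef are assumptions made only to keep the example self-contained:

\begin{lstlisting}
// Sketch of CSR accessor helpers in the style of kernel/riscv.h.
// Each helper wraps a single csrr (read) or csrw (write) instruction.
typedef unsigned long uint64;   // 64-bit on RV64; xv6 defines this in types.h

static inline uint64
r_scause(void)                  // why did the trap happen?
{
  uint64 x;
  asm volatile("csrr %0, scause" : "=r" (x));
  return x;
}

static inline uint64
r_sepc(void)                    // user pc saved by the hardware
{
  uint64 x;
  asm volatile("csrr %0, sepc" : "=r" (x));
  return x;
}

static inline void
w_stvec(uint64 x)               // install the trap handler address
{
  asm volatile("csrw stvec, %0" : : "r" (x));
}
\end{lstlisting}

The trap-handling functions discussed below rely on helpers of this kind when they save {\tt sepc} or switch {\tt stvec}.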
% talk about why RISC-V doesn't switch page tables A major constraint on the design of xv6's trap handling is the fact that the RISC-V hardware does not switch page tables when it forces a trap. This means that the trap handler address in {\tt stvec} must have a valid mapping in the user page table, since that's the page table in force when the trap handling code starts executing. Furthermore, xv6's trap handling code needs to switch to the kernel page table; in order to be able to continue executing after that switch, the kernel page table must also have a mapping for the handler pointed to by {\tt stvec}. Xv6 satisfies these requirements using a \indextext{trampoline} page. The trampoline page contains {\tt uservec}, the xv6 trap handling code that {\tt stvec} points to. The trampoline page is mapped in every process's page table at address \indexcode{TRAMPOLINE}, which is at the end of the virtual address space so that it will be above memory that programs use for themselves. The trampoline page is also mapped at address {\tt TRAMPOLINE} in the kernel page table. See Figure~\ref{fig:as} and Figure~\ref{fig:xv6_layout}. Because the trampoline page is mapped in the user page table, with the {\tt PTE\_U} flag, traps can start executing there in supervisor mode. Because the trampoline page is mapped at the same address in the kernel address space, the trap handler can continue to execute after it switches to the kernel page table. The code for the {\tt uservec} trap handler is in {\tt trampoline.S} \lineref{kernel/trampoline.S:/^uservec/}. When {\tt uservec} starts, all 32 registers contain values owned by the interrupted user code. These 32 values need to be saved somewhere in memory, so that they can be restored when the trap returns to user space. Storing to memory requires use of a register to hold the address, but at this point there are no general-purpose registers available! Luckily RISC-V provides a helping hand in the form of the {\tt sscratch} register. The {\tt csrrw} instruction at the start of {\tt uservec} swaps the contents of {\tt a0} and {\tt sscratch}. Now the user code's {\tt a0} is saved in {\tt sscratch}; {\tt uservec} has one register ({\tt a0}) to play with; and {\tt a0} contains the value the kernel previously placed in {\tt sscratch}. {\tt uservec}'s next task is to save the 32 user registers. Before entering user space, the kernel set {\tt sscratch} to point to a per-process {\tt trapframe} structure that (among other things) has space to save the 32 user registers \lineref{kernel/proc.h:/^struct.trapframe/}. Because {\tt satp} still refers to the user page table, {\tt uservec} needs the trapframe to be mapped in the user address space. When creating each process, xv6 allocates a page for the process's trapframe, and arranges for it always to be mapped at user virtual address {\tt TRAPFRAME}, which is just below {\tt TRAMPOLINE}. The process's {\tt p->trapframe} also points to the trapframe, though at its physical address so the kernel can use it through the kernel page table. Thus after swapping {\tt a0} and {\tt sscratch}, {\tt a0} holds a pointer to the current process's trapframe. {\tt uservec} now saves all user registers there, including the user's {\tt a0}, read from {\tt sscratch}. The {\tt trapframe} contains the address of the current process's kernel stack, the current CPU's hartid, the address of the {\tt usertrap} function, and the address of the kernel page table. 
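In C terms, the trapframe looks roughly like the following abridged sketch; the full structure, with one slot for every user register, is defined in {\tt kernel/proc.h}:

\begin{lstlisting}
// Abridged sketch of the per-process trapframe.  The first fields hold
// values the kernel prepares for uservec; the rest save user registers.
struct trapframe {
  uint64 kernel_satp;    // kernel page table, to be loaded into satp
  uint64 kernel_sp;      // top of this process's kernel stack
  uint64 kernel_trap;    // address of usertrap()
  uint64 epc;            // saved user program counter
  uint64 kernel_hartid;  // saved CPU hartid
  uint64 ra;             // saved user registers start here ...
  uint64 sp;
  // ... one uint64 slot for each remaining user register
  //     (gp, tp, t0-t6, s0-s11, a0-a7), omitted in this sketch
};
\end{lstlisting}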
{\tt uservec} retrieves these values, switches {\tt satp} to the kernel page table, and calls {\tt usertrap}. The job of {\tt usertrap} is to determine the cause of the trap, process it, and return \lineref{kernel/trap.c:/^usertrap/}. It first changes {\tt stvec} so that a trap while in the kernel will be handled by {\tt kernelvec} rather than {\tt uservec}. It saves the {\tt sepc} register (the saved user program counter), because {\tt usertrap} might call \lstinline{yield} to switch to another process's kernel thread, and that process might return to user space, in the process of which it will modify \lstinline{sepc}. If the trap is a system call, {\tt usertrap} calls {\tt syscall} to handle it; if a device interrupt, {\tt devintr}; otherwise it's an exception, and the kernel kills the faulting process. The system call path adds four to the saved user program counter because RISC-V, in the case of a system call, leaves the program pointer pointing to the {\tt ecall} instruction but user code needs to resume executing at the subsequent instruction. On the way out, {\tt usertrap} checks if the process has been killed or should yield the CPU (if this trap is a timer interrupt). The first step in returning to user space is the call to {\tt usertrapret} \lineref{kernel/trap.c:/^usertrapret/}. This function sets up the RISC-V control registers to prepare for a future trap from user space. This involves changing {\tt stvec} to refer to {\tt uservec}, preparing the trapframe fields that {\tt uservec} relies on, and setting {\tt sepc} to the previously saved user program counter. At the end, {\tt usertrapret} calls {\tt userret} on the trampoline page that is mapped in both user and kernel page tables; the reason is that assembly code in {\tt userret} will switch page tables. {\tt usertrapret}'s call to {\tt userret} passes a pointer to the process's user page table in {\tt a0} and {\tt TRAPFRAME} in {\tt a1} \lineref{kernel/trampoline.S:/^userret/}. {\tt userret} switches {\tt satp} to the process's user page table. Recall that the user page table maps both the trampoline page and {\tt TRAPFRAME}, but nothing else from the kernel. The fact that the trampoline page is mapped at the same virtual address in user and kernel page tables is what allows {\tt uservec} to keep executing after changing {\tt satp}. {\tt userret} copies the trapframe's saved user {\tt a0} to {\tt sscratch} in preparation for a later swap with TRAPFRAME. From this point on, the only data {\tt userret} can use is the register contents and the content of the trapframe. Next {\tt userret} restores saved user registers from the trapframe, does a final swap of {\tt a0} and {\tt sscratch} to restore the user {\tt a0} and save {\tt TRAPFRAME} for the next trap, and executes {\tt sret} to return to user space. \section{Code: Calling system calls} Chapter~\ref{CH:FIRST} ended with \indexcode{initcode.S} invoking the {\tt exec} system call \lineref{user/initcode.S:/SYS_exec/}. Let's look at how the user call makes its way to the {\tt exec} system call's implementation in the kernel. {\tt initcode.S} places the arguments for \indexcode{exec} in registers {\tt a0} and {\tt a1}, and puts the system call number in \texttt{a7}. System call numbers match the entries in the {\tt syscalls} array, a table of function pointers \lineref{kernel/syscall.c:/syscalls/}. The \lstinline{ecall} instruction traps into the kernel and causes {\tt uservec}, {\tt usertrap}, and then {\tt syscall} to execute, as we saw above. 
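xv6's user-level system call stubs are short assembly sequences, but the calling convention is easy to express in C. The sketch below is purely illustrative (the name {\tt syscall1} and the use of inline assembly are not part of xv6); it shows the essential steps: the argument in {\tt a0}, the system call number in {\tt a7}, then {\tt ecall}, with the kernel's return value coming back in {\tt a0}:

\begin{lstlisting}
// Illustrative C version of a one-argument system call stub.
static inline long
syscall1(long num, long arg0)
{
  register long a0 asm("a0") = arg0;   // first argument; reused for the result
  register long a7 asm("a7") = num;    // system call number, e.g. SYS_exec
  asm volatile("ecall"
               : "+r" (a0)             // a0 is read and then overwritten
               : "r" (a7)
               : "memory");
  return a0;                           // the kernel's return value
}
\end{lstlisting}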
\indexcode{syscall} \lineref{kernel/syscall.c:/^syscall/} retrieves the system call number from the saved \texttt{a7} in the trapframe and uses it to index into {\tt syscalls}. For the first system call, \texttt{a7} contains \indexcode{SYS_exec} \lineref{kernel/syscall.h:/SYS_exec/}, resulting in a call to the system call implementation function \lstinline{sys_exec}. When \lstinline{sys_exec} returns, \lstinline{syscall} records its return value in \lstinline{p->trapframe->a0}. This will cause the original user-space call to {\tt exec()} to return that value, since the C calling convention on RISC-V places return values in {\tt a0}. System calls conventionally return negative numbers to indicate errors, and zero or positive numbers for success. If the system call number is invalid, \lstinline{syscall} prints an error and returns $-1$. \section{Code: System call arguments} System call implementations in the kernel need to find the arguments passed by user code. Because user code calls system call wrapper functions, the arguments are initially where the RISC-V C calling convention places them: in registers. The kernel trap code saves user registers to the current process's trap frame, where kernel code can find them. The kernel functions \lstinline{argint}, \lstinline{argaddr}, and \lstinline{argfd} retrieve the \textit{n} 'th system call argument from the trap frame as an integer, pointer, or a file descriptor. They all call {\tt argraw} to retrieve the appropriate saved user register \lineref{kernel/syscall.c:/^argraw/}. Some system calls pass pointers as arguments, and the kernel must use those pointers to read or write user memory. The {\tt exec} system call, for example, passes the kernel an array of pointers referring to string arguments in user space. These pointers pose two challenges. First, the user program may be buggy or malicious, and may pass the kernel an invalid pointer or a pointer intended to trick the kernel into accessing kernel memory instead of user memory. Second, the xv6 kernel page table mappings are not the same as the user page table mappings, so the kernel cannot use ordinary instructions to load or store from user-supplied addresses. The kernel implements functions that safely transfer data to and from user-supplied addresses. {\tt fetchstr} is an example \lineref{kernel/syscall.c:/^fetchstr/}. File system calls such as {\tt exec} use {\tt fetchstr} to retrieve string file-name arguments from user space. \lstinline{fetchstr} calls \lstinline{copyinstr} to do the hard work. \indexcode{copyinstr} \lineref{kernel/vm.c:/^copyinstr/} copies up to \lstinline{max} bytes to \lstinline{dst} from virtual address \lstinline{srcva} in the user page table \lstinline{pagetable}. Since \lstinline{pagetable} is {\it not} the current page table, \lstinline{copyinstr} uses {\tt walkaddr} (which calls {\tt walk}) to look up \lstinline{srcva} in \lstinline{pagetable}, yielding physical address \lstinline{pa0}. The kernel maps each physical RAM address to the corresponding kernel virtual address, so {\tt copyinstr} can directly copy string bytes from {\tt pa0} to {\tt dst}. {\tt walkaddr} \lineref{kernel/vm.c:/^walkaddr/} checks that the user-supplied virtual address is part of the process's user address space, so programs cannot trick the kernel into reading other memory. A similar function, {\tt copyout}, copies data from the kernel to a user-supplied address. 
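The page-by-page structure of {\tt copyinstr} can be summarized with the sketch below. It assumes the {\tt walkaddr} function described above together with the usual xv6 page-size macros ({\tt PGSIZE}, {\tt PGROUNDDOWN}), and it simplifies some details of the real function in {\tt kernel/vm.c}:

\begin{lstlisting}
// Simplified sketch of the copyinstr() idea: copy a NUL-terminated string
// of at most max bytes from user virtual address srcva (translated through
// the given user page table) into the kernel buffer dst.
// Returns 0 on success, -1 on a bad address or a missing terminator.
int
copyinstr_sketch(pagetable_t pagetable, char *dst, uint64 srcva, uint64 max)
{
  while(max > 0){
    uint64 va0 = PGROUNDDOWN(srcva);        // page containing srcva
    uint64 pa0 = walkaddr(pagetable, va0);  // its physical address, or 0
    if(pa0 == 0)
      return -1;                            // not a valid user address

    uint64 n = PGSIZE - (srcva - va0);      // bytes left on this page
    if(n > max)
      n = max;

    char *p = (char *) (pa0 + (srcva - va0));
    while(n > 0){
      *dst = *p;
      if(*p == '\0')
        return 0;                           // copied the terminator; done
      n--; max--; p++; dst++;
    }
    srcva = va0 + PGSIZE;                   // continue on the next page
  }
  return -1;                                // ran out of room before a NUL
}
\end{lstlisting}

Because the kernel maps all of physical RAM at known virtual addresses, the physical address {\tt pa0} can be dereferenced directly once {\tt walkaddr} has validated the user mapping.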
\section{Traps from kernel space}
\label{s:ktraps}

Xv6 configures the CPU trap registers somewhat differently depending on whether user or kernel code is executing. When the kernel is executing on a CPU, the kernel points {\tt stvec} to the assembly code at {\tt kernelvec} \lineref{kernel/kernelvec.S:/^kernelvec/}. Since xv6 is already in the kernel, {\tt kernelvec} can rely on {\tt satp} being set to the kernel page table, and on the stack pointer referring to a valid kernel stack. {\tt kernelvec} pushes all 32 registers onto the stack, from which it will later restore them so that the interrupted kernel code can resume without disturbance.

{\tt kernelvec} saves the registers on the stack of the interrupted kernel thread, which makes sense because the register values belong to that thread. This is particularly important if the trap causes a switch to a different thread -- in that case the trap will actually return from the stack of the new thread, leaving the interrupted thread's saved registers safely on its stack.

{\tt kernelvec} jumps to {\tt kerneltrap} \lineref{kernel/trap.c:/^kerneltrap/} after saving registers. {\tt kerneltrap} is prepared for two types of traps: device interrupts and exceptions. It calls {\tt devintr} \lineref{kernel/trap.c:/^devintr/} to check for and handle the former. If the trap isn't a device interrupt, it must be an exception, and that is always a fatal error if it occurs in the xv6 kernel; the kernel calls \lstinline{panic} and stops executing.

If {\tt kerneltrap} was called due to a timer interrupt, and a process's kernel thread is running (as opposed to a scheduler thread), {\tt kerneltrap} calls {\tt yield} to give other threads a chance to run. At some point one of those threads will yield, and let our thread and its {\tt kerneltrap} resume again. Chapter~\ref{CH:SCHED} explains what happens in {\tt yield}.

When {\tt kerneltrap}'s work is done, it needs to return to whatever code was interrupted by the trap. Because a {\tt yield} may have disturbed {\tt sepc} and the previous mode in {\tt sstatus}, {\tt kerneltrap} saves them when it starts. It now restores those control registers and returns to {\tt kernelvec} \lineref{kernel/kernelvec.S:/call.kerneltrap$/}. {\tt kernelvec} pops the saved registers from the stack and executes {\tt sret}, which copies {\tt sepc} to {\tt pc} and resumes the interrupted kernel code.

It's worth thinking through how the trap return happens if {\tt kerneltrap} called {\tt yield} due to a timer interrupt.

Xv6 sets a CPU's {\tt stvec} to {\tt kernelvec} when that CPU enters the kernel from user space; you can see this in {\tt usertrap} \lineref{kernel/trap.c:/stvec.*kernelvec/}. There's a window of time when the kernel has started executing but {\tt stvec} is still set to {\tt uservec}, and it's crucial that no device interrupt occur during that window. Luckily the RISC-V always disables interrupts when it starts to take a trap, and xv6 doesn't enable them again until after it sets {\tt stvec}.

\section{Page-fault exceptions}
\label{sec:pagefaults}

Xv6's response to exceptions is quite boring: if an exception happens in user space, the kernel kills the faulting process. If an exception happens in the kernel, the kernel panics. Real operating systems often respond in much more interesting ways.

As an example, many kernels use page faults to implement \indextext{copy-on-write (COW) fork}. To explain copy-on-write fork, consider xv6's \lstinline{fork}, described in Chapter~\ref{CH:MEM}.
\lstinline{fork} causes the child's initial memory content to be the same as the parent's at the time of the fork. xv6 implements fork with \lstinline{uvmcopy} \lineref{kernel/vm.c:/^uvmcopy/}, which allocates physical memory for the child and copies the parent's memory into it. It would be more efficient if the child and parent could share the parent's physical memory. A straightforward implementation of this would not work, however, since it would cause the parent and child to disrupt each other's execution with their writes to the shared stack and heap. Parent and child can safely share physical memory by appropriate use of page-table permissions and page faults. The CPU raises a \indextext{page-fault exception} when a virtual address is used that has no mapping in the page table, or has a mapping whose \lstinline{PTE_V} flag is clear, or a mapping whose permission bits (\lstinline{PTE_R}, \lstinline{PTE_W}, \lstinline{PTE_X}, \lstinline{PTE_U}) forbid the operation being attempted. RISC-V distinguishes three kinds of page fault: load page faults (when a load instruction cannot translate its virtual address), store page faults (when a store instruction cannot translate its virtual address), and instruction page faults (when the address in the program counter doesn't translate). The \lstinline{scause} register indicates the type of the page fault and the \indexcode{stval} register contains the address that couldn't be translated. The basic plan in COW fork is for the parent and child to initially share all physical pages, but for each to map them read-only (with the \lstinline{PTE_W} flag clear). Parent and child can read from the shared physical memory. If either writes a given page, the RISC-V CPU raises a page-fault exception. The kernel's trap handler responds by allocating a new page of physical memory and copying into it the physical page that the faulted address maps to. The kernel changes the relevant PTE in the faulting process's page table to point to the copy and to allow writes as well as reads, and then resumes the faulting process at the instruction that caused the fault. Because the PTE allows writes, the re-executed instruction will now execute without a fault. Copy-on-write requires book-keeping to help decide when physical pages can be freed, since each page can be referenced by a varying number of page tables depending on the history of forks, page faults, execs, and exits. This book-keeping allows an important optimization: if a process incurs a store page fault and the physical page is only referred to from that process's page table, no copy is needed. Copy-on-write makes \lstinline{fork} faster, since \lstinline{fork} need not copy memory. Some of the memory will have to be copied later, when written, but it's often the case that most of the memory never has to be copied. A common example is \lstinline{fork} followed by \lstinline{exec}: a few pages may be written after the \lstinline{fork}, but then the child's \lstinline{exec} releases the bulk of the memory inherited from the parent. Copy-on-write \lstinline{fork} eliminates the need to ever copy this memory. Furthermore, COW fork is transparent: no modifications to applications are necessary for them to benefit. The combination of page tables and page faults opens up a wide range of interesting possibilities in addition to COW fork. Another widely-used feature is called \indextext{lazy allocation}, which has two parts. 
First, when an application asks for more memory by calling \lstinline{sbrk}, the kernel notes the increase in size, but does not allocate physical memory and does not create PTEs for the new range of virtual addresses. Second, on a page fault on one of those new addresses, the kernel allocates a page of physical memory and maps it into the page table. Like COW fork, the kernel can implement lazy allocation transparently to applications.

Since applications often ask for more memory than they need, lazy allocation is a win: the kernel doesn't have to do any work at all for pages that the application never uses. Furthermore, if the application is asking to grow the address space by a lot, then \lstinline{sbrk} without lazy allocation is expensive: if an application asks for a gigabyte of memory, the kernel has to allocate and zero 262,144 4096-byte pages. Lazy allocation allows this cost to be spread over time. On the other hand, lazy allocation incurs the extra overhead of page faults, which involve a kernel/user transition. Operating systems can reduce this cost by allocating a batch of consecutive pages per page fault instead of one page and by specializing the kernel entry/exit code for such page-faults.

Yet another widely-used feature that exploits page faults is \indextext{demand paging}. In \lstinline{exec}, xv6 loads all text and data of an application eagerly into memory. Since applications can be large and reading from disk is expensive, this startup cost may be noticeable to users: when the user starts a large application from the shell, it may take a long time before the user sees a response. To improve response time, a modern kernel creates the page table for the user address space, but marks the PTEs for the pages invalid. On a page fault, the kernel reads the content of the page from disk and maps it into the user address space. Like COW fork and lazy allocation, the kernel can implement this feature transparently to applications.

The programs running on a computer may need more memory than the computer has RAM. To cope gracefully, the operating system may implement \indextext{paging to disk}. The idea is to store only a fraction of user pages in RAM, and to store the rest on disk in a \indextext{paging area}. The kernel marks PTEs that correspond to memory stored in the paging area (and thus not in RAM) as invalid. If an application tries to use one of the pages that has been {\it paged out} to disk, the application will incur a page fault, and the page must be {\it paged in}: the kernel trap handler will allocate a page of physical RAM, read the page from disk into the RAM, and modify the relevant PTE to point to the RAM.

What happens if a page needs to be paged in, but there is no free physical RAM? In that case, the kernel must first free a physical page by paging it out or {\it evicting} it to the paging area on disk, and marking the PTEs referring to that physical page as invalid. Eviction is expensive, so paging performs best if it's infrequent: if applications use only a subset of their memory pages and the union of the subsets fits in RAM. This property is often referred to as having good locality of reference. As with many virtual memory techniques, kernels usually implement paging to disk in a way that's transparent to applications.

Computers often operate with little or no {\it free} physical memory, regardless of how much RAM the hardware provides. For example, cloud providers multiplex many customers on a single machine to use their hardware cost-effectively.
As another example, users run many applications on smart phones in a small amount of physical memory. In such settings allocating a page may require first evicting an existing page. Thus, when free physical memory is scarce, allocation is expensive. Lazy allocation and demand paging are particularly advantageous when free memory is scarce. Eagerly allocating memory in \lstinline{sbrk} or \lstinline{exec} incurs the extra cost of eviction to make memory available. Furthermore, there is a risk that the eager work is wasted, because before the application uses the page, the operating system may have evicted it. Other features that combine paging and page-fault exceptions include automatically extending stacks and memory-mapped files. \section{Real world} The trampoline and trapframe may seem excessively complex. A driving force is that the RISC-V intentionally does as little as it can when forcing a trap, to allow the possibility of very fast trap handling, which turns out to be important. As a result, the first few instructions of the kernel trap handler effectively have to execute in the user environment: the user page table, and user register contents. And the trap handler is initially ignorant of useful facts such as the identity of the process that's running or the address of the kernel page table. A solution is possible because RISC-V provides protected places in which the kernel can stash away information before entering user space: the {\tt sscratch} register, and user page table entries that point to kernel memory but are protected by lack of \lstinline{PTE_U}. Xv6's trampoline and trapframe exploit these RISC-V features. The need for special trampoline pages could be eliminated if kernel memory were mapped into every process's user page table (with appropriate PTE permission flags). That would also eliminate the need for a page table switch when trapping from user space into the kernel. That in turn would allow system call implementations in the kernel to take advantage of the current process's user memory being mapped, allowing kernel code to directly dereference user pointers. Many operating systems have used these ideas to increase efficiency. Xv6 avoids them in order to reduce the chances of security bugs in the kernel due to inadvertent use of user pointers, and to reduce some complexity that would be required to ensure that user and kernel virtual addresses don't overlap. Production operating systems implement copy-on-write fork, lazy allocation, demand paging, paging to disk, memory-mapped files, etc. Furthermore, production operating systems will try to use all of physical memory, either for applications or caches (e.g., the buffer cache of the file system, which we will cover later in Section~\ref{s:bcache}). Xv6 is na\"{i}ve in this regard: you want your operating system to use the physical memory you paid for, but xv6 doesn't. Furthermore, if xv6 runs out of memory, it returns an error to the running application or kills it, instead of, for example, evicting a page of another application. \section{Exercises} \begin{enumerate} \item The functions {\tt copyin} and {\tt copyinstr} walk the user page table in software. Set up the kernel page table so that the kernel has the user program mapped, and {\tt copyin} and {\tt copyinstr} can use {\tt memcpy} to copy system call arguments into kernel space, relying on the hardware to do the page table walk. \item Implement lazy memory allocation. \item Implement COW fork. 
\item Is there a way to eliminate the special {\tt TRAPFRAME} page mapping in every user address space? For example, could {\tt uservec} be modified to simply push the 32 user registers onto the kernel stack, or store them in the {\tt proc} structure? \item Could xv6 be modified to eliminate the special {\tt TRAMPOLINE} page mapping? \end{enumerate}
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ 11pt, ]{book} \usepackage{lmodern} \usepackage{setspace} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={Selected Topics In Data Science}, pdfauthor={Bruce Campbell}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} 
\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{longtable,booktabs} % Correct order of tables after \paragraph or \subparagraph \usepackage{etoolbox} \makeatletter \patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} \makeatother % Allow footnotes in longtable head/foot \IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} \makesavenoteenv{longtable} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} \usepackage{booktabs} This work is a collection of data science, machine learning, and statistics notes taken in the last few years. No effort has been made to be complete with citations, references, or attribution. This is merely the results of lots of google queries and trials with various algorithms, software packages, and methods. \usepackage[]{natbib} \bibliographystyle{plainnat} \title{Selected Topics In Data Science} \author{Bruce Campbell} \date{2021-08-26} \begin{document} \maketitle \setstretch{1.2} \hypertarget{preface}{% \chapter{Preface}\label{preface}} This is the first installment on my promise to elucidate less popular topics in statistics and machine learning. I wrote this as a way to solidify my understanding of some of the topics that are treated here. Hopefully others will find value here. \hypertarget{intro}{% \chapter{Introduction}\label{intro}} ``Where must we go, we who wander this wasteland, in search of our better selves.'' -The First History of Man This is a living book. It's under development. We are using the \textbf{bookdown} package \citep{R-bookdown} in this book, which was built on top of R Markdown and \textbf{knitr} \citep{xie2015}. \hypertarget{data-science-best-practices-and-processes}{% \chapter{Data Science Best Practices and Processes}\label{data-science-best-practices-and-processes}} This (draft) document is meant to help onboard new data scientists and serve as a reference for current data scientists. Data science consists of a multidisciplinary field encompassing machine learning \& statistics, knowledge representation, visualization, writing, and software engineering. This document covers a variety of subjects and best practices from how to run an Agile data science process, to coding standards, and to project management. There is no right way to do data science but many ways to do it wrong. This document represents just one approach. Data science is a team sport, and consensus is the best way to define what process works best. Above all else, we're scientists, and that comes with ethical and process implications. 
Keep this in mind always.

\hypertarget{code-of-conduct}{%
\section{Code of Conduct}\label{code-of-conduct}}

We don't have a strict code of conduct here, but there are a few essential principles we hold dear:

\begin{itemize}
\tightlist
\item
  Act in good faith: we are here to accomplish several goals - deliver value to the company, one another, and ourselves. Whenever you have a choice to make about work or how you treat a colleague, ask whether, in doing so, you further these goals.
\item
  Consider things from other people's perspectives: Working on a team is hard if we expect our colleagues to do all the work of communication. Give people the benefit of the doubt when you disagree, and try to understand what they are really saying and why they might be doing so.
\item
  Create a welcoming environment: This team recognizes that racism, sexism, gender discrimination, and classism can severely impact the way people move through both life and the workplace. It is our policy to proactively address these problems by educating ourselves about them, self-monitoring, and letting our colleagues, regardless of their background, get on with work. At the same time, we actively seek to improve the work environment.
\end{itemize}

\hypertarget{ethics}{%
\section{Ethics}\label{ethics}}

Ethics is an essential concern in machine learning. Focusing on the ethical impact of a model will potentially reduce bias and improve results. Try to think through the hidden biases and potential misuses of any model you build. Think about this in model choice - try to use transparent methods like traditional statistical models and decision trees when interpretability is important.

The eight principles for ethical machine learning {[}\url{https://ethical.institute/}{]}:

\begin{itemize}
\item
  Human augmentation: assess the impact of incorrect predictions and, when reasonable, design systems with human-in-the-loop review processes.
\item
  Bias evaluation: continuously develop processes that allow us to understand, document, and monitor bias in development and production.
\item
  Explainability by justification: develop tools and processes to continuously improve transparency and explainability of machine learning systems where reasonable.
\item
  Reproducible operations: develop the infrastructure required to enable a reasonable level of reproducibility across the operations of ML systems.
\item
  Displacement strategy: identify and document relevant information so that business change processes can be developed to mitigate the impact on workers whose tasks are being automated.
\item
  Practical accuracy: develop processes to ensure our accuracy and cost metric functions are aligned to the domain-specific application.
\item
  Trust by privacy: build and communicate processes that protect and handle data with stakeholders that may interact with the system directly and/or indirectly.
\item
  Data risk awareness: develop and improve reasonable processes and infrastructure to ensure data and model security are being taken into consideration during the development of machine learning systems.
\end{itemize}

\hypertarget{project-philosophy}{%
\chapter{Project Philosophy}\label{project-philosophy}}

Projects should, to the best of our ability, be broken up into as many small pieces as is feasible. Dependencies between projects should be managed by tools and proper software configuration management (packages) rather than by including code from one project directly inside another project.
This practice encourages modular design and also encourages independence between projects. Sometimes it is better to implement multiple similar pieces of code in different projects than it is to expect a single library to cover all of these slightly different cases in a simple, coherent way. When a subset of a project reaches the point of being its own library or utility, don't be afraid to factor it out - git can even preserve the history of the sub-project in a new repository for you.

Many choices we make as software developers are arbitrary, in the sense that they don't significantly impact productivity. Whether a team uses R or Python is probably one such choice, for instance, since both languages provide roughly the same capabilities or their costs and benefits balance out. The value of fixing such decisions isn't in the choices themselves, but in reducing the complexity of the development process for the data science community at large. In other words, while we strive to find the best possible practices to use, don't take this document as bald assertions that these are the optimal choices. The point is to simplify teamwork and collaboration, and to add clarity to our process.

\hypertarget{project-planning-and-accounting}{%
\chapter{Project Planning and Accounting}\label{project-planning-and-accounting}}

Data science should be run in an Agile fashion. Agile is a general type of project management process, used mainly for software development, in which requirements and solutions evolve through the collaborative effort of self-organizing, cross-functional teams. This includes Kanban and Scrum.

\begin{itemize}
\tightlist
\item
  Generally, a data scientist will work on only one or two projects per sprint.
\item
  Project artifacts should be stored in content stores like Rally/Jira, Confluence/Mojo
\item
  Code, documentation and all software artifacts for a project \emph{must} exist in a git repository
\item
  Data and private information should \emph{not} exist in any git repository. Automation in each project should, instead, fetch necessary data to a location outside the git repository (or ignored by it). Instructions about where to get private information required for the project should be in the project documentation.
\item
  Projects must have defined completion criteria before starting
\end{itemize}

Project planning is hard. The point isn't to get estimates exactly right but to create a manageable history of effort. It is also very good for productivity to resolve questions of priorities and to plan ahead of time so that when we do get to work, we can do so without distraction.

\hypertarget{agile}{%
\chapter{Agile}\label{agile}}

\emph{Negative results are not a failure and, in fact, are expected in a healthy data science practice.}

\hypertarget{the-agile-manifesto}{%
\subsection{The Agile Manifesto}\label{the-agile-manifesto}}

We value:

\begin{itemize}
\tightlist
\item
  Individuals and interactions over processes and tools
\item
  Working software over comprehensive documentation
\item
  Customer collaboration over contract negotiation
\item
  Responding to change over following a plan
\end{itemize}

\hypertarget{agile-data-science-manifesto}{%
\subsection{Agile Data Science Manifesto}\label{agile-data-science-manifesto}}

Agile Data Science is organized around the following principles:

\begin{itemize}
\tightlist
\item
  Iterate, iterate, iterate: tables, charts, reports, predictions.
\item
  Ship intermediate output. Even failed experiments have output.
\item
  Prototype experiments.
\item
  Integrate the tyrannical opinion of data in product management.
\item
  Climb up and down the data-value pyramid as we work.
\item
  Discover and pursue the critical path to a killer product.
\item
  Get meta. Describe the process, not just the end state.
\end{itemize}

\hypertarget{sprints}{%
\section{Sprints}\label{sprints}}

Try to run an agile process with the aim of coordinating downtime throughout the year optimally. No team runs full steam all year - by being honest about the need for creative downtime and team bonding, we'll be more productive.

\begin{itemize}
\tightlist
\item
  Two-week sprints
\item
  Mandatory daily stand-ups - 10-15 minutes by the clock
\item
  Stand-ups are not a time for announcements
\item
  Hold lab-style development meetings for the whole team to discuss technical matters and review interim results
\item
  Use chat regularly to communicate - a distributed development culture requires this
\item
  Share code snippets and demo results frequently through the sprint
\item
  Retrospectives are important
\item
  Scoping of work is important
\item
  Code reviews are \emph{very} important
\end{itemize}

\hypertarget{planning-estimation}{%
\section{Planning \& Estimation}\label{planning-estimation}}

During this phase, we choose a set of tasks to try to accomplish in the sprint, and we break them down into sub-tasks, which can be completed in one day or less. We generate enough of these sub-tasks that we can reasonably expect to exhaust them in the current sprint. We may also pull such subtasks from the backlog.

Work is, of course, organized in pieces more substantial than one-day tasks. Some teams manage these in terms of Projects, Epics, Stories, and other sorts of tickets. All projects which take multiple days of work should be well documented. These include, but are not limited to, projects involving DevOps, R\&D, presentations, academic collaborations, etc.

\hypertarget{execution}{%
\section{Execution}\label{execution}}

A ticket should be small enough to complete in one day or less. When you work on a ticket, you create a local branch from master for the given project and do your work there. You have some latitude on how you actually work in this branch, but the end result should be a series of small commits which implements the feature. At this stage, you will rebase your code on the most recent version of master, and either merge your branch into master and push or make a pull request. This is roughly the \emph{git flow} workflow. Read about it here: \url{http://nvie.com/posts/a-successful-git-branching-model/}

Repeat until the end of the sprint.

\hypertarget{demo}{%
\section{Demo}\label{demo}}

Each sprint ends with a sprint demo, which shows either some interesting results of your work or a two- or three-slide summary of the work you've accomplished in the sprint. Analytics' ultimate purpose is to provide insight into data - if your work does that, be sure to present that result in the most engaging way possible, given the time constraints.

It's very important to include stakeholders in the demo. The content needs to be understandable for non-technical managers and customers. It's OK to have technical content but be clear where it is, and always respect the audience. The demo is not a time to show other data scientists your skills at mathematics and statistics - there are other venues for that. Don't derail the sprint demo with side conversations, planning, or peer review. The demo is to communicate what was accomplished during the sprint.
\hypertarget{retrospectives}{% \section{Retrospectives}\label{retrospectives}} After the demo comes a retrospective, this is an opportunity to reflect on how the agile process is working. What's working well, what needs to improve, and an action plan for improvements. \hypertarget{open-time}{% \section{Open Time}\label{open-time}} Two days between each sprint and the next are open times. Use this time to work on long-shot projects or for learning new techniques that aren't necessarily immediately applicable to the current work. Don't use this time for refactoring code: this task can and should be included in sprints since it is a bona fide cost of doing business. Do use sprint time for learning critical path techniques. \hypertarget{the-master-backlog-and-the-open-business-questions-list}{% \section{The Master Backlog and the Open Business Questions List}\label{the-master-backlog-and-the-open-business-questions-list}} Keep a document in the enterprise store that describes at a high level what are the primary business questions and concerns that drive the need for data science. This content should come from the business community and will not cover all that is worked on by data science. The aim should be to give the customer what they want as well as what they need but might not be aware of. Keep a master backlog of all ideas and use cases - from well formulated to the not so well thought out. Cull this as a team regularly, refining and adding detail to the use cases that make sense to keep. This should be open to all data scientists to collaborate on. While we may have SME's in certain algorithm or enterprise domains, try not to factor the work in a way that creates knowledge silos. This makes review hard, adds risk to code maintenance, and can stifle innovation. \hypertarget{statistical-analysis-vs.-machine-learning-and-r-vs.-python}{% \section{Statistical Analysis vs.~Machine Learning and R vs.~Python}\label{statistical-analysis-vs.-machine-learning-and-r-vs.-python}} There are no rules here. Try to use what's best for the job. R has really good statistics and markdown capabilities. Python might be better for production use when all else is equal. There is a lot of overlap in functionality between R \& Python. Generally, R is used for statistical analysis. For Bayesian statistics, the tooling available in Python might be equivalent (JAGS, Stan, TF Probability). Python, Java, R, Spark/Scala are all acceptable for machine learning applications. {[}\url{https://github.com/matloff/R-vs.-Python-for-Data-Science}{]} \hypertarget{version-control}{% \chapter{Version Control}\label{version-control}} Version control is one of the most effective tools data scientists have to ensure smooth collaboration and total awareness of the history and meaning of software artifacts. Understanding git is required. \hypertarget{git-fundamental-ideas}{% \section{Git Fundamental Ideas}\label{git-fundamental-ideas}} For those unfamiliar with git, this reference may be helpful. \url{https://git-scm.com/book/en/v2} Don't think of your project as a bunch of files. Think of it as a series of commits which are, in actuality, simply \emph{diffs} applied to the previous state of the repository. A git repository literally is just a set of commit chains and some tools to merge different chains and apply the patches inside each to the current state of the project. But it is all \emph{diffs} at the end of the day. \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Always make small commits that do one thing. 
\item Always submit pull requests or merge to master a chain of small commits which result in a running, test-passing project. \item Don't document too much code inline. Documentation in source code gets out of date and can make code less readable. \item Do write very informative commit messages. These are perpetually attached to the code they refer to and furnish perpetual, contextual documentation that doesn't drift out of date. \item Do refactor code aggressively. If you follow rules 1-4 it will always be easy to find when things went wrong and to recover the latest, meaningful working version. \item Do get comfortable with git's fancier, patch management capabilities. Learn to revert and cherry pick. Learn to rebase and clean up messes. Know what a detached head state is. \end{enumerate} There are different types of git workflows. Get familiar with these. \begin{itemize} \tightlist \item fork and pull \item feature branch \item feature branch + merge request \item gitflow / master + develop \end{itemize} For projects with multiple data scientists where the ultimate aim is production, the gitflow workflow is probably best. \hypertarget{definition-of-done-for-data-science}{% \chapter{Definition of Done for Data Science}\label{definition-of-done-for-data-science}} \hypertarget{workflow}{% \subsection{Workflow}\label{workflow}} If an analysis is to be repeated, it needs to be scripted in a repeatable fashion or exposed via an API. The minimal output of a project always consists of a report. Additional artifacts include R and Python packages, intermediate data for further analysis, blog posts, etc. \hypertarget{testing}{% \subsection{Testing}\label{testing}} Code should be tested where possible. Any algorithms should have test data passed through. This is not necessary for well-used algorithms in common frameworks (LM in R, for example). This is not only for the benefit of the workflow - but for the benefit of other team members and future interns. \hypertarget{data-validation}{% \subsection{Data Validation}\label{data-validation}} Data should be randomly sampled at various points in the workflow. Any transformations should be manually extracted. This data - manually transformed where needed - should be spot-checked against the original source. \hypertarget{hardware}{% \subsection{Hardware}\label{hardware}} Favor disposable hardware and persistent data paradigms. A small community of data scientists should be responsible for maintaining DevOps best practices like connecting containers to git, how to orchestrate multi-container flows, and managing how EMR workflows are developed and deployed. \hypertarget{languages}{% \subsection{Languages}\label{languages}} \begin{itemize} \tightlist \item R \item Python \item SQL \item Spark : Scala / Python / R \end{itemize} \hypertarget{ides-vs-notebooks}{% \subsection{IDE's vs Notebooks}\label{ides-vs-notebooks}} Notebooks are a great way to start a project but are not appropriate for commercial deployment. Use them, but keep in mind that it's easier to use an IDE from the start if you know you're going to be developing a package. Use templates for software configuration management. {[}\url{https://github.com/uwescience/shablona}{]} {[}\url{https://usethis.r-lib.org/articles/articles/usethis-setup.html}{]} \hypertarget{modern-devops}{% \subsection{Modern devops}\label{modern-devops}} All of the modern software engineering paradigms apply to data science. 
\begin{itemize}
\tightlist
\item
  Containerization
\item
  APIs and REST
\item
  Continuous integration: \url{https://www.redhat.com/en/topics/devops/what-is-ci-cd}
\item
  Unit testing
\item
  Code coverage
\item
  Logging with levels (DEBUG, INFO, WARN, ERROR) and streams (file, stdout, etc.): log4r, the Python logger
\item
  Linters for code standards, such as flake8 (which combines the tools pep8 and pyflakes) and lintr
\item
  Auto-documentation tools - Doxygen, Roxygen, Sphinx
\end{itemize}

These are best incorporated in an Agile data science practice by using platforms and templates. GitLab offers a platform for CI. The tidyverse has a library for creating and maintaining R packages - usethis. There are package templates for Python projects as well. {[}\url{https://github.com/uwescience/shablona}{]} is particularly comprehensive. Look for easy ways to incorporate these into your practice. A common pitfall in data science is to spend too much time on these concerns, the risk being that all your time is spent tooling and optimizing rather than innovating. Try to strike a good balance here.

\hypertarget{data-science-reading-list}{%
\chapter{Data Science Reading List}\label{data-science-reading-list}}

This is not a comprehensive list of reading material, but it's an excellent place to start for the basics. The machine learning material generally requires multivariate calculus, probability, statistics, and linear algebra.

\hypertarget{big-data}{%
\subsection{Big Data}\label{big-data}}

\begin{itemize}
\tightlist
\item
  Agile Data Science Chapter 2. Agile Tools
\item
  Hadoop: The Definitive Guide, 4th Edition by Tom White
\item
  Learning Spark by Matei Zaharia, Patrick Wendell, Andy Konwinski, and Holden Karau
\item
  Advanced Analytics with Spark, 2nd Edition by Josh Wills, Sean Owen, Sandy Ryza, and Uri Laserson
\item
  Agile Data Science 2.0, 1st Edition by Russell Jurney
\end{itemize}

\hypertarget{statistics}{%
\section{Statistics}\label{statistics}}

\hypertarget{theory}{%
\subsection{Theory}\label{theory}}

\begin{itemize}
\tightlist
\item
  Mathematical Statistics and Data Analysis, 3rd Edition by John A. Rice
\item
  Mathematical Statistics, 2nd Edition by Wiebe R. Pestman
\item
  Mathematical Statistics: Basic Ideas and Selected Topics, 1st Edition by Peter J. Bickel and Kjell A. Doksum
\end{itemize}

\hypertarget{applied}{%
\subsection{Applied}\label{applied}}

\begin{itemize}
\tightlist
\item
  Linear Models with R by Julian J. Faraway
\item
  Extending the Linear Model with R: Generalized Linear, Mixed Effects and Nonparametric Regression Models by Julian J. Faraway
\item
  Generalized Linear Models by P. McCullagh and John A. Nelder
\end{itemize}

\hypertarget{machine-learning}{%
\section{Machine Learning}\label{machine-learning}}

\begin{itemize}
\tightlist
\item
  An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics) by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani
\item
  (Intermediate) The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics) by Trevor Hastie, Robert Tibshirani, and Jerome Friedman
\item
  Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) by Kevin P.
Murphy
\item
  Probability for Statistics and Machine Learning: Fundamentals and Advanced Topics (Springer Texts in Statistics) by Anirban DasGupta
\item
  Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning series), 1st Edition by Daphne Koller and Nir Friedman
\item
  Deep Learning (Adaptive Computation and Machine Learning series) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
\item
  Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning) by Bernhard Schölkopf and Alexander J. Smola
\end{itemize}

\hypertarget{core-competencies}{%
\section{Core Competencies}\label{core-competencies}}

\begin{itemize}
\tightlist
\item
  git {[}\url{https://www.csc.kth.se/utbildning/kth/kurser/DD2385/material/gitmagic.pdf}{]}
\item
  SQL
\item
  R
\item
  Spark
\item
  Python
\end{itemize}

\hypertarget{general-advice-thoughts-on-data-science}{%
\chapter{General Advice \& Thoughts on Data Science}\label{general-advice-thoughts-on-data-science}}

\emph{The highest and most beautiful things in life are not to be heard about, nor read about, nor seen but, if one will, are to be lived.} Soren Kierkegaard

Data science is not a new field - statistics, information theory, cybernetics (the science of communications and automatic control systems), and AI have been around a long time. What's new are the engineering breakthroughs that have commoditized previously esoteric software and advances in hardware that have allowed for training much larger models. What else is new is the attention being paid to the field by business leaders.

\hypertarget{to-the-managers-of-data-scientists}{%
\subsection{To the managers of data scientists}\label{to-the-managers-of-data-scientists}}

Understand the resources and support required to maintain your data scientists. Don't socialize too early. Data science is a craft where many projects have negative results or fail. Do take the time to understand the difference between negative results and failure due to technical reasons. Understand the difference between the different types of data scientists. {[}\url{https://hbr.org/2018/11/the-kinds-of-data-scientist}{]}

\hypertarget{when-is-your-data-big}{%
\subsection{When is your data big?}\label{when-is-your-data-big}}

\begin{itemize}
\tightlist
\item
  When it can't fit in memory on one computer
\item
  When processing takes more than a few hours
\end{itemize}

The data science community needs to maintain a few experts responsible for calculating features on large data sets. Not everyone needs to learn Spark, but be familiar with it and understand how you might be able to write feature calculations in a way that can be deployed in Spark as a custom aggregation.

\hypertarget{c}{%
\subsection{C++?}\label{c}}

Yes, please. If you want to do anything serious in developing packages and machine learning algorithms, then C++ is a must. Consider Fortran as well. It's not widely discussed, but a vast amount of data science is done running Fortran libraries written long ago. The NIST statistics libraries, ARPACK, CSuite, and LAPACK are a few examples.

\hypertarget{on-metrics-and-being-data-driven}{%
\subsection{On Metrics and Being Data-Driven}\label{on-metrics-and-being-data-driven}}

Enterprise efforts to be more data-driven can devolve into a culture of false certainty. The influence of results and metrics is dictated not by their reliability but rather by their abundance and the confidence with which they're presented.
This can lead to bad or misguided decision making. The remedy for this is to develop a strong culture of science and a code of ethics in the practice of data science. Sometimes, more data and more analytics are thrown at a problem when what's needed is a hypothesis-based approach.

\hypertarget{fallacies-and-failures-in-judgment}{%
\subsection{Fallacies and Failures in Judgment}\label{fallacies-and-failures-in-judgment}}

Biases not only come from the models we fit. They are an inherent part of the human world the data scientist interacts with. Be familiar with these biases. Try to understand the human biases that arise in the enterprise - utility theory provides a framework to understand how money can make decisions `irrational' from a modeling point of view. Pricing, commission rates, deals, and resource allocations are areas fraught with bias. Be sensitive to these and try to model around them or account for them directly.

\begin{itemize}
\item
  Ludic fallacy: mistaking the complex real world for the well-posed problems of mathematics and laboratory experiments.
\item
  Iatrogenics: harm done by the healer, as when the doctor does more harm than good.
\item
  Naive interventionism: intervention with disregard for iatrogenics; the preference, or perceived obligation, to ``do something'' over doing nothing.
\item
  The agency problem: a moral hazard and conflict of interest that may arise in any relationship where one party is expected to act in another's best interests.
\item
  Narrative fallacy: our need to fit a story or pattern to a series of connected or disconnected facts. The statistical counterpart is data mining: fitting a convincing, well-sounding story to the past as opposed to using experimental methodology.
\end{itemize}

\hypertarget{on-relevance}{%
\subsection{On relevance}\label{on-relevance}}

Be kind to yourself when evaluating the relevance of your work to the enterprise. The company has hired talented staff, and you want to make an impact. It can sometimes be hard to understand why a model is not adopted, or why one of many competing models developed in isolation is adopted over another. This is part of science: while it may all be correct, it might not all be relevant. Focus instead on the correctness of the results and the process by which they are developed. Be kind to your leaders when assessing their reasoning behind making a decision regarding models and results.

\hypertarget{on-model-complexity}{%
\subsection{On model complexity}\label{on-model-complexity}}

Fit a variety of models, especially if using black-box methods. Simple models can provide a form of model validation for more complex models. Most business leaders at Red Hat understand effect size and significance. Models that are interpretable may sometimes provide more actionable insights than black-box methods with better predictive performance.

Do you want to learn TensorFlow? Many frequentist and Bayesian models can now be fit in TensorFlow. It's not just for CNNs, BERT, and LSTMs. Learn how to fit a linear model or wide-and-deep models, or investigate TensorFlow Probability to learn about powerful samplers for large Bayesian models.

\hypertarget{on-credit-and-accountability}{%
\subsection{On credit and accountability}\label{on-credit-and-accountability}}

We're all responsible for the work that the data science community produces. Any failure of one is a failure for all. Give credit where credit is due, but keep in mind that work properly captured in agile artifacts speaks for itself.
Data science code is owned by Red Hat and should maintain a copyright notice. \texttt{Copyright\ \textless{}YEAR\textgreater{}\ ACME,\ Inc.}

\hypertarget{causal-inference}{%
\section{Causal Inference}\label{causal-inference}}

\emph{Rubin causal model (RCM), also known as the Neyman--Rubin causal model,{[}1{]} is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes. For example, a person would have a particular income at age 40 if he had attended college, whereas he would have a different income at age 40 if he had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. This dilemma is the ``fundamental problem of causal inference.'' Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects.{[}5{]} A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples. In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.}

\emph{The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996){[}6{]} and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007).{[}7{]}}

\emph{Counterfactuals and Causal Inference: Methods and Principles for Social Research} by Stephen E. Morgan and Christopher Winship describes three distinct and complementary strategies for causal inference:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  conditioning on other potential variables that could affect the outcome, as in regression and matching analysis;
\item
  using appropriate exogenous variables as instrumental variables; and
\item
  establishing an ``isolated and exhaustive'' mechanism that links the outcome variable to the causal variable of interest.
\end{enumerate}

\hypertarget{spike-and-slab-regression}{%
\section{Spike-and-slab regression}\label{spike-and-slab-regression}}

A Bayesian variable selection technique (particularly useful when the number of potential covariates is larger than the number of samples). Key references include:
Mitchell \& Beauchamp (1988) Madigan \& Raftery (1994) George \& McCulloch (1997) Ishwaran \& Rao (2005).* \hypertarget{nowcasting}{% \section{Nowcasting}\label{nowcasting}} \url{https://www.sr-sv.com/nowcasting-for-financial-markets/} \url{https://cran.r-project.org/web/packages/nowcasting/nowcasting.pdf} \hypertarget{higher-criticism}{% \section{Higher Criticism}\label{higher-criticism}} Comes from the needs of Large Scale Inference - testing for effects in the age of high content screening. See \emph{Empirical Bayes Methods for Estimation, Testing and Prediction. B. Efron} Donoho Higher Criticism for Large-Scale Inference, Especially for Rare and Weak Effects Author(s): David Donoho and Jiashun Jin \url{https://www.jstor.org/stable/pdf/24780402.pdf?refreqid=excelsior\%3A98f50337ba4fabafd7b1c9f081ee1f98} \(HC^{*}\) can be connected with the maximum of a standardized empirical process; see HIGHER CRITICISM FOR DETECTING SPARSE HETEROGENEOUS MIXTURES By David Donoho and Jiashun Jin Efron \url{https://statweb.stanford.edu/~ckirby/brad/LSI/monograph_CUP.pdf} \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \item Hong Zhang, Jiashun Jin and Zheyang Wu. ``Distributions and Statistical Power of Optimal Signal-Detection Methods In Finite Cases'', submitted. \item Li, Jian; Siegmund, David. ``Higher criticism: p-values and criticism''. Annals of Statistics 43 (2015). \end{enumerate} Software \begin{Shaded} \begin{Highlighting}[] \ControlFlowTok{if}\NormalTok{(}\OperatorTok{!}\KeywordTok{require}\NormalTok{(SetTest))\{}\KeywordTok{install.packages}\NormalTok{( }\StringTok{"SetTest"}\NormalTok{,}\DataTypeTok{dependencies =} \OtherTok{TRUE}\NormalTok{) \}} \NormalTok{pval.test =}\StringTok{ }\KeywordTok{runif}\NormalTok{(}\DecValTok{10}\NormalTok{)} \KeywordTok{test.hc}\NormalTok{(pval.test, }\DataTypeTok{M=}\KeywordTok{diag}\NormalTok{(}\DecValTok{10}\NormalTok{), }\DataTypeTok{k0=}\DecValTok{1}\NormalTok{, }\DataTypeTok{k1=}\DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $pvalue ## [1] 0.2956397 ## ## $hcstat ## [1] 2.07434 ## ## $location ## [1] 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{test.hc}\NormalTok{(pval.test, }\DataTypeTok{M=}\KeywordTok{diag}\NormalTok{(}\DecValTok{10}\NormalTok{), }\DataTypeTok{k0=}\DecValTok{1}\NormalTok{, }\DataTypeTok{k1=}\DecValTok{10}\NormalTok{, }\DataTypeTok{LS =} \OtherTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $pvalue ## [1] 0.2809834 ## ## $hcstat ## [1] 2.07434 ## ## $location ## [1] 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{test.hc}\NormalTok{(pval.test, }\DataTypeTok{M=}\KeywordTok{diag}\NormalTok{(}\DecValTok{10}\NormalTok{), }\DataTypeTok{k0=}\DecValTok{1}\NormalTok{, }\DataTypeTok{k1=}\DecValTok{10}\NormalTok{, }\DataTypeTok{ZW =} \OtherTok{TRUE}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $pvalue ## [1] 0.4264026 ## ## $hcstat ## [1] 2.07434 ## ## $location ## [1] 1 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#When the input are statistics#} \NormalTok{stat.test =}\StringTok{ }\KeywordTok{rnorm}\NormalTok{(}\DecValTok{20}\NormalTok{)} \NormalTok{p.test =}\StringTok{ }\DecValTok{2}\OperatorTok{*}\NormalTok{(}\DecValTok{1} \OperatorTok{-}\StringTok{ }\KeywordTok{pnorm}\NormalTok{(}\KeywordTok{abs}\NormalTok{(stat.test)))} \KeywordTok{test.hc}\NormalTok{(p.test, }\DataTypeTok{M=}\KeywordTok{diag}\NormalTok{(}\DecValTok{20}\NormalTok{), }\DataTypeTok{k0=}\DecValTok{1}\NormalTok{, 
}\DataTypeTok{k1=}\DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## $pvalue ## [1] 0.06761214 ## ## $hcstat ## [1] 4.093402 ## ## $location ## [1] 1 \end{verbatim} \hypertarget{cognitive-biases}{% \chapter{Cognitive Biases}\label{cognitive-biases}} \emph{People typically consider a limited number of promising ideas in order to manage the cognitive burden of searching through all possible ideas, but this can lead them to accept adequate solutions without considering potentially superior alternatives. Here we show that people systematically default to searching for additive transformations, and consequently overlook subtractive transformations.} Adams, G.S., Converse, B.A., Hales, A.H. et al.~People systematically overlook subtractive changes. Nature 592, 258--261 (2021). \url{https://doi.org/10.1038/s41586-021-03380-y} \hypertarget{hierarchical-and-grouped-time-series}{% \chapter{Hierarchical and grouped time series}\label{hierarchical-and-grouped-time-series}} \url{https://robjhyndman.com/software/} \hypertarget{reconciled-distributional-forecasts}{% \section{Reconciled Distributional Forecasts}\label{reconciled-distributional-forecasts}} \url{https://otexts.com/fpp3/rec-prob.html} SO question \url{https://stats.stackexchange.com/questions/514050/how-to-add-coherent-distributions-from-reconciled-distributional-forecasts} \url{https://github.com/anastasiospanagiotelis/ProbReco} \url{https://robjhyndman.com/publications/coherentprob/} \hypertarget{propensity-score-matching-caliper}{% \chapter{Propensity Score Matching : Caliper}\label{propensity-score-matching-caliper}} Putting constraints on matching can reduce bias \citep{10.1093/aje/kwt212}. \emph{Matching on the propensity score is widely used to estimate the effect of an exposure in observational studies. However, the quality of the matches can be affected by decisions made during the matching process, particularly the order in which subjects are selected for matching and the maximum permitted difference between matched subjects (the ``caliper''). This study used simulations to explore the effects of these decisions on both the imbalance of covariates and the closeness of matching, while allowing the numbers of potential matches and strengths of association between the confounding variable and the exposure to vary. It was found that, without a caliper, substantial bias was possible, particularly with a relatively small reservoir of potential matches and strong confounder-exposure association. Use of the recommended caliper reduced the bias considerably, but bias remained if subjects were selected by increasing or decreasing propensity score. A tighter caliper led to greatly reduced bias and closer matches, although some subjects could not be matched. This study suggests that a narrow caliper can improve the performance of propensity score matching. In situations where it is impossible to find appropriate matches for all exposed subjects, it is better to select subjects in order of the best available matches, rather than increasing or decreasing the propensity score.} \hypertarget{random-effects-and-mixed-models}{% \chapter{Random Effects and Mixed Models}\label{random-effects-and-mixed-models}} \hypertarget{crossed-versus-nested-random-effects.}{% \section{Crossed versus nested random effects.}\label{crossed-versus-nested-random-effects.}} How do they differ and how are they specified correctly in lme4 and in JAGS / Stan? 
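As a minimal sketch of the \texttt{lme4} syntax (the grouping factors \texttt{machine} and \texttt{worker} and the simulated data below are hypothetical, for illustration only, and are not part of the original notes):

\begin{verbatim}
library(lme4)

# Hypothetical grouped data: 10 worker labels combined with 6 machines
set.seed(1)
d <- expand.grid(worker = factor(1:10), machine = factor(1:6), rep = 1:5)
d$y <- with(d, rnorm(10)[worker] + rnorm(6)[machine] + rnorm(nrow(d)))

# Crossed random effects: each factor gets its own random-intercept term,
# appropriate when every worker can appear with every machine.
m_crossed <- lmer(y ~ 1 + (1 | worker) + (1 | machine), data = d)

# Nested random effects: worker labels only have meaning within a machine.
# (1 | machine/worker) expands to (1 | machine) + (1 | machine:worker).
m_nested <- lmer(y ~ 1 + (1 | machine/worker), data = d)
\end{verbatim}

In JAGS or Stan the same distinction shows up in how the group indices are constructed: crossed effects use two separate grouping index vectors, while nesting requires a distinct level for every machine-worker combination.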
\hypertarget{very-large-number-of-res}{% \section{Very Large Number of RE's}\label{very-large-number-of-res}} \url{https://arxiv.org/abs/1610.08088} \hypertarget{propensity-score-matching-caliper-1}{% \chapter{Propensity Score Matching : Caliper}\label{propensity-score-matching-caliper-1}} Putting constraints on matching can reduce bias \citep{10.1093/aje/kwt212}. \emph{Matching on the propensity score is widely used to estimate the effect of an exposure in observational studies. However, the quality of the matches can be affected by decisions made during the matching process, particularly the order in which subjects are selected for matching and the maximum permitted difference between matched subjects (the ``caliper''). This study used simulations to explore the effects of these decisions on both the imbalance of covariates and the closeness of matching, while allowing the numbers of potential matches and strengths of association between the confounding variable and the exposure to vary. It was found that, without a caliper, substantial bias was possible, particularly with a relatively small reservoir of potential matches and strong confounder-exposure association. Use of the recommended caliper reduced the bias considerably, but bias remained if subjects were selected by increasing or decreasing propensity score. A tighter caliper led to greatly reduced bias and closer matches, although some subjects could not be matched. This study suggests that a narrow caliper can improve the performance of propensity score matching. In situations where it is impossible to find appropriate matches for all exposed subjects, it is better to select subjects in order of the best available matches, rather than increasing or decreasing the propensity score.} \hypertarget{sensitivity-analysis-and-shapley-values}{% \chapter{Sensitivity Analysis and Shapley Values}\label{sensitivity-analysis-and-shapley-values}} Global sensitivity analysis measures the importance of input variables to a function. This is an important task in quantifying the uncertainty in which target variables can be predicted from their inputs. Sobol indices \citep{sobolindices} are a popular approach to this. It turns out that there's a relationship between Sobol indices and Shapley values. We explore this relationship here and demonstrate their effectiveness on some linear and non-linear models. \hypertarget{relationship-between-sobol-indices-and-shapley-values}{% \section{Relationship between Sobol indices and Shapley values}\label{relationship-between-sobol-indices-and-shapley-values}} Shapley values are based on \(f(x)-E[f(x)]\) while Sobol indices decompose output variance into fractions contributed by the inputs. The Sobol index is a global measure of feature importance while Shapley values focus on local explanations although we could combine local Shapley values to achieve a global importance measure. Sobol indices are based on expectations and can be used for features not included in the model / function of interest. In this way we could query for important features correlated with those that the model does use. 
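As a toy illustration of the connection (a sketch using the same input ranges as the SRC example below; the additive function, unit coefficients, and independence of the inputs are assumptions made for this example): for \(Y = X_1 + X_2 + X_3\) with independent inputs, the first-order Sobol index of \(X_i\) is \(\operatorname{Var}(X_i)/\operatorname{Var}(Y)\), while for a linear model with independent features the Shapley value of feature \(i\) at a point \(x\) is \(\beta_i\,(x_i - E[X_i])\). Averaging the absolute Shapley values over observations gives a global ranking that can be compared with the Sobol ordering.

\begin{verbatim}
set.seed(42)
n <- 10000
X <- data.frame(X1 = runif(n, 0.5, 1.5),
                X2 = runif(n, 1.5, 4.5),
                X3 = runif(n, 4.5, 13.5))
y <- X$X1 + X$X2 + X$X3

# First-order Sobol indices for an additive function of independent inputs:
# S_i = Var(X_i) / Var(Y), the fraction of output variance due to X_i.
sobol_first <- sapply(X, var) / var(y)

# Shapley values for a linear model f(x) = sum_i beta_i * x_i with
# independent features: phi_i(x) = beta_i * (x_i - E[X_i]).
beta     <- c(X1 = 1, X2 = 1, X3 = 1)
centered <- sweep(as.matrix(X), 2, colMeans(X))   # x_i - E[X_i]
shapley  <- sweep(centered, 2, beta, "*")         # per-observation phi_i(x)

# Mean absolute Shapley value as a global importance measure.
global_shap <- colMeans(abs(shapley))

round(sobol_first, 3)   # X3 dominates, then X2, then X1
round(global_shap, 3)   # same ordering as the Sobol indices
\end{verbatim}

Both rankings agree (X3 above X2 above X1) because, for an additive model with independent inputs, each measure is driven by the spread of the corresponding input.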
\hypertarget{cran-sensitivity-package}{% \section{CRAN sensitivity package}\label{cran-sensitivity-package}} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(ggplot2)} \KeywordTok{library}\NormalTok{(pander)} \ControlFlowTok{if}\NormalTok{(}\OperatorTok{!}\KeywordTok{require}\NormalTok{(sensitivity))\{} \KeywordTok{install.packages}\NormalTok{(}\StringTok{"sensitivity"}\NormalTok{)} \KeywordTok{library}\NormalTok{(sensitivity)} \NormalTok{\}} \end{Highlighting} \end{Shaded} Standardized Regression Coefficients (SRC), or the Standardized Rank Regression Coefficients (SRRC), which are sensitivity indices based on linear or monotonic assumptions in the case of independent factors. \begin{Shaded} \begin{Highlighting}[] \NormalTok{n <-}\StringTok{ }\DecValTok{100} \NormalTok{X <-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{X1 =} \KeywordTok{runif}\NormalTok{(n, }\FloatTok{0.5}\NormalTok{, }\FloatTok{1.5}\NormalTok{),} \DataTypeTok{X2 =} \KeywordTok{runif}\NormalTok{(n, }\FloatTok{1.5}\NormalTok{, }\FloatTok{4.5}\NormalTok{),} \DataTypeTok{X3 =} \KeywordTok{runif}\NormalTok{(n, }\FloatTok{4.5}\NormalTok{, }\FloatTok{13.5}\NormalTok{))} \CommentTok{# linear model : Y = X1 + X2 + X3} \NormalTok{y <-}\StringTok{ }\KeywordTok{with}\NormalTok{(X, X1 }\OperatorTok{+}\StringTok{ }\NormalTok{X2 }\OperatorTok{+}\StringTok{ }\NormalTok{X3)} \NormalTok{Z <-}\StringTok{ }\KeywordTok{src}\NormalTok{(X, y, }\DataTypeTok{rank =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{logistic =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{nboot =} \DecValTok{0}\NormalTok{, }\DataTypeTok{conf =} \FloatTok{0.95}\NormalTok{)} \KeywordTok{pander}\NormalTok{(Z}\OperatorTok{$}\NormalTok{SRC,}\DataTypeTok{caption =} \StringTok{"Standardized Regression Coefficients "}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{longtable}[]{@{}cc@{}} \caption{Standardized Regression Coefficients}\tabularnewline \toprule \begin{minipage}[b]{0.12\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering original\strut \end{minipage}\tabularnewline \midrule \endfirsthead \toprule \begin{minipage}[b]{0.12\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering original\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X1}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.099\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X2}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.2686\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X3}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.915\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{ggplot}\NormalTok{(Z, }\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))}\OperatorTok{+}\KeywordTok{ggtitle}\NormalTok{(}\StringTok{"Standardized Regression Coefficients"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{topics_in_data_science_files/figure-latex/unnamed-chunk-3-1.pdf} \begin{Shaded} \begin{Highlighting}[] \NormalTok{y <-}\StringTok{ }\KeywordTok{with}\NormalTok{(X, X1 }\OperatorTok{+}\StringTok{ }\NormalTok{X2 }\OperatorTok{+}\StringTok{ }\NormalTok{X3)} \NormalTok{y <-}\StringTok{ }\NormalTok{y }\OperatorTok{+}\StringTok{ 
}\KeywordTok{rnorm}\NormalTok{(}\KeywordTok{nrow}\NormalTok{(X),}\DecValTok{0}\NormalTok{,}\DecValTok{1}\OperatorTok{/}\DecValTok{2}\NormalTok{)} \NormalTok{df<-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\KeywordTok{cbind}\NormalTok{(X,y))} \NormalTok{Z <-}\StringTok{ }\KeywordTok{src}\NormalTok{(X, y, }\DataTypeTok{rank =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{logistic =} \OtherTok{FALSE}\NormalTok{, }\DataTypeTok{nboot =} \DecValTok{0}\NormalTok{, }\DataTypeTok{conf =} \FloatTok{0.95}\NormalTok{)} \KeywordTok{pander}\NormalTok{(Z}\OperatorTok{$}\NormalTok{SRC,}\DataTypeTok{caption =} \StringTok{"Standardized Regression Coefficients "}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{longtable}[]{@{}cc@{}} \caption{Standardized Regression Coefficients}\tabularnewline \toprule \begin{minipage}[b]{0.12\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering original\strut \end{minipage}\tabularnewline \midrule \endfirsthead \toprule \begin{minipage}[b]{0.12\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering original\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X1}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.1197\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X2}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.2438\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.12\columnwidth}\centering \textbf{X3}\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.906\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{ggplot}\NormalTok{(Z, }\DataTypeTok{ylim =} \KeywordTok{c}\NormalTok{(}\OperatorTok{-}\DecValTok{1}\NormalTok{,}\DecValTok{1}\NormalTok{))}\OperatorTok{+}\KeywordTok{ggtitle}\NormalTok{(}\StringTok{"Standardized Regression Coefficients"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{topics_in_data_science_files/figure-latex/unnamed-chunk-4-1.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{#lm.fit = lm(y ~ X1+X2+X3,data = df)} \CommentTok{#summary(lm.fit)} \CommentTok{#attach(df)} \CommentTok{#plot(y, X1+X2+X3)} \end{Highlighting} \end{Shaded} We see how the importance of X3 is ranked above X2 and likewise X2 is more important than X1. This is by design of the simulated data set. The standardized regression coefficients (beta coefficients) are calculated from that has been standardized, let's normalize and calculate the regression to see if indeed that is the case. \begin{Shaded} \begin{Highlighting}[] \NormalTok{dfs<-}\StringTok{ }\KeywordTok{data.frame}\NormalTok{(}\KeywordTok{scale}\NormalTok{(df,}\DataTypeTok{center =} \OtherTok{TRUE}\NormalTok{,}\DataTypeTok{scale =} \OtherTok{TRUE}\NormalTok{))} \NormalTok{lm.fit =}\StringTok{ }\KeywordTok{lm}\NormalTok{(y }\OperatorTok{~}\StringTok{ }\NormalTok{X1}\OperatorTok{+}\NormalTok{X2}\OperatorTok{+}\NormalTok{X3,}\DataTypeTok{data =}\NormalTok{ dfs)} \KeywordTok{summary}\NormalTok{(lm.fit)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## lm(formula = y ~ X1 + X2 + X3, data = dfs) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.46115 -0.09830 -0.01159 0.09760 0.36208 ## ## Coefficients: ## Estimate Std. 
Error t value Pr(>|t|) ## (Intercept) -4.074e-16 1.709e-02 0.000 1 ## X1 1.197e-01 1.728e-02 6.926 4.92e-10 *** ## X2 2.438e-01 1.739e-02 14.017 < 2e-16 *** ## X3 9.060e-01 1.734e-02 52.237 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.1709 on 96 degrees of freedom ## Multiple R-squared: 0.9717, Adjusted R-squared: 0.9708 ## F-statistic: 1098 on 3 and 96 DF, p-value: < 2.2e-16 \end{verbatim} We see that the values are very close. \hypertarget{partial-correlation-coefficients}{% \section{Partial Correlation Coefficients}\label{partial-correlation-coefficients}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{x <-}\StringTok{ }\KeywordTok{pcc}\NormalTok{(X, y, }\DataTypeTok{nboot =} \DecValTok{100}\NormalTok{)} \KeywordTok{print}\NormalTok{(x)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Call: ## pcc(X = X, y = y, nboot = 100) ## ## Partial Correlation Coefficients (PCC): ## original bias std. error min. c.i. max. c.i. ## X1 0.5772293 0.0056380312 0.067064354 0.4471891 0.7179550 ## X2 0.8196099 0.0025391757 0.029985334 0.7653207 0.8848415 ## X3 0.9828602 0.0005686946 0.002284419 0.9779532 0.9884970 \end{verbatim} \hypertarget{sobol-indices-for-deterministic-function-and-for-model}{% \section{Sobol indices for deterministic function and for model}\label{sobol-indices-for-deterministic-function-and-for-model}} \begin{Shaded} \begin{Highlighting}[] \NormalTok{y.fun <-}\StringTok{ }\ControlFlowTok{function}\NormalTok{(X) \{} \NormalTok{ X1<-}\StringTok{ }\NormalTok{X[,}\DecValTok{1}\NormalTok{]} \NormalTok{ X2<-}\StringTok{ }\NormalTok{X[,}\DecValTok{2}\NormalTok{]} \NormalTok{ X3<-}\StringTok{ }\NormalTok{X[,}\DecValTok{3}\NormalTok{]} \NormalTok{ X1}\OperatorTok{+}\NormalTok{X2}\OperatorTok{+}\NormalTok{X3} \NormalTok{\}} \NormalTok{yhat.fun<-}\ControlFlowTok{function}\NormalTok{(X,lm)} \NormalTok{\{} \NormalTok{ X1<-}\StringTok{ }\NormalTok{X[,}\DecValTok{1}\NormalTok{]} \NormalTok{ X2<-}\StringTok{ }\NormalTok{X[,}\DecValTok{2}\NormalTok{]} \NormalTok{ X3<-}\StringTok{ }\NormalTok{X[,}\DecValTok{3}\NormalTok{]} \NormalTok{ yhat <-}\StringTok{ }\KeywordTok{predict}\NormalTok{(lm.fit,}\KeywordTok{data.frame}\NormalTok{(}\DataTypeTok{X1=}\NormalTok{X1,}\DataTypeTok{X2=}\NormalTok{X2,}\DataTypeTok{X3=}\NormalTok{X3))} \KeywordTok{return}\NormalTok{(yhat)} \NormalTok{\}} \NormalTok{nboot =}\StringTok{ }\DecValTok{1000} \NormalTok{x <-}\StringTok{ }\KeywordTok{sobol}\NormalTok{(}\DataTypeTok{model =}\NormalTok{ y.fun, X[}\DecValTok{1}\OperatorTok{:}\DecValTok{50}\NormalTok{,], X[}\DecValTok{51}\OperatorTok{:}\DecValTok{100}\NormalTok{,], }\DataTypeTok{order =} \DecValTok{3}\NormalTok{, }\DataTypeTok{nboot =}\NormalTok{ nboot)} \NormalTok{S.sobol <-}\StringTok{ }\NormalTok{x}\OperatorTok{$}\NormalTok{S} \KeywordTok{pander}\NormalTok{(S.sobol)} \end{Highlighting} \end{Shaded} \begin{longtable}[]{@{}cccccc@{}} \toprule \begin{minipage}[b]{0.17\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.12\columnwidth}\centering original\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering bias\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering std. error\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering min. c.i.\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering max. 
c.i.\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X1}\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.3045\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 0.01067\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8787\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.464\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.949\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X2}\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.1629\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.003125\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8411\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.43\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.754\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X3}\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 1.485\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 0.0379\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.2252\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 0.9809\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.873\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering **X1*X2**\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering -0.3074\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.01577\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8972\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.956\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.464\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering **X1*X3**\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering -0.3074\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.01577\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8972\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.956\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.464\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering **X2*X3**\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering -0.3074\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.01577\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8972\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.956\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.464\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X1\emph{X2}X3}\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.3074\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 0.01577\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.8972\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.464\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.956\strut \end{minipage}\tabularnewline \bottomrule \end{longtable} \begin{Shaded} \begin{Highlighting}[] 
\CommentTok{#yhat.fun(data.frame(X1=1,X2=2,X3=3),lm.fit)} \NormalTok{x <-}\StringTok{ }\KeywordTok{sobol}\NormalTok{(}\DataTypeTok{model =}\NormalTok{ yhat.fun,X[}\DecValTok{1}\OperatorTok{:}\DecValTok{50}\NormalTok{,], X[}\DecValTok{51}\OperatorTok{:}\DecValTok{100}\NormalTok{,], }\DataTypeTok{order =} \DecValTok{3}\NormalTok{, }\DataTypeTok{nboot =}\NormalTok{ nboot)} \NormalTok{S.sobol <-}\StringTok{ }\NormalTok{x}\OperatorTok{$}\NormalTok{S} \KeywordTok{pander}\NormalTok{(S.sobol)} \end{Highlighting} \end{Shaded} \begin{longtable}[]{@{}cccccc@{}} \toprule \begin{minipage}[b]{0.17\columnwidth}\centering ~\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering original\strut \end{minipage} & \begin{minipage}[b]{0.12\columnwidth}\centering bias\strut \end{minipage} & \begin{minipage}[b]{0.14\columnwidth}\centering std. error\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering min. c.i.\strut \end{minipage} & \begin{minipage}[b]{0.13\columnwidth}\centering max. c.i.\strut \end{minipage}\tabularnewline \midrule \endhead \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X1}\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 7.339e-05\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering -0.04067\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.7359\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.386\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.419\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X2}\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.03858\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering -0.04091\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.7296\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.393\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.361\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering \textbf{X3}\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.283\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.01002\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.0622\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.136\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.385\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering **X1*X2**\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.001133\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.04042\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.738\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.408\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.396\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering **X1*X3**\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -0.001133\strut \end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering 0.04042\strut \end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering 0.738\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering -1.408\strut \end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering 1.396\strut \end{minipage}\tabularnewline \begin{minipage}[t]{0.17\columnwidth}\centering 
\textbf{X2*X3}\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
-0.001133\strut
\end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering
0.04042\strut
\end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering
0.738\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
-1.408\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
1.396\strut
\end{minipage}\tabularnewline
\begin{minipage}[t]{0.17\columnwidth}\centering
\textbf{X1*X2*X3}\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
0.001133\strut
\end{minipage} & \begin{minipage}[t]{0.12\columnwidth}\centering
-0.04042\strut
\end{minipage} & \begin{minipage}[t]{0.14\columnwidth}\centering
0.738\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
-1.396\strut
\end{minipage} & \begin{minipage}[t]{0.13\columnwidth}\centering
1.408\strut
\end{minipage}\tabularnewline
\bottomrule
\end{longtable}

\hypertarget{on-model-averaging}{%
\chapter{On Model Averaging}\label{on-model-averaging}}

Recall that we can break down model error into the bias and variance, where
\(\operatorname{bias}(\hat{Y}) = E[\hat{Y} - E[Y]]\). If we are averaging models
\(i = 1, \cdots, k\), then for each model
\(\operatorname{MSE}\left(\hat{Y}_{i}\right)=\left\{\operatorname{bias}\left(\hat{Y}_{i}\right)\right\}^{2}+\operatorname{var}\left(\hat{Y}_{i}\right)\).

\hypertarget{poincare-embedding}{%
\chapter{Poincare Embedding}\label{poincare-embedding}}

\hypertarget{embeddings}{%
\section{Embeddings}\label{embeddings}}

\emph{The Poincaré Embedding is concerned with the problem of learning
hierarchical structure in a dataset. A phylogenetic tree or the tree of
hypernymy are examples of hierarchically structured datasets. The embedding
space is a Poincaré ball, which is a unit ball equipped with the Poincaré
distance. An advantage of using Poincaré space rather than Euclidean space as
the embedding space is that it preserves tree-shaped structure well in
relatively low dimension. This is because the Poincaré distance is, intuitively,
a continuous version of the distance on a tree-shaped dataset. We can take
advantage of this property to provide efficient embeddings with comparably
lower dimensionality.}

See {[}\url{https://github.com/hwchang1201/poincare.embeddings/}{]}
\citep{DBLP:journals/corr/NickelK17} \citep{DBLP:journals/corr/abs-1806-03417}

\begin{Shaded}
\begin{Highlighting}[]
\KeywordTok{install.packages}\NormalTok{(}\StringTok{"remotes"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Installing package into '/home/bruce/R/x86_64-pc-linux-gnu-library/4.1'
## (as 'lib' is unspecified)
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{remotes}\OperatorTok{::}\KeywordTok{install_github}\NormalTok{(}\StringTok{"hwchang1201/poincare.embeddings"}\NormalTok{, }\DataTypeTok{upgrade =} \StringTok{"never"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
## Skipping install of 'poincare.embeddings' from a github remote, the SHA1 (7fd455e0) has not changed since last install.
## Use `force = TRUE` to force installation \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{install.packages}\NormalTok{(}\StringTok{'data.tree'}\NormalTok{)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## Installing package into '/home/bruce/R/x86_64-pc-linux-gnu-library/4.1' ## (as 'lib' is unspecified) \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{library}\NormalTok{(yaml)} \KeywordTok{library}\NormalTok{(data.tree)} \CommentTok{# defining tree structured dataset.} \NormalTok{acme_treeDataset <-}\StringTok{ }\NormalTok{Node}\OperatorTok{$}\KeywordTok{new}\NormalTok{(}\StringTok{"Acme Inc."}\NormalTok{)} \NormalTok{ accounting <-}\StringTok{ }\NormalTok{acme_treeDataset}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"Accounting"}\NormalTok{)} \NormalTok{ software <-}\StringTok{ }\NormalTok{accounting}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"New Software"}\NormalTok{)} \NormalTok{ standards <-}\StringTok{ }\NormalTok{accounting}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"New Accounting Standards"}\NormalTok{)} \NormalTok{ research <-}\StringTok{ }\NormalTok{acme_treeDataset}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"Research"}\NormalTok{)} \NormalTok{ newProductLine <-}\StringTok{ }\NormalTok{research}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"New Product Line"}\NormalTok{)} \NormalTok{ newLabs <-}\StringTok{ }\NormalTok{research}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"New Labs"}\NormalTok{)} \NormalTok{ it <-}\StringTok{ }\NormalTok{acme_treeDataset}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"IT"}\NormalTok{)} \NormalTok{ outsource <-}\StringTok{ }\NormalTok{it}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"Outsource"}\NormalTok{)} \NormalTok{ agile <-}\StringTok{ }\NormalTok{it}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"Go agile"}\NormalTok{)} \NormalTok{ goToR <-}\StringTok{ }\NormalTok{it}\OperatorTok{$}\KeywordTok{AddChild}\NormalTok{(}\StringTok{"Switch to R"}\NormalTok{)} \KeywordTok{print}\NormalTok{(acme_treeDataset)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## levelName ## 1 Acme Inc. ## 2 ¦--Accounting ## 3 ¦ ¦--New Software ## 4 ¦ °--New Accounting Standards ## 5 ¦--Research ## 6 ¦ ¦--New Product Line ## 7 ¦ °--New Labs ## 8 °--IT ## 9 ¦--Outsource ## 10 ¦--Go agile ## 11 °--Switch to R \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# loading package "poincare.embeddings"} \KeywordTok{library}\NormalTok{(poincare.embeddings)} \CommentTok{# use example dataset} \CommentTok{# 1. use "toy"} \NormalTok{toy_yaml <-}\StringTok{ }\NormalTok{yaml}\OperatorTok{::}\KeywordTok{yaml.load}\NormalTok{(toy)} \NormalTok{toy_tree <-}\StringTok{ }\NormalTok{data.tree}\OperatorTok{::}\KeywordTok{as.Node}\NormalTok{(toy_yaml)} \NormalTok{emb <-}\StringTok{ }\KeywordTok{poincareEmbeddings}\NormalTok{(toy_tree, }\DataTypeTok{theta_dim =} \DecValTok{2}\NormalTok{, }\DataTypeTok{N_epoch =} \DecValTok{200}\NormalTok{, }\DataTypeTok{lr =} \FloatTok{0.05}\NormalTok{, }\DataTypeTok{n_neg =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{topics_in_data_science_files/figure-latex/unnamed-chunk-9-1.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# 2. 
use "statistics"} \NormalTok{statistics_yaml <-}\StringTok{ }\NormalTok{yaml}\OperatorTok{::}\KeywordTok{yaml.load}\NormalTok{(statistics)} \NormalTok{statistics_tree <-}\StringTok{ }\NormalTok{data.tree}\OperatorTok{::}\KeywordTok{as.Node}\NormalTok{(statistics_yaml)} \NormalTok{emb <-}\StringTok{ }\KeywordTok{poincareEmbeddings}\NormalTok{(statistics_tree, }\DataTypeTok{theta_dim =} \DecValTok{2}\NormalTok{, }\DataTypeTok{N_epoch =} \DecValTok{200}\NormalTok{, }\DataTypeTok{lr =} \FloatTok{0.05}\NormalTok{, }\DataTypeTok{n_neg =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{topics_in_data_science_files/figure-latex/unnamed-chunk-9-2.pdf} \begin{Shaded} \begin{Highlighting}[] \CommentTok{# 3. use "statistics_adv"} \NormalTok{statistics_adv_yaml <-}\StringTok{ }\NormalTok{yaml}\OperatorTok{::}\KeywordTok{yaml.load}\NormalTok{(statistics_adv)} \NormalTok{statistics_adv_tree <-}\StringTok{ }\NormalTok{data.tree}\OperatorTok{::}\KeywordTok{as.Node}\NormalTok{(statistics_adv_yaml)} \NormalTok{emb <-}\StringTok{ }\KeywordTok{poincareEmbeddings}\NormalTok{(statistics_adv_tree, }\DataTypeTok{theta_dim =} \DecValTok{2}\NormalTok{, }\DataTypeTok{N_epoch =} \DecValTok{200}\NormalTok{, }\DataTypeTok{lr =} \FloatTok{0.05}\NormalTok{, }\DataTypeTok{n_neg =} \DecValTok{10}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{topics_in_data_science_files/figure-latex/unnamed-chunk-9-3.pdf} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{print}\NormalTok{(}\KeywordTok{paste}\NormalTok{(}\StringTok{"The ranking of the poincare embedding :"}\NormalTok{, emb}\OperatorTok{$}\NormalTok{rank))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "The ranking of the poincare embedding : 5.48888888888889" \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \KeywordTok{print}\NormalTok{(}\KeywordTok{paste}\NormalTok{(}\StringTok{"The mean average precision of the poincare embedding :"}\NormalTok{, emb}\OperatorTok{$}\NormalTok{map))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] "The mean average precision of the poincare embedding : 0.73738246401919" \end{verbatim} \hypertarget{multirelational}{% \chapter{Multirelational}\label{multirelational}} \url{https://github.com/ibalazevic/multirelational-poincare} GeomStats \url{https://geomstats.github.io/geomstats.github.io/05_embedding_graph_structured_data_h2.html} \hypertarget{mlops}{% \chapter{MLOps}\label{mlops}} \emph{Development platform: a collaborative platform for performing ML experiments and empowering the creation of ML models by data scientists should be considered part of the MLOps framework. This platform should enable secure access to data sources (e.g.~from data engineering workflows). We want the handover from ML training to deployment to be as smooth as possible, which is more likely the case for such a platform as compared to ML models developed in different local environment. Model unit testing: every time we create, change, or retrain a model, we should automatically validate the integrity of the model, e.g. - should meet minimum performance on test set - should perform well on synthetic use case-specific datasets Versioning: it should be possible to go back in time to inspect everything relating to a given model; e.g.~what data \& code was used. Why? because it something breaks, we need to be able to go back in time and see why. Model registry: there should be an overview of deployed \& decommissioned ML models, their version history, and deployment stage of each version. Why? 
if something breaks, we can rollback a previous archived version back into production. Model Governance: only certain people should have access to see training related to any given model, and there should be access control for who can request/reject/approve transitions between deployment stages (e.g.~dev to test to prod) in the model registry. Deployments: deployment can be many things, but in this post I consider the case where we want to deploy a model to cloud infrastructure and expose an API, which enables other people to consume and use the model, i.e.~I'm not considering cases where we want to deploy ML models into embedded systems. Efficient model deployments on appropriate infrastructure should: - support multiple ML frameworks + custom models - have well defined API spec (e.g.~Swagger/OpenAPI) - support containerized model servers Monitoring: tracking performance metrics (throughput, uptime, etc.). Why? If all of the sudden a model starts returning errors, or being unexpectedly slow, we need to know before the end-user complains, so that we can fix it. Feedback: we need to feedback information to the model on how well it is performing. Why? typically we run predictions on new samples where we do not yet know the ground truth. As we learn the truth, however, we need to inform the model, so that it can report on how well it is actually doing. A/B testing: no matter how solid cross-validation we think we're doing, we never know how the model will perform until it actually gets deployed. It should be easy to perform A/B experiments with live models within the MLOps framework. Drift detection: typically the longer time a given model is deployed the worse it becomes as circumstances change compared to the time of training the model. We can try to monitor and alert on these different circumstances, or ``drifts'', before they get too severe: - Concept drift: when the relation between input and output has changed - Prediction drift: changes in incoming predictions, but model still holds - Label drift: change in the model's outcomes compared to training data - Feature drift: change in the distribution of model input data Outlier detection: if a deployed model receives an input sample which is significantly different from anything observed during training, we can try to identify this sample as a potential outlier, and the returned prediction should be marked as such, indicating that the end-user should be careful in trusting the prediction. Adversarial Attack Detection: we should be warned when our models are attacked by adversarial samples (e.g.~someone trying to abuse / manipulate the outcome of our algorithms). Interpretability: the ML deployments should support endpoints returning the explanation of our prediction, e.g.~through SHAP values. Why? for a lot of use cases a prediction is not enough and the end-user needs to know why a given prediction was made. Governance of deployments: we not only need access restrictions on who can see the data, trained models, etc, but also for who can eventually use the deployed models. These deployed models can often be just as confidential as the data they were trained on. 
Data-centricity: rather than focus on model performance \& improvements, it makes sense that a MLOps framework also enables an increased focus on how data quality and breadth can be improved.} \hypertarget{introduction-to-normalizing-flows}{% \chapter{Introduction to Normalizing Flows}\label{introduction-to-normalizing-flows}} \hypertarget{variational-inference-with-nf}{% \section{Variational Inference With NF}\label{variational-inference-with-nf}} \emph{Variational inference now lies at the core of large-scale topic models of text (Hoffman et al., 2013), provides the state-of-the-art in semi-supervised classification (Kingma et al., 2014), drives the models that currently produce the most realistic generative models of images (Gregor et al., 2014; Rezende et al., 2014), and are a default Proceedings of the 32 nd International Conference on Machine Learning, Lille, France, 2015. JMLR: W\&CP volume 37. Copyright 2015 by the author(s). tool for the understanding of many physical and chemical systems. Despite these successes and ongoing advances, there are a number of disadvantages of variational methods that limit their power and hamper their wider adoption as a default method for statistical inference. It is one of these limitations, the choice of posterior approximation, that we address in this paper} {[}\url{http://proceedings.mlr.press/v37/rezende15.pdf}{]} \emph{Generative modeling loosely refers to building a model of data, for instance p(image), that we can sample from. This is in contrast to discriminative modeling, such as regression or classification, which tries to estimate conditional distributions such as p(class \textbar{} image).} See also \url{https://blog.evjang.com/2018/01/nf1.html} \url{https://deepgenerativemodels.github.io/notes/flow} \hypertarget{sa-and-m-estimators}{% \chapter{SA and M-estimators}\label{sa-and-m-estimators}} R. Martin and C. Masreliez, ``Robust estimation via stochastic approximation,'' in IEEE Transactions on Information Theory, vol.~21, no. 3, pp.~263-271, May 1975, doi: 10.1109/TIT.1975.1055386. \hypertarget{random-matrix-theory-and-machine-learning}{% \chapter{Random Matrix Theory and Machine Learning}\label{random-matrix-theory-and-machine-learning}} \bibliography{packages.bib,book.bib} \end{document}
{ "alphanum_fraction": 0.7695578542, "avg_line_length": 54.1048121292, "ext": "tex", "hexsha": "5cf5d5076dc9337bc9a70041bdec4e1e18d0daf7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5d4fcb7e153358f8e140d58f7c91ce0478f6ed2f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "brucebcampbell/stds", "max_forks_repo_path": "topics_in_data_science.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5d4fcb7e153358f8e140d58f7c91ce0478f6ed2f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "brucebcampbell/stds", "max_issues_repo_path": "topics_in_data_science.tex", "max_line_length": 2046, "max_stars_count": null, "max_stars_repo_head_hexsha": "5d4fcb7e153358f8e140d58f7c91ce0478f6ed2f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "brucebcampbell/stds", "max_stars_repo_path": "topics_in_data_science.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 23184, "size": 82077 }
\section*{Introduction}
\addcontentsline{toc}{section}{Introduction}

The main text of a thesis always starts with an ``Introduction''. You can leave
writing it until the final phase of your work. It is a good idea to open the
Introduction with the main thesis statement or research question of the thesis.
After that, clarify things by defining any necessary terms.\footnote{Definitions
after the thesis statement! Also, don't babble in the introduction.} This
section is also a good place to discuss why your thesis statement is
scientifically or practically relevant and interesting.

The Introduction is the place where you explain what your contribution is --
the knowledge that you have investigated or produced personally. At the end of
the section, it is customary to briefly explain the structure of the thesis --
what each chapter is about.

Please note that the instructions given in this sample are by no means
official. Always follow your supervisor's instructions even if they conflict
with what this sample says.
{ "alphanum_fraction": 0.7992351816, "avg_line_length": 52.3, "ext": "tex", "hexsha": "4f229ca6c754e688d270baf740326818fc290142", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "klumiste/UTMatStat-thesis-template", "max_forks_repo_path": "02_introduction.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "klumiste/UTMatStat-thesis-template", "max_issues_repo_path": "02_introduction.tex", "max_line_length": 78, "max_stars_count": null, "max_stars_repo_head_hexsha": "09ea8ee128f7ea9c22949cd9cdf2ae4cf67ede78", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "klumiste/UTMatStat-thesis-template", "max_stars_repo_path": "02_introduction.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 218, "size": 1046 }
\documentclass[vecarrow]{svproc}
%
% RECOMMENDED
% to typeset URLs, URIs, and DOIs
\usepackage{url}
\def\UrlFont{\rmfamily}
\usepackage{listings}
\usepackage{color}
\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}
\lstset{frame=tb,
  aboveskip=3mm,
  belowskip=3mm,
  showstringspaces=false,
  columns=flexible,
  basicstyle={\small\ttfamily},
  numbers=none,
  numberstyle=\tiny\color{gray},
  keywordstyle=\color{blue},
  commentstyle=\color{dkgreen},
  stringstyle=\color{mauve},
  breaklines=true,
  breakatwhitespace=true,
  tabsize=3
}
\usepackage{graphicx}
\usepackage{booktabs}
\usepackage{rotating}
\usepackage{pdflscape}
%\usepackage{amsmath}

\begin{document}
\mainmatter              % start of a contribution
%
\title{RNN in TensorFlow}
%
\titlerunning{RNN in TensorFlow}  % abbreviated title (for running head)
%                                  also used for the TOC unless
%                                  \toctitle is used
%
\author{Mohd Zamri Murah}
%
%\authorrunning{Ivar Ekeland et al.}   % abbreviated author list (for running head)
% list of authors for the TOC (use if author list has to be modified)
%\tocauthor{Ivar Ekeland, Roger Temam, Jeffrey Dean, David
\institute{Center for Artificial Intelligence Technology\\
Fakulti Teknologi Sains Maklumat\\
Universiti Kebangsaan Malaysia\\
\email{[email protected]}}

\maketitle              % typeset the title of the contribution

\begin{abstract}
TensorFlow is an open-source deep learning library developed by Google. It has
been used in many areas such as image recognition, text-to-speech engines,
pattern recognition and big data. This note provides introductory concepts for
computation using TensorFlow.
\keywords{deep learning}
\end{abstract}
%
\section{Introduction}

Feed-forward networks operate on fixed-size vectors. For example, they map the
pixels of a $28 \times 28$ image to the probabilities of $10$ possible classes.
The computation happens in a fixed number of steps, namely the number of
layers. In contrast, recurrent networks can operate on variable-length
sequences of vectors, either as input, output or both.

RNNs are basically arbitrary directed graphs of neurons and weights. Input
neurons have no incoming connections because their activation is set by the
input data anyway. The output neurons are just a set of neurons in the graph
that we read the prediction from. All other neurons in the graph are referred
to as hidden neurons. A basic RNN is shown in figure~\ref{fig:1}.

\begin{figure}
% https://en.wikibooks.org/wiki/LaTeX/Importing_Graphics
% Use the relevant command to insert your figure file.
% For example, with the graphicx package use
\centering{\fbox{\includegraphics[scale=.15]{rnn.png}}}
%
\caption{A feed-forward network and a recurrent network}
\label{fig:1}       % Give a unique label
\end{figure}

The state of an RNN depends on the current input and the previous state, which
in turn depends on the input and state before that. Therefore, the state has
indirect access to all previous inputs of the sequence and can be interpreted
as a working memory.

\begin{figure}
% https://en.wikibooks.org/wiki/LaTeX/Importing_Graphics
% Use the relevant command to insert your figure file.
% For example, with the graphicx package use
\centering{\fbox{\includegraphics[scale=.15]{rnn1.png}}}
%
\caption{A feed-forward network and a recurrent network}
\label{fig:2}       % Give a unique label
\end{figure}

\bibliography{tensorflow}
\bibliographystyle{splncs_srt}
\end{document}
{ "alphanum_fraction": 0.7775189284, "avg_line_length": 33.0192307692, "ext": "tex", "hexsha": "f116c931b0c251ad47450b58f7da219ab7c971f9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "ea6bdbc4184e564d81ec0990af25a94a5abf010f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mohdzamrimurah/ftsm_technical_reports", "max_forks_repo_path": "tensor_rnn/tensor_rnn.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "ea6bdbc4184e564d81ec0990af25a94a5abf010f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mohdzamrimurah/ftsm_technical_reports", "max_issues_repo_path": "tensor_rnn/tensor_rnn.tex", "max_line_length": 348, "max_stars_count": null, "max_stars_repo_head_hexsha": "ea6bdbc4184e564d81ec0990af25a94a5abf010f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mohdzamrimurah/ftsm_technical_reports", "max_stars_repo_path": "tensor_rnn/tensor_rnn.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 927, "size": 3434 }
\documentclass[aps,pra,reprint, onecolumn, rmp]{revtex4-2}
\usepackage{lipsum}
\usepackage{listings}
\usepackage{etoolbox}
\patchcmd{\section}
  {\centering}
  {\raggedright}
  {}
  {}
\patchcmd{\subsection}
  {\centering}
  {\raggedright}
  {}
  {}
\usepackage{xcolor}
\definecolor{codegreen}{rgb}{0,0.6,0}
\definecolor{codegray}{rgb}{0.5,0.5,0.5}
\definecolor{codepurple}{rgb}{0.58,0,0.82}
\definecolor{backcolour}{rgb}{0.95,0.95,0.92}
\lstdefinestyle{pystyle}{
    commentstyle=\color{codegreen},
    keywordstyle=\color{magenta},
    numberstyle=\tiny\color{codegray},
    stringstyle=\color{codepurple},
    basicstyle=\ttfamily\footnotesize,
    breakatwhitespace=false,
    breaklines=true,
    captionpos=b,
    keepspaces=true,
    numbers=left,
    numbersep=5pt,
    showspaces=false,
    showstringspaces=false,
    showtabs=false,
    tabsize=2
}
%\lstset{style=pystyle}
\lstset{language=Python}
\lstset{frame=lines}
\lstset{caption={Insert code directly in your document}}
\lstset{label={lst:code_direct}}
\lstset{basicstyle=\footnotesize}
\usepackage{booktabs}
\usepackage{siunitx}
\usepackage{adjustbox}
\usepackage{tabularx}
\newcommand\setrow[1]{\gdef\rowmac{#1}#1\ignorespaces}
\newcommand\clearrow{\global\let\rowmac\relax}
\clearrow
\usepackage{amsmath}

\begin{document}

%Title of paper
\title{COMPUTATIONAL PHYSICS - EXERCISE SHEET 05 \\Nose-Hoover Thermostat}

\author{Matteo Garbellini}
\email[]{[email protected]}
\homepage[]{https://github.com/mgarbellini/Computational-Physics-Material-Science}
\affiliation{Department of Physics, University of Freiburg \\ }
\date{\today}

\begin{abstract}
The following is the report for Exercise Sheet 05. The goal of this exercise is
to simulate a Lennard-Jones fluid using the Nose-Hoover NVT thermostat, and to
study significant thermostatistical quantities. Along with this report, Python
scripts were also handed in. Additional code can be found at the GitHub
repository given at the end of this page. \textbf{As of today the code does not
work properly, and shows unrealistic, unphysical results.}
\end{abstract}

%\maketitle must follow title, authors, abstract, and keywords
\maketitle

% % % % % % % % % %
% MAIN REPORT
% % % % % % % % % %
\section{Code implementation}
The new code implements the Nose-Hoover thermostat, which includes a
Velocity-Verlet type of integrator for positions, velocities and the additional
thermostat quantities. Moreover, a routine for computing the specific heat was
implemented, together with a statistical routine for calculating the running
average of a given quantity.\\
Additionally, a new routine was implemented for generating random velocities
distributed directly according to a Maxwell-Boltzmann distribution. Although
not necessary, some rescaling iterations are still performed.

% % % % % % % % % %
% SECTION 1
% % % % % % % % % %
\section{Problems with the code implementation}
Beginning last week, when running the code I noticed some inconsistent results,
in particular related to the difference in magnitude between kinetic energy and
potential energy. The resulting values of the specific heat computed using
$\sigma_E^2$ and $\sigma_U^2$ differed by many orders of magnitude. Indeed, by
outputting the variance results, I discovered that the variance would increase
over time, with peaks of $\sigma_E^2\approx 10^6$.

\subsection{Issues}
In order to compare my results with those of fellow students I decided to
switch back to real units -- considering that so far I had been using reduced
units for convenience. This made the comparison much easier and gave an
intuition of the possible errors.
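For reference, the two specific-heat estimates mentioned above are assumed
throughout to follow the standard canonical (NVT) fluctuation relations, with
$\sigma_E^2 = \langle E^2 \rangle - \langle E \rangle^2$ computed from the total
energy and $\sigma_U^2$ from the potential energy alone:
\begin{equation}
C_v \;=\; \frac{\sigma_E^2}{k_B T^2} \;=\; \frac{3}{2}N k_B + \frac{\sigma_U^2}{k_B T^2},
\end{equation}
so the two estimators should agree up to the ideal-gas kinetic contribution
$\frac{3}{2}N k_B$, which makes a discrepancy of many orders of magnitude
clearly unphysical.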
After some time of debugging I found the following major issues:
\begin{itemize}
  \item The temperature after the first step would increase to incredibly high
  values $>10^{44}$ in very little time
  \item The energy was off, and the kinetic energy would remain constant after
  the first few iterations, which is not a physical result.
  \item The velocity rescaling at the first step had some issues, resulting in
  a kinetic energy of $\approx 10^{-19}$ before and $\approx 10^{23}$ after the
  rescaling.
  \item Unrealistic specific heat values $C_v \approx 10^{200}$ (even after
  some fixes, see plot)
\end{itemize}

\subsection{Fixes}
So far I believe I have been able to fix some of the major issues; in
particular I found the following bugs:
\begin{itemize}
  \item The velocity update would add the new velocity twice at each time step,
  due to an erroneous \texttt{+=}
  \item Somehow the velocity rescaling would compute the temperature in the
  wrong way, leading to inconsistent results
\end{itemize}

\subsection{Unresolved issues}
The following are some of the unresolved issues so far:
\begin{itemize}
  \item The specific heat shows unphysical results
  \item The running average used with the specific heat produces errors of the
  type \textit{RuntimeWarning: overflow in true-divide}
  \item When outputting $\xi(t)$, the order of magnitude does not seem correct
\end{itemize}

\subsection{Comments}
By outputting the temperatures at each timestep, it seems that the new
implementation now produces physically meaningful results. This, however, took
me almost a week of debugging, and therefore I was not able to finish the
simulations necessary for this exercise sheet. In the next section I will
provide some preliminary results. Even though the temperature shows realistic
results, the specific heat still does not show correct results. \textbf{Overall
the code does not work properly yet.}\\
Additionally, it is to be noted that the issues arose from a clean-up and
rewrite of some parts of the code. Therefore I think that the results obtained
in previous exercise sheets should be correct, although this could also explain
some minor inconsistencies -- in particular some of the major fluctuations
found in the density profiles.

% % % % % % % % % %
% SECTION 2
% % % % % % % % % %
\section{Some preliminary results}
The following are some preliminary results showing that some of the code fixes
were successful.

\subsection{Temperature}
The temperature seems to be correct. The figure shows a simulation run with
$N=6^3$ particles, with 2000 equilibration iterations at $T=300\,$K, followed
by 2000 production iterations each at $T=300\,$K and $T=100\,$K, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=90mm]{./plots/T}
\caption{The figure shows the temperature over time}
\end{figure}

\subsection{Specific Heat}
The specific heat still shows some unrealistic results. Additional debugging is
needed. As can be seen from the figure, the specific heat computed with
$\sigma_E^2$ does not even fall within the range of the one computed with
$\sigma_U^2$.
\begin{figure}[h]
\centering
\includegraphics[width=90mm]{./plots/cv}
\caption{The figure shows the specific heat over time}
\end{figure}

\subsection{Plot of $\xi(t)$}
The results for $\xi(t)$ seem quite unrealistic as well, given the enormous
fluctuations.
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{./plots/xi}
\caption{The figure shows the value of $\xi(t)$ over time}
\end{figure}

\end{document}
{ "alphanum_fraction": 0.756281407, "avg_line_length": 43.6829268293, "ext": "tex", "hexsha": "61908887af23f5f3514a829ed93a74ceae41e768", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-05-02T16:11:09.000Z", "max_forks_repo_forks_event_min_datetime": "2021-05-02T16:11:09.000Z", "max_forks_repo_head_hexsha": "08f456e94cc27dd2a9bf47ae9c793c0b7342b4bc", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mgarbellini/Computational-Physics-Material-Science", "max_forks_repo_path": "Report/Exercise_Sheet_05.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "08f456e94cc27dd2a9bf47ae9c793c0b7342b4bc", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mgarbellini/Computational-Physics-Material-Science", "max_issues_repo_path": "Report/Exercise_Sheet_05.tex", "max_line_length": 486, "max_stars_count": null, "max_stars_repo_head_hexsha": "08f456e94cc27dd2a9bf47ae9c793c0b7342b4bc", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mgarbellini/Computational-Physics-Material-Science", "max_stars_repo_path": "Report/Exercise_Sheet_05.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1780, "size": 7164 }
% BEGIN LICENSE BLOCK % Version: CMPL 1.1 % % The contents of this file are subject to the Cisco-style Mozilla Public % License Version 1.1 (the "License"); you may not use this file except % in compliance with the License. You may obtain a copy of the License % at www.eclipse-clp.org/license. % % Software distributed under the License is distributed on an "AS IS" % basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See % the License for the specific language governing rights and limitations % under the License. % % The Original Code is The ECLiPSe Constraint Logic Programming System. % The Initial Developer of the Original Code is Cisco Systems, Inc. % Portions created by the Initial Developer are % Copyright (C) 2006 Cisco Systems, Inc. All Rights Reserved. % % Contributor(s): % % END LICENSE BLOCK %---------------------------------------------------------------------- \chapter{An Overview of the Constraint Libraries} %HEVEA\cutdef[1]{section} %---------------------------------------------------------------------- %---------------------------------------------------------------------- \section{Introduction} %---------------------------------------------------------------------- In this section we shall briefly summarize the constraint solving libraries of \eclipse which will be discussed in the rest of this tutorial. %No examples are given here - each solver has its own documentation %with examples for the interested reader. %---------------------------------------------------------------------- \section{Implementations of Domains and Constraints} %---------------------------------------------------------------------- \subsection{Suspended Goals: {\em suspend}} \index{suspend} \label{shortsecsuspend} The constraint solvers of { \eclipse } are all implemented using suspended goals. The simplest implementation of any constraint is to suspend it until all its variables are sufficiently instantiated, and then test it. The {\em suspend} solver implements this behaviour for all the mathematical constraints of { \eclipse }, $>=$, $>$, $=:=$, =\bsl=, $=<$ and $<$. \subsection{Interval Solver: {\em ic}} \index{ic} \label{shortsecic} The standard constraint solver offered by most constraint programming systems is the {\em finite domain} solver, which applies constraint propagation techniques developed in the AI community \cite{VanHentenryck}. { \eclipse } supports finite domain constraints via the {\em ic} library\footnote{and the {\em fd} library which will not be addressed in this tutorial}. The library implements finite domains of integers, together with a basic set of constraints. In addition, {\em ic} also allows {\em continuous domains} (in the form of numeric intervals), and constraints (equations and inequations) between expressions involving variables with continuous domains. These expressions can contain non-linear functions such as $sin$ and built-in constants such as $pi$. %The user can also specify any piecewise linear unary function and {\em ic} %will apply interval reasoning on that. Integrality is treated as a constraint, and it is possible to mix continuous and integral variables in the same constraint. Specialised search techniques ({\em splitting} \cite{VanHentenryck:95} and {\em squashing} \cite{lhomme96boosting}) support the solving of problems with continuous variables. Most constraints are also available in reified form, providing a convenient way of combining several primitive constraints. 
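Purely as an illustration (this overview deliberately leaves detailed examples
to the individual solver chapters), a small {\em ic} goal might look roughly as
follows -- a sketch assuming the standard {\em ic} syntax for domains,
arithmetic constraints and labelling, which is explained in
chapter~\ref{chapicintro}:
\begin{verbatim}
?- lib(ic).
?- [X, Y] :: 1..10, X + Y #= 12, X #> Y, labeling([X, Y]).
\end{verbatim}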
Note that the {\em ic} library itself implements only a standard, basic set of arithmetic constraints. Many more finite domain constraints can be defined, which have uses in specific applications. The behaviour of these constraints is to prune the finite domains of their variables, in just the same way as the standard constraints. \eclipse{} offers several further libraries which implement such constraints using the underlying domain of the {\em ic} library. \subsection{Global Constraints: {\em ic\_global}} \index{ic_global} \label{shortsecglobal} \label{secglobalcstr} One such library is {\em ic\_global}. It supports a variety of constraints, each of which takes as an argument a list of finite domain variables, of unspecified length. Such constraints are called ``global'' constraints \cite{beldiceanu}. Examples of such constraints, available from the {\em ic\_global} library are \verb0alldifferent/10, \verb0maxlist/20, \verb0occurrences/30 and \verb0sorted/20. For more details see section \ref{secglobal} in chapter \ref{chapicintro}. \subsection{Scheduling Constraints: {\em ic_cumulative, ic_edge_finder}} \index{cumulative} \index{edge_finder} \label{shortsecsched} There are several { \eclipse } libraries implementing global constraints for scheduling applications. The constraints take a list of tasks (start times, durations and resource needs), and a maximum resource level. They reduce the finite domains of the task start times by reasoning on resource bottlenecks \cite{lepape}. Three { \eclipse } libraries implementing scheduling constraints are {\em ic_cumulative}, {\em ic_edge\_finder} and {\em ic_edge\_finder3}. They implement the same constraints declaratively, but with different time complexity and strength of propagation. For more details see the library documentation in the Reference Manual. \subsection{Finite Integer Sets: {\em ic_sets}} \index{ic_sets} \label{shortsecsets} The {\em ic_sets} library implements constraints over the domain of finite sets of integers\footnote{ the other set solvers lib(conjunto) and lib(fd_sets) are similar but not addressed in this tutorial}. The constraints are the usual relations over sets, e.g.\ membership, inclusion, intersection, union, disjointness. In addition, there are constraints between sets and integers, e.g.\ cardinality and weight constraints. For those, the {\em ic_sets} library cooperates with the {\em ic} library. For more details see chapter \ref{icsets}. \subsection{Linear Constraints: {\em eplex}} \index{eplex} \label{shortseceplex} {\em eplex} supports a tight integration \cite{Bockmayr} between an external linear programming (LP) / mixed integer programming (MIP) solver (XPRESS \cite{Dash} or CPLEX \cite{ILOG}) and { \eclipse }. Constraints as well as variables can be handled by the external LP/MIP solver, by a propagation solver like {\em ic}, or by both. Optimal solutions and other solution porperties can be returned to \eclipse{} as required. Search can be carried out either in \eclipse{} or in the external solver. For more details see chapter \ref{chapeplex}. \subsection{Constraints over symbols: {\em ic\_symbolic}} \index{ic_symbolic} \label{shortsecsymbolic} The {\em ic\_symbolic} library supports variables ranging over ordered symbolic domains (e.g. the names of products, the names of the weekdays), and constraints over such variables. It is implemented by mapping such variables and constraints to variables over integers and {\em ic}-constraints. 
%---------------------------------------------------------------------- \section{User-Defined Constraints} %---------------------------------------------------------------------- \subsection{Generalised Propagation: {\em propia}} \index{propia} \label{shortsecpropia} The predicate {\em infers} takes as one argument any user-defined predicate, and as a second argument a form of propagation to be applied to that predicate. This functionality enables the user to turn any predicate into a constraint \cite{LeProvost93b}. The forms of propagation include finite domains and intervals. For more details see chapter \ref{chappropiachr}. \subsection{Constraint Handling Rules: {\em ech}} \index{ech} \index{chr} \label{shortsecech} The user can also specify predicates using rules with guards \cite{Fruehwirth}. They delay until the guard is entailed or disentailed, and then execute or terminate accordingly. This functionality enables the user to implement constraints in a way that is clearer than directly using the underlying {\em suspend} library. For more details see chapter \ref{chappropiachr}. %---------------------------------------------------------------------- \section{Search and Optimisation Support} %---------------------------------------------------------------------- \subsection{Tree Search Methods: {\em ic_search}} \index{ic_search} \label{shortsecsearch} \eclipse{} has built-in backtracking and is therefore well suited for performing depth-first tree search. With combinatorial problems, naive depth-first search is usually not good enough, even in the presence of constraint propagation. It is usually necessary to apply heuristics, and if the problems are large, one may even need to resort to incomplete search. The {\em ic_search} contains a collection of predefined, easy-to-use search heuristics as well as incomplete tree search strategies, applicable to problems involving {\em ic} variables. For more details see chapter \ref{chapsearch}. \subsection{Optimisation: {\em branch_and_bound}} \index{branch_and_bound} \label{shortsecbb} Solvers that are based on constraint propagation are typically only concerned with satisfiability, i.e.\ with finding some or all solutions to a problems. The branch-and-bound method is a general technique to build optimisation on top of a satisfiability solver. The \eclipse{} {\em branch_and_bound} library is a solver-independent implementation of the branch-and-bound method, and provides a number of options and variants of the basic technique. %---------------------------------------------------------------------- \section{Hybridisation Support} %---------------------------------------------------------------------- \subsection{Repair and Local Search: {\em repair}} \index{repair} \label{shortsecrepair} The {\em repair} library allows a {\em tentative} value to be associated with any variable \cite{cp99wkshoptalk}. This tentative value may violate constraints on the variable, in which case the constraint is recorded in a list of violated constraints. The repair library also supports propagation {\em invariants} \cite{Localizer}. Using invariants, if a variable's tentative value is changed, the consequences of this change can be propagated to any variables whose tentative values depend on the changed one. The use of tentative values in search is illustrated in chapter \ref{chaprepair}. 
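As a flavour of what such rules look like (shown here schematically in generic
CHR rule notation -- the exact {\em ech} declarations and syntax are described
in chapter \ref{chappropiachr}), the classic less-than-or-equal handler can be
written with three guarded rules:
\begin{verbatim}
reflexivity  @ leq(X, X) <=> true.
antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).
\end{verbatim}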
\subsection{Hybrid: {\em ic\_probing\_for\_scheduling}} \index{probing_for_scheduling} \label{shortsecprobing} For scheduling applications where the cost is dependent on each start time, a combination of solvers can be very powerful. For example, we can use finite domain propagation to reason on resources and linear constraint solving to reason on cost \cite{HaniProbe}. The {\em probing\_for\_scheduling} library supports such a combination, via a similar user interface to the {\em cumulative} constraint mentioned above in section \ref{secglobalcstr}. For more details see chapter \ref{chaphybrid}. %---------------------------------------------------------------------- \section{Other Libraries} %---------------------------------------------------------------------- The solvers described above are just a few of the many libraries available in ECLiPSe and listed in the \eclipse{} library directory. Any \eclipse{} user who has implemented a constraint solver is encouraged to make it available to the user community and publicise it via the {\tt [email protected]} mailing list! Comments and suggestions on the existing libraries are also welcome! %\bibliographystyle{alpha} %\bibliography{solver_intro} %\end{document} %HEVEA\cutend
{ "alphanum_fraction": 0.7231270358, "avg_line_length": 41.8136200717, "ext": "tex", "hexsha": "583d2635577750afbd5445090db22f4d69a6a608", "lang": "TeX", "max_forks_count": 55, "max_forks_repo_forks_event_max_datetime": "2022-03-31T05:00:03.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-03T05:28:12.000Z", "max_forks_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "lambdaxymox/barrelfish", "max_forks_repo_path": "usr/eclipseclp/documents/tutorial/solversintro.tex", "max_issues_count": 12, "max_issues_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_issues_repo_issues_event_max_datetime": "2020-03-18T13:30:29.000Z", "max_issues_repo_issues_event_min_datetime": "2016-03-22T14:44:32.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "lambdaxymox/barrelfish", "max_issues_repo_path": "usr/eclipseclp/documents/tutorial/solversintro.tex", "max_line_length": 88, "max_stars_count": 111, "max_stars_repo_head_hexsha": "06a9f54721a8d96874a8939d8973178a562c342f", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "lambdaxymox/barrelfish", "max_stars_repo_path": "usr/eclipseclp/documents/tutorial/solversintro.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-01T23:57:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-02-03T02:57:27.000Z", "num_tokens": 2497, "size": 11666 }
\subsection{Solving Markov Decision Processes with linear programming}
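The optimal value function of a discounted Markov Decision Process can be found
with a linear program as well as with value or policy iteration. A sketch of the
standard primal formulation, assuming finite state and action sets, reward
$R(s,a)$, transition probabilities $P(s' \mid s, a)$ and discount factor
$\gamma \in [0,1)$, is:

\[
\begin{array}{ll}
\displaystyle\min_{V} & \displaystyle\sum_{s} V(s) \\[1ex]
\textrm{subject to} & V(s) \ \ge\ R(s,a) + \gamma \displaystyle\sum_{s'} P(s' \mid s, a)\, V(s') \qquad \forall s, \forall a
\end{array}
\]

At the optimum the constraints that hold with equality in each state identify
the optimal actions there, so an optimal policy can be read off from the
solution.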
{ "alphanum_fraction": 0.8356164384, "avg_line_length": 18.25, "ext": "tex", "hexsha": "bad8ecd495e789dae26c45589d9e45af3d42e163", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/ai/MDP/03-01-linear.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/ai/MDP/03-01-linear.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/ai/MDP/03-01-linear.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 14, "size": 73 }
\chapter{The Enigma machine} \label{chapter:enigma} The Enigma machine is probably the most famous of all cryptographic devices in history, due to the prominent role it played in WWII~\cite{wiki:enigma}.\indEnigma The first Enigma machines were available around 1920s, with various models in the market for commercial use. When Germans used the Enigma during WWII, they were using a particular model referred to as the {\em Wehrmacht Enigma}, a fairly advanced model available at the time. The most important role of Enigma\indEnigma is in its role in the use of automated machines to aid in secret communication, or what is known as \emph{mechanizing secrecy}. One has to understand that computers as we understand them today were not available when Enigma was in operation. Thus, the Enigma employed a combination of mechanical (keyboard, rotors, etc.) and electrical parts (lamps, wirings, etc.) to implement its functionality. However, our focus in this chapter will not be on the mechanical aspects of Enigma at all. For a highly readable account of that, we refer the reader to Singh's excellent book on cryptography~\cite{Singh:1999:CBE}. Instead, we will model Enigma in Cryptol in an algorithmic sense, implementing Enigma's operations without any reference to the underlying mechanics. More precisely, we will model an Enigma machine that has a plugboard, three interchangeable scramblers, and a fixed reflector. \todo[inline]{Add a photo or two of some Enigmas here. Look into the WikiCommons.} \todo[inline]{Provide an architectural diagram of this Cryptol construction somewhere reasonable and refer to it regularly to give the reader a better big-picture of how the spec hangs together vs.~the actual machine.} %===================================================================== % \section{The plugboard} % \label{sec:enigma:plugboard} \sectionWithAnswers{The plugboard}{sec:enigma:plugboard} Enigma essentially implements a polyalphabetic substitution cipher (\autoref{section:subst})\indPolyAlphSubst, consisting of a number of rotating units that jumble up the alphabet. The first component is the so called plugboard (\textit{Steckerbrett} in German)\indEnigmaPlugboard. In the original Enigma, the plugboard provided a means of interchanging 6 pairs of letters. For instance, the plugboard could be set-up so that pressing the {\tt B} key would actually engage the {\tt Q} key, etc. We will slightly generalize and allow any number of pairings, as we are not limited by the availability of cables or actual space to put them in a box! Viewed in this sense, the plugboard is merely a permutation of the alphabet. In Cryptol, we can represent the plugboard combination by a string of 26 characters, corresponding to the pairings for each letter in the alphabet from {\tt A} to {\tt Z}: \begin{code} type Permutation = String 26 type Plugboard = Permutation \end{code} For instance, the plugboard matching the pairs {\tt A}-{\tt H}, {\tt C}-{\tt G}, {\tt Q}-{\tt X}, {\tt T}-{\tt V}, {\tt U}-{\tt Y}, {\tt W}-{\tt M}, and {\tt O}-{\tt L} can be created as follows: \begin{code} plugboard : Plugboard plugboard = "HBGDEFCAIJKOWNLPXRSVYTMQUZ" \end{code} Note that if a letter is not paired through the plugboard, then it goes untouched, i.e., it is paired with itself. \begin{Exercise}\label{ex:enigma:1} Use Cryptol to verify that the above plugboard definition indeed implements the pairings we wanted. 
\end{Exercise} \begin{Answer}\ansref{ex:enigma:1} We can simply ask Cryptol what the implied mappings are: \begin{Verbatim} Cryptol> [ plugboard @ (c - 'A') | c <- "ACQTUWO" ] "HGXVYML" \end{Verbatim} Why do we subtract the {\tt 'A'} when indexing? \end{Answer} \note{In Enigma, the plugboard pairings are symmetric; if {\tt A} maps to {\tt H}, then {\tt H} must map to {\tt A}.} %===================================================================== % \section{Scrambler rotors} % \label{sec:enigma:scramblerrotors} \sectionWithAnswers{Scrambler rotors}{sec:enigma:scramblerrotors} The next component of the Enigma are the rotors that scramble the letters. Rotors (\textit{walzen} in German)\indEnigmaRotor are essentially permutations, with one little twist: as their name implies, they rotate. This rotation ensures that the next character the rotor will process will be encrypted using a different alphabet, thus giving Enigma its polyalphabetic nature.\indPolyAlphSubst The other trick employed by Enigma is how the rotations are done. In a typical setup, the rotors are arranged so that the first rotor rotates at every character, while the second rotates at every 26th, the third at every 676th ($=26*26$), etc. In a sense, the rotors work like the odometer in your car, one full rotation of the first rotor triggers the second, whose one full rotation triggers the third, and so on. In fact, more advanced models of Enigma allowed for two notches per rotor, i.e., two distinct positions on the rotor that will allow the next rotor in sequence to rotate itself. We will allow ourselves to have any number of notches, by simply pairing each substituted letter with a bit value saying whether it has an associated notch:\footnote{The type definition for {\tt Char} was given in Example~\ref{section:subst}-\ref{ex:subst:1}.} \begin{code} type Rotor = [26](Char, Bit) \end{code} The function {\tt mkRotor} will create a rotor for us from a given permutation of the letters and the notch locations:\footnote{The function {\tt elem} was defined in \exerciseref{ex:recfun:4:1}.\indElem} \begin{code} mkRotor : {n} (fin n) => (Permutation, String n) -> Rotor mkRotor (perm, notchLocations) = [ (p, elem (p, notchLocations)) | p <- perm ] \end{code} \todo[inline]{A diagram here would be really useful, especially one that captures the location and use of notches and the state of these rotors before and after rotations.} Let us create a couple of rotors with notches: \begin{code} rotor1, rotor2, rotor3 : Rotor rotor1 = mkRotor ("RJICAWVQZODLUPYFEHXSMTKNGB", "IO") rotor2 = mkRotor ("DWYOLETKNVQPHURZJMSFIGXCBA", "B") rotor3 = mkRotor ("FGKMAJWUOVNRYIZETDPSHBLCQX", "CK") \end{code} For instance, {\tt rotor1} maps {\tt A} to {\tt R}, {\tt B} to {\tt J}, $\ldots$, and {\tt Z} to {\tt B} in its initial position. It will engage its notch if one of the permuted letters {\tt I} or {\tt O} appear in its first position. \begin{Exercise}\label{ex:enigma:2} Write out the encrypted letters for the sequence of 5 {\tt C}'s for {\tt rotor1}, assuming it rotates in each step. At what points does it engage its own notch to signal the next rotor to rotate? \end{Exercise} \begin{Answer}\ansref{ex:enigma:2} Recall that {\tt rotor1} was defined as: \begin{Verbatim} rotor1 = mkRotor ("RJICAWVQZODLUPYFEHXSMTKNGB", "IO") \end{Verbatim} Here is a listing of the new mappings and the characters we will get at the output for each successive {\tt C}: \begin{Verbatim} starting map output notch engaged? 
RJICAWVQZODLUPYFEHXSMTKNGB I no JICAWVQZODLUPYFEHXSMTKNGBR C no ICAWVQZODLUPYFEHXSMTKNGBRJ A yes CAWVQZODLUPYFEHXSMTKNGBRJI W no AWVQZODLUPYFEHXSMTKNGBRJIC V no \end{Verbatim} Note how we get different letters as output, even though we are providing the same input (all {\tt C}'s.) This is the essence of the Enigma: the same input will not cause the same output necessarily, making it a polyalphabetic substitution cipher.\indPolyAlphSubst \end{Answer} %===================================================================== % \section{Connecting the rotors: notches in action} % \label{sec:enigma:notches} \sectionWithAnswers{Connecting the rotors: notches in action}{sec:enigma:notches} \todo[inline]{A diagram here depicting rotor interchangeability and the relationship between \texttt{scramble} and rotors in the above figure.} The original Enigma had three interchangeable rotors. The operator chose the order they were placed in the machine. In our model, we will allow for an arbitrary number of rotors. The tricky part of connecting the rotors is ensuring that the rotations of each are done properly. Let us start with a simpler problem. If we are given a rotor and a particular letter to encrypt, how can we compute the output letter and the new rotor position? First of all, we will need to know if the rotor should rotate itself, that is if the notch between this rotor and the previous one was activated. Also, we need to find out if the act of rotation in this rotor is going to cause the next rotor to rotate. We will model this action using the Cryptol function {\tt scramble}: \begin{code} scramble : (Bit, Char, Rotor) -> (Bit, Char, Rotor) \end{code} The function {\tt scramble} takes a triple \texttt{(rotate, c, rotor)}: \begin{itemize} \item {\tt rotate}: if {\tt True}, this rotor will rotate after encryption. Indicates that the notch between this rotor and the previous one was engaged, \item {\tt c}: the character to encrypt, and \item {\tt rotor}: the current state of the rotor. \end{itemize} Similarly, the output will also be a triple: \begin{itemize} \item {\tt notch}: {\tt True} if the notch on this rotor engages, i.e., if the next rotor should rotate itself, \item {\tt c'}: the result of encrypting (substituting) for {\tt c} with the current state of the rotor. \item {\tt rotor'}: the new state of the rotor. If no rotation was done this will be the same as {\tt rotor}. Otherwise it will be the new substitution map obtained by rotating the old one to the left by one. \end{itemize} Coding {\tt scramble} is straightforward: \begin{code} scramble (rotate, c, rotor) = (notch, c', rotor') where (c', _) = rotor @ (c - 'A') (_, notch) = rotor @ 0 rotor' = if rotate then rotor <<< 1 else rotor \end{code} To determine {\tt c'}, we use the substitution map to find out what this rotor maps the given character to, with respect to its current state. Note how Cryptol's pattern matching notation\indPatMatch helps with extraction of {\tt c'}, as we only need the character, not whether there is a notch at that location. (The underscore character use, `\texttt{\_}',\indUnderscore means that we do not need the value at the position, and hence we do not give it an explicit name.) To determine if we have our notch engaged, all we need to do is to look at the first elements notch value, using Cryptol's selection operator ({\tt @ 0}\indIndex), and we ignore the permutation value there this time, again using pattern matching. 
Finally, to determine {\tt rotor'} we merely rotate left by 1\indRotLeft if the {\tt rotate} signal was received. Otherwise, we leave the {\tt rotor} unchanged. \begin{Exercise}\label{ex:enigma:3} Redo \exerciseref{ex:enigma:2}, this time using Cryptol and the {\tt scramble} function. \end{Exercise} \begin{Answer}\ansref{ex:enigma:3} We can define the following value to simulate the operation of always telling {\tt scramble} to rotate the rotor and providing it with the input {\tt C}. \begin{Verbatim} rotor1CCCCC = [(c1, n1), (c2, n2), (c3, n3), (c4, n4), (c5, n5)] where (n1, c1, r1) = scramble (True, 'C', rotor1) (n2, c2, r2) = scramble (True, 'C', r1) (n3, c3, r3) = scramble (True, 'C', r2) (n4, c4, r4) = scramble (True, 'C', r3) (n5, c5, r5) = scramble (True, 'C', r4) \end{Verbatim} \todo[inline]{Remind reader of simultaneity of \texttt{where} clauses.} Note how we chained the output rotor values in the calls, through the values {\tt r1}-{\tt r2}-{\tt r3} and {\tt r4}. We have: \begin{Verbatim} Cryptol> rotor1CCCCC [(I, False), (C, False), (A, True), (W, False), (V, False)] \end{Verbatim} Note that we get precisely the same results from Cryptol as we predicted in the previous exercise. \end{Answer} \note{The actual mechanics of the Enigma machine were slightly more complicated: due to the keyboard mechanism and the way notches were mechanically built, the first rotor was actually rotating before the encryption took place. Also, the middle rotor could double-step if it engages its notch right after the third rotor does~\cite{enigmaAnomaly}. We will take the simpler view here and assume that each key press causes an encryption to take place, {\em after} which the rotors do their rotation, getting ready for the next input. The mechanical details, while historically important, are not essential for our modeling purposes here. Also, the original Enigma had {\em rings}, a relatively insignificant part of the whole machine, that we ignore here.} \paragraph*{Sequencing the rotors} Now that we have the rotors modeled, the next task is to figure out how to connect them in a sequence. As we mentioned, Enigma had 3 rotors originally (later versions allowing 4). The three rotors each had a single notch (later versions allowing double notches). Our model allows for arbitrary number of rotors and arbitrary number of notches on each. The question we now tackle is the following: Given a sequence of rotors, how do we run them one after the other? We are looking for a function with the following signature: \begin{code} joinRotors : {n} (fin n) => ([n]Rotor, Char) -> ([n]Rotor, Char) \end{code} That is, we receive {\tt n} rotors and the character to be encrypted, and return the updated rotors (accounting for their rotations) and the final character. The implementation is an instance of the fold\indFold pattern (\autoref{sec:recandrec}), using the {\tt scramble} function we have just defined: \begin{code} joinRotors (rotors, inputChar) = (rotors', outputChar) where initRotor = mkRotor (['A' .. 'Z'], []) ncrs : [n+1](Bit, [8], Rotor) ncrs = [(True, inputChar, initRotor)] # [ scramble (notch, char, r) | r <- rotors | (notch, char, rotor') <- ncrs ] rotors' = tail [ r | (_, _, r) <- ncrs ] (_, outputChar, _) = ncrs ! 0 \end{code} The workhorse in {\tt joinRotors} is the definition of {\tt ncrs}, a mnemonic for {\em notches-chars-rotors}. The idea is fairly simple. 
We simply iterate over all the given rotors ({\tt r <- rotors}), and {\tt scramble} the current character {\tt char}, using the rotor {\tt r} and the notch value {\tt notch}. These values come from {\tt ncrs} itself, using the fold pattern\indFold encoded by the comprehension\indComp. The only question is: what is the seed value for this fold?

The seed used in {\tt ncrs} is {\tt (True, inputChar, initRotor)}. The first component is {\tt True}, indicating that the very first rotor should always rotate itself at every step. The second element is {\tt inputChar}, which is the input to the whole sequence of rotors. The only mysterious element is the last one, which we have specified as {\tt initRotor}. This rotor is defined so that it simply maps the letters to themselves with no notches on it, by a call to the {\tt mkRotor} function we have previously defined. This rotor is merely a placeholder to kick off the computation of {\tt ncrs}; it acts as the identity element in a sequence of rotors.

To compute {\tt rotors'}, we merely project the third component of {\tt ncrs}, being careful about skipping the first element using {\tt tail}\indTail. Finally, {\tt outputChar} is merely the output coming out of the final rotor, extracted using {\tt !0}\indRIndex. Note how we use Cryptol's pattern matching to get the second component out of the triple in the last line.\indPatMatch

\begin{Exercise}\label{ex:enigma:4}
Is the action of {\tt initRotor} ever used in the definition of {\tt joinRotors}?
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:4}
Not unless we receive an empty sequence of rotors, i.e., a call of the form {\tt joinRotors ([], c)} for some character {\tt c}. In this case, it does make sense to return {\tt c} directly, which is what {\tt initRotor} will do. Note that unless we do receive an empty sequence of rotors, the value of {\tt initRotor} will not be used when computing the {\tt joinRotors} function.
\end{Answer}

\begin{Exercise}\label{ex:enigma:5}
What is the final character returned by the expression:
\begin{Verbatim}
  joinRotors ([rotor1, rotor2, rotor3], 'F')
\end{Verbatim}
Use paper and pencil to figure out the answer by tracing the execution of {\tt joinRotors} before running it in Cryptol!
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:5}
The crucial part is the value of {\tt ncrs}. Let us write it out by substituting the values of {\tt rotors} and {\tt inputChar}:
\begin{Verbatim}
  ncrs = [(True, 'F', initRotor)]
         # [ scramble (notch, char, r)
           | r <- [rotor1, rotor2, rotor3]
           | (notch, char, rotor') <- ncrs
           ]
\end{Verbatim}
Clearly, the first element of {\tt ncrs} will be:
\begin{Verbatim}
  (True, 'F', initRotor)
\end{Verbatim}
Therefore, the second element will be the result of the call:
\begin{Verbatim}
  scramble (True, 'F', rotor1)
\end{Verbatim}
Recall that {\tt rotor1} was defined as:
\begin{Verbatim}
  rotor1 = mkRotor ("RJICAWVQZODLUPYFEHXSMTKNGB", "IO")
\end{Verbatim}
What letter does {\tt rotor1} map {\tt F} to? Since {\tt F} is the 5th character (counting from 0), {\tt rotor1} maps it to the 5th element of its permutation, i.e., {\tt W}, remembering to count from 0! The topmost element in {\tt rotor1} is {\tt R}, which is not in its notch-list, hence it will {\em not} tell the next rotor to rotate. But it will rotate itself, since it received the {\tt True} signal. Thus, the second element of {\tt ncrs} will be:
\begin{Verbatim}
  (False, 'W', ...)
\end{Verbatim}
where we used {\tt ...} to denote the one left-rotation of {\tt rotor1}.
(Note that we do not need to know the precise arrangement of {\tt rotor1} now for the purposes of this exercise.) Now we move to {\tt rotor2}: we have to compute the result of the call:
\begin{Verbatim}
  scramble (False, 'W', rotor2)
\end{Verbatim}
Recall that {\tt rotor2} was defined as:
\begin{Verbatim}
  rotor2 = mkRotor ("DWYOLETKNVQPHURZJMSFIGXCBA", "B")
\end{Verbatim}
So, it maps {\tt W} to {\tt X} (the fourth letter from the end). It will not rotate itself, and it will not tell {\tt rotor3} to rotate itself either, since the topmost element in its current configuration is {\tt D}, and {\tt D} is not in the notch-list {\tt "B"}. Thus, the final {\tt scramble} call will be:
\begin{Verbatim}
  scramble (False, 'X', rotor3)
\end{Verbatim}
where
\begin{Verbatim}
  rotor3 = mkRotor ("FGKMAJWUOVNRYIZETDPSHBLCQX", "CK")
\end{Verbatim}
It is easy to see that {\tt rotor3} will map {\tt X} to {\tt C}. Thus the final value coming out of this expression must be {\tt C}. Indeed, we have:\indTupleProj
\begin{Verbatim}
  Cryptol> project(2, 2, joinRotors ([rotor1, rotor2, rotor3], 'F'))
  C
\end{Verbatim}
Of course, Cryptol also keeps track of the new rotor positions as well, which we have glossed over in this discussion.
\end{Answer}

%=====================================================================
% \section{The reflector}
% \label{sec:enigma:reflector}
\sectionWithAnswers{The reflector}{sec:enigma:reflector}

The final piece of the Enigma machine is the reflector\indEnigmaReflector ({\em umkehrwalze} in German). The reflector is another substitution map. Unlike the rotors, however, the reflector did not rotate. Its main function was ensuring that the process of encryption was reversible: The reflector did one final jumbling of the letters and then sent the signal back through the rotors in the {\em reverse} order, thus completing the loop and allowing the signal to reach back to the lamps that would light up. For our purposes, it suffices to model it just like any other permutation:
\begin{code}
type Reflector = Permutation
\end{code}
Here is one example:
\begin{code}
reflector : Reflector
reflector = "FEIPBATSCYVUWZQDOXHGLKMRJN"
\end{code}
Like the plugboard, the reflector is symmetric: if it maps {\tt B} to {\tt E}, it should map {\tt E} to {\tt B}, as in the above example. Furthermore, the Enigma reflectors were designed so that they never mapped any character to itself, which is true for the above permutation as well. Interestingly, this idea of a non-identity reflector (i.e., never mapping any character to itself) turned out to be a weakness in the design, which the Allies exploited in breaking the Enigma during WWII~\cite{Singh:1999:CBE}.

\begin{Exercise}\label{ex:enigma:6}
Write a function {\tt checkReflector} with the signature:
\begin{Verbatim}
  checkReflector : Reflector -> Bit
\end{Verbatim}
such that it returns {\tt True} if a given reflector is good (i.e., symmetric and non-self mapping) and {\tt False} otherwise. Check that our definition of {\tt reflector} above is a good one. \lhint{Use the {\tt all} function you have defined in \exerciseref{ex:zero:1}.}
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:6}
%% if this is {code} we have two all's
\begin{Verbatim}
  all : {n, a} (fin n) => (a -> Bit) -> [n]a -> Bit
  all f xs = [ f x | x <- xs ] == ~zero

  checkReflector refl = all check ['A' .. 'Z']
    where check c = (c != m) && (c == c')
            where m  = refl @ (c - 'A')
                  c' = refl @ (m - 'A')
\end{Verbatim}
For each character in the alphabet, we first figure out what it maps to using the reflector, named {\tt m} above. We also find out what {\tt m} gets mapped to, named {\tt c'} above. To be a valid reflector it must hold that {\tt c} is not {\tt m} (no character maps to itself), and {\tt c} must be {\tt c'}. We have:
\begin{Verbatim}
  Cryptol> checkReflector reflector
  True
\end{Verbatim}
Note how we used {\tt all}\indAll to make sure {\tt check} holds for all the elements of the alphabet.
\end{Answer}

%=====================================================================
% \section{Putting the pieces together}
% \label{sec:enigma:puttingittogether}
\sectionWithAnswers{Putting the pieces together}{sec:enigma:puttingittogether}

We now have all the components of the Enigma: the plugboard, rotors, and the reflector. The final task is to implement the full loop. The Enigma ran all the rotors in sequence, then passed the signal through the reflector, and ran the rotors in reverse one more time before delivering the signal to the lamps. Before proceeding, we will define the following two helper functions:
\begin{code}
substFwd, substBwd : (Permutation, Char) -> Char
substFwd (perm, c) = perm @ (c - 'A')
substBwd (perm, c) = invSubst (perm, c)
\end{code}
(You have defined the {\tt invSubst} function in \exerciseref{ex:subst:1}.) The {\tt substFwd} function simply returns the character that the given permutation maps {\tt c} to, whether the permutation comes from the plugboard, a rotor, or the reflector. Conversely, {\tt substBwd} returns the character that the given permutation maps {\em from}, i.e., the character that will be mapped to {\tt c} using the permutation.

\begin{Exercise}\label{ex:enigma:7}
Using Cryptol, verify that {\tt substFwd} and {\tt substBwd} return the same elements for each letter in the alphabet for {\tt rotor1}.
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:7}
We can define the following helper function, using the function {\tt all} you have defined in \exerciseref{ex:zero:1}:\indAll
\begin{code}
checkPermutation : Permutation -> Bit
checkPermutation perm = all check ['A' .. 'Z']
  where check c = (c == substBwd(perm, substFwd (perm, c)))
               && (c == substFwd(perm, substBwd (perm, c)))
\end{code}
Note that we have to check both ways (first {\tt substFwd} then {\tt substBwd}, and also the other way around) in case the substitution is badly formed, for instance if it maps the same character twice. We have:
\begin{Verbatim}
  Cryptol> checkPermutation [ c | (c, _) <- rotor1 ]
  True
\end{Verbatim}
For a bad permutation we would get {\tt False}:
\begin{Verbatim}
  Cryptol> checkPermutation (['A' .. 'Y'] # ['A'])
  False
\end{Verbatim}
\end{Answer}

\begin{Exercise}\label{ex:enigma:8}
Show that {\tt substFwd} and {\tt substBwd} are exactly the same operations for the reflector. Why?
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:8}
Since the reflector is symmetric, substituting backwards or forwards does not matter. We can verify this with the following helper function:\indAll
\begin{code}
all : {a, b} (fin b) => (a -> Bit) -> [b]a -> Bit
all fn xs = folds ! 0
  where folds = [True] # [ fn x && p | x <- xs | p <- folds]

checkReflectorFwdBwd : Reflector -> Bit
checkReflectorFwdBwd refl = all check ['A' .. 'Z']
  where check c = substFwd (refl, c) == substBwd (refl, c)
\end{code}
We have:
\begin{Verbatim}
  Cryptol> checkReflectorFwdBwd reflector
  True
\end{Verbatim}
\end{Answer}

\paragraph*{The route back} One crucial part of the Enigma is the running of the rotors in reverse after the reflector. Note that this operation ignores the notches, i.e., the rotors do not turn while the signal is passing the second time through the rotors. (In a sense, the rotations happen after the signal completes its full loop, getting to the reflector and back.) Consequently, it is much easier to code as well (compare this code to {\tt joinRotors}, defined in \autoref{sec:enigma:notches}):
\begin{code}
backSignal : {n} (fin n) => ([n]Rotor, Char) -> Char
backSignal (rotors, inputChar) = cs ! 0
  where cs = [inputChar] # [ substBwd ([ p | (p, _) <- r ], c)
                           | r <- reverse rotors
                           | c <- cs
                           ]
\end{code}
Note that we explicitly reverse the rotors in the definition of {\tt cs}.\indReverse (The definition of {\tt cs} is another typical example of a fold. See page~\pageref{par:fold}.)\indFold

Given all this machinery, coding the entire Enigma loop is fairly straightforward:
\begin{code}
//enigmaLoop : {n} (fin n) => (Plugboard, [n]Rotor, Reflector, Char)
//          -> ([n]Rotor, Char)
enigmaLoop (pboard, rotors, refl, c0) = (rotors', c5)
  where
    // 1. First run through the plugboard
    c1 = substFwd (pboard, c0)
    // 2. Now run all the rotors forward
    (rotors', c2) = joinRotors (rotors, c1)
    // 3. Pass through the reflector
    c3 = substFwd (refl, c2)
    // 4. Run the rotors backward
    c4 = backSignal(rotors, c3)
    // 5. Finally, back through the plugboard
    c5 = substBwd (pboard, c4)
\end{code}

%=====================================================================
\section{The state of the machine}
\label{sec:state-machine}

We are almost ready to construct our own Enigma machine in Cryptol. Before doing so, we will take a moment to represent the state of the Enigma machine as a Cryptol record\indTheRecordType, which will simplify our final construction. At any stage, the state of an Enigma machine is given by the status of its rotors. We will use the following record to represent this state, for an Enigma machine containing \texttt{n} rotors:
\begin{code}
type Enigma n = { plugboard : Plugboard,
                  rotors    : [n]Rotor,
                  reflector : Reflector
                }
\end{code}
To initialize an Enigma machine, the operator provides the plugboard settings, the rotors, and the reflector. Furthermore, the operator also gives the initial positions for the rotors. Rotors can be initially rotated to any position before being put together into the machine. We can capture this operation with the function {\tt mkEnigma}:
\begin{code}
mkEnigma : {n} (Plugboard, [n]Rotor, Reflector, [n]Char) -> Enigma n
mkEnigma (pboard, rs, refl, startingPositions) =
    { plugboard = pboard,
      rotors    = [ r <<< (s - 'A') | r <- rs | s <- startingPositions ],
      reflector = refl
    }
\end{code}
Note how we rotate each given rotor to the left by the amount given by its starting position. \todo[inline]{Connect ``left'' with up/down in the earlier illustrations.}

Given this definition, let us construct an Enigma machine out of the components we have created so far, using the starting positions {\tt GCR} for the rotors respectively:\label{def:modelEnigma}
\begin{code}
modelEnigma : Enigma 3
modelEnigma = mkEnigma (plugboard, [rotor1, rotor2, rotor3], reflector, "GCR")
\end{code}

We now have an operational Enigma machine coded up in Cryptol!
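Before we wire the machine up for full encryption in the next section, it is worth a quick sanity check that {\tt mkEnigma} really did rotate each rotor to its starting position. The following hypothetical interactive session (the exact prompt and output formatting may differ between Cryptol versions) merely restates the definition of {\tt mkEnigma} for the first rotor, so we expect it to return {\tt True}:
\begin{Verbatim}
  Cryptol> modelEnigma.rotors @ 0 == rotor1 <<< ('G' - 'A')
  True
\end{Verbatim}
That is, the first rotor of {\tt modelEnigma} is just {\tt rotor1} rotated left by the distance from {\tt 'A'} to its starting position {\tt 'G'}.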
%=====================================================================
% \section{Encryption and decryption}
% \label{enigma:encdec}
\sectionWithAnswers{Encryption and decryption}{enigma:encdec}

Equipped with all the machinery we now have, coding Enigma encryption is fairly straightforward:
\begin{code}
enigma : {n, m} (fin n, fin m) => (Enigma n, String m) -> String m
enigma (m, pt) = tail [ c | (_, c) <- rcs ]
  where rcs = [(m.rotors, '*')]
              # [ enigmaLoop (m.plugboard, r, m.reflector, c)
                | c <- pt
                | (r, _) <- rcs
                ]
\end{code}
The function {\tt enigma} takes a machine with {\tt n} rotors and a plaintext of {\tt m} characters, returning a ciphertext of {\tt m} characters back. It is yet another application of the fold pattern, where we start with the initial set of rotors and the placeholder character {\tt *} (which could be anything) to seed the fold.\indFold Note how the change in rotors is reflected in each iteration of the fold, through the {\tt enigmaLoop} function. At the end, we simply drop the rotors from {\tt rcs}, and take the {\tt tail}\indTail to skip over the seed character {\tt *}. Here is our Enigma in operation:
\begin{Verbatim}
  Cryptol> :set ascii=on
  Cryptol> enigma (modelEnigma, "ENIGMAWASAREALLYCOOLMACHINE")
  "UPEKTBSDROBVTUJGNCEHHGBXGTF"
\end{Verbatim}

\paragraph*{Decryption} As we mentioned before, Enigma was a self-decrypting machine, that is, encryption and decryption are precisely the same operations. Thus, we can define:
\begin{code}
dEnigma : {n, m} (fin n, fin m) => (Enigma n, String m) -> String m
dEnigma = enigma
\end{code}
And decrypt our above message back:
\begin{Verbatim}
  Cryptol> dEnigma (modelEnigma, "UPEKTBSDROBVTUJGNCEHHGBXGTF")
  "ENIGMAWASAREALLYCOOLMACHINE"
\end{Verbatim}
We have successfully performed our first Enigma encryption!

\begin{Exercise}\label{ex:enigma:9}
Different models of Enigma came with different sets of rotors. You can find various rotor configurations on the web~\cite{wiki:enigmarotors}. Create models of these rotors in Cryptol, and run sample encryptions through them.
\end{Exercise}

\begin{Exercise}\label{ex:enigma:10}
As we have mentioned before, Enigma implements a polyalphabetic substitution cipher\indPolyAlphSubst, where the same letter gets mapped to different letters during encryption. The period\indSubstPeriod of a cipher is the number of characters before the encryption repeats itself, mapping the same sequence of letters in the plaintext to the same sequence of letters in the ciphertext. What is the period of an Enigma machine with $n$ rotors?
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:10}
Enigma will start repeating once the rotors go back to their original position. With $n$ rotors, this will take $26^n$ characters. In the case of the traditional 3-rotor Enigma this amounts to $26^3 = 17576$ characters. Note that we are assuming an ideal Enigma here with no double-stepping~\cite{enigmaAnomaly}.
\end{Answer}

\begin{Exercise}\label{ex:enigma:11}
Construct a string of the form {\tt CRYPTOLXXX...XCRYPTOL}, where the {\tt ...}'s are filled with enough {\tt X}'s such that encrypting it with our {\tt modelEnigma} machine will map the instances of ``{\tt CRYPTOL}'' to the same ciphertext. How many {\tt X}'s do you need? What is the corresponding ciphertext for ``{\tt CRYPTOL}'' in this encryption?
\end{Exercise}
\begin{Answer}\ansref{ex:enigma:11}
Since the period\indSubstPeriod for the 3-rotor Enigma is 17576 (see the previous exercise), we need to make sure two instances of {\tt CRYPTOL} are 17576 characters apart.
Since {\tt CRYPTOL} has 7 characters, we should have 17569 X's. The following Cryptol definition would return the relevant pieces: \begin{code} enigmaCryptol = (take`{7} ct, drop`{17576} ct) where str = "CRYPTOL" # [ 'X' | _ <- [1 .. 17569] ] # "CRYPTOL" ct = dEnigma(modelEnigma, str) \end{code} We have: \begin{Verbatim} Cryptol> enigmaCryptol ("KGSHMPK", "KGSHMPK") \end{Verbatim} As predicted, both instances of {\tt CRYPTOL} get encrypted as {\tt KGSHMPK}. \end{Answer} \paragraph*{The code} You can see all the Cryptol code for our Enigma simulator in Appendix~\ref{app:enigma}. \commentout{ \begin{code} invSubst : (Permutation, Char) -> Char invSubst (key, c) = candidates ! 0 where candidates = [0] # [ if c == k then a else p | k <- key | a <- ['A' .. 'Z'] | p <- candidates ] elem : {a, b} (fin 0, fin a, Cmp b) => (b, [a]b) -> Bit elem (x, xs) = matches ! 0 where matches = [False] # [ m || (x == e) | e <- xs | m <- matches ] sanityCheck = enigma (modelEnigma, "ENIGMAWASAREALLYCOOLMACHINE") == "UPEKTBSDROBVTUJGNCEHHGBXGTF" \end{code} } %%% Local Variables: %%% mode: latex %%% TeX-master: "../main/Cryptol" %%% End:
{ "alphanum_fraction": 0.6961166195, "avg_line_length": 43.7264516129, "ext": "tex", "hexsha": "ca43cd5169028a5b22db607dd129f8f0d72af6eb", "lang": "TeX", "max_forks_count": 117, "max_forks_repo_forks_event_max_datetime": "2022-03-06T15:40:57.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-01T18:45:39.000Z", "max_forks_repo_head_hexsha": "571f0dd249a72f830abd511caca87a971a91d07e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "axon-terminal/cryptol", "max_forks_repo_path": "docs/ProgrammingCryptol/enigma/Enigma.tex", "max_issues_count": 1050, "max_issues_repo_head_hexsha": "5a668a4594e386a084f6f0ceb231c69946971a50", "max_issues_repo_issues_event_max_datetime": "2022-03-30T17:02:34.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-02T23:10:55.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "emlisa3162/cryptol", "max_issues_repo_path": "docs/ProgrammingCryptol/enigma/Enigma.tex", "max_line_length": 107, "max_stars_count": 773, "max_stars_repo_head_hexsha": "5a668a4594e386a084f6f0ceb231c69946971a50", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "emlisa3162/cryptol", "max_stars_repo_path": "docs/ProgrammingCryptol/enigma/Enigma.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T04:26:02.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-08T15:43:54.000Z", "num_tokens": 9599, "size": 33888 }
\input{preamble.tex}

\begin{document}
\pagenumbering{Roman}
\tableofcontents
\newpage
\listoffigures
\newpage
\listoftables
\newpage
\pagenumbering{arabic}

\section{Introduction}
The goal of this file is to describe the steps to create an electric bicycle. I dream of a world where we can all run on renewable energy sources. Building my own project in this field makes me part of this new trend. I hope you enjoy it.

\subsection{Sources}
There are a couple of interesting websites and YouTube channels that provide a lot of high-quality material about electric bicycles. You might want to check some of these:
\begin{enumerate}
\item \textit{https://electricbikereview.com/}
\item \textit{https://www.bruegelmann.de/elektrofahrrad.html}
%\item \textit{https://www.amazon.de/Radsport-Fahrr\%C3\%A4der-Elektrofahrr\%C3\%A4der/b?ie=UTF8&node=245332011}
\end{enumerate}

\section{Main components}
\begin{enumerate}
\item Bicycle
\item Motor
\item Battery
\item Recharger
\end{enumerate}

\section{Tools}

\section{Batteries}
In order to better understand batteries, one should focus on the main parameters that define their quality:
\begin{enumerate}
\item \textit{Power-to-weight ratio}
%%https://en.wikipedia.org/wiki/Power-to-weight_ratio
\item \textit{Specific energy}. It corresponds to the amount of energy per unit mass.
\item \textit{Energy density}. It accounts for the quantity of energy in a given volume.
\end{enumerate}
The two main battery options are lithium and lead-acid batteries. This book will describe the lithium batteries first.

\subsection{Materials}
Some types of batteries are:
\begin{enumerate}
\item \textit{Lithium Iron Phosphate} $LiFePO_4$. Also called LFP batteries, they are some of the heaviest and most expensive ones, but they are also among the safest and longest-lasting. They have low cost, are non-toxic, safe, and have a lot of thermal stability.
\item \textit{Lithium Manganese Oxide}
% https://en.wikipedia.org/wiki/Lithium_ion_manganese_oxide_battery
\end{enumerate}
% http://www.ebikeschool.com/electric-bicycle-batteries-lithium-vs-lead-acid-batteries/

\begin{figure}
\centering
\includegraphics[width=\textwidth]{mountainbike_ebike.png}
\caption{Electric Mountainbike from Amazon}
\label{fig:amazon_mountainbike}
\end{figure}

\subsection{Location}

\end{document}
{ "alphanum_fraction": 0.7850668968, "avg_line_length": 31.7397260274, "ext": "tex", "hexsha": "e66b6bb669376590ab4d66f0f4673c6bbfae08f8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "26409cbeed53ea21b8f09855968f20e3b78ced54", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "vitorbellotto/buildingyourelectricbicycle", "max_forks_repo_path": "main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "26409cbeed53ea21b8f09855968f20e3b78ced54", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "vitorbellotto/buildingyourelectricbicycle", "max_issues_repo_path": "main.tex", "max_line_length": 264, "max_stars_count": null, "max_stars_repo_head_hexsha": "26409cbeed53ea21b8f09855968f20e3b78ced54", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "vitorbellotto/buildingyourelectricbicycle", "max_stars_repo_path": "main.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 631, "size": 2317 }
% !TEX TS-program = xelatex
% !TEX encoding = UTF-8
% !Mode:: "TeX:UTF-8"
\documentclass[onecolumn,oneside]{SUSTechHomework}
\usepackage{float}

\author{胡玉斌}
\sid{11712121}
\title{Lab 6}
\coursecode{CS315}
\coursename{Computer Security}

\begin{document}
\maketitle

\section{Lab 6 Part 1}
\subsection{Read the lab instructions above and finish all the tasks.}
Done

\subsection{Answer the questions in the Introduction section, and justify your answers. A simple yes or no answer will not get any credit.}

\subsubsection{What security features does Zephyr have?}
\begin{itemize}
    \item The security functionality in Zephyr hinges mainly on the inclusion of cryptographic algorithms, and on its monolithic system design.
    \item The security architecture is based on a monolithic design where the Zephyr kernel and all applications are compiled into a single static binary. System calls are implemented as function calls without requiring context switches. Static linking eliminates the potential for dynamically loading malicious code.
    \item Stack protection mechanisms are provided to protect against stack overruns.
\end{itemize}

\subsubsection{Do applications share the same address space with the OS kernel?}
\begin{itemize}
    \item No.
    \item It would be very dangerous to share the same address space with the OS kernel, because it would mean applications could modify values in the kernel's address space.
\end{itemize}

\subsubsection{Does Zephyr have defense mechanisms such as non-executable stack or Address Space Layout Randomization (ASLR)?}
\begin{itemize}
    \item No.
    \item From the exploration, we can see that the EIP register ends up with the same value for the same payload, and we can make it take a specific value of our choosing.
    \item So Zephyr does not have such defense mechanisms.
\end{itemize}

\subsubsection{Do textbook attacks (e.g., buffer overflow or heap spray) work on Zephyr?}
\begin{itemize}
    \item Yes.
    \item In the exploration, we generate a payload that causes a buffer overflow.
    \item As we expected, the application crashes due to an invalid return address.
    \item Furthermore, QEMU also crashes, and you will see a pop-up window as in the screenshot below.
\end{itemize}

\subsection{Change the EIP register to the value 0xdeadbeef, and show me the screenshot of the EIP value when the application crashes.}
\begin{figure}[H]
    \centering
    \includegraphics[width=0.95\textwidth]{img/pic1.png}
    \caption{EIP value}
\end{figure}
\begin{figure}[H]
    \centering
    \includegraphics[width=0.95\textwidth]{img/pic2.png}
    \caption{obfuscate}
\end{figure}

\section{Lab 6 Part 2}
\subsection{Read the lab instructions above and finish all the tasks.}
Done

\subsection{Answer the questions in the Introduction section, and justify your answers. A simple yes or no answer will not get any credit.}

\subsubsection{What is the difference between Monitor Mode and Promiscuous Mode?}
\begin{itemize}
    \item Monitor mode: Sniffing the packets in the air without connecting (associating) with any access point.
    \item Promiscuous mode: Sniffing the packets after connecting to an access point. This is possible because the wireless-enabled devices send the data in the air but only "mark" them to be processed by the intended receiver. They cannot send the packets and make sure they only reach a specific device, unlike with switched LANs.
\end{itemize}

\subsubsection{What lessons did we learn from this lab about setting the WiFi password?}
\begin{itemize}
    \item We should use a strong password.
    \item Avoid using a weak password that can be cracked with a wordlist of common passwords.
\end{itemize}

\subsection{Change your router to a different pass phrase, and use Wireshark and Aircrack-ng to crack the passphrase. Show screenshots of the result.}
Use the airport command to enable monitor mode, and monitor on channel 6. The password of the router is \textbf{SUSTech-Eveneko}.

\begin{figure}[H]
    \centering
    \includegraphics[width=0.95\textwidth]{img/pic3.png}
    \caption{Wireshark}
\end{figure}
\begin{figure}[H]
    \centering
    \includegraphics[width=0.95\textwidth]{img/pic4.png}
    \caption{password1}
\end{figure}
\begin{figure}[H]
    \centering
    \includegraphics[width=0.95\textwidth]{img/pic5.png}
    \caption{password2}
\end{figure}

\end{document}
{ "alphanum_fraction": 0.7037747921, "avg_line_length": 39.4033613445, "ext": "tex", "hexsha": "f799628b8a124caabe87d822b2f8c006c8d173c4", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-04-27T13:41:36.000Z", "max_forks_repo_forks_event_min_datetime": "2021-01-07T04:14:11.000Z", "max_forks_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Eveneko/SUSTech-Courses", "max_forks_repo_path": "CS315_Computer-Security/lab6/lab6.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Eveneko/SUSTech-Courses", "max_issues_repo_path": "CS315_Computer-Security/lab6/lab6.tex", "max_line_length": 336, "max_stars_count": 4, "max_stars_repo_head_hexsha": "0420873110e91e8d13e6e85a974f1856e01d28d6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Eveneko/SUSTech-Courses", "max_stars_repo_path": "CS315_Computer-Security/lab6/lab6.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-11T10:05:09.000Z", "max_stars_repo_stars_event_min_datetime": "2020-11-11T11:56:57.000Z", "num_tokens": 1134, "size": 4689 }
\label{sec:ifc-types}

Similar to declarations, types are also indicated by abstract type references. They are values of type \type{TypeIndex}, with 32-bit precision and the following layout

\begin{figure}[H]
  \centering
  \absref{5}{TypeSort}
  \caption{\type{TypeIndex}: Abstract reference of type}
  \label{fig:ifc-type-index}
\end{figure}

\begin{SortEnum}{TypeSort}
  \enumerator{VendorExtension} \enumerator{Fundamental} \enumerator{Designated}
  \enumerator{Tor} \enumerator{Syntactic} \enumerator{Expansion}
  \enumerator{Pointer} \enumerator{PointerToMember} \enumerator{LvalueReference}
  \enumerator{RvalueReference} \enumerator{Function} \enumerator{Method}
  \enumerator{Array} \enumerator{Typename} \enumerator{Qualified}
  \enumerator{Base} \enumerator{Decltype} \enumerator{Placeholder}
  \enumerator{Tuple} \enumerator{Forall} \enumerator{Unaligned}
  \enumerator{SyntaxTree}
\end{SortEnum}

\section{Type structures}
\label{sec:ifc-type-structures}

\subsection{\valueTag{TypeSort::VendorExtension}}
\label{sec:ifc:TypeSort:VendorExtension}

\partition{type.vendor-extension}

\subsection{\valueTag{TypeSort::Fundamental}}
\label{sec:ifc:TypeSort:Fundamental}

A \type{TypeIndex} abstract reference with tag \type{TypeSort::Fundamental} designates a fundamental type. The \field{index} field of that abstract reference is an index into the fundamental type partition. Each entry in that partition is a structure with the following components:
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{basis}{TypeBasis} \\
    \DeclareMember{precision}{TypePrecision} \\
    \DeclareMember{sign}{TypeSign} \\
    \DeclareMember{<padding>}{uint8\_t} \\
  }
  \caption{Structure of a fundamental type}
  \label{fig:ifc-fundamental-type-structure}
\end{figure}
%
The \field{basis} field designates the fundamental basis of the fundamental type. The \field{precision} field designates the "bit precision" variant of the basis type. The \field{sign} field indicates the "sign" variant of the basis type.

\partition{type.fundamental}

\subsubsection{Fundamental type basis}
\label{sec:ifc-fundamental-type-basis}

The fundamental types are made out of a small set of type bases defined as:
%
\begin{typedef}{TypeBasis}{}
enum class TypeBasis : uint8_t {
  Void, Bool, Char, Wchar_t, Int, Float, Double, Nullptr,
  Ellipsis, SegmentType, Class, Struct, Union, Enum,
  Typename, Namespace, Interface, Function, Empty,
  VariableTemplate, Concept, Auto, DecltypeAuto, Overload
};
\end{typedef}
%
with the following meaning:
\begin{itemize}
\item \code{TypeBasis::Void}: fundamental type \code{void}
\item \code{TypeBasis::Bool}: fundamental type \code{bool}
\item \code{TypeBasis::Char}: fundamental type \code{char}
\item \code{TypeBasis::Wchar\_t}: fundamental type \code{wchar\_t}
\item \code{TypeBasis::Int}: fundamental type \code{int}
\item \code{TypeBasis::Float}: fundamental type \code{float}
\item \code{TypeBasis::Double}: fundamental type \code{double}
\item \code{TypeBasis::Nullptr}: fundamental type \code{decltype(nullptr)}
\item \code{TypeBasis::Ellipsis}: fundamental type denoted by \code{...}
\item \code{TypeBasis::SegmentType}. Note: this basis member is subject to removal in a future revision.
\item \code{TypeBasis::Class}: fundamental type \code{class}
\item \code{TypeBasis::Struct}: fundamental type \code{struct}
\item \code{TypeBasis::Union}: fundamental type \code{union}
\item \code{TypeBasis::Enum}: fundamental type \code{enum}
\item \code{TypeBasis::Typename}: fundamental concept \code{typename}
\item \code{TypeBasis::Namespace}: fundamental type \code{namespace}
\item \code{TypeBasis::Interface}: fundamental type \code{\_\_interface}
\item \code{TypeBasis::Function}: fundamental concept of function. Note: this basis member is subject to removal in a future revision.
\item \code{TypeBasis::Empty}: fundamental type resulting from an empty pack expansion
\item \code{TypeBasis::VariableTemplate}: concept of variable template. Note: this basis member is subject to removal in a future revision.
\item \code{TypeBasis::Concept}: fundamental type of a \code{concept} expression.
\item \code{TypeBasis::Auto}: type placeholder \code{auto}
\item \code{TypeBasis::DecltypeAuto}: type placeholder \code{decltype(auto)}
\item \code{TypeBasis::Overload}: fundamental type for an overload set.
\end{itemize}

\note{The current set of type bases is subject to change.}

\subsubsection{Fundamental type precision}
\label{sec:ifc-fundamental-type-precision}

The bit precision of a fundamental type is a value of type \type{TypePrecision} defined as follows:
%
\begin{typedef}{TypePrecision}{}
enum class TypePrecision : uint8_t {
  Default, Short, Long,
  Bit8, Bit16, Bit32, Bit64, Bit128,
};
\end{typedef}
%
with the following meaning:
\begin{itemize}
\item \code{TypePrecision::Default}: the default precision of the basis type
\item \code{TypePrecision::Short}: the \code{short} variant of the basis type
\item \code{TypePrecision::Long}: the \code{long} variant of the basis type
\item \code{TypePrecision::Bit8}: the 8-bit variant of the basis type
\item \code{TypePrecision::Bit16}: the 16-bit variant of the basis type
\item \code{TypePrecision::Bit32}: the 32-bit variant of the basis type
\item \code{TypePrecision::Bit64}: the 64-bit variant of the basis type
\item \code{TypePrecision::Bit128}: the 128-bit variant of the basis type
\end{itemize}

\subsubsection{Fundamental type sign}
\label{sec:ifc-fundamental-type-sign}

The sign of a fundamental type is expressed as a value of type \type{TypeSign} defined as follows:
%
\begin{typedef}{TypeSign}{}
enum class TypeSign : uint8_t {
  Plain, Signed, Unsigned,
};
\end{typedef}
%
with the following meaning:
\begin{itemize}
\item \code{TypeSign::Plain}: the plain sign of the basis type
\item \code{TypeSign::Signed}: the signed variant of the basis type
\item \code{TypeSign::Unsigned}: the unsigned variant of the basis type
\end{itemize}

\subsection{\valueTag{TypeSort::Designated}}
\label{sec:ifc:TypeSort:Designated}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Designated} represents an abstract reference to a type denoted by a declaration name. This is the typical use of a class name. The \field{index} field is an index into the designated type partition. Each entry in that partition is a structure with a single component: the \field{decl} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{decl}{DeclIndex} \\
  }
  \caption{Structure of a designated type}
  \label{fig:ifc-designated-type-structure}
\end{figure}
%
The \field{decl} field denotes the declaration of the type.
\partition{type.designated}

\subsection{\valueTag{TypeSort::Tor}}
\label{sec:ifc:TypeSort:Tor}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Tor} represents an abstract reference to the type of a constructor declaration. ISO C++ does not define the type of a constructor -- it even denies that a constructor has a name. However, for regularity, and to ease handling of the representation of template declarations, it is simpler to assign types to constructors. The \field{index} field of that abstract reference is an index into the partition of tor types. Each entry in that partition is a structure with the following layout
%
\begin{Structure}
  \structure{
    \DeclareMember{source}{TypeIndex} \\
    \DeclareMember{eh\_spec}{NoexceptSpecification} \\
    \DeclareMember{convention}{CallingConvention} \\
  }
  \caption{Structure of a tor type}
  \label{fig:ifc:TypeSort:Tor}
\end{Structure}
%
The meaning of the fields is as follows
\begin{itemize}
\item \field{source} denotes the sequence of parameter types in the declaration of the corresponding constructor.
\item \field{eh\_spec} denotes the exception specification associated with the constructor declaration that this type describes.
\item \field{convention} denotes the calling convention associated with the constructor declaration.
\end{itemize}

\partition{type.tor}

\subsection{\valueTag{TypeSort::Syntactic}}
\label{sec:ifc:TypeSort:Syntactic}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Syntactic} represents an abstract reference to a type expressed at the C++ source level as a type-id. Typical examples include a template-id designating a specialization. The \field{index} field is an index into the syntactic type partition. Each entry in that partition is a structure with a single component: the \field{expr} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{expr}{ExprIndex} \\
  }
  \caption{Structure of a syntactic type}
  \label{fig:ifc-syntactic-type-structure}
\end{figure}
%
The \field{expr} field denotes the C++ source-level type expression.

\partition{type.syntactic}

\subsection{\valueTag{TypeSort::Expansion}}
\label{sec:ifc:TypeSort:Expansion}

A \type{TypeIndex} abstract reference with tag \valueTag{TypeSort::Expansion} designates an expansion of a pack type. The \field{index} field is an index into the expansion type partition. Each entry in that partition is a structure with the following layout
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{pack}{TypeIndex} \\
    \DeclareMember{mode}{ExpansionMode} \\
  }
  \caption{Structure of an expansion type}
  \label{fig:fic-expansion-type-structure}
\end{figure}
%
The \field{pack} field designates the type form under expansion. The \field{mode} field denotes the mode of expansion.

\partition{type.expansion}

\subsubsection{Pack expansion mode}
\label{sec:ifc-pack-expansion-mode}

During pack expansion, a template parameter can be fully expanded, or only partially expanded. The mode is indicated by a value of type
%
\begin{typedef}{ExpansionMode}{}
enum class ExpansionMode : uint8_t {
  Full, Partial
};
\end{typedef}
%

\subsection{\valueTag{TypeSort::Pointer}}
\label{sec:ifc:TypeSort:Pointer}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Pointer} represents an abstract reference to a pointer type. The \field{index} field is an index into the pointer type partition. Each entry in that partition is a structure with one component: the \field{pointee} field.
% \begin{figure}[H] \centering \structure{ \DeclareMember{pointee}{TypeIndex} \\ } \caption{Structure of a pointer type} \label{fig:ifc-pointer-type-structure} \end{figure} % The \field{pointee} field denotes the type pointed-to: any valid C++ type. \partition{type.pointer} \subsection{\valueTag{TypeSort::PointerToMember}} \label{sec:ifc:TypeSort:PointerToMember} A \type{TypeIndex} value with tag \valueTag{TypeSort::PointerToMember} represents an abstract reference to a C++ source-level pointer-to-nonstatic-data member type. The \field{index} field is an index into the pointer-to-member type partition. Each entry in that partition is a structure with two components: the \field{scope} field, and the \field{member} field. % \begin{figure}[H] \centering \structure{ \DeclareMember{scope}{TypeIndex} \\ \DeclareMember{member}{TypeIndex} \\ } \caption{Structure of a pointer-to-member type} \label{fig:ifc-pointer-to-member-type-structure} \end{figure} % The \field{scope} field denotes the enclosing class type. The \field{member} field denotes the type of the member. \partition{type.pointer-to-member} \subsection{\valueTag{TypeSort::LvalueReference}} \label{sec:ifc:TypeSort:LvalueReference} A \type{TypeIndex} value with tag \valueTag{TypeSort::LvalueReference} represents an abstract reference to a C++ classic reference type, e.g. \code{int&}. The \field{index} field is an index into the lvalue-reference type partition. Each entry in that partition is a structure with one component: the \field{referee} field. % \begin{figure}[H] \centering \structure{ \DeclareMember{referee}{TypeIndex} \\ } \caption{Structure of an lvalue-reference type} \label{fig:ifc-lvalue-reference-type-structure} \end{figure} % The \field{referee} field denotes the type referred-to: it is any valid C++ type, including function type. \partition{type.lvalue-reference} \subsection{\valueTag{TypeSort::RvalueReference}} \label{sec:ifc:TypeSort:RvalueReference} A \type{TypeIndex} value with tag \valueTag{TypeSort::RvalueReference} represents an abstract reference to a C++ rvalue-reference type, e.g. \code{int&&}. The \field{index} field is an index into the rvalue-reference type partition. Each entry in that partition is a structure with one component: the \field{referee} field. % \begin{figure}[H] \centering \structure{ \DeclareMember{referee}{TypeIndex} \\ } \caption{Structure of an rvalue-reference type} \label{fig:ifc-rvalue-reference-type-structure} \end{figure} % The \field{referee} field denotes the type referred-to: it is any valid C++ type. \partition{type.rvalue-reference} \subsection{\valueTag{TypeSort::Function}} \label{sec:ifc:TypeSort:Function} A \type{TypeIndex} with tag \valueTag{TypeSort::Function} represents an abstract reference to a C++ source-level function type. The \field{index} field is an index into the function type partition. Each entry in that partition is a structure with the following components: a \field{target} field, a \field{source} field, an \field{eh\_spec} field, a \field{convention} field, a \field{traits} field. % \begin{figure}[H] \centering \structure[text width=15em]{ \DeclareMember{target}{TypeIndex} \\ \DeclareMember{source}{TypeIndex} \\ \DeclareMember{eh\_spec}{NoexceptSpecification} \\ \DeclareMember{convention}{CallingConvention} \\ \DeclareMember{traits}{FunctionTypeTraits} \\ } \caption{Structure of a function type} \label{fig:ifc-function-type-structure} \end{figure} % The \field{target} field denotes the return type of the function type. 
The \field{source} field denotes the parameter type list. A null \field{source} value indicates no parameter type. If the function type has at most one parameter type, the \field{source} denotes that type. Otherwise, it is a tuple type made of all the parameter types. The \field{eh\_spec} denotes the C++ source-level noexcept-specification. The \field{convention} field denotes the calling convention of the function type. The \field{traits} field denotes additional function type traits. \partition{type.function} \subsubsection{Function type traits} \label{sec:ifc-function-type-traits} Certain function types properties are expressed as value of bitmask type \type{FunctionTypeTraits} defined as: % \begin{typedef}{FunctionTypeTraits}{} enum class FunctionTypeTraits : uint8_t { None = 0, Const = 1 << 0, Volatile = 1 << 1, Lvalue = 1 << 2, Rvalue = 1 << 3, }; \end{typedef} % with the following meaning \begin{itemize} \item \code{FunctionTypeTraits::None}: the function type is that of a function that is not non-static member (either a non-member function or a static member function). \item \code{FunctionTypeTraits::Const}: the function type is that of a function that is a non-static member function declared with a \code{const} qualifier. \item \code{FunctionTypeTraits::Volatile}: the function type is that of a function that is a non-static member function declared with a \code{volatile} qualifier. \item \code{FunctionTypeTraits::Lvalue}: the function type is that of a function that is a non-static member function declared with an lvalue (\code{\&}) qualifier. \item \code{FunctionTypeTraits::Rvalue}: the function type is that of a function that is a non-static member function declared with an rvalue (\code{\&\&}) qualifier. \end{itemize} \subsubsection{Calling convention} \label{sec:ifc-calling-convention} Function calling conventions are expressed as values of type \type{CallingConvention} defined as: % \begin{typedef}{CallingConvention}{} enum class CallingConvention : uint8_t { Cdecl, Fast, Std, This, Clr, Vector, Eabi, }; \end{typedef} % with the following meaning \begin{itemize} \item \code{CallingConvention::Cdecl}: the function is declared with \code{\_\_cdecl}. \item \code{CallingConvention::Fast}: the function is declared with \code{\_\_fastcall}. \item \code{CallingConvention::Std}: the function is declared with \code{\_\_stdcall}. \item \code{CallingConvention::This}: the function is declared with \code{\_\_thiscall}. \item \code{CallingConvention::Clr}: the function is declared with \code{\_\_clrcall}. \item \code{CallingConvention::Vector}: the function is declared with \code{\_\_vectorcall}. \item \code{CallingConvention::Eabi}: the function is declared with \code{\_\_eabi}. \end{itemize} \note{The calling convention currently details only MSVC.} \subsubsection{Noexcept specification} \label{sec:ifc-noexcept-specification} The noexcept-specification of a function is described by a structure with the following components: % \begin{figure}[H] \centering \structure{ \DeclareMember{words}{SentenceIndex} \\ \DeclareMember{sort}{NoexceptSort} \\ } \caption{Structure of a noexcept-specification} \label{fig:ifc-noexception-specification-structure} \end{figure} % The \field{words} field is meaningful only for function templates and complex \code{noexcept}-specification. The \field{sort} describes the sort of noexcept-specification. 
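\paragraph{Example:} As an illustration (this example is not part of the normative description), consider the type of an ordinary non-member function declared as \code{int f(char, bool);} with the \code{\_\_cdecl} calling convention. Its entry in the function type partition would have a \field{target} designating the fundamental type \code{int}, a \field{source} designating the tuple type made of \code{char} and \code{bool} (since there is more than one parameter type), an \field{eh\_spec} reflecting the absence of an explicit noexcept-specification, a \field{convention} of \code{CallingConvention::Cdecl}, and a \field{traits} value of \code{FunctionTypeTraits::None}, since it is not the type of a non-static member function.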
\subsection{\valueTag{TypeSort::Method}}
\label{sec:ifc:TypeSort:Method}

A \type{TypeIndex} with tag \valueTag{TypeSort::Method} represents an abstract reference to a C++ source-level non-static member function type. The \field{index} field is an index into the method type partition. Each entry in that partition is a structure with the following components: a \field{target} field, a \field{source} field, a \field{scope} field, an \field{eh\_spec} field, a \field{convention} field, and a \field{traits} field.
%
\begin{figure}[H]
  \centering
  \structure[text width=15em]{
    \DeclareMember{target}{TypeIndex} \\
    \DeclareMember{source}{TypeIndex} \\
    \DeclareMember{scope}{TypeIndex} \\
    \DeclareMember{eh\_spec}{NoexceptSpecification} \\
    \DeclareMember{convention}{CallingConvention} \\
    \DeclareMember{traits}{FunctionTypeTraits} \\
  }
  \caption{Structure of a method type}
  \label{fig:ifc-method-type-structure}
\end{figure}
%
The \field{target} field denotes the return type of the function type. The \field{source} field denotes the parameter type list. If the function type has at most one parameter type, the \field{source} denotes that type. Otherwise, it is a tuple type made of all the parameter types. The \field{scope} denotes the C++ source-level enclosing class type. The \field{eh\_spec} denotes the C++ source-level noexcept-specification. The \field{convention} field denotes the calling convention of the function type. The \field{traits} field denotes additional function type traits.

\partition{type.nonstatic-member-function}

\subsection{\valueTag{TypeSort::Array}}
\label{sec:ifc:TypeSort:Array}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Array} represents an abstract reference to a C++ source-level builtin array type. The \field{index} field is an index into the array type partition. Each entry in that partition is a structure with two components: an \field{element} field, and an \field{extent} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{element}{TypeIndex} \\
    \DeclareMember{extent}{ExprIndex} \\
  }
  \caption{Structure of an array type}
  \label{fig:ifc-array-type-structure}
\end{figure}
%
The \field{element} field denotes the element type of the array. The \field{extent} field denotes the number of elements in the array type along its outermost dimension.

\partition{type.array}

\subsection{\valueTag{TypeSort::Typename}}
\label{sec:ifc:TypeSort:Typename}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Typename} represents an abstract reference to a dependent type which, at the C++ source level, is written as a type expression. The \field{index} field is an index into the typename type partition. Each entry in that partition is a structure with a single component: the \field{path} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{path}{ExprIndex} \\
  }
  \caption{Structure of a typename type}
  \label{fig:ifc-typename-type-structure}
\end{figure}
%
The \field{path} field denotes the expression designating the dependent type.

\partition{type.typename}

\subsection{\valueTag{TypeSort::Qualified}}
\label{sec:ifc:TypeSort:Qualified}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Qualified} represents an abstract reference to a C++ source-level cv-qualified type. The \field{index} field is an index into the qualified type partition. Each entry in that partition is a structure with two components: the \field{unqualified} field, and the \field{qualifiers} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{unqualified}{TypeIndex} \\
    \DeclareMember{qualifiers}{Qualifiers} \\
  }
  \caption{Structure of a qualified type}
  \label{fig:ifc-qualified-type-structure}
\end{figure}
%
The \field{unqualified} field denotes the type without the top-level cv-qualifiers. The \field{qualifiers} field denotes the cv-qualifiers.

\partition{type.qualified}

\subsubsection{Type qualifiers}
\label{sec:ifc-type-qualifiers}

Standard type qualifiers are bitmask values of type:
\begin{typedef}{Qualifiers}{}
enum class Qualifier : uint8_t {
  None     = 0,
  Const    = 1 << 0,
  Volatile = 1 << 1,
  Restrict = 1 << 2,
};
\end{typedef}
with the following meaning:
\begin{itemize}
\item \code{Qualifiers::None}: no type qualifier
\item \code{Qualifiers::Const}: the type is \code{const}-qualified.
\item \code{Qualifiers::Volatile}: the type is \code{volatile}-qualified.
\item \code{Qualifiers::Restrict}: the type is \code{\_\_restrict}-qualified -- a non-standard, but common, extension.
\end{itemize}

\subsection{\valueTag{TypeSort::Base}}
\label{sec:ifc:TypeSort:Base}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Base} represents an abstract reference to a use of a type as a base-class (in a class inheritance) at the C++ source-level. The \field{index} field is an index into the base type partition. Each entry in that partition is a structure with the following layout
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{type}{TypeIndex} \\
    \DeclareMember{access}{Access} \\
    \DeclareMember{specifiers}{BaseTypeSpecifiers} \\
  }
  \caption{Structure of a base type}
  \label{fig:ifc-base-type-structure}
\end{figure}
%
The meanings of the fields are as follows:
\begin{itemize}
\item \field{type} denotes the type being used as a base type.
\item \field{access} denotes the access specifier in the inheritance path.
\item \field{specifiers} indicates specifiers for the base type (\secref{sec:ifc-base-type-specifiers}).
\end{itemize}

\partition{type.base}

\note{This representation is subject to change.}

\subsubsection{Base type specifiers}
\label{sec:ifc-base-type-specifiers}

Base type specifiers are bitmask values of type
\begin{typedef}{BaseTypeSpecifiers}{}
enum class BaseTypeSpecifiers : uint8_t {
  None     = 0,
  Shared   = 0x01,
  Expanded = 0x02,
};
\end{typedef}
%
with the following meaning:
\begin{itemize}
\item \code{BaseTypeSpecifiers::None}: no base type specifiers.
\item \code{BaseTypeSpecifiers::Shared}: the base class is virtual.
\item \code{BaseTypeSpecifiers::Expanded}: the base class is a pack expansion.
\end{itemize}

\note{Since type expansions are type designators, a base type that is a pack expansion should be designated by \valueTag{TypeSort::Expansion} (\secref{sec:ifc:TypeSort:Expansion}), not a bitmask value.}

\subsection{\valueTag{TypeSort::Decltype}}
\label{sec:ifc:TypeSort:Decltype}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Decltype} represents an abstract reference to the C++ source-level application of \code{decltype} to an expression. Ideally, the operand should be represented by an \type{ExprIndex} value. However, in the current implementation the operand is represented as an arbitrary sequence of tokens. The \field{index} field is an index into the decltype type partition. Each entry in that partition is a structure with a single component: the \field{expr} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{expr}{SyntaxIndex} \\
  }
  \caption{Structure of a decltype type}
  \label{fig:ifc-decltype-type-structure}
\end{figure}
%
The \field{expr} field denotes the syntactic representation of the expression operand to \code{decltype}.

\partition{type.decltype}

\paragraph{Note:} The \code{decltype} representation will change in the future so that its operand is represented as an expression tree.

\subsection{\valueTag{TypeSort::Placeholder}}
\label{sec:ifc:TypeSort:Placeholder}

A \type{TypeIndex} abstract reference with tag \valueTag{TypeSort::Placeholder} designates a placeholder type. The \field{index} field is an index into the placeholder type partition. Each entry in that partition is a structure with the following layout
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{constraint}{ExprIndex} \\
    \DeclareMember{basis}{TypeBasis} \\
    \DeclareMember{elaboration}{TypeIndex} \\
  }
  \caption{Structure of a placeholder type}
  \label{fig:ifc-placeholder-type-structure}
\end{figure}
%
The \field{constraint} field designates the (generalized) predicate that the placeholder shall satisfy -- at the input source level. The \field{basis} field denotes the type basis (\secref{sec:ifc-fundamental-type-basis}) pattern expected for the placeholder type, either \code{auto} or \code{decltype(auto)}. The \field{elaboration} field, if non-null, designates the deduced type corresponding to the placeholder type.

\partition{type.placeholder}

\subsection{\valueTag{TypeSort::Tuple}}
\label{sec:ifc:TypeSort:Tuple}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Tuple} represents an abstract reference to a tuple type, i.e., a finite sequence of types. The \field{index} field is an index into the tuple type partition. Each entry in that partition is a structure with two components: a \field{start} field, and a \field{cardinality} field.
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{start}{Index} \\
    \DeclareMember{cardinality}{Cardinality} \\
  }
  \caption{Structure of a tuple type}
  \label{fig:ifc-tuple-type-structure}
\end{figure}
%
The \field{start} field is an index into the type heap partition. It designates the first element in the tuple. The \field{cardinality} field denotes the number of elements in the tuple.

\partition{type.tuple}

\subsection{\valueTag{TypeSort::Forall}}
\label{sec:ifc:TypeSort:Forall}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Forall} designates a generalized $\forall$-type, as could be ascribed to a template declaration, even though a template declaration does not have a type in ISO C++. The \field{index} field is an index into the forall type partition. Each entry in that partition is a structure with the following layout
%
\begin{figure}[H]
  \centering
  \structure{
    \DeclareMember{chart}{ChartIndex} \\
    \DeclareMember{subject}{TypeIndex} \\
  }
  \caption{Structure of a forall type}
  \label{fig:ifc-forall-type-structure}
\end{figure}
%
The \field{chart} field designates the set of parameter lists in the template declaration. The \field{subject} field designates the type of the current instantiation.

\partition{type.forall}

\note{This generalized type is not yet used in the current version of MSVC. However, it will be used in future releases of MSVC.}

\subsection{\valueTag{TypeSort::Unaligned}}
\label{sec:ifc:TypeSort:Unaligned}

A \type{TypeIndex} value with tag \valueTag{TypeSort::Unaligned} represents an abstract reference to a type with the \code{__unaligned} specifier.
The \field{index} field is an index into the unaligned type partition. Each entry in that partition is a structure with a single component: the \field{type} field. % \begin{figure}[H] \centering \structure{ \DeclareMember{type}{TypeIndex} \\ } \caption{Structure of an unaligned type} \label{fig:ifc-unaligned-type-structure} \end{figure} % \partition{type.unaligned} \paragraph{Note:} An \code{__unaligned} type is an MSVC extension, with no clearly defined semantics. This representation should be expressed in a more regular framework of vendor-extension types. \subsection{\valueTag{TypeSort::SyntaxTree}} \label{sec:ifc:TypeSort:SyntaxTree} A \type{TypeIndex} value with tag \valueTag{TypeSort::SyntaxTree} represents an abstract reference to a type designated by a raw syntax construct (\secref{sec:ifc-syntax-tree-table}). The \field{index} field is an index into the syntax tree type partition. Each entry in that partition is a structure with a single component: the \field{syntax} field: % \begin{figure}[H] \centering \structure{ \DeclareMember{syntax}{SyntaxIndex} \\ } \caption{Structure of a syntax tree type} \label{fig:ifc-syntax-tree-type-structure} \end{figure} % \partition{type.syntax-tree}
{ "alphanum_fraction": 0.7624326564, "avg_line_length": 34.8028846154, "ext": "tex", "hexsha": "daea39cb613a065ca1c13128f1b9d186d77db5f5", "lang": "TeX", "max_forks_count": 9, "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:27:21.000Z", "max_forks_repo_forks_event_min_datetime": "2021-11-03T14:23:06.000Z", "max_forks_repo_head_hexsha": "87a92d065ebf4667ee52b7d256bda3333fbcc458", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "cdacamar/ifc-spec", "max_forks_repo_path": "ltx/types.tex", "max_issues_count": 13, "max_issues_repo_head_hexsha": "87a92d065ebf4667ee52b7d256bda3333fbcc458", "max_issues_repo_issues_event_max_datetime": "2022-02-17T20:39:05.000Z", "max_issues_repo_issues_event_min_datetime": "2021-10-30T15:49:40.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "cdacamar/ifc-spec", "max_issues_repo_path": "ltx/types.tex", "max_line_length": 202, "max_stars_count": 18, "max_stars_repo_head_hexsha": "87a92d065ebf4667ee52b7d256bda3333fbcc458", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "cdacamar/ifc-spec", "max_stars_repo_path": "ltx/types.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-18T21:15:40.000Z", "max_stars_repo_stars_event_min_datetime": "2021-11-06T21:23:02.000Z", "num_tokens": 7883, "size": 28956 }