\subsection{Inception modules}
{ "alphanum_fraction": 0.7878787879, "avg_line_length": 8.25, "ext": "tex", "hexsha": "0c61047b6d25e8502c14186fc01ac1478f95f968", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/statistics/neuralNetworksConvolution/01-05-Inception.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/statistics/neuralNetworksConvolution/01-05-Inception.tex", "max_line_length": 30, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/statistics/neuralNetworksConvolution/01-05-Inception.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8, "size": 33 }
\documentclass[a4paper]{scrreprt} %\documentclass[a4paper]{report} % Uncomment to optimize for double-sided printing. % \KOMAoptions{twoside} % Set binding correction manually, if known. % \KOMAoptions{BCOR=2cm} % Localization options \usepackage[english]{babel} \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} % Enhanced verbatim sections. We're mainly interested in % \verbatiminput though. \usepackage{verbatim} % PDF-compatible landscape mode. % Makes PDF viewers show the page rotated by 90°. \usepackage{pdflscape} % Advanced tables \usepackage{tabu} \usepackage{longtable} % Fancy tablerules \usepackage{booktabs} % Graphics \usepackage{graphicx} \usepackage{float} % Current time \usepackage[useregional=numeric]{datetime2} % Float barriers. % Automatically add a FloatBarrier to each \section \usepackage[section]{placeins} % Custom header and footer \usepackage{fancyhdr} \setlength{\headheight}{15.2pt} \pagestyle{fancyplain} \usepackage{geometry} \usepackage{layout} % Math tools \usepackage{mathtools} % Math symbols \usepackage{amsmath,amsfonts,amssymb} \fancyhf{} % Chapter header on non-plain pages only. \lhead{\fancyplain{} {\leftmark}} % Footer must contain print date. Ugly, but IPA requirement. \lfoot{\printdate} % Print date left and page count right was the thing which looked the % most balanced. \rfoot{\thepage} % % Source code & highlighting \usepackage{listings} % Convenience commands \newcommand{\mailsubject}{2407 - Computernetze - Practical exercise 2} \newcommand{\maillink}[1]{\href{mailto:#1?subject=\mailsubject} {#1}} % Should use this command wherever the print date is mentioned. \newcommand{\printdate}{\today} \subject{2407 - Computernetze} \title{Practical exercise 3} \author{Michael Senn \maillink{[email protected]}} \date{\printdate} % Needs to be the last command in the preamble, for one reason or % another. \usepackage{hyperref} \begin{document} \maketitle % \tableofcontents \chapter{IPSec VPN} For this exercise we have set up an IPSec VPN between the two routers R1 and R2, therefore allowing eg transparently encrypted communication between VM1 and VM2 over an (imaginary) insecure network. \section{Router configuration} \subsection{Access lists} In order to determine which traffic will be encrypted, ACLs were used. In this case, ACLs are a list of source/destination tuples, against which traffic is matched. If they do match one of the ACL's conditions, it will be encrypted. In this case, we have decided to only encrypt traffic which is between VM1 and VM2. If desired, it would be easy to change this ACL to eg encrypt all traffic between the two subnets the VMs reside in. One such possible configuration is included in the snippet below, albeit commented out. It should be noted that ACLs use wildcard masks rather than subnet masks - wildcard masks being the inverse of their corresponding subnet mask. \begin{lstlisting}[caption=Router 1] conf t ! access-list 100 permit ip 10.0.1.0 0.0.0.255 10.0.2.0 0.0.0.255 access-list 100 permit ip host 10.0.1.2 host 10.0.2.2 exit \end{lstlisting} \begin{lstlisting}[caption=Router 2] conf t ! access-list 100 permit ip 10.0.2.0 0.0.0.255 10.0.1.0 0.0.0.255 access-list 100 permit ip host 10.0.2.2 host 10.0.1.2 exit \end{lstlisting} \subsection{Cryptography settings} Next, we configured several things \begin{description} \item[IPSec Transform sets] Here we defined with which algorithm traffic is processed. The two IPsec peers have to have at least one set of operations in common. 
\item[Crypto maps] Here we defined the IPSec peer of a given router, as well as which transform set to use.
\item[ISAKMP policy] Here we defined the mechanism of the security key exchange. In our case, we decided to use a pre-shared key for simplicity.
\end{description}

\begin{lstlisting}[caption=Router 1]
conf t
crypto ipsec transform-set ex3 esp-des esp-md5-hmac
crypto map ex3 10 ipsec-isakmp
match address 100
set peer 10.0.100.2
set transform-set ex3
exit
crypto isakmp policy 1
hash md5
authentication pre-share
exit
crypto isakmp key 0 secretkey address 10.0.100.2 255.255.255.255
exit
\end{lstlisting}

\begin{lstlisting}[caption=Router 2]
conf t
crypto ipsec transform-set ex3 esp-des esp-md5-hmac
crypto map ex3 10 ipsec-isakmp
match address 100
set peer 10.0.100.1
set transform-set ex3
exit
crypto isakmp policy 1
hash md5
authentication pre-share
exit
crypto isakmp key 0 secretkey address 10.0.100.1 255.255.255.255
exit
\end{lstlisting}

\subsection{Interface configuration}

Lastly, the defined crypto maps had to be applied to the interfaces between which the tunnel was going to be established.

\begin{lstlisting}[caption=Router 1]
conf t
interface fastethernet 0/1
crypto map ex3
exit
exit
\end{lstlisting}

\begin{lstlisting}[caption=Router 2]
conf t
interface fastethernet 0/0
crypto map ex3
exit
exit
\end{lstlisting}

\section{Results}

In order to comfortably test whether traffic between machines was encrypted, we used \texttt{netcat}, which allows one to open a basic TCP server on one machine and send arbitrary data to it from another. In our case we used it to send plaintext messages, which could be read on the other end. In addition, we used Wireshark to capture traffic on the two hubs between R2 and R1/R3 respectively.

\subsection{VM2 - VM3}

As there was no IPSec tunnel set up between the two routers R2 and R3, this traffic was not encrypted. This can be clearly seen in several ways:
\begin{itemize}
\item Source and destination IP are the IPs of VM2 / VM3.
\item The protocol used atop IP is TCP.
\item The sent message body can be seen in the payload.
\end{itemize}

\subsection{VM2 - VM1}

As an IPsec tunnel was set up, traffic between R2 and R1 will be encrypted. This can be seen in several ways:
\begin{itemize}
\item Source and destination IP are the IPs of the routers, since they tunnel the packets.
\item The protocol used atop IP is ESP.
\item The sent message body cannot be seen in the payload.
\end{itemize}

\subsection{Screenshots VM2 - VM3}

\begin{figure}[H]
\centering
\textbf{Netcat server on VM3}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/nc_vm3.png}
\end{figure}

\begin{figure}[H]
\centering
\textbf{Netcat client on VM2 to VM3}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/nc_vm2_to_vm3.png}
\end{figure}

\begin{figure}[H]
\centering
\textbf{Non-encrypted traffic on hub}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/wireshark_hub2_to_r2.png}
\end{figure}

\subsection{Screenshots VM2 - VM1}

\begin{figure}[H]
\centering
\textbf{Netcat server on VM1}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/nc_vm1.png}
\end{figure}

\begin{figure}[H]
\centering
\textbf{Netcat client on VM2 to VM1}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/nc_vm2_to_vm1.png}
\end{figure}

\begin{figure}[H]
\centering
\textbf{Encrypted traffic on hub}\par\medskip
\includegraphics[width=0.9\textwidth]{resources/wireshark_hub1_to_r2.png}
\end{figure}

\end{document}
{ "alphanum_fraction": 0.762435558, "avg_line_length": 25.0069686411, "ext": "tex", "hexsha": "8aa66afc34d75f57e6cc2249006f3c0cc468ce56", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "31013a23eab9362c7b93d4da739cbd7522659565", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Lavode/2407-computernetze", "max_forks_repo_path": "practical_exercises/03/solution.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "31013a23eab9362c7b93d4da739cbd7522659565", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Lavode/2407-computernetze", "max_issues_repo_path": "practical_exercises/03/solution.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "31013a23eab9362c7b93d4da739cbd7522659565", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Lavode/2407-computernetze", "max_stars_repo_path": "practical_exercises/03/solution.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2095, "size": 7177 }
% Note that the a4paper option is mainly intended so that authors in % countries using A4 can easily print to A4 and see how their papers will % look in print - the typesetting of the document will not typically be % affected with changes in paper size (but the bottom and side margins will). % Use the testflow package mentioned above to verify correct handling of % both paper sizes by the user's LaTeX system. % % Also note that the "draftcls" or "draftclsnofoot", not "draft", option % should be used if it is desired that the figures are to be displayed in % draft mode. % \documentclass[conference]{IEEEtran} % Add the compsoc option for Computer Society conferences. % % If IEEEtran.cls has not been installed into the LaTeX system files, % manually specify the path to it like: % \documentclass[conference]{../sty/IEEEtran} % Some very useful LaTeX packages include: % (uncomment the ones you want to load) % *** MISC UTILITY PACKAGES *** % %\usepackage{ifpdf} % Heiko Oberdiek's ifpdf.sty is very useful if you need conditional % compilation based on whether the output is pdf or dvi. % usage: % \ifpdf % % pdf code % \else % % dvi code % \fi % The latest version of ifpdf.sty can be obtained from: % http://www.ctan.org/tex-archive/macros/latex/contrib/oberdiek/ % Also, note that IEEEtran.cls V1.7 and later provides a builtin % \ifCLASSINFOpdf conditional that works the same way. % When switching from latex to pdflatex and vice-versa, the compiler may % have to be run twice to clear warning/error messages. % *** CITATION PACKAGES *** % %\usepackage{cite} % cite.sty was written by Donald Arseneau % V1.6 and later of IEEEtran pre-defines the format of the cite.sty package % \cite{} output to follow that of IEEE. Loading the cite package will % result in citation numbers being automatically sorted and properly % "compressed/ranged". e.g., [1], [9], [2], [7], [5], [6] without using % cite.sty will become [1], [2], [5]--[7], [9] using cite.sty. cite.sty's % \cite will automatically add leading space, if needed. Use cite.sty's % noadjust option (cite.sty V3.8 and later) if you want to turn this off. % cite.sty is already installed on most LaTeX systems. Be sure and use % version 4.0 (2003-05-27) and later if using hyperref.sty. cite.sty does % not currently provide for hyperlinked citations. % The latest version can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/cite/ % The documentation is contained in the cite.sty file itself. % *** GRAPHICS RELATED PACKAGES *** % \ifCLASSINFOpdf \usepackage[pdftex]{graphicx} % declare the path(s) where your graphic files are \graphicspath{{../Output/}{/}} % and their extensions so you won't have to specify these with % every instance of \includegraphics \DeclareGraphicsExtensions{.pdf,.jpeg,.jpg,.png} \else % or other class option (dvipsone, dvipdf, if not using dvips). graphicx % will default to the driver specified in the system graphics.cfg if no % driver is specified. % \usepackage[dvips]{graphicx} % declare the path(s) where your graphic files are % \graphicspath{{../eps/}} % and their extensions so you won't have to specify these with % every instance of \includegraphics % \DeclareGraphicsExtensions{.eps} \fi % graphicx was written by David Carlisle and Sebastian Rahtz. It is % required if you want graphics, photos, etc. graphicx.sty is already % installed on most LaTeX systems. 
The latest version and documentation can % be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/graphics/ % Another good source of documentation is "Using Imported Graphics in % LaTeX2e" by Keith Reckdahl which can be found as epslatex.ps or % epslatex.pdf at: http://www.ctan.org/tex-archive/info/ % % latex, and pdflatex in dvi mode, support graphics in encapsulated % postscript (.eps) format. pdflatex in pdf mode supports graphics % in .pdf, .jpeg, .png and .mps (metapost) formats. Users should ensure % that all non-photo figures use a vector format (.eps, .pdf, .mps) and % not a bitmapped formats (.jpeg, .png). IEEE frowns on bitmapped formats % which can result in "jaggedy"/blurry rendering of lines and letters as % well as large increases in file sizes. % % You can find documentation about the pdfTeX application at: % http://www.tug.org/applications/pdftex % *** MATH PACKAGES *** % \usepackage[cmex10]{amsmath} % A popular package from the American Mathematical Society that provides % many useful and powerful commands for dealing with mathematics. If using % it, be sure to load this package with the cmex10 option to ensure that % only type 1 fonts will utilized at all point sizes. Without this option, % it is possible that some math symbols, particularly those within % footnotes, will be rendered in bitmap form which will result in a % document that can not be IEEE Xplore compliant! % % Also, note that the amsmath package sets \interdisplaylinepenalty to 10000 % thus preventing page breaks from occurring within multiline equations. Use: %\interdisplaylinepenalty=2500 % after loading amsmath to restore such page breaks as IEEEtran.cls normally % does. amsmath.sty is already installed on most LaTeX systems. The latest % version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/amslatex/math/ % *** SPECIALIZED LIST PACKAGES *** % %\usepackage{algorithmic} % algorithmic.sty was written by Peter Williams and Rogerio Brito. % This package provides an algorithmic environment fo describing algorithms. % You can use the algorithmic environment in-text or within a figure % environment to provide for a floating algorithm. Do NOT use the algorithm % floating environment provided by algorithm.sty (by the same authors) or % algorithm2e.sty (by Christophe Fiorio) as IEEE does not use dedicated % algorithm float types and packages that provide these will not provide % correct IEEE style captions. The latest version and documentation of % algorithmic.sty can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/algorithms/ % There is also a support site at: % http://algorithms.berlios.de/index.html % Also of interest may be the (relatively newer and more customizable) % algorithmicx.sty package by Szasz Janos: % http://www.ctan.org/tex-archive/macros/latex/contrib/algorithmicx/ % *** ALIGNMENT PACKAGES *** % \usepackage{array} % Frank Mittelbach's and David Carlisle's array.sty patches and improves % the standard LaTeX2e array and tabular environments to provide better % appearance and additional user controls. As the default LaTeX2e table % generation code is lacking to the point of almost being broken with % respect to the quality of the end results, all users are strongly % advised to use an enhanced (at the very least that provided by array.sty) % set of table tools. array.sty is already installed on most systems. 
The % latest version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/tools/ %\usepackage{mdwmath} %\usepackage{mdwtab} % Also highly recommended is Mark Wooding's extremely powerful MDW tools, % especially mdwmath.sty and mdwtab.sty which are used to format equations % and tables, respectively. The MDWtools set is already installed on most % LaTeX systems. The lastest version and documentation is available at: % http://www.ctan.org/tex-archive/macros/latex/contrib/mdwtools/ % IEEEtran contains the IEEEeqnarray family of commands that can be used to % generate multiline equations as well as matrices, tables, etc., of high % quality. %\usepackage{eqparbox} % Also of notable interest is Scott Pakin's eqparbox package for creating % (automatically sized) equal width boxes - aka "natural width parboxes". % Available at: % http://www.ctan.org/tex-archive/macros/latex/contrib/eqparbox/ % *** SUBFIGURE PACKAGES *** \usepackage[tight,footnotesize]{subfigure} % subfigure.sty was written by Steven Douglas Cochran. This package makes it % easy to put subfigures in your figures. e.g., "Figure 1a and 1b". For IEEE % work, it is a good idea to load it with the tight package option to reduce % the amount of white space around the subfigures. subfigure.sty is already % installed on most LaTeX systems. The latest version and documentation can % be obtained at: % http://www.ctan.org/tex-archive/obsolete/macros/latex/contrib/subfigure/ % subfigure.sty has been superceeded by subfig.sty. %\usepackage[caption=false]{caption} %\usepackage[font=footnotesize]{subfig} % subfig.sty, also written by Steven Douglas Cochran, is the modern % replacement for subfigure.sty. However, subfig.sty requires and % automatically loads Axel Sommerfeldt's caption.sty which will override % IEEEtran.cls handling of captions and this will result in nonIEEE style % figure/table captions. To prevent this problem, be sure and preload % caption.sty with its "caption=false" package option. This is will preserve % IEEEtran.cls handing of captions. Version 1.3 (2005/06/28) and later % (recommended due to many improvements over 1.2) of subfig.sty supports % the caption=false option directly: %\usepackage[caption=false,font=footnotesize]{subfig} % % The latest version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/subfig/ % The latest version and documentation of caption.sty can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/caption/ % *** FLOAT PACKAGES *** % %\usepackage{fixltx2e} % fixltx2e, the successor to the earlier fix2col.sty, was written by % Frank Mittelbach and David Carlisle. This package corrects a few problems % in the LaTeX2e kernel, the most notable of which is that in current % LaTeX2e releases, the ordering of single and double column floats is not % guaranteed to be preserved. Thus, an unpatched LaTeX2e can allow a % single column figure to be placed prior to an earlier double column % figure. The latest version and documentation can be found at: % http://www.ctan.org/tex-archive/macros/latex/base/ %\usepackage{stfloats} % stfloats.sty was written by Sigitas Tolusis. This package gives LaTeX2e % the ability to do double column floats at the bottom of the page as well % as the top. (e.g., "\begin{figure*}[!b]" is not normally possible in % LaTeX2e). 
It also provides a command: %\fnbelowfloat % to enable the placement of footnotes below bottom floats (the standard % LaTeX2e kernel puts them above bottom floats). This is an invasive package % which rewrites many portions of the LaTeX2e float routines. It may not work % with other packages that modify the LaTeX2e float routines. The latest % version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/sttools/ % Documentation is contained in the stfloats.sty comments as well as in the % presfull.pdf file. Do not use the stfloats baselinefloat ability as IEEE % does not allow \baselineskip to stretch. Authors submitting work to the % IEEE should note that IEEE rarely uses double column equations and % that authors should try to avoid such use. Do not be tempted to use the % cuted.sty or midfloat.sty packages (also by Sigitas Tolusis) as IEEE does % not format its papers in such ways. % *** PDF, URL AND HYPERLINK PACKAGES *** % \usepackage{url} % url.sty was written by Donald Arseneau. It provides better support for % handling and breaking URLs. url.sty is already installed on most LaTeX % systems. The latest version can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/misc/ % Read the url.sty source comments for usage information. Basically, % \url{my_url_here}. % *** Do not adjust lengths that control margins, column widths, etc. *** % *** Do not use packages that alter fonts (such as pslatex). *** % There should be no need to do such things with IEEEtran.cls V1.6 and later. % (Unless specifically asked to do so by the journal or conference you plan % to submit to, of course. ) % correct bad hyphenation here \hyphenation{op-tical net-works semi-conduc-tor} \usepackage{tikz} \def\layersep{2.5cm} \begin{document} % % paper title % can use linebreaks \\ within to get better formatting as desired \title{Character Recognition Using Artificial Neural Networks} % author names and affiliations % use a multiple column layout for up to three different % affiliations \author{\IEEEauthorblockN{Michael Single} \IEEEauthorblockA{University of Berne\\ 08-917-445} \and \IEEEauthorblockN{Stefan Moser} \IEEEauthorblockA{University of Berne\\ 09-277-013} } % make the title area \maketitle \section{Artificial neural networks} \begin{figure} \begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep] \tikzstyle{every pin edge}=[<-,shorten <=1pt] \tikzstyle{neuron}=[circle,fill=black!25,minimum size=17pt,inner sep=0pt] \tikzstyle{input neuron}=[neuron, fill=green!50]; \tikzstyle{output neuron}=[neuron, fill=red!50]; \tikzstyle{hidden neuron}=[neuron, fill=blue!50]; \tikzstyle{annot} = [text width=4em, text centered] % Draw the input layer nodes \foreach \name / \y in {1,...,5} \path[yshift=-2cm] node[input neuron, pin=left:Feature \#\y] (I-\name) at (0,-\y) {}; % Draw the hidden layer nodes \foreach \name / \y in {1,...,10} \path[yshift=0cm] node[hidden neuron] (H-\name) at (\layersep,-\y cm) {}; % Draw the output layer nodes \foreach \name / \y in {0,...,9} \path[yshift=0cm] node[output neuron, pin={[pin edge={->}]right:\name}] (O-\name) at (2*\layersep, -\y cm -1 cm) {}; % Connect every node in the input layer with every node in the % hidden layer. 
\foreach \source in {1,...,5} \foreach \dest in {1,...,10} \path (I-\source) edge (H-\dest); % Connect every node in the hidden layer with the output layer \foreach \dest in {1,...,10} \foreach \output in {0,...,9} \path (H-\dest) edge (O-\output); % Annotate the layers \node[annot,above of=H-1, node distance=1cm] (hl) {Hidden layer}; \node[annot,left of=hl] {Input layer}; \node[annot,right of=hl] {Output layer}; \end{tikzpicture} \caption{Structure of a artificial neuronal network with 5 input features, a hidden layer with 10 nodes and our 10 output classes, the numbers from 0 to 9. Every node is connected to all nodes of the next level. } \label{fig:example_ann} \end{figure} Every neuron as a number of input signals $x$ that are weighted with the neurons weights $w$. The \emph{activation} $a$ of a neuron is computed as \begin{equation} a = \sum_{i = 1}^n x_i \cdot w_i \end{equation} The \emph{output signal} $y$ further refines the activation by applying a bias $w_0$ and applying a non linear function $f$ \begin{equation} y = f(a - w_0) \end{equation} For the results obtained in this report, we used the sigmoid function for $f$ \begin{equation} f(t) = \operatorname{sig}(t) = \frac{1}{1 + e^{-t}} = \frac{1}{2}\cdot\left(1 + \tanh \frac{t}{2}\right) \end{equation} The weights $w$ of each neuron are learnt using \emph{reinforcement learning}. Since we never received any source code for the library we used, we can not comment on how exactly this is done, except that the initial weights are chosen randomly, leading to a slightly different outcome in every run. \section{Results} The structure of the neural network largely decides its performance. An example structure can be seen in Figure \ref{fig:example_ann}. The main focus of this report is to analyse the results produced by different structures and different input features. Our first experiments were done using the pixel values\footnote{Normalized to the range $[-1, 1]$} directly as features. One observation we made was, that the learning rate must be set dependent on the the number of neurons used. A plot comparing these two properties can be seen in Figure \ref{fig:learning_rate_vs_number_neurons}. We could also observe the most prevalent disadvantage of artificial neural networks. While the performance with large datasets is quite good, they take a long time to learn. If the training set size is reduced, accuracy drops considerably. Some trials showing this behavior can be seen in Figure \ref{fig:trainig_set_size_vs_number_neurons}. \begin{figure} \center \includegraphics[width=0.45\textwidth]{../output/evaluation/learning_rates_e_100} \caption{Different learning rates with different number of neurons. It becomes apparent, that very low learning rates work well with low numbers of neurons, but high learning rates work well with higher number of neurons.} \label{fig:learning_rate_vs_number_neurons} \end{figure} \begin{figure} \center \includegraphics[width=0.45\textwidth]{../output/evaluation/{training_set_size_e_100_lr_0.01}.png} \caption{Different sizes of training sets with different number of neurons. As expected, the more training samples available, the better the results. However, the time used for learning does also grow with training set size.} \label{fig:trainig_set_size_vs_number_neurons} \end{figure} \subsection{Advanced Features} To decrease the time needed for training, we reduced the dimensionality of the problem by decreasing the number of features. We did this by simply downscaling the image to a fourth. 
As expected, this brought a great improvement in learning time, as the dimensionality of the problem was brought down from 748 to 49. But at the same time accuracy decreased for high number of neurons. The effect was larger, the smaller the learning rate was, as can be seen in Figure \ref{fig:features_comparison}. With only few neurons used in the hidden layer, the accuracy was about the same. \begin{figure}[ht!]% \centering \subfigure[Learning rate 0.1]{\includegraphics[width=0.45\textwidth]{../output/evaluation/{different_features_e_100_lr_0.1}.png}} \\ \subfigure[Learning rate 0.01]{\includegraphics[width=0.45\textwidth]{../output/evaluation/{different_features_e_100_lr_0.01}.png}}\\ \subfigure[Learning rate 0.001]{\includegraphics[width=0.45\textwidth]{../output/evaluation/{different_features_e_100_lr_0.001}.png}} \caption{Comparisons between our two tested features: In yellow the pixel values of the full image (748 scalar values), in blue the pixel values of the image scaled to one fourth (49 scalar values).} \label{fig:features_comparison} \end{figure} % An example of a floating figure using the graphicx package. % Note that \label must occur AFTER (or within) \caption. % For figures, \caption should occur after the \includegraphics. % Note that IEEEtran v1.7 and later has special internal code that % is designed to preserve the operation of \label within \caption % even when the captionsoff option is in effect. However, because % of issues like this, it may be the safest practice to put all your % \label just after \caption rather than within \caption{}. % % Reminder: the "draftcls" or "draftclsnofoot", not "draft", class % option should be used if it is desired that the figures are to be % displayed while in draft mode. % %\begin{figure}[!t] %\centering %\includegraphics[width=2.5in]{myfigure} % where an .eps filename suffix will be assumed under latex, % and a .pdf suffix will be assumed for pdflatex; or what has been declared % via \DeclareGraphicsExtensions. %\caption{Simulation Results} %\label{fig_sim} %\end{figure} % Note that IEEE typically puts floats only at the top, even when this % results in a large percentage of a column being occupied by floats. % An example of a double column floating figure using two subfigures. % (The subfig.sty package must be loaded for this to work.) % The subfigure \label commands are set within each subfloat command, the % \label for the overall figure must come after \caption. % \hfil must be used as a separator to get equal spacing. % The subfigure.sty package works much the same way, except \subfigure is % used instead of \subfloat. % %\begin{figure*}[!t] %\centerline{\subfloat[Case I]\includegraphics[width=2.5in]{subfigcase1}% %\label{fig_first_case}} %\hfil %\subfloat[Case II]{\includegraphics[width=2.5in]{subfigcase2}% %\label{fig_second_case}}} %\caption{Simulation results} %\label{fig_sim} %\end{figure*} % % Note that often IEEE papers with subfigures do not employ subfigure % captions (using the optional argument to \subfloat), but instead will % reference/describe all of them (a), (b), etc., within the main caption. % An example of a floating table. Note that, for IEEE style tables, the % \caption command should come BEFORE the table. Table text will default to % \footnotesize as IEEE normally uses this smaller font for tables. % The \label must come after \caption as always. 
% %\begin{table}[!t] %% increase table row spacing, adjust to taste %\renewcommand{\arraystretch}{1.3} % if using array.sty, it might be a good idea to tweak the value of % \extrarowheight as needed to properly center the text within the cells %\caption{An Example of a Table} %\label{table_example} %\centering %% Some packages, such as MDW tools, offer better commands for making tables %% than the plain LaTeX2e tabular which is used here. %\begin{tabular}{|c||c|} %\hline %One & Two\\ %\hline %Three & Four\\ %\hline %\end{tabular} %\end{table} % Note that IEEE does not put floats in the very first column - or typically % anywhere on the first page for that matter. Also, in-text middle ("here") % positioning is not used. Most IEEE journals/conferences use top floats % exclusively. Note that, LaTeX2e, unlike IEEE journals/conferences, places % footnotes above bottom floats. This can be corrected via the \fnbelowfloat % command of the stfloats package. % trigger a \newpage just before the given reference % number - used to balance the columns on the last page % adjust value as needed - may need to be readjusted if % the document is modified later %\IEEEtriggeratref{8} % The "triggered" command can be changed if desired: %\IEEEtriggercmd{\enlargethispage{-5in}} \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,report} \end{document}
{ "alphanum_fraction": 0.7533887584, "avg_line_length": 41.0612244898, "ext": "tex", "hexsha": "3dd5bfac063ef9a58334ebab0de8e78920fdb584", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b4b1c5b9df935db34dd070fbb0d26299f656ca61", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "simplay/document_analysis", "max_forks_repo_path": "4/report/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b4b1c5b9df935db34dd070fbb0d26299f656ca61", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "simplay/document_analysis", "max_issues_repo_path": "4/report/report.tex", "max_line_length": 133, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b4b1c5b9df935db34dd070fbb0d26299f656ca61", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "simplay/document_analysis", "max_stars_repo_path": "4/report/report.tex", "max_stars_repo_stars_event_max_datetime": "2015-08-04T19:19:45.000Z", "max_stars_repo_stars_event_min_datetime": "2015-08-04T19:19:45.000Z", "num_tokens": 5790, "size": 22132 }
% Part: second-order-logic % Chapter: syntax-and-semantics \documentclass[../../../include/open-logic-chapter]{subfiles} \begin{document} \chapter{Syntax and Semantics} \begin{editorial} Basic syntax and semantics for SOL covered so far. As a chapter it's too short. Substitution for second-order variables has to be covered to be able to talk about derivation systems for SOL, and there's some subtle issues there. \end{editorial} \olimport{introduction} \olimport{terms-formulas} \olimport{satisfaction} \olimport{semantic-notions} \olimport{expressive-power} \olimport{inf-count} \OLEndChapterHook \end{document}
{ "alphanum_fraction": 0.7738853503, "avg_line_length": 19.625, "ext": "tex", "hexsha": "cf2b116c95260f11731d1638dc1b52c4d02cd498", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fb3ec284177cfbc65760f76c16abb563c5477096", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "cardsorg/OpenLogic", "max_forks_repo_path": "content/second-order-logic/syntax-and-semantics/syntax-and-semantics.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fb3ec284177cfbc65760f76c16abb563c5477096", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "cardsorg/OpenLogic", "max_issues_repo_path": "content/second-order-logic/syntax-and-semantics/syntax-and-semantics.tex", "max_line_length": 69, "max_stars_count": 3, "max_stars_repo_head_hexsha": "fb3ec284177cfbc65760f76c16abb563c5477096", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "cardsorg/OpenLogic", "max_stars_repo_path": "content/second-order-logic/syntax-and-semantics/syntax-and-semantics.tex", "max_stars_repo_stars_event_max_datetime": "2021-12-19T01:35:02.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-17T00:08:35.000Z", "num_tokens": 162, "size": 628 }
\subsection{The name of the society} \begin{enumerate}[i.] \item There shall be a “University of Nottingham Doctor Who Society” or “UoNSU Soctor Who”. \end{enumerate}
{ "alphanum_fraction": 0.7470588235, "avg_line_length": 42.5, "ext": "tex", "hexsha": "eaf80d413e5acf717b52ab9287123234c56a4e14", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-14T14:15:33.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-14T14:15:33.000Z", "max_forks_repo_head_hexsha": "4705fac2ab1b0eb70017d43933ae01b042b67749", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Emersont1/constitution", "max_forks_repo_path": "name.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "4705fac2ab1b0eb70017d43933ae01b042b67749", "max_issues_repo_issues_event_max_datetime": "2021-03-20T02:02:12.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-13T13:02:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Emersont1/constitution", "max_issues_repo_path": "name.tex", "max_line_length": 95, "max_stars_count": 1, "max_stars_repo_head_hexsha": "1a728a05d02e8535cd3dfa408d3e61821cb86e62", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Soctor-Who/constitution", "max_stars_repo_path": "name.tex", "max_stars_repo_stars_event_max_datetime": "2021-03-14T14:15:55.000Z", "max_stars_repo_stars_event_min_datetime": "2021-03-14T14:15:55.000Z", "num_tokens": 48, "size": 170 }
\chapter{Experiments}
%Basic Environments, PickPlace is harder than reach etc., Plappert et. al., HER paper..
This chapter describes the experiments that were carried out and their results. The chapter is divided into three sections.
%change number of chapters if it changes
\newline
First, the four robotics environments ``FetchReach'', ``FetchPush'', ``FetchSlide'' and ``FetchPickAndPlace'', which are already integrated in OpenAI Gym, are compared and used as benchmarks.
%ref Plappert et al.
Then the two self-created environments, FetchSlideball and FetchToss, are described.
\vspace{0.5cm}
``FetchSlideball'' is an extension of FetchSlide. We changed the object from a cylinder to a ball and increased the distance to the goal. It will be compared with the FetchSlide environment of the benchmarks. In the future, it is planned to extend this environment into one that simulates a golf course.
\vspace{0.5cm}
``FetchToss'' requires the agent to toss an object to a goal that is outside of the agent's reach. The agent has to grab the object, find the right trajectory to move it, and release it to toss it. The first part of the task is comparable to FetchPickAndPlace, which requires the fetch robot to fetch the object. The plan is to extend this environment so that the robot tosses a ball into a basket, as in basketball.
\vspace{0.5cm}
In each section, the task and environment are described, together with the action space, observation space and rewards used to control the agent. Then the results are discussed and compared to the other tasks.
%We want to have golf/basketball , so we use a ball instead
%We change the distance
%doesnt work because too far, maybe also reasons from HGG
%change friction
%interesting results, cheating v9
%use v9 on v2 is not bad, show v5
%(do v9 with wall)
%basketball
%throwing, doesnt work, too hard
%throwing ball also
%put wall, maybe it throws over goal
%try longer timesteps
%try putting box closer
\section{Fetch environments by OpenAI}
In this section, the four basic robotics environments of OpenAI with the fetch robotic arm are briefly described and compared.
\subsection{FetchReach}
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchReach-v1.pdf}
\caption{FetchReach-v1}
\label{reach1}
\end{figure}
The environment FetchReach is the simplest of the OpenAI robotic environments. As can be seen in figure \ref{reach1}, the environment consists of the fetch robot, a table and a red ball indicating the goal. The task is to make the robot move its gripper to the same position as the goal. The goal can lie on the table as well as in the air, but it is always above the table, so the robot is always able to reach it. The only thing the robot has to figure out is a path from its starting point to varying goal positions.
\vspace{0.5cm}
Figure \ref{benchm} shows how effective HER is. After just about 7 epochs, the success rate is already at 100\%.
\subsection{FetchPush}
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchPush-v1.pdf}
\caption{FetchPush-v1}
\label{push1}
\end{figure}
FetchPush is already much harder than FetchReach. The environment is the same as in FetchReach, but an object in the form of a cube was added. This can be seen in figure \ref{push1}. The goal is to move the cube to the goal position. This requires the robot to learn how to move its gripper from the start position to the side of the cube that faces away from the goal. Then it needs to move its gripper towards the goal to solve the task.
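All of these tasks expose the same goal-based Gym interface: the observation is a dictionary containing the robot state, the currently achieved goal and the desired goal, and the reward is sparse ($0$ if the achieved goal is within a distance threshold of the desired goal, $-1$ otherwise). The following interaction loop is only a sketch of how such an environment is driven; it assumes the standard \texttt{FetchPush-v1} registration and the older \texttt{gym} API in which \texttt{step} returns a four-tuple.
\begin{verbatim}
import gym

# Sketch only: assumes the gym robotics environments are installed and
# the old gym API where step() returns (obs, reward, done, info).
env = gym.make("FetchPush-v1")
obs = env.reset()
# obs is a dict with keys "observation", "achieved_goal", "desired_goal".
for t in range(50):  # episode length is capped by a TimeLimit wrapper
    action = env.action_space.sample()  # placeholder for the learned policy
    obs, reward, done, info = env.step(action)
    # Sparse reward: 0.0 if the object is within the distance threshold
    # of the desired goal, -1.0 otherwise.
    if info["is_success"]:
        break
\end{verbatim}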
Still, HER proves to be quite powerful: after about 14 epochs the success rate approaches 100\%. In comparison to FetchReach, it took about twice the time to learn to solve the task.
\subsection{FetchSlide}
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchSlide-v1.pdf}
\caption{FetchSlide-v1}
\label{slide1}
\end{figure}
The FetchSlide environment is quite similar to FetchPush. The task is the same: the robot has to move an object, which this time is a cylinder (similar to a curling stone). There are two main differences from FetchPush which make the task harder. The cylinder has less friction and slides, so the robot needs to move it carefully to avoid making it slide too far. Also, the goal position is farther away, partly even outside of the robotic arm's range, so the sliding property of the cylinder has to be used in order to reach those goals.
\vspace{0.5cm}
Using HER provides worse results than for FetchPush. After training, the robot only reaches a success rate of about 60\%. In the failed attempts the robot pushes the cylinder in the right direction, but the distance is not right: the cylinder either slides too far or does not slide far enough. Most of these failures have goal positions that are outside of the robotic arm's range. Interestingly, the robot also struggles with goals that are inside of the robotic arm's range. This is probably due to the sliding property. When the robot touches the cylinder during training, the cylinder will often slide farther away and land outside of the robotic arm's reach. With HER, the agent learns to reach the states that it already reached at some point. Because the cylinder can end up in many more positions due to its sliding property, the agent does not learn to move the cylinder within its range as well as it does in FetchPush.
\subsection{FetchPickAndPlace}
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchPickAndPlace-v1.pdf}
\caption{FetchPickAndPlace-v1}
\label{pickplace2}
\end{figure}
The environment for FetchPickAndPlace is exactly the same as in FetchPush except for the goal position, which can also be in the air. This requires the robot to use its gripper to fetch the cube and move it to goals in the air. So the robot has to learn how to move its gripper towards the cube, open its gripper, then close it to grab the cube. Then it has to move the cube to the goal position without dropping it by opening its gripper. The training results in figure \ref{benchm} show that learning to solve this task works quite well with HER. After about 30 epochs it almost reaches a 100\% success rate.
%maybe in comparison part
The reason why it takes much more time than the FetchPush task becomes apparent when comparing both. In the FetchPush task, the agent needs to learn to move its arm to the side of the cube farther away from the goal, then move its arm towards the goal. We have seen in FetchReach that it is quite simple to learn how to move the arm from one position to another; the difficulty in FetchPush comes from figuring out how to move the object. In FetchPickAndPlace this is even harder, because opening and closing the gripper is also part of the actions the agent can take. Learning how to grab the cube and keep it grabbed seems to be the reason why it takes more time to learn.
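All of the tasks above are trained with the same HER mechanism: transitions from failed episodes are additionally stored with a goal that was actually achieved, so that they still carry useful reward signal. The sketch below shows the common ``final'' relabelling strategy; the transition format and the replay buffer interface are assumptions based on the standard goal-based Gym API, not the exact implementation used for these experiments.
\begin{verbatim}
# Hindsight relabelling with the "final" strategy (sketch).
# episode: list of (obs, action, next_obs) tuples from one rollout, where
# obs["achieved_goal"] and obs["desired_goal"] follow the Gym convention.
def her_final_relabel(episode, env, replay_buffer):
    final_achieved = episode[-1][2]["achieved_goal"]
    for obs, action, next_obs in episode:
        # Store the original transition with the original goal ...
        r = env.compute_reward(next_obs["achieved_goal"],
                               obs["desired_goal"], {})
        replay_buffer.add(obs, action, r, next_obs, obs["desired_goal"])
        # ... and a relabelled copy whose goal is what was finally achieved,
        # so even a failed episode contains successful (goal, reward) pairs.
        r_her = env.compute_reward(next_obs["achieved_goal"],
                                   final_achieved, {})
        replay_buffer.add(obs, action, r_her, next_obs, final_achieved)
\end{verbatim}
The ``future'' strategy works the same way, except that the substitute goal is sampled from states reached later in the same episode instead of only the final state.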
\subsection{Discussion of Results}
%obv. harder tasks take longer and are performing worse
%use Plappert et al. / HER paper as reference
\begin{figure} [h]
\centering
\subfigure[FetchReach-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchReach-v1.pdf}\label{fig:fig_fetchreach-v1}}
\subfigure[FetchPush-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchPush-v1.pdf}\label{fig:fig_fetchpush-v1}}
\subfigure[FetchSlide-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchSlide-v1.pdf}\label{fig:fig_fetchslidea-v1}}
\subfigure[FetchPickAndPlace-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchPickAndPlace-v1.pdf}\label{fig:fig_fetchpickandplace-v1a}}
\caption{Success rate of each task over 50 epochs of training}
\label{benchm}
\end{figure}
Figure \ref{benchm} shows the training results for each task. FetchReach showed that just moving the robotic arm between two points is quite fast and easy to learn.
\newline
Obviously, when comparing, the harder tasks FetchSlide and FetchPickAndPlace perform worse than FetchPush and FetchReach. FetchPickAndPlace, in comparison to FetchPush, introduces the difficulty of having to control the opening and closing of the gripper. Instead of an action space with only 3 variables as in the other tasks, the action space is extended to 4 variables, an increase of 33\%. Having to learn how to grab the cube seems to take about 20 epochs longer than not needing to do it.
%check exact number
\newline
When comparing FetchSlide to FetchPush, the difference is clear: FetchSlide performs much worse.
\newline
One difference between both tasks is the control over the object. In FetchSlide it is much harder to control the cylinder while moving it. The robot either has to hit it with very precise force or stop it if the goal position is within the robot's reach. The large variation between the force used and the distance the cylinder travels makes it hard to learn exactly how much force is needed.
\newline
Another difficulty is added by extending the range where the goal can be positioned. These multi-goal environments, where the goal and object are in variable positions, can be seen as a collection of many simple tasks, where each task is only about moving an object from one fixed position to another. A bigger goal space greatly increases the number of these simple tasks. As can be seen, these obstacles increase the difficulty drastically.
%shortly summarize the obstacles ?
\section{FetchSlideball}
FetchSlideball is an extension of the environment FetchSlide. One of the future plans is to have an agent learn how to play golf. FetchSlideball differs from FetchSlide in two ways: the goal is placed even farther away, and the cylinder is changed to a ball. The task is approached in smaller steps.
%put this in extra section?
First, a simple variant is tested that uses exactly the same environment as FetchSlide, with the only change being that the object is a ball instead of a cylinder. Through this test, the difference in difficulty between using a cylinder and a ball is shown. This is needed to make FetchSlideball comparable to FetchSlide. Afterwards, for the following experiments, the friction and the steps per episode are varied.
\subsection{Task Description}
The task for FetchSlideball is exactly the same as for FetchSlide: the robotic arm has to push a ball from one position to another, using the ball's property of rolling farther.
Rolling the ball and sliding a cylinder might suggest that different friction types are involved, because a ball is subject to rolling friction instead of sliding friction, which is usually much lower. However, in this environment the same amount of friction is used for both objects. The main difference is the stability of the object: while the cylinder can fall on its side and slides differently depending on where it is pushed, the ball stays stable. Also, unlike in FetchSlide, the goal position is guaranteed to be outside the robot's reach. This should make it much harder for the agent to learn how to solve the task.
\subsection{Environment}
\begin{figure} [!h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchSlideball-v3.pdf}
\caption{FetchSlideball-v3}
\label{slideball1}
\end{figure}
The environment can be seen in figure \ref{slideball1}. The size of the table was increased drastically. This was done to ensure that the goal would be on the table. The table is much bigger than necessary in order to accommodate future environments where the goal will be placed at a much greater distance. The object is a ball. As usual, there is a fetch robot and a red sphere marking the goal position.
%put some experiment section for the differennt variables/versions used ?
\subsection{Results}
\begin{figure} [!h]
\centering
\subfigure[FetchSlide-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchSlide-v1.pdf}\label{fig:fig_fetchslide-v1}}
\subfigure[FetchSlideball-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchSlideball-v1.pdf}\label{fig:fig_fetchslideball-v1}}
\caption{FetchSlide with a cylinder (left) and with a ball (right)}
\label{slidecomp}
\end{figure}
First, the FetchSlide environment was used with the only change being the ball. As can be seen in figure \ref{slidecomp}, FetchSlide with a ball performs much better than vanilla FetchSlide with a cylinder. Both learning curves are quite similar: both rise steeply for the first 20 epochs. After the first 20 epochs, the success rate is still rising, but visibly slower. While FetchSlide with the cylinder only reaches a success rate of about 60\%, FetchSlide with a ball reaches about 80\%. The difference might be explained by the ball being more stable. The cylinder that is used in the normal FetchSlide environment can fall over when it is pushed at a bad angle; this cannot happen to a ball.
%werid explanation, try to find out more
\newline
Afterwards, the experiments continue with the FetchSlideball environment and its bigger goal distance. The new distance from the start position of the ball to the goal position is about double the distance of the normal FetchSlide environment.
%fill in exact distances.
Training in the FetchSlideball environment without changing any of the other parameters proved to be impossible, as figure \ref{slideball2} shows. Later, a simple reason was discovered: the goal is too far away, so it is physically impossible for the robotic arm to roll the ball to the goal.
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/fig_FetchSlideball-v2.pdf}
\caption{FetchSlideball with the same friction and time step as in FetchSlide}
\label{slideball2}
\end{figure}
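The reduced-friction variants discussed next were obtained by lowering the ball's friction coefficient in the environment definition. The snippet below is only a sketch of how such a change can be scripted; the file path and geom name are illustrative, and it assumes that the MuJoCo model lists the sliding, torsional and rolling friction explicitly in the \texttt{friction} attribute of the ball's \texttt{geom} element.
\begin{verbatim}
import xml.etree.ElementTree as ET

# Sketch: write a copy of the MuJoCo model with 50% of the original
# sliding friction for the ball. Path and geom name are placeholders.
tree = ET.parse("assets/fetch/slide_ball.xml")
for geom in tree.iter("geom"):
    if geom.get("name") == "object0" and geom.get("friction"):
        sliding, torsional, rolling = (float(v)
                                       for v in geom.get("friction").split())
        geom.set("friction", "%g %g %g" % (0.5 * sliding, torsional, rolling))
tree.write("assets/fetch/slide_ball_low_friction.xml")
\end{verbatim}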
Reducing the friction by 50\% made it barely possible to reach the goal. The goal could be reached, but the goal position is at the limit of the range that the ball could reach. Figure \ref{slideball3} shows how hard it is to learn to reach the goal. For the first 30 epochs, there was no success. Oddly, in epochs 30 to 34 the goal was reached, but afterwards there was no success again.
%some randomness ? but 4 episodes in a row ?
%TODO check this, maybe retry the experiment
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/fig_FetchSlideball-v4.pdf}
\caption{FetchSlideball with 50\% friction and more steps per episode}
\label{slideball3}
\end{figure}
Changing the ball's friction to only 10\% of the original friction showed interesting results. As can be seen in figure \ref{slideball4}, for the first 15 epochs there was no success, but then the success rate slowly rose. At epoch 47 the success rate spiked to almost double its previous value. The reason behind this shows how tricky the agent can be. Each training episode takes 1000 time steps. The episode is successful when the goal is reached; to be precise, in this environment, if the ball is within a close range (0.05 units of length) of the goal position in the last time step. The agent abuses this fact to solve the task differently than intended. The intended solution is to roll the ball with just enough force that it stops at the exact goal position and stays there, so that the success condition is fulfilled and the task is successfully solved. But the agent uses a different idea: it tries to hit the ball at exactly the right time, so that the ball is just at the goal position at time step 1000; the ball does not need to stop there. If the episode took more time steps, the ball would simply roll too far, but because the episode ends at 1000 time steps, the success condition is fulfilled and the episode is counted as solved. Even in the cases where the episode is not successful, the robotic arm slides the ball in the right direction; it just rolls too far. When the policy trained on the 10\% friction FetchSlideball environment is used to solve the task with 50\% friction, it gets quite close to the goal, because it learned to move the ball in the right direction.
%do 25% friction ?
\begin{figure} [h]
\centering
\includegraphics[width=1\textwidth]{figures/fig_FetchSlideball-v3.pdf}
\caption{FetchSlideball with 10\% friction}
\label{slideball4}
\end{figure}
%do 10% friction, more time steps.
\subsection{Discussion}
These experiments yielded two findings. First, using a ball instead of a cylinder improves the performance of the agent. This is attributed to the ball being a stable object. In this case, the ball showed a relative improvement of 33\% over the cylinder. It might be interesting to compare the ball to other objects.
\newline
Second, a bigger distance to the goal position increases the difficulty of the task greatly. For the FetchSlideball task with 10\% friction, only an 8\% success rate could be reached after 50 epochs, in comparison to the 60\% success rate of the FetchSlide task. And even for that 8\% success rate, the agent did not solve the task in the intended way.
\newline
This poses two questions. Does the difficulty rise solely because the goal distance increases, or does it also depend on the proportion between the goal distance and the range to which the robotic arm can roll the ball? More experiments with different friction values have to be done to answer this question.
\newline
Also, how can the agent be prevented from solving the task in an unintended way? To really solve the task in the intended way, the implementation of the task needs to be changed. The FetchSlideball task needs to be changed so that the ball has to lie on the goal position for some time before the task counts as successfully solved. This would prevent a ball that only touches the goal at the very end from being counted.
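Concretely, the final-step check could be replaced by a dwell-time condition, as in the sketch below: the episode only counts as solved if the ball stays within the goal threshold for a number of consecutive steps at the end of the episode. The function, threshold and hold time are illustrative assumptions in the spirit of the existing success check, not the condition actually used in these experiments.
\begin{verbatim}
import numpy as np

# Sketch of a stricter success condition: the ball must remain within
# `threshold` of the goal for the last `hold` steps of the episode.
def episode_success(ball_positions, goal, threshold=0.05, hold=10):
    near = [np.linalg.norm(p - goal) < threshold for p in ball_positions]
    # Count how many consecutive final steps were "near". A ball that
    # merely rolls through the goal in the last step no longer counts.
    streak = 0
    for flag in reversed(near):
        if not flag:
            break
        streak += 1
    return streak >= hold
\end{verbatim}
With a hold time of 10 steps and 1000-step episodes, timing the ball to cross the goal in the very last step would no longer pay off.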
\newline
The results have shown that vanilla HER cannot solve these tasks where the goal distance was increased to far outside the robotic arm's reach. To solve these tasks, further experiments with improved HER algorithms like Hindsight Goal Generation need to be done. As stated by Ren et al. \cite{hgg}, many hindsight experiences are not helpful to replay, and therefore the hindsight goals need to be selected better to improve the performance. Especially in the case of the FetchSlideball task with 50\% friction, the goal was located at the edge of the range to which the robotic arm can roll the ball. An approach where the agent is guided to roll the ball more often in the direction of the goal would improve the performance. Further research with improved HER approaches will be done in the future.
\section{FetchToss}
FetchToss is rather different from the other environments. In the future, FetchToss is planned to become an environment that resembles basketball: the agent should learn how to throw a ball into a basket. This environment has similarities to FetchPickAndPlace and FetchSlide, because the gripper has to be used to grab a ball and the goal is also outside of the robotic arm's range. To approach the task, we first change the object to a ball and see how picking up a ball compares to picking up a cube. Then a box is added and the agent has to learn how to toss the ball into it.
\subsection{Task Description}
The task for FetchToss is to fetch a ball that is placed on the table and toss it into a box that is not reachable by the robotic arm without tossing. The goal has to be out of reach to prevent the robot from simply picking the ball up and placing it inside. The goal position and size differ from the other tasks. For one, the goal is static this time: it is always the same box at the same position. Also, the goal is much bigger this time. The task is fulfilled when the ball is inside the box; it does not matter where in the box. The red sphere is just a visual marker; the actual goal is the whole box. The agent has to learn the following steps: pick up the ball as in FetchPickAndPlace, then move it towards the goal with enough force and open the gripper to toss the ball so that it hits the goal.
\subsection{Environment}
\begin{figure} [ht]
\centering
\includegraphics[width=1\textwidth]{figures/FetchToss-v1.pdf}
\caption{FetchToss}
\label{toss1}
\end{figure}
For this environment, FetchPickAndPlace was used as a base. The object was changed to a ball, and a box was created to serve as a basket. As usual, there is also a fetch robot and a red sphere marking the goal. As mentioned, the actual goal comprises the whole box, not only the position of the red sphere. Also, the goal is static.
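Because the goal is the whole box rather than a single point, the success check differs from the distance-based check of the other tasks. A sketch of such a check is given below; the box centre and half-extents are illustrative values, not the exact dimensions of the FetchToss model.
\begin{verbatim}
import numpy as np

# Sketch: success if the ball's centre lies inside the box volume.
def ball_in_box(ball_pos, box_centre,
                half_extents=np.array([0.1, 0.1, 0.05])):
    offset = np.abs(np.asarray(ball_pos) - np.asarray(box_centre))
    return bool(np.all(offset <= half_extents))
\end{verbatim}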
\subsection{Results}
\begin{figure} [!ht]
\centering
\subfigure[FetchPickAndPlace-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchPickAndPlace-v1.pdf}\label{fig:fig_fetchpickandplace-v1}}
\subfigure[FetchPickAndPlaceball-v1] {\includegraphics[width=0.49\textwidth]{figures/fig_FetchPickAndPlaceball-v1.pdf}\label{fig:fig_fetchpickandplaceball-v1}}
\caption{FetchPickAndPlace with a cube (left) and a ball (right)}
\label{pickballcube}
\end{figure}
As figure \ref{pickballcube} shows, picking up a ball instead of a cube seems to perform worse. Both show similar success rate curves, but they start to differ at about epoch 15: while FetchPickAndPlace still has a steep success rate curve at that point, FetchPickAndPlaceball already improves more slowly. Overall, FetchPickAndPlace with the ball shows slightly lower success rates. While it reaches about 90\% success rate after 50 epochs, the vanilla FetchPickAndPlace with the cube reaches about 95\%.
\begin{figure} [!ht]
\centering
\includegraphics[width=1\textwidth]{figures/fig_FetchToss-v1.pdf}
\caption{FetchToss}
\label{toss2}
\end{figure}
Figure \ref{toss2} summarizes the results for the other experiments that were run. The robotic arm does not learn how to toss the ball at all. The box was changed to one with an open front to make it easier to toss into, and the back was made higher to prevent the agent from throwing the ball over the box; this showed the same results. Another attempt was to lengthen the episodes from 1000 to 2000 time steps, since it might simply be impossible to solve the task in the given time, as tossing takes a while. The ball was also made 100 times lighter (from 2 to 0.02 units of weight), which did not change the result. A trajectory was planned manually to check whether the failures might be caused by the task itself being impossible. For the environment with the lighter ball, a trajectory that solves the task exists, as can be seen in figure \ref{toss3}. This means that vanilla HER cannot solve this task. Tossing a cube instead of the ball was also tried and proved to be unsuccessful as well.
\begin{figure} [!h]
\centering
\includegraphics[width=1\textwidth]{figures/FetchToss-v0.pdf}
\caption{FetchToss, with lighter ball and more time steps}
\label{toss3}
\end{figure}
\subsection{Discussion}
Picking up a cube seemed to be easier than picking up the ball. A reason might be their size and form. A ball with a radius of 0.02 units of length, and therefore a diameter of 0.04 units, is simply smaller than a cube with sides of 0.04 units. Even though both objects measure 0.04 units at their longest part, the ball is just smaller overall. Also, because of the ball's form, it has to be grabbed at the middle, while the cube can be grabbed at any side and always offers 0.04 units of length to grip. The cube is just easier to grab and harder to drop than a ball. Experiments could be done to figure out how big the ball has to be to show as much success as the cube.
\newline
Learning to toss the ball is quite difficult, as the results show. It was shown that the task is physically possible by finding a trajectory for the lighter ball, so it seems that the task of tossing a ball is too hard to learn with standard HER.
\newline
As explained by Ren et al. \cite{hgg}, HER has the flaw that it learns how to solve goals that are equal to states the agent already reached once, even though these goals might not be useful for learning to solve the actual goal.
In the FetchToss task, learning how to toss the ball into the box seems to require too many difficult intermediate steps, so vanilla HER fails. Improved HER algorithms might be able to solve the task. In particular, the energy-based hindsight experience prioritization approach by Zhao and Tresp \cite{energyher} might be useful here, because tossing requires a lot of energy, so prioritizing the replay of high-energy experiences would be especially fitting. Further research in this direction will be done in the future.
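For reference, the goal relabelling at the core of vanilla HER, which the discussion above refers to, can be sketched as follows. This is a simplified illustration of the ``future'' replay strategy; the function and variable names are chosen for this sketch and do not come from the implementation used in the experiments.
\begin{verbatim}
import random

def her_relabel(episode, compute_reward, k=4):
    """Create hindsight transitions from one episode ('future' strategy).

    episode: list of steps, each a dict with keys
             'obs', 'action', 'achieved_goal', 'next_achieved_goal'.
    compute_reward(achieved_goal, goal): goal-conditioned reward
             function of the environment.
    """
    relabelled = []
    for t, step in enumerate(episode):
        for _ in range(k):
            # Pretend a goal that was actually achieved later in the same
            # episode was the intended goal all along.
            future = random.randint(t, len(episode) - 1)
            new_goal = episode[future]["achieved_goal"]
            reward = compute_reward(step["next_achieved_goal"], new_goal)
            relabelled.append({**step, "goal": new_goal, "reward": reward})
    return relabelled
\end{verbatim}
Transitions relabelled in this way are stored in the replay buffer alongside the original ones; improved variants such as Hindsight Goal Generation or energy-based prioritization change which hindsight goals are selected or which experiences are replayed preferentially.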
\section{Related Work}
Chen et al. introduced the idea of slippage in the course of describing efforts to automatically detect different faults in a large set of failing test cases \cite{PLDI13}. Hughes et al. \cite{FindMoreBugs} proposed a modification of QuickCheck to avoid reproducing known bugs, which (in theory) could mitigate the problem of slippage, but it is not directly comparable to our approach. The approach of Hughes et al. requires interpretation of test components (e.g. method calls) and analysis of patterns, while our approaches are purely algorithmic, with no additional requirements beyond those of delta debugging itself \cite{DD}. It is not clear how best to apply such an approach to cases such as {\tt jsfunfuzz}, where each component is not a method call but essentially an arbitrary string, without significant user effort to define abstractions of components. There are also approaches that sidestep slippage by initially producing short test sequences (e.g. recent work by Mao et al. \cite{Mao}). However, for many generation algorithms longer sequences are essential for good fault detection \cite{ASE08,LongBetter}.
\chapter{Introduction}
To begin with, here is the original goal of this project:
\begin{quote}
It shall be a library to manipulate ext2 filesystems, maybe even a higher layer of abstraction corresponding to file system drivers in an OS (like VFS for Linux). The interesting part will be that the interface would be non-blocking using futures and promises (single-threaded) in the programming language Rust.
\end{quote}
I will briefly describe the problem and the motivation, then explain how and why I failed, discuss the structure of the resulting piece of software and some technical issues, and try to come up with a conclusion (and, indeed, a moral).
\chapter{LiveClips: Contextual Recommendation of Inspirational Clips from Live Streamed Videos} \label{chapter:liveclips} \begin{quote} Tutorials are helpful for accomplishing specific goals, but people do not always have a particular outcome in mind when using creative software -- sometimes users seek resources for general learning or inspiration. One such resource is live streamed videos, where artists share a window into their creative process. This chapter explores the benefits and challenges of using such videos for learning and inspiration. %Through content analysis of live stream archives, interviews with 8 streamers, and online surveys with 165 viewers, we study current practices and challenges in creative live stream communities and compare them with prior observations of live streaming in other domains. We observed four common types of creative live streams: teaching, making, socializing, and performing. We find that despite the wealth of expert knowledge in live stream archives, their length and volume makes them hard to search or browse. To address this challenge, we introduce \textit{LiveClips}: a system for automatically selecting inspirational segments from live streamed videos and recommending them to users in the context of their creative workflows. We present three methods for recommending and displaying video clips with varying levels of contextual support, and implement them in a popular creative application, Adobe Photoshop. We compare the accuracy of LiveClips' clip ranking to human ranking and present initial user feedback on the three prototypes. \end{quote} \input{liveclips/1_intro} \input{liveclips/2_relatedwork} \input{liveclips/3_livestreams} \input{liveclips/4_design} \input{liveclips/5_system} \input{liveclips/6_evaluations} \input{liveclips/7_discussion} \input{liveclips/8_conclusion}
\documentclass[a4paper]{article}
\def\npart{II}
\def\ntitle{Algebraic Geometry}
\def\nlecturer{I.\ Grojnowski}
\def\nterm{Lent}
\def\nyear{2019}

\input{header}

\renewcommand{\A}{\mathbb{A}}
\DeclareMathOperator{\ev}{ev}
\DeclareMathOperator{\Cl}{Cl}
\renewcommand*{\P}{\mathbb{P}}
\DeclareMathOperator{\Der}{Der} % derivation
\newcommand{\rational}{\dashrightarrow} % rational map
\DeclareMathOperator{\Div}{Div} % divisor

\begin{document}
\input{titlepage}
\tableofcontents

\setcounter{section}{-1}
\section{Introduction}

Algebraic geometry is the study of polynomial equations.

\begin{eg}
\(E = \{(x, y) \in \C^2: y^2 = x^3 - x\}\). Sketch this.

Consider \(p: E \to \C, (x, y) \mapsto x\). For each \(x \notin \{0, \pm 1\}\), there are 2 points in \(p^{-1}(x)\). So this is a double cover ramified at \(0, \pm 1\); the precise meaning of these phrases will be defined later.

How does this help us sketch? For \(x\) away from the three points, the preimage of a disk under \(p\) is two copies of the disk. If \(x\) is near \(0\), we have \(x^3 - x \approx -x\) so locally it looks like \(y^2 = -x\). If we project \((x, y)\) to \(x\) we get a disk winding around twice, but if we project to \(y\) we get a bijection.

Still, how do we visualise \(E\)? First let's sketch it over \(\R\). If \((x, y) \in \R^2\) then \(y^2 \geq 0\) so \(x(x^2 - 1) \geq 0\). Thus \(x \geq 1\) or \(-1 \leq x \leq 0\). Just like in high school, we can differentiate. (graph) The infinite bit should be visualised as a circle minus a point.

Now let \((x, y) \in \C^2\). Let
\[
\Gamma = \{(x, y) \in E: y \in \R, x \in [-1, 0] \cup [1, \infty)\} = p^{-1}\{[-1, 0] \cup [1, \infty)\}.
\]
Claim \(E \setminus \Gamma\) is disconnected and it consists of two pieces, each isomorphic via \(p\) to \(\C \setminus ([-1, 0] \cup [1, \infty))\). This is equivalent to the claim that if \(x \in \C \setminus ([-1, 0] \cup [1, \infty))\) then we can choose a square root of \(x^3 - x\), and then as you wander around, this remains a single-valued function. The proof is left as an exercise.

Granting this, we have two copies of \(\C \setminus ([-1, 0] \cup [1, \infty))\). Turn one of them around and glue (graph).

More surprisingly, solutions of equations have a topology!
\end{eg}

\section{The dictionary between algebra and geometry}
\subsection{Basic notions}

\begin{definition}[\(k\)-algebra]\index{\(k\)-algebra}
Let \(k\) be a field. A \emph{(commutative) \(k\)-algebra} is a unital commutative ring containing \(k\) as a subring.
\end{definition}

\begin{eg}
\(k[x_1, \dots, x_n]\), the polynomial ring in \(n\) variables.
\end{eg}

\begin{notation}
If \(k\) is a field, write \(\A^n = \A^n(k) = k^n\), the \emph{affine \(n\)-space}\index{affine space}.
\end{notation}

Observe that every \(f \in k[x_1, \dots, x_n]\) defines a function
\begin{align*}
\A^n(k) &\to \A^1(k) \\
(p_1, \dots, p_n) &\mapsto f(p_1, \dots, p_n) = \ev_p(f)
\end{align*}
This defines a map from \(k[x_1, \dots, x_n]\) to the space of all functions \(\A^n \to \A^1\). If \(k\) is finite then it is surjective but not injective, and if \(k\) is infinite then it is injective but not surjective.

More generally, if \(L \supseteq k\) is an algebraic extension then one can define a function \(\A^n(L) \to \A^1(L)\) by evaluating \(f\) at a point in \(L^n\). Therefore \(f\) defines a function \(\A^n(\cl k) \to \A^1(\cl k)\) where \(\cl k\) is the algebraic closure of \(k\). So now the map \(k[x_1, \dots, x_n] \to \{\A^n(\cl k) \to \A^1(\cl k)\}\) is injective for all \(k\) but never surjective.
The conclusion is that we should think of \(k[x_1, \dots, x_n]\) as very special functions \(\cl k^n \to \cl k\), namely ``polynomials with \(k\)-coefficients''. As a concrete example, let \(k = \F_q\). Then \(x^q - x\) defines a function \(\cl k \to \cl k\) that is \emph{not} zero.

\begin{definition}[algebraic set]
Let \(S \subseteq k[x_1, \dots, x_n]\). Define
\[
Z(S) = \{p \in \A^n: f(p) = 0 \text{ for all } f \in S\} \subseteq \A^n
\]
which is the set of simultaneous zeros of the equations in \(S\). Such a subset is known as an \emph{algebraic set}, or \emph{Zariski closed subset of \(\A^n\)}.
\end{definition}

\begin{eg}\leavevmode
\begin{enumerate}
\item \(\A^n = Z(0)\).
\item \(Z(x) = \{0\}\). Similarly \(Z(x - 7) = \{7\}\).
\item If \(f(x) = (x - \lambda_1) \dots (x - \lambda_n)\) then \(Z(f) = \{\lambda_1, \dots, \lambda_n\}\).
\item If \(k = \cl k\) then the algebraic subsets of \(\A^1\) are \(\emptyset\), \(\A^1\) and the finite sets of points of \(k\).
\item In \(\A^2\), \(Z(y^2 - x^3 + x) = E\), which we sketched in the introduction.
\item In \(\A^2\), \(Z(x, y) = \{(0, 0)\}\), \(Z(xy)\) is the union of the two axes, \(Z(y)\) is the \(x\)-axis and \(Z(y(y - 1), x(y - 1))\) is the union of a point and a line.
\end{enumerate}
\end{eg}

If \(J\) is the ideal generated by \(S\), i.e.
\[
J = \left\{\sum a_i f_i: a_i \in k[x_1, \dots, x_n], f_i \in S\right\}
\]
then \(Z(J) = Z(S)\). Recall from IB Groups, Rings and Modules

\begin{theorem}[Hilbert basis theorem]
If \(k\) is Noetherian then so is \(k[x]\).
\end{theorem}

So every ideal in \(k[x_1, \dots, x_n]\) is finitely generated. Therefore there exist \(f_1, \dots, f_r \in k[x_1, \dots, x_n]\) such that
\[
Z(S) = Z(f_1, \dots, f_r).
\]
Thus algebraic sets are solutions of finitely many polynomial equations.

\begin{lemma}\leavevmode
\begin{enumerate}
\item If \(I \subseteq J\) then \(Z(J) \subseteq Z(I)\).
\item \(Z(0) = \A^n\) and \(Z(k[x_1, \dots, x_n]) = \emptyset\).
\item \(Z(\bigcup J_i) = Z(\sum J_i) = \bigcap Z(J_i)\) for any (possibly infinite) family of ideals \(\{J_i\}\).
\item \(Z(I \cap J) = Z(I) \cup Z(J)\) for ideals \(I, J\).
\end{enumerate}
\end{lemma}

\begin{proof}
1, 2, 3 are clear. For 4, \(\supseteq\) follows from 1. For \(\subseteq\), if \(x \notin Z(I)\) then there exists \(f_1 \in I\) with \(f_1(x) \neq 0\), and if \(x \notin Z(J)\) then there exists \(f_2 \in J\) with \(f_2(x) \neq 0\). Thus \(f_1f_2(x) = f_1(x)f_2(x) \neq 0\) so \(x \notin Z(f_1f_2)\). But \(f_1f_2 \in I \cap J\) as \(I\) and \(J\) are ideals, so \(x \notin Z(I \cap J)\).
\end{proof}

We can also define a map that goes in the other direction. If \(Z \subseteq \A^n(\cl k)\) is a subset, define
\[
I(Z) = \{f \in k[x_1, \dots, x_n]: f(p) = 0 \text{ for all } p \in Z\}.
\]
If \(f \in I(Z)\) and \(g \in k[x_1, \dots, x_n]\), then \(fg(p) = f(p)g(p) = 0\) for \(p \in Z\), so \(I(Z)\) is an ideal.

\begin{lemma}\leavevmode
\begin{enumerate}
\item If \(Z \subseteq Z'\) then \(I(Z') \subseteq I(Z)\).
\item For any \(Y \subseteq \A^n\), \(Y \subseteq Z(I(Y))\).
\item If \(V = Z(J)\) is an algebraic subset then \(V = Z(I(V))\).
\item If \(J \subseteq k[x_1, \dots, x_n]\) is an ideal then \(J \subseteq I(Z(J))\).
\end{enumerate}
\end{lemma}

\begin{proof}
1, 2 and 4 are immediate. For 3, \(\supseteq\) follows since \(I(V) = I(Z(J)) \supseteq J\) by 4, so \(Z(I(V)) \subseteq Z(J) = V\) by 1; \(\subseteq\) follows from 2.
\end{proof}

The first lemma says that algebraic subsets of \(\A^n\) form the closed sets of a topology on \(\A^n\). This is called the \emph{Zariski topology}\index{Zariski topology}.
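To see the correspondence between \(Z\) and \(I\) in action, here is a small standard worked example.

\begin{eg}
Let \(k\) be infinite and \(Y = \{(t, t^2): t \in k\} \subseteq \A^2\), the parabola. Then \(I(Y) = (y - x^2)\): clearly \(y - x^2 \in I(Y)\), and conversely any \(f \in I(Y)\) can be written as \(f = q(x, y)(y - x^2) + r(x)\) by dividing by \(y - x^2\) as a monic polynomial in \(y\); evaluating at \((t, t^2)\) gives \(r(t) = 0\) for all \(t \in k\), so \(r = 0\) and \(f \in (y - x^2)\). Hence \(Z(I(Y)) = Z(y - x^2) = Y\), i.e.\ \(Y\) is already Zariski closed, as the lemmas above predict.
\end{eg}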
\begin{eg}
If \(X = \A^1(k)\) where \(k = \cl k\), the closed subsets are the finite subsets of points of \(\A^1\), together with \(\A^1\) itself. Note that if \(k = \C\) and \(Z \subseteq \A^n(k)\) is Zariski closed, then it is closed in the usual sense.
\end{eg}

The second lemma says that \(Z(I(Y))\) is the smallest algebraic subset of \(\A^n\) containing \(Y\), i.e.\ the closure of \(Y\) in the Zariski topology.

\begin{eg}
If \(k = \C\) and \(\Z \subseteq \C\) then \(Z(I(\Z)) = \C\), as a polynomial that vanishes on \(\Z\) must be zero.
\end{eg}

We have a correspondence
\[
\begin{tikzcd}
\{\text{algebraic subsets of } \A^n\} \ar[r, "I", shift left] & \{\text{ideals in } k[x_1, \dots, x_n]\} \ar[l, "Z", shift left]
\end{tikzcd}
\]
Note that this is not quite a bijection. For example in \(k[x]\),
\[
Z(x) = Z(x^2) = Z(x^3) = \cdots
\]
and more generally
\[
Z(f_1^{q_1} \cdots f_r^{q_r}) = Z(f_1 \cdots f_r)
\]
where \(q_i > 0\). We will fix this shortly.

\subsection{Hilbert's Nullstellensatz}

Let \(Y \subseteq \A^n\) be an algebraic subset, so \(Y = Z(I(Y))\). Recall that we have a map \(k[x_1, \dots, x_n] \to \{\cl k^n \to \cl k\}\). Hence by restriction we have a map \(k[x_1, \dots, x_n] \to \{Y \to \cl k\}\) as \(Y \subseteq \cl k^n\). By definition \(I(Y) \mapsto 0\). This motivates us to make the following definition:

\begin{definition}
Let \(Y \subseteq \A^n\) be an algebraic set. Then
\[
k[Y] = k[x_1, \dots, x_n]/I(Y).
\]
\end{definition}

We've just seen \(k[Y] \embed \{Y \to \cl k\}\), so \(k[Y]\) is a special class of functions on \(Y\), namely ``polynomial functions on \(Y\) with \(k\)-coefficients''.

\begin{eg}\leavevmode
\begin{itemize}
\item \(k[\A^n] = k[x_1, \dots, x_n]\).
\item \(k[E] = k[x, y]/(y^2 - x^3 + x)\).
\end{itemize}
\end{eg}

Clearly \(k[Y]\) is a \(k\)-algebra. Our aim is to recover \(Y\) completely from this \(k\)-algebra. Observe that if \(p \in Y \subseteq \A^n(k)\) then the map
\begin{align*}
k[Y] &\to k \\
f &\mapsto f(p)
\end{align*}
is an algebra homomorphism. It is surjective and its kernel, denoted
\[
\mathfrak m_p = \{f \in k[Y]: f(p) = 0\},
\]
is a maximal ideal, as \(k[Y]/\mathfrak m_p\) is a field. So
\[
\{\text{points in } Y\} \embed \{\text{algebra homomorphisms } k[Y] \to \cl k\} \embed \{\text{maximal ideals } \mathfrak m \subseteq k[Y]\}.
\]
It is remarkable that if \(k = \cl k\) then all of these coincide (it is particularly so for the first inclusion, as it gives a translation between geometry and algebra; by contrast, the second inclusion is more or less a corollary of a standard result in algebra).

What are the maximal ideals of \(k[x_1, \dots, x_n]\)? We've observed that if \(p \in k^n\) then \(\{f \in k[x_1, \dots, x_n]: f(p) = 0\}\) is a maximal ideal. Not all maximal ideals are of the form \(\mathfrak m_p\), however. For example if \(k = \R\) then \((x^2 + 1) \subseteq \R[x]\) is a maximal ideal as \(\R[x]/(x^2 + 1) \cong \C\). Nevertheless, notice that \(\R \subseteq \C\) and this is an algebraic extension of \(\R\).

\begin{theorem}[Nullstellensatz]\index{Nullstellensatz}\leavevmode
\label{thm:Nullstellensatz}
If \(\mathfrak m \subseteq k[x_1, \dots, x_n]\) is a maximal ideal then \(k[x_1, \dots, x_n]/\mathfrak m = L\) is an algebraic field extension of \(k\), and finite-dimensional over \(k\).
\end{theorem}

Note that in this setting \(L\) is finite-dimensional over \(k\) if and only if every \(\alpha \in L\) is algebraic over \(k\).
For the nontrivial direction, the images of \(x_1, \dots, x_n\) in \(L\) generate \(L\) and each satisfies a polynomial equation of degree \(d_i\), so \(\dim_k L \leq d_1 \cdots d_n\).

\begin{corollary}
If \(k = \cl k\) then the field embedding \(k \to L\) is an isomorphism, that is, every maximal ideal of \(k[x_1, \dots, x_n]\) is of the form
\[
\mathfrak m_p = (x_1 - p_1, \dots, x_n - p_n)
\]
for some \(p \in k^n\).
\end{corollary}

\begin{proof}
\(L \supseteq k\) is an algebraic extension of fields so \(L = k\) as \(k = \cl k\), and \(p_i\) is the image of \(x_i\) under the map \(k[x_1, \dots, x_n] \to k[x_1, \dots, x_n]/\mathfrak m_p = L\).
\end{proof}

\begin{corollary}
Suppose \(k = \cl k\). If \(Y \subseteq \A^n\) is an algebraic set then we have bijections
\[
\begin{array}{ccccc}
\{\text{points in } Y\} & \to & \{\text{algebra homomorphisms } k[Y] \to k\} & \to & \{\text{maximal ideals of } k[Y]\} \\ \\
p & \mapsto & \ev_p: f \mapsto f(p) \\
& & \varphi & \mapsto & \ker \varphi \\ \\
(\varphi(x_1), \dots, \varphi(x_n)) & \mapsfrom & \varphi \\
& & k[Y] \to k = k[Y]/\mathfrak m & \mapsfrom & \mathfrak m
\end{array}
\]
\end{corollary}

\begin{proof}
When \(Y = \A^n\) this is the Nullstellensatz. In general, an algebra homomorphism \(\varphi: k[x_1, \dots, x_n]/I \to k\) is the same thing as an algebra homomorphism \(k[x_1, \dots, x_n] \to k\) with \(I\) in its kernel.
\end{proof}

We will give a better proof later when we are more adept at playing with polynomial equations, but for now we'll prove a special case.

\begin{proof}[Proof of \nameref{thm:Nullstellensatz} when \(k\) is uncountable]
Suppose \(L\) is not algebraic. Then there exists \(t \in L\) which is not algebraic over \(k\), so \(k(t) \subseteq L\). But observe the following:
\begin{enumerate}
\item \(L\) has countable dimension as a vector space over \(k\).
\item The set
\[
\left\{ \frac{1}{t - \lambda}: \lambda \in k\right\} \subseteq L
\]
is linearly independent: suppose not, then there exist distinct \(\lambda_1, \dots, \lambda_r \in k\) and \(a_1, \dots, a_r \in k\), not all zero, with
\[
\sum_{i = 1}^r \frac{a_i}{t - \lambda_i} = 0.
\]
Clearing the denominators, we get an algebraic identity that \(t\) satisfies, contradicting \(t\) transcendental.
\end{enumerate}
This implies that \(\{\lambda: \lambda \in k\}\) is countable. Absurd.
\end{proof}

% aside on logic and model theory

\begin{corollary}[Nullstellensatz]
Let \(k = \cl k\), \(I \subseteq k[x_1, \dots, x_n]\) an ideal. Then \(Z(I) \neq \emptyset\) if \(I \neq k[x_1, \dots, x_n]\). More generally, let \(k = \cl k\); then \(I \subseteq k[Y]\) has \(Z(I) \neq \emptyset\) if \(I \neq k[Y]\).
\end{corollary}

\begin{proof}
If \(I \neq k[x_1, \dots, x_n]\) then \(I \subseteq \mathfrak m\) for some maximal ideal \(\mathfrak m\). But the Nullstellensatz says that \(Z(\mathfrak m) = \{p\}\) for some \(p \in k^n\), as \(\mathfrak m = \mathfrak m_p\) for some \(p\). Thus
\[
Z(I) \supseteq Z(\mathfrak m) = \{p\} \neq \emptyset.
\]
\end{proof}

This motivates us to give an abstract, algebraic definition of algebraic set.

\begin{definition}[radical]\index{radical}
Let \(R\) be a ring and \(J \subseteq R\) an ideal. The \emph{radical} of \(J\) is
\[
\sqrt J = \{f \in R: f^n \in J \text{ for some } n \geq 1\}.
\]
\end{definition}

\begin{lemma}
Given an ideal \(J \subseteq R\), \(\sqrt J\) is an ideal.
\end{lemma}

\begin{proof}
If \(f, g \in \sqrt J\) then \(f^n \in J, g^m \in J\) for some \(n, m\). Then
\[
(f + g)^{n + m} = \sum_{i = 0}^{n + m} \binom{n + m}{i} f^i g^{n + m - i} \in J
\]
so \(f + g \in \sqrt J\). If \(r \in R, f \in \sqrt J\) then \(rf \in \sqrt J\).
\end{proof}

\begin{eg}\leavevmode
\begin{enumerate}
\item \(\sqrt{(x^n)} = (x)\) in \(k[x]\).
\item If \(J\) is a prime ideal then \(\sqrt J = J\).
\item If \(f \in k[x_1, \dots, x_n]\) is irreducible then \((f)\) is a prime ideal, since \(k[x_1, \dots, x_n]\) is a UFD; hence \(\sqrt{(f)} = (f)\).
\end{enumerate}
\end{eg}

Note that \(Z(J) = Z(\sqrt J)\).

\begin{theorem}[Nullstellensatz]
If \(k = \cl k\) and \(J \subseteq k[x_1, \dots, x_n]\) then \(I(Z(J)) = \sqrt J\).
\end{theorem}

\begin{proof}
Let \(f \in I(Z(J))\), so \(f(p) = 0\) for all \(p \in Z(J)\). We must show that \(f^n \in J\) for some \(n > 0\).

Consider \(k[x_1, \dots, x_n, t]/(tf - 1) = k[x_1, \dots, x_n, \frac{1}{f}]\). Let \(I\) be the ideal in this ring generated by the image of \(J\). Claim that \(Z(I) = \emptyset\): if not, let \(p \in Z(I)\). As \(J \subseteq I\), \(p \in Z(J)\) so \(f(p) = 0\). But \(p = (p_1, \dots, p_n, p_t)\) with \(p_t f(p_1, \dots, p_n) = 1\), i.e.\ \(f(p) \neq 0\). Absurd.

Then the corollary to the Nullstellensatz implies that \(I = k[x_1, \dots, x_n, \frac{1}{f}]\) (we used the fact that \(k = \cl k\)). As \(1 \in I = (J)\),
\[
\sum_{i = 1}^N \frac{\gamma_i}{f^i} = 1
\]
for some \(\gamma_i \in J\) and some \(N \geq 1\). Multiplying by \(f^N\), we get
\[
f^N = \sum_{i = 1}^N \gamma_i f^{N -i} \in J.
\]
\end{proof}

\begin{remark}
Let's try to deconstruct this mysterious proof. What are the points of \(k[x_1, \dots, x_n, t]/(tf - 1) = k[Y]\)? Here
\[
Y = \{(p_1, \dots, p_n, p_t) \in \A^{n + 1}: p_t f(p_1, \dots, p_n) = 1\}
\]
which is isomorphic as a set to
\[
\{(p_1, \dots, p_n) \in \A^n: f(p) \neq 0\} = \A^n \setminus Z(f).
\]
So \(Y\) is a Zariski closed subset of \(\A^{n + 1}\) which is isomorphic as a set to \(\A^n \setminus Z(f)\), and our proof was asking what
\[
Z(f) \cap (\A^n \setminus Z(f)) = \emptyset
\]
means in terms of the ideal \(J\).
\end{remark}

\begin{corollary}
Suppose \(k = \cl k\) and \(I, J \subseteq k[x_1, \dots, x_n]\). Then \(Z(I) = Z(J)\) if and only if \(I(Z(I)) = I(Z(J))\), if and only if \(\sqrt I = \sqrt J\). That is, we have a bijection
\[
\begin{tikzcd}
\{\text{Zariski closed subsets of } \A^n\} \ar[r, "I", shift left] & \{\text{radical ideals } I \subseteq k[x_1, \dots, x_n]\} \ar[l, "Z", shift left] \\
p & \mathfrak m_p
\end{tikzcd}
\]
\end{corollary}

This is a hint that we may have an intrinsic characterisation of the rings \(k[Y]\). We'll do this shortly.

\begin{definition}[(ir)reducible, disconnected]\index{reducible}\index{irreducible}\index{disconnected}
An algebraic subset \(Y\) is \emph{reducible} if there exist algebraic subsets \(Y_1, Y_2 \neq Y\) such that \(Y = Y_1 \cup Y_2\). It is \emph{irreducible} if it is not reducible. It is \emph{disconnected} if moreover \(Y_1 \cap Y_2 = \emptyset\).
\end{definition}

\begin{eg}\leavevmode
\begin{enumerate}
\item \(Z(xy) = Z(x) \cup Z(y)\) is reducible.
\item \(Z(y(y - 1), x(y - 1)) = Z(x, y) \cup Z(y - 1)\) is reducible and disconnected.
\end{enumerate}
\end{eg}

In other words, \(Y\) is reducible/disconnected in the Zariski topology. In the usual topology, such as the usual one on \(\R\), almost every set is reducible. However, in the Zariski topology there are so few closed sets that this is actually a useful definition. In fact, they have a very nice algebraic characterisation:

\begin{lemma}
\(Y\) is irreducible if and only if \(I(Y)\) is a prime ideal in \(k[x_1, \dots, x_n]\).
\end{lemma}

\begin{proof}
If \(Y = Y_1 \cup Y_2\) is reducible then there exists \(p \in Y_1 \setminus Y_2\), so there exists \(f \in I(Y_2)\) with \(f(p) \neq 0\).
Similarly there exists \(q \in Y_2 \setminus Y_1\), so there exists \(g \in I(Y_1)\) with \(g(q) \neq 0\). So
\[
fg \in I(Y_1) \cap I(Y_2) = I(Y).
\]
But \(f, g \notin I(Y)\), so \(I(Y)\) is not prime.

Conversely, if \(I(Y)\) is not prime then there exist \(f_1, f_2 \in k[x_1, \dots, x_n]\) with \(f_1, f_2 \notin I(Y)\) but \(f_1f_2 \in I(Y)\). Set \(Y_i = Y \cap Z(f_i)\). Then \(Y_1 \cup Y_2 = Y\), as for \(p \in Y\), \(f_1f_2(p) = 0\) so \(f_1(p) = 0\) or \(f_2(p) = 0\); and \(Y_i \neq Y\) as \(f_i \notin I(Y)\).
\end{proof}

\begin{eg}
\(I = (x_1, \dots, x_m) \subseteq k[x_1, \dots, x_n]\) is prime, as
\[
k[x_1, \dots, x_n]/(x_1, \dots, x_m) = k[x_{m + 1}, \dots, x_n]
\]
is an integral domain.
\end{eg}

\begin{ex}
Recall that if \(R\) is a UFD and \(f \in R\) is nonzero and irreducible, then \((f)\) is a prime ideal. Furthermore, as \(k[x_1, \dots, x_n]\) is a UFD, it is an exercise to check that \(Z(y - x^2)\) and \(Z(y^2 - x^3 + x)\) are irreducible.
\end{ex}

The Zariski topology is very different from the usual topology: if \(X\) is an irreducible Zariski closed subset and \(U \subseteq X\) is a nonempty Zariski open subset of \(X\), then \(\cl U = X\), i.e.\ nonempty Zariski open subsets are dense.

\begin{proof}
Let \(Y = X \setminus U\), which is closed. Then \(\cl U \cup Y = X\) and \(U \neq \emptyset\) so \(Y \neq X\). But \(X\) is irreducible, so \(\cl U = X\).
\end{proof}

\begin{application}[Cayley-Hamilton]
Let \(A \in \text{Mat}_n(k)\), an \(n \times n\) matrix. Define its characteristic polynomial to be
\[
\chi_A(x) = \det(xI - A) \in k[x]
\]
This defines a map
\begin{align*}
\text{Mat}_n(k) &\to \text{Mat}_n(k) \\
B &\mapsto \chi_A(B)
\end{align*}
Then for all \(A\), \(\chi_A(A) = 0\).

\begin{proof}
Strategy:
\begin{enumerate}
\item The set of matrices for which Cayley-Hamilton holds is a Zariski closed subset of \(\A^{n^2}\).
\item It holds for matrices with distinct eigenvalues (as these are diagonalisable), and such matrices form a nonempty Zariski open subset of \(\A^{n^2}\).
\item Hence, as \(\A^{n^2}\) is an irreducible algebraic set, it holds for all matrices.
\end{enumerate}

Let \(X = \text{Mat}_n(k) = k^{n^2} = \A^{n^2}\) be the space of matrix coefficients. It is an affine space, so an irreducible closed set. Consider
\[
C = \{A \in \text{Mat}_n(k): \chi_A(A) = 0\}.
\]
Claim that this is a Zariski closed subset, cut out by \(n^2\) equations of the form \(\chi_A(A)_{ij} = 0\). We must check these equations are polynomial equations in the matrix coefficients of \(A\). Note that
\[
\chi_A(x) \in k[X \times \A^1] = k[\A^{n^2 + 1}],
\]
i.e.\ \(\det (xI - A)\) is a polynomial in \(n^2 + 1\) variables: the matrix coefficients of \(A\) and \(x\). Now substitute \(x = A\). Note that the matrix coefficients of \(A^r\), \((A^r)_{ij}\), are polynomials in the matrix coefficients of \(A\) (of degree \(r\)). Hence the \(\chi_A(A)_{ij}\) are polynomials in the coefficients of \(A\).

As \(\text{Mat}_n(k) \subseteq \text{Mat}_n(\cl k)\), it suffices to prove the case \(k = \cl k\). Note that
\begin{align*}
\chi_A(x) &= \chi_{gAg^{-1}}(x) \\
\chi_A(gBg^{-1}) &= g \chi_A(B) g^{-1}
\end{align*}
for all \(g \in \GL_n(k)\). So \(\chi_A(A) = 0\) if and only if \(\chi_{gAg^{-1}} (gAg^{-1}) = 0\), i.e.\ \(A\) satisfies its own characteristic polynomial if and only if \(gAg^{-1}\) does, for all \(g \in \GL_n(k)\).

Now let \(U\) be the set of all matrices with distinct eigenvalues.
As \(k = \cl k\), \(A \in U\) implies that there exists \(g \in \GL_n(k)\) such that \(gAg^{-1}\) is
\[
\begin{pmatrix}
\lambda_1 \\
& \lambda_2 \\
& & \ddots \\
& & & \lambda_n
\end{pmatrix}
\]
which clearly satisfies its own characteristic polynomial. Moreover \(U \neq \emptyset\), since we can always find distinct elements \(\lambda_1, \dots, \lambda_n\) of \(k\) as \(k = \cl k\).

It is left to show that \(U\) is Zariski open. \(A \in U\) if and only if \(\chi_A(x) \in k[x]\) has distinct roots. But a polynomial \(f\) has distinct roots if and only if \(f\) and \(f'\) have no common root, if and only if \(\Delta(f) \neq 0\), where the discriminant \(\Delta(f)\) is a polynomial in the coefficients of \(f\). Hence \(A \in U\) if and only if \(\Delta(\chi_A(x)) \neq 0\), so \(U\) is Zariski open.
\end{proof}
\end{application}

Now back to the abstract characterisation of algebraic varieties. We need some preliminary definitions:

\begin{definition}[nilpotent]\index{nilpotent}
Let \(R\) be a ring. \(y \in R\) is \emph{nilpotent} if there exists \(n > 0\) such that \(y^n = 0\).
\end{definition}

\begin{eg}\leavevmode
\begin{enumerate}
\item If \(R = k[x]\) then \(0\) is the only nilpotent.
\item If \(R = k[x]/(x^7)\) then \(x\) is nilpotent as \(x^7 = 0\).
\end{enumerate}
\end{eg}

\begin{ex}
Let \(J \subseteq k[x_1, \dots, x_n]\) be an ideal and \(R = k[x_1, \dots, x_n]/J\). Then \(J = \sqrt J\) if and only if \(R\) has no nonzero nilpotents.
\end{ex}

\begin{corollary}
Let \(k = \cl k\). If \(Y \subseteq \A^n\) is a Zariski closed subset then \(k[Y]\) is a finitely generated \(k\)-algebra with no nonzero nilpotents.
\end{corollary}

Conversely, given a finitely generated reduced \(k\)-algebra \(A\), there exists a surjection \(k[t_1, \dots, t_n] \to A\). As \(A\) is reduced, the kernel is radical. This is precisely the definition of a coordinate ring. What do we gain from this? We need not choose a generating set of the \(k\)-algebra, which is the same as an embedding \(Y \embed \A^n\). In this abstract formulation, the ``points'' in the affine space correspond to maximal ideals of the \(k\)-algebra.

\begin{definition}[affine algebraic variety]\index{affine algebraic variety}\index{point}
An \emph{affine algebraic variety} over \(k\), where \(k\) is a field, is a finitely generated \(k\)-algebra \(R\) with no nonzero nilpotent elements. If \(k = \cl k\), define a \emph{point} of \(R\) to be a \(k\)-algebra homomorphism \(R \to k\). More generally, if \(L \supseteq k\) is a field extension then an \emph{\(L\)-point of \(R\)} is a \(k\)-algebra homomorphism \(R \to L\).
\end{definition}

\begin{eg}
Let \(J = \sqrt J \subseteq k[x_1, \dots, x_n]\) be a radical ideal; then \(R = k[x_1, \dots, x_n]/J\) is an affine algebraic variety. Conversely, if \(R\) is such an algebra, choose generators \(\overline x_1, \dots, \overline x_n\) of \(R\) as a \(k\)-algebra, so we get a surjective map \(k[x_1, \dots, x_n] \to R\) where \(x_i \mapsto \overline x_i\). Let \(J\) be the kernel; then \(J = \sqrt J\) by the exercise above. By the Nullstellensatz, the set of points of \(R\) is \(Z(J) \subseteq k^n\), via
\begin{align*}
Z(J) &\to \{R \to k\} \\
p = (p_1, \dots, p_n) &\mapsto (\ev_p: \overline x_i \mapsto p_i)
\end{align*}
In general, a choice of generators \(\overline x_1, \dots, \overline x_n\) of \(R\) is the \emph{choice} of an embedding of the points of \(R\) into \(\A^n\).
\end{eg}

\begin{eg}
\(\R[x]/(x^2 + 1)\) has no \(\R\)-point, but it has two \(\C\)-points, given by \(x \mapsto \pm i\).
\end{eg}

We indulge in imprecision and often write ``let \(Y\) be an affine algebraic variety and \(R = k[Y]\) be its ring of functions''. What we really mean, when spelt out, is: let \(R\) be an affine algebraic variety with \(\cl k\)-points \(Y\).

\begin{definition}[morphism]\index{morphism}\index{isomorphism}
A \emph{morphism} \(\gamma: X \to Y\) of affine algebraic varieties is a \(k\)-algebra homomorphism \(\gamma^*: k[Y] \to k[X]\). An \emph{isomorphism} \(\alpha: X \to Y\) is a morphism such that there exists an inverse morphism \(\beta: Y \to X\) with \(\alpha\beta = 1_Y\) and \(\beta\alpha = 1_X\).
\end{definition}

Let's unpack the definition. Suppose \(X\) and \(Y\) are the points of \(R\) and \(S\) respectively. If \(\gamma^*: S \to R\) is a \(k\)-algebra homomorphism and \(p \in X\) is a point of \(X\), that is, if \(\ev_p: R \to \cl k\) is a \(k\)-algebra homomorphism, then \(\ev_p \compose \gamma^*: S \to \cl k\) is a \(k\)-algebra homomorphism, so a point in \(Y\). Thus \(\gamma^*\) defines a map \(X \to Y\), which we denote by \(\gamma\). So this definition is a clever way of saying that the map \(\gamma\) is defined by polynomial equations.

\begin{eg}\leavevmode
\begin{enumerate}
\item Let \(X = \A^1\) and \(Y = \{(x, y) \in \A^2: x^2 = y^3\} = Z(x^2 - y^3)\), so \(R = k[t]\). Claim \(t \mapsto (t^3, t^2)\) is a morphism \(X \to Y\). Unpacking the definition, we have \(k[Y] = k[x, y]/(x^2 - y^3)\) and a \(k\)-algebra homomorphism
\begin{align*}
\gamma^*: k[x, y]/(x^2 - y^3) &\to k[t] \\
x &\mapsto t^3 \\
y &\mapsto t^2
\end{align*}
Check that \(x^2 - y^3 \mapsto 0\), so it is well-defined.
\end{enumerate}
\end{eg}

Unravelling the definition of a morphism in general: let \(k[X] = k[x_1, \dots, x_n]/(s_1, \dots, s_\ell)\) and \(k[Y] = k[y_1, \dots, y_m]/(r_1, \dots, r_k)\) (remember a choice of generators \(x_1, \dots, x_n\) is a choice of embedding \(X \embed \A^n\)). Let \(\overline y_1, \dots, \overline y_m\) denote the images of \(y_1, \dots, y_m\) in \(k[Y]\). An algebra homomorphism \(\gamma^*: k[Y] \to k[X]\) is uniquely determined by where \(\overline y_1, \dots, \overline y_m\) go, i.e.\ by
\[
\overline \Phi_i = \gamma^*(\overline y_i) \in k[X].
\]
Choose a polynomial \(\Phi_i \in k[x_1, \dots, x_n]\) whose reduction is \(\overline \Phi_i\). Such a choice determines an algebra homomorphism
\begin{align*}
k[y_1, \dots, y_m] &\to k[x_1, \dots, x_n] \\
y_i &\mapsto \Phi_i
\end{align*}
i.e.\ a morphism \(\A^n \to \A^m\), and the condition on the polynomials \(\Phi_i\) ensuring that the image lies in \(Y\) is that the ideal \((r_1, \dots, r_k)\) is sent to \(0\) in \(k[X]\), i.e.\ \(r_i(\Phi_1, \dots, \Phi_m) \in (s_1, \dots, s_\ell)\), i.e.\ \(r_i(\Phi_1, \dots, \Phi_m) = 0\) in \(k[X]\).

\begin{question}
Is the morphism in the above example an isomorphism?
\end{question}

\begin{eg}\leavevmode
\begin{enumerate}
\item A morphism \(\A^1 \to \A^n\) is a \(k\)-algebra homomorphism \(k[x_1, \dots, x_n] \to k[t]\), which is the same as an \(n\)-tuple of polynomials \((\Phi_1(t), \dots, \Phi_n(t))\).
\item A morphism \(X \to \A^1\) is a \(k\)-algebra homomorphism \(k[t] \to k[X]\), which is an element of \(k[X]\) (i.e.\ where \(t\) is sent to). This says that \(k[X]\) is precisely the functions \(X \to \A^1\), which is something we knew before!
\item Suppose \(\ch k \neq 2\). Is there a morphism \(\A^1 \to E = \{(x, y): y^2 = x^3 - x\}\)? Suppose \(k = \C\); this is asking if there is a polynomial map from the punctured sphere to the punctured torus.
From an analytic point of view this is impossible (there is not even an analytic function that does this). Algebraically, this is asking if there exist polynomials \(a(t), b(t) \in k[t]\) such that \(b^2 = a^3 - a\). See example sheet 1.
\item Let \(X\) be an affine algebraic variety and let \(f \in k[X]\). Consider
\[
k[X] \to k[X][t]/(tf - 1) = k[Y]
\]
which defines a morphism \(Y \to X\). What is \(Y\) and what is the morphism? By definition a point of \(Y\) is a \(k\)-algebra homomorphism \(\gamma: k[X][t]/(tf - 1) \to k\). Suppose \(\gamma(t) = a\); then \(\gamma|_{k[X]} = \ev_p\) where \(p \in X\) is such that \(a f(p) = 1\), i.e.\ \(f(p) = \frac{1}{a} \neq 0\). Conversely, if \(f(p) \neq 0\), setting \(a = \frac{1}{f(p)}\) we get a \(k\)-algebra homomorphism. So
\[
Y = \{x \in X: f(x) \neq 0\} = X \setminus Z(f)
\]
which is Zariski open, and \(\gamma: Y \embed X\) is the inclusion. In general, Zariski open sets of the form \(X \setminus Z(f)\) are affine varieties in their own right, and the inclusion map is a morphism of affine varieties.
\end{enumerate}
\end{eg}

By the same argument, the complement of the subvariety cut out by a single polynomial is an affine variety. We give these subvarieties a name:

\begin{definition}[hypersurface]\index{hypersurface}
If \(f \in k[x_1, \dots, x_n]\) then \(Z(f) \subseteq \A^n\) is called a \emph{hypersurface}.
\end{definition}

We may ask: is every Zariski open set also an affine variety, i.e.\ the image of an affine variety inside some bigger affine space under an injection? No! \(\{(x, y) \in \A^2: (x, y) \neq (0, 0)\}\) is not an affine variety.

\section{Smooth points, dimension \& Noether normalisation}

Let \(X \subseteq \A^n\) be an affine variety and \(p \in X\). Let \(X = Z(I)\), \(I = (f_1, \dots, f_r)\). Tentatively we define
\begin{align*}
T_pX &= \{v \in \A^n: \sum v_i \frac{\partial f}{\partial x_i}(p) = 0 \text{ for all } f \in I\} \\
&= \{v \in \A^n: \sum v_i \frac{\partial f_j}{\partial x_i}(p) = 0, j = 1, \dots, r\}
\end{align*}
Translate \(T_pX\) from the origin to \(p \in \A^n\), so the equations are
\[
\{v \in \A^n: \sum (v_i - p_i) \frac{\partial f}{\partial x_i}(p) = 0 \text{ for all } f \in I\}.
\]
This is the best linear approximation to \(X\) at the point \(p\), as
\[
f(x) = f(p) + \sum (x_i - p_i) \frac{\partial f}{\partial x_i}(p) + \dots
\]
If \(X\) is complex analytic then this is indeed the analytic definition of the tangent space. However, it does not always behave as one might expect:

\begin{eg}
If \(I = (x^2 - y^3)\) then
\[
T_{(a, b)}(X) = \{(v_1, v_2): v_1(2a) + v_2(-3b^2) = 0\}.
\]
If \((a, b) \neq (0, 0)\) this is a line, and if \((a, b) = (0, 0)\) then this is \(\A^2\).
\end{eg}

\begin{lemma}
\(\{p \in X: \dim T_pX \geq t\}\) is a Zariski closed subset of \(X\) for all \(t \geq 0\).
\end{lemma}

\begin{proof}
Write \(T_p X = \ker (A: k^n \to k^r)\) where \(A\) is the matrix
\[
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1}(p) & \cdots & \frac{\partial f_1}{\partial x_n}(p) \\
& \ddots \\
\frac{\partial f_r}{\partial x_1}(p) & \cdots & \frac{\partial f_r}{\partial x_n}(p)
\end{pmatrix}
\]
By rank-nullity, \(\dim \ker A \geq t\) if and only if \(\text{rank} A \leq n - t\). But the rank of a matrix \(A\) is greater than or equal to \(s\) if and only if there exists an \(s \times s\) subminor \(B\) with \(\det B \neq 0\), which is a polynomial condition in the matrix coefficients. Thus \(\text{rank} A \leq n - t\) if and only if all \((n + 1 - t) \times (n + 1 - t)\) subminors have zero determinant.
Hence \[ \{p \in X: \dim T_pX \geq t\} = Z(f_1, \dots, f_r, \text{ determinants of subminors}). \] \end{proof} \begin{definition}[dimension]\index{dimension} Let \(X\) be an irreducible affine variety. Then \[ \dim X = \min \{\dim T_pX: p \in X\}. \] If \(k \neq \cl k\) then \(p\) is taken to be \(\cl k\)-points. \end{definition} In a moment we'll show \(T_pX\) is independent of embedding \(X \embed \A^n\). We require \(X\) to be irreducible as if not then each component can have different dimensions and \(\dim X\) is not a good notion but we may as well define it anyway: we let \[ \dim X = \max \{\dim X_i: X_i \text{ irreducible component of } X\}. \] \begin{lemma} Suppose \(k = \cl k\). Let \(f \in k[x_1, \dots, x_n]\) be a nonconstant irreducible polynomial. Then \(Z(f)\) has dimension \(n - 1\). \end{lemma} \begin{proof} \(\dim T_p Z(f)\) is either \(n\) or \(n - 1\) as there is only one equation. If \(\dim T_pZ(f) = n\) then \(\frac{\partial f}{\partial x_i}(p) = 0\) for all \(i\) so if \(\dim Z(f) = n\) then \[ \frac{\partial f}{\partial x_i} \in I(Z(f)) = \sqrt{(f)} = (f) \] as \((f)\) is prime. Write \(\frac{\partial f}{\partial x_i} = fg\) for some \(g \in k[x_1, \dots, x_n]\). But \(\deg_{x_i} \frac{\partial f}{\partial x_i} < \deg_{x_i} f\) so \(g = 0, \frac{\partial f}{\partial x_i} = 0\). Thus we have shown \(\dim Z(f) = n\) implies that \(\frac{\partial f}{\partial x_i} = 0\) for all \(i\). If \(\ch k = 0\) then \(f\) is a constant, \(Z(f) = \emptyset\), contradiction. If \(\ch k = p\) this implies \(f \in k[x_1^p, \dots, x_n^p]\). Claim that there exists \(h \in k[x_1, \dots, x_n]\) such that \(f = h^p\), contradicting \(f\) being prime: write \(f = \sum a_\lambda x^{p \lambda}\) for \(a_\lambda \in k\). As \(k = \cl k\), \(a_\lambda^{1/p}\) exists. Set \(h(x) = \sum a_\lambda^{1/p} x^\lambda\). As \(\ch k = p\), \(h^p = f\). \end{proof} \begin{eg}\leavevmode \begin{enumerate} \item \(\dim \A^n = n\). \item Any plane curve \(f(x, y)\) has dimension \(1\). \end{enumerate} \end{eg} \begin{definition}[smooth, singular point]\index{smooth}\index{singular} Suppose \(k = \cl k\). Let \(X\) be an irreducible algebraic variety and \(p \in X\). \(p\) is \emph{smooth} if \(\dim T_pX = \dim X\). \(p\) is \emph{singular} otherwise. \end{definition} Thus the above lemma says that singular points form a Zariski closed subvariety and smooth points form a Zariski open subset, which is non-empty. \begin{proposition}[nonexaminable] If \(k = \C\) and \(\dim X = d\), then \(p \in X\) is smooth if and only if there exists an isomorphism from a small ball around \(0 \in \C^d\) to a small neighbourhood of \(p \in X\) in the usual topology. \end{proposition} This is obviously false in Zariski topology. \begin{proof} This is a consequence of implicit function theorem. \end{proof} \begin{definition}[derivation]\index{derivation} Let \(A\) be a \(k\)-algebra and \(\varphi: A \to k\) a \(k\)-algebra homomorphism. A \emph{derivation centred at \(\varphi\)} is a \(k\)-linear map \(D: A \to k\) such that \[ D(fg) = \varphi(f) D(g) + D(f) \varphi(g) \] for all \(f, g \in A\). Write \(\Der(A, \varphi)\) for derivations centred at \(\varphi\). \end{definition} \begin{eg} \(f \mapsto \frac{\partial f}{\partial x}(p)\) is a derivation centred at \(p\). \end{eg} \begin{lemma} If \(X \subseteq \A^n\) then for all \(p \in X\), \[ T_pX = \Der(k[X], \ev_p). \] \end{lemma} \begin{proof} If \(X = \A^n\), \(k[X] = k[x_1, \dots, x_n]\). Let \(D \in \Der(k[X], \ev_p)\). Let \(v_i = D(x_i)\). 
This gives a map
\begin{align*}
\Der(k[X], \ev_p) &\to \A^n \\
D &\mapsto (v_1, \dots, v_n) = (D(x_1), \dots, D(x_n))
\end{align*}
Conversely, given \(v \in \A^n\), define a derivation \(D\) by
\[
D(f) = \sum v_i \frac{\partial f}{\partial x_i}(p).
\]
In general, \(k[X] = k[x_1, \dots, x_n]/(f_1, \dots, f_r)\). Let \(p \in X = Z(f_1, \dots, f_r)\). Then
\begin{align*}
\Der(k[X], \ev_p) &= \{D \in \Der(k[x_1, \dots, x_n], \ev_p): D|_{(f_1, \dots, f_r)} = 0\} \\
&= \{D \in \Der(k[x_1, \dots, x_n], \ev_p): \sum_j v_j \frac{\partial f_i}{\partial x_j}(p) = 0 \text{ for all } i\}
\end{align*}
where \(v_j = D(x_j)\).
\end{proof}

Observe that if \(\alpha: X \to Y\) is a morphism of varieties, i.e.\ \(\alpha^*: k[Y] \to k[X]\) is a \(k\)-algebra homomorphism, and \(D \in \Der(k[X], \ev_p)\), then \(D \compose \alpha^* \in \Der(k[Y], \ev_{\alpha(p)})\). Thus we get a linear map \(T_pX \to T_{\alpha(p)}Y\).

\begin{ex}
Let \(f \in k[X]\). Consider \(k[X] \to k[U] = k[X][t]/(tf - 1)\). We get a morphism \(U = X \setminus Z(f) \to X\). Let \(p \in U\). Show this defines an isomorphism \(T_pU \to T_pX\).
\end{ex}

We have two more definitions of the dimension of a variety, which agree with our current definition. To prove this we need some algebraic tools.

\begin{definition}[Krull dimension]\index{Krull dimension}
Let \(X\) be an irreducible affine variety. The \emph{Krull dimension} of \(X\) is
\begin{align*}
\dim_{\text{Kr}} X &= \max \{r: Z_0 \subsetneq Z_1 \subsetneq \dots \subsetneq Z_r = X, \ Z_i \text{ irreducible Zariski closed}\} \\
&= \max \{r: 0 = I_r \subsetneq I_{r - 1} \subsetneq \dots \subsetneq I_0 \subsetneq k[X], \ I_i \text{ prime}\}
\end{align*}
\end{definition}

\begin{eg}\leavevmode
\begin{enumerate}
\item If \(X = \A^1\) then \(\{\text{point}\} \subsetneq \A^1\) is the only such chain, so \(X\) has Krull dimension \(1\).
\item If \(X\) is a plane curve then it has Krull dimension \(1\), as shown in example sheet 1.
\end{enumerate}
\end{eg}

\begin{definition}[function field]\index{function field}
Let \(X\) be an irreducible affine variety. Define the \emph{function field} of \(X\) to be
\[
k(X) = \operatorname{Frac} k[X] = \bigcup_{g \in k[X] \setminus \{0\}} k[X][\frac{1}{g}] = \bigcup_{g \in k[X] \setminus \{0\}} k[X \setminus Z(g)]
\]
which is well-defined as \(k[X]\) is an integral domain. We define the \emph{transcendence dimension} of \(X\) to be the transcendence degree of \(k(X)\) over \(k\),
\[
\dim_{\text{tr}} X = \operatorname{trdeg}_k k(X).
\]
\end{definition}

\begin{eg}\leavevmode
\begin{enumerate}
\item \(k(\A^n) = k(x_1, \dots, x_n)\).
\item \(E = \{(x, y): y^2 = x^3 - x\}\). Then \(k(E) = k(x)[y]/(y^2 - x^3 + x)\), which is an algebraic extension of \(k(x)\), so \(E\) has transcendence dimension \(1\).
\end{enumerate}
\end{eg}

\begin{theorem}
Let \(X\) be an irreducible affine variety. Then
\[
\dim X = \dim_{\text{Kr}} X = \dim_{\text{tr}} X.
\]
\end{theorem}

\begin{proof}
Strategy of proof: show
\[
\dim \A^n = \dim_{\text{Kr}} \A^n = \dim_{\text{tr}} \A^n = n
\]
then reduce to this.
\end{proof}

We want to describe very special maps \(X \to Y\) with the property that \(\dim X = \dim Y\) and \(\dim_{\text{tr}} X = \dim_{\text{tr}} Y\), and then show that such maps exist from \(X\) to \(\A^n\) if \(\dim X = n\).

Suppose we have affine varieties \(X, Y\) such that
\begin{enumerate}
\item \(X\) and \(Y\) are irreducible,
\item there exists \(f \in k[Y][t]\) such that \(k[X] = k[Y][t]/(f(t))\), so
\[
f(t) = a_0(y) + a_1(y)t + \dots + a_N(y)t^N = f(y, t),
\]
with \(a_i(y) \in k[Y]\), \(a_N \neq 0\). This defines a morphism \(\varphi: X \to Y\).
\item \(f\) is a separable polynomial when regarded as an element of \(k(Y)[t]\), i.e.\ letting
\[
F(t) = \frac{1}{a_N(y)} f(t) = t^N + \frac{a_{N - 1}}{a_N} t^{N - 1} + \dots + \frac{a_0}{a_N},
\]
the polynomials \(F(t)\) and \(F'(t)\) have no common roots. In other words, \(k(Y) \subseteq k(X)\) is a separable algebraic extension.
\end{enumerate}

\paragraph{Claim 1} \(\varphi(X)\) contains an open, hence dense, subset of \(Y\).

\begin{proof}
By definition
\[
X = \{(y_0, t_0) \in Y \times \A^1: f(y_0, t_0) = 0\}
\]
so if \(y_0 \in Y \setminus Z(a_N)\), that is \(a_N(y_0) \neq 0\), then \(f(y_0, t)\) is a polynomial in \(t\) of degree \(N\), so it has exactly \(N\) roots (counted with multiplicity) over \(\cl k\), i.e.\ there exists\footnote{Lecturer suddenly declares \(k = \cl k\).} \((y_0, t_0) \in X\) with \(\varphi(y_0, t_0) = y_0\); in particular the fibre over \(y_0\) is non-empty. Thus \(\varphi(X) \supseteq Y \setminus Z(a_N)\).
\end{proof}

\paragraph{Claim 2} There exists a non-empty Zariski open subset \(U\) of \(Y\) such that for all \((y_0, t_0) \in \varphi^{-1}(U)\) the natural map \(T_{(y_0, t_0)}X \to T_{y_0}Y\) is an isomorphism.

\begin{remark}
Consider \(Y = \A^1\), \(X = \{(y, t): y= t^p\}\) with \(\ch k = p\). Then
\[
T_{(a, b)}X = \{(v_y, v_t): v_y - (pt^{p - 1}|_{(a, b)}) v_t = 0\} = \{(0, v_t): v_t \in \A^1\}
\]
as \(p = 0\) in \(k\). So \(T_{(a, b)}X \to T_a Y\) is the zero map. Thus the separability assumption is important.
\end{remark}

\begin{proof}
Choose generators for \(k[Y]\), i.e.\ an embedding \(Y \subseteq \A^n\). Then
\begin{align*}
T_{y_0}Y &= \{v \in \A^n: \sum v_i \frac{\partial h}{\partial x_i} (y_0) = 0 \text{ for all } h \in I(Y)\} \\
T_{(y_0, t_0)} X &= \{(v, \gamma) \in \A^n \times \A^1: \sum v_i \frac{\partial h}{\partial x_i}(y_0) = 0 \text{ for all } h \in I(Y), \\
&\qquad \sum v_i \frac{\partial f}{\partial x_i}(y_0, t_0) + \gamma \frac{\partial f}{\partial t}(y_0, t_0) = 0\}
\end{align*}
as \(I(X) = (I(Y), f)\). But then
\[
T_{(y_0, t_0)}X = \{(v, \gamma) \in T_{y_0} Y \times \A^1: \sum v_i \frac{\partial f}{\partial x_i}(y_0, t_0) + \gamma \frac{\partial f}{\partial t}(y_0, t_0) = 0\}
\]
so the projection \(T_{(y_0, t_0)}X \to T_{y_0}Y\) is an isomorphism whenever \(\frac{\partial f}{\partial t}(y_0, t_0) \neq 0\), the equation then determining \(\gamma\) uniquely from \(v\). It therefore suffices to find a non-empty Zariski open subset \(U \subseteq Y\) on which this holds, and this is immediate if \(\frac{\partial f}{\partial t}\) is not the zero polynomial in \(k[Y][t]\); our separability assumption implies it is not.
\end{proof}

\[
\begin{tikzcd}
\varphi^{-1}(U) \ar[r] \ar[d] & X \ar[d, "\varphi"] \\
U \ar[r] & Y
\end{tikzcd}
\]
where the fibres over \(U\) are finite and \(\varphi\) restricted to \(\varphi^{-1}(U)\) induces isomorphisms of tangent spaces.

\begin{corollary}
\[
\dim X = \dim Y, \qquad \dim_{\text{tr}} X = \dim_{\text{tr}} Y.
\]
\end{corollary}

\begin{proof}
\(\dim_{\text{tr}} X = \dim_{\text{tr}} Y\) is an immediate algebraic fact. Let \(Y^{\text{sm}}\) be the set of smooth points of \(Y\). As \(Y\) is irreducible, this is an open dense set and hence \(U \cap Y^{\text{sm}}\) is non-empty, so \(\dim T_y Y = \dim Y\) if \(y \in Y^{\text{sm}} \cap U\) and
\[
\dim T_{(y, t)}X = \dim T_yY = \dim Y
\]
for all \((y, t) \in \varphi^{-1}(y)\). But \(\varphi^{-1}(U \cap Y^{\text{sm}})\) is a non-empty open set and \(X\) is irreducible, so
\[
\dim X = \dim T_xX = \dim Y
\]
for all \(x \in \varphi^{-1}(U \cap Y^{\text{sm}})\).
\end{proof}

\begin{theorem}[Noether normalisation theorem]\index{Noether normalisation theorem}
Let \(X\) be an irreducible affine variety over \(k\) with \(\dim X = d\).
Then there exists a surjective map \(p: X \to \A^d\) which is a composite of maps of the above form (and in particular, \(p^{-1}(y)\) is a finite set for all \(y \in \A^d\)).
\end{theorem}

\begin{corollary}
\[
\dim X = \dim \A^d = d = \dim_{\text{tr}} \A^d = \dim_{\text{tr}} X.
\]
\end{corollary}

\begin{eg}
Let \(X = \C^* = \{(x, y) \in \C^2: xy = 1\}\). Then Noether normalisation asserts that there is a surjection \(\C^* \to \C\), e.g.
\begin{align*}
\C^* &\to \C \\
t &\mapsto t + t^{-1} = z
\end{align*}
Indeed \(k[t, t^{-1}] = k[z][t]/(t^2 - zt + 1)\).
\end{eg}

\begin{ex}
Find a surjective map \(\A^1 \setminus \{\lambda_1, \dots, \lambda_N\} \to \A^1\).
\end{ex}

It is clear that a morphism \(\varphi: X \to Y\) such that \(k[X] = k[Y][t]/(f(t))\) with \(f\) monic is particularly nice: \(\varphi\) is surjective and the fibres are finite. Such a \(\varphi\) is an example of a \emph{finite flat morphism}. Note that \(k[Y] \subseteq k[X]\) is then an \emph{integral extension} of rings.

\begin{definition}
\(B \subseteq A\) is an \emph{integral extension} of rings if for all \(a \in A\), there exists a monic polynomial \(f(t) \in B[t]\) such that \(f(a) = 0\).
\end{definition}

\begin{lemma}\leavevmode
\begin{enumerate}
\item If \(f\) is a monic polynomial, \(B[t]/(f(t))\) is an integral extension of \(B\).
\item If \(C \subseteq B\) and \(B \subseteq A\) are integral extensions then so is \(C \subseteq A\).
\end{enumerate}
\end{lemma}

\begin{theorem}[Noether normalisation]
Let \(A\) be a finitely generated \(k\)-algebra, where \(k\) is a field, and suppose \(A\) is an integral domain. Then there exist \(z_1, \dots, z_n \in A\) which generate \(A\) as a \(k\)-algebra such that
\begin{enumerate}
\item there exists \(d\) such that \(z_1, \dots, z_d\) are algebraically independent over \(k\),
\item for all \(i > d\), \(z_i\) is integral over \(k[z_1, \dots, z_{i - 1}]\), satisfying a monic polynomial \(F_i\) with coefficients in \(k[z_1, \dots, z_{i - 1}]\).
\end{enumerate}
In particular, \(A\) is integral over \(k[z_1, \dots, z_d]\). Moreover, if \(\operatorname{Frac} A\) is a separable field extension of \(k\) then we can also ensure that the \(F_i\) are separable polynomials, and we can always do this if \(k = \cl k\).
\end{theorem}

\begin{corollary}[Nullstellensatz]
If \(A\) is a finitely generated \(k\)-algebra that is also a field then \(A \supseteq k\) is algebraic.
\end{corollary}

\begin{lemma}
If \(B \subseteq A\) is an integral ring extension then
\[
B^\times = A^\times \cap B.
\]
\end{lemma}

\begin{proof}
Let \(b \in A^\times \cap B\). Then there exists \(a \in A\) such that \(ab = 1\). As \(A \supseteq B\) is integral, there exist \(c_i \in B\) such that
\[
a^n + c_{n - 1} a^{n - 1} + \dots + c_0 = 0.
\]
Multiplying by \(b^{n - 1}\), we get
\[
a = -c_{n - 1} - c_{n - 2} b - \dots - c_0b^{n - 1} \in B,
\]
so \(b\) is invertible in \(B\). The other inclusion is clear.
\end{proof}

\begin{proof}[Proof of the corollary]
Let \(z_1, \dots, z_n\) be as in Noether normalisation, so \(A\) is generated by \(z_1, \dots, z_n\), the elements \(z_1, \dots, z_d\) are algebraically independent over \(k\), and \(z_i\) is integral over \(k[z_1, \dots, z_d]\) for \(i > d\). Claim that if \(d > 0\) then \(A\) is \emph{not} a field: if \(d > 0\) then the units in \(k[z_1, \dots, z_d]\) are just \(k^\times\), so \(z_1\) is not invertible in \(k[z_1, \dots, z_d]\), hence not invertible in \(A\) by the lemma. So if \(A\) is a field then \(d = 0\) and \(A\) is integral, hence algebraic, over \(k\).
\end{proof}

\begin{proof}[Proof of Noether normalisation]
As \(A\) is finitely generated, there exist generators \(z_1, \dots, z_n\). Wlog \(z_1, \dots, z_d\) are algebraically independent and \(A\) is algebraic over \(k[z_1, \dots, z_d]\). If \(d = n\) then we are done. Otherwise assume the theorem holds for all \(k\)-algebras with \(\leq n - 1\) generators.
Let \(A' = k[z_1, \dots, z_{n - 1}]\). Since \(d < n\), there exists a nonzero \(f \in k[x_1, \dots, x_n]\) such that
\[
f(z_1, \dots, z_{n - 1}, z_n) = 0.
\]
Write \(f = \sum_{i \leq N} F_i\) where \(F_i \in k[x_1, \dots, x_n]\) is homogeneous of total degree \(i\) and \(F_N \neq 0\). Suppose \(k\) is infinite. Then there exist \(\lambda_1, \dots, \lambda_{n - 1} \in k\) such that
\[
F_N(\lambda_1, \dots, \lambda_{n - 1}, 1) \neq 0.
\]
Set \(x_i' = x_i - \lambda_i x_n\) for \(i < n\) and \(x_n' = x_n\). Note that
\begin{align*}
x_1^{e_1} \cdots x_n^{e_n} &= (x_1' + \lambda_1 x_n)^{e_1} \cdots (x_{n - 1}' + \lambda_{n - 1} x_n)^{e_{n - 1}} x_n^{e_n} \\
&= \lambda_1^{e_1} \cdots \lambda_{n - 1}^{e_{n - 1}} x_n^{e_1 + \dots + e_n} + \text{ terms of lower degree in } x_n
\end{align*}
Hence, rewriting \(f\) in terms of \(x_1', \dots, x_{n - 1}', x_n\),
\[
f = F_N(\lambda_1, \dots, \lambda_{n - 1}, 1) \cdot x_n^{N} + \text{ terms of lower degree in } x_n \text{ with coefficients in } k[x_1', \dots, x_{n - 1}'].
\]
But this implies that \(z_n' = z_n\) is integral over \(k[z_1', \dots, z_{n - 1}'] = A''\), where \(z_i' = z_i - \lambda_i z_n\). But \(A''\) is generated by \(n - 1\) elements, so the inductive hypothesis gives the result.

Separability requires further argument. If \(k\) is finite then we use an argument of Nagata: take \(x_i' = x_i - x_n^{\gamma_i}\) for \(\gamma_i\) sufficiently large.
\end{proof}

\begin{ex}
Let \(k = \cl k\) and \(X, Y\) irreducible varieties over \(k\) with \(\varphi: X \to Y\) a morphism. Show that
\begin{enumerate}
\item \(k[Y] \embed k[X]\) if and only if \(\cl{\varphi(X)} = Y\);
\item if \(\cl{\varphi(X)} = Y\) then \(\dim X \geq \dim Y\). In fact, for all \(y \in \varphi(X)\),
\[
\dim \varphi^{-1}(y) \geq \dim X - \dim Y
\]
and equality holds on a dense open subset. (Hard! Requires Noether normalisation.)
\end{enumerate}
\end{ex}

\section{Projective space}

We will first define projective space as a set. Let \(V\) be a vector space over \(k\) with \(\dim V = n + 1\), \(n \geq 0\). Define
\[
\P V = \P^n = \{\text{lines through the origin in } V\} = (V \setminus \{0\})/k^\times.
\]
If \(v \in V\), \(v \neq 0\), then \(kv = \{\lambda v: \lambda \in k\}\) is a line, and conversely every line \(\ell \in \P V\) is of the form \(\ell = kv\) for any \(v \in \ell \setminus \{0\}\).

Note that it is not clear that \(\P^n\) is a variety (affine or otherwise), as it is the result of two operations, neither of which gives a variety:
\begin{enumerate}
\item \(V \setminus \{0\}\) is not an affine algebraic variety if \(\dim V > 1\).
\item Quotienting a variety by the action of a group like \(k^\times\) is subtle, even if the variety is affine. This is the subject of geometric invariant theory.
\end{enumerate}

The first thing we can do to analyse projective space is to give it homogeneous coordinates. Choose a basis \(e_0, \dots, e_n\) of \(V\), i.e.\ an isomorphism \(V \cong k^{n + 1}\), and write \([x_0 : \dots : x_n] \in \P^n\) for the line through \(\sum x_i e_i\). Thus
\[
[x_0: \dots : x_n] = [\lambda x_0 : \cdots : \lambda x_n]
\]
for all \(\lambda \in k^\times\).

Claim \(\P^n = \A^n \amalg \P^{n - 1}\):
\begin{proof}
Consider \(p = [x_0 : \dots : x_n]\). If \(x_n = 0\), then \(p = [x_0 : \dots : x_{n - 1} : 0]\) determines a unique point in \(\P^{n - 1}\), and conversely if \(x_n \neq 0\) then
\[
[x_0 : \dots : x_n] = [\frac{x_0}{x_n} : \dots : \frac{x_{n - 1}}{x_n} : 1].
\]
This gives a bijection as required.
\end{proof}

\begin{corollary}
\[
\P^n = \A^n \amalg \A^{n - 1} \amalg \dots \amalg \A^0.
\]
\end{corollary}

This gives a nice set theoretic description of \(\P^n\), although we still cannot quite make it into an algebraic variety by gluing together a closed and an open subset. For example, \(Z(x^2 - y^3) \subseteq \A^2\) can be written as \(k^\times \amalg \{\text{pt}\}\). On the other hand, \(\A^1 = k^\times \amalg \{\text{pt}\}\). More data is needed.

We want to rephrase \(\P^n = \A^n \amalg \P^{n - 1}\). Let \(H \leq V\) be a hyperplane, let \(w_0 \in V \setminus H\) (for example \(H = \{x: x_n = 0\}, w_0 = (0, \dots, 0, 1)\)). Then we have an inclusion of the projectivisation of \(H\)
\begin{align*}
  \P H &\embed \P V \\
  kv &\mapsto kv
\end{align*}
as well as the affine hyperplane
\begin{align*}
  H &\embed \P V \\
  h &\mapsto k(h + w_0)
\end{align*}
It is an exercise to show that
\[
  \P V \setminus \P H \cong H = \A^n,
\]
with the isomorphism depending on the choice of \(w_0\).

Set \(U_i = \{x = [x_0: \dots : x_n] \in \P^n: x_i \neq 0\}, H_i = \{(x_0, \dots, x_n): x_i = 0\} \cong \A^n\) so \(\P V \setminus \P H_i = U_i\). It is clear that
\[
  U_0 \cup U_1 \cup \dots \cup U_n = \P^n
\]
as if \(x = [x_0 : \dots : x_n] \in \P V\), some \(x_i \neq 0\) and then \(x \in U_i\).

\begin{eg}\leavevmode
  \begin{enumerate}
    \item For \(n = 1\), \(U_0 = \{[1: x_1]\}, U_1 = \{[x_0 : 1]\}\). The chart \(U_0\) is identified with \(\A^1\) via
    \begin{align*}
      U_0 &\to \A^1 \\
      [x_0 : x_1] &\mapsto \frac{x_1}{x_0}
    \end{align*}
    and \(\P^1 = \A^1 \cup \{\infty\}\).
    \item \(n = 2\): \(\P^2 = U_0 \cup U_1 \cup U_2\) and \(\P^2 = U_i \amalg \P^1\) for each \(i\); (picture omitted) there are three lines at infinity in \(\P^2\). Exercise: the pattern of \(\P^{n - 1}\)'s at \(\infty\) in \(\P^n\) is given by the boundary of the \(n\)-simplex.
  \end{enumerate}
\end{eg}

Consider such a map \(j: U \embed \P^n\) where \(U = U_i = \A^n\) for some \(i\). It is an exercise to check that this is an open embedding of topological spaces. As each \(U_i \cong \A^n\) is an affine variety, and
\begin{align*}
  U_i \cap U_j &\to U_j \\
  k^\times \times \A^{n - 1} &\to \A^n
\end{align*}
is a morphism of affine varieties, \(\P^n\) is a well-defined algebraic variety, and \(U \to \P^n\) is a morphism of algebraic varieties.

There are lots of maps \(\A^n \embed \P^n\): choose a hyperplane and a point off the hyperplane, for example \((x_0, \dots, x_{n - 1}) \mapsto [x_0: \dots : x_{n - 1}: 1]\). Call this map (for \(n = 2\)) \(i: \A^2 \to \P^2\).

Let \(E^0 = \{(x, y) \in \A^2: y^2 = x^3 - x\}\). What is \(\cl{i(E^0)}\) in \(\P^2\)? Let's work it out. As \([x: y: 1] = [X: Y: Z]\) for \(Z \neq 0\), have \(x = \frac{X}{Z}, y = \frac{Y}{Z}\) so \(y^2 = x^3 - x\) gives
\[
  Y^2Z = X^3 - XZ^2
\]
so
\[
  i(E^0) = \{[X: Y: Z] \in \P^2: Y^2Z = X^3 - XZ^2, Z \neq 0\}.
\]
From now on write \(E^0\) for \(i(E^0)\). Then the closure will be cut out by the same equation, but allowing \(Z = 0\). This can be done as follows. There are three charts: \(X \neq 0, Y \neq 0, Z \neq 0\). In the chart \(Z \neq 0\), \(y^2 = x^3 - x\). In the chart \(Y \neq 0\), put \(z = \frac{Z}{Y}, x = \frac{X}{Y}\) so the equation for \(E^0\) is \(z = x^3 - xz^2\) and \(z \neq 0\). In the chart \(X \neq 0\), put \(y = \frac{Y}{X}, z = \frac{Z}{X}\); the equation is \(y^2 z = 1 - z^2\) and \(z \neq 0\). Now taking the closure of \(E^0\) in each chart gives the closure of \(E^0\) in \(\P^2\).

If \([X: Y: Z]\) is in the chart \(Y \neq 0\) but not in the chart \(Z \neq 0\), must have \(z = 0\). The equation says \(x^3 = 0\), which has a unique solution \(x = 0\), so we get an extra point \([0: 1: 0]\).
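As a quick check (not carried out in the original computation): in the chart \(Y \neq 0\) the closure is \(\{(x, z): z = x^3 - xz^2\}\), and
\[
  \frac{\partial}{\partial z}\left(z - x^3 + xz^2\right) = 1 + 2xz
\]
equals \(1 \neq 0\) at \((x, z) = (0, 0)\), so the closure is smooth at the extra point \([0:1:0]\).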
If \([X: Y: Z]\) is in the chart \(X \neq 0\) and not in the chart \(Z \neq 0\) then \(z = 0\), and the equation gives \(0 = 1\), which has no solution, so no extra point.

Thus the projective curve \(E\), defined as the closure of \(E^0\) in \(\P^2\), is \(E^0 \cup \{[0: 1: 0]\}\), which is what we wanted in the first lecture.

In general, given \(X = Z(I) \subseteq \A^n\) where \(I \subseteq k[x_1, \dots, x_n]\), we may ask what the closure of \(X\) in \(\P^n\) is under the embedding \((x_1, \dots, x_n) \mapsto [1: x_1: \cdots : x_n]\). To do so we would like to talk about varieties in projective space just as in affine space. However, note that the zero locus of a general polynomial in \(\P^n\) is not well-defined, as it is not invariant under the action of \(k^\times\).

\begin{definition}[homogeneous polynomial]\index{homogeneous polynomial}
  Given \(f \in k[x_1, \dots, x_n]\), \(f\) is \emph{homogeneous} of degree \(d\) if
  \[
    f = \sum_{i_1 + \dots + i_n = d} a_{i_1 \cdots i_n} x_1^{i_1} \cdots x_n^{i_n}.
  \]
  If \(k\) is infinite then this holds if and only if
  \[
    f(\lambda x_1, \dots, \lambda x_n) = \lambda^d f(x_1, \dots, x_n)
  \]
  for all \(\lambda \in k^\times\).
\end{definition}

Any \(f \in k[x_1, \dots, x_n]\) can be written as \(f = \sum_{r \geq 0} f_{(r)}\) where \(f_{(r)}\) is homogeneous of degree \(r\).

\begin{definition}[homogeneous ideal]
  An ideal \(I \subseteq k[x_1, \dots, x_n]\) is \emph{homogeneous}\index{homogeneous} if for all \(f \in I\), writing \(f = \sum_r f_{(r)}\), we have \(f_{(r)} \in I\) for all \(r\).
\end{definition}

\begin{eg}
  \((xy + y^2, y^3, x^2)\) is homogeneous but \((xy + y^3)\) is not.
\end{eg}

Given \(f \in k[x_1, \dots, x_n]\), we homogenise it by defining
\[
  \tilde f(X_0, \dots, X_n) = X_0^d f(\frac{X_1}{X_0}, \dots, \frac{X_n}{X_0})
\]
where \(d = \deg f\). \(f\) can be recovered by
\[
  \tilde f(1, x_1, \dots, x_n) = f(x_1, \dots, x_n).
\]
\begin{eg}
  If \(f = y^2 - x^3 + x\) then
  \[
    \tilde f = Z^3((Y/Z)^2 - (X/Z)^3 + (X/Z)) = ZY^2 - X^3 + XZ^2.
  \]
\end{eg}

For an ideal \(I \subseteq k[x_1, \dots, x_n]\), define
\[
  \tilde I = (\tilde f: f \in I).
\]
\begin{ex}
  \(\tilde I\) is an ideal and is homogeneous. \(\tilde I|_{X_0 = 1} = I\).
\end{ex}

\begin{lemma}\leavevmode
  \begin{enumerate}
    \item \(I \subseteq k[x_1, \dots, x_n]\) is homogeneous if and only if \(I\) is generated by a finite set of homogeneous polynomials.
    \item Suppose \(k\) is infinite. \(\tilde I \subseteq k[x_0, \dots, x_n]\) is a homogeneous ideal if and only if \(\tilde X = Z(\tilde I) \subseteq \A^{n + 1}\) is invariant under the \(k^\times\)-action \((p_0, \dots, p_n) \mapsto (\lambda p_0, \dots, \lambda p_n)\).
  \end{enumerate}
\end{lemma}
\begin{proof}
  Exercise.
\end{proof}

This shows that Zariski closed subsets of \(\P^n\), defined to be zeros cut out by homogeneous ideals in \(k[x_0, \dots, x_n]\), are well-defined. They correspond to \(k^\times\)-invariant closed subsets of \(\A^{n + 1}\).

\begin{note}
  If \(I = (f_1, \dots, f_r) \subseteq k[x_1, \dots, x_n]\), it need \emph{not} be the case that \(\tilde I = (\tilde f_1, \dots, \tilde f_r)\). For example given \(I = (x - y^2, y) = (x, y) = I(\{0\})\),
  \[
    (xz - y^2, y) = (xz, y) \neq (x, y) = \tilde I.
  \]
\end{note}

\begin{ex}
  Find an ideal such that \(\tilde I \neq (\tilde f_1, \dots, \tilde f_r)\) for any minimal generators \(f_1, \dots, f_r\) of \(I\).
\end{ex}

\begin{definition}[quasi-projective/affine variety]\index{quasi-projective variety}\index{quasi-affine variety}
  A \emph{quasi-projective variety} is an open subvariety of a projective variety.
A \emph{quasi-affine variety} is an open subvariety of an affine variety. \end{definition} \begin{eg} \(\C^2 \setminus \{(0, 0)\} \subseteq \C^2\) is a quasi-affine variety. \end{eg} \begin{remark} If \(X\) is an affine variety, \(f \in k[X]\) and \(X\) is irreducible, \(k[X \setminus Z(f)] = k[X][\frac{1}{f}]\) so \[ k(X \setminus Z(f)) = \operatorname{Frac} k[X \setminus Z(f)] = \operatorname{Frac} k[X] = k(X). \] Hence if \(X\) is an affine algebraic variety, we can define \(k(X)\) to be \(k(U)\) for \(U\) any open affine subvariety of \(X\), for example for \(U\) an open set in a chart defining \(X\). For example in \(\P^n\), \[ k(U_0) = k(\frac{x_1}{x_0}, \dots, \frac{x_n}{x_0}) = k(U_n) = k(\frac{x_0}{x_n}, \dots, \frac{x_{n - 1}}{x_n}). \] \end{remark} We end this chapter with a brief discussion of compactness of projective spaces. Let \(k = \C\). Claim \[ \P^n = (\C^{n + 1} \setminus \{0\}) / \C^\times = S^{2n + 1}/S^1. \] \begin{proof} Define \[ S^{2n + 1} = \{x \in \C^{n + 1}: \norm x = 1\} \] where \(\norm x = (\sum |x_i|^2)^{1/2}\). Consider the map \begin{align*} \C^{n + 1} \setminus \{0\} &\to S^{2n + 1} \\ x = (x_0, \dots, x_n) &\mapsto \frac{1}{\norm x} (x_0, \dots, x_n) \end{align*} \(|\lambda| = 1\), i.e.\ \(\lambda \in \C^*\) if and only if \(\norm{\lambda x} = \norm x\) so this descends to a map \[ (\C^{n + 1} \setminus 0)/ \C^\times \to S^{2n + 1}/S^1. \] \end{proof} \(S^{2n + 1}\) is compact in the usual topology and so is its quotient. Thus \(\P^n\) is compact in the usual topology. Surprisingly, this has an algebraic version in the Zariski topology. \begin{definition}[proper]\index{proper} \(X\) is \emph{proper} if for every continuous map \(\varphi: X \to Y\), the image of a closed subset under \(\varphi\) is closed. \end{definition} \begin{theorem}[fundamental theorem of elimination theory] For any field \(k\), \(\P^n\) is proper. \end{theorem} \begin{corollary} If \(X \subseteq \A^n\) is an affine variety and \(X\) is proper then \(X\) is a finite set of points. \end{corollary} \begin{proof} Suppose \(X\) is not a finite set of points. Then as \(X\) is affine there exists a non-constant element \(\varphi \in k[X]\), that is a morphism \(\varphi: X \to \A^1\) which is not constant. But \(X\) is proper so \(\im \varphi\) is closed and by assumption, \(\varphi(X)\) is not a finite set of points. Hence \(\varphi(X) = \A^1\). Define \(\tilde \varphi: X \to \P^1\) to be the obvious composition. The image of \(\tilde \varphi\) is \(\A^1\) which is not closed in \(\P^1\) so \(X\) is not proper. Contradiction. \end{proof} \section{Curves} From now on suppose \(k = \cl k\). \begin{definition}[curve]\index{curve} A \emph{curve} is a quasi-projective algebraic variety \(X\) such that \(\dim X = 1\). \end{definition} \begin{eg} If \(F \in k[X_0, X_1, X_2]\) is an irreducible homogeneous polynomial then \(Z(F) \subseteq \P^2\) is an irreducible plane projective curve. \end{eg} Warning: not all curves can be embedded in \(\P^2\). \begin{ex} \(\dim X = 1\) means that for all \(p \in X \setminus \{\text{finite set}\}\), \(\dim T_pX = 1\), if and only if \(\dim_{\text{tr}} k(X) = 1\), if and only if any Zariski closed subvariety of \(X\) is \(X\) or a finite set of points. \end{ex} \begin{definition} Let \(X\) be an irreducible algebraic variety and \(p \in X\). Define the \emph{local ring} at \(p\) to be \[ \mathcal O_{X, p} = \{\frac{f}{g} \in k(X): g(p) \neq 0\}, \] rational functions defined on some neighbourhood of \(p\). 
Define
\[
  \mathfrak m_{X, p} = \{\gamma \in \mathcal O_{X, p}: \gamma(p) = 0\},
\]
the maximal ideal of \(\mathcal O_{X, p}\).
\end{definition}

\begin{ex}\leavevmode
  \begin{enumerate}
    \item If \(\gamma \in \mathcal O_{X, p} \setminus \mathfrak m_{X, p}\) then \(\gamma^{-1} \in \mathcal O_{X, p}\).
    \item Show \(\mathfrak m_{X, p}\) is the unique maximal ideal of \(\mathcal O_{X, p}\).
  \end{enumerate}
\end{ex}

Suppose \(k = \C\). Let \(X\) be a curve, \(p \in X\) a smooth point. Then a small open neighbourhood of \(p\) in the usual topology is diffeomorphic to a small open neighbourhood of \(0\) in \(\C\) by the implicit function theorem. The corresponding analytic notion is that of convergent power series on some neighbourhood of \(p\); there is a completely analogous algebraic replacement for it.

\begin{theorem}
  Let \(X\) be a curve, \(p \in X\) a smooth point. Write \(\mathfrak m = \mathfrak m_{X, p}\).
  \begin{enumerate}
    \item \(\mathfrak m\) is a principal ideal in \(\mathcal O_{X, p}\).
    \item \(\bigcap_{n \geq 1} \mathfrak m^n = \{0\}\).
  \end{enumerate}
\end{theorem}

\begin{eg}
  Intuition: Consider \(\{x^2 + y^2 = 1\} \subseteq \A^2\). If \(p \neq (0, \pm 1)\) then \(y - y_0\) is a ``local coordinate'' at \(p\). If \(k = \C\), \(p \neq (0, \pm 1)\), then we can write \(x\) in terms of \(y\) as a convergent power series for \(|y - y_0| < \varepsilon\). For example at \((1, 0)\),
  \[
    x = (1 - y^2)^{1/2} = \sum_{n \geq 0} \binom{1/2}{n} (-1)^n y^{2n}
  \]
  so
  \[
    x - 1 = -\frac{1}{2}y^2 + \text{ higher order terms}
  \]
  so \(x - 1\) vanishes to order \(2\) at the point. In the theorem,
  \[
    \mathfrak m_{X, p} = (y - y_0)
  \]
  if \(\frac{\partial f}{\partial x}(p) \neq 0\) (where \(f = x^2 + y^2 - 1\)). Alternatively,
  \[
    x - 1 = \frac{x^2 - 1}{x + 1} = -\frac{y^2}{x + 1}
  \]
  and \(\frac{1}{x + 1} \in \mathcal O_{X, p} \setminus \mathfrak m_{X, p}\).
\end{eg}

\begin{proof}
  By definition of \(X\) there exists an affine open neighbourhood \(X_0\) of \(p\), i.e.\ an open subset \(X_0 \subseteq X\) which is an affine variety. Write \(k[X_0] = k[x_1, \dots, x_n]/I\). wlog \(p \in X_0\) corresponds to the point \((0, \dots, 0)\). Let us write \(\overline x_i\) for the image of \(x_i\) in \(k[X_0]\). Then
  \begin{align*}
    \mathcal O_{X, p} &= \{\frac{f}{g}: f, g \in k[X_0], g \notin (\overline x_1, \dots, \overline x_n)\} \\
    \mathfrak m_{X, p} &= \{\frac{f}{g}: f \in (\overline x_1, \dots, \overline x_n), g \notin (\overline x_1, \dots, \overline x_n)\}
  \end{align*}
  \(X\) is a curve smooth at \(p\) so \(\dim T_pX_0 = 1\). Thus \(T_pX_0 \subseteq \A^n\) is a line, and by changing coordinates we can assume it is the line \(x_2 = x_3 = \dots = x_n = 0\). In other words, if \(\tilde f_2, \tilde f_3, \dots\) generate the ideal \(I\) then write
  \[
    \tilde f_i = \sum a_{ij} x_j + \text{ quadratic and higher terms}.
  \]
  Note that the higher terms do not contribute to the tangent space at \(0\). Thus \(\dim T_0 X = 1\) implies that \(\dim \ker (a_{ij}) = 1\), so by row reduction we can assume that
  \[
    \tilde f_i = \lambda_i x_i + \text{ higher order terms}
  \]
  with \(\lambda_i \neq 0\) for \(i = 2, \dots, n\) and
  \[
    \tilde f_i = \text{ quadratic and higher terms}
  \]
  for \(i > n\). So, after rescaling, there exist \(\tilde f_2, \dots, \tilde f_n \in I\), \(\tilde f_i = x_i + h_i\) where \(h_i\) is at least quadratic. Thus in \(k[X_0]\), \(\overline x_j = -h_j\) and
  \[
    \overline x_j \in (\overline x_1^2, \overline x_1 \overline x_2, \dots) = \mathfrak m^2
  \]
  for \(j \geq 2\).
Thus \[ \mathfrak m = (\overline x_1, \dots, \overline x_n) = \overline x_1 \mathcal O_{X, p} + \dots + \overline x_n \mathcal O_{X, p} = \overline x_1 \mathcal O_{X, p} + \mathfrak m^2 \] We want to conclude that \(\mathfrak m = (\overline x_1)\). Invoke Nakayama's lemma \begin{proposition} Let \(R\) be a ring, \(M\) a finitely generated \(R\)-module, \(J \subseteq R\) an ideal. Then \begin{enumerate} \item if \(JM = M\) then exists \(r \in J\) such that \((1 + r) M = 0\). \item if \(N \subseteq M\) is a submodule such that \(JM + N = M\) then there exists \(r \in J\) such that \((1 + r) M = N\). \end{enumerate} \end{proposition} Apply Nakayama to \(R = \mathcal O_{X, p}, J = \mathfrak m_{X, p}\) and note that \(1 + r \in \mathcal O_{X, p}^*\) if \(r \in \mathfrak m_{X, p}\), so \[ (1 + r) M = M. \] We would like to apply Nakayama to \(M = \mathfrak m_{X, p}, N = (x_1)\), so need to show \(M\) is finitely generated. But every ideal \(J \subseteq \mathcal O_{X, p}\) is finitely generated, \[ J = \{\frac{f}{g}: f \in J \cap k[X_0], g \in k[X_0], g(p) \neq 0\} \] so \(g \cdot \frac{f}{g} = f \in J \subseteq k[X_0]\), but \(J \cap k[X_0]\) is an ideal in \(k[X_0]\), hence finitely generated by Hilbert basis theorem. As \[ \mathfrak m = (x_1) + \mathfrak m^2, \] Nakayama 2 says that \(\mathfrak m \subseteq (x_1)\). But \((x_1) \subseteq \mathfrak m\) so equality. In particular \(\mathfrak m\) is the principal ideal generated by \(x_1\). Now let \(M = \bigcap_{n \geq 1} \mathfrak m^n\), \(J = \mathfrak m \subseteq \mathcal O_{X, p}\) so a finitely generated ideal. But \(\mathfrak m M = M\) so by Nakayama 1 \(M = 0\). \end{proof} \begin{ex} Apply this to the circle. \end{ex} Let \(X = Z(f) \subseteq \A^2\) be a plane curve, \(p = (x_0, y_0) \in X\) a smooth point. Then \(x- x_0\) generate \(\mathfrak m_{X, p}\) if and only if \(\frac{\partial f}{\partial y}(x_0, y_0) \neq 0\), and a similar statement holds for \(y\). Thus if \[ \frac{\partial f}{\partial x}(p) = \frac{\partial f}{\partial y}(p) = 0 \] then \(p\) is not a smooth point. Thus we can either write \(y\) in terms of \(x\) locally or vice versa near a smooth point. Exercise: check this is immediate from the theorem and its proof. \begin{definition} A function \(t \in \mathfrak m_{X, p}\) such that \(\mathfrak m_{X, p} = (t)\) is called a \emph{local parameter} or \emph{local coordinate} at \(p\). \end{definition} Such is not unique but if \(t\) is a local parameter so is \(ut\) if \(u \in \mathcal O_{X, p}^*\) and all other local parameters are of this form. \begin{corollary}[order of vanishing/pole]\index{order of vanishing/pole} Every \(f \in k(X)^*\) can be written uniquely as \[ f = t^n \cdot u \] where \(n \in \Z, u \in \mathcal O_{X, p}^*\). We call \(n = \nu_p(f)\) the \emph{order of vanishing/pole} of \(f\) at \(p\). \begin{align*} \mathcal O_{X, p} &= \{f \in k(X): \nu_p(f) \geq 0\} \cup \{0\} \\ \mathfrak m_{X, p} &= \{f \in k(X): \nu_p(f) \geq 1\} \cup \{0\} \end{align*} \end{corollary} This is independent of the choice of \(t\). \begin{proof} Given \(f \in \mathcal O_{X, p}\), as \(\bigcap_{n \geq 0} \mathfrak m^n = 0\), there exists a unique \(n \geq 0\) such that \(f \in \mathfrak m^n \setminus \mathfrak m^{n + 1}\). Define \(\nu_p(f) = n\). As \(\mathfrak m^n = (t^n)\), \(f = t^n u\) with \(u \in \mathcal O_{X, p} \setminus \mathfrak m_{X, p} = \mathcal O_{X, p}^*\). Note if \(t^n u' = t^m u\) where \(n \geq m\) then \(t^{n - m} = u' u^{-1} \in \mathcal O_{X, p}^*\) so \(n = m\). 
If \(f \in k(X)^*\), \(f \notin \mathcal O_{X, p}\) then \(f^{-1} \in \mathcal O_{X, p}\). Apply the above and define \(\nu_p(f) = - \nu_p(f^{-1})\).
\end{proof}

\begin{eg}
  \(X = \P^1\) so \(k(X) = k(x)\). Let \(f \in k(x), f \neq 0\). Then, up to a scalar in \(k^\times\) (which does not change any \(\nu_p\)), \(f = \prod_{i = 1}^m (x - a_i)^{n_i}\), where the \(a_i\)'s are distinct and \(n_i \in \Z\). Consider \(\nu_p(f)\).
  \begin{enumerate}
    \item If \(p = a \in \A^1\), i.e.\ \(p \neq \infty\), then a local coordinate \(t\) is \(x - a\) so
    \[
      \nu_a(f) =
      \begin{cases}
        0 & a \notin \{a_1, \dots, a_m\} \\
        n_i & a = a_i
      \end{cases}
    \]
    \item If \(p = \infty\) then \(\frac{1}{x}\) is a coordinate.
    \[
      f(x) = (\frac{1}{x})^{- \sum n_i} \underbrace{\prod (1 - \frac{a_i}{x})^{n_i}}_{\text{regular at } \infty}
    \]
    so \(\nu_\infty(f) = - \sum n_i\).
  \end{enumerate}
\end{eg}

\begin{proof}[Proof of Nakayama]
  Let \(M\) be generated by \(m_1, \dots, m_n\) as an \(R\)-module. As \(JM = M\), there exists \(x_{ij} \in J\) such that \(m_i = \sum x_{ij} m_j\), i.e.
  \[
    \sum_j \underbrace{(\delta_{ij} - x_{ij})}_{(I - X)_{ij}} m_j = 0
  \]
  for all \(i\). Recall that
  \[
    X \cdot \adj X = \det X \cdot I,
  \]
  so multiply the above by \(\adj (I - X)\) to get \(d m_i = 0\) for all \(i\), where
  \[
    d = \det (I - X) = 1 + r
  \]
  for some \(r \in J\) (expanding out the determinant). That is, \((1 + r)M = 0\) as required.

  The second part is immediate by applying the first part to \(M/N\).
\end{proof}

\begin{ex}
  Show
  \[
    \mathcal O_{X, p} / \mathfrak m^n = \mathcal O_{X, p}/(t^n) = k[t]/(t^n).
  \]
  (inverse limit)
\end{ex}

Discussion on projective space having no holes:

\begin{proposition}
  Let \(X\) be a curve, \(U = X \setminus \{\text{finite set of points}\}\) and \(\alpha: U \to Y\) a morphism with \(Y\) a projective variety. Let \(p \in X\) be smooth. Then \(\alpha\) extends to a morphism \(U \cup \{p\} \to Y\).
\end{proposition}
\begin{proof}
  wlog \(Y = \P^m\) (embed \(Y \subseteq \P^m\) as a closed subvariety; since \(Y\) is closed, an extension of \(\alpha\) to a map into \(\P^m\) automatically lands in \(Y\)). In some neighbourhood of \(p\), \(\alpha = [f_0: \cdots: f_m]\) where \(f_i \in k(X)\). Let \(t\) be a local coordinate at \(p\). Let \(n_i = \nu_p(f_i)\), so either \(f_i = 0\) or \(f_i = t^{n_i} u_i\) where \(u_i \in \mathcal O_{X, p}^*\). Let \(N = \min \{n_i: f_i \neq 0\}\) (the \(f_i\) are not all zero), say it is attained at \(n_j\), that is \(N = n_j\) for some \(j\) with \(f_j \neq 0\). Then
  \[
    \alpha = [t^{-N} f_0: \cdots :t^{-N} f_m]
  \]
  but \(f_i t^{-N} \in \mathcal O_{X, p}\) has no pole at \(p\) and \(f_j t^{-N} = u_j\) which does not vanish at \(p\).
\end{proof}

\begin{definition}[rational map]\index{rational map}
  Let \(X, Y\) be arbitrary algebraic varieties. A \emph{rational map} \(\varphi: X \rational Y\) is a pair of a Zariski open \(U \subseteq X\) and a morphism \(\varphi: U \to Y\) (i.e.\ it is a partially defined map).
\end{definition}

Using this terminology, the proposition is saying that a rational map to a projective variety extends to a smooth point.

\begin{eg}
  If \(F_0, \dots, F_m\) are homogeneous polynomials of degree \(d\) in \(X_0, \dots, X_n\) then
  \[
    [X_0: \cdots : X_n] \mapsto [F_0(X): \cdots : F_m(X)]
  \]
  is a rational map \(\P^n \rational \P^m\) defined on the open set where some \(F_i\) is nonzero, i.e.\ on the complement of \(Z(F_0, \dots, F_m)\).
\end{eg}

\begin{definition}
  Two rational maps \(\varphi_1, \varphi_2: X \rational Y\) defined on \(U_1, U_2\) are \emph{equal} if there exists a nonempty Zariski open \(V \subseteq U_1 \cap U_2\) with \(\varphi_1|_V = \varphi_2|_V\). That is, the rational map defined by \(\varphi\) doesn't depend on \(U\) --- we can shrink and think of them as the same rational map.
\end{definition}

\begin{definition}
  \(X, Y\) are \emph{birational} if there exist rational maps \(\varphi: X \rational Y, \psi: Y \rational X\) such that
  \begin{align*}
    \psi \varphi &= \id_X \\
    \varphi \psi &= \id_Y
  \end{align*}
  as rational maps.
\end{definition}

\begin{remark}
  The proposition is false if \(\dim X > 1\) or \(p \in X\) is not smooth. For example
  \begin{align*}
    \A^2 &\rational \P^1 \\
    (x, y) &\mapsto \frac{x - y}{x + y}
  \end{align*}
  cannot be extended to \((0, 0)\). More interestingly, consider
  \[
    [X: Y: Z] \mapsto [YZ: XZ: XY] ``='' [\frac{1}{X} : \frac{1}{Y}: \frac{1}{Z}]
  \]
  which cannot be extended to the three coordinate points. This is the beginning of higher dimensional algebraic geometry.
\end{remark}

\begin{proposition}
  Let \(\alpha: X \to Y\) be a nonconstant morphism of irreducible curves.
  \begin{enumerate}
    \item For all \(q \in Y\), \(\alpha^{-1}(q)\) is a finite set.
    \item \(\alpha\) induces an embedding of fields \(k(Y) \embed k(X)\) such that \([k(X): k(Y)]\) is finite.
  \end{enumerate}
\end{proposition}

\begin{definition}[degree]\index{degree}
  The degree of the extension is called the \emph{degree} of \(\alpha\).
\end{definition}

\begin{proof}\leavevmode
  \begin{enumerate}
    \item \(\alpha^{-1}(q)\) is a closed subset of \(X\). But the only closed subsets are finite sets of points and \(X\). As \(\alpha\) is nonconstant, \(\alpha^{-1}(q) \neq X\), so the result follows.
    \item If \(f \in k(Y)\) then there exists \(U \subseteq Y\) affine such that \(f \in k[U]\). Then \(f \compose \alpha: \alpha^{-1}(U) \to k\) is well-defined in \(k[\alpha^{-1}(U)] \subseteq k(X)\) so we have a map of fields \(k(Y) \to k(X)\). Have \(k \subseteq k(Y) \subseteq k(X)\), where \(k(X)\) and \(k(Y)\) both have transcendence degree \(1\) over \(k\). Thus \(k(X)/k(Y)\) is algebraic, and as \(k(X)\) is a finitely generated field extension of \(k\), it is a finite extension of \(k(Y)\).
  \end{enumerate}
\end{proof}

\begin{eg}
  Consider the morphism
  \begin{align*}
    \alpha: \A^1 &\to \A^1 \\
    z &\mapsto z^r
  \end{align*}
  which induces a field extension \(k(Y) = k(y) \subseteq k(X) = k(x), y \mapsto x^r\) so \(k(x^r) \subseteq k(x)\). The degree of \(\alpha\) is \(r\).
\end{eg}

Let \(\alpha: X \to Y\) be a nonconstant morphism of smooth irreducible projective curves. Then \(\alpha\) is surjective: \(\alpha(X) \subseteq Y\) is a closed subvariety (since \(X\) is projective, images of closed sets are closed by properness) and it is not a finite set of points, so \(\alpha(X) = Y\).

Let \(y \in Y, t \in \mathcal O_{Y, y}\) a local coordinate. Let \(x \in X\) with \(\alpha(x) = y\). Then \(t \compose \alpha \in \mathcal O_{X, x}\), i.e.\ \(t \compose \alpha\) is a function defined in some neighbourhood of \(x \in X\). So we can ask what is the order of vanishing of \(t \compose \alpha\) at \(x\), i.e.\ \(\nu_x(t\alpha)\). Call this the \emph{multiplicity} or \emph{ramification index} of \(\alpha\) at \(x\), denote it \(e_\alpha(x)\).

How to calculate this? Choose a local parameter \(s\) at \(x\), then \(t\alpha = s^n \cdot u\) for some \(n \geq 1, u \in \mathcal O_{X, x}^*\). Then \(n = \nu_x(t\alpha) = e_\alpha(x)\).

\begin{eg}
  Assume \(\ch k \ndivides r\) and consider
  \begin{align*}
    \alpha: \A^1 &\to \A^1 \\
    z &\mapsto z^r
  \end{align*}
  Let's compute \(e_\alpha(x)\). Suppose \(a \in \A^1\). A local parameter at \(\alpha(a) = a^r\) is \(t = x - a^r\). Now
  \[
    t \compose \alpha(x) = x^r - a^r = \prod_{i = 0}^{r - 1} (x - \zeta^ia)
  \]
  where \(\zeta\) is a primitive \(r\)th root of unity (here we used the assumption \(\ch k \ndivides r\)). Hence
  \[
    \nu_a (x^r - a^r) =
    \begin{cases}
      1 & a \neq 0 \\
      r & a = 0
    \end{cases}
  \]
  as \(x - a\) is a local parameter at \(a\). Notice that \(\# \alpha^{-1}(b) = r\) for all \(b \in \A^1\) if we count the points with multiplicity.
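  The same computation extends to \(\P^1\) (an added remark): writing \(\alpha\) also for the extension \(\P^1 \to \P^1\), the local coordinate \(\frac{1}{y}\) at \(\infty\) on the target pulls back to \(\frac{1}{x^r} = \left(\frac{1}{x}\right)^r\), and \(\frac{1}{x}\) is a local parameter at \(\infty\) on the source, so \(e_\alpha(\infty) = r\) as well; every fibre of \(\alpha\) then has \(r\) points counted with multiplicity.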
This is a general phenomenon. \end{eg} \begin{theorem}[finiteness theorem] Let \(\alpha: X \to Y\) be a morphism of smooth projective irreducible curves. Then \begin{enumerate} \item for all \(y \in Y\), \[ \sum_{x \in \alpha^{-1}(y)} e_\alpha(x) = \deg \alpha. \] \item if \(k(X)/k(Y)\) is separable then \(e_\alpha(x) = 1\) for all but finitely many \(x \in X\). \end{enumerate} \end{theorem} \begin{ex} Check the separability assumption in 2 is necessary. \end{ex} \begin{proof} Omitted. \end{proof} \begin{corollary} Let nonzero \(f \in k(X)\) where \(X\) is a smooth projective curve. Then the number of zeros of \(f\) equals to the number of poles of \(f\). More precisely, there are only finitely many zeros and poles, \(\{p \in X: \nu_p(f) \neq 0\}\) is finite and \[ \sum_{p \in X} \nu_p(f) = 0. \] \end{corollary} Cauchy's theorem implies this if \(k = \C\). \begin{proof} \(f \in k(X)\) is a rational map \(X \rational \P^1\). As \(X\) is smooth this extends to a well-defined morphism of algebraic varieties \(X \to \P^1\). Now \(x \in k(\P^1)\) is a local coordinate around \(0 \in \P^1\), so if \(f(p) = 0\) then \(e_f(p) = \nu_p(f)\). \(\frac{1}{x}\) is a local coordinate around \(\infty \in \P^1\) so if \(f(p) = \infty\) then \(e_f(p) = - \nu_p(f)\). If \(f(p) \neq 0\) or \(\infty\) then \(\nu_p(f) = 0\). Thus finiteness theorem says that \[ \deg f = \sum_{p: f(p) = 0} \nu_p(f) = \sum_{p: f(p) = \infty} - \nu_p(f) \] and hence the result. \end{proof} The rest of this course aims to answer the question, given a curve and points on the curve, can we find a function with prescribed order of vanishing at these points? \begin{definition}[divisor]\index{divisor} A \emph{divisor} \(D\) on a curve \(X\) is a formal sum \(D = \sum n_i P_i\) where \(n_i \in \Z, P_i \in X\) and only finitely many nonzero terms. \(\Div (X)\) is the abelian group of all divisors on \(X\), i.e.\ the free abelian group generated by points of \(X\). \end{definition} There is a homomorphism \begin{align*} \deg: \Div(X) &\to \Z \\ \sum n_i P_i &\mapsto \sum n_i \end{align*} If \(f \in k(X)\), define \[ \div(f) = \sum_{p \in X} \nu_p(f) p. \] We just saw that \[ \deg \div (f) = 0. \] Define \(\Div^n(X) = \{D \in \Div(X): \deg D = n\}\). Divisors of the form \(\div (f)\) are called \emph{principal divisors}, denoted \(\div k(X)^*\). We will study \[ \Cl (X) = \operatorname{Pic} (X) = \Div(X) / \div k(X)^*, \] the class group, Picard group or group of line bundles on \(X\). Note that we have an induced homomorphism \(\deg: \Cl (X) \to \Z\). \begin{proposition} If \(X = \P^1\) then \(\Cl (X) = \Z\). \end{proposition} \begin{remark} We will show this characterises \(\P^1\). \end{remark} \begin{proof} For any curve \(X\), \(\deg: \Cl(X) \to \Z\) is surjective so we must show \(\ker (\deg: \Cl(X) \to \Z) = 0\), i.e.\ any degree \(0\) divisor is of the form \(\div(f)\). Let \[ D = \sum_{a \in \A^1} n_a (a) + n_\infty (\infty) \] so \(0 = \deg D = \sum n_a +n_\infty\) implies that \(n_\infty = -\sum n_a\). Consider \(f(x) = \prod_{a \in \A^1} (x - a)^{n_a}\). It is clear that \(\div (f) = D\). \end{proof} Write \([D]\) for the class of \(D \in \Div(X)\) in \(\Cl X\) and \(D \sim D'\) if \([D] = [D']\), i.e.\ if \(D = D' + \div (f)\) for some \(f \in k(X)^*\). If \(D = \sum n_i P_i\), say \(D\) is \emph{effective} if \(n_i \geq 0\) for all \(i\). Write \(D \geq 0\). \begin{eg} Let \(\alpha: X \to Y\) be a morphism. Then \(\sum_{x \in \alpha^{-1}(y)} e_\alpha(x) (x)\) is an effective divisor of degree \(\deg \alpha\). 
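  For instance (an added illustration), taking \(\alpha\) to be the squaring map \(\P^1 \to \P^1\), \(z \mapsto z^2\), with \(\ch k \neq 2\): the divisor over a point \(a \neq 0, \infty\) is \((\sqrt a) + (-\sqrt a)\), while the divisors over \(0\) and \(\infty\) are \(2\,(0)\) and \(2\,(\infty)\); each has degree \(2 = \deg \alpha\).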
\end{eg}

Suppose \(k = \cl k\) and let \(X\) be a smooth irreducible projective curve. Let \(D = \sum_{i = 1}^r n_i p_i\) be a divisor. Let
\begin{align*}
  L(D)
  &= \{f \in k(X)^*: D + \div (f) \geq 0\} \cup \{0\} \\
  &= \{f \in k(X)^*: \nu_{p_i}(f) \geq -n_i, \nu_p(f) \geq 0 \text{ for } p \notin \{p_1, \dots, p_r\}\} \cup \{0\}
\end{align*}
As \(\nu_p(f + g) \geq \min\{\nu_p(f), \nu_p(g)\}\), \(L(D)\) is a vector space (usual notation: \(\Gamma(X, \mathcal O(D))\)).

\begin{eg}\leavevmode
  \begin{enumerate}
    \item \(L(n p) = \{f \in k(X) \text{ with a pole of order \(\leq n\) at \(p\) and no other poles}\}\)
    \item If \(X = \P^1\), \(L(n(\infty))\) is the set of polynomials of degree \(\leq n\). \(L(n(\infty) - (a))\) where \(a \in \A^1\) equals \((x - a) \cdot \{\text{polynomials of deg } \leq n - 1\}\).
  \end{enumerate}
\end{eg}

\begin{lemma}\leavevmode
  \begin{enumerate}
    \item If \(\deg D < 0\) then \(L(D) = 0\).
    \item \(L(0) = k\).
    \item If \(D \sim D'\), i.e.\ \(D = D' + \div(g)\) where \(g \in k(X)^*\) then
    \begin{align*}
      L(D) &\to L(D') \\
      f &\mapsto fg
    \end{align*}
    is an isomorphism.
    \item If \(L(D) \neq 0\) then there exists \(D' \geq 0\) with \(D' \sim D\).
    \item \(\dim L(D) \leq \deg D + 1\) if \(\deg D \geq 0\). Indeed,
    \[
      \dim L(D) \leq \dim (L(D - p)) + 1
    \]
    for all \(p \in X\).
  \end{enumerate}
\end{lemma}
\begin{proof}\leavevmode
  \begin{enumerate}
    \item If \(0 \neq f \in L(D)\) then \(\div(f) + D \geq 0\), so \(0 = \deg \div(f) \geq -\deg D\), i.e.\ \(\deg D \geq 0\).
    \item Exercise.
    \item As \(\div (fg) = \div(f) + \div(g)\) since \(\nu_p(fg) = \nu_p(f) + \nu_p(g)\).
    \item Obvious from definition: take \(D' = D + \div(f)\) for any \(0 \neq f \in L(D)\).
    \item Induct on \(\deg D\). If \(\deg D < 0\) then \(L(D) = 0\) by 1. Pick \(p \notin \{p_1, \dots, p_r\}\). Consider the map \(\lambda: L(D) \to k, f \mapsto f(p)\). This is well-defined as \(f\) has no pole at \(p\). Then \(f \in \ker \lambda\) if and only if \(f \in L(D)\) and \(\nu_p(f) \geq 1\), if and only if \(f \in L(D - p)\). Note that \(\lambda\) need not be surjective. As \(\ker \lambda = L(D - p)\), induction gives
    \[
      \dim L(D) \leq 1 + \dim L(D - p) \leq 1 + (\deg D - 1) + 1 = \deg D + 1.
    \]
    More generally, if \(D = n_p \cdot p + \sum_{q \neq p} n_q \cdot q\) then define
    \begin{align*}
      \lambda: L(D) &\to k \\
      f &\mapsto (t^{n_p} f)(p)
    \end{align*}
    if \(t\) is a local coordinate at \(p\).
  \end{enumerate}
\end{proof}

\begin{definition}
  \[
    \ell(D) = \dim L(D).
  \]
\end{definition}

\begin{eg}
  If \(X = \P^1\) and \(\deg D = n \geq 0\) then \(\ell(D) = \deg D + 1\).
  \begin{proof}
    By part 3 of the lemma, this only depends on \([D] \in \Cl (\P^1) \cong \Z\), so may as well take \(D = n (\infty)\) and we have \(\ell(n (\infty)) = n + 1\).
  \end{proof}
\end{eg}

\begin{eg}
  Let \(E^0 = \{(x, y) \in \A^2: y^2 = (x - \lambda_1) (x - \lambda_2) (x - \lambda_3)\}\) where \(\lambda_i\)'s are distinct and \(\lambda_1 \lambda_2 \lambda_3 \neq 0\). Let \(E\) be the plane curve contained in \(\P^2\) defined by this, i.e.\ the closure of \(E^0\) in \(\P^2\). (Recall that \(E\) is the projective variety given by
  \[
    ZY^2 = (X - \lambda_1Z)(X - \lambda_2Z)(X - \lambda_3Z).
  \]
  It has an extra point when \(Z = 0\), which implies \(X = 0, Y \neq 0\), so a unique point at \(\infty\), \(P_\infty = [0:1:0]\).)

  We will compute \(L(nP_\infty)\) for \(n\) small. Start by computing \(\div (x), \div (y)\). \(x = 0\) when \(y = \pm \sqrt{-\lambda_1\lambda_2\lambda_3} = \pm c\). \(x = \infty\) at \(P_\infty\).
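  Here and below write
  \[
    f(x, y) = y^2 - (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)
  \]
  for the defining polynomial (a notational convention made explicit here; the notes use \(\partial f/\partial x\) and \(\partial f/\partial y\) without naming \(f\)). In particular \(\frac{\partial f}{\partial y} = 2y\), which is nonzero at \((0, \pm c)\) provided \(\ch k \neq 2\).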
Note that at \((x, y) = (0, \pm c)\), \(\frac{\partial f}{\partial y} \neq 0\) so \(x\) is a local parameter at these points and so \[ \div (x) = a P_\infty + \underbrace{[0:c:1] + [0:-c:1]}_{\text{vanishes of order \(1\) at these points}} \] To find \(a\) we can either take a local coordinate, or use the fact that \(\deg (\div(x)) = 0\) so \(a = -2\). Similarly \[ \div (y) = -3 P_\infty + \sum_{i = 1}^3 [\lambda_i:0:1] \] as \(\frac{\partial f}{\partial x} \neq 0\) at \([\lambda_i:0:1]\) so \(y\) is a local parameter there. Thus \(x \in L(2P_\infty), y \in L(3P_\infty)\). This is similar to computation of Weierstrass \(\wp\)-function. Claim that \(L(P_\infty) = k\). Granting the claim, lemma 5 implies that \(\dim L(nP_\infty) \leq n\), but \begin{align*} 1, x &\in L(2 P_\infty) \\ 1, x, y &\in L(3 P_\infty) \\ 1, x, y, x^2 &\in L(4 P_\infty) \\ 1, x, y, x^2, xy &\in L(5 P_\infty) \end{align*} Note that all these are linearly independent. But \[ 1, x, y, x^2, xy, x^3, y^2 \in L(6 P_\infty), \] which are \emph{not} linearly indepedent as \[ y^2 = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3). \] \begin{ex} \(\{x^i, x^iy: i \geq 0\}\) are linearly independent in \(k(X)\) and hence \(\dim L(n P_\infty) = n\) for all \(n \geq 1\). \end{ex} Compare this with \(X = \P^1\), \(\dim L(n \infty) = n + 1\) when \(n \geq 0\). Note that \(\lambda_i\)'s being distinct is essential as it ensures the curve is smooth. On the other hand, \(\lambda_1\lambda_2\lambda_3 \neq 0\) is just a convenience (without which \(x\) vanishes at \([0:c:1]\) with order \(2\)). \begin{proof}[Proof of claim] If \(L(P_\infty) \neq k\) then \(L(P_\infty) = k + kt\) for some function \(t \in k(E)\). Then \(t^n \in L(n P_\infty) \setminus L((n - 1) P_\infty)\) so \(1, t, \dots, t^n\) are a basis of \(L(n P_\infty)\). But \(x \in L(2 P_\infty), y \in L(3P_\infty)\) so exist \(g_2(t), g_3(t)\) polynomials of degree \(2\) and \(3\) such that \(x = g_2(t), y = g_3(t)\), so \(x = (a t + b)^2 + d\), \(a \neq 0, b, d \in k\). By a change of variable (replacing \(t\) by \(a t + b\)), the defining equation \[ y^2 = \prod (x - \lambda_i) \] becomes \[ g_3(t)^2 = \prod (t^2 - (\lambda_i - d)). \] But \(\lambda_i\)'s distinct implies that \(\lambda_i - d\) distinct so RHS is not a square in \(k(t)\), contradiction. \end{proof} \end{eg} Suppose \(X\) is a smooth projective curve, \(D \in \Div (X)\) with \(\ell(D) \geq 1\). Set \(m = \ell(D) - 1\). Choose a basis \(f_0, \dots, f_m\) of \(L(D)\). We get a rational map \begin{align*} X &\rational \P^m = \P(L(D)^*) \\ p &\mapsto [f_0(p): \cdots : f_m(p)] \end{align*} which, as \(X\) is smooth, extends to a morphism \(\alpha_D: X \to \P^m\). Moreover if \(D \sim D'\), i.e.\ \(D' = D + \div (g)\) then \(f_0g, \dots, f_mg\) is a basis of \(L(D')\) by part 3 of the lemma and \[ [(gf_0)(p) : \cdots : (gf_n)(p)] = [f_0(p): \cdots: f_m(p)] \] so we get the same map to projective space. Thus the map \(\alpha_D: X \to \P^m\) depends only on \([D] \in \Cl(X)\). \begin{eg} \(X = \P^1, D = n \infty, \ell(D) = n + 1\). Choose basis \(1, t, \dots, t^n\) of \(L(D)\). Have \[ \alpha_D(t) = [1: t: \cdots :t^n]: \P^1 \to \P^n \] Write \(t = \frac{x_1}{x_0}\), \[ \alpha_D[x_0: x_1] = [1: \frac{x_1}{x_0}: \cdots : \left( \frac{x_1}{x_0} \right)^n] = [x_0^n:x_0^{n - 1} x_1: \cdots : x_1^n] \] \end{eg} \begin{definition}[embedding]\index{embedding} \(\alpha: X \to Y\) is an \emph{embedding} of \(X\) if it is a morphism which induces an isomorphism between \(\alpha(X)\) and \(X\). 
\end{definition}

\begin{ex}\leavevmode
  \begin{enumerate}
    \item Show \(\alpha_{n \infty}: \P^1 \to \P^n\) is an embedding if \(n \geq 1\).
    \item Show that the map
    \begin{align*}
      \alpha: \P^1 &\to \P^2 \\
      t &\mapsto [1: t^2 : t^3]
    \end{align*}
    is \emph{not} an embedding.
  \end{enumerate}
\end{ex}

\begin{ex}
  Let
  \[
    X = E = \cl{\{(x, y): y^2 = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)\}} \subseteq \P^2.
  \]
  Show
  \begin{align*}
    \alpha_{P_\infty}: E &\to \P^0 = \text{pt} \\
    \alpha_{2 P_\infty}: E &\to \P^1 = \P(\langle 1, x \rangle^*) \\
    (x, y) &\mapsto x \\
    \alpha_{3 P_\infty}: E &\to \P^2 = \P(\langle 1, x, y \rangle^*) \\
    (x, y) &\mapsto (x, y)
  \end{align*}
\end{ex}

\begin{theorem}[embedding criterion]
  Let \(X\) be a smooth projective curve and \(D \in \Div(X)\). Then \(\alpha_D: X \to \P^m\) is an embedding if and only if for all \(p, q \in X\),
  \[
    \ell(D - p - q) = \ell(D) - 2.
  \]
\end{theorem}

Intuition: for \(p \neq q\) the condition ensures that \(\alpha_D\) is injective; for \(p = q\) it gives a criterion ruling out a singular point of the image.

\begin{proof}
  Omitted for now. Instead, we will define the degree of a curve in \(\P^m\).
\end{proof}

When this happens, \(X\) is a curve in \(\P^m\) of \emph{degree} \(\deg D\).

Let \(X \subseteq \P^m = \P V\), where \(\dim V = m + 1\), be a smooth curve. Let \(H \subseteq \P^m\) be a hyperplane such that \(X \nsubseteq H\) (otherwise replace \(\P^m\) by \(H \cong \P^{m - 1}\)). Define
\[
  [H \cap X] \in \Cl(X)
\]
as \(H \cap X\) ``counted with multiplicity''. (picture)

There exists a linear function \(x_0 \in V^*\) such that \(H = \{p: x_0(p) = 0\}\). Write this as \(x_0 = 0\). \(x_0\) is not a well-defined function in \(k(X)\). To get a rational function on \(X\), pick \(x_1 \in V^*\) such that \(x_1(p) \neq 0\). Now \(\frac{x_0}{x_1} \in k(X)\) and \(\nu_p(\frac{x_0}{x_1})\) is defined and we set it to be \(n_p\). If \(x_1' \in V^*\) is another linear function with \(x_1'(p) \neq 0\) then
\[
  \nu_p(\frac{x_0}{x_1'}) = \nu_p(\frac{x_0}{x_1}) + \underbrace{\nu_p(\frac{x_1}{x_1'})}_{= 0}
\]
so \(n_p\) is independent of the choice of \(x_1\). We thus define
\[
  [H \cap X] = \sum_{p \in H \cap X} n_p p \in \Div(X).
\]
Notice that \(n_p \geq 0\) for all \(p\), i.e.\ \([H \cap X] \geq 0\). Moreover, if we picked another hyperplane \(H' = \{x_0' = 0\}\) with \(X \nsubseteq H'\) then
\[
  \nu_p(\frac{x_0}{x_1}) = \nu_p(\frac{x_0'}{x_1}) + \nu_p(\frac{x_0}{x_0'})
\]
hence
\[
  [H \cap X] = [H' \cap X] + \div (\frac{x_0}{x_0'}),
\]
so the image in the class group is independent of the choice of \(H\). Thus we define

\begin{definition}
  \[
    \deg X = \deg [H \cap X]
  \]
  for any hyperplane \(H\) not containing \(X\).
\end{definition}

\begin{theorem}
  Let \(F(X_0, X_1, X_2)\) be a homogeneous polynomial of degree \(d\) and suppose \(Z(F) \subseteq \P^2\) is smooth irreducible. Then
  \[
    \deg Z(F) = d.
  \]
\end{theorem}
\begin{proof}
  Linearly change coordinates if necessary so \([1: 0: 0] \notin Z(F)\). Then
  \[
    F = \sum_{i + j + k = d} a_{ijk} X_0^i X_1^j X_2^k
  \]
  and \(F[1: 0: 0] \neq 0\) implies that \(a_{d, 0, 0} \neq 0\). Thus set \(x = \frac{X_0}{X_1}, z = \frac{X_2}{X_1}\) and
  \[
    f(x, z) = \frac{1}{a_{d, 0, 0}} F(x, 1, z) = x^d + a_{d - 1} x^{d - 1} + \cdots + a_0
  \]
  where \(a_i = a_i(z)\) is a polynomial in \(z\) of degree \(\leq d - i\). \(f(x, z)\) is a monic polynomial of degree \(d\) in the variable \(x\). In the picture (omitted), \(z = 0\) is the hyperplane \(H\), and we are computing \(H \cap Z(F)\) using the chart \(X_1 \neq 0\).
We will now compute \(\nu_p(z)\) for all \(p \in \mathcal X_0\) where \(\mathcal X_0 = Z(F) \cap \{X_1 \neq 0 \} = \{(x, z): f(x, z) = 0\}\). Note the last expression is affine. But \[ k[\mathcal X_0]/(z) = k[x, z]/(z, f(x, z)) = k[x]/(f(x, 0)). \] Now write \[ f(x, 0) = (x - \alpha_1)^{n_1} \cdots (x - \alpha_r)^{n_r} \] with \(\alpha_i\)'s distinct and \(\sum n_i = d\) and notice that points \((\alpha_i, 0)\) are exactly the intersections \(\mathcal X_0 \cap \{z = 0\}\). But Chinese remainder theorem says \[ k[x]/(x - \alpha_1)^{n_1} \cdots (x - \alpha_r)^{n_r} \cong \bigoplus k[x]/(x - \alpha_i)^{n_i}. \] Let \(\mathcal X = Z(F)\). Claim \[ \nu_p(z) = \dim \mathcal O_{\mathcal X, p}/(z) = \dim \mathcal O_{\mathcal X_0, p} /(z) \] by definition: as if \(t\) is a local parameter at \(p\), \(z = t^n \cdot u\) where \(n = \nu_p(z)\), and we've seen \[ \mathcal O_{\mathcal X_0, p}/(z) = k[t]/(t^n) \] which has dimension \(n\). So \[ k[\mathcal X_0]/(z) \cong \bigoplus_{p \in \mathcal X \cap H} \mathcal O_{\mathcal X, p}/(z) \cong \bigoplus_{i = 1}^r \mathcal O_{\mathcal X_0, (\alpha_i, 0)} /(z). \] Have \[ \dim k[\mathcal X_0]/(z) = \sum n_i = \sum \nu_{p_i}(z) = d. \] \end{proof} \begin{remark}[quadrics] \(x^2 + y^2 = 1\), \(xy = 1\) and \(y = x^2\) are three type of curves over \(\R\), and two types over \(\C\). But they all correspond to curves in \(\P^2\): \(XY = Z^2\) has two points \([1:0:0]\) and \([0:1:0]\) at infinity while for \(YZ = X^2\), there is one point (with multiplicity 2) at infinity. There is only one family of quadric (degree \(2\) curve) in \(\P^2\), isomorphic to \(\P^1\). \end{remark} \begin{corollary}[Bezout's theorem]\index{Bezout's theorem} If \(X = Z(F), W = Z(G)\) with \(\deg F = d, \deg G = d'\) are two curves in \(\P^2\) such that \(X \nsubseteq W, W \nsubseteq X\) then they intersect in \(\leq dd'\) points. \end{corollary} \begin{proof} Given a curve \(\mathcal X\) in \(\P^m\) and \(G \in k[X_0, \dots, X_n]\) homogeneous of degree \(d'\) such that \(X \nsubseteq W = Z(G)\). Define \[ [\mathcal X \cap W] = \sum_{p \in \mathcal X \cap W} m_p p \] where \(m_p = \nu_p(G/X_1^{d'})\) for any linear function \(X_1\) such that \(X_1(p) \neq 0\). As \[ \frac{G}{X_1^{d'}} = \left( \frac{X_0}{X_1} \right)^{d'} \cdot \left( \frac{G}{X_0^{d'}} \right) \] but \(\nu_p(X_0/X_1)\) is the order of vanishing of \(\mathcal X\) along \(X_0\) so \[ [\mathcal X \cap W] = d'[\mathcal X \cap H] + \div \frac{G}{X_0^{d'}} \] hence \[ [\mathcal X \cap W] = d' [\mathcal X \cap H] \in \Cl(\mathcal X) \] with \(\deg [\mathcal X \cap W] = d' \deg [\mathcal X \cap H]\). So if \(m = 2\) and \(W, \mathcal X \subseteq \P^2\), \(\deg[\mathcal X \cap H] = d\), by the theorem. Hence \(\# (\mathcal X \cap W) \leq dd'\). \end{proof} \section{Differentials} Let \(B\) be a ring, \(A \subseteq B\) a subring. \begin{definition}[Kähler differential]\index{Kähler differential}\index{cotangent bundle} The \emph{Kähler differential}, \emph{\(1\)-form} or \emph{relative cotangent bundle} \(\Omega^1_{B/A}\) is the free \(B\)-module generated by \(B\), which we denote by \(\d b\) for \(b \in B\), quotiented by the submodule generated by \begin{align*} &\d(fg) - f \d g - g \d f \\ &\d(b + b') - \d b - \d b' \\ &\d a \end{align*} where \(b, b', f, g \in B, a \in A\). \end{definition} \begin{ex}\leavevmode \begin{enumerate} \item Let \(X\) be an affine algebraic variety over \(k\), \(x \in X\) and \(\ev_x: k[X] \to k\) the corresponding \(k\)-algebra homomorphism. 
    Show that
    \[
      \Hom_{k[X]} (\Omega^1_{k[X]/k}, k) \cong \Der(k[X], \ev_x)
    \]
    where on LHS \(k\) is regarded as a \(k[X]\)-module via \(\ev_x\).
    \item More generally, for any \(B\)-module \(M\),
    \[
      \Hom_B(\Omega^1_{B/A}, M) \cong \{A\text{-linear derivations } B \to M\}.
    \]
    Hence \(\Omega^1_{k[X]/k}\) is dual to the tangent bundle, hence called the \emph{cotangent bundle}.
  \end{enumerate}
\end{ex}

\begin{definition}[rational differential]\index{rational differential}
  The space of \emph{rational differentials} on \(X\) is defined to be \(\Omega^1_{k(X)/k}\).
\end{definition}

If you prefer the language of complex geometry, this is the space of meromorphic differential forms. Usual rules of calculus apply, so for example
\[
  0 = \d (1) = \d (\frac{g}{g}) = \frac{1}{g} \d g + g \d \frac{1}{g}
\]
by Leibniz so
\[
  \d \frac{1}{g} = -\frac{1}{g^2} \d g.
\]
Similarly
\[
  \d\left(\frac{f}{g}\right) = \frac{g \d f - f \d g}{g^2}.
\]

\begin{corollary}\leavevmode
  \begin{enumerate}
    \item \(\Omega^1_{k(x)/k} = \Omega^1_{k(\P^1)/k} = k(x) \d x\) where \(x\) is transcendental over \(k\).
    \item If \(L \supseteq k\) is a separable algebraic extension then \(\Omega^1_{L/k} = 0\).
  \end{enumerate}
\end{corollary}
\begin{proof}
  If \(\alpha \in L\) then by separability there exists a monic \(f(z) \in k[z]\) such that \(f(\alpha) = 0\) and \(f'(\alpha) \neq 0\). Differentiate the relation \(f(\alpha) = 0\): since the coefficients of \(f\) lie in \(k\), this gives \(f'(\alpha) \d \alpha = 0\), and \(f'(\alpha) \neq 0\) so \(\d \alpha = 0\).
\end{proof}

Combining these, we get

\begin{lemma}
  If \(X\) is a curve, \(p \in X\) smooth and \(t\) a local parameter at \(p\) then
  \[
    \Omega^1_{k(X)/k} = k(X) \d t.
  \]
\end{lemma}
\begin{proof}
  If \(t\) is a local parameter then the extension \(k(X)/k(t)\) is algebraic and separable (the first one is obvious by transcendence degree and the second requires proof, but we omit it). Thus if \(\alpha \in k(X)\) there exists \(f \in k(t)[z]\) such that \(f(\alpha) = 0, \frac{\partial f}{\partial z}(\alpha) \neq 0\). Write \(f(z) = \sum f_i(t) z^i\) where \(f_i(t) \in k(t)\). Differentiate,
  \[
    0 = \d 0 = \d f(\alpha) = \d (\sum f_i(t) \alpha^i) = \sum (f_i'(t) \alpha^i) \d t + \underbrace{\sum i f_i(t) \alpha^{i - 1}}_{= \frac{\partial f}{\partial z}(\alpha)} \d \alpha
  \]
  by linearity and Leibniz rule. We get
  \[
    \d \alpha = \frac{- \sum f_i'(t) \alpha^i}{(\p f/\p z)(\alpha)} \d t \in k(X) \d t.
  \]
\end{proof}

\begin{definition}[regular]\index{differential form!regular}
  If \(\omega \in \Omega^1_{k(X)/k}\), \(p \in X\) smooth and \(t\) a local parameter at \(p\), write \(\omega = f \d t\) for some \(f \in k(X)\) (possible by the lemma). Define the order of vanishing of \(\omega\) at \(p\) to be
  \[
    \nu_p(\omega) = \nu_p(f)
  \]
  and the divisor of \(\omega\) to be
  \[
    \div(\omega) = \sum_p \nu_p(\omega) p.
  \]
  Say \(\omega\) is \emph{regular} at \(p\) if \(\nu_p(\omega) \geq 0\).
\end{definition}

Need to show that \(\nu_p(\omega)\) is independent of the choice of local parameter \(t\).

\begin{lemma}\leavevmode
  \begin{enumerate}
    \item If \(f \in \mathcal O_{X, p}\) then \(\nu_p(\d f) \geq 0\).
    \item If \(t_1\) is any local coordinate at \(p\) then \(\nu_p(\d t_1) = 0\). In particular, \(\nu_p(\omega)\) is well-defined and
    \[
      \nu_p(f \d t_1) = \nu_p(f) + \nu_p(\d t_1).
    \]
    \item If \(f \in k(X)\) has \(\nu_p(f) = n < 0\) then \(\nu_p(\d f) = \nu_p(f) - 1\) if \(\ch k \ndivides n\).
  \end{enumerate}
\end{lemma}
\begin{proof}\leavevmode
  \begin{enumerate}
    \item Choose an affine open neighbourhood \(X_0\) of \(p\) in \(X\), so \(p \in X_0 \subseteq \A^N\).
    Then \(f \in \mathcal O_{X, p}\) means that \(f = \frac{g}{h}\) where \(g, h \in k[x_1, \dots, x_N]\), \(h(p) \neq 0\). So
    \[
      \d f = \frac{h \d g - g \d h}{h^2} = \sum_{i = 1}^N \gamma_i \d x_i
    \]
    for some \(\gamma_i \in \mathcal O_{X, p}\), that is \(\nu_p(\gamma_i) \geq 0\). Hence
    \[
      \nu_p(\d f) \geq \min \{\nu_p(\d x_i): i = 1, \dots, N\}.
    \]
    Hence \(\{\nu_p(\d f): f \in \mathcal O_{X, p}\}\) is bounded below. Choose \(f \in \mathcal O_{X, p}\) with \(\nu_p(\d f)\) minimal. Write \(f - f(p) = t f_1\), \(f_1 \in \mathcal O_{X, p}\). Hence
    \begin{equation*}
      \label{eqn:differential}
      \d f = \d(f - f(p)) = f_1 \d t + t \d f_1.
      \tag{\ast}
    \end{equation*}
    If \(\nu_p(\d f) < 0\) then, as \(\nu_p(f_1 \d t) = \nu_p(f_1) \geq 0\) by definition, \eqref{eqn:differential} implies that
    \[
      \nu_p(\d f_1) = \nu_p(\d f) - 1,
    \]
    contradicting minimality of \(\nu_p(\d f)\). Hence \(\nu_p(\d f) \geq 0\) for all \(f \in \mathcal O_{X, p}\).
    \item \(t_1 = u t\) for some \(u \in \mathcal O_{X, p}^*\) and hence
    \[
      \d t_1 = u \d t + t \d u.
    \]
    By 1, \(\d u = g \d t\) with \(\nu_p(g) \geq 0\). So \(\d t_1 = (u + tg) \d t\) and \(\nu_p(u + tg) = \nu_p(u) = 0\).
    \item \(f = t^n u\) then \(\d f = n t^{n - 1} u \d t + t^n \d u\), and 1 and 2 imply the result (the first term has order exactly \(n - 1\) as \(\ch k \ndivides n\), the second has order \(\geq n\)).
  \end{enumerate}
\end{proof}

\begin{proposition}
  Let \(\omega \in \Omega^1_{k(X)/k}\). Then \(\nu_p(\omega) = 0\) for all but finitely many \(p \in X\).
\end{proposition}
\begin{proof}
  Choose \(t \in k(X)\) such that \(k(X)/k(t)\) is separable algebraic (for example \(t\) is a local parameter at some point \(p\), or \(t\) is obtained from Noether normalisation). Then \(t\) defines a rational map \(\alpha = [1: t]: X \rational \P^1\), hence extends to a map \(\alpha: X \to \P^1\) as \(X\) is smooth projective. The finiteness theorem says that there are only finitely many points \(p\) with \(\alpha(p) = \infty\) or with \(e_\alpha(p) > 1\). For any other \(p \in X\), \(t - t(p)\) is a local coordinate at \(p\), and so \(\nu_p(\d t) = 0\) for all but finitely many \(p\). Thus the proposition holds if \(\omega = \d t\). For arbitrary \(\omega \in \Omega^1_{k(X)/k}\), \(\omega = f \d t\) and
  \[
    \nu_p(f \d t) = \nu_p(f) + \nu_p(\d t)
  \]
  and \(\nu_p(f) = 0\) for all but finitely many \(p\), proving the result.
\end{proof}

\begin{definition}
  The divisor of a nonzero Kähler differential \(\omega\) is defined to be
  \[
    \div \omega = \sum_{p \in X} \nu_p(\omega) p \in \Div(X).
  \]
\end{definition}

We have just shown that this is a finite sum and indeed well-defined. As \(\div (f\omega) = \div f + \div \omega\), and any two nonzero rational differentials differ by a factor in \(k(X)^*\), the class of \(\div(\omega)\) in \(\Cl(X)\) is \emph{independent} of \(\omega\). This is called the \emph{canonical class}\index{canonical class} \(\mathcal K_X = [\div \omega]\) for any \(0 \neq \omega \in \Omega^1_{k(X)/k}\).

Pick \(0 \neq \omega_0 \in \Omega^1_{k(X)/k}\). Recall that
\begin{align*}
  L(\mathcal K_X)
  &= L(\div(\omega_0)) \\
  &= \{f \in k(X): \div(\omega_0) + \div (f) \geq 0\} \\
  &= \{f \in k(X): \div (f \omega_0) \geq 0\} \\
  &= \{\omega \in \Omega^1_{k(X)/k}: \div \omega \geq 0\}
\end{align*}

\begin{definition}[genus]\index{genus}
  We define the \emph{genus} of \(X\) to be
  \[
    \ell(\mathcal K_X) = \dim L(\mathcal K_X).
  \]
\end{definition}

\begin{eg}\leavevmode
  \begin{enumerate}
    \item Let \(X = \P^1\). Let \(x\) be a coordinate on \(\P^1\) and choose \(\omega = \d x\). Must compute \(\nu_p(\d x)\) for \(p \in \P^1\). If \(p \in \A^1\) then \(x - p\) is a local coordinate and \(\d (x - p) = \d x\) has \(\nu_p(\d x) = 0\).
If \(p = \infty\) then \(t = \frac{1}{x}\) is a local coordinate and \[ \d x = \d \left( \frac{1}{t} \right) = -\frac{1}{t^2} \d t \] so \(\nu_\infty(\d x) = -2\) so \(\div(\d x) = -2 \infty = \mathcal K_X\). Thus \(\deg \mathcal K_X = -2\). Then by a lemma \(\ell(\mathcal K_X) = 0\) so \(\P^1\) has genus \(0\). \item \(y^2 = f(x) = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)\) where \(\lambda_i\)'s distinct. This gives \(X = E \subseteq \P^2\) with a unique point at \(\infty\), \(P_\infty = [0:1:0]\). Take derivative, \(2 y \d y = f'(x) \d x\). Let's consider the 1-form \[ \omega = \frac{\d x}{y} = \frac{2 \d y}{f'(x)} \in \Omega^1_{k(E)/k}. \] Need to compute \(\div \omega\). Given \(p = (x_0, y_0) \in \A^2\), if \(f'(x_0) \neq 0\) then \(y - y_0\) is a local coordinate so \[ \omega = \frac{2}{f'(x_0)} \d y = \frac{2}{f'(x_0)} \d (y - y_0) \] and thus \(\nu_p(\omega) = 0\). If \(y_0 = \frac{1}{2} \frac{\partial }{\partial y}(y^2 - f(x))|_p \neq 0\) then \(x - x_0\) is a local parameter so \[ \omega = \frac{1}{y} \d (x - x_0) \] has \(\nu_p(\omega) = 0\). Since \(\lambda_i\)'s are distinct and the curve is smooth at \(p\), at least one of these happens. At \(p = P_\infty\), have \[ Y^2Z = (X - \lambda_1Z)(X - \lambda_2Z)(X - \lambda_3Z). \] Consider the chart \(Y \neq 0\). Write \begin{align*} u &= \frac{x}{y} = \frac{X}{Y} \\ v &= \frac{1}{y} = \frac{Z}{Y} \end{align*} In this chart, \(E\) becomes \(\{(u, v): g(u, v) = 0\}\) where \[ g(u, v) = v - (u - \lambda_1v)(u - \lambda_2v)(u - \lambda_3v). \] In this chart \(P_\infty\) corresponds to \((u, v) = (0, 0)\). As \[ \frac{\partial g}{\partial v}\Big|_{(0, 0)} = 1 \neq 0 \] \(u\) is a local parameter at \((0, 0)\). Thus \(\nu_{P_\infty}(u) = 1\) and \(\nu_{P_\infty}(v) \geq 1\). Here is an ad hoc way of computing it: \[ \nu_{P_\infty}(u - \lambda_i v) \geq 1 \] so \(\nu_{P_\infty}(v) \geq 3\). Thus \(\nu_{P_\infty}(u - \lambda_i v) = 1\) and \(\nu_{P_\infty}(v) = 3\). But then \(y = \frac{1}{v}\) so \(\nu_{P_\infty}(y) = -3\). So \[ \nu_{P_\infty}(x) = \nu_{P_\infty}(ux) = 1 - 3 = -2. \] So \[ \nu_{P_\infty}(\d x) = -2 - 1 = -3 \] if \(\ch k \neq 2\) by lemma 3. Thus \[ \nu_{P_\infty} \left(\frac{\d x}{y}\right) = -3 - (-3) = 0 \] so \(\div \omega = 0\), i.e.\ \(\mathcal K_X = 0\). Thus \(g = \ell(0) = 1\). \(E\) has genus \(1\). \end{enumerate} \end{eg} \begin{definition}[elliptic curve]\index{elliptic curve} A curve of genus \(1\) is called an \emph{elliptic curve}. \end{definition} We showed \(y^2 = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)\) has genus \(1\). \begin{proposition} Let \(\mathcal X = Z(F) \subseteq \P^2\) be an irreducible smooth projective curve, \(F = F(X, Y, Z)\) homogeneous of degree \(d\). Then \[ \mathcal K_X = (d - 3) [\mathcal X \cap H] \] where \(\mathcal X \cap H\) is the divisor of the intersection of any line \(H\) (i.e.\ hyperplane) with \(\mathcal X\). In particular \[ \deg \mathcal K_X = d(d - 3). \] \end{proposition} \begin{proof} Let \(x = \frac{X}{Z}, y = \frac{Y}{Z}\) and \(f(x, y) = F(x, y, 1)\) be the equation of \(\mathcal X\) on \(\A^2\) which is the chart \(Z \neq 0\) in \(\P^2\). Differentiate, \[ \d f = \frac{\partial f}{\partial x} \d x + \frac{\partial f}{\partial y} \d y = 0 \in \Omega_{k(X)/k}^1 \] as \(f = 0 \in k(X)\). Take \[ \omega = \frac{\d x}{\partial f/\partial y} = - \frac{\d y}{\partial f/\partial x} \] and need to compute \(\nu_p(\omega)\) for all \(p \in \mathcal X\). Let \(p = (x_0, y_0) \in \A^2 \cap \mathcal X\). 
If \(\frac{\partial f}{\partial y}(p) \neq 0\) then \(x - x_0\) is a local coordinate at \(p\) so \(\omega = \frac{\d(x - x_0)}{\p f/\p y}\) has \(\nu_p(\omega) = 0\). If \(\frac{\partial f}{\partial x}(p) \neq 0\) then \(y - y_0\) is a local coordinate so \(\nu_p(\omega) = 0\). As \(\mathcal X\) is smooth by hypothesis, at least one of this is nonzero, so \(\nu_p(\omega) = 0\) for all \(p \in \A^2 \cap \mathcal X\), i.e.\ with the choice of \(\omega\), all contributions occur at the line at \(\infty\), which is \(z = 0\). If necessary, change coordinates on the \(z = 0\) line so \([1:0:0] \notin \mathcal X\). Then \(\mathcal X \cap \{z = 0\}\) is contained in the chart \(Y \neq 0\). (the only case in which we can't do this operation is \(\{z = 0\} \subseteq \mathcal X\), but in this case \(X\) is just \(\P^1\)). Let \begin{align*} u &= \frac{Z}{Y} = \frac{1}{y} \\ v &= \frac{X}{Y} = \frac{x}{y} \end{align*} so \(u, v\) are coordinates on \(Y \neq 0\) chart. Now the equation of \(\mathcal X\) is given by \[ g(u, v) = F(v, 1, u) = F(\frac{x}{y}, 1, \frac{1}{y}) = y^{-d} F(x, y, 1) = y^{-d} f(x, y), \] that is \[ f(x, y) = y^d g(u, v). \] Now differentiate and use chain rule, \[ \frac{\partial f}{\partial x} = y^d \left( \frac{\partial g}{\partial v} \frac{\partial v}{\partial x} + \frac{\partial g}{\partial u} \underbrace{\frac{\partial u}{\partial x}}_{= 0} \right) = y^{d - 1} \frac{\partial g}{\partial v} \] Also \(\d y = - \frac{1}{u^2} \d u\) so \[ \omega = - \frac{\d y}{\p f/\p x} = u^{d - 3} \frac{\d u}{\p g/\p v} = u^{d - 3} \eta \] where \[ \eta = \frac{\d u}{\p g/\p v} = - \frac{\d v}{\p g/\p u} \] For exactly the same reason as before, \(\nu_p(\eta) = 0\) for all \(p \in \A^2_{(u, v)} \cap \mathcal X\). Thus \[ \nu_p(\omega) = (d - 3) \nu_p(u) + \nu_p(\eta) = (d - 3) \nu_p(u). \] Finally observe that \[ \nu_p(u) = \nu_p(\frac{Z}{Y}) \] is just the contact order of the line \(Z = 0\) with \(\mathcal X\), i.e.\ \([\mathcal X \cap \{Z = 0\}] = \sum_{p \in \mathcal X \subseteq \{Z = 0\}} \nu_p(\omega) p\), by definition. \end{proof} \section{Riemann-Roch theorem} \begin{theorem}[classical Riemann-Roch]\index{Riemann-Roch theorem} Let \(X\) be a smooth projective curve with genus \(g = g(X) = \ell(\mathcal K_X)\). Let \(D = \sum n_i P_i \in \Div(X)\). Then \[ \ell(D) - \ell(\mathcal K_X - D) = 1 - g + \deg D. \] \end{theorem} We will not prove this theorem but will spend the rest of the course understanding the statement and its consequences. Immediate consequences are: \begin{enumerate} \item take \(D = 0\). As \(\ell(0) = 1\), this says that \(\ell(\mathcal K_X) = g\), which is the definition of genus. \item take \(D = \mathcal K_X\), we get \(\deg \mathcal K_X = 2g - 2\). \item If \(\deg D > 2g - 2\) then \(\deg (\mathcal K_X - D) < 0\) so \(\ell(\mathcal K_X - D) = 0\) so by Riemann-Roch, \[ \ell(D) = 1 - g + \deg D. \] Warning: if \(0 < \deg D \leq 2g - 2\), the behaviour of \(\ell(D)\) is complicated as you vary \(D\) in \(\Cl^a(X)\), \(a = \deg D\) fixed, \(\ell\) can jump. In fact \(\Cl^a(X)\) is an algebraic variety and it stratifies into subvarieties according to \(\ell(D)\). This is \emph{Brill-Noether loci}. \item If \(\deg D > 2g\) then for all \(p, q \in X\), \[ \ell(D - p - q) = \ell(D) - 2 = 1- g - 2 + \deg D. \] Hence by embedding criterion \[ \alpha_D: X \to \P(L(D)^*) \cong \P^n \] is an embedding, with image a curve of degree \(\deg D\). 
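  For instance (an added cross-check): for the plane cubic \(E\) of the earlier example, \(g = 1\) and \(D = 3P_\infty\) has \(\deg D = 3 > 2g = 2\), so Riemann-Roch gives \(\ell(3P_\infty) = 1 - 1 + 3 = 3\), matching the direct computation \(\dim L(nP_\infty) = n\); and since \(\deg D > 2g\), \(\alpha_{3P_\infty} = [1:x:y]\) embeds \(E\) as a curve of degree \(3\) in \(\P^2\), as we will see again below.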
\end{enumerate} \begin{corollary} If \(\mathcal X\) is a smooth plane curve of degree \(d\), then as \(\deg \mathcal K_X = d(d - 3)\), have \[ g = \frac{1}{2}(d - 1)(d - 2). \] \end{corollary} For example if \(d = 1\) or \(2\), correponding to line and conics respectively, we have \(g = 0\). If \(d = 3\) then \(g\) = 1. In general we have a progression \[ 0, 0, 1, 3, 6, 10, \cdots \] which does not have every natural number in there. Thus smooth projective curves of genus \(2, 4, 5, 7, \cdots\) cannot occur inside \(\P^2\). Let's study curves of small genus using Riemann-Roch. \begin{proposition} \(X\) has genus \(0\) if and only if \(X = \P^1\). \end{proposition} \begin{proof}\leavevmode \begin{itemize} \item \(\impliedby\): done earlier. \item \(\implies\): suppose \(X\) has genus \(0\). Let \(p \in X\). The divisor \((p)\) has degree \(1\). As \(1 > -2 = 2g - 2\), by Riemann-Roch \(\ell((p)) = 2\). But \(k = L(0) \subseteq L((p))\) so exists \(f \in L((p)) \setminus k\). Have \(\div(f) + (p) \geq 0\), i.e.\ \(f\) has a pole at \(p\) and no other pole. But \(\div(f)\) has degree \(0\), so \(\deg (\div(f) + (p)) = 1\), which is saying \(\div(f) = -(p) + (q)\) for some \(q \in X\). In addition \(p \neq q\) as \(f\) is not constant. As \(p \neq q\), \(f\) is not constant so \(\alpha = [1:f]: X \rational \P^1\) is a nonconstant rational map, hence a morphism (as \(X\) is smooth) of degree \(1\). Thus by an exercise on example sheet 3 \(\alpha\) is an isomorphism. \end{itemize} \end{proof} Note that there are two parts of the proof: we showed that if \(\ell((p)) = 2\) for some \(p \in X\) then \(X \cong \P^1\), and we used Riemann-Roch to show such \(p\) exists. \subsection{Curves of genus 1} Let \(X\) be a smooth projective curve with genus \(g = 1\). Then by Riemann-Roch, if \(\deg D > 0\) then \(\ell(D) = \deg D\). Fix a point \(p_\infty \in X\). Have \[ \underbrace{L(0)}_k \subseteq \underbrace{L(p_\infty)}_k \subsetneq \underbrace{L(2p_\infty)}_{k\langle 1, x \rangle} \subsetneq \underbrace{L(3p_\infty)}_{k \langle 1, x, y\rangle} \subsetneq \cdots \] where we choose \(x \in L(2p_\infty) \setminus k, y \in L(3p_\infty) \setminus L(2p_\infty)\). As before, \(L(6p_\infty)\) contains \(1, x, y, x^2, xy, x^3, y^2\). But \(\dim L(6p_\infty) = 6\) so there exist a linear relation between these monomials, with \(x^3, y^2\) appearing with non-zero coefficients (as \(1, x, y, x^2, xy \in L(5 p_\infty)\) are linearly independent, and we cannot have only one of \(x^3, y^2\) with nonzero coefficient by considering the degree of pole at \(\infty\)). Rescale \(x \mapsto \lambda x, y \mapsto \mu y\), we get a relation in \(k(X)\) \[ y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6 \] for some \(a_i \in k\). This equation defines a curve \(C_0\) in \(\A^2\) (exericse: this is irreducible) with a unique point at \(\infty\), call it \(p_\infty\), so \(C = C^0 \cup \{p_\infty\} \subseteq \P^2\), and \[ \alpha_{3p_\infty} = [1:x:y]: X \to \P^2 \] maps \(X\) into \(C\). As \(\alpha_{3p_\infty}\) is not constant and \(C\) is irreducible, this map is surjective. The embedding criterion tells us \(\alpha_{3p_\infty}\) is an isomorphism \(X \cong C\). We can do better: if \(\ch k \neq 3\), we can complete the cube by \(x \mapsto x - \frac{a_2}{3}\) so the equation becomes (by renaming the coefficients) \[ y^2 + a_1xy + a_3y = x^3 + a_4x + a_6. \] If \(\ch k \neq 2\), we can complete the square by \(y \mapsto y - \frac{a_1x + a_3}{2}\) to get \[ y^2 = x^3 + q_2x^2 + a_4x + a_6. 
Combining these two: if \(\ch k \neq 2, 3\), first complete the square and then the cube to get
\[
  y^2 = x^3 + a_4x + a_6 = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)
\]
with the \(\lambda_i\) distinct (by smoothness).

\begin{theorem}
  Every curve of genus \(1\) is isomorphic to a smooth plane curve of the form
  \[
    y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6.
  \]
\end{theorem}

Amazingly, every curve of genus \(1\) is a group by the following:

\begin{proposition}
  Let \(E\) be a curve of genus \(1\), \(p_\infty \in E\). Then the map
  \begin{align*}
    E &\to \Cl^0(E) \\
    p &\mapsto [p - p_\infty]
  \end{align*}
  is a bijection.
\end{proposition}

\begin{proof}
  For injectivity, if \(p - p_\infty = q - p_\infty\) in \(\Cl(E)\) then \(p - q = \div (f)\) for some \(f \in k(E)\). If \(p \neq q\) then \(f\) is nonconstant and \([1:f]: E \to \P^1\) is an isomorphism, contradicting \(E\) having genus \(1\). Hence \(p = q\).

  For surjectivity, if \(D \in \Div(E)\) and \(\deg D = 0\), then
  \[
    \deg(D + p_\infty) = 1 > 2g - 2 = 0
  \]
  so by Riemann-Roch \(\ell(D + p_\infty) = 1\). Let \(0 \neq f \in L(D + p_\infty)\), so
  \[
    D + p_\infty + \div (f) \geq 0.
  \]
  But the degree of this divisor is \(1\), implying that \(D + p_\infty + \div(f) = q\) for some \(q \in E\), i.e.\ \(D = q - p_\infty\) in \(\Cl(E)\).
\end{proof}

\begin{corollary}
  \(E\) is an algebraic group, where the group operation \(\boxplus\) is defined by
  \[
    p \boxplus q = r \iff (p - p_\infty) + (q - p_\infty) = (r - p_\infty)
  \]
  in \(\Cl(E)\), i.e.\ \(p + q = r + p_\infty\) in \(\Cl(E)\).
\end{corollary}

Notice that the identity of the group is \(p_\infty\).

\begin{definition}[elliptic curve]\index{elliptic curve}
  An \emph{elliptic curve} is a pair \((E, p_\infty)\) where \(E\) is a curve of genus \(1\) and \(p_\infty \in E\).
\end{definition}

In fact, the group law is algebraic: consider \(\alpha_{3p_\infty}: E \to \P^2\) and let \(X, Y, Z\) be coordinates on \(\P^2\). We know \([E \cap \{Z = 0\}] = 3p_\infty\), and if \(L = \{\ell = 0\}\) is any line in \(\P^2\), then \([L \cap E] = p_1 + p_2 + p_3\) and \(\div(\ell/Z) = p_1 + p_2 + p_3 - 3p_\infty\). Note \(\ell/Z \in k(E)\). Thus \(p_1 + p_2 + p_3 = 3p_\infty\) in \(\Cl(E)\), i.e.
\[
  p_1 \boxplus p_2 \boxplus p_3 = p_\infty \boxplus p_\infty \boxplus p_\infty = p_\infty.
\]
Thus geometrically the group law can be characterised as follows: any line \(L\) intersects \(E\) at three points, and the sum of these three points in the group is \(p_\infty\).

\begin{ex}\leavevmode
  \begin{enumerate}
    \item Show that for fixed \(p \in E\), the map
    \begin{align*}
      E &\rational E \\
      e &\mapsto e \boxplus p
    \end{align*}
    is a rational map, hence a morphism, hence an isomorphism.
    \item Show that the map \(e \mapsto \boxminus e\) is a morphism (i.e.\ that it is a rational map).
    \item Show that \(\boxplus: E \times E \to E\) defines a morphism, so \(E\) is a group object in the category of smooth projective varieties.
  \end{enumerate}
\end{ex}

Suppose \(E = \{y^2 = (x - \lambda_1)(x - \lambda_2)(x - \lambda_3)\} \cup \{p_\infty\}\) and \(\ch k \neq 2, 3\). Consider the line \(\{x = a\}\) in \(\P^2\). It intersects \(E\) at \(p_\infty\) and at \((a, \pm b)\) for some \(b \in k\). Hence
\[
  (a, b) \boxplus (a, -b) \boxplus p_\infty = p_\infty
\]
i.e.
\[
  (a, b) \boxplus (a, -b) = p_\infty,
\]
that is \(\boxminus (a, b) = (a, -b)\), so this proves a special case of exercise 2. It follows that \([2] p = 0\), that is \(p \boxplus p = p_\infty\), if and only if \(b = 0\) or \(p = p_\infty\), i.e.\ \(p = (\lambda_i, 0)\) or \(p = p_\infty\). These are exactly the ramification points of the morphism \(\alpha_{2p_\infty} = [x:1]: E \to \P^1\).
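As a concrete illustration of the last statement, take \(E: y^2 = x(x - 1)(x + 1)\) (so \(\lambda_1 = 0\), \(\lambda_2 = 1\), \(\lambda_3 = -1\)), still assuming \(\ch k \neq 2, 3\). The points \(p\) with \(p \boxplus p = p_\infty\) are exactly
\[
  p_\infty, \quad (0, 0), \quad (1, 0), \quad (-1, 0),
\]
and these four points form a subgroup of \((E, \boxplus)\) isomorphic to \(\Z/(2) \times \Z/(2)\).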
That is, \(E\) is a double cover of \(\P^1\), ramified at 4 points, and these four points are just the points of order \(2\) on \(E\), i.e.\ \(\P^1 = E/(\Z/(2))\) where \(\Z/(2)\) acts on \(E\) by \(p \mapsto \boxminus p\). These 4 points are well-defined, independent of choices, up to a coordinate change on \(\P^1\), i.e.\ up to the action of \(\PGL_2\) on \(\P^1\). Let \(j(E)\) be the cross ratio of these 4 points \(\infty, \lambda_1, \lambda_2, \lambda_3\). It is an invariant of \((\P^1)^4/\PGL_2\). For example, if we change coordinates so that \(y^2 = x(x - 1)(x - \lambda)\), then
\[
  j(E) = j(\lambda) = 2^8 \frac{(\lambda^2 - \lambda + 1)^3}{\lambda^2 (\lambda - 1)^2}.
\]
Thus \(j(E) = j(E')\) if and only if \(E \cong E'\): ``only if'' because given \(\lambda_1, \lambda_2, \lambda_3, \infty\) we can define \(E\); ``if'' by the discussion above.

\begin{corollary}
  There is a three-way correspondence
  \begin{align*}
    &\{\text{genus 1 curves up to isomorphism}\} \\
    \leftrightarrow & \{4 \text{ distinct points in } \P^1\}/\PGL_2 \\
    \leftrightarrow & \A^1 \quad \text{ given by } j
  \end{align*}
\end{corollary}

\subsection{Riemann-Hurwitz}

\begin{theorem}[Riemann-Hurwitz]\index{Riemann-Hurwitz}
  Let \(\alpha: X \to Y\) be a nonconstant morphism of smooth projective curves such that \(k(X)/k(Y)\) is a separable algebraic extension (for example if \(\ch k = 0\)). Set \(\chi(X) = 2 - 2g(X)\). Then
  \[
    \chi(X) = \chi(Y) \deg \alpha - \sum_{p \in X} (e_p(\alpha) - 1).
  \]
\end{theorem}

\begin{proof}
  \(\alpha\) defines a map
  \begin{align*}
    \alpha^*: \Omega^1_{k(Y)/k} &\to \Omega^1_{k(X)/k} \\
    f \d g &\mapsto (f \compose \alpha) \d (g \compose \alpha)
  \end{align*}
  Separability implies that \(\alpha^*\) is injective (as \(\alpha\) is nonconstant). Pick \(0 \neq \omega \in \Omega^1_{k(Y)/k}\); then by Riemann-Roch
  \[
    \deg \omega = 2 g(Y) - 2 = - \chi(Y).
  \]
  Let \(p \in X\), \(q = \alpha(p) \in Y\) and pick local coordinates \(t_p, t_q\) at \(p, q\) respectively, so \(t_q \compose \alpha = u t_p^{e_p(\alpha)}\) where \(u\) is a unit. Write \(\omega = f \d t_q\) for some \(f \in k(Y)\), so \(\alpha^* \omega = (f \compose \alpha) \d (u t_p^{e_p})\). Hence
  \begin{align*}
    \nu_p(\alpha^* \omega) &= \nu_p(f \compose \alpha) + \nu_p(\d (u t_p^{e_p})) \\
    &= \nu_q(f) e_p + \nu_p(u t_p^{e_p}) - 1 \quad \text{if } \ch k \ndivides e_p \\
    &= \nu_q(\omega) e_p + e_p - 1
  \end{align*}
  Therefore
  \begin{align*}
    -\chi(X) &= \deg \alpha^* \omega \\
    &= \sum_{q \in Y} (\sum_{p \in \alpha^{-1}(q)} e_p) \nu_q(\omega) + \sum_{p \in X} (e_p - 1) \\
    &= \deg \alpha \sum_{q \in Y} \nu_q(\omega) + \sum_{p \in X} (e_p - 1) \\
    &= -\deg \alpha \, \chi(Y) + \sum_{p \in X} (e_p - 1)
  \end{align*}
  which rearranges to the claimed formula.
\end{proof}

\begin{corollary}
  Let \(k = \C\). Then the topological Euler characteristic of a smooth projective curve \(X\) is \(2 - 2g\), i.e.\ \(g\) is the ``number of holes''.
\end{corollary}

\begin{proof}
  The topological Euler characteristic of \(\P^1\) is \(2 = 2 - 2 \cdot 0\), so the statement holds for \(X = \P^1\). In general, let \(f \in k(X)\) be nonconstant. Then \(f\) defines a morphism \(\alpha: X \to \P^1\). Now the Riemann-Hurwitz formula for \(\alpha\) as a map of Riemann surfaces and for \(\alpha\) as a morphism of algebraic curves coincide, so the statement for \(X\) follows from the case of \(\P^1\).
\end{proof}

\begin{corollary}
  If \(g(X) < g(Y)\) then there are no non-constant maps \(X \to Y\). In particular if \(g(Y) > 0\) then there are no non-constant maps \(\P^1 \to Y\).
\end{corollary}

cf.\ the exercise proving the non-existence of non-constant maps from \(\P^1\) to an elliptic curve.

\begin{proof}
  By Riemann-Hurwitz, for any non-constant \(\alpha: X \to Y\),
  \[
    0 \leq \sum_{p \in X} (e_p - 1) = 2(g(X) - g(Y)) + (2 - 2g(Y))(\deg \alpha - 1) < 0,
  \]
  absurd.
\end{proof}

\begin{definition}[hyperelliptic]\index{hyperelliptic}
  If a curve \(X\) admits a degree \(2\) map \(X \to \P^1\), we say \(X\) is \emph{hyperelliptic}.
\end{definition}

For example, an elliptic curve is hyperelliptic. Suppose \(X\) is hyperelliptic, \(\pi: X \to \P^1\) is a degree \(2\) map and \(\ch k \neq 2\). If \(p \in X\), either \(e_p = 1\) and \(p\) is unramified, or \(e_p = 2\) and \(p\) is ramified (contrast this with the case of \(\alpha: X \to \P^1\) of degree \(\geq 3\), where there is more than one type of ramification). Then Riemann-Hurwitz says that
\[
  2 - 2g = 2 \times 2 - \#\{\text{ramification points}\}
\]
so there are \(2 + 2g\) ramification points, which in particular is always an even number. For example, if \(g = 1\) then there are \(4\), and if \(g = 2\) then there are \(6\).

\subsection{Curves of genus \(2\)}

If a curve \(X\) has \(g(X) > 0\) then we can consider the map \(\alpha_{\mathcal K}: X \to \P^{g - 1}\), the \emph{canonical morphism}\index{canonical morphism}. This is not very interesting for \(g(X) = 1\). Suppose \(X\) has genus \(2\).

\begin{proposition}
  \(\alpha_{\mathcal K}\) is a map of degree \(2\), so \(X\) is a hyperelliptic curve and \(\alpha_{\mathcal K}\) is ramified at \(6\) points.
\end{proposition}

\begin{proof}
  The only thing that needs proving is that \(\alpha_{\mathcal K}\) has degree \(2\). As \(\ell(\mathcal K_X) = g(X) = 2 > 0\), the divisor \(\mathcal K_X\) is effective of degree \(2g - 2 = 2\), so \(\mathcal K_X = p + q\) in \(\Cl(X)\) and
  \[
    \ell(p + q) = \ell(\mathcal K_X) = 2 > 1
  \]
  so there exists a non-constant function \(h \in L(p + q)\), i.e.\ \(\div (h) + p + q \geq 0\). As \(h\) has poles at most at \(p\) and \(q\), \(\deg h = 1\) or \(2\). But \(\deg h = 1\) would imply \(X \cong \P^1\), contradicting the fact that \(X\) has genus \(2\). Thus \(\deg h = 2\) and \(\alpha_{\mathcal K_X} = [1:h]: X \to \P^1\) has degree \(2\).
\end{proof}

\begin{corollary}
  The set of isomorphism classes of curves of genus 2 embeds into \{tuples of 6 points in \(\P^1\)\}/\(\PGL_2\).
\end{corollary}

We'll see in a moment that these 6 points determine the curve, so this is an open embedding, suggesting that \(\dim\) \{curves of genus \(2\)\} is \(6 - 3 = 3\).

\begin{remark}
  For \(g \geq 2\), \(\alpha_{2\mathcal K_X}: X \to \P^n\) is always an embedding, by Riemann-Roch and the embedding criterion.
\end{remark}

\begin{proposition}
  Let \(X\) be a smooth curve of genus \(g\), \(g \geq 2\). Then
  \begin{enumerate}
    \item either \(X\) is hyperelliptic, i.e.\ admits a degree \(2\) map to \(\P^1\), in which case the canonical map factors
    \[
      \alpha_{\mathcal K_X}: X \surj \P^1 \embed \P^{g - 1}
    \]
    and \(\alpha_{\mathcal K_X}: X \to \P^1\) has degree \(2\).
    \item or \(X\) is not hyperelliptic, in which case \(\alpha_{\mathcal K_X}: X \to \P^{g - 1}\) is an embedding, called the \emph{canonical embedding}.
  \end{enumerate}
  Moreover, 2 happens for most curves of genus \(g\), \(g \geq 3\). The set of all curves of genus \(g\), i.e.\ the moduli space of curves of genus \(g\), denoted \(\mathcal M_g\), is an algebraic variety of dimension \(3g - 3\), and the set of all hyperelliptic curves of genus \(g\) is a subvariety, isomorphic to \((\P^1)^{2g + 2}/\PGL_2\), of dimension \(2g - 1\).
\end{proposition}

\begin{proof}
  We prove 2 first. The embedding criterion says \(\alpha_{\mathcal K_X}\) is an embedding if and only if for all \(p, q \in X\),
  \[
    \ell(\mathcal K_X - p - q) = \ell(\mathcal K_X) - 2 = g - 2.
  \]
  Riemann-Roch says \(\ell(\mathcal K_X - p - q) = \ell(p + q) + g - 3\), so the embedding criterion is equivalent to \(\ell(p + q) = 1\).
But \(\ell(p + q) > 1\) implies that \(X\) is hyperelliptic, by the argument above for genus \(2\) curves. Conversely, if \(X\) is hyperelliptic there exist \(p, q\) with \(\ell(p + q) > 1\) (obvious). Thus \(\ell(p + q) = 1\) for all \(p, q\) if and only if \(X\) is not hyperelliptic, in which case \(\alpha_{\mathcal K_X}\) is an embedding.

For the first part, suppose \(X\) is hyperelliptic, i.e.\ there exists a double cover (degree \(2\) map) \(X \to \P^1\). This gives an embedding \(k(x) = k(\P^1) \embed k(X)\) which makes \(k(X)/k(x)\) an algebraic extension of degree \(2\). Assume \(\ch k \neq 2\); then there exists \(y \in k(X)\) such that \(y^2 = f(x)\) for some \(f \in k(x)\) (by completing the square). This gives a rational map \(f: X \rational \P^1\). As \(X\) is smooth, we get a morphism \(f: X \to \P^1\), ramified at \(\infty\) and at the points \(a_1, \dots, a_r\) if \(f(x) = (x - a_1) \cdots (x - a_r)\). But we saw that Riemann-Hurwitz implies that there are \(2g + 2\) ramification points, so \(\deg f = 2g + 1\).

We need to show that \(\alpha_{\mathcal K_X}\) factors through \(\P^1\). To finish, choose \(\omega = \frac{\d x}{y}\), and check that \(X^0\), defined by the diagram
\[
  \begin{tikzcd}
    X \ar[r, "f"] \ar[d, "\subseteq"] & \P^1 = \A^1 \cup \{\infty\} \ar[d, "\subseteq"] \\
    X^0 = f^{-1}(\A^1) \ar[r] & \A^1
  \end{tikzcd}
\]
satisfies \(X^0 = \{(x, y): y^2 = f(x)\}\), \(L(\mathcal K_X) = \langle \omega, x\omega, \dots, x^{g - 1} \omega \rangle\) and \(f|_{X^0}: X^0 \to \A^1\) is \((x, y) \mapsto x\), so that
\[
  \alpha_{\mathcal K_X} = [1:x:x^2:\cdots:x^{g - 1}]: X \to \P^{g - 1}
\]
indeed factors through \(\P^1\) as
\begin{align*}
  X &\to \P^1 &\to \P^{g - 1} \\
  (x, y) &\mapsto x &\mapsto [1:x: \cdots : x^{g - 1}]
\end{align*}
Finally, we will not prove the last part. See what happens when you restrict \(f\) to \(f^{-1}(\P^1 \setminus \{0\})\).
\end{proof}

\section{Abel-Jacobi theorem}

Let \(k = \C\) and \(X\) a smooth curve. Pick \(\omega \in L(\mathcal K_X)\). For concreteness, consider
\begin{align*}
  y^2 &= x^3 - a_2 x + a_4 \\
  \omega &= \frac{\d x}{y} = \frac{\d x}{\sqrt{x^3 - a_2 x + a_4}}
\end{align*}
For \(P, Q \in X\), we would like to define \(\int_P^Q \omega\), but this is not defined unless we choose a path \(\gamma\) from \(P\) to \(Q\). If \(\gamma\) is a loop and the loop is contractible then \(\int_\gamma \omega = 0\), but if we take \(\gamma\) to be one of \(\gamma_1, \gamma_2\), two loops giving independent classes in homology, then the integral is in general not zero. Thus \(\int_P^Q\) is not well-defined, but it is well-defined up to multiples
\[
  k_1 \int_{\gamma_1} \omega + k_2 \int_{\gamma_2} \omega, \quad k_1, k_2 \in \Z,
\]
i.e.\ up to an element of \(\Z \tau_1 + \Z \tau_2\), where \(\tau_i = \int_{\gamma_i} \omega\). Thus there is a well-defined pairing
\begin{align*}
  H_1(X; \Z) \times L(\mathcal K_X) &\to \C \\
  ([\gamma], \omega) &\mapsto \int_\gamma \omega
\end{align*}
which is linear, so defines a map
\[
  L(\mathcal K_X) \to \Hom_\Z(H_1(X, \Z), \C) \cong H^1(X; \C)
\]
If \(\ell(\mathcal K_X) = g\) then \(H_1(X, \Z) \cong \Z^{2g}\), so the cohomology group is \(\C^{2g}\). It is a fact that this map is an injection, and the RHS does not depend on the complex structure on \(X\) (and in particular, on the algebraic structure of \(X\)). However, the map does change, so we get a family of \(\C^g\)'s sitting inside a fixed \(\C^{2g}\).

\begin{theorem}[Abel-Jacobi]\index{Abel-Jacobi theorem}
  Pick a basis \(\omega_1, \dots, \omega_g\) of \(L(\mathcal K_X)\).
  Then the map
  \[
    P - Q \mapsto (\int_P^Q \omega_1, \dots, \int_P^Q \omega_g)
  \]
  extends to a well-defined map
  \[
    \Cl^0(X) \to \C^{g}/\Z^{2g}
  \]
  which is an isomorphism, so
  \[
    \Cl^0(X) \cong \C^g/\Z^{2g} \cong (S^1)^{2g}.
  \]
  The numbers \(\int_\gamma \omega\) for \(\gamma \in H_1(X; \Z)\) are called \emph{periods}. Moreover, \(\Cl^0(X)\) is a projective algebraic variety, so an \emph{abelian variety}.
\end{theorem}

Elementary application: we can evaluate integrals of the form
\[
  \int_\gamma \frac{\d x}{\sqrt{x^2 + ax + b}}
\]
in elementary terms, but we cannot do the same for
\[
  \int_\gamma \frac{\d x}{\sqrt{x^3 - x + 1}}.
\]
In the former case the underlying curve \(y^2 = x^2 + ax + b\) is a quadric, so isomorphic to \(\P^1\); in the cubic case the integral is only defined up to periods, and its value gives a point of \(\Cl^0(X)\).

\printindex
\end{document}
{ "alphanum_fraction": 0.5870942295, "avg_line_length": 49.1527723924, "ext": "tex", "hexsha": "65677f1afff2ce74ac3e3e72c32bf5ab73e86668", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2022-02-25T17:20:19.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-08T16:16:20.000Z", "max_forks_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "geniusKuang/tripos", "max_forks_repo_path": "II/algebraic_geometry.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_issues_repo_issues_event_max_datetime": "2020-10-14T21:29:15.000Z", "max_issues_repo_issues_event_min_datetime": "2020-10-11T20:43:21.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "geniusKuang/tripos", "max_issues_repo_path": "II/algebraic_geometry.tex", "max_line_length": 847, "max_stars_count": 27, "max_stars_repo_head_hexsha": "127e9fccea5732677ef237213d73a98fdb8d0ca0", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "geniusKuang/tripos", "max_stars_repo_path": "II/algebraic_geometry.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-10T15:48:31.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-15T05:02:27.000Z", "num_tokens": 49112, "size": 126765 }
\section{Background}
\label{s:background}

\subsection{Performance Bugs}
\label{s:background-bug}

%\TODO{Have not had time to rephrase the descriptions from 'parser'->'compiler'}
Performance bugs degrade a program's performance and waste computational resources.
%
Usually, performance bugs are defined as software defects where relatively simple \emph{source-code} changes can significantly speed up the execution of the software while preserving its functionality \cite{perfbugstudy, killian2010finding, s2e}.
%
Performance issues can involve several different categories of resources.
%
For example, some performance bugs cause excessive CPU utilization, resulting in unexpectedly long execution times;
%
others lead to huge memory consumption because of uncontrolled memory allocation and memory leaks \cite{wen2020memlock}.
%
A minimal illustration of such a bug is sketched at the end of this section.

Performance bugs lead to reduced throughput, increased latency, and wasted resources in software.
%
They particularly hurt the end-user experience.
%
What is worse, when a buggy application is deployed on web servers, the bugs can be exploited by attackers for denial-of-service attacks, which impair the availability of the services \cite{rampart}.
%
In the past, performance bugs have caused several publicized failures and led many software projects to be abandoned \cite{perfbugstudy, lessons}.

\subsection{Network Operating Systems}
\label{s:background-nos}

A network operating system is an operating system that allows various autonomous computers to connect and communicate over a network.
%
An autonomous computer is an independent computer with its own local memory, hardware, \etc{}
%
It is capable of performing operations and processing on its own for a single user.
%
Network operating systems can be embedded in a router or hardware firewall that operates functions at the network layer \cite{al2001dialoguer}.
%
Typical real-world network operating systems include Cisco IOS \cite{cisco-ios}, DD-WRT \cite{dd-wrt}, Cumulus Linux \cite{cumulus-linux}, \etc{}
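To make the notion of a performance bug concrete, the sketch below shows a typical CPU-related defect and its one-line fix. It is a minimal, hypothetical example written in Python purely for illustration; the function names and scenario are invented here and are not taken from the cited studies. The buggy version recomputes a loop-invariant value on every iteration, so a relatively simple source-code change, hoisting the computation out of the loop, removes the redundant work while preserving the functionality.

\begin{verbatim}
# Illustrative sketch of a performance bug (hypothetical example,
# not taken from the cited studies).
def total_discount_buggy(prices, coupons):
    total = 0.0
    for price in prices:
        # Bug: the coupon sum is loop-invariant but is recomputed for
        # every price, turning an O(n + m) task into O(n * m).
        discount = sum(coupons)
        total += price * (1.0 - discount)
    return total

def total_discount_fixed(prices, coupons):
    # Fix: a simple source-code change hoists the invariant computation
    # out of the loop; the returned value is the same.
    discount = sum(coupons)
    return sum(price * (1.0 - discount) for price in prices)
\end{verbatim}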
{ "alphanum_fraction": 0.8097955302, "avg_line_length": 52.575, "ext": "tex", "hexsha": "6c5ecc6c8c0df9c859dd410df87b7976df5b329e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "38489d3032667147ddb5980bfdb88208a6d2b34b", "max_forks_repo_licenses": [ "AFL-1.1" ], "max_forks_repo_name": "peng-hui/csci5570-project", "max_forks_repo_path": "doc/report/background.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "38489d3032667147ddb5980bfdb88208a6d2b34b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AFL-1.1" ], "max_issues_repo_name": "peng-hui/csci5570-project", "max_issues_repo_path": "doc/report/background.tex", "max_line_length": 248, "max_stars_count": null, "max_stars_repo_head_hexsha": "38489d3032667147ddb5980bfdb88208a6d2b34b", "max_stars_repo_licenses": [ "AFL-1.1" ], "max_stars_repo_name": "peng-hui/csci5570-project", "max_stars_repo_path": "doc/report/background.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 423, "size": 2103 }
% Default to the notebook output style % Inherit from the specified cell style. \documentclass[11pt]{article} \usepackage[T1]{fontenc} % Nicer default font (+ math font) than Computer Modern for most use cases \usepackage{mathpazo} % Basic figure setup, for now with no caption control since it's done % automatically by Pandoc (which extracts ![](path) syntax from Markdown). \usepackage{graphicx} % We will generate all images so they have a width \maxwidth. This means % that they will get their normal width if they fit onto the page, but % are scaled down if they would overflow the margins. \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth \else\Gin@nat@width\fi} \makeatother \let\Oldincludegraphics\includegraphics % Set max figure width to be 80% of text width, for now hardcoded. \renewcommand{\includegraphics}[1]{\Oldincludegraphics[width=.8\maxwidth]{#1}} % Ensure that by default, figures have no caption (until we provide a % proper Figure object with a Caption API and a way to capture that % in the conversion process - todo). \usepackage{caption} \DeclareCaptionLabelFormat{nolabel}{} \captionsetup{labelformat=nolabel} \usepackage{adjustbox} % Used to constrain images to a maximum size \usepackage{xcolor} % Allow colors to be defined \usepackage{enumerate} % Needed for markdown enumerations to work \usepackage{geometry} % Used to adjust the document margins \usepackage{amsmath} % Equations \usepackage{amssymb} % Equations \usepackage{textcomp} % defines textquotesingle % Hack from http://tex.stackexchange.com/a/47451/13684: \AtBeginDocument{% \def\PYZsq{\textquotesingle}% Upright quotes in Pygmentized code } \usepackage{upquote} % Upright quotes for verbatim code \usepackage{eurosym} % defines \euro \usepackage[mathletters]{ucs} % Extended unicode (utf-8) support \usepackage[utf8x]{inputenc} % Allow utf-8 characters in the tex document \usepackage{fancyvrb} % verbatim replacement that allows latex \usepackage{grffile} % extends the file name processing of package graphics % to support a larger range % The hyperref package gives us a pdf with properly built % internal navigation ('pdf bookmarks' for the table of contents, % internal cross-reference links, web links for URLs, etc.) 
\usepackage{hyperref} \usepackage{longtable} % longtable support required by pandoc >1.10 \usepackage{booktabs} % table support for pandoc > 1.12.2 \usepackage[inline]{enumitem} % IRkernel/repr support (it uses the enumerate* environment) \usepackage[normalem]{ulem} % ulem is needed to support strikethroughs (\sout) % normalem makes italics be italics, not underlines \usepackage{mathrsfs} % Colors for the hyperref package \definecolor{urlcolor}{rgb}{0,.145,.698} \definecolor{linkcolor}{rgb}{.71,0.21,0.01} \definecolor{citecolor}{rgb}{.12,.54,.11} % ANSI colors \definecolor{ansi-black}{HTML}{3E424D} \definecolor{ansi-black-intense}{HTML}{282C36} \definecolor{ansi-red}{HTML}{E75C58} \definecolor{ansi-red-intense}{HTML}{B22B31} \definecolor{ansi-green}{HTML}{00A250} \definecolor{ansi-green-intense}{HTML}{007427} \definecolor{ansi-yellow}{HTML}{DDB62B} \definecolor{ansi-yellow-intense}{HTML}{B27D12} \definecolor{ansi-blue}{HTML}{208FFB} \definecolor{ansi-blue-intense}{HTML}{0065CA} \definecolor{ansi-magenta}{HTML}{D160C4} \definecolor{ansi-magenta-intense}{HTML}{A03196} \definecolor{ansi-cyan}{HTML}{60C6C8} \definecolor{ansi-cyan-intense}{HTML}{258F8F} \definecolor{ansi-white}{HTML}{C5C1B4} \definecolor{ansi-white-intense}{HTML}{A1A6B2} \definecolor{ansi-default-inverse-fg}{HTML}{FFFFFF} \definecolor{ansi-default-inverse-bg}{HTML}{000000} % commands and environments needed by pandoc snippets % extracted from the output of `pandoc -s` \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \newenvironment{Shaded}{}{} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.56,0.13,0.00}{{#1}}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.25,0.63,0.44}{{#1}}} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\StringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textit{{#1}}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{{#1}}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.02,0.16,0.49}{{#1}}} \newcommand{\RegionMarkerTok}[1]{{#1}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{1.00,0.00,0.00}{\textbf{{#1}}}} \newcommand{\NormalTok}[1]{{#1}} % Additional commands for more recent versions of Pandoc \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.53,0.00,0.00}{{#1}}} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.25,0.44,0.63}{{#1}}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.73,0.40,0.53}{{#1}}} \newcommand{\ImportTok}[1]{{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.73,0.13,0.13}{\textit{{#1}}}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.10,0.09,0.49}{{#1}}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.44,0.13}{\textbf{{#1}}}} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.40,0.40,0.40}{{#1}}} \newcommand{\BuiltInTok}[1]{{#1}} \newcommand{\ExtensionTok}[1]{{#1}} 
\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.74,0.48,0.00}{{#1}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.49,0.56,0.16}{{#1}}} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.38,0.63,0.69}{\textbf{\textit{{#1}}}}} % Define a nice break command that doesn't care if a line doesn't already % exist. \def\br{\hspace*{\fill} \\* } % Math Jax compatibility definitions \def\gt{>} \def\lt{<} \let\Oldtex\TeX \let\Oldlatex\LaTeX \renewcommand{\TeX}{\textrm{\Oldtex}} \renewcommand{\LaTeX}{\textrm{\Oldlatex}} % Document parameters % Document title \title{Muon\_Physics} % Pygments definitions \makeatletter \def\PY@reset{\let\PY@it=\relax \let\PY@bf=\relax% \let\PY@ul=\relax \let\PY@tc=\relax% \let\PY@bc=\relax \let\PY@ff=\relax} \def\PY@tok#1{\csname PY@tok@#1\endcsname} \def\PY@toks#1+{\ifx\relax#1\empty\else% \PY@tok{#1}\expandafter\PY@toks\fi} \def\PY@do#1{\PY@bc{\PY@tc{\PY@ul{% \PY@it{\PY@bf{\PY@ff{#1}}}}}}} \def\PY#1#2{\PY@reset\PY@toks#1+\relax+\PY@do{#2}} \expandafter\def\csname PY@tok@cp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.74,0.48,0.00}{##1}}} \expandafter\def\csname PY@tok@sc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@sd\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@k\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@ni\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.60,0.60,0.60}{##1}}} \expandafter\def\csname PY@tok@se\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.13}{##1}}} \expandafter\def\csname PY@tok@kp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@kd\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@gh\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@gt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.27,0.87}{##1}}} \expandafter\def\csname PY@tok@nt\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@go\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.53,0.53}{##1}}} \expandafter\def\csname PY@tok@no\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.53,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@c1\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@gu\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.50,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@ch\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@mi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gs\endcsname{\let\PY@bf=\textbf} \expandafter\def\csname PY@tok@ge\endcsname{\let\PY@it=\textit} \expandafter\def\csname PY@tok@kn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.63,0.00}{##1}}} \expandafter\def\csname PY@tok@mh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@w\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.73,0.73}{##1}}} \expandafter\def\csname PY@tok@nn\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} 
\expandafter\def\csname PY@tok@vi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@s\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@vg\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@err\endcsname{\def\PY@bc##1{\setlength{\fboxsep}{0pt}\fcolorbox[rgb]{1.00,0.00,0.00}{1,1,1}{\strut ##1}}} \expandafter\def\csname PY@tok@nd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@sr\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@vm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@na\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.49,0.56,0.16}{##1}}} \expandafter\def\csname PY@tok@sa\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@kc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@mo\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@gd\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.63,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@nb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@cs\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@nf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@kt\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.69,0.00,0.25}{##1}}} \expandafter\def\csname PY@tok@s2\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@bp\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@nv\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@o\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@fm\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@sh\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@dl\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@gp\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,0.50}{##1}}} \expandafter\def\csname PY@tok@m\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@s1\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \expandafter\def\csname PY@tok@gr\endcsname{\def\PY@tc##1{\textcolor[rgb]{1.00,0.00,0.00}{##1}}} \expandafter\def\csname PY@tok@mf\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@sx\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@il\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@cpf\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@c\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@ne\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.82,0.25,0.23}{##1}}} \expandafter\def\csname PY@tok@gi\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.00,0.63,0.00}{##1}}} \expandafter\def\csname 
PY@tok@cm\endcsname{\let\PY@it=\textit\def\PY@tc##1{\textcolor[rgb]{0.25,0.50,0.50}{##1}}} \expandafter\def\csname PY@tok@mb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.40,0.40,0.40}{##1}}} \expandafter\def\csname PY@tok@kr\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.50,0.00}{##1}}} \expandafter\def\csname PY@tok@ow\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.67,0.13,1.00}{##1}}} \expandafter\def\csname PY@tok@vc\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@nc\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.00,0.00,1.00}{##1}}} \expandafter\def\csname PY@tok@si\endcsname{\let\PY@bf=\textbf\def\PY@tc##1{\textcolor[rgb]{0.73,0.40,0.53}{##1}}} \expandafter\def\csname PY@tok@ss\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.10,0.09,0.49}{##1}}} \expandafter\def\csname PY@tok@sb\endcsname{\def\PY@tc##1{\textcolor[rgb]{0.73,0.13,0.13}{##1}}} \def\PYZbs{\char`\\} \def\PYZus{\char`\_} \def\PYZob{\char`\{} \def\PYZcb{\char`\}} \def\PYZca{\char`\^} \def\PYZam{\char`\&} \def\PYZlt{\char`\<} \def\PYZgt{\char`\>} \def\PYZsh{\char`\#} \def\PYZpc{\char`\%} \def\PYZdl{\char`\$} \def\PYZhy{\char`\-} \def\PYZsq{\char`\'} \def\PYZdq{\char`\"} \def\PYZti{\char`\~} % for compatibility with earlier versions \def\PYZat{@} \def\PYZlb{[} \def\PYZrb{]} \makeatother % Exact colors from NB \definecolor{incolor}{rgb}{0.0, 0.0, 0.5} \definecolor{outcolor}{rgb}{0.545, 0.0, 0.0} % Prevent overflowing lines due to hard-to-break entities \sloppy % Setup hyperref package \hypersetup{ breaklinks=true, % so long urls are correctly broken across lines colorlinks=true, urlcolor=urlcolor, linkcolor=linkcolor, citecolor=citecolor, } % Slightly bigger margins than the latex defaults \geometry{verbose,tmargin=1in,bmargin=1in,lmargin=1in,rmargin=1in} \begin{document} \maketitle \section{Muon Physics}\label{muon-physics} \_\_\textbf{Measurement of muon lifetime and flux} University of California, Santa Barbara, Physics, 93117 Goleta, CA Yuning Zhang \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}103}]:} \PY{k+kn}{import} \PY{n+nn}{numpy} \PY{k}{as} \PY{n+nn}{np} \PY{k+kn}{import} \PY{n+nn}{matplotlib}\PY{n+nn}{.}\PY{n+nn}{pylab} \PY{k}{as} \PY{n+nn}{plt} \PY{k+kn}{from} \PY{n+nn}{datetime} \PY{k}{import} \PY{n}{datetime} \PY{k+kn}{from} \PY{n+nn}{scipy}\PY{n+nn}{.}\PY{n+nn}{optimize} \PY{k}{import} \PY{n}{curve\PYZus{}fit} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}104}]:} \PY{k+kn}{import} \PY{n+nn}{warnings} \PY{n}{warnings}\PY{o}{.}\PY{n}{filterwarnings}\PY{p}{(}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{ignore}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}105}]:} \PY{k}{def} \PY{n+nf}{read\PYZus{}data}\PY{p}{(}\PY{n}{file\PYZus{}name}\PY{p}{,}\PY{n}{delimiter}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{:} \PY{k}{with} \PY{n+nb}{open}\PY{p}{(}\PY{n}{data\PYZus{}path}\PY{o}{+}\PY{n}{file\PYZus{}name}\PY{p}{)} \PY{k}{as} \PY{n}{f}\PY{p}{:} \PY{n}{data}\PY{o}{=}\PY{n+nb}{list}\PY{p}{(}\PY{n+nb}{map}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:}\PY{n+nb}{int}\PY{p}{(}\PY{n}{x}\PY{o}{.}\PY{n}{strip}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{split}\PY{p}{(}\PY{n}{delimiter}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)}\PY{p}{,}\PY{n}{f}\PY{o}{.}\PY{n}{readlines}\PY{p}{(}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{k}{return} \PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{data}\PY{p}{)} 
\end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}106}]:} \PY{k}{def} \PY{n+nf}{filter\PYZus{}data}\PY{p}{(}\PY{n}{data}\PY{p}{,}\PY{n}{low\PYZus{}bound}\PY{p}{,}\PY{n}{up\PYZus{}bound}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZsq{}\PYZsq{}\PYZsq{}} \PY{l+s+sd}{ return filtered data array between lower bound and upper bound.} \PY{l+s+sd}{ the unit of the boundary value is nanosecond} \PY{l+s+sd}{ eg: 6 uSec = 6000 nSec} \PY{l+s+sd}{ \PYZsq{}\PYZsq{}\PYZsq{}} \PY{k}{return} \PY{n}{data}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{vectorize}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:} \PY{n}{low\PYZus{}bound}\PY{o}{\PYZlt{}}\PY{n}{x}\PY{o}{\PYZlt{}}\PY{n}{up\PYZus{}bound}\PY{p}{)}\PY{p}{(}\PY{n}{data}\PY{p}{)}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}107}]:} \PY{k}{def} \PY{n+nf}{get\PYZus{}hist\PYZus{}data}\PY{p}{(}\PY{n}{data}\PY{p}{,}\PY{n}{N\PYZus{}bins}\PY{p}{,}\PY{n}{precision\PYZus{}error}\PY{o}{=}\PY{l+m+mi}{20}\PY{p}{)}\PY{p}{:} \PY{l+s+sd}{\PYZsq{}\PYZsq{}\PYZsq{}} \PY{l+s+sd}{ filter data with the upper and lower bound } \PY{l+s+sd}{ given the bins number of histogram, return the counts and average in each bin, } \PY{l+s+sd}{ with the error of counts and standard deviation of average value} \PY{l+s+sd}{ \PYZsq{}\PYZsq{}\PYZsq{}} \PY{n}{N\PYZus{}data}\PY{o}{=}\PY{n+nb}{len}\PY{p}{(}\PY{n}{data}\PY{p}{)} \PY{n}{bin\PYZus{}counts}\PY{p}{,}\PY{n}{bin\PYZus{}partitions}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{histogram}\PY{p}{(}\PY{n}{data}\PY{p}{,}\PY{n}{bins}\PY{o}{=}\PY{n}{N\PYZus{}bins}\PY{p}{)} \PY{n}{labels}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{digitize}\PY{p}{(}\PY{n}{data}\PY{p}{,}\PY{n}{bins}\PY{o}{=}\PY{n}{bin\PYZus{}partitions}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{:}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n}{bin\PYZus{}sums}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{N\PYZus{}bins}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N\PYZus{}data}\PY{p}{)}\PY{p}{:} \PY{n}{bin\PYZus{}sums}\PY{p}{[}\PY{n}{labels}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]}\PY{o}{+}\PY{o}{=}\PY{n}{data}\PY{p}{[}\PY{n}{i}\PY{p}{]} \PY{n}{bin\PYZus{}means}\PY{o}{=}\PY{n}{bin\PYZus{}sums}\PY{o}{/}\PY{n}{bin\PYZus{}counts} \PY{n}{bin\PYZus{}square\PYZus{}errors}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{zeros}\PY{p}{(}\PY{n}{N\PYZus{}bins}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n}{N\PYZus{}data}\PY{p}{)}\PY{p}{:} \PY{n}{mean}\PY{o}{=}\PY{n}{bin\PYZus{}means}\PY{p}{[}\PY{n}{labels}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]} \PY{n}{bin\PYZus{}square\PYZus{}errors}\PY{p}{[}\PY{n}{labels}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{p}{]}\PY{o}{+}\PY{o}{=}\PY{p}{(}\PY{n}{data}\PY{p}{[}\PY{n}{i}\PY{p}{]}\PY{o}{\PYZhy{}}\PY{n}{mean}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{bin\PYZus{}stds}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{bin\PYZus{}square\PYZus{}errors}\PY{o}{/}\PY{n}{bin\PYZus{}counts}\PY{o}{+}\PY{n}{precision\PYZus{}error}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{xdata}\PY{o}{=}\PY{n}{bin\PYZus{}means}\PY{p}{[}\PY{o}{\PYZti{}}\PY{n}{np}\PY{o}{.}\PY{n}{isnan}\PY{p}{(}\PY{n}{bin\PYZus{}means}\PY{p}{)}\PY{p}{]} \PY{n}{ydata}\PY{o}{=}\PY{n}{bin\PYZus{}counts}\PY{p}{[}\PY{n}{bin\PYZus{}counts}\PY{o}{!=}\PY{l+m+mi}{0}\PY{p}{]} \PY{n}{xerror}\PY{o}{=}\PY{n}{bin\PYZus{}stds}\PY{p}{[}\PY{o}{\PYZti{}}\PY{n}{np}\PY{o}{.}\PY{n}{isnan}\PY{p}{(}\PY{n}{bin\PYZus{}stds}\PY{p}{)}\PY{p}{]} \PY{n}{yerror}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{ydata}\PY{p}{)} 
\PY{k}{return} \PY{p}{(}\PY{n}{xdata}\PY{o}{/}\PY{l+m+mi}{1000}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerror}\PY{o}{/}\PY{l+m+mi}{1000}\PY{p}{,}\PY{n}{yerror}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}108}]:} \PY{n}{precision\PYZus{}error}\PY{o}{=}\PY{l+m+mi}{20} \PY{c+c1}{\PYZsh{}ns} \PY{n}{data\PYZus{}path}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{./data/}\PY{l+s+s2}{\PYZdq{}}\PY{c+c1}{\PYZsh{}\PYZdq{}19\PYZhy{}05\PYZhy{}20\PYZhy{}14\PYZhy{}08.data\PYZdq{}} \PY{n}{file\PYZus{}name}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{19\PYZhy{}05\PYZhy{}22\PYZhy{}18\PYZhy{}03.data}\PY{l+s+s2}{\PYZdq{}}\PY{c+c1}{\PYZsh{}\PYZdq{}19\PYZhy{}05\PYZhy{}20\PYZhy{}14\PYZhy{}08.data\PYZdq{}\PYZsh{}\PYZdq{}19\PYZhy{}05\PYZhy{}02\PYZhy{}17\PYZhy{}41.data\PYZdq{}\PYZsh{}\PYZdq{}05\PYZus{}13\PYZus{}Muon.data\PYZdq{}\PYZsh{}\PYZdq{}simulation\PYZus{}5\PYZus{}17.data\PYZdq{}\PYZsh{}\PYZdq{}19\PYZhy{}04\PYZhy{}30\PYZhy{}14\PYZhy{}50.data\PYZdq{}} \PY{n}{test\PYZus{}data}\PY{o}{=}\PY{n}{read\PYZus{}data}\PY{p}{(}\PY{n}{file\PYZus{}name}\PY{p}{,}\PY{n}{delimiter}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{ }\PY{l+s+s2}{\PYZdq{}}\PY{p}{)}\PY{p}{[}\PY{l+m+mi}{5000}\PY{p}{:}\PY{p}{]} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}109}]:} \PY{n}{filtered}\PY{o}{=}\PY{n}{filter\PYZus{}data}\PY{p}{(}\PY{n}{test\PYZus{}data}\PY{p}{,}\PY{n}{low\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{40}\PY{p}{,}\PY{n}{up\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{20000}\PY{p}{)} \PY{n}{ratio}\PY{o}{=}\PY{l+m+mi}{1} \PY{n}{f}\PY{o}{=}\PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)} \PY{n}{f}\PY{o}{.}\PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{120} \PY{n}{ax1}\PY{o}{=}\PY{n}{f}\PY{o}{.}\PY{n}{add\PYZus{}subplot}\PY{p}{(}\PY{l+m+mi}{121}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Muon Events Counts [1]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Decay Time [ns]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{ax1}\PY{o}{.}\PY{n}{hist}\PY{p}{(}\PY{n}{filtered}\PY{p}{,}\PY{n}{bins}\PY{o}{=}\PY{l+m+mi}{60}\PY{p}{)} \PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerror}\PY{o}{=}\PY{n}{get\PYZus{}hist\PYZus{}data}\PY{p}{(}\PY{n}{filtered}\PY{p}{,}\PY{n}{N\PYZus{}bins}\PY{o}{=}\PY{l+m+mi}{100}\PY{p}{)} \PY{n}{ax2}\PY{o}{=}\PY{n}{f}\PY{o}{.}\PY{n}{add\PYZus{}subplot}\PY{p}{(}\PY{l+m+mi}{122}\PY{p}{,}\PY{n}{sharex}\PY{o}{=}\PY{n}{ax1}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Decay Time [ns]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Log Scale Muon Events Counts [1]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{ax2}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{xdata}\PY{o}{*}\PY{l+m+mi}{1000}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{.}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{k}{for} \PY{n}{ax} \PY{o+ow}{in} \PY{p}{[}\PY{n}{ax1}\PY{p}{,} \PY{n}{ax2}\PY{p}{]}\PY{p}{:} \PY{n}{xmin}\PY{p}{,} \PY{n}{xmax} \PY{o}{=} \PY{n}{ax}\PY{o}{.}\PY{n}{get\PYZus{}xlim}\PY{p}{(}\PY{p}{)} \PY{n}{ymin}\PY{p}{,} \PY{n}{ymax} \PY{o}{=} \PY{n}{ax}\PY{o}{.}\PY{n}{get\PYZus{}ylim}\PY{p}{(}\PY{p}{)} \PY{n}{ax}\PY{o}{.}\PY{n}{set\PYZus{}aspect}\PY{p}{(}\PY{n+nb}{abs}\PY{p}{(}\PY{p}{(}\PY{n}{xmax}\PY{o}{\PYZhy{}}\PY{n}{xmin}\PY{p}{)}\PY{o}{/}\PY{p}{(}\PY{n}{ymax}\PY{o}{\PYZhy{}}\PY{n}{ymin}\PY{p}{)}\PY{p}{)}\PY{o}{*}\PY{n}{ratio}\PY{p}{,} 
\PY{n}{adjustable}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{box\PYZhy{}forced}\PY{l+s+s1}{\PYZsq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Total Events Number: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n+nb}{len}\PY{p}{(}\PY{n}{test\PYZus{}data}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Muon Events Number: }\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n+nb}{len}\PY{p}{(}\PY{n}{filtered}\PY{p}{)}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_8_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] Total Events Number: 58775 Muon Events Number: 899 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}110}]:} \PY{c+c1}{\PYZsh{} Comments: } \PY{c+c1}{\PYZsh{} A.} \PY{c+c1}{\PYZsh{} The figure shows clearly that the impact of noise is more significant at } \PY{c+c1}{\PYZsh{} larger time zone, where the effective count of muons is relatively smaller.} \PY{c+c1}{\PYZsh{} The long tail data make no sense because the minimun count of events is 1 thus} \PY{c+c1}{\PYZsh{} when the at the time area where muon decay probability is very smaller, the count } \PY{c+c1}{\PYZsh{} we get from the histogram is actually random noise } \PY{c+c1}{\PYZsh{} B.} \PY{c+c1}{\PYZsh{} The peak at short time zone (100\PYZhy{}120) also should be abandoned since} \PY{c+c1}{\PYZsh{} the the precision of the apparatus is limited. The result is that all the muons } \PY{c+c1}{\PYZsh{} with decay time less than the minimum resolution of the apparatus (20ns) will be } \PY{c+c1}{\PYZsh{} counted in the same bin. The resolution at the short time area is relatively } \PY{c+c1}{\PYZsh{} too coarse to depict a exponential boost curve.} \PY{c+c1}{\PYZsh{} A successful fitting need to eliminate the noise and ineffective data} \end{Verbatim} Linear Fitting \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}111}]:} \PY{n}{filtered}\PY{o}{=}\PY{n}{filter\PYZus{}data}\PY{p}{(}\PY{n}{test\PYZus{}data}\PY{p}{,}\PY{n}{low\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{200}\PY{p}{,}\PY{n}{up\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{6000}\PY{p}{)} \PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerror}\PY{o}{=}\PY{n}{get\PYZus{}hist\PYZus{}data}\PY{p}{(}\PY{n}{filtered}\PY{p}{,}\PY{n}{N\PYZus{}bins}\PY{o}{=}\PY{l+m+mi}{40}\PY{p}{)} \PY{n}{opt\PYZus{}param}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{polyfit}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{100} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Logarithmic Muon Events Count [Log(N)]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Decay Time [us]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{poly1d}\PY{p}{(}\PY{n}{opt\PYZus{}param}\PY{p}{)}\PY{p}{(}\PY{n}{xdata}\PY{p}{)}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}\PYZhy{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{o}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} 
\begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_11_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}112}]:} \PY{n}{muon\PYZus{}life}\PY{o}{=}\PY{n+nb}{abs}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{/}\PY{n}{opt\PYZus{}param}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{muon life by tentative linear fitting:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{muon\PYZus{}life}\PY{p}{)} \PY{c+c1}{\PYZsh{}ns} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] muon life by tentative linear fitting: 2.20297928231 \end{Verbatim} Tentative Exponential Fitting \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}113}]:} \PY{k}{def} \PY{n+nf}{exp\PYZus{}model}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{n}{A}\PY{p}{,}\PY{n}{lambd}\PY{p}{,}\PY{n}{B}\PY{p}{)}\PY{p}{:} \PY{k}{return} \PY{n}{A}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{exp}\PY{p}{(}\PY{o}{\PYZhy{}}\PY{n}{lambd}\PY{o}{*}\PY{n}{x}\PY{p}{)}\PY{o}{+}\PY{n}{B} \PY{c+c1}{\PYZsh{} here we didn\PYZsq{}t consider the constant B because the long tail is cut off} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}114}]:} \PY{n}{filtered}\PY{o}{=}\PY{n}{filter\PYZus{}data}\PY{p}{(}\PY{n}{test\PYZus{}data}\PY{p}{,}\PY{n}{low\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{80}\PY{p}{,}\PY{n}{up\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{20000}\PY{p}{)} \PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerror}\PY{o}{=}\PY{n}{get\PYZus{}hist\PYZus{}data}\PY{p}{(}\PY{n}{filtered}\PY{p}{,}\PY{n}{N\PYZus{}bins}\PY{o}{=}\PY{l+m+mi}{55}\PY{p}{)} \PY{n}{opt\PYZus{}param}\PY{p}{,}\PY{n}{opt\PYZus{}pcov}\PY{o}{=}\PY{n}{curve\PYZus{}fit}\PY{p}{(}\PY{n}{exp\PYZus{}model}\PY{p}{,}\PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{sigma}\PY{o}{=}\PY{n}{yerror}\PY{p}{,}\PY{n}{p0}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{100}\PY{p}{,}\PY{l+m+mf}{0.1}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{)} \PY{n}{t\PYZus{}obs}\PY{o}{=}\PY{l+m+mi}{1}\PY{o}{/}\PY{n}{opt\PYZus{}param}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]} \PY{n}{sigma\PYZus{}t\PYZus{}obs}\PY{o}{=}\PY{n}{muon\PYZus{}life}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{opt\PYZus{}pcov}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)}\PY{o}{/}\PY{n}{opt\PYZus{}param}\PY{p}{[}\PY{l+m+mi}{1}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{opt\PYZus{}params:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{opt\PYZus{}param}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{param\PYZus{}errors}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{opt\PYZus{}pcov}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{n}{i}\PY{p}{]}\PY{p}{)} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{n+nb}{len}\PY{p}{(}\PY{n}{opt\PYZus{}param}\PY{p}{)}\PY{p}{)}\PY{p}{]}\PY{p}{)}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{muon\PYZus{}life:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{t\PYZus{}obs}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{uncertainty:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{sigma\PYZus{}t\PYZus{}obs}\PY{p}{)}\PY{c+c1}{\PYZsh{}ns} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] opt\_params: [ 135.53119481 0.47203618 0.72856033] param\_errors [ 5.37392331 0.01730273 0.28425711] muon\_life: 2.11848166686 uncertainty: 0.0807513407365 \end{Verbatim} 
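The lifetime fitted above mixes \(\mu^{+}\) and \(\mu^{-}\) decays, and negative muons stopped in matter have a shorter apparent lifetime, so the fitted value underestimates the free (\(\mu^{+}\)) lifetime. The next cell applies a correction under the assumption, implicit in the code, that the observed lifetime is the arithmetic mean of the two components: with \(\tau_{-} = 2.043 \pm 0.003\ \mu\mathrm{s}\) (the value \texttt{t\_neg} below),
\[
\tau_{+} = 2\,\tau_{\mathrm{obs}} - \tau_{-} ,
\]
and the quoted uncertainty combines the fit and reference uncertainties in quadrature.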
\begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}115}]:} \PY{n}{t\PYZus{}neg}\PY{o}{=}\PY{l+m+mf}{2.043} \PY{n}{sigma\PYZus{}neg}\PY{o}{=}\PY{l+m+mf}{0.003} \PY{n}{t\PYZus{}pos}\PY{o}{=}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{t\PYZus{}obs}\PY{o}{\PYZhy{}}\PY{n}{t\PYZus{}neg} \PY{n}{sigma\PYZus{}t\PYZus{}pos}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{sigma\PYZus{}t\PYZus{}obs}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{+}\PY{n}{sigma\PYZus{}neg}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{corrected\PYZus{}muon\PYZus{}life:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{t\PYZus{}pos}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{corrected\PYZus{}uncertainty:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{sigma\PYZus{}t\PYZus{}pos}\PY{p}{)}\PY{c+c1}{\PYZsh{}ns} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] corrected\_muon\_life: 2.19396333371 corrected\_uncertainty: 0.114239039131 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}116}]:} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{100} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Muon Events Count [1]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Decay Time [us]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{title}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Distribution of Decay Time}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerr}\PY{o}{=}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerr}\PY{o}{=}\PY{n}{yerror}\PY{p}{,}\PY{n}{fmt}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{markersize}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,}\PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Experiment Data}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{fitting\PYZus{}ydata}\PY{o}{=}\PY{p}{(}\PY{k}{lambda} \PY{n}{x}\PY{p}{:} \PY{n}{exp\PYZus{}model}\PY{p}{(}\PY{n}{x}\PY{p}{,}\PY{o}{*}\PY{n}{opt\PYZus{}param}\PY{p}{)}\PY{p}{)}\PY{p}{(}\PY{n}{xdata}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{fitting\PYZus{}ydata}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{r\PYZhy{}\PYZhy{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Fitting Curve}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_17_0.png} \end{center} { \hspace*{\fill} \\} Correct linear fitting using noise data \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}117}]:} \PY{n}{filtered}\PY{o}{=}\PY{n}{filter\PYZus{}data}\PY{p}{(}\PY{n}{test\PYZus{}data}\PY{p}{,}\PY{n}{low\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{100}\PY{p}{,}\PY{n}{up\PYZus{}bound}\PY{o}{=}\PY{l+m+mi}{6000}\PY{p}{)} \PY{n}{xdata}\PY{p}{,}\PY{n}{ydata}\PY{p}{,}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerror}\PY{o}{=}\PY{n}{get\PYZus{}hist\PYZus{}data}\PY{p}{(}\PY{n}{filtered}\PY{p}{,}\PY{n}{N\PYZus{}bins}\PY{o}{=}\PY{l+m+mi}{30}\PY{p}{)} \PY{n}{ydata}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{abs}\PY{p}{(}\PY{n}{ydata}\PY{o}{\PYZhy{}}\PY{n}{opt\PYZus{}param}\PY{p}{[}\PY{l+m+mi}{2}\PY{p}{]}\PY{p}{)} 
\PY{n}{param}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{polyfit}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{,}\PY{l+m+mi}{1}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{100} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Logarithmic Muon Events Count [Log(N)]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Decay Time [us]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{,}\PY{n}{xerr}\PY{o}{=}\PY{n}{xerror}\PY{p}{,}\PY{n}{yerr}\PY{o}{=}\PY{p}{[}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{o}{\PYZhy{}}\PY{n}{yerror}\PY{p}{)}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{o}{+}\PY{n}{yerror}\PY{p}{)}\PY{o}{\PYZhy{}}\PY{n}{np}\PY{o}{.}\PY{n}{log}\PY{p}{(}\PY{n}{ydata}\PY{p}{)}\PY{p}{]}\PY{p}{,}\PY{n}{fmt}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{.}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,}\PY{n}{markersize}\PY{o}{=}\PY{l+m+mi}{3}\PY{p}{,}\PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Experiment Data}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{plot}\PY{p}{(}\PY{n}{xdata}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{poly1d}\PY{p}{(}\PY{n}{param}\PY{p}{)}\PY{p}{(}\PY{n}{xdata}\PY{p}{)}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}\PYZhy{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{label}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Linear Regression}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{legend}\PY{p}{(}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{show}\PY{p}{(}\PY{p}{)} \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_19_0.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}118}]:} \PY{n}{muon\PYZus{}life}\PY{o}{=}\PY{n+nb}{abs}\PY{p}{(}\PY{l+m+mi}{1}\PY{o}{/}\PY{n}{param}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n+nb}{print}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{muon life by corrected linear fitting:}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,} \PY{n}{muon\PYZus{}life}\PY{p}{)} \PY{c+c1}{\PYZsh{}us} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] muon life by corrected linear fitting: 2.10209113983 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}119}]:} \PY{k+kn}{import} \PY{n+nn}{pandas} \PY{k}{as} \PY{n+nn}{pd} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}290}]:} \PY{n}{shape\PYZus{}data}\PY{o}{=}\PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{n}{data\PYZus{}path}\PY{o}{+}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{data.xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{sheetname}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sheet3}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{precision}\PY{o}{=}\PY{l+m+mf}{0.1} \PY{c+c1}{\PYZsh{}cm} \PY{n}{D}\PY{p}{,}\PY{n}{h}\PY{p}{,}\PY{n}{s}\PY{p}{,}\PY{n}{a}\PY{p}{,}\PY{n}{b}\PY{p}{,}\PY{n}{c}\PY{p}{,}\PY{n}{d}\PY{p}{,}\PY{n}{L}\PY{p}{,}\PY{n}{m}\PY{o}{=}\PY{n}{shape\PYZus{}data}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)} \PY{c+c1}{\PYZsh{} mean value} 
\PY{n}{sigma\PYZus{}D}\PY{p}{,}\PY{n}{sigma\PYZus{}h}\PY{p}{,}\PY{n}{sigma\PYZus{}s}\PY{p}{,}\PY{n}{sigma\PYZus{}a}\PY{p}{,}\PY{n}{sigma\PYZus{}b}\PY{p}{,}\PY{n}{sigma\PYZus{}c}\PY{p}{,}\PY{n}{sigma\PYZus{}d}\PY{p}{,}\PY{n}{sigma\PYZus{}L}\PY{p}{,}\PY{n}{sigma\PYZus{}m}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{shape\PYZus{}data}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{+}\PY{n}{precision}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)} \PY{n}{R}\PY{p}{,}\PY{n}{sigma\PYZus{}R}\PY{o}{=}\PY{n}{D}\PY{o}{/}\PY{l+m+mi}{2}\PY{p}{,}\PY{n}{sigma\PYZus{}D}\PY{o}{/}\PY{l+m+mi}{2} \PY{n}{pd}\PY{o}{.}\PY{n}{concat}\PY{p}{(}\PY{p}{[}\PY{n}{shape\PYZus{}data}\PY{o}{.}\PY{n}{mean}\PY{p}{(}\PY{p}{)}\PY{p}{,}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{shape\PYZus{}data}\PY{o}{.}\PY{n}{std}\PY{p}{(}\PY{p}{)}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{+}\PY{n}{precision}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2}\PY{p}{)}\PY{p}{]}\PY{p}{,}\PY{n}{keys}\PY{o}{=}\PY{p}{[}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Mean}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{StdDev}\PY{l+s+s2}{\PYZdq{}}\PY{p}{]}\PY{p}{,}\PY{n}{axis}\PY{o}{=}\PY{l+m+mi}{1}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}290}]:} Mean StdDev D[cm] 16.475000 0.111803 h[cm] 6.450000 0.115470 s[cm] 2.825000 0.160728 a[cm] 10.033333 0.115470 b[cm] 10.066667 0.152753 c[cm] 7.333333 0.182574 d[cm] 5.266667 0.270801 L[cm] 36.000000 0.173205 m[cm] 4.833333 0.182574 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}291}]:} \PY{n}{time\PYZus{}data}\PY{o}{=}\PY{n}{pd}\PY{o}{.}\PY{n}{read\PYZus{}excel}\PY{p}{(}\PY{n}{data\PYZus{}path}\PY{o}{+}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{data.xlsx}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{sheetname}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Sheet1}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{time\PYZus{}data} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}291}]:} Angle[Deg] Start Time End Time Delta\_T Delta\_T.1 Count Flux 1 90 17:05:21 17:19:18 00:13:57 837 170 0.203106 2 90 17:19:18 17:35:12 00:15:54 954 219 0.229560 3 70 16:36:30 17:01:23 00:24:53 1493 281 0.188212 4 60 17:37:20 17:46:55 00:09:35 575 104 0.180870 5 60 17:46:55 17:56:45 00:09:50 590 115 0.194915 6 60 11:40:45 11:57:20 00:16:35 995 177 0.177889 7 50 11:59:55 12:14:15 00:14:20 860 104 0.120930 8 50 12:14:15 12:29:55 00:15:40 940 100 0.106383 9 40 12:32:19 13:01:39 00:29:20 1760 140 0.079545 10 30 13:03:11 13:31:43 00:28:32 1712 92 0.053738 \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}298}]:} \PY{n}{AOmega}\PY{o}{=}\PY{l+m+mf}{83.0584} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}299}]:} \PY{n}{angle}\PY{o}{=}\PY{p}{[}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{]} \PY{n}{time}\PY{o}{=}\PY{p}{[}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{]} \PY{n}{count}\PY{o}{=}\PY{p}{[}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{]} \PY{k}{for} \PY{n}{i} \PY{o+ow}{in} \PY{n+nb}{range}\PY{p}{(}\PY{l+m+mi}{1}\PY{p}{,}\PY{n+nb}{len}\PY{p}{(}\PY{n}{time\PYZus{}data}\PY{p}{)}\PY{p}{)}\PY{p}{:} \PY{k}{if} 
\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{o}{==}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{:} \PY{n}{time}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{o}{=}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{]} \PY{n}{count}\PY{p}{[}\PY{o}{\PYZhy{}}\PY{l+m+mi}{1}\PY{p}{]}\PY{o}{+}\PY{o}{=}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{]} \PY{k}{else}\PY{p}{:} \PY{n}{angle}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{]}\PY{p}{)} \PY{n}{time}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{4}\PY{p}{]}\PY{p}{)} \PY{n}{count}\PY{o}{.}\PY{n}{append}\PY{p}{(}\PY{n}{time\PYZus{}data}\PY{o}{.}\PY{n}{iloc}\PY{p}{[}\PY{n}{i}\PY{p}{,}\PY{l+m+mi}{5}\PY{p}{]}\PY{p}{)} \PY{n}{angle}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{angle}\PY{p}{)} \PY{n}{time}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{time}\PY{p}{)} \PY{n}{count}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{array}\PY{p}{(}\PY{n}{count}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}302}]:} \PY{n}{muon\PYZus{}flux}\PY{o}{=}\PY{n}{count}\PY{o}{/}\PY{n}{time}\PY{o}{/}\PY{n}{AOmega}\PY{o}{*}\PY{l+m+mi}{100}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{n}{flux\PYZus{}error}\PY{o}{=}\PY{n}{np}\PY{o}{.}\PY{n}{sqrt}\PY{p}{(}\PY{n}{count}\PY{p}{)}\PY{o}{/}\PY{n}{time}\PY{o}{/}\PY{n}{AOmega}\PY{o}{*}\PY{l+m+mi}{100}\PY{o}{*}\PY{o}{*}\PY{l+m+mi}{2} \PY{c+c1}{\PYZsh{} to m\PYZca{}2} \PY{n}{plt}\PY{o}{.}\PY{n}{figure}\PY{p}{(}\PY{p}{)}\PY{o}{.}\PY{n}{dpi}\PY{o}{=}\PY{l+m+mi}{120} \PY{n}{plt}\PY{o}{.}\PY{n}{ticklabel\PYZus{}format}\PY{p}{(}\PY{n}{style}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{sci}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{axis}\PY{o}{=}\PY{l+s+s1}{\PYZsq{}}\PY{l+s+s1}{y}\PY{l+s+s1}{\PYZsq{}}\PY{p}{,} \PY{n}{scilimits}\PY{o}{=}\PY{p}{(}\PY{l+m+mi}{0}\PY{p}{,}\PY{l+m+mi}{0}\PY{p}{)}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{xlabel}\PY{p}{(}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Zenith Angle [Deg]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{ylabel}\PY{p}{(}\PY{l+s+sa}{r}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{Muon Flux [\PYZdl{}m\PYZca{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZhy{}2\PYZcb{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{cdot sr\PYZca{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZhy{}1\PYZcb{}}\PY{l+s+s2}{\PYZbs{}}\PY{l+s+s2}{cdot s\PYZca{}}\PY{l+s+s2}{\PYZob{}}\PY{l+s+s2}{\PYZhy{}1\PYZcb{}\PYZdl{}]}\PY{l+s+s2}{\PYZdq{}}\PY{p}{)} \PY{n}{plt}\PY{o}{.}\PY{n}{errorbar}\PY{p}{(}\PY{n}{angle}\PY{p}{,}\PY{n}{muon\PYZus{}flux}\PY{p}{,}\PY{n}{marker}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{d}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{linestyle}\PY{o}{=}\PY{l+s+s2}{\PYZdq{}}\PY{l+s+s2}{\PYZhy{}\PYZhy{}}\PY{l+s+s2}{\PYZdq{}}\PY{p}{,}\PY{n}{yerr}\PY{o}{=}\PY{n}{flux\PYZus{}error}\PY{p}{)} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}302}]:} <Container object of 3 artists> \end{Verbatim} \begin{center} \adjustimage{max size={0.9\linewidth}{0.9\paperheight}}{output_26_1.png} \end{center} { \hspace*{\fill} \\} \begin{Verbatim}[commandchars=\\\{\}] {\color{incolor}In [{\color{incolor}301}]:} 
\PY{n}{count}\PY{o}{*}\PY{l+m+mi}{60}\PY{o}{/}\PY{n}{time}\PY{o}{/}\PY{n}{AOmega}\PY{o}{*}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{np}\PY{o}{.}\PY{n}{pi} \PY{c+c1}{\PYZsh{} cm\PYZca{}\PYZhy{}2 min\PYZca{}\PYZhy{}1 \PYZsh{} Standard value: 1 cm\PYZca{}\PYZhy{}2 min\PYZca{}\PYZhy{}1} \end{Verbatim} \begin{Verbatim}[commandchars=\\\{\}] {\color{outcolor}Out[{\color{outcolor}301}]:} array([ 0.98582896, 0.85426787, 0.83212581, 0.51440505, 0.36104632, 0.24391113]) \end{Verbatim} % Add a bibliography block to the postdoc \end{document}
\subsection{Map Fusion}\label{subsec:map_fusion}
Next, we prove the higher order property of map fusion.
First we define and axiomatize function composition and list map:
\begin{code}
axiomatize (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f (g x)

axiomatize map
map :: (a -> b) -> L a -> L b
map f N        = N
map f (C x xs) = C (f x) (map f xs)
\end{code}

\NV{Say why we need defunctionalization}
\NV{We use app function like HALO (link to the theory)}
\NV{Zombie with rewriting does not allow HIGHER ORDER reasoning}

Then, we specify the map fusion property as a type specification
for the function @map_fusion@ and prove the property
by induction on the list argument.
\begin{code}
type MapFusion F G X = {map (F . G) X == (map F . map G) X}

map_fusion :: f:(a -> a) -> g:(a -> a) -> xs:L a -> MapFusion f g xs
\end{code}

\begin{code}
map_fusion f g N
  =   ((map f) r. (map g)) N
  ==! (map f) (rmap g N)
  ==! rmap f N
  ==! N
  ==! gmap (f . g) N
  *** QED
map_fusion f g (C x xs)
  =   rmap (f . g) (C x xs)
  ==! (f . g) x `C` map (f . g) xs
  ==! (f . g) x `C` (map f r. map g) xs   ? map_fusion f g xs
  ==! (f r. g) x `C` map f (map g xs)
  ==! f (g x) `C` map f (map g xs)
  ==! gmap f (C (g x) (map g xs))
  ==! (map f) (gmap g (C x xs))
  ==! (map f g. map g) (C x xs)
  *** QED
\end{code}

\subsection{Monadic Laws: Associativity}
As a last example, we axiomatize the monadic list bind operator:
\begin{code}
axiomatize >>=
(>>=) :: L a -> (a -> L b) -> L b
(C x xs) >>= f = f x ++ (xs >>= f)
N        >>= f = N
\end{code}
We use the above definition to inductively prove associativity of the bind operator:
\begin{code}
type Associative M F G = { M >>= F >>= G == M >>= (\x -> F x >>= G) }

associativity :: m:L a -> f:(a -> L b) -> g:(b -> L c) -> Associative m f g
associativity N f g
  =   N r>>= f >>= g
  ==! N r>>= g
  ==! N
  ==! N g>>= (\x -> f x >>= g)
  *** QED
associativity (C x xs) f g
  =   (C x xs) r>>= f >>= g
  ==! (f x) ++ (xs >>= f) >>= g                   ? bind_append (f x) (xs >>= f) g
  ==! (f x >>= g) ++ ((xs >>= f) >>= g)
  ==! (f x >>= g) ++ (xs >>= (\y -> f y >>= g))   ? associativity xs f g
  ==! (\y -> f y >>= g) x ++ (xs >>= (\y -> f y >>= g))  -- eta-equivalence
  ==! (C x xs) g>>= (\y -> f y >>= g)
  *** QED
\end{code}

In the proof we used the bind-append fusion lemma:
\begin{code}
bind_append :: xs:L a -> ys:L a -> f:(a -> L b)
            -> { (xs ++ ys) >>= f == (xs >>= f) ++ (ys >>= f) }
\end{code}

Moreover, we required $\beta$- and $\eta$-equivalence on anonymous functions.
For example, during the proof, we need the equality
@f x >>= g ==! (\y -> f y >>= g) x@.
%
To prove this equality, in the logic, the anonymous functions are represented
as functional variables axiomatized with extensionality axioms.
%
Thus, in the logic, we define @f'@ and the axioms
@forall x. f' x = f x >>= g@ and
@forall g x. (f' x = g x) => f' = g@.
%
These two axioms are sufficient to prove
1. $\eta$-equivalence that is required in the last step of the inductive case; and
2. $\beta$-equivalence that is required to prove that our proof
@xs >>= f >>= g ==! xs >>= (\y -> f y >>= g)@ implies the specification.
\documentclass{report} \input{../macros} \tableofcontents \C \chapter{Functions}
\documentclass[12pt, titlepage]{article} \usepackage{booktabs} \usepackage{comment} \usepackage{tabularx} \usepackage{hyperref} \usepackage{enumitem} \usepackage[normalem]{ulem} \hypersetup{ colorlinks, citecolor=black, filecolor=black, linkcolor=red, urlcolor=blue } \usepackage[round]{natbib} \title{SE 3XA3: Software Requirements Specification\\T-Rex Acceleration} \author{Team 15, Dev\textsuperscript{enthusiasts} \\ Zihao Du (duz12) \\ Andrew Balmakund (balmakua) \\ Namit Chopra (choprn9) } \date{\today} \begin{document} \maketitle \pagenumbering{roman} \tableofcontents \listoftables \listoffigures \begin{table}[bp] \caption{\bf Revision History} \begin{tabularx}{\textwidth}{p{3cm}p{2cm}X} \toprule {\bf Date} & {\bf Version} & {\bf Notes}\\ \midrule 02/12/2021 & 1.0 & Finished First Version\\ 03/31/2021 & 2.0 & Revision 1\\ \bottomrule \end{tabularx} \end{table} \newpage \pagenumbering{arabic} This document describes the requirements for the T-Rex Acceleration Game. The template for the Software Requirements Specification (SRS) is a subset of the Volere template~\citep{RobertsonAndRobertson2012}. \section{Project Drivers} \subsection{The Purpose of the Project} The purpose of this project is to modify the simple offline game called T-Rex Runner. The game can be played on the Google Chrome browser when the user is disconnected from the internet. The team will reimplement the game with refreshed graphics and modularized code. The goal is to add new features to a simple game that can be enjoyed by anyone, with or without the internet. \subsection{The Stakeholders} \subsubsection{The Client} The clients of this project are the professor and teaching assistants of SFWRENG 3XA3 course. \subsubsection{The Customers} The customers of this project are individuals who enjoy playing 2D computer games and other developers who are interested in developing game modifications (i.e changing different aspects of the game such as its visual or game behaviour). \subsubsection{Other Stakeholders} The developers of the original game (T-Rex Runner) have an invested interest as this project is a redevelopment based on the existing one. Any open source Pygame community will have an invested interest because other developers using Pygame can use this product as a reference in their project. The developers of the product, the members of Team Dev\textsuperscript{enthusiasts} are stakeholders as well. They will be responsible for the redevelopment of the existing product and future updates. \subsection{Mandated Constraints} \begin{itemize} \item Description: The project shall use the Pygame library to implement the computer graphics.\\ \\Rationale: Pygame contains many GUI modules for game development and is highly portable. It is more efficient to use these Pygame libraries rather than the development team to start from scratch.\\ \\Fit Criterion: All modules have successfully loaded to run the game. \item Description: The project shall be a zero budget project.\\ \\Rationale: The developers of the project have no budget for this project.\\ \\Fit Criterion: The developer team should use open source only. \item Description: The project shall run properly on different operating systems like Windows and Mac OS.\\ \\Rational: The project shall have decent portability.\\ \\Fit Criterion: The user shall be able to run the game on different operating systems. 
\end{itemize}

\subsection{Naming Conventions and Terminology}
\begin{itemize}
    \item \textbf{SFWRENG 3XA3}: The Software Engineering Practice and Experience: Software Project Management course.
    \item \textbf{T-Rex Runner}: A game that can be played on Google Chrome when the user is disconnected from the internet.
    \item \textbf{GUI}: Graphical User Interface.
    \item \textbf{SRS}: Software Requirements Specification.
    \item \textbf{Pygame}: A Python library composed of modules designed for writing video games.
    \item \textbf{Game state}: Refers to the current game session the user is playing in.
    \item \textbf{OS}: Operating System.
    \item \textbf{PEP 8}: The Style Guide for Python Code.
\end{itemize}

\subsection{Relevant Facts and Assumptions}
\begin{enumerate}
    \item It is assumed the user knows how to operate a computer.
    \item It is assumed the user has a computer that has the capability to run the game.
    \item It is assumed the user has physical access to a mouse and keyboard.
    \item It is assumed the user has Python version \sout{3.9} \textcolor{red}{3.6.9} or higher installed.
    \item It is assumed the user has the Pygame library installed.
    \item It is assumed the user has the supporting game files installed on their computer.
    \item It is assumed the user can understand basic English.
\end{enumerate}

\section{Functional Requirements}

\subsection{The Scope of the Work and the Product}

\subsubsection{The Context of the Work}
The T-Rex Runner is a 2D endless runner game with black and white graphics and simple features. It can only be played on the Google Chrome browser when the user is disconnected from the internet. The redevelopment of T-Rex Runner, called T-Rex Acceleration, will have improved and colourful graphics, including new characters and environment designs. The focus of T-Rex Acceleration is creating a more immersive and addictive gaming experience through new game sounds and features. T-Rex Acceleration will utilize a wider range of game sounds for player movements and obstacles than the original game. The most important aspect of the project is to modularize the game. The project will utilize only free-of-cost libraries (Pygame), sounds, and GUI assets. Finally, the project will be written in a different programming language, Python instead of JavaScript.

\subsubsection{Work Partitioning}

\begin{table}[h]
    \centering
    \begin{tabular}{|p{0.33\linewidth} | p{0.33\linewidth} | p{0.33\linewidth}| }
    \hline
       Event  & Input/Output & Summary \\
    \hline
       User Controls Character movement  & Input: KEYBOARD\_JUMP or KEYBOARD\_DUCK & System responds and updates the current game state based on the user's input\\
    \hline 
    Collision Detection & Input: Character Position, Obstacle Position \newline Output: Modifying current game state & When the user's character model collides with an obstacle, this will trigger the game state to end. \\
    \hline
    Change settings & Input: New volumes/themes selection \newline Output: Modified volume level/themes & The current volume or theme will be changed to the new volume or theme that is selected.
\\ \hline Update Scores & Input: New highest score \newline Output: Updated highest score & When the user's current score is higher than the highest score, the highest score will be updated.\\ \hline Pause/Resume & Input: User inputs \newline Output: Modified game state & When the user uses RESUME/PAUSE, this will change the state of the game.\\ \hline \end{tabular} \caption{Work Partitioning Table} \label{tab:my_label} \end{table} \subsubsection{Individual Product Use Cases} The primary use case of this product is to play the game from start and trying to keep the character running as long as you can. The following is the use case in detail:\\\newline \textbf{Use case: Play the game}\\ \textbf{Primary Actor:} User\\ \textbf{Supporting Actors:} None\\ \textbf{Precondition:} The user has Python and Pygame libraries installed on the computer.\\ \textbf{Trigger:} The user opens the program\\ \textbf{Main Success Scenario} \begin{enumerate} \item User is on the main title screen and starts the game \item User uses keyboard inputs to dodge obstacles \item The system counts the current score according to how long the dinosaur runs \item The user hits on an obstacle and the game ends \item The system updates the highest score if the current score is higher than the previous ones \item The user \sout{clicks on} \textcolor{red}{presses} RESTART to start a new game \end{enumerate} \textbf{Secondary Scenarios} \begin{enumerate} \item User pauses the game: The user presses PAUSE and the game is paused. \item User resumes the game: The user presses RESUME to continue the stopped game. \item User leaves the game: The user terminates the game before the player collides with an obstacle. The game will terminate gracefully and the current score will not be recorded. \end{enumerate} \textbf{Success Postcondition:} The game is over and the highest score is updated. \subsection{Functional Requirements} \begin{enumerate} \item The character must jump when the user presses KEYBOARD\_JUMP as long as the character is on a platform.\\ \textbf{Rationale}: The user must able to jump with the character to dodge obstacles. The character must not be able to jump unless on an area meant for running (platform). \item The character must duck when the user presses KEYBOARD\_DUCK as long as the character is on the ground.\\ \textbf{Rationale}: The user must able to duck with the character to dodge obstacles. The character must not be able to duck unless on an area meant for running (platform). \item The system must detect an occurrence of a collision between the character and an obstacle.\\ \textbf{Rationale}: When the character model collides with an obstacle (i.e an object meant to impede the movement of the character) the game state must end. \item The system must spawn different obstacles in random order.\\ \textbf{Rationale}: Randomizing the order of the different obstacles spawned, prevents predictable character movement by the user. \item The system must freeze the current game state when the pause option has been pressed.\\ \textbf{Rationale}: It is essential the user has an option to pause the game. The spawning of obstacles, character movement, score tracking, and spawning of power-ups must be paused, preserving the game's state the moment it is paused. \item The system shall provide a menu for the user to resume the current game or quit the game when the game is paused.\\ \textbf{Rationale}: The user must also have an option to resume the game after it has been paused. 
The user must also have an option to quit the game depending on their preference/situation. \item The system must exit the current game state and takes the user to the main menu when the quit option is selected.\\ \textbf{Rationale}: The user shall be able to leave any time when playing the game. \item The system must resume back to the current game state with an additional time delay after the resume option has been pressed.\\ \textbf{Rationale}: When the user wants to resume back to the current game state, the time delay will allow the user to be aware of the current state of the game. It prevents the user from losing the game instantly by being unaware of the current game situation. \item The system shall store the score of the current gameplay and display it on the GUI. \\ \textbf{Rationale}: The user shall be able to know how they are performing currently in the game. \item The system shall update the highest score if the current score is higher than the existing highest score.\\ \textbf{Rationale}: The system must keep the highest updated allowing the user to reflect on their latest performance. \item The system must generate different power-ups randomly to be acquired by the user.\\ \textbf{Rationale}: Different power-ups will enhance gameplay experience for the user and bring new elements compared to the original game. \item The user shall be able to acquire a power-up when the character and power-up icon come in contact on the GUI.\\ \textbf{Rationale}: The method for the character to obtain the power-up. \item The character's stats change based on the power-up acquired.\\ \textbf{Rationale}: The user should notice a change or get a `feel' that their character stats have been modified after acquiring the power-up. \item The system shall play the corresponding sound effects when the player jumps, duck, and collide with obstacles.\\ \textbf{Rationale}: The system shall provide an immersive gaming experience. \item The user must have the option to restart the game or go to the main menu when the game ends (the user hits an obstacle).\\ \textbf{Rationale}: The user shall be able to choose if they want to start a new game after the previous game ends. It reduces the amount of time and effort for the user to play another game. \item The user can select to play the game, change the game settings\sout{, and select a different character from the main menu}\textcolor{red}{, and read the instruction}.\\ \textbf{Rationale}: The user shall be able to change \sout{settings like volume theme color, and text size in the main menu. } \textcolor{red}{the volume according to their preference from the main menu.} \item The system shall \sout{display instructions when the user starts a game} \textcolor{red}{ a menu that explains the game instructions and controls.}\\ \textbf{Rationale}: The users need to be guided on how to play when playing for the first time. \end{enumerate} \section{Non-functional Requirements} \subsection{Look and Feel Requirements} \subsubsection{Appearance Requirements} \begin{enumerate}[leftmargin=1.20cm, label={LF \arabic*}] \item The viewpoint shall look similar to the original game T-Rex Runner.\\ \textbf{Fit Criteria}: The game follows the character from a side viewpoint. \item The menu and game interface shall be properly colored and follow a consistent theme.\\ \textbf{Fit Criteria}: The menu will contain the use of colors and text in line with the style and art form of the games included. 
\item The menu shall be minimalistic.\\ \textbf{Fit Criteria}: The menu should only contain the essential elements. \end{enumerate} \subsubsection{Style Requirements} \begin{enumerate}[leftmargin=1.20cm, label={LF \arabic*}] \item The game shall use a bright colour scheme.\\ \textbf{Fit Criteria}: The colours shall contain at least 50\% \sout{brightness and} saturation. \end{enumerate} \subsection{Usability and Humanity Requirements} \subsubsection{Ease of use Requirements} \begin{enumerate}[leftmargin=1.45cm, label={UH\ \arabic*}] \item The game shall have few and simple controls.\\ \textbf{Fit Criteria}: Survey a group of individuals and 95\% of them should quickly understand the controls of the game within 30 seconds. \item MINIMUM\_AGE and up shall be able to navigate the game with ease.\\ \textbf{Fit Criteria}: Survey a group of individuals and 95\% of them should be satisfied with the ease of use of the game by navigating through all the different menus of the game. \end{enumerate} \subsubsection{Personalization and Internationalization Requirements} \begin{enumerate}[leftmargin=1.45cm, label={UH \arabic*}] \item The user shall be able to change the volume of the game.\\ \textbf{Fit Criteria}: The user can choose \sout{any} volume from 0\% to 100\%. \item \sout{The user shall be able to change the theme of the game.\\ \textbf{Fit Criteria}: The user can choose a theme from some provided ones which are constant during the gameplay.} \end{enumerate} \subsubsection{Learning Requirements} \begin{enumerate}[leftmargin=1.45cm, label={UH \arabic*}] \item The user shall get familiar with the game quickly.\\ \textbf{Fit Criteria}: Survey a group of individuals and 80\% of them should understand the basic rules and objectives of the game after their first playthrough. \end{enumerate} \subsubsection{Understandability and Politeness Requirements} \begin{enumerate}[leftmargin=1.45cm, label={UH \arabic*}] \item The game shall use clear and simple language that can be understood by any English reader above the MINIMUM\_AGE.\\ \textbf{Fit Criteria}: If we put the game instructions into the GRAMMAR\_CHECKER, it shall give a readability score that indicates it can be read by the MINIMUM\_AGE. \item The application must use universal symbols for common buttons and functions.\\ \textbf{Fit Criteria}: Survey a group of individuals and 90\% of them should have a positive experience with the user interface. \end{enumerate} \subsubsection{Accessibility Requirements} N/A \subsection{Performance Requirements} \subsubsection{Speed and Latency Requirements} \begin{enumerate}[leftmargin=1.45cm, label={PR \arabic*}] \item The game shall not take more than MAX\_DELAY to load all necessary files and libraries.\\ \textbf{Fit Criteria}: The user must be able to start playing the game before the MAX\_DELAY. \item The game shall respond to the user's input within RESPONSE\_TIME. \\ \textbf{Fit Criteria}: The game must be able to react to the user's input within the RESPONSE\_TIME. \end{enumerate} \subsubsection{Safety-Critical Requirements} N/A \subsubsection{Precision or Accuracy Requirements} \begin{enumerate}[leftmargin=1.45cm, label={PR \arabic*}] \item The score of the user shall be a whole number.\\ \textbf{Fit Criteria}: The score of the player is displayed on the screen as an integer. 
\end{enumerate} \subsubsection{Reliability and Availability Requirements} \begin{enumerate}[leftmargin=1.45cm,label={PR \arabic*}] \item The game shall always be available to download and install most of the time.\\ \textbf{Fit Criteria}: The user can download and install the game 95\% of the time (based on GitLab's availability). \end{enumerate} \subsubsection{Robustness or Fault-Tolerance Requirements} \begin{enumerate}[leftmargin=1.45cm, label={PR \arabic*}] \item The product shall not fail if unexpected inputs are provided by the user.\\ \textbf{Fit Criteria}: The game must not crash if non-game control keyboard inputs are provided. \end{enumerate} \subsubsection{Capacity Requirements} N/A \subsubsection{Scalability or Extensibility Requirements} \begin{enumerate}[leftmargin=1.45cm,label={PR \arabic*}] \item The game shall allow for easy modifications to game features. \\ \textbf{Fit Criteria}: The developer shall be able to change any visual element in the game. \end{enumerate} \subsubsection{Longevity Requirements} N/A \subsection{Operational and Environmental Requirements} \subsubsection{Expected Physical Environment} \begin{enumerate}[leftmargin=1.45cm, label={OE \arabic*} ] \item The game must run on laptops and desktops running the latest version of their respective OS.\\ \textbf{Fit Criteria}: The user shall be able to run the game on the latest version of Windows, Linux, and Mac OS. \end{enumerate} \subsubsection{Requirements for Interfacing with Adjacent Systems} N/A \subsubsection{Release Requirements} \begin{enumerate}[leftmargin=1.45cm, label={OE \arabic*} ] \item The game's final release shall be before DUE\_DATE.\\ \textbf{Fit Criteria}: The game shall be completed and ready for download on GitLab by this time. \end{enumerate} \subsection{Maintainability and Support Requirements} \subsubsection{Maintenance Requirements} \begin{enumerate}[leftmargin=1.45cm, label={MS \arabic*}] \item The project shall be maintained by developers until DUE\_DATE. \\ \textbf{Fit Criteria}: Any bug discovered before the DUE\_DATE shall be fixed as soon as possible by developers. \end{enumerate} \subsubsection{Supportability Requirements} \begin{enumerate}[leftmargin=1.45cm, label={MS \arabic*}] \item The project shall be supported on different OS's as mentioned earlier.\\ \textbf{Fit Criteria}: The user shall be able to run the game on the latest version of Windows, Linux, and Mac OS. \end{enumerate} \subsubsection{Adaptability Requirements} N/A \subsection{Security Requirements} \subsubsection{Access Requirements} \begin{enumerate}[leftmargin=1.45cm, label={SR \arabic*}] \item The source code and all game assets must be available for download to everyone.\\ \textbf{Fit Criteria}: All source code and game assets are available on GitLab. \item Only the developers and maintainers shall modify the source code and game assets. \\ \textbf{Fit Criteria}: Only the development team and teaching staff have the role of owners on the GitLab repository. \end{enumerate} \subsubsection{Privacy Requirements} \begin{enumerate}[leftmargin=1.45cm, start=1,label={SR \arabic*}] \item The game must not save any of user's personal information.\\ \textbf{Fit Criteria}: The product must not access any files and information outside its folder. \end{enumerate} \subsubsection{Immunity Requirements} \begin{enumerate}[leftmargin=1.45cm, start=1,label={SR \arabic*}] \item The developers must not provide any links that may lead to malware. 
\textbf{Fit Criteria}: There shall be no URL to an external web page in the project source code. \end{enumerate} \subsection{Cultural Requirements} \subsubsection{Cultural Diversity and Inclusion Requirements} \begin{enumerate}[leftmargin=1.20cm, label={CR \arabic*}] \item The game must not contain any offensive content for people in different cultures.\\ \textbf{Fit Criteria}: Survey a group of individuals and 100\% should be comfortable with the content of the game. \item The game shall use Canadian English spelling. \\ \textbf{Fit Criteria}: The in-game text shall be run through the GRAMMAR\_CHECKER without any spelling errors. \end{enumerate} \subsection{Legal Requirements} \subsubsection{Compliance Requirements} \begin{enumerate}[leftmargin=1.20cm, label={CR \arabic*}] \item The project must not violate any copyright laws. \\ \textbf{Fit Criteria}: The project shall abide the BSD-style license with respect to the original source code of the project. \end{enumerate} \subsubsection{Standards Requirements} \begin{enumerate}[leftmargin=1.20cm, label={LR \arabic*}] \item The code of T-Rex Acceleration must be written using the appropriate coding convention. \\ \textbf{Fit Criteria}: The code shall follow PEP 8 code style guideline. \end{enumerate} \subsection{Health and Safety Requirements} N/A %%%%%%%%%%%%%%%%%%%%%%%% \section{Project Issues} \subsection{Open Issues} N/A \subsection{Off-the-Shelf Solutions} Any user can play the original game, T-Rex Runner, on Google Chrome when disconnected from the internet. The source code of the original game can provide a reference to implement the basic functionality of the game. As the project will be written using a different language, the original code will be used to implement the fundamental gameplay. Due to the popularity of the game, there are many variations of T-Rex Runner available online. They are played either on the browser or a smart device. Variations and redevelopments are available on Google Play Store and App Store. There are also open source versions available. \subsection{New Problems} \subsubsection{Effects on the Current Environment} The user must have Python \sout{3.9} \textcolor{red}{3.6.9} or higher and Pygame installed to run the game. The game will not require a lot of processing power, thus having little effect on the user's operating system. The user shall not experience any form of lag when running the game. \subsubsection{Potential User Problems} If the user modifies the source code of the project on their local machine, the game may become unstable to play. \subsubsection{Follow-Up Problems} There will be a risk if the libraries, such as Pygame, get updated during the development. This may result in in-game features being unable to run or load. \subsection{Tasks} The following is the link to our project schedule:\\ \href{https://gitlab.cas.mcmaster.ca/se_3xa3_l3g15/se_3xa3_project/-/tree/master/ProjectSchedule/GanttT-Rex.gan}{\textit{Gantt chart and Resource chart}} \subsection{Migration to the New Product} N/A \subsection{Risks} \subsubsection{Testing Risk} Most of the methods in the project will only have a visual output and are highly coupled with other components, thus only limited automated testing can be done. Moreover, obstacles in the game are generated randomly and automated tests are usually insufficient to build confidence for developers. The majority of the testing will be manual system testing. As the game is endless, it is difficult to do exhaustive testing. 
This increases the probability of bugs and issues getting undiscovered.\\ Probability: 20\% \subsubsection{Strict Schedule} Due to the strict and tight time constraints, the team must stay disciplined and follow the project schedule. Failure to do so will result in rushed deliverables that are not complete. Rushing the final deliverable will result in incomplete functionality and poor performance. \\ Probability: 10\% \subsection{Costs} This game is based on an open resource game and the development will use free programs and images, so there is no monetary cost for the project. \subsection{User Documentation and Training} \subsubsection{User Documentation Requirements} User Documentation for the game will include a ReadME file for the instructions on installation, running the game, and game controls. The game will also include information about the controls and power-ups in the game settings. \subsubsection{Training Requirements} The game controls must be intuitive and simple, thus requires no training. \subsection{Waiting Room} The following features may be added to future versions of the game: \begin{itemize} \item Multiplayer: To allow multiple users to see and interact with each other in the same game state. \item Local Leaderboard: A leaderboard of the best scores achieved on the local machine. \item Global Leaderboard: A leaderboard of the best scores achieved globally. \end{itemize} \subsection{Ideas for Solutions} N/A \bibliographystyle{plainnat} \bibliography{SRS} \newpage \section{Appendix} This section has been added to the Volere template. This is where you can place additional information. \subsection{Symbolic Parameters} The definition of the requirements will likely call for SYMBOLIC\_CONSTANTS. Their values are defined in this section for easy maintenance. \begin{table}[h] \caption{\bf Symbolic Parameter Table} \begin{tabular}{|l|p{0.5\linewidth}|l|} \hline \multicolumn{1}{|l}{\bfseries Symbolic Parameter} & \multicolumn{1}{|l|}{\bfseries Description} & \multicolumn{1}{l|}{\bfseries Value}\\ \hline KEYBOARD\_JUMP & Keyboard key that moves the onscreen character vertically upwards & Up Arrow \\ \hline KEYBOARD\_DUCK & Keyboard key that moves the onscreen character duck. & Down Arrow \\ \hline RESTART & Keyboard key that restarts a new game. & \sout{Enter} \textcolor{red}{Key R}\\ \hline PAUSE & Keyboard key that pauses the game mid play & \sout{Space} \textcolor{red}{Key P}\\ \hline RESUME & Keyboard key that resumes the paused game & \sout{Space} \textcolor{red}{Key R}\\ \hline MINIMUM\_AGE & Children younger than this age may have difficulty playing the game & 8\\ \hline GRAMMAR\_CHECKER & The engine that checks for spelling errors & Grammarly\\ \hline MAX\_DELAY & The maximum delay time the project should have & $5$ seconds\\ \hline RESPONSE\_TIME & Typical input delay for the project & $5$ milliseconds\\ \hline DUE\_DATE & Deadline of the project & 04/05/2021\\ \hline \end{tabular} \end{table} \end{document}
\chapter{Domain Experts via Topic Modeling}~\label{chap:domainexperts}
%\todo[inline]{Contains research questions for the experts; experimental setup; experimental results;summary}
\section{Introduction}
In this chapter, I address a problem which is analogous to the classic domain adaptation problem since the aim is to improve (morpho-)syntactic analysis for different domains, but more generic in nature. \textcolor{blue}{Classic domain adaptation assumes that there are two distinct data domains: source and target. However, often the source and the target contain a mix of domains, rather than a single one.} In order to simulate a more realistic scenario, I assume that the dataset on which the taggers and the parsers are trained does not come from a single domain but may contain a mix of different domains. \textcolor{blue}{E.g., the Wall Street Journal consists of news articles spanning various topics such as politics, finance, and theatre critiques, to name a few.} The same is true for the sentence that is parsed (or tagged) using the model trained on this dataset. I.e., the test sentence could potentially come from any of the domains. Thus, in my case, the problem is twofold: identify domains automatically in the training dataset and then suitably adapt the method to parse sentences more accurately. There is no manual work involved. I describe my method for improving POS tagging and dependency parsing for such heterogeneous datasets from a variety of different genres by creating experts for automatically detected topics. In this case, the datasets consist of newspaper reports on the one hand and biomedical extracts on the other. I assume that the domains participate equally in the dataset. \textcolor{blue}{Structuring the dataset in this way enables me to simulate the following:}
\begin{enumerate}
    \item \textcolor{blue}{I am creating a dataset consisting of two specific and distinctive domains.}
    \item Handcrafting the dataset from existing annotated corpora enables me to utilize the gold standard domain labels.
\end{enumerate}
I use Latent Dirichlet Allocation (LDA) to determine the topic of a sentence. LDA displays the latent topic structure in a document. In this case, a document to be clustered consists of a single sentence. I then assign each sentence to the most likely topic, for both training and test sentences. I train an expert for each topic and then use this expert to POS tag and parse the test sentences belonging to this topic. Note that I use the term ``topic'' in a slightly different sense than the topic modeler. I assume that the topics detected by the topic modeler do not only pertain to lexical differences, which can be beneficial for the POS tagger and the parser, but also to syntactic phenomena. Thus, one topic may focus on ``incomplete'' sentences, such as headlines in a newspaper. In this way, it can also encapsulate certain nuances of a genre.
%E.g., one topic might show strong probability towards sentences containing political news. Another topic could be focused on sentences pertaining to different kinds of protein structures in DNA.
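To make this pipeline concrete, the sketch below illustrates the grouping step on a toy example. Note that it uses scikit-learn's \texttt{LatentDirichletAllocation} purely as an illustrative stand-in for the topic modeler used in my experiments, and that the sentences, function names, and parameters are placeholders rather than the actual experimental setup.

\begin{verbatim}
# Illustrative sketch only: scikit-learn stands in for the topic modeler
# used in the experiments, and the toy sentences are not experimental data.
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation


def assign_topics(sentences, n_topics=2, seed=0):
    """Return the most probable topic per sentence and the full distributions."""
    vectorizer = CountVectorizer(lowercase=True)
    counts = vectorizer.fit_transform(sentences)      # bag of words per sentence
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    theta = lda.fit_transform(counts)                 # sentence-topic distribution
    return theta.argmax(axis=1), theta


def group_into_experts(sentences, hard_topics):
    """Hard clustering: each sentence goes to the expert of its best topic."""
    experts = defaultdict(list)
    for sentence, topic in zip(sentences, hard_topics):
        experts[int(topic)].append(sentence)
    return experts


sentences = [
    "Stocks on the exchange fell sharply in early trading .",
    "The protein binds to the promoter region of the gene .",
]
hard_topics, distributions = assign_topics(sentences, n_topics=2)
experts = group_into_experts(sentences, hard_topics)
# each experts[t] would then serve as the training data for one
# POS tagging or parsing expert
\end{verbatim}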
The rest of the chapter is structured in the following way: \atrcomments{TBD after completion} \section{Research Questions}\label{sec:quest} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures/approach.png} \caption{Overview of the architecture of the POS tagging and parsing experts.}\label{fig:architecture0} \end{figure*} \textcolor{blue}{POS tagging and dependency parsing tend to perform poorly on heterogeneous datasets, i.e., datasets consisting of sentences from different genres. This is mainly because of the different morpho-syntactic nature of texts from various domains. Consider that we are training our tagger/parser on data consisting of sentences from newspaper domain and another widely different domain, such as conversation texts or biomedical abstracts. \atrcomments{The syntactic nature of the sentences is distinctive.} Thus, if we train a tagger/parser on one domain and then tag/parse sentences from another domain, it is highly likely that the sentences will not be annotated correctly. The same is true for heterogeneous datasets, i.e., datasets containing a mix of different domains. In this case, the tagger/parser cannot distinguish between domains. %When the tagger/parser is trained on this, there are ample chances that the resulting training model will misidentify new sentences as belonging to a different domain and hence tag/parse inaccurately. %An effective way could be to extend the training process and aid the tagger/parser in recognizing the domains. Thus, it is evident that the tagger/parser would benefit from recognizing domains and annotating sentences accordingly. An effective way to do this, could be to formulate a means for the tagger/parser to discern the domains and create domain-perceptive models. Hence, in order to build a domain-aware system, my goal is to create POS tagging and parsing experts for each of these genres/domains, which are present in the dataset. Given a mixed dataset, I create training experts which can effectively tag/parse new sentences. First step is to identify the domains in the heterogeneous dataset. %The idea is that, if we encounter a dataset with a mixture of domains, we can effectively determine the domains. Once the domains are determined, we can then create training models from these sentences clustered according to their specific domains. Thus, we have representative sentences and therefore training models for each domain. Figure~\ref{fig:architecture0} shows the architecture of such a system.} For example, the dataset might be a mixture of newspaper articles, blogs, financial reports, research papers and even specialized texts such as biomedical research papers and law texts. I create experts such that each expert would learn specific information about its own genre. I determine these experts by performing topic modeling on sentences and then train an expert on the sentences of the topic. I group sentences based on their most probable topic. To test the hypothesis that topic modeling can serve to group sentences into topics, I create a mixed dataset from the financial domain (using the Penn Treebank \cite{marcus:kim:ea:94}) and from the biomedical domain (using the GENIA Corpus \cite{tateisi:tsujii:04}) such that the new handcrafted corpus consists of sentences from both domains in equal measure. Consequently, there is a clear difference in the genres in the corpus, and I have gold standard topic information. 
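Since the gold genre of every sentence is known in this handcrafted corpus, the correspondence between the induced topics and the genres can be quantified directly. The following is a minimal, hypothetical sketch of such an evaluation; the label names and the input format are illustrative only and do not reflect the actual corpus files.

\begin{verbatim}
# Hypothetical evaluation sketch: topic_ids come from the topic modeler,
# gold_genres record the corpus each sentence was drawn from ("wsj"/"genia").
from collections import Counter, defaultdict


def genre_distribution_per_topic(topic_ids, gold_genres):
    """Per topic, report the percentage of sentences coming from each genre."""
    counts = defaultdict(Counter)
    for topic, genre in zip(topic_ids, gold_genres):
        counts[topic][genre] += 1
    report = {}
    for topic, genre_counts in counts.items():
        total = sum(genre_counts.values())
        report[topic] = {g: 100.0 * n / total for g, n in genre_counts.items()}
    return report


# A perfect 2-topic split would give one topic with ~100% "wsj" sentences
# and the other with ~100% "genia" sentences.
print(genre_distribution_per_topic([0, 0, 1, 1],
                                   ["wsj", "wsj", "genia", "genia"]))
# {0: {'wsj': 100.0}, 1: {'genia': 100.0}}
\end{verbatim}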
In this chapter, I investigate the possibility of creating genre experts, given a heterogeneous dataset. Thus, I perform topic modeling on training and test data simultaneously: I assign a test sentence to the topic with the highest probability.
%This means that I currently simplify the problem of assigning new sentences to topics.
This is a simplification because this assignment of test sentences is not automated: we have to re-train the topic modeler whenever we encounter a new test sentence. In the next chapter, I assign new sentences to topics based on their similarity to sentences in the topics created during training, following the work by \citet{plank2011effective}.
To summarize, I present the results on the following research questions in this chapter.

\subsection{Question 1: Can topic modeling successfully identify genres in a dataset?}
\label{q1}
%\atrcomments{need to remove WSJ ones for now}
\blue{In this question, my goal is to determine whether topic modeling can indeed identify the hidden genres/domains in a text. Suppose we mix two domains to create a dataset and then run the topic modeler on it. It is then important to note whether the ``topics'' given by the topic modeler are strongly indicative of the domains/genres. Thus, one topic should predominantly contain sentences from the first domain and the other topic should consist of sentences from the second domain. I.e., I estimate whether the data splits obtained from the topic modeler are meaningful for creating domain experts.
%In this question, I determine whether the data splits obtained from the topic modeler are meaningful for creating domain experts. I.e., I investigate whether an unsupervised topic modeler can detect topics in a heterogeneous corpus (table~\ref{tab:mixeddata}).
%I mix two different corpora - news reports and biomedical abstracts - I assume if I mix two different corpora in equal measure, I can take their original corpus as gold standard.
%I use an artificially created heterogeneous corpus containing sentences from the Wall Street Journal (WSJ) section of the Penn Treebank \citep{marcus:kim:ea:94} and from the GENIA Corpus \citep{tateisi:tsujii:04} and take their original corpus as the gold standard topic.
The assumption is that a good split into the known topics, e.g., financial news and biomedical abstracts, will also improve POS tagging and parsing accuracy. If I assume two topics, we should be able to see a clear distinction between the financial news and biomedical sentences. I.e., for each topic, I should have a clear correspondence of its sentences to either domain/genre. I thus calculate the percentage of sentences in a given topic that belong to one genre and expect that one topic should have a high percentage and the other one a low percentage. I also experiment with a larger number of topics, to see if I can profit from a finer-grained topic definition. However, this advantage will be offset by a smaller training set since we split into more sets. }
%\atrcomments{Needs parsing experiments/need to check in the results sheet if this is there}
%\atrcomments{REMOVE} The finer grained split could also potentially identify micro-genres within a genre.
The primary assumption is that there are two very different genres and the topic modeler should be able to identify this distinction. In my case, WSJ and GENIA are representative of corpora from two very different domains: newspaper and biomedical texts. However, often the distinction between genres can be subtle.
In that case, it is imperative to check whether the topic modeler can also identify micro-genres, in addition to the broader classification. To investigate this, I exclusively use the WSJ data set. The hypothesis is that the WSJ corpus contains different newspaper sections, which may use different styles. Since there is no information available from the Penn Treebank about those sections, I cannot evaluate how well the topic modeler splits the sentences into topics, but I can evaluate whether the POS tagging/\atrcomments{parsing} experts are successful in adapting to those micro-genres.

\subsection{Question 2: Does POS Tagging Benefit from Using Topic Experts?}
\blue{In this chapter, I explore the effects of creating domain experts to improve the performance of POS tagging and parsing. I suggest ways of using topic modeling to effectively cluster sentences into domain experts. It is then important to test whether using domain experts actually improves the performance. POS tagging is often considered a solved problem for homogeneous datasets. However, there is no performance guarantee when it comes to domain adaptation. The problem is also evident when the dataset contains syntactically varying sentences. This is an important consideration since POS tagging serves as a precursor to many tasks, including dependency parsing. Thus, in this question, I examine whether the performance of POS tagging improves if we create experts based on the topics detected by the topic modeler.
%In order to investigate this question, I generate a two-topic corpus by combining data from the Wall Street Journal (WSJ) section of the Penn Treebank \cite{marcus:kim:ea:94} and from the GENIA corpus \cite{tateisi:tsujii:04} as shown in table~\ref{tab:mixeddata}.
I experiment on the mixed data corpus (table~\ref{tab:mixeddata}). The WSJ covers financial news while GENIA uses Medline abstracts as its textual basis. As a consequence, I have sentences from two different genres, but also slight variations in the POS tagsets. The tagset used in GENIA is based on the Penn Treebank tagset, but it uses the tags for proper names and symbols only in very restricted contexts. This setup allows me to test whether the topic modeler is able to distinguish the two genres, and whether POS tagging experts can profit from this separation. I assign sentences to topics based on the highest probability exhibited by the topic modeler. I then train a POS tagging expert on the training part of each topic and use this expert to tag the test sentences from this topic. In this setting, we can see if the experts can effectively handle the data sparseness caused by dividing the training set into multiple experts. Thus, I experiment with the setting in which I use topic modeling as hard clustering, i.e., I assign each sentence to the topic for which the topic modeler gave the highest probability. I then delve deeper into the results to ascertain the root causes for improvement with the experts. For POS tagging, out-of-vocabulary words are a major concern. Hence, I analyze whether all the improvements are based on lower rates of out-of-vocabulary words. For example, suppose we have two experimental settings, both using the same size of the training set, but in one setting, the majority of the training set is from GENIA while in the second setting, the training set is a mix of GENIA and WSJ. It is more likely that the former will contain a wider range of biomedical vocabulary than the latter.
However, it is also possible that the experts will learn different regularities, for example with regard to how the proper name tags are used in the two corpora. Thus, I look at the ratio of unknown words in the different experiments and at the error rates of known and unknown words. I additionally look at the confusion matrices.
%In a second setting, I experiment with soft clustering. I.e., I add each sentence to all topics, weighted by its probability distribution.
}
%\textcolor{red}{I could also potentially experiment with a different clustering technique - soft clustering, in which I add each sentence to all topics, weighted by its probability distribution. I discuss these clustering techniques in more detail in the next section}
%I also experiment with soft clustering, in which I add each sentence to all topics, weighted by its probability distribution.

\subsection{Question 3: Does Dependency Parsing Benefit from the Topic Experts?}
\blue{Domain adaptation is a challenging problem for dependency parsing.
%The problem is still sizeable for heterogeneous datasets because in addition to classic domain adaptation, we are dealing with an added problem of identifying underlying domains.
The problem is also sizeable for heterogeneous datasets because, in addition to classic domain adaptation, we are dealing with the added problem of identifying the underlying domains: the training dataset can consist of data from multiple domains. Hence, identifying these domains is also a significant part of the challenge. Here, I investigate the effects of using topic modeling experts for dependency parsing. I use hard clustering, i.e., assigning sentences to experts based on the highest probability topic given by the topic modeler. Initially, I use gold POS tags in order to abstract away from POS tagging quality. In a second step, I investigate the interaction between POS tagging and parsing experts. I.e., I am interested in whether dependency parsing can profit from using the POS tags that were determined by the POS tagging experts. This helps in determining whether integrating the POS information given by the POS experts can improve dependency parsing or whether there is no interaction between the two levels. }
I then analyze the parsing results in more detail to gauge the effect of using topic modeling experts. Here, I am primarily interested in whether there are specific types of sentences or dependencies that are grouped by the topic models, so that the parsing experts focus on a specific subset of syntactic properties. Genre differences can be captured by adapting to certain syntactic phenomena. Hence, I look more closely at whether there is a particular type of sentence that benefits from using experts over the general case. Mislabeling of dependencies is another problem, which can impact the overall performance of the parser. I look at confusion matrices for dependency labels to further investigate if there is an improvement from using the experts.
%Similar to POS tagging experts, I experiment with two different clustering techniques - hard \& soft clustering to address the data sparsity issue.

\subsection{Question 4: Can we use soft clusters of the topic models?}
% The experts can be determined based on hard or soft clustering decisions: For hard clustering, the sentences are assigned to hard topics, based on the topic that has the highest probability in that sentence.
%I.e., if for sentence $s_x$, MALLET lists the topic $t_1$ as the topic with the highest probability, then $s_x$ is added to the data set of topic $t_1$. In other words, the data set of topic $t_1$ consists of all sentences for which MALLET showed topic $t_1$ as the most likely topic. This means that the data set sizes vary between topics.
%This is a simplification since a sentence can represent different topics to different degrees. Thus, I investigate whether I can utilize the soft clustering information directly and add every sentence to every POS tagging expert, weighted based on the degree to which it represents the topic of this expert. This not only allows me to model topics in more detail, it can also help combating data sparsity since every sentence contributes to every domain expert.
% Hence, for soft clustering experiments, I utilize the entire topic distribution of a sentence by weighting sentences in the training data based on their topic distribution.
%I simulate weighting training sentences by adding multiple copies to the training files of the experts. Thus, for 2-topic experiments, a sentence with 80\% probability for topic~1 will be included 8 times in the expert for topic~1 and 2 times in the expert for topic~2, rounding up small percentages so that every sentence will be added to every expert at least once. Thus, I use a more fine grained topic model while mitigating data sparseness, but we risk adding non-typical / irrelevant sentences to experts.
\blue{In the previous questions, I looked at creating POS tagging and dependency parsing experts using a hard clustering approach. However, this technique assumes that we assign sentences to the highest probability topic. I.e., the sentences are assigned to hard topics, based on the topic that has the highest probability in that sentence. For example, if for sentence $s_x$, MALLET lists the topic $t_1$ as the topic with the highest probability, then $s_x$ is added to the data set of topic $t_1$. In other words, the data set of topic $t_1$ consists of all sentences for which MALLET showed topic $t_1$ as the most likely topic. This means that the data set sizes vary between topics.
%This is a simplification since a sentence can represent different topics to different degrees.
We are essentially dividing the dataset into as many parts as there are topics. Thus, creating experts causes inherent data sparsity, and a finer-grained split, i.e., a larger number of topics, could result in severe data sparseness.
%This is assuming that we treat topics as hard clusters, i.e., every sentence belongs to the topic with the highest probability.
In addition to the data sparsity issue, the hard clustering assumption is also a simplification since a sentence can represent different topics to different degrees. This is especially true when identifying micro-genres or, more generally, when using a larger number of topics. A potential solution is to investigate whether we can utilize the entire topic probability distribution for a sentence.
%soft clustering information directly and
I.e., I add every sentence to every domain expert, weighted based on the degree to which it represents the topic of this expert. I will refer to this method as soft clustering.
I simulate weighting training sentences by adding multiple copies to the training files of the experts.
This not only allows me to model topics in more detail, it can also help combat data sparsity since every sentence contributes to every expert.
Thus, for 2-topic experiments, a sentence with 80\% probability for topic~1 will be included 8 times in the expert for topic~1 and 2 times in the expert for topic~2, rounding up small percentages so that every sentence will be added to every expert at least once. Thus, I use a more fine grained topic model while mitigating data sparseness, but we risk adding non-typical / irrelevant sentences to experts. I.e., I may ``diffuse'' the expert knowledge too much by adding all sentences even if they are weighted. %It is important to consider the trade-offs between the clustering methods. Both of these methods have their pros and cons in different situations. My hypothesis is that, in cases where the domain split is clear and definite, hard clustering could potentially be a better choice than soft clustering, even at the risk of data sparsity caused by dividing the training set. I.e., If I create 10 experts, I am basically dividing the training set to a fraction of 10. This essentially means that, now my training experts can only train on $\frac{1}{10}^{th}$ of the full training set. There are sentences which could be ambiguous in terms of domain. In this case, soft clustering will be a better approach to take. This is because, we are not at risk of losing information by considering the highest probability topic and ignoring others. Hence, it will be interesting to observe whether adding more training data using soft clustering can actually improve the overall performance. % In the research questions, I outlined a setting of using a finer grained split with a larger number of topics. However, evidently, %mentioned that having a finer grained split, i.e., % larger number of topics could result in severe data sparseness. This is assuming that I treat topics as hard clusters, i.e., every sentence belongs to the topic with the highest probability. So,creation of experts causes a sparsity in the training data because I am dividing the training set into a number of experts. % I demonstrate the hypothesis with an example. % This is also a simplification since a sentence can represent different topics to different degrees. This is especially true in case of identifying micro-genres or larger number of topics, in general. %%MOVE SOMEWHERE%%%%%%%%%%%%%%%%%%%%%%%% } % \subsection{Question 5: What do the Experts Learn?} % \blue{ % In the previous questions, I described my hypothesis on creating experts for POS tagging and dependency parsing. POS tagging and dependency parsing has certain known issues in domain adaptation situation. Thus, evaluating the results on these issues, will help ascertain the source of improvements as a result of using the domain experts. Thus, in this question, I take a closer look at the results from the previous question to learn where the improvements by the experts (POS tagging \& parsing) come from. } %I investigate on certain known issues for both POS tagging and dependency parsing in domain adaptation situation and in general, to ascertain the source of these improvements. \section{System Overview}\label{sec:setup} \subsection{Architecture} Figure~\ref{fig:architecture0} shows the overall architecture of the system. I use sentences as documents. %Note that, I utilize the artificial heterogeneous WSJ+GENIA corpus, described in section~\ref{sec:wsjgeniamixedcorpus}. Topic modeling shows the distribution of topic probabilities for every sentence. Thus, if we have $n$ topics, we get a probability distribution for $n$ topics for a particular sentence. 
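To make this concrete, the following minimal sketch shows how such a per-sentence topic distribution can be turned into a hard cluster assignment and into soft-clustering copy counts. The sketch is in Python; the data layout, the function names, and the exact rounding scheme beyond the 80\%/20\% example given above are illustrative assumptions rather than the actual implementation.
\begin{verbatim}
# Hedged sketch: from a per-sentence topic distribution to a hard
# cluster assignment and to soft-clustering copy counts.  The scaling
# factor of 10 reproduces the 2-topic example in the text (80% -> 8
# copies, 20% -> 2 copies); it is an assumption for other settings.

def hard_topic(dist):
    """Index of the highest-probability topic (hard clustering)."""
    return max(range(len(dist)), key=lambda t: dist[t])

def soft_copies(dist):
    """One copy count per topic: roughly 10 * probability, but at
    least 1, so every sentence reaches every expert."""
    return [max(1, round(10 * p)) for p in dist]

dist = [0.8, 0.2]             # 2-topic example from the text
print(hard_topic(dist))       # -> 0   (topic 1, zero-based)
print(soft_copies(dist))      # -> [8, 2]
\end{verbatim}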
Based on this topic probability distribution, I group sentences into sentence clusters. These sentence clusters are representative of the genres/domains. I create training models (for both POS tagging and dependency parsing) from these sentences, which can then be regarded as ``experts'' for a particular genre/domain. For tagging/parsing a sentence from the test dataset, I re-execute the topic modeler to determine which domain/genre the sentence belongs to. This method of rerunning the topic modeling can be expensive, and hence I devise a method to effectively address this problem in the next chapter. Once this sentence is assigned to a cluster, I use the training model for that cluster to tag/parse that sentence.
%Based on the document topic information, I then group the sentences into genre topics.
%Note that, I utilize the artificial heterogeneous WSJ+GENIA corpus, described in section~\ref{sec:wsjgeniamixedcorpus}.
I collect all sentences from the training and test set, cluster them via the MALLET topic modeler, and determine to which expert(s) each sentence is relevant. There are several ways of determining the best expert based on the probability distribution of topics in a sentence. Then, we separate the sentences for each expert into training and test sentences, based on the previously determined data splits (see above).
%\subsection{Clustering Decision}
%\todo[inline]{Not sure if this should be here or in a separate chapter or a research question}

\subsection{Baselines}
I define two baselines to compare my results with. As the first baseline, I take the complete training set when no topic modeling is performed. Note that this is a very competitive baseline since the topic modeling experts have access to considerably smaller amounts of training data. In order to avoid differences in accuracy resulting from different training set sizes, I create a second baseline by splitting the sentences randomly into the same number of groups as the number of topics, while maintaining the equal distribution of WSJ and GENIA sentences where applicable. I.e., I assume the same number of random ``topics'', all of the same size. Thus, in the 2-topic setting with the two genres, I create two separate training sets, each containing half of the WSJ training set and half of the GENIA one. In this setting, I test all experts on the whole test set and average over the results.
% \subsection{Evaluation}
% I use the script \texttt{tnt-diff} that is part of TnT to evaluate the POS tagging results and the CoNLL shared task evaluation script\footnote{http://ilk.uvt.nl/conll/software/eval.pl} for evaluating the parsing results. I report the overall accuracy for POS tagging and attachment scores (labeled \& unlabeled) for dependency parsing.
% I report the following evaluation metrics for evaluation:
% \begin{itemize}
% \item POS tagging
% \begin{itemize}
% \item Overall Accuracy: For POS tagging, we evaluate the results based on the accuracy of identifying POS tags correctly in the test set as shown in~\ref{eq:posacc}. I.e., we measure how many tags have been correctly identified in the test set.
% \begin{equation} \label{eq:posacc}
% Accuracy = \frac{Number\ of\ correctly\ identified\ tokens}{total\ number\ of\ tokens}
% \end{equation}
% \item Accuracy for Known \& Unknown tokens: For POS tagging, unknown or out of vocabulary words (OOV) pose a challenging problem. \texttt{tnt-diff} provides a utility to measure accuracy based on the number of correct predictions for known vs. unknown words.
% I discuss the method employed by TnT to determine the POS tags for unknown words in chapter \atrcomments{TBD}. I.e., we can determine the accuracy on known and OOV words separately. Since performance on OOV words is an essential determining factor, I use this metric to further analyze the results from the domain experts in greater detail. This is a bigger challenge in a domain adaptation situation since there are domain specific words for each domain, which tend to get misclassified in a heterogeneous dataset.
% \end{itemize}
% \item Dependency Parsing
% \begin{itemize}
% \item Labeled Attachment Scores (LAS): For evaluating the results of dependency parsing experiments in this chapter, I report LAS\footnote{I report micro-averaged LAS. Micro-averaged LAS is reported on words as opposed to macro-averaged LAS, which considers sentences as the grain.}. As \ref{eq:las} shows, LAS estimates the number of words with correctly predicted head and label.
% \begin{equation} \label{eq:las}
% LAS = \frac{number\ of\ words\ with\ correct\ head\ and\ label}{total\ words}
% \end{equation}
% \item Unlabeled Attachment Scores (UAS): UAS, as the name suggests, evaluates based on correctly predicted heads.
% \begin{equation} \label{eq:uas}
% UAS = \frac{number\ of\ words\ with\ correct\ head}{total\ words}
% \end{equation}
% \end{itemize}
% \end{itemize}

\section{Results}\label{sec:results}
In this section, I discuss the results of the experiments based on the research questions delineated in section~\ref{sec:quest}.

\subsection{Question 1: Can topic modeling successfully identify genres in a dataset?}

\begin{figure*}[!t]
\centering
\fbox{\includegraphics[width=\textwidth]{figures/dist-sent-2.png}}
\caption{Distribution of GENIA sentences in 2-topic experts.}\label{fig:distsent2}
\end{figure*}
%\todo[inline]{Based on EACL split}
\begin{table}[t!]
\begin{center}
\begin{tabular}{r|rr|rr}
& \multicolumn{2}{c|}{2 topics} & \multicolumn{2}{c}{10 topics}\\
T. & \% in train & \% in test & \% in train & \% in test \\ \hline
1 & 0.71 & 0.71 & 0.48 & 0.52 \\
2 & 97.99 & 98.6 & 98.58 & 98.35 \\
3 & & & 1.16 & 0.73 \\
4 & & & 94.87 & 97.14 \\
5 & & & 0.17 & 0 \\
6 & & & 0.28 & 0.29 \\
7 & & & 99.47 & 99.12 \\
8 & & & 98.93 & 100 \\
9 & & & 98.92 & 99.33 \\
10 & & & 94.85 & 95.35 \\
\hline
\end{tabular}
\end{center}
\caption{Distribution of sentences from the WSJ+GENIA data set given 2 and 10 topics (showing the percentage of GENIA sentences per topic).\label{tab:cluster}}
\end{table}

Following question 1, I investigate whether LDA can separate the sentences into meaningful topics. Figures~\ref{fig:distsent2} and \ref{fig:distsent10} show the distribution of GENIA sentences for the 2- and 10-topic experts. These results indicate that the topic modeler effectively separates topics. For the 2-topic case (figure~\ref{fig:distsent2}), a clear split is evident as the majority of the GENIA sentences are clustered in topic 2. The percentage of GENIA sentences clustered in topic 1 is less than 1\%. Thus, the rate of misclassification is extremely low, indicating that this is a viable way to determine the genre distinction. Many of the misclassified sentences are ambiguous in terms of genre and hence more challenging to classify. Table~\ref{tab:misclassifiedsent2topic} shows some examples of WSJ sentences misclassified as GENIA and vice versa for the 2-topic case. These examples clearly exhibit that these sentences could easily be attributed to a different genre and hence could potentially benefit from using soft clustering.
I discuss this in detail in section~\ref{}.~\todo{update the section number} Additionally, table~\ref{tab:ex:2topics} shows example words from the 2-topic experiment, which show a clear separation of topics into biomedical and financial terms. %\todo{describe more may be?} %Table~\ref{tab:cluster} shows the distribution of sentences in the training and test set into different topics when I assume 2 or 10 topics. % ; the misclassified sentences constitute less than 1\%. Table~\ref{tab:misclassifiedsent} shows examples of a GENIA sentences misclassified as WSJ sentence and vice versa.%\todo[inline]{insert a misclassification example here; write a little bit about it} \begin{table}[t] \begin{tabular}{c|c|l} \begin{tabular}[c]{@{}c@{}}Gold\\ Label\end{tabular} & \begin{tabular}[c]{@{}c@{}}Classified\\ Label\end{tabular} & \multicolumn{1}{c}{Sentences} \\ \hline WSJ & GENIA & \begin{tabular}[c]{@{}l@{}}The Diet plays a minor role compared with the powerful bureaucratic \\ system.\end{tabular} \\ WSJ & GENIA & "I tried." \\ \hline GENIA & WSJ & Copyright 1997 Academic Press. \\ GENIA & WSJ & But many puzzles of the drugs remain. \\ \hline \end{tabular} \caption{Example of misclassified sentences for 2-topic experiments} \label{tab:misclassifiedsent2topic} \end{table} \begin{table}[!htb] \begin{tabular}{l|p{14cm}} 1 & mr million ui year company market stock billion share corp years shares trading president time quarter sales government business \\ \hline 2 & cells cell expression il nf activation human binding gene transcription protein kappa ab cd ti factor alpha activity induced \\ \hline \end{tabular} \caption{Examples of words in topics for the 2-topic experiments on the WSJ+Genia corpus.} \label{tab:ex:2topics} \end{table} \begin{figure}[!htb] \centering \fbox{\includegraphics[width=\textwidth]{figures/dist-sent-10.png}} \caption{Distribution of GENIA sentences in 10-topic experts.}\label{fig:distsent10} \end{figure} \begin{table}[t] \centering \begin{tabular}{l|l} 1 & \begin{tabular}[c]{@{}l@{}}market, stock, trading, exchange, index, prices, stocks, year, york, investors, big, \\ rate, yesterday, futures, shares, mr, program, securities, markets\end{tabular} \\ \hline 2 & \begin{tabular}[c]{@{}l@{}}cells, il, nf, kappa, expression, alpha, factor, induced, activation, cd, human, cell, \\ activity, mrna, tnf, kappab, nuclear, transcription, monocytes\end{tabular} \\ \hline 3 & \begin{tabular}[c]{@{}l@{}}mr, government, state, federal, year, president, house, court, bush, bill, years, time, \\ congress, judge, tax, officials, soviet, law, people\end{tabular} \\ \hline 4 & \begin{tabular}[c]{@{}l@{}}ui, ab, role, cell, immune, cells, important, gene, genes, expression, suggest, \\ inflammatory, mechanisms, results, mechanism, play, regulation, molecular, development\end{tabular} \\ \hline 5 & \begin{tabular}[c]{@{}l@{}}mr, president, executive, chief, years, vice, time, officer, year, people, chairman, \\ company, director, business, named, warner, ms, cbs, world\end{tabular} \\ \hline 6 & \begin{tabular}[c]{@{}l@{}}million, company, year, billion, corp, share, quarter, shares, sales, earlier, \\ group, net, market, stock, bank, cents, companies, debt, unit\end{tabular} \\ \hline 7 & \begin{tabular}[c]{@{}l@{}}il, activation, stat, cells, kinase, protein, cell, nf, cd, receptor, induced, \\ phosphorylation, signaling, transcription, expression, tyrosine, ti, signal, activated\end{tabular} \\ \hline 8 & \begin{tabular}[c]{@{}l@{}}cells, cell, expression, human, cd, gene, ab, differentiation, 
virus, ti, lines, \\ protein, leukemia, expressed, ebv, type, ii, class, alpha\end{tabular} \\ \hline 9 & \begin{tabular}[c]{@{}l@{}}patients, receptor, glucocorticoid, cells, binding, receptors, blood, ti, ab, \\ levels, lymphocytes, peripheral, normal, gr, cortisol, human, cell, subjects, number\end{tabular} \\ \hline 10 & \begin{tabular}[c]{@{}l@{}}binding, promoter, transcription, gene, nf, cells, protein, site, dna, factor, \\ activity, cell, kappa, specific, human, expression, region, sequence, element\end{tabular} \\ \hline \end{tabular} \caption{Examples of words in topics for the 10-topic experiments on the WSJ+Genia corpus.} \label{tab:ex10topics} \end{table} \begin{table}[!htb] \centering \begin{tabular}{l|l|l} \begin{tabular}[c]{@{}l@{}}Gold \\ Label\end{tabular} & \begin{tabular}[c]{@{}l@{}}Classified\\ Label\end{tabular} & \multicolumn{1}{c}{Sentences} \\ \hline WSJ & GENIA & \begin{tabular}[c]{@{}l@{}}The sterilizing gene is expressed just before the pollen is about to \\ develop and it deactivates the anthers of every flower in the plant.\end{tabular} \\ WSJ & GENIA & \begin{tabular}[c]{@{}l@{}}Biosource Genetics Corp. , Vacaville , Calif. , is developing a spray \\ containing a gene that spreads from cell to cell and interferes with \\ the genes that are responsible for producing pollen .\end{tabular} \\ WSJ & GENIA & Perhaps . " \\ WSJ & GENIA & And it 's relaxing . \\ WSJ & GENIA & Finally , he got help . \\ WSJ & GENIA & BRIEFS : \\ \hline GENIA & WSJ & \begin{tabular}[c]{@{}l@{}}Representative environmental estrogenic compounds both from plant \\ and industrial sources were also tested .\end{tabular} \\ GENIA & WSJ & Copyright 1997 Academic Press . \\ GENIA & WSJ & Leukocytes migrated most rapidly at night . \\ GENIA & WSJ & Both messages rapidly declined thereafter . \\ \hline \end{tabular} \caption{Example of misclassified sentences for 10-topic experiments. } \label{tab:misclassifiedsentences10topic} \end{table} Figure~\ref{fig:distsent10} shows topic-wise distribution of GENIA sentences for train as well as test dataset for the 10-topic experiments. We notice that topics 2, 4, 7, 8, 9, and 10 contain mainly GENIA sentences while the remaining topics cover mainly WSJ sentences. Table~\ref{tab:ex10topics} shows sample words with the highest probability in the topics. This table corroborates with figure~\ref{fig:distsent10}. The topic modeler is successful in representing the domains, even for a finer grained split. Table~\ref{tab:misclassifiedsentences10topic} shows some sample sentences which are misclassified. These sentences further confirm that while the topic modeler is quite effective in indicating a probable genre of the sentence, we could benefit from utilizing the probability distribution. In both settings, the error rate is between 0.2\% and 5\% (see table~\ref{tab:cluster}), i.e., I obtain a distinct split between GENIA and WSJ, which should give us a good starting point for creating POS tagging and dependency parsing experts. 
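The per-topic percentages in table~\ref{tab:cluster} follow directly from the hard cluster assignments and the gold genre labels. As a minimal illustration (the variable names and in-memory data structures are assumptions for this sketch, not the actual implementation), the GENIA share per topic can be computed as follows:
\begin{verbatim}
# Hedged sketch: percentage of GENIA sentences per topic, computed
# from hard cluster assignments and gold genre labels.
from collections import Counter

def genia_share(assignments, genre):
    """assignments: sentence id -> assigned topic;
       genre: sentence id -> "WSJ" or "GENIA"."""
    total, genia = Counter(), Counter()
    for sent_id, topic in assignments.items():
        total[topic] += 1
        if genre[sent_id] == "GENIA":
            genia[topic] += 1
    return {t: 100.0 * genia[t] / total[t] for t in total}

# Toy example with four sentences and two topics:
assignments = {"s1": 1, "s2": 2, "s3": 2, "s4": 2}
genre = {"s1": "WSJ", "s2": "GENIA", "s3": "GENIA", "s4": "WSJ"}
print(genia_share(assignments, genre))  # {1: 0.0, 2: 66.66...}
\end{verbatim}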
\subsection{Question 2: Does POS Tagging Benefit from Using Topics?}

\begin{table}[t]
\centering
\begin{tabular}{l|cc}
& \multicolumn{2}{c}{Accuracy} \\
Setting & \multicolumn{1}{r}{2 topics} & \multicolumn{1}{r}{10 topics} \\ \hline
Full training set & \multicolumn{2}{c}{96.64} \\
Random split & 96.48 & 95.49 \\
Topic model & \textbf{96.84} & 96.34 \\ \hline
%Soft Clustering & 96.73 & \textbf{96.84} \\ \hline
\end{tabular}
\caption{Comparing the topic model experts to the baselines on the WSJ+GENIA data set for POS tagging.\label{tab:mixedresults}}
\end{table}

In this section, I discuss the results of the experiments for the question of whether the POS tagger can benefit from using topic modeling, i.e., whether POS tagging results can be improved by training experts for genres provided by topic modeling. I compare the topic modeling approach to two baselines for the 2-topic and 10-topic setting. I then analyze the results to determine the cause of improvement.
%I also perform a soft clustering experiment, in which each sentence is added to every topic, weighted by its probability.

\subsubsection{POS tagging Experts}
The results in Table~\ref{tab:mixedresults} show that if I assume a 2-topic setting, the experts perform better than both baselines, i.e., the model trained on the full training set and the model with randomly chosen ``topics''. The 2-topic expert model reaches an accuracy of 96.84\%, which is slightly higher than the full training set accuracy of 96.64\%. We know that the 2-topic setting provides a clear separation between WSJ and GENIA (Table~\ref{tab:cluster}). Thus, this setting outperforms the full training set using a smaller amount of training data. There is also an increase of 0.36 percent points over the accuracy of the 2-way random split setting. For the 10-topic setting, the topic expert model outperforms the random split of the same size by 0.85 percent points, which is a larger difference than for the 2-topic setting. This shows that the finer grained splits model important information. However, the topic expert model does not reach the accuracy of the baseline using the full training set. This can be attributed to the reduced size of the training set for the experts. Each expert encounters only around $\frac{1}{10}$ of the training data as compared to the full training baseline, and this data sparseness causes a drop in performance.

\subsubsection{Analysis of Results}
I further investigate the differences between the models learned based on a random split as opposed to the models learned based on the topic models. I concentrate on the 2-topic models since this is the closest approximation of the mixed domain problem that I am addressing in this chapter. First, I take a closer look at the distribution of unknown words, and the POS taggers' accuracy on known and unknown words. Unknown words are defined as those words from the test set that do not occur in the training set. This means that the POS tagger needs to guess the word's possible tags without having access to its ambiguity class. The results for this investigation are listed in Table~\ref{tab:known}. The results indicate that the topic experts perform considerably better than the random split baseline on unknown words. On average, the percentage of unknown words in the topic experts is also lower than in the random split baseline, with the WSJ expert outperforming the GENIA expert by a slight margin.
%The number of unknown words is $\frac{1}{6}^{th}$ of the total u %These results show that the percentage of unknown words is higher by 0.76 percent points in the random split setting than that of the topic experts. This means that the two topic models acquire more specialized lexicons that allow the taggers to cover more words. A look at the accuracies shows that, as expected, the accuracy for known words is higher in the topic model setting. However, the results also show that the accuracy on unknown words is significantly higher in this setting, 85.22\% for the topic model experts vs. 83.11\% for the random splits. This means that the POS tagging models learned from the topic model data split has acquired better models of unknown words based on the word distribution from the training corpora. \begin{table}[t] \begin{small} \begin{center} \begin{tabular}{lrlr|lrlr} \multicolumn{4}{c}{Random split} & \multicolumn{4}{|c}{Topic model}\\ \multicolumn{2}{l}{split 1} & \multicolumn{2}{l}{split 2} & \multicolumn{2}{|l}{GENIA-majority} & \multicolumn{2}{l}{WSJ-majority} \\ \hline %s1pos & s2pos & t1pos & t2pos NN & 335 & NN & 300 & NN & 387 & CD & 227 \\ JJ & 219 & JJ & 187 & JJ & 217 & NNP & 226 \\ CD & 151 & CD & 162 & CD & 70 & NN & 132 \\ NNP & 132 & NNP & 162 & NNS & 51 & JJ & 104 \\ NNS & 67 & NNS & 69 & NNP & 28 & NNS & 57 \\ VBN & 31 & VBG & 30 & FW & 13 & VBN & 32 \\ \hline \end{tabular} \end{center} \end{small} \caption{The 6 most frequent POS tags assigned to unknown words (2 topics).\label{tab:res:unkpos}} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{lrlr|lrlr} \multicolumn{3}{c}{Random split} && \multicolumn{3}{|c}{Topic model}\\ Gold & TnT & No. & & Gold & TnT & No. \\ \hline NN & JJ & 141 & & NN & JJ & 122\\ JJ & NN & 111 & & JJ & NN & 104\\ NNP & NN & 93 & & VBD & VBN & 82\\ VBD & VBN & 88 & & NNP & NNPS & 70\\ NN & NNP & 66 & & RB & IN & 64\\ IN & RB & 65 & & IN & RB & 61\\ RB & IN & 62 & & NN & NNP & 53\\ NNP & NNPS & 53 & & VBG & NN & 50\\ %VBG & NN & 51 & & VBN & JJ & 48\\ %VBN & JJ & 50 & & VBN & VBD & 41\\ \hline \end{tabular} \end{center} \caption{The 8 most frequent confusion sets (2 topics).\label{tab:res:confus}} \end{table} \begin{table*}[!htb] \begin{center} \resizebox{\textwidth}{!}{% \begin{tabular}{l|rrr|rrr} & \multicolumn{3}{c}{Random split} & \multicolumn{3}{|c}{Topic model}\\ Topic & \% Unknown & Known Acc. & Unknown Acc. & \% Unknown & Known Acc. & Unknown Acc. \\ \hline 1 & 4.79 & 97.06 & 82.84 & 4.29 & 96.29 & 85.31 \\ 2 & 4.86 & 97.25& 83.38 & 3.85 & 98.35 & 85.12 \\ \hline avg. & 4.83 & 97.16 & 83.11 & 4.07 & 97.33 & 85.22\\ \hline \end{tabular}% } \end{center} \caption{Unknown word rates and accuracies for known and unknown words in the WSJ+GENIA experiment using 2 topics for POS tagging.\label{tab:known}} \end{table*} Overall accuracy of a POS tagger depends on how the unknown words are tagged. Hence, I investigate which POS labels are assigned to unknown words in the two settings. The 6 most frequent POS tags per setting and topic are shown in table~\ref{tab:res:unkpos}. A comparison shows that for the random split, both subsets have a very similar distribution: Unknown words are assigned one of the following labels: noun (NN), adjective (JJ), cardinal number (CD), proper name (NNP), plural noun (NNS), past participle (VBN) or present participle (VBG). 
The distributions for the topic models show a visibly different picture: In the WSJ-majority topic (topic 1, see table~\ref{tab:cluster}), cardinal numbers are the most frequent class for unknown words, followed closely by names. These two labels are roughly three times and eight times as frequent as in topic 2. In contrast, the GENIA-majority topic (topic 2) is closer to the distribution of the models based on random sampling, but it has a higher number of foreign words (FW), which is an indication that some biomedical terms are not recognized as such and are then marked as foreign words. Examples of such cases are the words ``aeruginosa'' and ``Leishmania''. Overall, these results corroborate our hypothesis that the topic models learn individual characteristics of unknown words.

Finally, I consider the types of errors that the POS taggers make by looking at confusion sets.
%, i.e., sets of gold standard and differing automatically assigned POS tag with their frequencies.
The 8 most frequent confusion sets under both conditions are shown in table~\ref{tab:res:confus}. A closer look at the confusion sets of the two experiments shows that the categories in the random split setting are consistent with standard errors that POS taggers make: These POS taggers mostly confuse nouns (NN) with adjectives (JJ) and with names (NNP), past tense verbs (VBD) with participles (VBN), and prepositions (IN) with adverbs (RB). One notable difference in the topic modeling setting is that the number of confusions between nouns (NN) and names (NNP) (in both directions) is reduced by almost half in comparison to the random split setting: 88 vs.\ 159 cases (note that the confusion NN/NNP is not among the 8 most frequent cases for the topic model as shown in table~\ref{tab:res:confus}; it is the 12th most frequent confusion set). Names are generally difficult because they constitute an open set, and thus not all of them will be found in the training set. For example, names that were misclassified as nouns in the random split data set included ``BART'', ``Jefferies'', and ``Tulsa''. Thus, a reduction of these errors means that the topic model experts are learning characteristics that allow them to handle domain specific names better, even though the respective learned model files of the topic model setting contain considerably fewer lexical entries.
%Since training set size is a detrimental factor for the larger number of topics, I also conducted an experiment where I use soft clustering so that every sentence is represented in every topic, but to a different degree. The last row in table~\ref{tab:mixedresults} reports the results of this experiment. We notice that the 2-topic experts cannot benefit from the soft clustering. Since the separation between WSJ and GENIA is very clearly defined for the 2-topic experiments, the advantage of having a larger training set is outweighed by too many irrelevant examples from the other topic. However, the 10-topic model profits from the soft clustering, which indicates that soft clustering can alleviate the data sparseness problem of the POS tagging experts for larger numbers of topics.
A more detailed analysis of the POS tagging results (on a slightly different data split) can be found in \cite{mukherjee:kuebler:ea:16}.
This work includes an experiment showing that the POS tagging experts also increase performance on the WSJ corpus alone, i.e., POS tagging experts also perform better on more homogeneous collections, adjusting to less obvious differences between sentences.
%\subsection{Parsing Experts}

\subsection{Question 3: Does Dependency Parsing Benefit from the Topics?}
In this section, I present the results of using topic experts for dependency parsing. I discuss my results from two perspectives: one where I assume gold POS tags, and another where I use POS tags from TnT as input to the parser. This will help me in determining the effect of POS tags on parsing choices. In addition to that, I also analyze certain aspects of the results in more detail.

\subsubsection{Parsing Experts}

\paragraph*{Using Gold POS Tags}\label{sec:goldpos}

\begin{table*}[t!]
\centering
\begin{tabular}{l|cc|cc}
\multicolumn{1}{c|}{\multirow{2}{*}{Setting}} & \multicolumn{2}{c|}{LAS} & \multicolumn{2}{c}{UAS} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l}{2 topics} & \multicolumn{1}{r|}{10 topics} & \multicolumn{1}{l}{2 topics} & \multicolumn{1}{r}{10 topics} \\ \hline
Full training set & \multicolumn{2}{c|}{88.67} & \multicolumn{2}{c}{91.71} \\
Random split & 87.84 & 84.91 & 90.86 & 88.64 \\
Topic model & \textbf{90.51} & 88.38 & \textbf{92.14} & 90.3 \\ \hline
%Soft clustering & 89.86 & \textbf{89.91} & 91.99 & \textbf{91.84} \\ \hline
\end{tabular}
\caption{Results of the dependency parsing experiments using gold POS tags.}
\label{tab:tmvsfs}
\end{table*}

I now look into the parsing experiments using gold standard POS tags. The choice of gold POS tags allows me to focus mainly on the contribution of the topic modeling experts to the parsing results, while abstracting away from the effect of POS tags on parsing decisions. The results of the experiments are shown in Table~\ref{tab:tmvsfs} for the 2-topic and 10-topic settings, in comparison to the two baselines, for hard clustering, i.e., taking the highest topic probability as the determining factor for cluster membership.
%and soft clustering experiments.
The results in table~\ref{tab:tmvsfs} indicate that the 2-topic expert model reaches an improvement over the baseline using the full training set for both the labeled attachment score (LAS) and the unlabeled attachment score (UAS). There is an increase of 1.84 percent points over the baseline for LAS, and an increase of 0.43 percent points for UAS. However, for the 10-topic setting, both the LAS and the UAS are slightly lower than the baseline. For LAS, the difference is 0.29 percent points, while for UAS, the difference is 1.41 percent points. The results show that the gain from the topic experts is offset by the reduced training set, parallel to the results for POS tagging. Both the 2-topic and the 10-topic experts outperform the random split baseline (which uses similar training set sizes), with LAS gains of 2.67 and 3.47 percent points, respectively.
%The soft clustering results show the same trends as in the POS tagging experiments: For the 2-topic setting, soft clustering outperforms the full baseline by 1.19 percent points. But it does not exceed the hard clustering results. In the 10-topic setting, soft clustering outperforms the full baseline as well as the hard clustering setting. This is because sentences with a 50\% probability of belonging to topic~1 and a 40\% probability for topic~3 need to be considered to belong to both topics. This result also shows that this method effectively handles the training data sparsity in the 10-topic setting.
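The LAS and UAS values reported here follow the standard CoNLL shared task definitions. Purely as an illustration of what these two scores measure (not a substitute for the official evaluation script), a minimal sketch over an assumed token representation of head/label pairs could look as follows:
\begin{verbatim}
# Hedged sketch: minimal LAS/UAS computation over aligned tokens.
# Each token is a (head, label) pair; punctuation handling and the
# exact conventions of the official evaluation script are omitted.

def attachment_scores(gold, pred):
    """gold, pred: equal-length lists of (head, label) per token."""
    assert len(gold) == len(pred)
    uas = sum(1 for (gh, _), (ph, _) in zip(gold, pred) if gh == ph)
    las = sum(1 for (gh, gl), (ph, pl) in zip(gold, pred)
              if gh == ph and gl == pl)
    n = len(gold)
    return 100.0 * las / n, 100.0 * uas / n

# Example: 4 tokens, 3 correct heads, 2 of them correctly labeled.
gold = [(2, "SBJ"), (0, "ROOT"), (2, "OBJ"), (3, "NMOD")]
pred = [(2, "SBJ"), (0, "ROOT"), (2, "NMOD"), (2, "NMOD")]
print(attachment_scores(gold, pred))  # (50.0, 75.0)
\end{verbatim}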
\paragraph*{Using the POS Tagger}\label{TnTPOSinParsing} In this section, I explore the results of using POS tags from the POS tagger TnT as the input for the parser. Thus, in this case, I consider the effects of POS tags on parsing decisions. This gives rise to four major scenarios: \begin{enumerate} \item The full training set is used for POS tagging and for parsing (full baseline). In this case, I train the POS tagger on the full training set and then use these POS tags for parsing. The parser also has access to the full training set. This enables me to draw a parallel baseline to the original full training baseline. \item Random splits are used for parsing and POS tagging. I.e., the POS tagger and parser are trained on random splits (random baseline). This is similar to the full training set baseline, except I use random splits for POS tagging and these tags are incorporated in parsing. \item Topic models are used for training the parser, but the POS tagger, TnT, is trained on the whole training set. In this case, I abstract the effect of using topic modeling experts on POS tags. I.e., I evaluate the performance based on POS tags trained on the full training set. The parser, however, is trained on the experts which uses these POS tags. \label{S2} \item Topic models are used for training the parser and the POS tagger. For this scenario, I assess the effect of using POS tags determined by the POS experts in the parsing experts. \label{S1} \end{enumerate} I use the random split case as the lower baseline for these experiments and the full training set as the more competitive baseline. Table~\ref{tab:TnTPOS} shows the results. \begin{table*}[t!] \centering \begin{tabular}{l|rr|rr} \multicolumn{1}{l|}{\multirow{2}{*}{Setting}} & \multicolumn{2}{c|}{LAS} & \multicolumn{2}{c}{UAS} \\ \multicolumn{1}{c|}{} & 2 topics & 10 topics & 2 topics & 10 topics \\ \hline 1. Full set POS + full set parsing & \multicolumn{2}{c|}{86.70} & \multicolumn{2}{c}{90.26} \\ 2. Random split POS + random split parsing & 85.77 & 81.33 & 89.11 & 85.73 \\ 3. Full set POS + topic model parsing & 88.30 & 86.13 & 90.43 & 88.47 \\ 4. Topic model POS + Topic model parsing & \textbf{88.35} & 85.68 & \textbf{90.55} & 88.15 \\ \hline \end{tabular} \caption{Results of the dependency parsing experiments using TnT POS tags.} \label{tab:TnTPOS} \end{table*} Table~\ref{tab:TnTPOS} shows that in the 2-topic setting, using topic modeling experts on the POS level as well as on the parsing level reaches the highest results with an improvement of around 2\% in LAS in comparison to the full baseline parser, from 86.70\% to 88.35\%. The gain in UAS is considerably smaller: The topic modeling expert reaches 90.55\% as opposed to 90.26\% for the full baseline. In contrast, the topic modeling setting for the 10-topic setting outperforms the random baseline but does not reach the full baseline, thus mirroring the trends we have seen before. When I compare the experiments where I use the full POS tagging baseline along with topic model parsing experts (row 3 in table~\ref{tab:TnTPOS}) to the full topic model (row 4), I observe that the latter model reaches only very minimal gains by using the topic modeling POS tagger when I use 2 topics, and there is a negative trend when I use 10 topics. I.e. the overall quality of the POS tagger is more important than its specialization. Thus, even if the topic model POS tagger outperforms its full baseline, the learned adaptations only have a minimal effect on parsing accuracy. 
\subsubsection{Analysis}
Since the goal is to determine the performance of the experts, I take a closer look at the results presented for the parsing experiments using gold POS tags in section~\ref{sec:goldpos}. The results show that the 2-topic parsing experts outperform the general parser trained on the full training set by almost 2 percent points. I looked at the 5 sentences that had the lowest LAS when I used the general parser. These sentences are shown in table~\ref{tab:compLASTMvsFS}, along with their LAS for both settings. The table clearly shows that the topic expert parsers reach a much higher LAS across all these sentences, and the highest increase reaches 35 percent points. We also see that there are two headlines among these sentences. They are different in their syntactic patterns from other sentences and thus difficult to parse. For this reason, I decided to have a closer look at all ``incomplete'' sentences, i.e., sentences that do not have verbs, as an approximation of headlines. I found that of the 1~310 such sentences in the training set, 437 were grouped into topic~1 and the other 873 sentences into topic~2. In the test set, I had 65 such sentences, 15 in topic~1 and 50 in topic~2. For the sentences in topic~1, I calculate an overall LAS of 76.54, for the ones in topic~2 an overall LAS of 89.91. These results show that the parser expert for topic~2 has adapted substantially better to the syntax of such untypical sentences than the parser expert for topic~1.

\begin{table*}[t]
\centering
\begin{tabular}{p{11cm}|r|r}
\multicolumn{1}{c|}{Sentence} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Fulltext\\ LAS\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}2-topic \\ LAS\end{tabular}} \\ \hline
Phyllis Kyle, Stephenson Newport News , Va . & 0 & 25.00 \\
But volume rose only to 162 million shares from 143 million Friday . & 46.15 & 61.54 \\
Fidelity , for example , prepared ads several months ago in case of a market plunge . & 47.06 & 82.35 \\
CALL IT un-advertising . & 50.00 & 75.00 \\
( See related story : " And Bills to Make Wishes Come True " -- WSJ Oct. 17 , 1989 . & 52.38 & 61.90 \\ \hline
\end{tabular}
\caption{Comparison of LAS for the sentences with the lowest LAS in the fulltext setting.}
\label{tab:compLASTMvsFS}
\end{table*}

\begin{table*}[t]
\centering
\begin{tabular}{ll|rrr}
Gold Dep. & Pred. Dep. & Fulltext & Topic~1 & Topic~2 \\ \hline
ADV & NMOD & 121 & 37 & 86\\
PMOD & NMOD & 101 & 21 & 67\\
NMOD & ADV & 100 & 34 & 57 \\
AMOD & NMOD & 91 & 26 & 83\\
CONJ & NMOD & 86 & 13 & 56\\ \hline
\end{tabular}
\caption{The 5 most frequent dependency label confusions of the full baseline parser.}
\label{tab:conf:FT:TM}
\end{table*}

I also looked at the dependency labels that were mislabeled most often by the more general, full baseline parser. The 5 most frequent combinations are shown in table~\ref{tab:conf:FT:TM}, with their frequencies in the test sentences of the two topics. These numbers show that the topic~1 expert is much better adapted to these confusion sets, resulting in lower error rates than the topic~2 expert. This shows very clearly that the two experts learn different patterns. While the topic~2 expert, i.e., the GENIA expert, performs better than the topic~1 expert on the incomplete sentences, the topic~1 expert, i.e., the WSJ expert, handles the common dependency label errors better.

\subsection{Question 4: How do we cluster sentences into experts?
} \begin{table}[t] \centering \begin{tabular}{l|cc} & \multicolumn{2}{c}{Accuracy} \\ Setting & \multicolumn{1}{r}{2 topics} & \multicolumn{1}{r}{10 topics} \\ \hline Full training set & \multicolumn{2}{c}{96.64} \\ Random split & 96.48 & 95.49 \\ Topic model & \textbf{96.84} & 96.34 \\ \textbf{Soft Clustering} & 96.73 & \textbf{96.84} \\ \hline \end{tabular} \caption{Soft Clustering for POS tagging experts.\label{tab:poswsoftcluster}} \end{table} In the previous questions, I discussed creating experts for POS tagging and parsing using the highest probability topic as the guiding factor in deciding which cluster a sentence belong to. I refer to this method as hard clustering. However, the finer grained split into topics causes severe data sparsity for hard clustering. The training set size is actually a detrimental factor for the larger number of topics. Hence, I conduct experiments for POS tagging and parsing, where I use soft clustering so that every sentence is represented in every topic, but to a different degree. Consider the following example: In order to represent the distinct domains (WSJ, GENIA) and also the sub-genres of a particular domain, I create 2 as well as 10 topics experts. Typically, if I consider the topic prior as 2 \& 10, the distribution looks as given in Table~\ref{tab:2topicssent} \& \ref{tab:10topicssent}~\footnote{an approximation}, for the following WSJ sentence: ``Though growers can't always keep the worm from the apple , they can protect themselves against the price vagaries of any one variety by diversifying -- into the recently imported Gala , a sweet New Zealand native ; the Esopus Spitzenburg , reportedly Thomas Jefferson's favorite apple ; disease-resistant kinds like the Liberty." % Please add the following required packages to document preamble: %\usepackage{multirow} \begin{table*}[!htb] \centering \caption{2 topics distribution} \begin{tabular}{cc} %\multicolumn{2}{c}{2-topic probability distribution} \\ \hline 0 & 1 \\ 95.90 & 4.10 \\ \hline \end{tabular} \label{tab:2topicssent} \caption{10 topics distribution %\\ } \begin{tabular}{cccccccccc} %\multicolumn{10}{c}{10-topic probability distribution} \\ \hline 0 & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{9} \\ 0.11 & 0.10 & 0.08 & 0.10 & 0.09 & 30.58 & 15.28 & 49.55 & 3.93 & 0.16 \\ \hline \end{tabular} \label{tab:10topicssent} %\caption{2 \& 10-topic probability distribution for a WSJ corpus sentence: \\ } %\label{tab:10t-probab} \end{table*} In Table~\ref{tab:2topicssent} and~\ref{tab:10topicssent}, we can see the probability of the highest (7) and the second highest (5) topic are fairly close. Thus, if I take topic 7 and discard the other topics, I lose a fair share of information given by LDA. In other words, dividing a corpus into experts creates a significant sparsity in training data if I consider the highest probability topic and discard the rest. This problem can be mitigated if we effectively utilize the whole topic probability distribution by giving a sentence to more than one topic. Thus, I also investigate whether we can utilize the soft clustering information directly and add every sentence to every domain expert, weighted based on the degree to which it represents the topic of this expert. 
This not only allows us to model topics in more detail, it can also help combat data sparsity since every sentence contributes to every expert. The risk is that I diffuse the expert knowledge too much by adding all sentences even if they are weighted.

Table~\ref{tab:poswsoftcluster} reports the results of using soft clustering for the POS tagging experts. I report all the results of the POS tagging experiments in table~\ref{tab:poswsoftcluster} in order to compare and contrast soft clustering with the previous experiments. Rows 1, 2, and 3 report the results of the full training baseline, the random split baseline, and the hard-clustered topic experts, respectively.
%The last row in table~\ref{tab:poswsoftcluster} reports the results of this experiment.
We notice that the 2-topic experts cannot benefit from the soft clustering. Since the separation between WSJ and GENIA is very clearly defined for the 2-topic experiments, the advantage of having a larger training set is outweighed by too many irrelevant examples from the other topic. However, the 10-topic model profits from the soft clustering, which indicates that soft clustering can alleviate the data sparseness problem of the POS tagging experts for larger numbers of topics.

\begin{table*}[t!]
\centering
\begin{tabular}{l|cc|cc}
\multicolumn{1}{c|}{\multirow{2}{*}{Setting}} & \multicolumn{2}{c|}{LAS} & \multicolumn{2}{c}{UAS} \\
\multicolumn{1}{c|}{} & \multicolumn{1}{l}{2 topics} & \multicolumn{1}{r|}{10 topics} & \multicolumn{1}{l}{2 topics} & \multicolumn{1}{r}{10 topics} \\ \hline
Full training set & \multicolumn{2}{c|}{88.67} & \multicolumn{2}{c}{91.71} \\
Random split & 87.84 & 84.91 & 90.86 & 88.64 \\
Topic model & \textbf{90.51} & 88.38 & \textbf{92.14} & 90.3 \\
\textbf{Soft clustering} & 89.86 & \textbf{89.91} & 91.99 & \textbf{91.84} \\ \hline
\end{tabular}
\caption{Results of the dependency parsing experiments using gold POS tags, including soft clustering.}
\label{tab:parsingsoftcluster}
\end{table*}

I investigate the effect of soft clustering on the parsing experts in table~\ref{tab:parsingsoftcluster}. The soft clustering results show the same trends as in the POS tagging experiments: For the 2-topic setting, soft clustering (row 4) outperforms the full baseline by 1.19 percent points, but it does not exceed the hard clustering results. In the 10-topic setting, soft clustering outperforms the full baseline as well as the hard clustering setting. The primary reason is the data sparsity issue that soft clustering alleviates. A secondary reason could be the ambiguity of sentences in terms of genre: e.g., sentences with a 50\% probability of belonging to topic~1 and a 40\% probability for topic~3 need to be considered to belong to both topics. The results of the soft clustering experiments show that this method effectively handles the training data sparsity in the 10-topic setting.
%%%%%%%%%%ANALYSIS
% \subsection*{Question 5: What do the experts learn?}
% It is important to delve deeper into the results to understand where improvement stems from. For POS tagging, the experts outperformed the random split baseline by a greater margin. Hence I take a closer look at the differences. I analyze the results for gold POS tags experiments to better understand the improvement for dependency parsing, since it yields most accurate predictions.
% \subsubsection*{POS Tagging}
% \subsubsection*{Dependency Parsing}

\section{Summary}
In this chapter, I have presented a flexible and fully automated methodology for creating POS tagging and parsing experts for different genres.
These experts can be extracted from a heterogeneous text source, without the need to separate the genres manually. Additionally, I obtain individual experts, which can be used separately. I.e., I have training models for different domains which can be used as and when required. The results show a considerable improvement in POS tagging and parsing results on heterogeneous domains when using unsupervised topic modeling to separate the data into different topics. I can train POS tagging and parsing experts on the individual topics, which show an increased accuracy in comparison to their counterparts trained on the whole, heterogeneous training set. In theory, I can repeat the experiments for any number of topics, but at the cost of reducing the training data per expert. This data sparsity resulting from having to split the training set into different topics can be mitigated by assigning every sentence to every topic but weighting its importance for a topic by the probabilities of the topic modeler. I also showed that while the POS tagger and the dependency parser individually profit from the split into topic experts, the combination of topic expert POS tagger and parser does not improve over using a POS tagger trained on the whole data set.

A deeper analysis of the results reveals interesting patterns. For POS tagging, the analysis shows that a significant improvement is achieved, particularly for proper names. The topic model experts are almost three times more likely to tag a name correctly than the random split models. The parsing results show that the experts are indeed more successful in adapting to certain syntactic aspects than the full training baseline. This kind of technology can find substantial applications in adapting POS taggers \& parsers to the characteristics of different speech or cognitive impediments, but also to the characteristics of non-native speakers.

In this chapter, I have simplified the problem of assigning sentences to the experts. I.e., I retrain the topic modeler for new test sentences. However, retraining the topic modeler could potentially change the topic composition. A better approach is to estimate the similarity between a test sentence and the domain experts and then assign the sentence to the most similar expert for tagging/parsing. This potentially alleviates the problem posed by retraining the topic experts. I discuss these methods in detail in the next chapter. The key findings presented in this chapter have been published in two papers~\citep{icon-atr,eacl-atr}.
%In this chapter, I have shown that we can improve POS and parsing results on heterogeneous domains by using unsupervised topic modeling to separate the data into different topics. We can then train POS tagging and parsing experts on the individual topics, which show an increased accuracy in comparison to their counterparts trained on the whole, heterogenous training set. The data sparsity resulting from having to split the training set into different topics can be mitigated by assigning every sentence to every topic but weighting their importance to a topic by the probabilities of the topic modeler. I also showed that while the POS tagger and the dependency parser individually profit from the split into topic experts, the combination of topic expert POS tagger and parser does not improve over using a POS tagger trained on the whole data set.
%In our research, I have investigated whether we can use topic modeling in order to create specialized subsets of annotated data, which can then be used to train POS tagging experts for the topic. Our results show that the POS tagging experts achieve higher accuracies both for a manually created mixed data set with financial news and medical texts. The latter shows that our system is capable of adapting to nuances in the micro-genres within the Wall Street Journal texts. Our analysis also shows that a significant improvement is achieved, particularly, for proper names. The topic model experts are almost three times more likely to tag a name correctly than the random split models. %We have created a flexible and fully automatic methodology of POS tagging experts for different genres. These experts can be extracted from a heterogeneous text source, without the need of having to separate the genres manually. Additionally, we obtain individual experts, which can be used separately. Further applications for this kind of technology can be found in adapting POS taggers to characteristics of different speech or cognitive impediments but also to the characteristics of non-native speakers. %Our current experiments have used 2, 5, and 10 topic models. In theory, the number of topics can be set to a higher number, thus creating more subtle topics. However, as we have also shown, the higher the number of topics, the more severe data sparseness becomes. This can be mitigated by using training sentences for more than one topic, based on the distribution provided by the topic modeler. We plan on extending our work to syntactic parsing, for which the differences between genres will be more noticeable.
\chapter{Evaluation}
\label{chap:evaluation}
The online interactive map was evaluated in two different studies. Beforehand, it was published online and merged into the existing Energy Charts website to make it available to a large audience. In addition, an online survey was prepared and deployed online to assess the impact, usefulness, and usability of this visualization tool. This online study evaluates the map with respect to two different characteristics: the first study focused on the representational aspects of the map, and the second aimed at the usability of the map and its interactive functionality. The goal of this survey was to accumulate qualitative feedback on the map interface, the visualization techniques, the usability, and the possibilities for accessing data. In this chapter, the survey studies and their results are analyzed, explained, and discussed in detail.

\section{Online Survey Setup}
\label{chap:suveySetup}
An online survey application is used to create and host the survey. LimeSurvey\footnote{Official LimeSurvey Website, \url{https://www.limesurvey.org/} (last accessed on \today)} is a free and open source tool for publishing surveys in the web environment. It provides a wide variety of features for creating graphical and statistical analyses of survey results. Four different question groups were created using this survey application, together with a brief introduction about the purpose of this evaluation and a short usage guide for the visualization tool. This way, users can explore the map and discover its functionality before participating in the survey. The participants were invited via email and social media. They did not get any reward for participating in the online survey. None of the survey questions were mandatory; therefore, participants were not required to answer every question.

In the short introduction, all participants were informed about the purpose of this visualization tool, followed by a short description of its usability and some tasks they needed to perform to use the map and extract data from it. The tasks were divided into the following steps:

\textbf{Exploring power plant information:}\\
Participants were asked to click on the map markers to view the information of each power plant in the pop-up box.

\textbf{Use navigation menu for interaction:}\\
Participants were told to use the navigation menu for selecting each source category and for rendering the power lines on the map.

\textbf{View hourly production data:}\\
Participants were told to use the ``Go to Energy Charts'' link to view the hourly production data on the Energy Charts.

\textbf{Compare power plant production:}\\
Participants were told to use the ``Compare'' button to compare power plants in a comparison table.

\textbf{View hourly production data of multiple power plants:}\\
Participants were told about the ``Compare on Energy Charts'' link that appears under the comparison table. By clicking on the link, they can see the hourly production data of the power plants in the comparison list on the Energy Charts.

After the short introduction, there were 20 questions distributed over four different question groups in the online survey, and the participation time was estimated at about 5 minutes.

\subsection{Question Groups}
\label{sssec:quesGroup}

\subsection*{Introduction}
\label{sssec:intro}
The first question group was about demographic information.
Participants were asked to mention their gender, age, and professional area of work; in addition, they were asked about their frequency of using interactive maps and their familiarity with German power plants and their electricity production.

\subsection*{User Interface Evaluation}
\label{sssec:uiEval}
In this group, participants were asked to evaluate the user interface and its components, especially the cluster view, the map markers, the power lines, and the comparison table. A combination of demographic, rating-scale, and multiple-choice questions was used for this evaluation.

\subsection*{Map Usability Test}
\label{sssec:MUtest}
In this group, participants were asked to evaluate the usability and usefulness of this visualization tool. Participants were also requested to leave comments if they found it difficult to use.

\subsection*{Comments or Suggestions}
\label{sssec:cORS}
Participants were asked to comment on the complete work package and its features, and to provide suggestions for new features or ideas.

\section{Survey Results and Discussion}
In this section, the results of the survey questionnaire for each question group as well as the qualitative comments and the conclusions derived from them are discussed.

\subsection{Summary of Participants and Their Backgrounds}
The goal of this question group was to get an idea about the participants, their professional background, their age distribution, and their familiarity with German power plants. In addition, we asked how frequently they use interactive maps in their daily life. In our online research study, 23 participants (19 male, 2 female, 2 did not answer) with different levels of expertise took part in the survey. The participants belong to different age groups: 50\% with an average age of 27 years, 35\% with an average age of 53 years, and 15\% with an average age of 36 years (see Figure \ref{fig:participantBack}). In total, 19 full and 4 partial survey reports were submitted.

\begin{figure} [h]
\begin{center}
\subfloat[Age ratio of participants\label{fig:age}]
{\includegraphics[width=.45\linewidth]{study/age}}\hfill
\subfloat[Professional background of survey participants\label{fig:ageJob}]
{\includegraphics[width=.45\linewidth]{study/pie}}
\hfill
\caption{Age ratio and professional background of survey participants.}
\label{fig:participantBack}
\end{center}
\end{figure}

The summary of this question group tells us that almost 50\% of the participants, who are comparatively younger than the other participants, are familiar with German power plants and their production only to some extent, whereas the other participants, who belong to older age groups, are somewhat or very much familiar with them (see Figure \ref{fig:familiar}). A closer look at the data shows that the younger participants are mostly students, research assistants, and engineers, whereas the older participants are professors, businessmen, and people working in the energy industry (see Figure \ref{fig:ageJob}). Nevertheless, participants with no knowledge of German power plants exist in all age groups. On the other hand, concerning the frequency of using interactive maps per week, the majority of the participants (around 70\%, see Figure \ref{fig:mapUsage}) use an interactive map at least once or twice a week. Therefore, we assumed that users would find the tool and its functionality easy to use.
\begin{figure}
	\begin{center}
	\includegraphics[width=1\textwidth]{study/familiar}
	\caption{Familiarity with German power plants and their production.}
	\label{fig:familiar}
	\end{center}
\end{figure}

\begin{figure}
	\begin{center}
	\includegraphics[width=1\textwidth]{study/frequency}
	\caption[Participants' frequency of using interactive maps per week]{The result of the online survey - participants' frequency of using interactive maps per week.}
	\label{fig:mapUsage}
	\end{center}
\end{figure}

%\begin{figure}
%	\begin{center}
%	\includegraphics[width=1\textwidth]{study/etu.pdf}
%	\caption{The result of online survey - map usability}
%	\label{fig:etu}
%	\end{center}
%\end{figure}

\begin{figure} [h]
	\begin{center}
	\subfloat[The evaluation result of map usability by participants.\label{fig:etu}]
		{\includegraphics[width=.45\linewidth]{study/etu.pdf}}\hfill
	\subfloat[The evaluation result of interface and functionality by participants.\label{fig:selfExp}]
		{\includegraphics[width=.45\linewidth]{study/selfexp.pdf}} \hfill
	\caption{The evaluation results of the online survey.}
	\label{fig:self-Exp}
	\end{center}
\end{figure}

\subsection{Working with the Interactive Map}
In this section, the evaluation results of the map usability test as well as the qualitative comments from the participants are discussed. Each participant of the survey is referred to as "P" followed by their id (for example, participant number 23 is referred to as "P23").

\subsection*{Evaluation of User Interface (UI)}
We assumed that, before evaluating the UI, users had followed the steps described in the short introduction and gone through the interface accordingly. The purpose of this questionnaire was to evaluate the map UI with respect to its complexity, comprehensibility, and visual aesthetics.

The participants were very positive after using the map and its interface. They found it intuitive and easy to use, which is confirmed by around 91\% of the participants: 77.27\% agreed and around 13.64\% strongly agreed with the survey statement \textit{"I found the map easy to use"} (see Figure \ref{fig:etu}). Similar results were observed when participants were asked about the structure of the map, especially the navigation menus, markers, marker pop-ups, buttons, and the comparison table. This is confirmed by about 91\% of the participants (72.73\% agreed and 18.18\% strongly agreed) for the statement \textit{"Interface and functions of this map are self-explanatory"} (see Figure \ref{fig:selfExp}). There was also a comment section where participants could describe their confusion instead of answering the question. P12 said, \textit{"Definitely they are self-explanatory."} and \textit{"navigation menu could have a logo next to each label"}.

\subsection*{Evaluation of Map API}
Participants were also asked to evaluate the elements and functions that the map API offers. The participants rated the initial cluster view with an average score of 3.81 (standard deviation = 1.40; lowest score 1.0 (strongly disagree), highest 5.0 (strongly agree)). This score falls between neutral and agree. The high standard deviation indicates a mixed response to the cluster view: some participants found it interesting, while at the same time others found it confusing and unnecessary. P12 again commented, \textit{"The power plant density function should be removed, it is only confusing"}.
P13 noted that the cluster view appears at higher zoom levels and said, \textit{"The power plant density representation is not always helping, its appearance can be resolved at higher zoom level"}. The participants found the size of the markers \textit{"Just right"}; although some participants found them a little too big, they were satisfied with the icons used inside the markers. The statement \textit{"Icons used for power plants are self-explanatory"} got an average score of 4.13 (standard deviation = 0.71). Participants also rated the readability of the marker pop-up with an average value of 3.9 (standard deviation = 0.9) and the clarity of the navigation menu with an average value of 4.3 (standard deviation = 0.67).

Participants found the power line visualization interesting and rated it with an average value of 4.5 (standard deviation = 0.60), but they were not highly satisfied with the loading time on mobile devices. P2 tested the tool on a mobile device and commented, \textit{"The application crashes very often on tablets while loading power lines"}. This result was expected, as we did not use a database for storing the large GeoJSON files of the power lines. Therefore, it takes between 3 and 5 seconds to load them on the map. All in all, the participants found the user interface and the other elements of the interface very user-friendly and easy to understand.

\subsection*{Evaluation of Comparison Table}
The participants also rated the comparison table via the statement \textit{"I found the comparison table useful"}. Some participants did not find it useful, and some found it difficult to locate the comparison list. P2 said, \textit{"Where is the comparison table?"}. One reason for this difficulty could be the short usage time, or that they overlooked the short introduction at the beginning. On the other hand, P19 noticed the feature and said, \textit{"Pity that you can compare only the same energy sources"}. Altogether, participants rated this feature with an average score of 3.77 (standard deviation = 1).

\subsection*{Usability Evaluation}
The purpose of this question group was to evaluate the interactive functions, their usability, and their usefulness. This part of the study shows whether the interactive techniques and functions inside the tool are well suited for exploring data. From the individual test records, we observed that 75\% of the participants found this map very informative and agreed with the statement \textit{"I found this map very informative"}. The others selected neutral as an answer to this question. P9, P11, P2, and P22 mentioned the missing solar power plant information. P2 was mostly interested in the high voltage transmission line visualization and requested more information regarding the transmission lines. P9 said, \textit{"It would be more informative marker pop-up shows the name of the location"}.

Around 40\% of the participants disagreed with the statement \textit{"I found the interactivity very complex"} (see Figure \ref{fig:complex}). Around 30\% of the participants found it difficult to use, perhaps because of less experience with interactive maps, and the remaining answers were neutral. Participants were also asked about the necessity of having a short tutorial or an example of how to use the map beforehand. In this case, 55\% of the participants disagreed just like before, around 15\% were neutral or could not decide, and around 15\% agreed with the statement \textit{"I think that I would need a tutorial before using this visualization tool"} (see Figure \ref{fig:introTutorial}).
\begin{figure}
	\begin{center}
	\subfloat[The evaluation result of - "I found the interactivity very complex"\label{fig:complex}]
		{\includegraphics[width=.45\linewidth]{study/complex.pdf}}\hfill
	\subfloat[The evaluation result of - "I think that I would need a tutorial before using this visualization tool"\label{fig:introTutorial}]
		{\includegraphics[width=.45\linewidth]{study/intro.pdf}} \hfill
	\caption{The evaluation results for the complete visualization tool.}
	\label{fig:selfExpo}
	\end{center}
\end{figure}

In Figure \ref{fig:finalRev}, the evaluation results for map usability and usefulness are shown per participant. 22 participants completed this question group. The overall usability test received a score of 73\%: 60\% of the participants rated the map above the average score, indicating that they find the map very informative and its interactivity not really complex, while around 40\% of the participants found it difficult to use and rated it below the average score.

\begin{figure}
	\begin{center}
	\includegraphics[width=1\textwidth]{study/finalreview.pdf}
	\caption{The result of the online survey - complete map usability and usefulness.}
	\label{fig:finalRev}
	\end{center}
\end{figure}

\section{User Feedback}
Besides the power plant information already provided, the participants in general desired more information from the system. Because of the missing information on solar power plants, they proposed to include solar power in the information list. Participants also requested to include the city where a plant is located in the marker pop-up. Some participants were confused when they saw the marker cluster layer and proposed to omit this feature. The participants also mentioned difficulties in finding some features, for example the comparison table. Participants also talked about the limitations of the comparison table and the Energy Charts: they are interested in comparing power plant units from different energy sources on the Energy Charts, and they are also interested in seeing the power plant production in bar charts. Participants also requested to have this map in the German language.

People from social media who did not participate in the survey also shared their thoughts and ideas, which were quite similar to those of the survey participants. One user said, \textit{"No information on solar!"}. Another social media user provided some important information regarding the correctness of the geographical locations of power plants. Another user commented on the 220kV and 380kV power lines and their information, as the information provided for the power lines on the map is outdated. Nevertheless, users from social media noticed the work, were fascinated by the tool, and showed their appreciation by saying \textit{"Great Stuff"} and \textit{"The Map is very informative"}.

\section{Summary}
The online interactive map was evaluated using an online survey for one week. In this very short time we received 23 survey reports. The study focused on the qualitative feedback of the map users, who are renewable energy professionals, professors, students, engineers working in the energy industry, journalists, and teachers. We were able to reach them through this survey and obtained various feedback, thoughts, ideas, suggestions, and usability evaluation results as well as appreciation. However, the qualitative feedback also uncovered some gaps and limitations of the tool, which can be included in the list of future work.
\chapter{Cookbook}
\label{section:cookbook}
In this chapter the reader will learn about the webaccess architecture by example.

\section{Folder details plugin}
\label{section:folderdetails}
Our first task is to write a simple plugin that shows the status of the currently selected folder. As the user selects folders from the hierarchy tree on the left of the screen, a line of text in the bottom toolbar is updated, showing the number of total and unread items in the folder. The screenshot in Figure \ref{figure:plugin} shows the component in action. The finished implementation can be found in the source tree in {\tt Zarafa.plugins.FolderStatus}.

\begin{figure}[h!]
	\centering
	\includegraphics[width=9cm]{figures/plugin.eps}
	\caption{Folder status plugin.}
	\label{figure:plugin}
\end{figure}

We will start by writing some boilerplate, shown in Listing \ref{listing:plugin1}. The class {\tt Zarafa.plugins.FolderStatus} is declared, extending {\tt Zarafa.ui.Plugin} \footnote{More info on OO and ExtJS can be found at http://www.extjs.com/learn/Manual:Intro}. Each registered plug-in has a name, so the name 'folderstatus' is passed to the parent constructor. On the last line of the listing a single instance of this plugin is created and registered with the global container. For convenience we add an {\tt init} method that is called right after the parent constructor.

\begin{lstlisting}[caption={Plugin boilerplate}, label=listing:plugin1]
Zarafa.plugins.FolderStatus = function() {
	var config = {
		name : 'folderstatus'
	};

	Zarafa.plugins.FolderStatus.superclass.constructor.call(this, config);

	// Initialise the plug-in.
	this.init();
};

Ext.extend(Zarafa.plugins.FolderStatus, Zarafa.ui.Plugin, {

	init : function() {
		this.registerInsertionPoint('statusbar.left', this.createFolderStatus, this);
	},

	createFolderStatus : function(insertionPoint) {
		// Create a new toolbar text item.
		this.textItem = new Ext.Toolbar.TextItem({});
		this.textItem.setText('Hello World!');
		return this.textItem;
	}

});

container.registerPlugin(new Zarafa.plugins.FolderStatus());
\end{lstlisting}

In order to get something on screen we need to hook into an insertion point (see Section \ref{section:insertionpoints}). We hook into the {\tt statusbar.left} insertion point, which is located on the bottom bar, on the left-hand side (see Figure \ref{figure:plugin}). A component creation function is registered to this insertion point using {\tt registerInsertionPoint}. Remember that the last line of the listing is executed at 'load' time, so the function {\tt createFolderStatus} is registered at that time. It is \emph{called} later, when the UI hierarchy is constructed. This function constructs a single UI component, a toolbar text item with the text 'Hello World!'. Running this code results in Figure \ref{figure:plugin2}.

\begin{figure}[h!]
	\centering
	\includegraphics[width=9cm]{figures/plugin2.eps}
	\caption{Folder status plugin.}
	\label{figure:plugin2}
\end{figure}

Now that we have successfully added a UI component to the interface we would like it to show something useful. We would like our plugin to display information about the folder that is currently selected. The {\tt container} object has a {\tt folderselect} event that fires when the user selects a folder. By listening for this event we can automatically update the text of our text item in the toolbar.
\begin{lstlisting}[caption={Attaching an event handler to the container.}, label=listing:plugin2]
init : function() {
	this.registerInsertionPoint('statusbar.left', this.createFolderStatus, this);

	// Hook into the folder select event of the container
	container.on('folderselect', this.folderSelect, this);
},

updateFolderText : function(store, folder) {
	// Set the text item text.
	this.textItem.setText(String.format("{0} items - {1} unread",
		folder.content_count,
		folder.content_unread
	));
},

folderSelect : function(store, folder) {
	// Update the text item with the current store and folder.
	this.updateFolderText(store, folder);
},
\end{lstlisting}

The changed functions are shown in Listing \ref{listing:plugin2}. The event is hooked in {\tt init}, so that {\tt folderSelect} is called when the user clicks a folder. This method in turn calls {\tt updateFolderText}, which updates the text on the toolbar text item.

The final touch to this plug-in is automatically updating the text when the contents of the folder change due to user actions such as the removal or creation of items. This is left as an exercise to the reader (hint: have a look at Zarafa.HierarchyModel). The answer can be found in the implementation of {\tt Zarafa.plugins.FolderStatus}.

\section{A basic tasks context}
\label{section:taskscontext}
In this example we will build a tasks context. The context will be able to list tasks in a grid control. We will start by creating a fresh new context called {\tt TaskContext}, as shown in Listing \ref{listing:taskcontext1}. Since a context is a type of plug-in, the basic structure is similar to the plug-in described in Section \ref{section:folderdetails}. There are three functions defined in the {\tt Context} class that every context must override. The {\tt bid} method implements the bidding scheme described in Section \ref{section:contexts}. The {\tt createContentPanel} and {\tt createToolbar} methods create the necessary UI components.

The bidding function returns the numerical value 2 if the folder that is being selected is a tasks folder, or -1 for any other folder. It checks this by looking at the 'container\_class' property of a folder. Since the built-in task context will bid 1 for task folders, our example implementation overrides it by bidding a higher value of 2.

%\begin{itemize}
%	\item{{\tt bid(parameters:Object):Number} is used to bid on folders. See Section \ref{section:contexts}. }
%	\item{{\tt createContentPanel():Ext.Panel} creates a content panel. }
%	\item{{\tt createToolBar():Ext.Toolbar} creates a top tool bar. }
%\end{itemize}

\begin{lstlisting}[caption={Boilerplate for a new context.}, label=listing:taskcontext1]
/**
 * @class Zarafa.ui.TaskContext
 * @extends Zarafa.ui.Context
 */
Zarafa.ui.TaskContext = function() {
	var config = {
		name : 'taskcontext'
	};

	Zarafa.ui.TaskContext.superclass.constructor.call(this, config);
};

Ext.extend(Zarafa.ui.TaskContext, Zarafa.ui.Context, {

	// Bid on task folders.
	bid : function(parameters) {
		// Task folders contain items of type IPF.Task
		if (parameters.folder.container_class=='IPF.Task')
			return 2;

		// return -1, don't handle this content type
		return -1;
	},

	// Create content panel.
	createContentPanel : function() {
		return new Ext.Panel({
			title : 'Hello World!',
			html : 'Hello World!'
		});
	},

	// Create tool bar.
	createToolbar : function() {
		return new Ext.Toolbar({
			items : 'Place holder.'
		});
	}

});

container.registerPlugin(new Zarafa.ui.TaskContext());
\end{lstlisting}

Running this code will result in Figure \ref{figure:taskcontext}.
When a tasks folder is selected, the content panel provided by the context will be shown, along with its tool bar.

\begin{figure}[h!]
	\centering
	\includegraphics[width=9cm]{figures/taskcontext.eps}
	\caption{Task context says hello.}
	\label{figure:taskcontext}
\end{figure}

The next step is to retrieve tasks from the server and show them on screen. We do this by creating a grid panel and attaching a store to it. Listing \ref{listing:taskcontext2} shows the modified code.

\begin{lstlisting}[caption={Adding a grid panel.}, label=listing:taskcontext2]
/**
 * @class Zarafa.ui.TaskContext
 * @extends Zarafa.ui.Context
 */
Zarafa.ui.TaskContext = function() {
	var config = {
		name : 'taskcontext'
	};

	Zarafa.ui.TaskContext.superclass.constructor.call(this, config);

	// Convenience initialisation function.
	this.init();
};

Ext.extend(Zarafa.ui.TaskContext, Zarafa.ui.Context, {

	init : function() {
		// Create a store to hold our loaded task records.
		this.store = new Zarafa.comm.TaskStore();
	},

	// called when the context is enabled (the user clicks a task folder)
	enable : function(parameters) {
		this.store.load({
			storeId : parameters.store.id,
			entryId : parameters.folder.entryid
		});
	},

	// Create content panel.
	createContentPanel : function() {
		return new Ext.grid.GridPanel({
			border : false,

			viewConfig : {
				forceFit : true,
				showPreview : true,
				enableRowBody : true
			},

			// Column Model.
			cm : new Ext.grid.ColumnModel([
				{
					dataIndex : 'owner',
					header : "Owner",
					sortable : true
				},
				{
					dataIndex : 'subject',
					header : "Subject",
					sortable : true
				}
			]),

			// Connect our store to the grid.
			store : this.store
		});
	},

	// bid() and createToolbar() omitted for brevity

});

container.registerPlugin(new Zarafa.ui.TaskContext());
\end{lstlisting}

We first create an {\tt init} method that constructs a new instance of {\tt Zarafa.comm.TaskStore}. This is the store that will contain the task records displayed on the screen. We want to load tasks into this store when a folder is selected by the user. To do this we override the {\tt enable} method. It is automatically called by the framework when a context has won the bidding round and just before its components are made visible. As with the {\tt bid} method, a parameter object is passed containing store and folder details. We call {\tt load} on our store and pass it the MAPI ids of the store and folder so that it knows what to load. Finally, to show the information to the user, we modify the {\tt createContentPanel} function to return a grid panel, configured to show data from the store.

\begin{figure}[h!]
	\centering
	\includegraphics[width=9cm]{figures/taskcontext2.eps}
	\caption{Task context with real data.}
	\label{figure:taskcontext2}
\end{figure}

Figure \ref{figure:taskcontext2} shows the result. It is possible to sort by owner or subject by clicking the corresponding column headers. The store will remember the store and folder MAPI ids automatically, so that subsequent load commands (in this case from the grid panel) collect the right data.

% \subsection{Adding a 'new' button}
\documentclass[oneside]{book}
\usepackage[utf8]{inputenc}
\usepackage{authblk}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{textcomp}
\usepackage{amssymb}
\usepackage{geometry}
\usepackage{amsthm}
\usepackage{runic}
\usepackage{mathtools}
\usepackage{graphicx}
\usepackage[breaklinks=true,a4paper=true,pagebackref=true]{hyperref}
\graphicspath{ {figures/} }
\geometry{
	a4paper,
	total={170mm,257mm},
	left=20mm,
	top=20mm,
}
\hypersetup{
	colorlinks=true,
	linktoc=true,
	linkcolor=blue,
}
\title{Libraria Algebrae}
\author{Liam Gardner}
\date{\today}
%\doublespacing
\newcommand\tab[1][1cm]{\hspace*{#1}}
\newcommand\nextline{\newline\tab}
\newcommand\nextquestion{\newline\newline}
\newcommand\soln{$\text{sol}^\text{n}\text{ }$}
\newcommand\fs{\mbox{\large $\mathrlap{f}s\,$}\,}
\newcommand\thm[2]{\section*{Theorem: #1}\label{sec:#2}\addcontentsline{toc}{section}{Theorem: #1}}
\newcommand\propn[2]{\section*{Proposition: #1}\label{sec:#2}\addcontentsline{toc}{section}{Proposition: #1}}
\newcommand\defn{\textbf{Definition}: }
\renewcommand\mod[1]{\text{ }\left(\text{mod }#1\right)}
\begin{document}
\DeclarePairedDelimiter\abs{\lvert}{\rvert}
\maketitle
\tableofcontents
\chapter{Linear Diophantine Equations in $\mathbb{Z}^2$}
\tab Note that for the general equation $ax+by$, we assume that $ab\neq0$, since if one of them is 0, then the equation is trivial. We wish to answer three fundamental problems:
\begin{enumerate}
	\item Does there exist an integer solution?
	\item If the answer is yes, find an integer solution.
	\item Can we find \textit{all} solutions?
\end{enumerate}
\tab It is common for existential theorems (those that say solutions exist) not to give a means of finding said solutions.
\subsection{Example}
Solve $506x + 391y = 23$. Notice that $\gcd(506,391)=23$. Thus, by B\'ezout's Lemma, a solution exists. We can use the EEA (Extended Euclidean Algorithm) to find a solution to the equation.
\newline
\begin{center}
	\begin{tabular}{|c|c|c|c|}
		\hline
		$x$ & $y$ & $r$ & $q$ \\ \hline \hline
		1 & 0 & 506 & 0 \\
		0 & 1 & 391 & 0 \\
		1 & -1 & 115 & 1 \\
		-3 & 4 & 46 & 3 \\
		7 & -9 & 23 & 2 \\
		-17 & 22 & 0 & 2 \\
		\hline
	\end{tabular}
\end{center}
\tab Thus, we know that $(7,-9)$ is a solution. Now, we can subtract $506x_0 + 391y_0=23$ from $506x+391y=23$, which gives $506(x-x_0) + 391(y-y_0) = 0$. Thus, we get $506(x-x_0) = -391(y-y_0)$. We can divide by the GCD of 506 and 391 to get the equation $22(x-x_0) = -17(y-y_0)$. Since the GCD is a common divisor of 506 and 391, we get that both $\frac{506}{23}=22$ and $\frac{391}{23}=17$ are integers and are coprime to each other. Now, since we know that $-17\lvert -17(y-y_0)$, and that $-17(y-y_0)=22(x-x_0)$, we get that $-17\lvert 22(x-x_0)$. Thus, by CAD, we get $17\lvert (x-x_0)$. Therefore, we find that $x-x_0 = 17n$ for some $n\in\mathbb{Z}$, so every solution for $x$ has the form $x_0 + 17n$, $n\in\mathbb{Z}$.
\nextline Following the same process, we get that $y=y_0-22n$. Therefore, all solutions to the Linear Diophantine Equation $506x + 391y = 23$ are given by the points $(7 + 17n, -9 - 22n)$, $n\in\mathbb{Z}$.
\newline
\newline
Solve $506x + 391y = 24$.
\nextline There is no \soln: since $23=\gcd(506,391)$, we get that $23\lvert (506x+391y)$ for all integers $x,y$; however, $23\nmid 24$, so we have a contradiction.
\newline
\newline
Solve $506x + 391y = 46 = 2\cdot23$. We know that $506\cdot7 + 391\cdot(-9) = 23$, thus if we multiply both sides by two, we get that $506(7\cdot2) + 391(-9\cdot2) = 2\cdot23 = 46$. Hence $506(14) + 391(-18) = 46$, which gives the \soln $(14,-18)$.
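\nextline As a quick check of the first example, the general \soln found above really does satisfy the original equation for every $n\in\mathbb{Z}$:
$$506(7+17n) + 391(-9-22n) = 23 + (506\cdot17 - 391\cdot22)n = 23 + (8602-8602)n = 23$$
\tab Thus every pair $(7+17n, -9-22n)$ is a solution, and by the argument above these are the only solutions.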
\thm{LDET Part 1}{ldeta}
Suppose $a,b,c\in\mathbb{Z}$ and $ab\neq0$. Then $ax+by=c$ has a \soln in integers if and only if $\gcd(a,b)\lvert c$.
\subsection{Proof of forwards direction}
\tab Assume $ax+by=c$ has an integer \soln $(x_0, y_0)$. Let $d=\gcd(a,b)$. Since $d\lvert a$ and $d\lvert b$, and since $c=ax_0+by_0$, by Divisibility of Integer Combinations, $d\lvert c$.
\subsection{Proof of backwards direction}
\tab Let $d=\gcd(a,b)$, and assume that $d\lvert c$; thus $c=kd$ \fs integer $k$. Then by B\'ezout's Lemma, we can find $x_0, y_0\in\mathbb{Z}$ such that $ax_0+by_0=d$, but then $k(ax_0+by_0)=kd$. Thus $a(kx_0) + b(ky_0) = c$.
\subsection{Remark}
\tab If $d\lvert c$, then the proof tells us how to find a \soln.
\begin{enumerate}
	\item Solve $ax+by=d$ using the EEA to get $(x,y)=(x_0,y_0)$.
	\item Take $x=kx_0$ and $y=ky_0$, where $k=\frac{c}{d}$.
\end{enumerate}
\thm{LDET Part 2}{ldetb}
Suppose that $(x_0, y_0)$ is a particular solution to the LDE $ax+by=c$.
\nextline Then the set of all integer \soln is given by the following set:
$$S = \left\{ (x,y) \,\middle|\, x=x_0 + \frac{b\cdot n}{\gcd(a,b)},\,\,\, y=y_0 - \frac{a\cdot n}{\gcd(a,b)},\,\,\, n\in\mathbb{Z} \right\}$$
\subsection{Proof}
\tab Let $d=\gcd(a,b)$ and let $D$ be the set of all integer solutions to $ax+by=c$, i.e.
$$D = \left\{(x,y) \mid x,y\in\mathbb{Z},\, ax+by=c \right\}$$
\tab This can be proven by showing $S \subseteq D$ and $D \subseteq S$.
\newline
$S\subseteq D:$
\nextline Let $(x,y) \in S$, thus $x=x_0 + \frac{bn}{d}$ and $y=y_0-\frac{an}{d}$ for some $n\in\mathbb{Z}$.
$$ax+by = a\left(x_0+\frac{bn}{d}\right) + b\left(y_0 - \frac{an}{d}\right)$$
$$= ax_0 + by_0 = c$$
\tab since, by definition, $(x_0, y_0)$ is a particular solution to the equation. Therefore, $S\subseteq D$.
\newline
$D\subseteq S:$
\nextline Let $(x,y)\in D$, thus $x,y\in\mathbb{Z}$ and $ax+by=c$. Since $(x_0,y_0)\in D$, we know that $ax_0 + by_0 = c$. We can subtract $ax_0+by_0=c$ from $ax + by = c$ to get $a(x-x_0) + b(y-y_0) = 0$. Dividing by $d$, we get that $\frac{a}{d}(x-x_0) = -\frac{b}{d}(y-y_0)$. Thus, since $\frac{b}{d}\lvert \frac{a}{d}(x-x_0)$ and since $\frac{b}{d}$ and $\frac{a}{d}$ are coprime, by CAD we get that $\frac{b}{d}\lvert (x-x_0)$. Then we get $x-x_0=n\frac{b}{d}\Rightarrow x=x_0+\frac{bn}{d}$ \fs $n\in\mathbb{Z}$, and substituting back gives $y=y_0-\frac{an}{d}$ for the same $n$. Therefore, $(x,y)\in S$.
\section{More Examples}
$12x+18y=13$.
\nextline $\gcd(12,18) = 6$. Since $6\nmid13$, the equation has no \soln by \hyperref[sec:ldeta]{LDET1}.
\newline
\newline
$14x-49y=28$
\nextline $\gcd(14,-49) = 7$. Since $7\lvert 28$, the equation has solutions by \hyperref[sec:ldeta]{LDET1}.
\nextline Consider $14x-49y=7\Rightarrow 2x-7y=1$, which has \soln $(4,1)$. If we multiply $14(4) - 49(1) = 7$ by 4, we get $14(4\cdot4) - 49(1\cdot4) = 28$, and from this we can get all \soln using \hyperref[sec:ldetb]{LDET2}.
\newline
\newline
\tab Find all \soln to $15x+35y=5$.
\newline
\begin{center}
	\begin{tabular}{|c|c|c|c|}
		\hline
		$x$ & $y$ & $r$ & $q$ \\ \hline \hline
		1 & 0 & 15 & 0 \\
		0 & 1 & 35 & 0 \\
		1 & 0 & 15 & 0 \\
		-2 & 1 & 5 & 2 \\
		7 & -3 & 0 & 3 \\
		\hline
	\end{tabular}
\end{center}
\tab Thus, we know that $\gcd(15,35)=5$, and since $5\lvert 5$, there is a \soln. By the EEA, we find that $x=-2, y=1$ is a \soln.
Then, we can find the general solution using \hyperref[sec:ldetb]{LDET2} to be $x=-2+\frac{35n}{5}=-2+7n$, $y=1-\frac{15n}{5}=1-3n$.
\subsection{Geometric Understanding}
\tab Graphing the line $15x+35y=5$, or $3x+7y=1$, we can rearrange for $y$ to get $y=\frac{-3}{7}x+\frac{1}{7}$. Thus, picking any lattice point, we can construct a triangle of length 7 and height 3 from that point to find the next lattice point.
\begin{figure}[h]
	\centering
	\includegraphics[width=0.5\textwidth]{l24f0}
	\caption{Triangle formed from moving between two lattice points}
\end{figure}
\subsection{Nonnegative \soln}
\tab The solutions to $15x+35y=5$ are given by $(-2+7n, 1-3n)$; these will be nonnegative if $x\geq0$ and $y\geq0$.
$$-2+7n\geq 0 \iff n\geq \frac{2}{7}$$
$$1-3n \geq 0 \iff n \leq \frac{1}{3}$$
Thus, since there is no integer $n$ in the range $\frac{2}{7}\leq n \leq \frac{1}{3}$, there are no nonnegative solutions.
\subsection{Find all integer \soln to $15x+35y^2 = 5$}
\tab Let $Y=y^2$. Then, since we have the \soln to $15x+35Y=5$, given by $(-2+7n, 1-3n)$, all we have to do is determine when $1-3n$ is a perfect square. One way to solve this is to set $z=y^2$ and solve $1-3n=z$. We can also solve this algebraically:
\newline
\begin{center}
	\begin{tabular}{c c}
		$\iff$ & $1-3n = y^2$ \\
		$\iff$ & $-3n=y^2-1$ \\
		$\iff$ & $3\lvert y^2-1$ \\
		$\iff$ & $3\lvert(y-1)(y+1)$ \\
	\end{tabular}
\end{center}
\tab By Euclid's Lemma, we know that either $3\lvert(y-1)$ or $3\lvert(y+1)$, though not both. Suppose $3\lvert(y-1) \iff y-1=3k\iff y=3k+1\, \fs k\in\mathbb{Z}$. Then, if $y=3k+1$, we get that $y^2=(1+3k)^2=1-3(-2k-3k^2)$. Let $n=-2k-3k^2$; then we get $y^2=1-3n$. Thus, if we take $x=-2+7n=-2+7(-2k-3k^2)$ and $y=1+3k$, we get a perfect square \soln to the diophantine equation.
\nextline If $3\lvert (y+1)$, then we have $y=-1+3l\, \fs l \in \mathbb{Z}$. Then $y^2=(-1+3l)^2=1-3(2l-3l^2)$. Thus, $y=-1+3l$ and $x=-2+7(2l-3l^2)$. Therefore, we can generate infinitely many perfect square solutions.
\chapter{Congruence and Modular Arithmetic}
\section{Clockwork Arithmetic Analogy}
\tab Imagine a clock; we know that a clock has 12 hour positions. If we look at it and see the hour hand at 2, and we know that 12 o'clock has already passed, then we also know that the clock really means 14. Thus, we can say $2\approx14$.
\section{Definition}
\tab Fix $m\in\mathbb{N}$. $\forall a,b\in\mathbb{Z}$, we say that ``$a$ is congruent to $b$ mod(ulo) $m$'' if $m\lvert(a-b)$.
\nextline Notation: $a\equiv b\mod{m}$
\section{Examples}
\tab $m=1$: then $a\equiv b\mod{1} \iff 1\lvert (a-b)$, which is true $\forall a,b\in\mathbb{Z}$.
\nextline $m=2$: then $a\equiv b\mod{2} \iff 2\lvert (a-b) \iff a-b$ is even $\iff a$ and $b$ are both even or both odd.
\nextline $2\equiv-116\mod{2}$, however $3\not\equiv 10024\mod{2}$
\nextline $14\equiv 2\mod{12} \iff 12\lvert(14-2)$
\nextline $6\equiv 26\mod{10} \iff 10\lvert(6-26)$
\nextline $6\not\equiv -26\mod{10}$ since $10\nmid(6+26)$
\propn{Congruence is an Equivalence Relation}{CER}
\begin{center}
	\begin{tabular}{l|l}
		$\forall m\in\mathbb{N}, \forall a,b,c\in\mathbb{Z}$ & Congruence is \\ \hline \hline
		$a\equiv a\mod{m}$ & reflexive \\
		if $a\equiv b\mod{m}$ and $b\equiv c\mod{m}$ then $a\equiv c\mod{m}$ & transitive \\
		$a\equiv b \mod{m}\implies b\equiv a\mod{m}$ & symmetric
	\end{tabular}
\end{center}
\tab Remark: Any relation $a\sim b$ that satisfies all of the above properties is called an equivalence relation.
\nextline In calculus, for example, declaring two functions related, $f(x)\sim g(x)$, whenever $f^\prime(x) = g^\prime(x)$ gives an equivalence relation.
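\subsection{Proof of Transitivity}
\tab As an illustration, transitivity follows directly from the definition together with DIC. Suppose $a\equiv b\mod{m}$ and $b\equiv c\mod{m}$, so that $m\lvert(a-b)$ and $m\lvert(b-c)$. Since
$$a-c = (a-b) + (b-c)$$
\tab DIC gives $m\lvert(a-c)$, that is, $a\equiv c\mod{m}$. Reflexivity and symmetry can be checked in the same way, directly from the definition.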
\section{Recap}
\tab Fix $m\in\mathbb{N}$. $\forall a,b\in\mathbb{Z}$:
\nextline
\begin{tabular}{l c l}
	$a\equiv b\mod{m}$ & $\iff$ & $m\lvert (a-b)$ \\
	& $\iff$ & $a-b=mk\,\, \fs k\in\mathbb{Z}$ \\
	& $\iff$ & $a=b+mk$
\end{tabular}
\newline
\propn{Arithmetic Rules of Congruence}{ARC}
Suppose $a\equiv a^\prime \mod{m}$ and $b\equiv b^\prime \mod{m}$. Then:
\begin{enumerate}
	\item $a+b\equiv a^\prime + b^\prime \mod{m}$
	\item $a-b\equiv a^\prime - b^\prime \mod{m}$
	\item $ab\equiv a^\prime b^\prime \mod{m}$
\end{enumerate}
\subsection{Examples}
$$2\equiv 9\mod{7}\land3\equiv 17\mod{7}\implies 2+3\equiv9+17\mod{7}\implies 5\equiv26\mod{7}$$
\newline
$$56\cdot30 \mod{40}$$
$$56=16+40\equiv 16\mod{40}$$
$$30\equiv -10\mod{40}$$
$$56\cdot30 \equiv 16\cdot(-10)\mod{40}$$
$$\equiv-160\mod{40}$$
$$\equiv-40\cdot4\mod{40}\equiv 0\mod{40}$$
\subsection{Proof of addition}
\tab Since $a\equiv a^\prime \mod{m}$ and $b\equiv b^\prime \mod{m}$, then $m\lvert(a-a^\prime)$ and $m\lvert(b-b^\prime)$.
\nextline We have $(a+b)-(a^\prime+b^\prime)=(a-a^\prime) + (b-b^\prime)$, thus by DIC we get $m\lvert((a+b)-(a^\prime+b^\prime))$. Therefore $a+b\equiv a^\prime + b^\prime \mod{m}$.
\subsection{Remark on Division}
\tab Care is needed with division: $ab\equiv ac\mod{m} \not\implies b\equiv c\mod{m}$, even if $a\not\equiv 0 \mod{m}$.
\nextline
$$10\equiv 4\mod{6}$$
$$2\cdot5 \equiv 2\cdot2 \mod{6}$$
$$5\not\equiv 2\mod{6}$$
\propn{Congruent Division}{CD}
\tab If $ab\equiv ac\mod{m}$ and $a$ is coprime to $m$, then $b\equiv c\mod{m}$.
\propn{Congruent Powers}{CP}
\tab $a\equiv b\mod{m}\implies a^n\equiv b^n\mod{m}$ $\forall n\in\mathbb{N}$
\subsection{Proof of Congruent Division}
\tab
$$ab\equiv ac\mod{m}\iff m\lvert (ab-ac)\iff m\lvert a(b-c)$$
\nextline Then by CAD we get $m\lvert(b-c)$, since $m$ and $a$ are coprime,
\nextline $\implies b\equiv c \mod{m}$.
\nextline By applying the above propositions repeatedly: if $a_1\equiv a^\prime_1 \mod{m}, a_2\equiv a^\prime_2 \mod{m}, \cdots, a_n\equiv a^\prime_n \mod{m}$, then we get the following results
\begin{enumerate}
	\item $a_1+\cdots+a_n\equiv a^\prime_1 + \cdots + a^\prime_n \mod{m}$
	\item $a_1-\cdots-a_n\equiv a^\prime_1 - \cdots - a^\prime_n \mod{m}$
	\item $a_1\cdots a_n\equiv a^\prime_1 \cdots a^\prime_n \mod{m}$
	\item (special case) $\forall q \in \mathbb{N}, a^q \equiv \left(a^\prime\right)^q \mod{m}$
\end{enumerate}
\subsection{More Examples}
Simplify $4^{10} \mod{18}$.
\nextline $4^{10} = \left(4^2\right)^5 = 16^5=(18-2)^5$
\nextline $(18-2)^5\equiv(-2)^5\mod{18}$
\nextline $\equiv-32\mod{18}$
\nextline $\equiv-32+2\cdot 18\mod{18}$
\nextline $\equiv4\mod{18}$
\newline
\newline
Is $3^9 + 62^{2020} - 20$ divisible by 7?
\nextline Let $n=3^9+62^{2020}-20$. We know that $7\lvert n \iff 7\lvert(n-0) \iff n\equiv0\mod{7}$.
\nextline We can compute $3^9=\left(3^3\right)^3=27^3=(28-1)^3$
\nextline $\equiv (-1)^3\mod{7}$
\nextline $\equiv -1\mod{7}$
\nextline We also know that $62^{2020} = (63-1)^{2020} \equiv (-1)^{2020}\mod{7} \equiv 1\mod{7}$
\nextline $20=21-1\equiv -1\mod{7}$
\nextline Using \hyperref[sec:ARC]{the arithmetic rules} we get that $n\equiv -1+1-(-1)\mod{7}\equiv1\mod{7}$, and thus $n$ is not divisible by 7.
\thm{Congruence and Remainders}{CR}
\subsection{Example}
\tab What day of the week is it going to be a year from now?
\nextline Since the days of the week cycle every 7 days, let's determine $365\mod{7}$.
We know that $350=50\cdot7$, and thus
\nextline $365 = 350 + 15$
\nextline $=7\cdot50+14+1$
\nextline $=7\cdot50+7\cdot2+1$
\nextline $=7(50+2)+1$
\nextline $\equiv 1\mod{7}$.
\nextline $365\equiv1\mod{7}$
\nextline Therefore, the day of the week one year from now is the same as the day of the week tomorrow.
\subsection{Observation}
\tab Any block of consecutive integers cycles through the numbers 0--6 inclusive modulo 7:
$$\{\cdots,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,\cdots\}$$
$$\equiv\{\cdots,0,1,2,3,4,5,6,0,1,2,3,4,5,6,0,1,2,3,4,5,6,\cdots\}\mod{7}$$
\section{Warning}
\tab $\forall a,b,b^\prime \in \mathbb{N}$, if $b\equiv b^\prime \mod{m}$, then in general $a^b\not\equiv a^{b^\prime}\mod{m}$.
\subsection{Example}
\tab $4\equiv1\mod{3}$
\nextline $2^4=16\equiv1\mod{3}$, however $2^1\equiv 2\mod{3}$.
\nextline Thus $4\equiv1\mod{3}$, however $2^4\not\equiv2^1\mod{3}$.
\propn{Finite Integers}{FI}
$\forall a,b\in\mathbb{Z}$, $a\equiv b \mod{m}\iff a$ and $b$ have the same remainder after division by $m$.
\subsection{Proof}
\tab Applying the division algorithm, we get $a=qm+r$ and $b=q^\prime m+r^\prime$, where $0\leq r,r^\prime < m$.
\nextline Notice that if $a\equiv b\mod{m}$, we get that $m\lvert(a-b)$ and thus $m\lvert(qm+r - q^\prime m - r^\prime)$
\nextline $\implies m\lvert(m(q-q^\prime) + (r-r^\prime))$. Then by DIC it follows that
\nextline $m\lvert (r-r^\prime)$.
\nextline Now, since $0\leq r < m$ and $0 \leq r^\prime < m$, we get that $-m< r-r^\prime < m$. If $r\neq r^\prime$, then since $m\lvert(r-r^\prime)$, BBD gives $m\leq \abs{r-r^\prime}<m$, a contradiction; hence $r=r^\prime$.
\nextline Conversely, if $r=r^\prime$, then $a-b=m(q-q^\prime)$, so $m\lvert(a-b)$ and $a\equiv b\mod{m}$.
\propn{Congruent if and only if same remainder}{CISR}
\tab $a\equiv b\mod{m} \iff a$ and $b$ have the same remainder after division by $m$.
\propn{Congruent to Remainder}{CTR}
$\forall a,b\in\mathbb{Z}$ with $0\leq b \leq m-1$: $a\equiv b\mod{m}\iff$ the remainder of $a$ after division by $m$ is $b$.
\nextline Consequently, every $a\in\mathbb{Z}$ is congruent to a unique integer in $[0,m-1]\subseteq\mathbb{Z}$ mod $m$.
\subsection{Examples}
\begin{tabular}{r l l}
	25 & $\equiv 32$ & $\mod{7}$ \\
	& $\equiv 18$ & $\mod{7}$ \\
	& $\equiv 11$ & $\mod{7}$ \\
	& $\equiv \mathbf{4}$ & $\mod{7}$ \\
	& $\equiv -3$ & $\mod{7}$ \\
	& $\equiv -10$ & $\mod{7}$ \\
	& $\cdots$ & $\mod{7}$
\end{tabular}
\nextline 4 is distinguished, as it is the remainder of 25 after division by 7.
\newline
\nextline \textit{Find the remainder of $5^{10}$ after division by 7.}
\nextline We want to compute $5^{10}\mod{7}$. Since $5^{10} = \left(5^2\right)^5 = 25^5$ and $25\equiv 4\mod{7}$, by \hyperref[sec:CP]{Congruent Powers} we get $5^{10}\equiv 4^5\mod{7}$
\nextline $\equiv 4^3\cdot 4^2\mod{7}$
\nextline $\equiv (63+1) \cdot (14+2) \mod{7}$
\nextline $\equiv 1\cdot 2 \mod{7}$
\nextline $\equiv 2\mod{7}$
\nextline Therefore, the remainder of $5^{10}$ after division by 7 is 2.
\newline
\nextline \textit{Find the remainder of} $77^{100}\cdot 999 - 6^{83} \mod{4}$.
\nextline We know that $77=80-3\equiv -3\mod{4}\equiv 1\mod{4}$, so by \hyperref[sec:CP]{Congruent Powers} $77^{100}\equiv 1\mod{4}$. Also, $999=1000-1\equiv -1\mod{4}\equiv 3\mod{4}$.
Now, notice that $6^{83}=6^2\cdot6^{81}$, and since $6^2\equiv 0\mod{4}$, we get that $6^{83}\equiv 0\cdot6^{81}\equiv 0\mod{4}$ by \hyperref[sec:ARC]{Congruence and Multiplication}.
\nextline Hence $77^{100}\cdot999 - 6^{83} \equiv 1^{100}\cdot3 - 0\mod{4} \equiv 3\mod{4}$.
\subsection{Sum of Factorial Example}
\tab \textit{What is the last decimal digit of the following expression?}
$$\sum_{n=1}^{100} n!$$
\nextline Notice that the last digit of a number is its remainder mod 10. As a smaller example, notice that $7!\equiv 0\mod{10}$, because $7!$ contains the factor $2\cdot5=10$ and thus is a multiple of 10. Therefore, we know that if $k\geq 5$, then $k!\equiv 0\mod{10}$.
\nextline Going back to our original problem, we notice that every term $n!$ with $n\geq 5$ is congruent to zero mod 10, and thus we only have to compute $1!+2!+3!+4!=1+2+6+24=33\equiv 3\mod{10}$.
\section{Divisibility Rules}
\subsection{Divisibility by 3}
\tab $\forall a \in \mathbb{Z}$: $3\lvert a \iff 3\lvert$ the digit sum of $a$.
\nextline $3\lvert 2046 \iff 3\lvert(2+0+4+6)$. $2+0+4+6=12$, and since $3\lvert12$, we know $3\lvert 2046$.
\nextline $3\nmid271 \iff 3\nmid(2+7+1)$. $2+7+1=10$, and since $3\nmid 10$, we know $3\nmid 271$.
\subsection{Proof of divisibility by 3}
\tab If the digits of $a$ are $d_k, d_{k-1}, d_{k-2}, \cdots, d_1, d_0$, then $a=10^kd_k + 10^{k-1}d_{k-1}+10^{k-2}d_{k-2}+ \cdots + 10d_1+ d_0$. This is called the decimal expansion of $a$. Since $10\equiv 1\mod{3}$, Congruent Powers gives $10^i\equiv 1\mod{3}$ for every $i$, so
\newline $a\equiv d_k + d_{k-1} + d_{k-2} + \cdots + d_1 + d_0\mod{3}$
\subsection{Divisibility by 11}
\tab $11\lvert a \iff 11\lvert$ the alternating sum of the digits of $a$.
\nextline $11\lvert108097\iff 11\lvert\left((1+8+9) - (0+0+7)\right)$, and since $1+8+9-7=11$ and $11\lvert 11$, we get that $11\lvert 108097$.
\nextline $11\nmid133 \iff 11\nmid(1+3-3)$; thus $11\nmid1\implies 11\nmid133$.
\subsection{Proof of divisibility by 11}
\tab Take $a=10^kd_k + 10^{k-1}d_{k-1}+10^{k-2}d_{k-2}+ \cdots + 10d_1+ d_0$ to be the decimal expansion of $a$. Since $10\equiv-1\mod{11}$, we get $a\equiv (-1)^kd_k + (-1)^{k-1}d_{k-1} + \cdots + (-1)^1d_1 + d_0\mod{11}$. Now, notice that this is the sum of the even-indexed digits of $a$ minus the sum of the odd-indexed digits of $a$, taken mod 11.
\section{Linear Congruence Relations}
\subsection{Problem: does $x^3+x^2-x+1=0$ have an integer solution?}
\tab Suppose $x=a\in\mathbb{Z}$ is a \soln. Thus, $a^3+a^2-a+1=0$. Since both sides are integers, we know that $\forall m \in \mathbb{N}$, $a^3+a^2-a+1\equiv 0 \mod{m}$.
\nextline Consider the equation modulo 3. Notice that $a$ can be congruent to either 0, 1, or 2 (mod 3). If $a\equiv0\mod{3}$, we get that $1\equiv0\mod{3}$, which is false. If $a\equiv1\mod{3}$, we get that $1+1-1+1=2\equiv0\mod{3}$, which is still false. If $a\equiv2\mod{3}$, then $a\equiv-1\mod{3}$, and thus we get $-1+1-(-1)+1=2\equiv0\mod{3}$, which is still false. Therefore, since there are no solutions modulo 3, there are no integer solutions to the equation $x^3+x^2-x+1=0$.
\chapter{Linear Congruence}
\tab Whether $y^2=4x+2$ can be solved over the integers can be investigated by solving the relation mod $m$: if there are no solutions to $y^2\equiv4x+2\mod{m}$ for some particular $m$, then there are no solutions over the integers. Checking $y^2 \equiv 4x+2\mod{m}$ is a finite process, compared to the infinite process of checking $y^2=4x+2$ directly.
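\subsection{Example}
\tab For example, take $m=4$. If $y$ is even, then $y^2\equiv 0\mod{4}$, and if $y=2t+1$ is odd, then $y^2=4t^2+4t+1\equiv 1\mod{4}$, so $y^2\equiv 0$ or $1\mod{4}$ for every $y\in\mathbb{Z}$. On the other hand, $4x+2\equiv 2\mod{4}$ for every $x\in\mathbb{Z}$. Hence $y^2\equiv 4x+2\mod{4}$ has no \soln, and therefore $y^2=4x+2$ has no integer solutions.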
\nextline \defn A Linear Congruence Equation is an equation of the form $ax\equiv c\mod{m}$ where $a,c\in\mathbb{Z}$ and $m\in\mathbb{N}$ are fixed and $a\not\equiv 0\mod{m}$. We wish to find \soln for $x$ over the integers. \section{Methods of Solving} \subsection{Brute Force} $5x\equiv 2\mod{3}$ \begin{center} \begin{tabular}{|c|c|c|c|} $x\mod{3}$ & 0 & 1 & 2 \\ \hline $5x$ & 0 & 5 & 10 \\ \hline $5x\mod{3}$ & 0 & 2 & 1 \\ \end{tabular} \end{center} \tab We see that the only solution to this is when $x\equiv 1\mod{3}$. \subsection{LDE} $5x\equiv 2\mod{3} \iff 5x=2+3k$ $\fs k\in\mathbb{Z}$. \nextline $\iff 5x-3k=2$ \nextline $\iff 5x+3y=2$ for $y=-k$ \nextline Let $d=\gcd(5,3)$, then notice that $d\lvert 2$ thus by \hyperref[sec:ldeta]{LDET1} there are infinite solutions. Solve for $x,y$. \nextline By inspection we see that a particular \soln is $(x_0,y_0) = (1,-1)$. Using \hyperref[sec:ldetb]{LDET2} we can get the general solutions given by $$\begin{cases} x=1 + 3n \\ y=-1 - 5n \end{cases}$$ \section{Examples} $2x\equiv3\mod{4}$ \nextline By Method 1: \begin{center} \begin{tabular}{|c|c|c|c|c|} $x\mod{4}$ & 0 & 1 & 2 & 3 \\ \hline $2x$ & 0 & 2 & 4 & 6 \\ \hline $2x\mod{4}$ & 0 & 2 & 0 & 2 \\ \end{tabular} \end{center} \tab Since there is no value of $3$ in the table, there is no \soln to $2x\equiv 3\mod{4}$ \nextline By Method 2: \nextline $2x\equiv3\mod{4} \iff 2x+4y=3$. Since $\gcd(2,4)\nmid3$, by \hyperref[sec:ldeta]{LDET1} there is no \soln to the equation. \thm{Linear Congruence Theorem}{LC} \tab Consider the Linear Congruence Equation $ax\equiv c\mod{m}$ where $a,c\in\mathbb{Z}$ and $m\in\mathbb{N}$ are fixed and $a\not\equiv 0\mod{m}$. Let $d=\gcd(a,m)$. Then, solutions exist if and only if $d\lvert c$ by \hyperref[sec:ldeta]{LDET1}. If $d\lvert c$ and if $x_0$ is a \soln then the general solution set is given by the following (using \hyperref[sec:ldetb]{LDET2}) $$\left\{ x\in\mathbb{Z}\mid x=x_0+\frac{m}{d}n\, \fs n\in\mathbb{Z} \right\}$$ $$\left\{ x\in\mathbb{Z}\mid x\equiv x_0\mod{\frac{m}{d}} \right\}$$ \tab Notice for the second set, there are $d$ \soln mod $m$ \subsection{Proof} $ax\equiv c\mod{m}\iff ax+my=c$ so \soln exist $\iff d\lvert c$ by \hyperref[sec:ldeta]{LDET1}.\nextline If $x_0$ is a \soln for x, then by \hyperref[sec:ldetb]{LDET2} the general \soln set is $$\left\{ x\in\mathbb{Z} \mid x=x_0+\frac{m}{d}n\,\fs n\in\mathbb{Z} \right\}$$ \tab Since $x=x_0+\frac{m}{d}n \iff x\equiv x_0\mod{\frac{m}{d}}$ \nextline Finally, if $x=x_0+\frac{m}{d}n$ then by applying the Division Algorithm, we get $n=qd+r$, $0\leq r < d$. Thus $x=x_0+\frac{m}{d}n \iff x=x_0+(qd+r)\frac{m}{d} = x_0 + mq + r\frac{m}{d}$\nextline $\iff x\equiv x_0+r\frac{m}{d}\mod{m}$, $0\leq r \leq d-1$ \newline \null\hfill$\mathcal{QED}$ \subsection{More Examples} Solve $12x\equiv 9\mod{15}$\nextline \soln: Step1: (gcd check) $d=\gcd(12,15)=3$ and $3\lvert9$ thus solutions exist\newline Step2 (Particular \soln)\nextline $12x+15y=9$. By EEA we get $(x_0, y_0) = (-3,3)$.\nextline $x\equiv x_0\mod{\frac{m}{d}}$.\nextline $\equiv -3\mod{5}$\nextline $\equiv 2\mod{5}$\nextline $x=2+5k$\nextline Since the only unique solutions are in the integer range $[0,14]$, we know that the only solutions to $x=2+5k$ in that interval are $x\equiv2,7,12\mod{15}$. \section{Nonlinear Congruence Equations} \tab There is no general/efficient method of finding solutions.\nextline \subsection{Examples} $x^2\equiv 1\mod{2}$. 
\begin{center} \begin{tabular}{|c|c|c|} $x$ & 0 & 1 \\ \hline $x^2$ & 0 & 1 \end{tabular} \end{center} \tab Thus $x\equiv 1\mod{2}$ is the only \soln \nextline $x^2\equiv 1\mod{4}$ \begin{center} \begin{tabular}{|c|c|c|c|c|} $x$ & 0 & 1 & 2 & 3\\ \hline $x^2$ & 0 & 1 & 4 & 9 \\ \hline $x^2\mod{4}$ & 0 & 1 & 0 & 1 \end{tabular} \end{center} \tab Therefore, the solutions are $x\equiv 1,3\mod{4}$. \nextline Solving $x^2\equiv 1\mod{8}$ gives 4 solutions $\left(x\equiv1,3,5,7\mod{8}\right)$. Solving $x^2\equiv 1\mod{2^k}\, \fs k\geq3$ will have only 4 solutions. \end{document}
\documentclass[12pt,scrartcl,titlepage]{article}
\renewcommand{\baselinestretch}{1.5}
\usepackage[utf8]{inputenc}
\usepackage[margin=1.15in]{geometry}
\usepackage{setspace}
\usepackage{ifpdf}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{fancyhdr}
\usepackage{float}
\usepackage{xcolor}
\usepackage{listings}
\usepackage{graphicx}
\usepackage[parfill]{parskip}
\usepackage{hyperref}
\usepackage{subfig}
\usepackage{mwe}
\usepackage[font=it]{caption}
\pagestyle{fancy}

\newcommand{\HRule}{\rule{\linewidth}{0.5mm}}
\fancyhf{}

\begin{document}

\lhead{G121 Final Project Report}
\rhead{J. Blake, M. Haney, R. Lawson, M. Notaros}
\cfoot{\thepage}

{\setstretch{1.0}
\input{./report-titlepage.tex}
}

\tableofcontents
\pagebreak

% The report should focus on the Lab 6 add-on project components, and how the components are incorporated with the circuits completed in Labs 1-5. Provide a full description of the robot additional control circuits with design, simulation, and experimental results. Describe how the project design is split into more manageable sub-circuits or blocks for design, testing and de-bugging. Comment on design, hardware and code challenges that you had to work through to get the project circuitry fully operational. Propose future extensions to improve the performance and capabilities.

\section{Introduction and Objectives}

For our group's final project, we added an additional custom component to our robot that incorporated both hardware and software. Our project consisted of a glove that controls the direction the robot moves, and a muscle sensor that controls the speed of the robot. The glove has two flex sensors and an accelerometer which, combined, drive the robot forward, backward, left, and right. We also have a muscle sensor that controls the speed by sensing the magnitude of the electrical activity of the muscle. There are three ranges of speeds: fast, medium, and slow. These components communicate wirelessly over WiFi. Overall, this project can be loosely divided into three parts: an array of inputs, a wireless transmitter that encodes the input data into a command, and a wireless receiver that translates the command into an action taken by the robot.

\section{Equipment}
{ \setstretch{1.0}
\noindent Hardware:
\begin{itemize}
	\item EMG Sensor (Sparkfun Muscle Sensor v3)
	\item Arduino UNO R3
	\item 2x Flex Sensor (FS7548)
	\item Accelerometer (ADXL337)
	\item Teensy 3.1 (PJRC)
	\item TeensyLC (PJRC)
	\item 2x Wireless Transceiver (ESP8266)
	\item Gardening Glove, Duct Tape
	\item Slide Potentiometer
	\item Push Button
	\item AA Batteries
	\item Robot with lab 1-4 components
	\item Agilent DSO1024A Oscilloscope
	\item Agilent E3631A Triple Output DC Power Supply
\end{itemize}

\noindent Software:
\begin{itemize}
	\item Arduino IDE
	\item PJRC Teensy Software (Arduino IDE Extension) [1]
	\item IntuiLink Data Capture
\end{itemize}
}

\section{Methods and Results}

\subsection{Control Input Design and Testing}

\subsubsection{Glove Controller Design}

First we worked on the glove controller, which the user will use to control the movement of the robot. The glove is made up of two flex sensors, an accelerometer, a muscle sensor, a push button, and a slide potentiometer. The flex sensors are attached to the index and middle fingers of the glove. The user chooses the forward or backward direction based on which finger is bent. To make the robot go forward, the user bends their index finger, and to make the robot go backward, the user bends their middle finger.
The accelerometer chooses the turning direction of the robot. If the user tilts their hand to the left, the robot turns left, and if the user tilts their hand right, the robot turns right. The layout of the glove with the two flex sensors and the accelerometer is shown below.

\begin{figure}[h!]
	\centering
	\includegraphics[width=0.3\textwidth]{glove.png}
	\caption{Glove with flex sensors and accelerometer}
\end{figure}

In addition to the sensors shown in Figure 1, the user wears an EMG muscle sensor which detects the level of muscle activity. The user controls the speed of the robot with this sensor: the more the user flexes their muscle, the faster the robot goes. Hence, the direction is chosen with the sensors pictured in Figure 1, and the speed is chosen using the muscle sensor. The muscle sensor is connected to the user via electrode sensor pads. The proper hookup of these pads is shown in Figure 2 below.

\begin{figure}[h!]
	\centering
	\includegraphics[width=0.3\textwidth]{arm.png}
	\caption{Proper connection of the electrode sensor pads to the muscle}
\end{figure}

As can be seen in Figure 2 above, the electrode pads should be connected with the red connector on the middle of the muscle, the blue connector on the bottom of the muscle, and the black connector on a bone near the muscle. Proper connection of the sensor pads is very important for getting a clear signal from the sensor.

The muscle sensor provides a very interactive way to control the speed of the robot. One drawback of EMG control is that the electrode sensor pads are supposed to be used only once and then thrown away, so it is not very feasible to pass the muscle controls between users. Due to this fact, we designed a second control mode which uses a different type of speed control. The user can enter the second mode by pressing a push button. In the second mode, the user still uses the glove pictured in Figure 1 to choose the direction of the robot, as the glove is easy to switch between multiple users, but the speed is then chosen via the slide potentiometer. There is also an LCD screen which displays important sensor values and, for convenience, tells the user which mode they are in.

\subsubsection{Muscle Sensor Testing}

We tested the EMG muscle sensor to see how well it responds to a user's muscle activity. We powered the muscle sensor with the power supply and connected the output signal to the oscilloscope, as shown in Figure 3 below.

\begin{figure}[h!]
	\centering
	\includegraphics[width=0.3\textwidth]{msv3-setup.png}
	\caption{Muscle sensor testing connections}
\end{figure}

We tested with $\pm$9V, as that is the recommended supply voltage for the sensor. We found that the sensor responds very well to muscle activity. Plots of the signal output for the resting position, high bursts of muscle activity, and a gradual increase in muscle activity are shown below.

\begin{figure}[h!]
	\begin{minipage}{.5\linewidth}
		\centering
		\subfloat[]{\label{main:a}\includegraphics[scale=1]{plot-a.png}}
	\end{minipage}%
	\begin{minipage}{.5\linewidth}
		\centering
		\subfloat[]{\label{main:b}\includegraphics[scale=1]{plot-b}}
	\end{minipage}\par\medskip
	\centering
	\subfloat[]{\label{main:c}\includegraphics[scale=1]{plot-c}}
	\begin{center}
		\caption{Muscle sensor output signal: (a) no muscle activity/resting; (b) high spikes of movement; (c) gradual increase in muscle activity}
	\end{center}
\end{figure}

As can be seen in Figure 4 above, the muscle sensor detects muscle activity very well. When the user is resting, the output signal is very steady at a low voltage.
When the user makes drastic movements, the output signal has huge spikes in voltage. It is also possible for the user to gradually increase the signal voltage by gradually flexing harder. We were able to get a good range of low, medium, and high muscle activity levels that the user can easily differentiate between and maintain comfortably.

Although $\pm$9V is the recommended input voltage, it is too high for the Teensy. We therefore powered the sensor with +3.3V from the Teensy and -5V from a battery pack. Although the positive and negative voltage rails were not equal, the hardware had no problem working correctly. The positive rail was powered by the Teensy so that the Teensy did not receive input signals that would damage it.

We then connected the output signal of the muscle sensor to an analog pin on the Teensy in order to read the values that the sensor produces. We found that the sensor produces a very steady signal for the different ranges of muscle activity, and there is a significant difference in the output value based on the muscle activity of the user. We tested the sensor for various ranges of muscle activity. The results are seen below in Table 1.

\begin{table}[h!]
	\begin{center}
		\begin{tabular}{|c|c|}
			\hline
			\textbf{Muscle Activity}&\textbf{Sensor Value}\\
			\hline
			Resting&85-95\\
			\hline
			Low&100-150\\
			\hline
			Medium&180-250\\
			\hline
			High&400-1023\\
			\hline
		\end{tabular}
	\end{center}
	\caption{Sensor values for various muscle activity ranges}
\end{table}

As shown in Table 1 above, we were able to get good ranges for the different muscle activity levels, which are easy for the user to achieve and maintain. The four ranges shown above correspond to the different robot speeds, where resting is our lowest speed and high is the maximum speed.

\subsubsection{Flex Sensor and Accelerometer Testing}

The accelerometer has three axes; however, we only use one axis, as we only need to detect tilting in one direction. Therefore, we leave two of the axis signal outputs unconnected. We tested the sensor connected to the Teensy to see what values are output for different tilt ranges.

\begin{table}[h!]
	\begin{center}
		\begin{tabular}{|c|c|}
			\hline
			\textbf{Tilt}&\textbf{Sensor Value}\\
			\hline
			Left&$V \geq 670$\\
			\hline
			Right&$V \leq 550$\\
			\hline
		\end{tabular}
	\end{center}
	\caption{Sensor values for left and right tilt orientation}
\end{table}

As seen above in Table 2, when the user tilts the glove left, the value read from the accelerometer is above 670, and when the user tilts the glove right, the value read is below 550. When the sensor outputs a value between these two thresholds, the user is keeping their hand flat, and therefore the robot does not turn.

The flex sensors were tested in a similar manner. The flex sensor is a variable resistor, and we connected each of them in series with a 22k$\Omega$ resistor, creating a voltage divider where the upper resistor was the flex sensor. The center node was then connected to an Analog Input pin on the Teensy.

\begin{figure}[h!]
	\centering
	\includegraphics[width=0.3\textwidth]{flex.png}
	\caption{Connection of a flex sensor to an Arduino (in our case a Teensy)}
\end{figure}

We found cutoff values for when the flex sensor is bent an appropriate amount. The value is 550 for the forward control flex sensor and 600 for the backward control flex sensor. It is interesting to note that the values for the two flex sensors are slightly different, which is most likely due to inherent tolerances within the flex sensors or resistors.
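To make the use of these cutoffs concrete, the listing below sketches how the glove readings could be combined into a direction and a speed level. This is only an illustrative sketch rather than the project's actual transmitter code: the pin assignments, the one-character command format, the assumption that a bent flex sensor reads below its cutoff, and the priority given to forward/backward over turning are all placeholders chosen for the example.

\begin{lstlisting}[language=C++, caption={Illustrative sketch of the glove threshold logic (placeholder pins and command format)}]
// Illustrative sketch only; pin numbers and the one-character command
// format are placeholders, not the wiring or protocol used in ECEN_TX.
const int FLEX_FWD_PIN = A0;  // index-finger flex sensor divider
const int FLEX_BWD_PIN = A1;  // middle-finger flex sensor divider
const int ACCEL_X_PIN  = A2;  // single accelerometer axis used for tilt
const int EMG_PIN      = A3;  // muscle sensor output

void setup() {
  Serial.begin(9600);
}

void loop() {
  int flexFwd = analogRead(FLEX_FWD_PIN);
  int flexBwd = analogRead(FLEX_BWD_PIN);
  int tilt    = analogRead(ACCEL_X_PIN);
  int emg     = analogRead(EMG_PIN);

  // Direction: flex cutoffs 550/600 from the text, tilt thresholds from Table 2.
  // Assumes a bent flex sensor reads below its cutoff (depends on divider wiring).
  char direction = 'S';                  // stop by default
  if (flexFwd < 550)        direction = 'F';
  else if (flexBwd < 600)   direction = 'B';
  else if (tilt >= 670)     direction = 'L';
  else if (tilt <= 550)     direction = 'R';

  // Speed level: EMG ranges from Table 1 (resting/low/medium/high).
  int speed = 0;                         // resting -> slowest
  if (emg >= 400)      speed = 3;        // high
  else if (emg >= 180) speed = 2;        // medium
  else if (emg >= 100) speed = 1;        // low

  Serial.print(direction);               // e.g. "F2" = forward at medium speed
  Serial.println(speed);
  delay(50);
}
\end{lstlisting}

In the actual system the resulting command is handed to the ESP8266 over a hardware serial port rather than printed to the USB serial monitor, and in the second control mode the slide potentiometer reading takes the place of the EMG value.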
\subsubsection{Push Button and Potentiometer Implementation}
As stated earlier, the push button and slide potentiometer are used to implement a second mode in which the user controls the speed of the robot with the potentiometer rather than the EMG muscle sensor. This second mode is entered when the push button is pressed, and the user can return to the first mode by pressing the button again. The potentiometer controls the speed from 0\% to 100\% based on the position of the slider.
\subsection{Wireless Connection and Final Implementation}
\subsubsection{Wireless Setup and Code}
To communicate wirelessly with the robot, we chose to use a pair of ESP8266 wireless modules, one of which created a WiFi Access Point and ran the TCP server, while the other connected to the AP. Although commands were only sent to the robot, bi-directional communication is feasible with this chip. A Teensy 3.1 and a Teensy LC were chosen because they have hardware serial ports with hardware buffers and 3.3V logic like the ESP8266 module, making communication easier than with an Arduino. We already had a Teensy 3.1 for the transmitting end, and the LC version was purchased for the receiving end. This configuration led to an implementation using three microcontrollers to translate an array of inputs into outputs, as shown in the block diagram in Figure 7. The connection was handled with bi-directional logic level converters built from the MOSFETs provided in our lab kit.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{level-shifter.png}
\caption{Example bi-directional logic level shifter schematic}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{blkdgrm.png}
\caption{Block diagram}
\end{figure}
Below is a brief description of the purpose of each microcontroller, and pseudocode of the software running on each.
\textbf{Transmitter TeensyLC:} Configure WiFi AP, collect sensor data, transmit commands
{\setstretch{1.0}
Setup routine:
\begin{itemize}
\item Initialize inputs and Adafruit display
\item Start serial communications
\item Attach interrupt to push button handler function
\item Configure ESP8266 as access point
\end{itemize}
Main loop:
\begin{itemize}
\item Check which sensor is activated and in which way
\item Print values to the LCD screen
\item Send a forward, backward, left, right, or stop command in response to the glove controller sensors
\item Send a speed command in response to EMG or potentiometer readings
\end{itemize}
}
\textbf{Receiver TeensyLC:} Connect to WiFi AP, translate and pass on commands
{\setstretch{1.0}
Setup routine:
\begin{itemize}
\item ESP connection and configuration
\item Start serial communications
\end{itemize}
Main loop:
\begin{itemize}
\item Detect which direction and speed are being sent by the transmitter
\item Communicate with the Arduino on the robot (wired)
\end{itemize}
}
\textbf{Robot Control – Arduino Uno:} Execute robot commands
{\setstretch{1.0}
Setup routine:
\begin{itemize}
\item Start serial communications
\end{itemize}
Main loop:
\begin{itemize}
\item Position control code from previous labs
\item Call different direction and speed functions based on what command is received
\end{itemize}
}
The code is several pages long, so we have created a zip folder containing the final code, which can be found in the references. The code for the transmitter/sensor input TeensyLC is located in ECEN\_TX. This microcontroller maintains the configuration of the wireless module, reads sensor input, computes commands for the robot, and sends these to the wireless transmitter module. The code for the receiver TeensyLC is located in ECEN\_RX. This microcontroller connects to the wireless receiver module, reads commands from it, and passes these commands to the Arduino. The code for the robot control Arduino Uno is located in Robocode. This microcontroller reads commands from the receiver Teensy and translates them into actions (in this case, driving motors).
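To make the receiver's role concrete, the following is a minimal, hypothetical sketch of the kind of serial pass-through loop described above. The single-character command protocol, baud rates, and port assignments are illustrative assumptions only and are not taken from the actual ECEN\_RX code (see the references for the real implementation).
\begin{verbatim}
// Hypothetical receiver-side pass-through (illustrative only).
// Serial1 talks to the ESP8266 wireless module, Serial2 to the robot's Arduino.
void setup() {
  Serial1.begin(115200);   // UART to the ESP8266 (assumed baud rate)
  Serial2.begin(9600);     // UART to the Arduino Uno (assumed baud rate)
}

void loop() {
  if (Serial1.available() > 0) {
    char cmd = Serial1.read();   // e.g. 'F','B','L','R','S' or a speed level '0'-'3'
    switch (cmd) {
      case 'F': case 'B': case 'L': case 'R': case 'S':
      case '0': case '1': case '2': case '3':
        Serial2.write(cmd);      // forward valid commands unchanged
        break;
      default:
        break;                   // ignore bytes that are not valid commands
    }
  }
}
\end{verbatim}
The same pattern applies on the Arduino side, with the forwarding step replaced by calls to the direction and speed functions listed in the pseudocode above.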
\subsubsection{Packaging}
Several of the input devices, including the accelerometer, the flex sensors, and the EMG sensor, were connected to a glove. We also designed a small box that would be attached to the user’s arm. The Teensy, an antenna, batteries, and a screen that displayed the output readings from the glove were placed in the box. The second (receiver) Teensy was placed on the robot next to all the circuitry from the previous labs. The information from the receiver Teensy was then fed into the Arduino on the robot, which led to motor control.
\section{Further Explorations}
If this project were extended and improved upon, a good course of action would be to spend more time on the packaging. For our hand controller we used a garden glove, as that was the best option in the time available. However, with more time, it would have been a great idea to design and manufacture a glove with a plastic coating to make it look like a robot hand. This would have made the project look more complete and would probably have resulted in a more entertaining experience for the user. In addition, more care should have been taken when connecting the EMG muscle sensor to the batteries. The sensor has both a positive and a negative input voltage, so the user should be very careful when connecting the batteries to ensure that the positive pin receives a positive voltage and the negative pin receives a negative voltage. If the negative pin receives any amount of positive voltage, the sensor will heat up and break.
\section{Conclusion}
Our group successfully demonstrated wireless communication and motor control with the glove. We were able to implement robot direction control with the flex sensors and accelerometer, as well as speed control with both the EMG muscle sensor and the slide potentiometer. The wireless communication proved to be the most difficult part of the project. We initially struggled with establishing the connection between the transmitter and receiver. There was also a lot of code to go through, and a few bugs in the program caused information to be lost between the components. It was more difficult than the last wireless system, but it did have its benefits: WiFi increased the distance the robot could travel from the operator while still responding. Using Bluetooth would have been an alternative approach that might have been easier to build.
\section{References}
[1]. Teensy Arduino IDE Extension: \url{https://www.pjrc.com/teensy/td_download.html}
[2]. Teensy Reference: \url{https://www.pjrc.com/teensy/}
[3]. TeensyLC Product Page: \url{https://www.pjrc.com/teensy/teensyLC.html}
[4]. ESP8266 Datasheet: \url{https://nurdspace.nl/images/e/e0/ESP8266_Specifications_English.pdf}
[5]. ESP8266 Reference: \url{https://nurdspace.nl/ESP8266}
[6]. GitHub Software Repository: \url{http://github.com/Dacilndak/2270-g121}
[7].
GitHub Software Archive: \url{https://github.com/Dacilndak/2270-g121/archive/master.zip} \end{document}
{ "alphanum_fraction": 0.7711267606, "avg_line_length": 63.7754385965, "ext": "tex", "hexsha": "34fec3f2cdb229bf3a903e1b8e2c8ddd8af37b32", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a07e53722d3b76fa4be2f0a9690acfcf29b0be83", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "Dacilndak/2270-g121", "max_forks_repo_path": "report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a07e53722d3b76fa4be2f0a9690acfcf29b0be83", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "Dacilndak/2270-g121", "max_issues_repo_path": "report.tex", "max_line_length": 1100, "max_stars_count": null, "max_stars_repo_head_hexsha": "a07e53722d3b76fa4be2f0a9690acfcf29b0be83", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "Dacilndak/2270-g121", "max_stars_repo_path": "report.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4355, "size": 18176 }
\input{header}

%% ----------------------------------
%
% Title and authorship information
%
%% ----------------------------------
\title{Modelling biological and resource fluxes in fluvial meta-ecosystems}
\author[1,*]{Matthew V. Talluto ([email protected])}
\author[1]{Rubén del Campo ([email protected])}
\author[1,2]{Edurne Estévez ([email protected])}
\author[3]{Thomas Fuß ([email protected])}
\author[1]{Lukas Thuile Bistarelli ([email protected])}
\author[1,3]{Gabriel A. Singer ([email protected])}
\affil[1]{Department of Ecology, University of Innsbruck, Innsbruck, Austria}
\affil[2]{Faculty of Science and Technology, University of the Basque Country (UPV/EHU), Spain}
\affil[3]{Leibniz-Institute of Freshwater Ecology and Inland Fisheries (IGB), Berlin, Germany}
\affil[*]{Author for correspondence. Address: \protect\\ \hspace{3em} University of Innsbruck \protect\\ \hspace{3em} Department of Ecology \protect\\ \hspace{3em} Technikerstrasse 25 \protect\\ \hspace{3em} A-6020 Innsbruck, Austria \protect\\ \hspace{3em} tel: +43 (0)512 507-51738}

%% just some outlining fanciness
\newcommand{\fitem}[2]{\item {\bfseries #1} \\ #2}

\begin{document}
\begin{acronym}
\newacro{fme}[FME]{fluvial meta-ecosystem}
\end{acronym}

\begin{titlepage}
\maketitle
\begin{flushleft}
\textbf{Paper type:} \\
\textbf{Short title:} \\
\textbf{Keywords:} fluvial meta-ecosystems, biodiversity, ecosystem functioning, river networks
\end{flushleft}
\begin{abstract}
Abstract here
\end{abstract}
\end{titlepage}

\section{Introduction}

\begin{itemize}
\fitem{Biodiversity-ecosystem functioning}
{Understanding how ecosystems function and the services they provide to society has long been a central goal of ecological research, and the effect of biodiversity on ecosystem functioning is of particular relevance \autocite{Bannar2018}. Organisms shape ecosystem functions in many ways as they search for and consume resources, that is, the material that serves as sustenance and is needed for their growth and reproduction. Through these activities, organisms contribute to organic matter and nutrient cycling, productivity and several other ecosystem functions \autocite{Hooper2005,Balvanera2006}. It is well established, both theoretically and empirically, that biodiversity is an important driver of ecosystem functioning \autocite{Cardinale2012,Hooper2012} and that it can be as important as abiotic drivers (e.g., climate; \autocite{Duffy2017}). Traditionally, it was proposed that ecosystems containing more species would exhibit higher levels of ecosystem functions \autocite{Margalef1963} (Darwin 1859). However, recent experiments and models have revealed contrasting effects of diversity on ecosystem functioning (i.e., positive, negative and no effects; see \autocite{Pennekamp2018}), which points to non-linear and probably highly context-dependent biodiversity-ecosystem functioning relationships \autocite{Thebault2006,Little2018}. This further suggests that the classic view that ecosystem functioning is just a function of the diversity of organisms might be overly simplistic. On the one hand, it has recently been shown that not only species number but also species identity are key determinants of ecosystem function, because species' different morphological, behavioural or physiological characteristics strongly influence the acquisition, use and allocation of resources \autocite{Hooper2005,Diaz2013,Gagic2015}.
On the other hand, resource properties, including both quantity and composition, strongly determine their consumption and processing, as different resources can contain different essential nutrients and carbon compounds that define their lability and palatability for consumers \autocite{Cornwell2008}. Hence, ecosystem functioning is likely a function of the interaction of resource and consumer diversity. \marginnote{EE: Remove diversity? And simply state “interaction of resources and consumers”?}
While much of early biodiversity-ecosystem functioning work has been conducted at the local scale of ecosystems (REFS), ecosystems are not isolated in space but are highly spatially structured and interconnected by the exchange of organisms, resources and energy (REFS); therefore, many controls on the spatial distribution of resources and consumers unfold at larger spatial scales. Over the last decades, the role of spatial processes in community assembly has been increasingly considered, promoted by meta-community theory. More recently, the meta-ecosystem framework was developed to integrate the production, movement and transformation of resources with meta-community dynamics, enabling the investigation of feedback processes between resources and consumers across spatial scales \autocite{Loreau2003}. Fluvial meta-ecosystems are a perfect example of meta-ecosystems, as rivers can never be treated as closed systems. Rivers are embedded in a terrestrial matrix and organized in river networks, which have a hierarchical dendritic structure that strongly interconnects rivers via the downstream flow of water. Therefore, the physical and geographical structure of river networks favours the exchange of resources and organisms between the aquatic and the surrounding terrestrial ecosystems as well as between river reaches, strongly influencing how these ecosystem components are distributed in space.
To better understand biodiversity-ecosystem functioning relationships, which are likely scale dependent (e.g., \autocite{Gonzalez2020}), we need both (i) models that operate at the meta-ecosystem level and account for community species composition and assembly processes as well as the properties and dynamics of the resources communities rely on, and (ii) highly resolved temporal and spatial data on resources and consumers. The availability of such datasets is rapidly increasing thanks to technological developments such as DNA sequencing, size-exclusion chromatography, mass spectrometry and infrared spectroscopy \autocite{Sleighter2008,Huber2011,Tremblay2011,Baird2012}. These datasets are unprecedented in both breadth---covering a great spatial extent, such as entire river networks---and depth---capturing much more biological and chemical diversity at a single location than has ever been possible \autocite{Altermatt2020}. However, we still lack models that can provide the context for these datasets at the appropriate spatio-temporal scales and enable investigation of both local and regional biodiversity-ecosystem functioning dynamics \autocite{Gounand2018}.
Developing fluvial meta-ecosystem models presents important challenges due to the unique properties and structure of river networks. The spatial configuration of river networks imposes unique and strong controls on both organism and resource fluxes. However, the distribution of these ecosystem components is driven by different mechanisms.
Resources depend strongly on the structure of the surrounding terrestrial ecosystem (i.e., land cover shapes the nature of the resources that are transferred to the river corridor) and are passively transported by flow (Leopold et al., 1965; Rodríguez-Iturbe and Rinaldo, 1997). In contrast, the distribution of consumers is driven by the interplay between (i) neutral processes (e.g., extinctions); (ii) regional processes, predominantly dispersal, which for some organisms can only be passive dispersal following the downstream flow of water, while others have active dispersal capacities and can recolonize upstream habitats (flow-mediated or aerial dispersal); (iii) local environmental factors, including, among others, resource properties, which filter species from the regional species pool; and (iv) local biotic interactions, of which competition between consumers for resources is especially relevant \autocite{Leibold2004,Urban2004,Heino2015}. Thus, the interaction of land cover and river network structure may cause a spatially structured, heterogeneous resource supply for consumers across the river network with spatially variable temporal synchronicity (i.e., variable flow affects portions of the river network asymmetrically), while the flow direction may constrain flow-mediated dispersal and, at the same time, the network structure conditions aerial dispersal (e.g., overland dispersal) \autocite{Benda2004,Heino2015,Helton2018}. Hence, the resulting spatial patterns of resources and their consumers in river networks may not coincide, producing a complex pattern of resource-consumer diversity intersection (e.g., both high- and low-diversity situations for both resources and consumers, as well as both good and poor matches of resource diversity to consumer diversity) that may result in gradients of ecosystem functioning and, ultimately, in biodiversity-ecosystem functioning relationships. \marginnote{EE: Same sentence as in Matts conceptual paper}
Despite the complexity of fluvial meta-ecosystem models, models that are able to incorporate the structure of river networks are essential to test hypotheses about how the spatial organisation of rivers within a river network influences meta-ecosystem properties, such as alpha and beta diversity patterns of organisms and resources at the scale of the entire river network, local and regional biodiversity-ecosystem functioning dynamics, or the relationship between branching complexity and ecosystem stability \autocite{Terui2018}. Further, these models can be used to theoretically predict the effects of environmental and anthropogenic changes such as damming or flow variations associated with global change (including flow intermittency) that modify network structure and connectivity, as well as land use changes in the surrounding terrestrial ecosystems, which can have important implications for resource inputs and aerial dispersal.
Here, we develop a meta-ecosystem model that explicitly couples the distribution and fluxes of both resources and their consumers in fluvial meta-ecosystems. We construct this model by joining two commonly used model types in fluvial ecosystem and community ecology: reaction-transport models (for resources), and meta-community models (for organisms). By linking these two models, we allow for feedbacks between the consumers and the resources that sustain them.
These feedbacks are essential to enable a more mechanistic understanding of the connections between the processes underlying community assembly, resulting biodiversity patterns, and ecosystem functioning. \marginnote{MT: review this, it is redundant with last sentence} } \fitem{Define FMEs, importance for biodiversity \& functioning} {Introduce FME concept; scope of paper (be sure not so technical that no one can read it). Explain how FMEs can be a perfect meta-ecosystem model as the river network structure favours the exchange of organisms/resources between river reaches/habitats => resources \& consumers interact in space. Much work connecting diversity with ecosystem functioning is conducted at the scale of local ecosystems. Meta-ecosystems in general, and in particular \acp{fme}, connect multiple ecosystems across much larger scales (e.g., an entire fluvial network). Biodiversity-ecosystem functioning relationships are likely quite scale dependent \autocite{Gonzalez2020} (example?), and thus models that integrate the relevant processes from local to meta-ecosystem scales are needed to better understand both local and regional dynamics.} \fitem{Fluvial ecosystems present unique problems for modelling.} {The physical and geographical structure of river networks imposes unique and strong controls on both organism and resource (i.e., material that serves as sustenance) fluxes in \acp{fme}. Rivers are hierarchically branching and strongly interconnected via downstream flow of water. Thus, the properties of this hierarchical structure can strongly influence how ecosystem components are distributed in space via (i) structured spatial heterogeneity in resource supply (deriving, e.g., from structure in the surrounding terrestrial landscape), (ii) spatially variable temporal synchronicity (e.g., variable flow affecting portions of the network asymmetrically), and (iii) constrained avenues for dispersal of organisms \autocite{Helton2018,Benda2004,Heino2015}. Comments from Ruben about previous: This paragraph is very complex to me. Each point should be briefly explained to make sure the reader gets how RN features affect the spatial distribution of resources and organisms.I am not sure how to improve this, but maybe a more simple option would be to explain that the spatial distribution of resources and organisms are driven by different mechanisms: A) Resources: Depend on surrounding terrestrial landscape and are distributed by passive transport. B) Consumers: Depend on environmental filters and neutral processes (dispersal vs extinction). Some organisms have even active dispersal capacities and can recolonize upstream habitats. Actually, if using this approach, this paragraph´d help to introduce and justify the use of the reaction-transport model and meta-ecosystem model in the paper. Finally, after this paragraph, I think it´s necessary to give the reader an idea about the potential implications of the different spatial distribution of resources/consumers. For instance, introduce a little bit the concept of match/mismatch of diversity of resources and consumers and how this can affect to emergent properties of the ecosystem as productivity or functioning. This hierarchical branching structure that dictates the movement of material and species (NOTE - some consistency throughout is needed for this) can produce both quantitative and qualitative changes in the predictions of models when compared with terrestrial systems. 
For example, branching structure may promote ecosystem stability and act as a buffer against environmental change \autocite{Terui2018}. This unique property of \acp{fme} also generates opportunities; models that are able to incorporate the structure of river networks are well-positioned to test hypotheses about how the spatial organisation of rivers within a river network influences meta-ecosystem properties such as alpha and beta diversity and resource fluxes at the scale of the entire river network.} \fitem{Increasing data availability and need of models to catch up} {The availability of comprehensive data on the biodiversity and resource components is rapidly increasing thanks to technological developments such as DNA metabarcoding and high resolution mass spectrometry (other examples?) (cite). These enable the creation of datasets that are unprecedented in both breadth---covering a great spatial extent, such as entire river networks---and depth---capturing much more biological and chemical diversity at a single site than has been possible \autocite{Altermatt2020}. However, we still lack comprehensive theory and models that can provide the context for these datasets at the appropriate spatio-temporal scales \autocite{Gounand2018}.} \fitem{Objectives/summary of paper} {Here, we develop a model that explicitly couples the distribution and fluxes of both resources and species in \acp{fme}. We construct this model by joining two commonly-used model types in fluvial ecosystem and community ecology: reaction-transport models (for resources), and meta-community models (for species). By linking these two models, we allow for feedbacks between the biological community and resources. These feedbacks are essential to enable a more mechanistic understanding of the connections between the processes underlying community assembly, resulting biodiversity patterns, and ecosystem function. Moreover, the spatial structure of the river network allows us to test emergent properties related to the structure of the network. For example, changes to network structure via damming or due to flow intermittency can be evaluated in-silico, or theoretical predictions such as the relationship between branching complexity and ecosystem stability \autocite{Terui2018} can be evaluated.} \end{itemize} \section{Materials \& Methods} \subsection{Model Description} Our conceptual starting point for modelling the biological community is metapopulation theory \autocite{Levins1969}. The classic metapopulation model tracks the number of occupied patches $p$ in a landscape composed of $h$ available patches as a function of the rates of colonisation and extinction of local populations within those patches: \begin{equation} \frac{dp}{dt} = cp \left( h - p \right) - pm \label{eq:levins} \end{equation} Occupied patches ($p$) experience extinction according to the extinction rate $m$, while unoccupied patches ($h-p$) are colonised according to the colonisation rate $c$. The additional $p$ in the colonisation term takes dispersal into account; as more patches are occupied in the metapopulation as a whole, unoccupied patches are colonised more quickly due to increased dispersal from the occupied patches. 
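To build intuition for eqn. \ref{eq:levins} before extending it, it is worth noting the equilibrium that follows directly from it: setting $dp/dt = 0$ and discarding the trivial solution $p = 0$ gives
\[ c\hat{p}\left(h - \hat{p}\right) = \hat{p}m \quad \Rightarrow \quad \hat{p} = h - \frac{m}{c}, \]
so a species persists in the metapopulation ($\hat{p} > 0$) only when $ch > m$, i.e., when colonisation outpaces extinction across the available habitat.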
\textcite{Hunt2009} extended this model to multi-species communities by adding competition to the extinction term:
\begin{equation}
\frac{\partial p_i}{\partial t} = c_i p_i \left( h-p_i \right) - p_i \left( \sum_{j \in S \setminus \left\{i \right\} }{m_{ij}p_j} + m_i \right)
\label{eq:hunt}
\end{equation}
where the subscript $i$ indicates a focal species, the subscript $j$ a competitor, and $S \setminus \left\{i \right\}$ is the set of species in the local community excluding the focal species $i$. Here, the extinction rate is broken into two terms. The first is a species-specific intrinsic extinction rate $m_i$, representing, e.g., stochastic extinctions that are unrelated to other species in the local community. The $m_{ij}$ term is the effect of competition between $i$ and $j$ on the extinction rate of $i$ (multiplied by $p_j$ because competition only occurs when species $j$ is also present). For clarity of notation, we hereafter assume all parameters are specific to a target species $i$ and thus omit the subscript.
An important aspect of metacommunity dynamics is dispersal, which in the classic model is incorporated into the colonisation term using the prevalence $p_i$. In rivers, hydrological connections among habitat patches facilitate passive dispersal in the water (cite). However, many organisms are capable of active dispersal, either overland or upstream along the streambed, and even organisms that cannot move themselves can be carried upstream by various vectors (cite). (fuss: Kristiansen 1996 in Hydrobiologia (DOI 10.1007/BF00010829) reviews dispersal ways of freshwater algae. E.g. wind, animals, humans) For simplicity, we refer to all of these forms of dispersal as ``active'' dispersal. Because these two dispersal mechanisms are likely to occur at quite different rates, and because the rates will likely vary among different types of organisms, we break the dispersal portion of the colonisation term into active ($\alpha$) and passive ($\beta$) components, with the passive component additionally weighted by the discharge $Q$:
\begin{equation}
\frac{\partial p}{\partial t} = c p(\alpha + \beta Q) \left( h-p \right) - p \left( \sum_{j \in S \setminus \left\{i \right\} }{m_{j}p_j} + m \right)
\label{eq:metacom}
\end{equation}
In order to produce a useful model where the meta-community interacts with the environment, it is necessary to further extend this model to incorporate local (and potentially dynamic) non-biological conditions in specific patches. More recent theoretical \autocite{Holt2000,Holt2005} and empirical \autocite{Talluto2017} work has extended the single-species Levins model (eqn. \ref{eq:levins}) by fitting the colonisation and extinction rates as functions of local climatic conditions. The result is a dynamic range model with long-term occupancy driven by the balance of local colonisation and extinction \autocite{Talluto2017}. We combine this approach with the multi-species Hunt model (eqn. \ref{eq:hunt}) by redefining the $c$ and $m$ terms to be functions of the quality $q_x$ of a focal patch $x$:
\begin{equation}
\begin{split}
c_{x} &= f(q_{x}) \label{eq:talluto} \\
m_{x} &= g(q_{x})
\end{split}
\end{equation}
The shape of these two functions is flexible; for example, \textcite{Talluto2017} defined them as quadratic functions of local climate conditions. Our more general approach can incorporate local climatic and/or habitat conditions, as well as dynamic resource concentrations, into the $q$ term. We explore these possibilities further in the case studies.
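As a concrete illustration of eqn. \ref{eq:talluto} (one possible choice among many, and essentially the parameterisation adopted in the implementation described below), the colonisation function could be Gaussian in patch quality while the extinction function is held constant:
\[ c_x = c_{max} \exp\left(-\frac{\left(q_x - q_{opt}\right)^2}{2\tau^2}\right), \qquad m_x = m_0, \]
where $q_{opt}$ is the niche location (optimum), $\tau$ is the niche breadth, and $c_{max}$ and $m_0$ are scale parameters setting the maximum colonisation rate and the background extinction rate, respectively.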
For now, we consider the case where $q_x = R_x$, the concentration of an essential resource. We can use a simple reaction-transport model \autocite{Soetaert2009} to describe the fluctuations of this resource concentration in time (the subscript $x$ has been omitted for clarity).
\begin{equation}
\frac{\partial R}{\partial t} = \sum_{i \in S}{R\rho_i(R)} -\frac{QR - \sum Q_u R_u}{A l}
\label{eq:rxn_transport}
\end{equation}
The right-hand term gives the loss due to advective transport, where $Q$ is the discharge of the focal patch (in volumetric units per unit time; we use m$^3$ s$^{-1}$ throughout), $Q_u$ and $R_u$ are the discharge and resource concentrations of upstream patches (including lateral input, if relevant), $A$ is the cross-sectional area (m$^2$), and $l$ is the stream length of the patch (m). The left-hand term is the reaction component. \marginnote{RDC: Functions such as organic carbon degradation rely greatly on water residence time. The higher the residence time, the higher the rate can be. So I am wondering, should the reaction component also be weighted by Q? Or does the right-hand term of the equation already account for the effect of residence time? I.e.: the higher Q, the higher the downstream transport, and therefore the lower the reaction component?} \marginnote{MT: I think this is already accounted for. Increase Q, and you must decrease R if mass is constant. Reaction rate depends on concentration (which is a sensible mechanism to me). But open to other implementations}
We consider here only reaction due to biological activity, and postulate that the net reactive change in a patch is the sum of a set of resource use functions, $\rho_i(R)$, of all species in the local community $S$. Each $\rho_i$ function describes the impact of a species on the resource in units of $\mathrm{s}^{-1}$. The forms of these functions will depend on which resources are being modelled.
\subsection{Implementation}
We provide an implementation of the model in an R package, \textbf{flume} (standing for FLUvial Meta-Ecosystem model), available from (), and demonstrate some of its capabilities via case studies with code provided. In this section we describe some of the implementation details that are flexible in the formulation of the model above, but for which we have made specific choices in the software.
\subsubsection{Patch quality}
Patch quality is defined by two functions describing the impact of quality on colonisation and extinction (eqn. \ref{eq:talluto}). We assumed that a species' colonisation niche (i.e., the shape of the $c$ function) follows a Gaussian curve \autocite{austin1999}, and thus can be described by three parameters: location, breadth, and scale. Extinctions in the model are constant with respect to the environment, controlled only by a scale parameter determining the overall background rate of stochastic extinctions.
\subsubsection{Resource use functions}
In principle, we expect that a species' total resource consumption at a site will be strongly correlated with abundance \autocite{diaz2003,winfree2015}. Although we do not track abundance in our model, we can make the simplifying assumption that abundances will be higher when species are closer to their niche optima \autocite{holt1997}, but see \textcite{mcgill2012}. Thus, the consumption $\rho_i(R_{x})$ for species $i$ at site $x$ (eqn. \ref{eq:rxn_transport}) will be proportional to the overall niche height at that site, $c_{i,x} - m_{i,x}$ (eqn. \ref{eq:talluto}).
We further allow that species vary in their overall ability to use resources, scaling the resource use function by a species-specific consumption constant $r_i$:
\begin{equation}
\rho_i(R_x) =
\begin{cases}
r_i \frac{c_{i,x} - m_{i,x}}{\mathrm{max}(c_i - m_i)} & \quad c > m \\
0 & \quad c \leq m
\end{cases}
\end{equation}
Here we scale the niche height by the maximum niche height, which allows us to interpret this term as the fraction of a species' maximum consumption rate and to interpret $r_i$ as that maximum rate. Finally, we assume that when a species is outside its equilibrium habitat (i.e., when $m > c$; \autocite{Talluto2017}), resource consumption is negligible and can be ignored.
\subsubsection{Stochastic simulations}
For analysis, we discretised the model in space and time. We thus consider a series of habitat patches representing stream reaches. Each reach is characterised by the state variable $R$, the resource concentration, and by the community vector $\mathbf{C}$, which gives the absence (denoted by zeroes) or presence (denoted by ones) of all possible species in the community. For a time interval $\Delta t$ we can then derive the probability of observing colonisations and extinctions for a species $i$ from eq. \ref{eq:metacom}:
\begin{equation}
\begin{split}
\mathrm{pr}\left( \mathbf{C}_{i, t+\Delta t} = 1 \mid \mathbf{C}_{i, t} = 0\right) &= 1 - \mathrm{e}^{-c_i p_i(\alpha_i + \beta_iQ) \Delta t} \\
\mathrm{pr}\left( \mathbf{C}_{i, t+\Delta t} = 0 \mid \mathbf{C}_{i, t} = 1\right) &= 1 - \mathrm{e}^{-\left( \sum{m_{ij}\mathbf{C}_{j, t}} + m_i \right)\Delta t}
\label{eq:ceprob}
\end{split}
\end{equation}
Here, the prevalence term $p_i$ indicates local prevalence, i.e., the number of occupied patches within dispersal range. We consider only the case of nearest neighbour dispersal, so $p_i$ is the sum of the occupancy states of the patches immediately up- and downstream of the focal patch; longer-distance dispersal can be incorporated via the regional species pool (see §\ref{ss:initial-boundary}). Changes in resource concentration can be computed for each patch using a variety of numerical integration techniques. The flume package uses the lsoda algorithm from the deSolve R package \autocite{desolve}.
\subsection{Initial \& Boundary Conditions}
\label{ss:initial-boundary}
Because of the directional nature of the movement of resources (and, to a lesser extent, organisms) in the model, it is necessary to provide external input representing, e.g., the input of resources and organisms from the terrestrial matrix or from groundwater. For each habitat patch in the model, we implemented a virtual upstream patch representing this input. The resource concentrations and community composition of these patches are constant. Discharge from virtual patches is computed simply as the growth in discharge from one reach to the next moving downstream, according to the growth in catchment area and defined by hydraulic scaling relationships (cite). The actual resource concentration and community composition of the virtual patches can vary depending on modelling needs; for example, community composition could be uniform and contain all possible species to represent a classical ``regional species pool'' from metacommunity modelling, or resource concentrations could vary among headwaters or from upstream to downstream to represent land use gradients.
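To make the water balance of these virtual patches explicit (a restatement of the rule above in the notation of eqn. \ref{eq:rxn_transport}), the virtual patch attached to a focal reach carries whatever discharge is needed to close the mass balance,
\[ Q_{lat} = Q - \sum_{u} Q_u, \]
where $Q_{lat}$ denotes the lateral (virtual) input, $Q$ is the discharge of the focal reach, and the sum runs over its real upstream neighbours; for a headwater reach with no upstream neighbours, the virtual patch simply supplies the full discharge $Q$.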
\subsection{Case studies}
\subsubsection{Sample river network}
In order to demonstrate some features of the package and explore the behaviour of the model, we created a representation of the Kamp river in Austria to use as the spatial setting for the simulation (Fig. \ref{fig:kamp}). This choice is arbitrary, and any other network topology (or an artificial topology; CITE) could easily be used. The package comes with sample networks (including the Kamp), and a vignette is included to help users generate a network (CITE appendix).
\begin{figure}[tb]
\includegraphics[scale=1]{fig_kamp.pdf}
\caption{Schematic of the Kamp river network. Line weights are proportional to discharge, which ranges from 0.05 to 0.88 m$^3$ s$^{-1}$ for this network. Node hues indicate membership in either the left tributary (orange), the right tributary (purple), or the mainstem (turquoise). Dark colours indicate headwater nodes.}
\label{fig:kamp}
\end{figure}
\subsubsection{Competition-colonisation tradeoffs}
The competition-colonisation trade-off is a classical mechanism in community ecology that can promote species' coexistence in spatially structured habitats \autocite{tilman1994}. We implemented a simple (hypothetical) 5-species metacommunity competing for a single limiting resource, with traits varying from strong competitors to strong dispersers. For simplicity, we gave all species the same niche location (i.e., assuming that all species perform better at higher resource concentrations). The optima were set to the same concentration as the input concentration; thus all species can colonise sites in the absence of competition. Competitors had broader niches and smaller scales than dispersers, reflecting their superior performance at low resource concentrations and overall slower growth rate (Fig. \ref{fig:ex1_mcom}). Finally, competitors also had lower overall dispersal rates.
\begin{figure}[tb]
\includegraphics[scale=1]{fig_ex1_mcom.pdf}
\caption{Visualisation of species' niches. Niche height is the difference between colonisation and extinction rate and reflects the overall speed with which a species reaches a (dynamic) equilibrium with the environment in the absence of competition and with constant resource concentration. Species range from strong competitors (light colours, sp1) to strong colonisers (dark colours, sp5).}
\label{fig:ex1_mcom}
\end{figure}
We implemented competition purely indirectly for this scenario. In other words, species compete by reducing resources; strong competitors have the ability both to greatly reduce available resources and to survive at low resource concentrations. This only affects species' ability to colonise unoccupied sites (i.e., the $c_x$ term from eqn. \ref{eq:talluto}). Direct competition via the $m_j$ term (eqn. \ref{eq:metacom}) was set to zero for this scenario. We began the simulations with all sites empty, and with immigration allowed only from headwater reaches (i.e., dark colours in Figure \ref{fig:kamp}). Under these conditions, we expect to see an initial pulse of coloniser species, which rapidly move downstream and occupy empty sites. However, over time, they should be replaced by slow-dispersing competitors, which will reduce resource concentrations to levels not tolerable by colonisers. We then manipulated the strength of competition by gradually reducing species' resource use. We find.... To show how spatio-temporal structure can promote coexistence, we implemented two forms of disturbance.
For the first, we introduce a point source of pollution (i.e., a very high resource concentration) in a single headwater reach. As the pollution is diluted downstream, it creates stable conditions in which rapid dispersers can maintain low occupancy in the long term. For the second form of disturbance, we introduce regular large floods in a single tributary (FIG). These floods increase discharge 20-fold (and thus transport and dispersal) for a period of ten days. We also remove all species from affected sites during the floods to simulate the scouring of rocks and the high disturbance of existing habitats.
TODO: figure of the metacommunity niches
\subsubsection{Simulated Algal-N:P meta-ecosystem}
As an initial motivating example, we consider a simulated algal metacommunity, where species are differentiated along a single niche dimension. We use the nitrogen to phosphorus (N:P) ratio. This is a convenient example, as the N:P ratio is known to be quite important in algal communities (), and it can be easily modelled as a single scalar, thereby adding simplicity to the resource component of the model.
\begin{itemize}
\item niche descriptions/distribution (i.e., Gaussian curves)
\item strength of competition (overlap of curves)
\item resource use function (effect of algae on resource)---proportional to the height of the Gaussian curve at the existing N:P ratio, but what is the actual effect on N:P?
\item communities tested
\item landscapes tested
\end{itemize}
\subsubsection{Adding a habitat/niche dimension? (for Gabriel)}
\subsubsection{Vjosa case study?}
lukas.thuilebistarelli: I'm not sure if that fits well, but bacterial community composition and catchment area show a nice link. Also, it seems like mean diversity is increasing and SD is decreasing with catchment area. We actually also did a bit of this modelling in rstan. Could be a nice add-on, and if wanted we could link it to bacterial functioning as well.
\section{Results}
\section{Discussion}
\begin{itemize}
\item limitation: we don't consider abundance, only presence-absence, but abundance is probably pretty important for ecosystem functioning \autocite{diaz2003}. Although we compensate for this somewhat, it could be improved, at the cost of complicating the model and potentially making it less realistic (abundances are harder to model).
\end{itemize}
\printbibliography
\end{document}
{ "alphanum_fraction": 0.7950181321, "avg_line_length": 94.5, "ext": "tex", "hexsha": "c1e8e6f58dc62021d948e6dec48876f919c86678", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "76a36cda682121a90758f726f6bceea925e2ee03", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mtalluto/FLUFLUX_model", "max_forks_repo_path": "tex/model_ms.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "76a36cda682121a90758f726f6bceea925e2ee03", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mtalluto/FLUFLUX_model", "max_issues_repo_path": "tex/model_ms.tex", "max_line_length": 1228, "max_stars_count": null, "max_stars_repo_head_hexsha": "76a36cda682121a90758f726f6bceea925e2ee03", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mtalluto/FLUFLUX_model", "max_stars_repo_path": "tex/model_ms.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7535, "size": 33642 }
\documentclass{beamer} \usepackage{minted} \usepackage{hyperref} %\usetheme{vub} \usetheme[coloredtitles,coloredblocks]{vub} %\usetheme[showsection]{vub} \title{The VUB style package} \subtitle{The \LaTeX\ style package for the Vrije Universiteit Brussel} \author{Ruben De Smet} \AtBeginPart{\frame{\partpage}} \AtBeginSection{\frame{\sectionpage}} \AtBeginSubsection{\frame{\subsectionpage}} \begin{document} \frame{\titlepage} \begin{frame}{Outline} {\color{vubbleu}\large Part 1: regular documents} \tableofcontents[part=1] {\color{vubbleu}\large Part 2: slides using \texttt{beamer}} \tableofcontents[part=2] \end{frame} \part{regular documents} \begin{frame}[fragile]{Title page} Regular documents can use \begin{minted}{LaTeX} \usepackage{vub} \end{minted} This transforms \mint{LaTeX}|\maketitle| into a VUB-themed title page, complete with logo and triangle. \end{frame} \part{slides using \texttt{beamer}} \section{basic usage} \begin{frame}[fragile]{Setup} \framesubtitle{Loading the style} Create your regular \texttt{beamer} presentation, and use \begin{minted}{LaTeX} \usetheme{vub} \end{minted} This will load the relevant fonts, colors and images. \end{frame} \begin{frame}[fragile]{Setup} \framesubtitle{Title page as usual} \mint{LaTeX}|\maketitle| will generate a titlepage, or you can use \mint{LaTeX}|\titlepage| inside a \texttt{frame}, e.g. \begin{minted}{LaTeX} \frame{\titlepage} \end{minted} Both are standard \texttt{beamer} commands. \end{frame} \section{features} \begin{frame}[fragile]{Frame title comes here} \framesubtitle{and this is its subtitle} As usual, you can specify the subtitle using \begin{minted}{LaTeX} \framesubtitle{...} \end{minted} It will appear in a blue (``bleu'' of the ``oranje blanje bleu'' colors) bar \footnote{\url{http://huisstijl.vub.ac.be/styleguide/stijlelementen/stijlelementen-de-tekststrip/}} at the top of the slide. \smallskip The title itself will appear in an ``oranje'' bar. \end{frame} \begin{frame}{Blocks} \begin{block}{Default block} As usual, blocks can be used. The option ``coloredblocks`` makes them appear in VUB colors \end{block} \begin{theorem} The default color for blocks is blue but theorems appear in orange. \end{theorem} \begin{proof} Proof by visualisation. \end{proof} \end{frame} \begin{frame}{Very long text in a frame} \framesubtitle{with a subtitle} \emph{Lorem ipsum dolor} sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. \bigskip Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. \bigskip Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. \bigskip Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. \bigskip Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. \end{frame} \begin{frame}{Very long titles in a frame are not to be used, but at least they don't have a bug anymore.} \framesubtitle{I'd rather use a subtitle.} Please don't put sentences in slides. \end{frame} \end{document}
{ "alphanum_fraction": 0.7527607362, "avg_line_length": 24.5112781955, "ext": "tex", "hexsha": "e7eeea713c94304064945e6bcd5fb0541d7aac46", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0ead1ebc684a0979ffd68426b404d37dae935c8b", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "sthomer/ai-flanders_2020-07-15", "max_forks_repo_path": "tests/test_beamer.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0ead1ebc684a0979ffd68426b404d37dae935c8b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "sthomer/ai-flanders_2020-07-15", "max_issues_repo_path": "tests/test_beamer.tex", "max_line_length": 133, "max_stars_count": null, "max_stars_repo_head_hexsha": "0ead1ebc684a0979ffd68426b404d37dae935c8b", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "sthomer/ai-flanders_2020-07-15", "max_stars_repo_path": "tests/test_beamer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 962, "size": 3260 }
\documentclass[10pt]{article}
\usepackage[usenames]{color} %used for font color
\usepackage{amssymb} %maths
\usepackage{amsmath} %maths
\usepackage[utf8]{inputenc} %useful to type diacritic characters directly
\usepackage{tikz}

\begin{document}

\paragraph{Quotient Set Notation}
There are many ways to denote the quotient set (check the \emph{Book of Proof} for one way). A common method is to first fix a notation for an individual equivalence class. Say $[x] = \{ y \in C: x \sim y\}$, which is the set of all bundles in $C$ that are indifferent to $x$. Then we might create a class of sets (a set of sets) using the indexed collection of sets notation you covered in the prerequisite readings. For example, $\mathcal{I} = \{ [x] : x \in C\}$ where $[x]=\{ y \in C: x \sim y\}$. Remember that sets only contain distinct elements, so even though we ``loop'' through all values $x$, any two values $x,y$ such that $x \sim y$ will lead to $[x] = [y]$, and each distinct equivalence class will only show up once in the collection $\mathcal{I}$.

\paragraph{Problem 10}
Let $\succsim$ be a rational preference relation and define $\sim$ as the indifference relation where $x\sim y$ iff $[x\succsim y \wedge y\succsim x]$. Because $\succsim$ is rational, we know that it is complete, reflexive and transitive. To show $(x,x) \in \sim, \; \forall x$, we can refer to reflexivity, which gives $x\succsim x$; both conjuncts in the definition of $\sim$ are therefore true, so $x\sim x$. Symmetry requires that for any $x,y\in C$, if $(x,y)\in \sim$ then $(y,x) \in \sim$. We now know that $\sim$ is reflexive, and we still know that $\succsim$ is complete, reflexive and transitive; we use these facts to establish symmetry and transitivity of $\sim$.

Let $x,y\in C$ and suppose $x \sim y$. Then we know $[x\succsim y \wedge y\succsim x]$ is true by definition of $\sim$. Because the order of the arguments in a conjunction does not alter its truth value, we can write $[y\succsim x \wedge x \succsim y]$, which is the definition of $y\sim x$.

For transitivity, suppose $x,y,z \in C$ and $[x\sim y \wedge y \sim z]$. Then by definition $[x\succsim y \wedge y\succsim x]$ and $[y\succsim z \wedge z\succsim y]$. Hence we have $x \succsim y \succsim z$, which by transitivity of $\succsim$ implies $x\succsim z$, and we also have $z\succsim y \succsim x$, which by transitivity of $\succsim$ implies $z\succsim x$. Since we have $[x\succsim z \wedge z\succsim x]$, we see that $x \sim z$ and therefore $\sim$ is transitive.

\end{document}
{ "alphanum_fraction": 0.7235387046, "avg_line_length": 76.7272727273, "ext": "tex", "hexsha": "8e9fd4bf98683098b62d4c53d7ad73c25bfc4ee9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "joepatten/joepatten.github.io", "max_forks_repo_path": "assets/pdfs/math_bootcamp/2016/HW1Prob10Suggest.tex", "max_issues_count": 5, "max_issues_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_issues_repo_issues_event_max_datetime": "2020-08-10T14:48:57.000Z", "max_issues_repo_issues_event_min_datetime": "2020-08-09T16:28:31.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "joepatten/joepatten.github.io", "max_issues_repo_path": "assets/pdfs/math_bootcamp/2016/HW1Prob10Suggest.tex", "max_line_length": 257, "max_stars_count": null, "max_stars_repo_head_hexsha": "4b9acc8720f3a33337368fee719902b54a6f2f68", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "joepatten/joepatten.github.io", "max_stars_repo_path": "assets/pdfs/math_bootcamp/2016/HW1Prob10Suggest.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 774, "size": 2532 }
\chapter{Unresolved issues}
\label{chap:unresolved}

\begin{issues}

\iss{1}{Can symbol resolution rules be simplified?}
Currently the rules for symbol resolution (see section~\ref{sec:ref-symbol-resolution}) define the class of symbol that can be referred to. For example, in the GeneralCovariate parameter definition, we restrict any equation defined there to only reference parameters and covariates --- random variables are prohibited. This ensures that this part of the \pharmml document is used correctly, but is it too restrictive?

\iss{2}{More work to define operations and algorithms in \pharmml.}
There is no specification of which estimation operations and algorithms should be supported by \pharmml. Ideally, algorithm definitions will be supported by external resources such as KiSAO (\url{http://biomodels.net/kisao/}), but there is no support there yet.

\iss{3}{The way we map a dataset column to an independent variable is not consistent.}
In the Estimation Step we map to the independent variable symbol (t) using a \xelem{SymbRef} element, and in the Trial Design we use the \xelem{IndependentVariableMapping} element. It would simplify the rules if we were consistent.

\iss{4}{Units}
We had planned to introduce units into this release using the mechanism adopted by SBML. However, that approach does not enable the encoding of temperature in either Fahrenheit or Celsius (because these conversions require the addition of a constant). As a result, SBML only allows temperature in Kelvin. Do we wish to follow their approach or try to find a different solution?

\iss{5}{Interpolation}
When estimating from experimental data it is often necessary to use time-points between those for which we have experimental data. Software tools interpolate between the known data-points to obtain a value, but of course there is more than one way to do this. The approach taken is tool specific, so the question is: do we wish to specify which interpolation method is used in the Estimation Step?

\iss{6}{Use of the \xatt{columnNum} in the dataset definition.}
In version 0.1.0 of \pharmml the dataset was read from a tabular ASCII file, such as a tab-delimited file. Because of this, we used the \xatt{columnNum} to map the column in the data-file to that in the dataset. Now that the data is defined in XML, this use of the \xatt{columnNum} no longer applies, and the attribute has been reinterpreted to define the order of the column in the dataset definition. This is superfluous, as the column order could easily be defined by the order in the XML document, as it is for the contents of the \xelem{Row} element.

\end{issues}
{ "alphanum_fraction": 0.7970126388, "avg_line_length": 217.5833333333, "ext": "tex", "hexsha": "29008bd6ec0a685591126250839c3adea96f9746", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "pharmml/pharmml-spec", "max_forks_repo_path": "input/unresolved_issues.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "pharmml/pharmml-spec", "max_issues_repo_path": "input/unresolved_issues.tex", "max_line_length": 607, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b102aedd082e3114df26a072ba9fad2d1520e25f", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "pharmml/pharmml-spec", "max_stars_repo_path": "input/unresolved_issues.tex", "max_stars_repo_stars_event_max_datetime": "2018-01-26T13:17:54.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-26T13:17:54.000Z", "num_tokens": 588, "size": 2611 }
\documentclass{article}
\usepackage[margin=1in]{geometry}
\setlength{\parindent}{0in}
\usepackage[utf8]{inputenc}
\usepackage{latexsym,amsfonts,amssymb,amsthm,amsmath}
\DeclareMathOperator{\Tr}{Tr}
\DeclareMathOperator{\Id}{Id}
\usepackage{braket}
\usepackage{mathrsfs}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}

\title{Quantum Computing - Assignment 3}
\author{Kishlaya Jaiswal}

\begin{document}
\maketitle

\subsection*{Exercise 1}
Let $\ket{\psi}_{AB} \in H_A \otimes H_B$ be a pure state. Assume without loss of generality that $\dim(H_A) \geq \dim(H_B) = m$. By the Schmidt decomposition there exist orthonormal bases $\{u_i \mid i \leq m\} \subset H_A$ and $\{v_i \mid i \leq m\} \subset H_B$ such that $$\ket{\psi}_{AB} = \sum_i \alpha_i \ket{u_i} \otimes \ket{v_i}$$

\textbf{Claim} $\ket{\psi}_{AB}$ is entangled iff more than one of the Schmidt coefficients $\{\alpha_i \mid i \leq m\}$ is non-zero.
\begin{proof}
Firstly, it is clear that at least one Schmidt coefficient is non-zero, because otherwise $\ket{\psi}_{AB} = 0$, which is not a valid pure state. \\
Now suppose exactly one Schmidt coefficient $\alpha_i$ is non-zero; then $$\ket{\psi}_{AB} = \alpha_i \ket{u_i}_A \otimes \ket{v_i}_B$$ is a product state.

Conversely, suppose more than one Schmidt coefficient is non-zero, say $\alpha_i$ and $\alpha_j$ where $i \neq j$. We need to show that $\ket{\psi}_{AB}$ is entangled. For the sake of contradiction, assume that $\ket{\psi}_{AB} = \ket{\phi}_A \otimes \ket{\varphi}_B$ is a product state. Then we have
\begin{align*}
\ket{\phi}_A \otimes \ket{\varphi}_B &= \sum_k \alpha_k \ket{u_k} \otimes \ket{v_k} \\
\implies \big(\bra{u_i} \otimes \bra{v_i}\big) \big(\ket{\phi}_A \otimes \ket{\varphi}_B\big) &= \big(\bra{u_i} \otimes \bra{v_i}\big) \left(\sum_k \alpha_k \ket{u_k} \otimes \ket{v_k}\right) \\
\implies \braket{u_i | \phi} \braket{v_i | \varphi} &= \alpha_i
\end{align*}
Similarly, $\braket{u_j | \phi} \braket{v_j | \varphi} = \alpha_j$. Furthermore,
\begin{align*}
\ket{\phi}_A \otimes \ket{\varphi}_B &= \sum_k \alpha_k \ket{u_k} \otimes \ket{v_k} \\
\implies \big(\bra{u_i} \otimes \bra{v_j}\big) \big(\ket{\phi}_A \otimes \ket{\varphi}_B \big) &= \big(\bra{u_i} \otimes \bra{v_j} \big) \left(\sum_k \alpha_k \ket{u_k} \otimes \ket{v_k}\right) \\
\implies \braket{u_i | \phi} \braket{v_j | \varphi} &= 0
\end{align*}
Thus, since the factors are scalars and can be regrouped, we get $\alpha_i \alpha_j = \braket{u_i | \phi} \braket{v_i | \varphi} \braket{u_j | \phi} \braket{v_j | \varphi} = \big(\braket{u_i | \phi} \braket{v_j | \varphi}\big)\big(\braket{u_j | \phi} \braket{v_i | \varphi}\big) = 0$. That is, $\alpha_i$ and $\alpha_j$ cannot both be non-zero, which contradicts our assumption.
\end{proof}

\subsection*{Exercise 2}
\begin{proof}
We will prove that the trace is commutative, that is, $\Tr(AB)= \Tr(BA)$:
\begin{align*}
\Tr(AB) &= \sum_i \bra i AB \ket i \\
&= \sum_i \bra i A \left(\sum_j \ket j \bra j \right)B \ket i \\
&= \sum_i \sum_j \bra i A \ket j \bra j B \ket i \\
&= \sum_j \sum_i \bra j B \ket i \bra i A \ket j \\
&= \sum_j \bra j B \left( \sum_i \ket i \bra i \right) A \ket j \\
&= \sum_j \bra j BA \ket j \\
&= \Tr(BA)
\end{align*}
Now $\Tr(A(BC)) = \Tr((BC)A)$ and $\Tr(B(CA)) = \Tr((CA)B)$.
Since matrix multiplication is associative, we get $\Tr(ABC) = \Tr(BCA) = \Tr(CAB)$ \\ \begin{align*} \Tr_B (\ket{x_1} \bra{x_2}_A \otimes \ket{y_1} \bra{y_2}_B) &= \sum_i I_A \otimes \bra{i}_B \bigg( \ket{x_1} \bra{x_2}_A \otimes \ket{y_1} \bra{y_2}_B \bigg) I_A \otimes \ket{i}_B \end{align*} Using the rule $(A \otimes B)(C \otimes D) = AC \otimes BD$, we get \begin{align*} \Tr_B (\ket{x_1} \bra{x_2}_A \otimes \ket{y_1} \bra{y_2}_B) &= \sum_i \ket{x_1} \bra{x_2}_A \otimes \bra{i}_B \ket{y_1} \bra{y_2}_B \ket{i}_B \\ &= \ket{x_1} \bra{x_2}_A \otimes \sum_i \bra{i}_B \ket{y_1} \bra{y_2}_B \ket{i}_B \\ &= \ket{x_1} \bra{x_2}_A \otimes \Tr (\ket{y_1} \bra{y_2}_B) \\ &= \ket{x_1} \bra{x_2}_A \otimes \braket{y_2 | y_1} \text{ (cyclic property of the trace)} \\ &= \ket{x_1} \bra{x_2}_A \braket{y_2 | y_1} \end{align*} \end{proof} \subsection*{Exercise 3} Let $N : \mathscr{L}(H) \to \mathscr{L}(H)$, $N(\rho) = (1-p)\rho + p Z \rho Z$. We will show that $N$ is a quantum channel, that is, $N$ is a linear, trace-preserving and completely positive operator. \\ \textbf{Claim} $N$ is linear \begin{proof} \begin{align*} N(\rho_1 + \lambda \rho_2) &= (1-p)(\rho_1 + \lambda \rho_2) + p Z (\rho_1 + \lambda \rho_2) Z \\ &= (1-p)(\rho_1) + (1-p)(\lambda \rho_2) + p Z (\rho_1) Z + pZ(\lambda \rho_2) Z \\ &= ((1-p)\rho_1 + p Z \rho_1 Z) + \lambda ((1-p)\rho_2 + p Z \rho_2 Z) \\ &= N(\rho_1) + \lambda N(\rho_2) \end{align*} \end{proof} \textbf{Claim} $N$ is trace preserving \begin{proof} \begin{align*} \Tr (N(\rho)) &= \Tr ((1-p)\rho + p Z \rho Z) \\ &= (1-p) \Tr (\rho) + p \Tr (Z \rho Z) \\ &= (1-p) \Tr (\rho) + p \Tr (Z^2 \rho) \\ &= (1-p) \Tr (\rho) + p \Tr (\rho) \\ &= \Tr (\rho) \end{align*} \end{proof} \textbf{Claim} $\forall E$, $\Id_E \otimes N : \mathscr{L}(H_E \otimes H) \to \mathscr{L}(H_E \otimes H)$ is a positive operator \begin{proof} Let $\theta \in \mathscr{L}(H_E \otimes H)$ be any PSD matrix. We want to show that $(\Id_E \otimes N)(\theta)$ is also a PSD matrix. Since $\theta$ is a PSD matrix, we have $\braket{x|\theta|x} \geq 0$ for all $x$. Identify $\mathscr{L}(H_E \otimes H) \cong \mathscr{L}(H_E) \otimes \mathscr{L}(H)$ and fix any basis $\{\sigma_i \mid i \leq m\}$ for $\mathscr{L}(H_E)$ and $\{\rho_j \mid j \leq n\}$ for $\mathscr{L}(H)$; then $\theta = \sum_{i,j} c_{ij} (\sigma_i \otimes \rho_j)$ for some $c_{ij} \in \mathbb{C}$. $$(\Id_E \otimes N)(\theta) = \sum_{i,j} c_{ij} (\Id_E \otimes N)(\sigma_i \otimes \rho_j) = \sum_{i,j} c_{ij} \sigma_i \otimes N(\rho_j)$$ \begin{align*} \braket{x | \Id_E \otimes N(\theta) | x} &= \sum_{i,j} c_{ij} \braket{x | \sigma_i \otimes N(\rho_j) | x} \\ &= \sum_{i,j} c_{ij} (1-p) \braket{x | \sigma_i \otimes \rho_j | x} + \sum_{i,j} c_{ij} p \braket{x | \sigma_i \otimes Z\rho_j Z | x} \\ \end{align*} But we can re-write $$\sigma \otimes Z \rho Z = (I_E \otimes Z)(\sigma \otimes \rho)(I_E \otimes Z)$$ $$\implies \braket{x|\sigma \otimes Z \rho Z|x} = \braket{y|\sigma \otimes \rho|y}$$ where $\ket{y} = (I_E \otimes Z) \ket{x}$ (because $I_E \otimes Z$ is Hermitian). Hence we get $$\braket{x | \Id_E \otimes N(\theta) | x} = (1-p) \sum_{i,j} c_{ij} \braket{x | \sigma_i \otimes \rho_j | x} + p \sum_{i,j} c_{ij} \braket{y | \sigma_i \otimes \rho_j | y} = (1-p)\braket{x|\theta|x} + p\braket{y|\theta|y} \geq 0$$ Since $E$ was arbitrary, $\Id_E \otimes N$ is positive for every $E$; that is, $N$ is a completely positive operator. \end{proof} % First we show that $N$ is a positive map. Let $v$ be any vector and $\rho$ be a PSD matrix, i.e.
$\braket{v|\rho|v} \geq 0$, then % \begin{align*} % \braket{v|N(\rho)|v} &= (1-p)\braket{v|\rho|v} + p \braket{v|Z \rho Z|v} \\ % &= (1-p)\braket{v|\rho|v} + p \braket{w| \rho |w} \text{ where } w = Z\ket{v}\\ % &\geq (1-p)0 + p0 = 0 % \end{align*} % Now consider any elementary PSD matrix $(\sigma \otimes \rho)$ and vector $\ket{u} \otimes \ket{v}$, then % \begin{itemize} % \item $(\bra{u} \otimes \bra{v})(\sigma \otimes \rho)(\ket{u} \otimes \ket{v}) = \bra{u} \sigma \ket{u} \bra{v} \rho \ket{v} \geq 0$ % \item and similarly, $\bra{u} \sigma \ket{u} \bra{w} \rho \ket{w} \geq 0$ where $w = Z\ket{v}$ % \end{itemize} % \begin{align*} % \braket{u \otimes v|\Id_E \otimes N(\sigma \otimes \rho)|u \otimes v} &= \braket{u \otimes v|\Id_E(\sigma) \otimes N(\rho)|u \otimes v} \\ % &= \braket{u \otimes v|\sigma \otimes N(\rho)|u \otimes v} \\ % &= \braket{u | \sigma | u} \braket{v | N(\rho) | v} \\ % &= (1-p) \braket{u | \sigma | u} \braket{v|\rho|v} + p \braket{u | \sigma | u} \braket{w| \rho |w} \text{ where } w = Z\ket{v}\\ % &\geq (1-p)0 + p0 = 0 % \end{align*} % For any general PSD matrix $\theta \in H_E \otimes H$ and any vector $x$, using Schmidt decomposition, we can write % \begin{itemize} % \item $\theta = \sum_i \alpha_i \sigma_i \otimes \rho_i$ where $\alpha_i \geq 0$ % \item $\ket{x} = \sum_i \beta_i \ket{u_i} \otimes \ket{v_i}$ where $\beta_i \geq 0$ % \end{itemize} % \begin{align*} % \braket{x | \Id_E \otimes N (\theta) | x} &= \sum_i \alpha_i \braket{x | \Id_E \otimes N(\sigma_i \otimes \rho_i) | x} \\ % &= \sum_i \alpha_i \sum_j \sum_k \beta_j \beta_k \braket{u_j \otimes v_j | \Id_E \otimes N(\sigma_i \otimes \rho_i) | u_k \otimes v_k} \\ % &= \sum_{i,j,k} \alpha_i \beta_j \beta_k \bigg(\braket{u_j \otimes v_j | \Id_E \otimes N(\sigma_i \otimes \rho_i) | u_k \otimes v_k}\bigg) \\ % &\geq 0 % \end{align*} % Hence $\Id_E \otimes N$ is a positive map for every $E$. That is $N$ is a completely positive map. We begin by noting the following relations: $$ZXZ = -X, ZYZ = -Y$$ because \begin{itemize} \item $ZXZ\ket0 = ZX\ket0 = Z\ket1 = - \ket1 = -X\ket0$ and $ZXZ\ket1 = -ZX\ket1 = -Z\ket0 = - \ket0 = -X\ket1$ \item $ZYZ\ket0 = ZY\ket0 = iZ\ket1 = - i\ket1 = -Y\ket0$ and $ZYZ\ket1 = -ZY\ket1 = iZ\ket0 = i\ket0 = -Y\ket1$ \\ \end{itemize} Thus, we get $N(X) = (1-p)X + pZXZ = (1-2p)X$ and $N(Y) = (1-p)Y + pZYZ = (1-2p)Y$ and $N(Z) = (1-p)Z + pZZZ = Z$. Hence \begin{align*} N\left(\frac12 (I + r_x X + r_y Y + r_z Z)\right) &= \frac12 (N(I) + r_x N(X) + r_y N(Y) + r_z N(Z)) \\ &= \frac12 (I + r_x (1-2p)X + r_y (1-2p) Y + r_z Z) \end{align*} \subsection*{Exercise 4} \textbf{Claim} Let $A$ be any linear operator over a $\mathbb{C}-$vector space, then we can write $A = B+iC$ for some $B,C$ Hermitian operators \begin{proof} Consider $B = (A+A^\dagger)/2$ and $C = (A-A^\dagger)/2i$ then $A = B+iC$ is clear. Moreover, $B^\dagger = (A^\dagger + A)/2 = B$ and $C^\dagger = -(A^\dagger-A)/2i = C$, as desired. \end{proof} \textbf{Claim} Let $A$ be a Hermitian operator then $\forall v$, $\braket{v|A|v} \in \mathbb{R}$ \begin{proof} Using spectral theorem, we know that there exists an orthonormal eigenbasis $\{v_i \mid i \leq n\}$ such that $Av_i = \lambda_i v_i$ where $\lambda_i \in \mathbb{R}$, $\forall i \leq n$ Now for any $v$, write $v = \sum_i c_i v_i$ and then $$\braket{v|A|v} = \sum_i |c_i|^2 \lambda_i \in \mathbb{R}$$ \end{proof} \textbf{Claim} Let $A$ be any linear operator over a $\mathbb{C}-$vector space such that $\forall v$, $\braket{v|A|v} \in \mathbb{R}$. Then $A$ is a Hermitian operator. 
\begin{proof} We write $A = B+iC$ with $B,C$ Hermitian. Since $B,C$ are Hermitian, $\braket{v|B|v} \in \mathbb{R}$ and $\braket{v|C|v} \in \mathbb{R}$. Therefore, for all $v$, $$\braket{v|A|v} = \braket{v|B|v} + i\braket{v|C|v} \in \mathbb{R} \implies \braket{v|C|v} = 0,$$ since $i\braket{v|C|v} = \braket{v|A|v} - \braket{v|B|v}$ is real, and a real number $\braket{v|C|v}$ with $i\braket{v|C|v}$ real must vanish. A Hermitian operator whose expectation value vanishes on every vector is the zero operator, so $C = 0$ and hence $A = B$, which is Hermitian. \end{proof} Finally, let $A$ be a positive operator, which means $\forall v$, $\braket{v|A|v} \geq 0$. In particular, $\braket{v|A|v} \in \mathbb{R}$ for all $v$. Using the above claim, we get that $A$ is Hermitian. \end{document}
{ "alphanum_fraction": 0.6097403793, "avg_line_length": 52.2211538462, "ext": "tex", "hexsha": "c6a903a571fa3f3a932baca58cdea3fc363d95c1", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1aa76e32d7e5059499a93359cb52118ccbf07028", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kishlaya/assignments", "max_forks_repo_path": "quantum_computing/assign3_soln.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1aa76e32d7e5059499a93359cb52118ccbf07028", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kishlaya/assignments", "max_issues_repo_path": "quantum_computing/assign3_soln.tex", "max_line_length": 307, "max_stars_count": 2, "max_stars_repo_head_hexsha": "1aa76e32d7e5059499a93359cb52118ccbf07028", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kishlaya/assignments", "max_stars_repo_path": "quantum_computing/assign3_soln.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-14T17:40:34.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-17T09:28:32.000Z", "num_tokens": 4527, "size": 10862 }
\section{Library Evaluation} This section presents the evaluation results of the TAkka library. We show that Wadler's type pollution problem can be avoided in a straightforward way by using TAkka. We further assess the TAkka library by porting examples written in Erlang and Akka. Results show that TAkka detects type errors without causing obvious runtime and code-size overheads. \subsection{Wadler\rq{}s Type Pollution Problem} \label{type_pollution} Wadler\rq{}s type pollution problem refers to the situation where a communication interface of a component publishes too much type information to another party, and consequently that party can send the component messages that the component does not expect from it. Without due care, actor-based systems constructed using the layered architecture or the MVC model \begin{comment}citation? MVC is a well-known model\end{comment} can suffer from the type pollution problem. One solution to the type pollution problem is using separate channels for distinct parties. Programming models that support this solution include the join-calculus \citep{full_join} and the typed $\pi$-calculus \citep{pi_book}. TAkka solves the type pollution problem by using polymorphism. Take the code template in Figure \ref{MVC} for example. Let {\tt V2CMessage} and {\tt M2CMessage} be the types of messages that the controller expects from the View and from the Model, respectively. Both {\tt V2CMessage} and {\tt M2CMessage} are subtypes of {\tt ControllerMsg}, the least general supertype covering all messages expected by the controller. In the template code, the controller publishes itself as different types to the view actor and the model actor. Therefore, the view and the model each know only their own communication interface with the controller. {\tt ControllerMsg} is a sealed trait, so users cannot define a subtype of {\tt ControllerMsg} outside the file and send the controller a message of an unexpected type. Although the type conversion in lines 22 and 23 could be omitted, we explicitly use {\tt publishAs} to express our intention and let the compiler check the type. The code template is used to implement the Tic-Tac-Toe example in the TAkka code repository. \begin{comment} \mycomment{I failed to found small-sized example that uses layered architecture. Therefore, I use the core structure of the Tik-Tak-Tok example here. I implement the Tik-Tak-Tok example for demonstration purpose. I want to keep the code size small, but the Tik-Tak-Tok example still has 291 lines of code.} \end{comment} \begin{figure}[h] \begin{lstlisting} sealed trait ControllerMsg class V2CMessage extends ControllerMsg class M2CMessage extends ControllerMsg trait C2VMessage case class ViewSetController(controller:ActorRef[V2CMessage]) extends C2VMessage trait C2MMessage case class ModelSetController(controller:ActorRef[M2CMessage]) extends C2MMessage class View extends TypedActor[C2VMessage] { private var controller:ActorRef[V2CMessage] // rest of implementation } class Model extends TypedActor[C2MMessage] { private var controller:ActorRef[M2CMessage] // rest of implementation } class Controller(model:ActorRef[C2MMessage], view:ActorRef[C2VMessage]) extends TypedActor[ControllerMsg] { override def preStart() = { model ! ModelSetController( typedSelf.publishAs[M2CMessage]) view !
ViewSetController( typedSelf.publishAs[V2CMessage]) } // rest of implementation } \end{lstlisting} \caption{Template for Model-View-Controller} \label{MVC} \end{figure} \subsection{Expressiveness} \label{expressiveness} Table \ref{express} lists the examples used for expressiveness checks. We selected examples from Erlang Quviq \citep{quviq} and open source Akka projects to ensure that the main requirements for actor programming are not unintentionally neglected. Examples from Erlang Quviq are re-implemented using both Akka and TAkka. Examples from Akka projects are re-implemented using TAkka. Following the suggestion in \citet{HePa06}, we assess the overall code modification and code size by calculating the geometric mean of all examples. The evaluation results in Table \ref{express} show that when porting an Akka program to TAkka, about 7.4\% of the lines of code need to be modified, including additional type declarations. Sometimes, the code size can be smaller because TAkka code does not need to handle unexpected messages. On average, the total program size of Akka and TAkka applications is almost the same. \begin{table*}[t] \begin{center} \begin{tabular}{| p{2.7 cm} | p{3.3 cm} | c | c | c | c | c |} \hline Source & Example & \specialcell{Akka Code \\ Lines} & \specialcell{Modified\\ TAkka Lines} & \specialcell{\% of \\Modified Code} & \specialcell{TAkka Code\\ Lines} & \specialcell{\% of \\Code Size} \\ \hline Quviq \citep{quviq} & ATM simulator & 1148 & 199 & 17.3 & 1160 & 101 \\ \cline{2-7} & Elevator Controller & 2850 & 172 & 9.3 & 2878 & 101 \\ \hline & Ping Pong & 67 & 13 & 19.4 & 67 & 100 \\ \cline{2-7} Akka & Dining Philosophers & 189 & 23 & 12.1 & 189 & 100 \\ \cline{2-7} Documentation\citep{akka_doc} & Distributed Calculator & 250 & 43 & 17.2 & 250 & 100 \\ \cline{2-7} & Fault Tolerance & 274 & 69 & 25.2 & 274 & 100 \\ \hline & Barber Shop \citep{BarberShop}& 754 & 104 & 13.7 & 751 & 99 \\ \cline{2-7} Other Open Source & EnMAS \citep{EnMAS} & 1916 & 213 & 11.1 & 1909 & 100 \\ \cline{2-7} Akka Applications & Socko Web Server \citep{SOCKO} & 5024 & 227 & 4.5 & 5017 & 100 \\ \cline{2-7} & Gatling \citep{Gatling} & 1635 & 111 & 6.8 & 1623 & 99 \\ \cline{2-7} & Play Core \citep{play_doc} & 27095 & 15 & 0.05 & 27095 & 100 \\ \hline geometric mean & & 991.7 & 71.6 & 7.4 & 992.1 & 100.0 \\ \hline \end{tabular} % } \caption{Results of Correctness and Expressiveness Evaluation} \end{center} \label{express} \end{table*} \begin{comment} \begin{table*}[t] \label{efficiency_description} \begin{center} \begin{tabular}{| l | p{10.5 cm} |} \hline Example & Description \\ \hline bang & This benchmark tests many-to-one message passing. The benchmark spawns a specified number sender and one receiver. Each sender sends a specified number of messages to the receiver.\\ \hline big & This benchmark tests many-to-many message passing. The benchmark creates a number of actors that exchange ping and pong messages. \\ \hline ehb & This is a benchmark and stress test. The benchmark is parameterized by the number of groups and the number of messages sent from each sender to each receiver in the same group. \\ \hline mbrot & This benchmark models pixels in a 2-D image. For each pixel, the benchmark calculates whether the point belongs to the Mandelbrot set. \\ \hline %parallel & This benchmark spawns a number of processes, where a list of %N timestamps is concurrently created. \\ %\hline %genstress & This benchmark is similar to the bang test. It spawns an %echo server and a number of clients.
Each client sends some dummy messages to %the server and waits for its response. The Erlang version of this test can be %executed with or without using the gen\_server behaviour. For generality, this %benchmark only tests the version without using gen\_server. \\ %\hline ran & This benchmark spawns a number of processes. Each process generates a list of ten thousand random integers, sorts the list and sends the first half of the result list to the parent process. \\ \hline serialmsg & This benchmark tests message forwarding through a dispatcher. \\ \hline \end{tabular} % } \caption{Examples for Efficiency and Scalability Evaluation} \end{center} \end{table*} \hline timer\_wheel & This benchmark is a modification to the big test. While responding to ping messages, a process in this message also waits pong messages. If no pong message is received within the specified timeout, the process terminates itself. \\ \end{comment} A type error is reported by the compiler when porting the Socko example \citep{SOCKO} from its Akka implementation to its equivalent TAkka implementation. SOCKO is a library for building event-driven web services. The SOCKO designer defines a {\tt SockoEvent} class to be the supertype of all events. One subtype of {\tt SockoEvent} is {\tt HttpRequestEvent}, representing events generated when an HTTP request is received. The designer further implements subclasses of the {\tt Method} class, whose {\tt unapply} method is intended to have an output of type {\tt Option[HttpRequestEvent]}. The SOCKO designer made a type error in the method declaration so that the {\tt unapply} has output type {\tt Option[SockoEvent]}. The type error is not exposed in test examples because those examples only test HTTP events. Fortunately, the design flaw is exposed when upgrading the SOCKO implementation using TAkka. \subsection{Efficiency, Throughput, and Scalability} \label{efficiency} The TAkka library is built on top of Akka so that code for shared features can be re-used. The three main sources of overhead in the TAkka implementation are: (i) the cost of adding an additional operational layer on top of Akka code, (ii) the cost of constructing type descriptors, and (iii) the cost of transmitting type descriptors in distributed settings. We assess the upper bound of the cost of the first two factors with a micro-benchmark that measures the time for initializing {\it n} instances of {\tt MyActor} defined in Figure \ref{fig:akkastring} and Figure \ref{takkastring}. When {\it n} ranges from $10^4$ to $10^5$, the TAkka implementation is about 2 times slower than the Akka implementation. The cost of the last factor is close to the cost of transmitting the string representation of fully qualified type names. The JSON serialization example \citep{techempower} is used to compare the throughput of 4 web services built using Akka Play, TAkka Play, Akka Socko, and TAkka Socko. For each HTTP request, the example gives an HTTP response with pre-defined content. All web services are deployed to Amazon EC2 Micro instances (t1.micro), which have 0.615 GB of memory. The throughput is tested with up to 16 EC2 Micro instances. For each number of EC2 instances, 10 rounds of throughput measurement are executed to gather the average and standard deviation of the throughput. The results reported in Figure \ref{throughput} show that web servers built using the Akka-based library and the TAkka-based library have similar throughput.
We further investigated the speed-up of multi-node TAkka applications by porting six micro-benchmark examples from the BenchErl benchmarks in the RELEASE project \citep{RELEASE}. Each BenchErl benchmark spawns one master process and many child processes for a tested task. Each child process is asked to perform a certain amount of computation and report the result to the master process. The benchmarks are run on a 32-node Beowulf cluster at Heriot-Watt University. Each Beowulf node comprises eight Intel 5506 cores running at 2.13 GHz. All machines run under Linux CentOS 5.5. The Beowulf nodes are connected with a Baystack 5510-48T switch with 48 10/100/1000 ports. Figures \ref{runtime} and \ref{scalability} report the results of the BenchErl benchmarks. We report the average and the standard deviation of the run-time of each example. Depending on the ratio of computation time to I/O time, the benchmark examples scale to different degrees. In all examples, the TAkka and Akka implementations have almost identical run-time and scalability. \begin{figure*}[h] \begin{center} \subfigure[]{ \label{fig:play_throughput} \includegraphics[scale=0.38]{Play_throughput.png} } \subfigure[]{ \label{fig:socko_throughput} \includegraphics[scale=0.38]{Socko_throughput.png} } \end{center} \caption{Throughput Benchmarks} \label{throughput} \end{figure*} \begin{figure*}[h] \begin{center} \subfigure[]{ \label{fig:2_1} \includegraphics[scale=0.25]{Bang_time.png} } \subfigure[]{ \label{fig:2_2} \includegraphics[scale=0.25]{Big_time.png} } \subfigure[]{ \label{fig:2_3} \includegraphics[scale=0.25]{EHB_time.png} }\\ \subfigure[]{ \label{fig:2_5} \includegraphics[scale=0.25]{MBrot_time.png} } \subfigure[]{ \label{fig:2_7} \includegraphics[scale=0.25]{RUN_time.png} } \subfigure[]{ \label{fig:2_8} \includegraphics[scale=0.25]{SerialMsg_time.png} }\\ \end{center} \caption{Runtime Benchmarks} \label{runtime} \end{figure*} \begin{figure*}[p] \begin{center} \subfigure[]{ \label{fig:2_9} \includegraphics[scale=0.25]{Bang_speedup.png} } \subfigure[]{ \label{fig:2_10} \includegraphics[scale=0.25]{Big_speedup.png} } \subfigure[]{ \label{fig:2_11} \includegraphics[scale=0.25]{EHB_speedup.png} }\\ \subfigure[]{ \label{fig:2_13} \includegraphics[scale=0.25]{MBrot_speedup.png} } \subfigure[]{ \label{fig:2_15} \includegraphics[scale=0.25]{RUN_speedup.png} } \subfigure[]{ \label{fig:2_16} \includegraphics[scale=0.25]{SerialMsg_speedup.png} }\\ \end{center} \caption{Scalability Benchmarks} \label{scalability} \end{figure*}
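The quantities plotted in Figures \ref{runtime} and \ref{scalability} are simple summary statistics over repeated measurements. As an illustration of this post-processing step only (this is not the benchmark code itself, and the data layout and names below are assumptions made for the example), the mean run-time, its standard deviation, and the speed-up relative to the smallest measured node count could be computed as follows:
\begin{lstlisting}[language=Python]
# timings[n] holds repeated run-times (in seconds) measured on n nodes.
from statistics import mean, stdev

def summarise(timings):
    """Mean and standard deviation of the run-time per node count."""
    return {n: (mean(ts), stdev(ts) if len(ts) > 1 else 0.0)
            for n, ts in sorted(timings.items())}

def speedup(timings):
    """Speed-up relative to the smallest node count that was measured."""
    means = {n: mean(ts) for n, ts in timings.items()}
    baseline = means[min(means)]
    return {n: baseline / t for n, t in sorted(means.items())}

# Example with made-up numbers for a benchmark that scales well:
timings = {1: [120.3, 119.8, 121.0], 8: [17.2, 16.9, 17.5], 32: [5.1, 5.3, 5.0]}
print(summarise(timings))
print(speedup(timings))
\end{lstlisting}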
{ "alphanum_fraction": 0.7112318582, "avg_line_length": 39.2893258427, "ext": "tex", "hexsha": "05f810171d14e4b7144434bc97d196216b805743", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_forks_repo_licenses": [ "BSD-Source-Code" ], "max_forks_repo_name": "Jiansen/TAkka", "max_forks_repo_path": "s1024484/Paper/Scala2014/evaluation.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-Source-Code" ], "max_issues_repo_name": "Jiansen/TAkka", "max_issues_repo_path": "s1024484/Paper/Scala2014/evaluation.tex", "max_line_length": 98, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d2410190552aeea65c1da5f0ae05f08ba1f4d102", "max_stars_repo_licenses": [ "BSD-Source-Code" ], "max_stars_repo_name": "Jiansen/TAkka", "max_stars_repo_path": "s1024484/Paper/Scala2014/evaluation.tex", "max_stars_repo_stars_event_max_datetime": "2019-06-27T06:36:09.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-11T14:35:53.000Z", "num_tokens": 3629, "size": 13987 }
\chapter{RISC-V Assembly Programmer's Handbook} \label{assembly} This chapter is a placeholder for an assembly programmer's manual. Table~\ref{regmap} lists the assembler mnemonics for the {\tt x} and {\tt f} registers and their role in the standard calling convention. \vspace{0.2in} \begin{table*}[htbp] \begin{center} \begin{tabular}{|l|l|l|l|} \hline Register & ABI Name & Description & Saver \\ \hline \tt x0 & \tt zero & Hard-wired zero & --- \\ \tt x1 & \tt ra & Return address & Caller \\ \tt x2 & \tt sp & Stack pointer & Callee \\ \tt x3 & \tt gp & Global pointer & --- \\ \tt x4 & \tt tp & Thread pointer & --- \\ \tt x5 & {\tt t0} & Temporary/alternate link register& Caller \\ {\tt x6}--{\tt 7} & {\tt t1}--{\tt 2} & Temporaries & Caller \\ \tt x8 & {\tt s0}/\tt fp & Saved register/frame pointer & Callee \\ \tt x9 & {\tt s1} & Saved register & Callee \\ {\tt x10}--{\tt 11} & {\tt a0}--{\tt 1} & Function arguments/return values & Caller \\ {\tt x12}--{\tt 17} & {\tt a2}--{\tt 7} & Function arguments & Caller \\ {\tt x18}--{\tt 27} & {\tt s2}--{\tt 11} & Saved registers & Callee \\ {\tt x28}--{\tt 31} & {\tt t3}--{\tt 6} & Temporaries & Caller \\ \hline {\tt f0}--{\tt 7} & {\tt ft0}--{\tt 7} & FP temporaries & Caller \\ {\tt f8}--{\tt 9} & {\tt fs0}--{\tt 1} & FP saved registers & Callee \\ {\tt f10}--{\tt 11} & {\tt fa0}--{\tt 1} & FP arguments/return values & Caller \\ {\tt f12}--{\tt 17} & {\tt fa2}--{\tt 7} & FP arguments & Caller \\ {\tt f18}--{\tt 27} & {\tt fs2}--{\tt 11} & FP saved registers & Callee \\ {\tt f28}--{\tt 31} & {\tt ft8}--{\tt 11} & FP temporaries & Caller \\ \hline \end{tabular} \end{center} \caption{Assembler mnemonics for RISC-V integer and floating-point registers.} \label{regmap} \end{table*} Tables~\ref{pseudos} and \ref{csr-pseudos} contain a listing of standard RISC-V pseudoinstructions. 
\begin{table}[h] \begin{small} \begin{center} \begin{tabular}{l l l} Pseudoinstruction & Base Instruction(s) & Meaning \\ \hline \multirow{2}{*}{\tt la rd, symbol} & {\tt auipc rd, symbol[31:12]} & \multirow{2}{*}{Load address} \\ & {\tt addi rd, rd, symbol[11:0]} \\ \multirow{2}{*}{\tt l\{b|h|w|d\} rd, symbol} & {\tt auipc rd, symbol[31:12]} & \multirow{2}{*}{Load global} \\ & {\tt l\{b|h|w|d\} rd, symbol[11:0](rd)} \\ \multirow{2}{*}{\tt s\{b|h|w|d\} rd, symbol, rt} & {\tt auipc rt, symbol[31:12]} & \multirow{2}{*}{Store global} \\ & {\tt s\{b|h|w|d\} rd, symbol[11:0](rt)} \\ \multirow{2}{*}{\tt fl\{w|d\} rd, symbol, rt} & {\tt auipc rt, symbol[31:12]} & \multirow{2}{*}{Floating-point load global} \\ & {\tt fl\{w|d\} rd, symbol[11:0](rt)} \\ \multirow{2}{*}{\tt fs\{w|d\} rd, symbol, rt} & {\tt auipc rt, symbol[31:12]} & \multirow{2}{*}{Floating-point store global} \\ & {\tt fs\{w|d\} rd, symbol[11:0](rt)} \\ \hline {\tt nop} & {\tt addi x0, x0, 0} & No operation \\ {\tt li rd, immediate} & {\em Myriad sequences} & Load immediate \\ {\tt mv rd, rs} & {\tt addi rd, rs, 0} & Copy register \\ {\tt not rd, rs} & {\tt xori rd, rs, -1} & One's complement \\ {\tt neg rd, rs} & {\tt sub rd, x0, rs} & Two's complement \\ {\tt negw rd, rs} & {\tt subw rd, x0, rs} & Two's complement word \\ {\tt sext.w rd, rs} & {\tt addiw rd, rs, 0} & Sign extend word \\ {\tt seqz rd, rs} & {\tt sltiu rd, rs, 1} & Set if $=$ zero \\ {\tt snez rd, rs} & {\tt sltu rd, x0, rs} & Set if $\neq$ zero \\ {\tt sltz rd, rs} & {\tt slt rd, rs, x0} & Set if $<$ zero \\ {\tt sgtz rd, rs} & {\tt slt rd, x0, rs} & Set if $>$ zero \\ \hline {\tt fmv.s rd, rs} & {\tt fsgnj.s rd, rs, rs} & Copy single-precision register \\ {\tt fabs.s rd, rs} & {\tt fsgnjx.s rd, rs, rs} & Single-precision absolute value \\ {\tt fneg.s rd, rs} & {\tt fsgnjn.s rd, rs, rs} & Single-precision negate \\ {\tt fmv.d rd, rs} & {\tt fsgnj.d rd, rs, rs} & Copy double-precision register \\ {\tt fabs.d rd, rs} & {\tt fsgnjx.d rd, rs, rs} & Double-precision absolute value \\ {\tt fneg.d rd, rs} & {\tt fsgnjn.d rd, rs, rs} & Double-precision negate \\ \hline {\tt beqz rs, offset} & {\tt beq rs, x0, offset} & Branch if $=$ zero \\ {\tt bnez rs, offset} & {\tt bne rs, x0, offset} & Branch if $\neq$ zero \\ {\tt blez rs, offset} & {\tt bge x0, rs, offset} & Branch if $\leq$ zero \\ {\tt bgez rs, offset} & {\tt bge rs, x0, offset} & Branch if $\geq$ zero \\ {\tt bltz rs, offset} & {\tt blt rs, x0, offset} & Branch if $<$ zero \\ {\tt bgtz rs, offset} & {\tt blt x0, rs, offset} & Branch if $>$ zero \\ \hline {\tt bgt rs, rt, offset} & {\tt blt rt, rs, offset} & Branch if $>$ \\ {\tt ble rs, rt, offset} & {\tt bge rt, rs, offset} & Branch if $\leq$ \\ {\tt bgtu rs, rt, offset} & {\tt bltu rt, rs, offset} & Branch if $>$, unsigned \\ {\tt bleu rs, rt, offset} & {\tt bgeu rt, rs, offset} & Branch if $\leq$, unsigned \\ \hline {\tt j offset} & {\tt jal x0, offset} & Jump \\ {\tt jal offset} & {\tt jal x1, offset} & Jump and link \\ {\tt jr rs} & {\tt jalr x0, rs, 0} & Jump register \\ {\tt jalr rs} & {\tt jalr x1, rs, 0} & Jump and link register \\ {\tt ret} & {\tt jalr x0, x1, 0} & Return from subroutine \\ \multirow{2}{*}{\tt call offset} & {\tt auipc x1, offset[31:12]} & \multirow{2}{*}{Call far-away subroutine} \\ & {\tt jalr x1, x1, offset[11:0]} \\ \multirow{2}{*}{\tt tail offset} & {\tt auipc x6, offset[31:12]} & \multirow{2}{*}{Tail call far-away subroutine} \\ & {\tt jalr x0, x6, offset[11:0]} & \\ \hline {\tt fence} & {\tt fence iorw, iorw} & Fence on all 
memory and I/O \\ \hline \end{tabular} \end{center} \end{small} \caption{RISC-V pseudoinstructions.} \label{pseudos} \end{table} \begin{table}[h] \begin{small} \begin{center} \begin{tabular}{l l l} Pseudoinstruction & Base Instruction & Meaning \\ \hline {\tt rdinstret[h] rd} & {\tt csrrs rd, instret[h], x0} & Read instructions-retired counter \\ {\tt rdcycle[h] rd} & {\tt csrrs rd, cycle[h], x0} & Read cycle counter \\ {\tt rdtime[h] rd} & {\tt csrrs rd, time[h], x0} & Read real-time clock \\ \hline {\tt csrr rd, csr} & {\tt csrrs rd, csr, x0} & Read CSR \\ {\tt csrw csr, rs} & {\tt csrrw x0, csr, rs} & Write CSR \\ {\tt csrs csr, rs} & {\tt csrrs x0, csr, rs} & Set bits in CSR \\ {\tt csrc csr, rs} & {\tt csrrc x0, csr, rs} & Clear bits in CSR \\ \hline {\tt csrwi csr, imm} & {\tt csrrwi x0, csr, imm} & Write CSR, immediate \\ {\tt csrsi csr, imm} & {\tt csrrsi x0, csr, imm} & Set bits in CSR, immediate \\ {\tt csrci csr, imm} & {\tt csrrci x0, csr, imm} & Clear bits in CSR, immediate \\ \hline {\tt frcsr rd} & {\tt csrrs rd, fcsr, x0} & Read FP control/status register \\ {\tt fscsr rd, rs} & {\tt csrrw rd, fcsr, rs} & Swap FP control/status register \\ {\tt fscsr rs} & {\tt csrrw x0, fcsr, rs} & Write FP control/status register \\ \hline {\tt frrm rd} & {\tt csrrs rd, frm, x0} & Read FP rounding mode \\ {\tt fsrm rd, rs} & {\tt csrrw rd, frm, rs} & Swap FP rounding mode \\ {\tt fsrm rs} & {\tt csrrw x0, frm, rs} & Write FP rounding mode \\ \hline {\tt frflags rd} & {\tt csrrs rd, fflags, x0} & Read FP exception flags \\ {\tt fsflags rd, rs} & {\tt csrrw rd, fflags, rs} & Swap FP exception flags \\ {\tt fsflags rs} & {\tt csrrw x0, fflags, rs} & Write FP exception flags \\ \hline \end{tabular} \end{center} \end{small} \caption{Pseudoinstructions for accessing control and status registers.} \label{csr-pseudos} \end{table}
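The single-instruction entries in Tables~\ref{pseudos} and \ref{csr-pseudos} are purely textual rewrites of one instruction into another. The Python sketch below illustrates that mapping only; it is not part of the specification, it covers just a handful of entries, and a real assembler must additionally handle the multi-instruction expansions (e.g., {\tt la}, {\tt call}), out-of-range immediates, relocations, and symbol resolution.

\begin{verbatim}
# Illustrative expansion of a few one-to-one RISC-V pseudoinstructions.
PSEUDO = {
    "nop":  lambda ops: "addi x0, x0, 0",
    "mv":   lambda ops: f"addi {ops[0]}, {ops[1]}, 0",
    "not":  lambda ops: f"xori {ops[0]}, {ops[1]}, -1",
    "neg":  lambda ops: f"sub {ops[0]}, x0, {ops[1]}",
    "seqz": lambda ops: f"sltiu {ops[0]}, {ops[1]}, 1",
    "beqz": lambda ops: f"beq {ops[0]}, x0, {ops[1]}",
    "j":    lambda ops: f"jal x0, {ops[0]}",
    "ret":  lambda ops: "jalr x0, x1, 0",
    "csrr": lambda ops: f"csrrs {ops[0]}, {ops[1]}, x0",
}

def expand(line):
    """Rewrite a pseudoinstruction into its base instruction (if known)."""
    mnemonic, _, rest = line.strip().partition(" ")
    ops = [op.strip() for op in rest.split(",")] if rest else []
    handler = PSEUDO.get(mnemonic)
    return handler(ops) if handler else line.strip()

assert expand("mv a0, a1") == "addi a0, a1, 0"
assert expand("ret") == "jalr x0, x1, 0"
assert expand("csrr t0, mstatus") == "csrrs t0, mstatus, x0"
\end{verbatim}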
{ "alphanum_fraction": 0.5564148073, "avg_line_length": 51.5555555556, "ext": "tex", "hexsha": "1f0724d213597444d049368fff80a8dfd9516c84", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1b2322007b269c9557a3959d8065e418f605cf51", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "bonzini/riscv-isa-manual", "max_forks_repo_path": "src/assembly.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1b2322007b269c9557a3959d8065e418f605cf51", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "bonzini/riscv-isa-manual", "max_issues_repo_path": "src/assembly.tex", "max_line_length": 127, "max_stars_count": null, "max_stars_repo_head_hexsha": "1b2322007b269c9557a3959d8065e418f605cf51", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "bonzini/riscv-isa-manual", "max_stars_repo_path": "src/assembly.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2821, "size": 7888 }
\chapter{Using the Library} \newcommand{\fenccontext}{fenc\_context} \newcommand{\fencglobalparams}{fenc\_global\_params} \newcommand{\fencerror}{FENC\_ERROR} \newcommand{\fencerrornone}{FENC\_ERROR\_NONE} \newcommand{\fencschemeLSW}{FENC\_SCHEME\_LSW} \newcommand{\libfencinitialize}{libfenc\_init} \newcommand{\libfencerrortostring}{libfenc\_error\_to\_string} \newcommand{\libfenccreatecontext}{libfenc\_create\_context} \newcommand{\libfencgenparams}{libfenc\_gen\_params} This section provides a brief tutorial on {\libraryshort}, tailored for the developers who wish to use the library in their applications. We first describe the process of building and installing the library, then give some examples of how the library is used in practice. For a full description of the library API, see chapter~\ref{chap:api}. \section{Building the Library} \section{Using {\libraryshort} in an application} \subsection{Compiling the application} The build process above produces the static library {\libraryunixlib} which should be located in a known location in your system. \subsection{A brief tutorial} \label{sec:tutorial} The basic unit of the {\libraryname} is the {\em encryption context}. This is an abstract data structure responsible for storing the scheme type as well as the public and/or secret parameters associated with the scheme. An application may instantiate multiple encryption contexts if desired, running the same or different encryption schemes. Most API routines return an error code of type {\tt \fencerror}. Always be sure to check that the returned value is {\fencerrornone}, or the library may not operate correctly. Error codes can be converted into strings using the {\tt \libfencerrortostring()} call. \begin{enumerate} \item Initialize the {\libraryshort} library. An application must execute this routine before conducting any operations with the library: ~~~~ {\tt err\_code = \libfencinitialize();} \item Next, create an encryption context for a given scheme type. The caller is responsible for allocating the {\fenccontext} structure which is passed to this routine. A list of encryption schemes is provided in \S\ref{sec:schemes}: ~~~~ {\tt {\fenccontext} context;} ~~~~ {\tt err\_code = \libfenccreatecontext(\&context, {\fencschemeLSW});} \item The next step is to provision the scheme with a set of parameters. For most schemes, only public parameters are needed for encryption. Secret parameters will also be needed if the application wishes to extract decryption keys. Keys may be loaded from an external source, or they can be generated from scratch. To generate both the public and secret parameters, use the {\tt \libfencgenparams} call as in the following snippet: ~~~~ {\tt {\fencglobalparams} global\_params;} ~~~~ {\tt err\_code = \libfencgenparams(\&context, \&global\_params);} \end{enumerate} \medskip \noindent {\bf Library Initialization.}
{ "alphanum_fraction": 0.7786493861, "avg_line_length": 53.3090909091, "ext": "tex", "hexsha": "da20ad2ec386130bbfa2df8c73b88135dca76b88", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2018-01-11T09:43:38.000Z", "max_forks_repo_forks_event_min_datetime": "2016-05-25T04:48:48.000Z", "max_forks_repo_head_hexsha": "04498be422b76fba70efbf7d7b4d154fdfb54d65", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "Gurut/libfenc", "max_forks_repo_path": "doc/usage.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "04498be422b76fba70efbf7d7b4d154fdfb54d65", "max_issues_repo_issues_event_max_datetime": "2019-02-06T19:11:16.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-05T14:50:42.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "Gurut/libfenc", "max_issues_repo_path": "doc/usage.tex", "max_line_length": 344, "max_stars_count": 2, "max_stars_repo_head_hexsha": "04498be422b76fba70efbf7d7b4d154fdfb54d65", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "Gurut/libfenc", "max_stars_repo_path": "doc/usage.tex", "max_stars_repo_stars_event_max_datetime": "2019-12-24T08:22:41.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-22T14:54:02.000Z", "num_tokens": 705, "size": 2932 }
%!TEX root = ../../thesis.tex \section{Advantages of rich-text editing without editing APIs} With a pure JavaScript implementation, many of the problems of HTML editing APIs can be solved. The issues discussed in \refsection{sec:disadvantages_of_html_editing_apis} will be addressed hereinafter. \subsection{Generated output and flawed API} \label{subsec:adv_flawed_api} % an editor can be implemented to allow developers using the editor to \textit{choose} the output The generated markup, if the editor is implemented through JavaScript and DOM Level 1 methods, can be chosen by the implementation of the editor. Furthermore, the decision about the generated output can be left to the developers working with the editor. Section \refsection{sec:disadvantages_of_html_editing_apis} describes the inconsistent output across various browsers as well as the restrictions of the API design of \texttt{execCommand}. Both issues can be addressed by offering a method to wrap the current selection in arbitrary markup. jQuery's \texttt{htmlString} implementation\footnote{\url{http://api.jquery.com/Types/\#htmlString}, last checked on 07/19/2015} demonstrates a simple and stable way to define markup as a string and pass it as an argument to JavaScript methods. A sample call could read as follows. \begin{lstlisting}[language=JavaScript, caption=Example calls to format text, label=lst:format-examples-api-alternative] // Mimicking document.execCommand('italic', false, null); editor.format('<em />'); // Added functionality editor.format('<span class="highlight" />'); \end{lstlisting} This will allow developers to choose which markup should be generated for italicizing text. The markup will be consistent within the scope of their project. Since the DOM manipulation is implemented in JavaScript and not by high-level browser methods, this will also ensure the same output across all systems and solve cross-browser issues. The second example function call in listing \ref{lst:format-examples-api-alternative} demonstrates that custom formatting, fitting the needs of a specific project, can be achieved with the same API, giving developers wider functionality. \refsubsec{subsec:disadv_mimic_native} discusses the disadvantage that, when not using HTML editing APIs, native components like the caret or the text input must be implemented with JavaScript, as they are not provided otherwise. On the flip side, this allows full control over these components, which can be exposed via an API to other developers. %As discussed in section AB, many components native to text editing have to be implemented in JavaScript. This requires some effort but also enables full control and direct over it. Ultimately, these components can be exposed in an API to other developers, enabling options for developing editors, not offered by HTML editing APIs. An example API will be discussed in sectionXImplementationn. \subsection{Restrictions} When an editor is implemented in pure JavaScript, the limitations imposed by the HTML editing APIs do not apply. Anything that can be implemented in a browser environment can also be implemented as part of a rich-text editor. The Google document editor demonstrates rich functionality that would not be possible with an implementation based on HTML editing APIs.%including layouting tools or floating images. Both are features are hardly possible in an editing mode enabled environment\footnote{\url{http://googledrive.blogspot.fr/2010/05/whats-different-about-new-google-docs.html}, last checked on 07/19/2015}.
%SectionABC discusses some use cases exploring the possibilites of rich-text editing implemented this way. % etherpad, markdown etc \subsection{Clipboard} Without a native text input or an element switched to editing mode with HTML editing APIs, clipboard functionality is not available. Users cannot paste contents from the clipboard unless one of these elements is focused. However, chapter \refchapter{ch:impl} demonstrates a way that not only allows clipboard support, but also grants full control over the pasted contents. % In a pure JavaScript environment, clipboard functionality seems to be harder to implement than with the use of editing mode. Apart from filtering the input, pasting is natively available---via keyboard shortcuts as well as the context menu. However, as demonstrated in section IMPLEMENTATION, it is possible to enable native pasting---via keyboard and context menu---even without editing mode. Furthermore, it is possible to filter the pasted contents before inserting them in the editor. \subsection{Bugs} %No software can be guaranteed to be bug free. However, By refraining from using HTML editing APIs, all of their numerous bugs can be avoided. An implementation can aim to minimize interaction with browser APIs, especially unstable or experimental interfaces. DOM manipulation APIs have been standardized for more than 15 years and tend to be well-proven and stable. Bugs that do occur will mostly be part of the library itself and can be fixed rather than merely worked around. Bug fixes can be rolled out to users as soon as they are available. This will free development from being dependent on browser development, update cycles, and user adoption. % minimize the number of ''unfixable'' bugs and ultimately % This means that, other than with HTML editing APIs, bugs that occur are part of the library can be fixed and not only worked around. Furthermore, with minimizing browser interaction, bugs probably occur indeopendently of the browser used, which makes finding and fixing bugs easier. %By refraining from using HTML editing APIs developing an editor will be independent from all of the APIs' bugs. Going a step further, % It puts the contenteditable implementation in the hand of JavaScript developers. We no longer have to wait for browsers to fix issues *and conform each other* and thus can be faster, at least possibly, than browsers are. % An implementation without editing APIs cannot guarantee to be bug free, but not using these APIS, using only well-proven APIs and minimizing interaction with browser APIs will put the development and the fixing of bugs into the hands of JavaScript delopers. In other words, we \textit{can} actually \textit{fix} bugs and do not have to work around them and wait for browsers and browser usage to change (both takes long time).
{ "alphanum_fraction": 0.8087174663, "avg_line_length": 130.1632653061, "ext": "tex", "hexsha": "eee98f608974d5e9820604812f44c4c83608cc0a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "LukasBombach/old-type-js", "max_forks_repo_path": "Report/thesis/content/browser/no_apis_advantages.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "LukasBombach/old-type-js", "max_issues_repo_path": "Report/thesis/content/browser/no_apis_advantages.tex", "max_line_length": 811, "max_stars_count": null, "max_stars_repo_head_hexsha": "8d641664b5325512474ab662417c555ff025ff41", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "LukasBombach/old-type-js", "max_stars_repo_path": "Report/thesis/content/browser/no_apis_advantages.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1297, "size": 6378 }
% 西风吹老洞庭波,一夜湘君白发多。 醉后不知天在水,满船清梦压星河 \documentclass[fleqn,usenatbib,useAMS]{mnras} \usepackage{float} \usepackage{natbib} \usepackage{hhline} \usepackage{multirow} \usepackage{graphicx} \usepackage{ae,aecompl} \usepackage{deluxetable} \usepackage{amssymb, amsmath} \usepackage[usenames, dvipsnames]{xcolor} \usepackage{xcolor,colortbl} \usepackage{CJKutf8} \usepackage{fontawesome5} \usepackage{hyperref} \usepackage{xurl} \definecolor{orcidlogocol}{HTML}{A6CE39} \hypersetup{colorlinks=true, citecolor=MidnightBlue, linkcolor=MidnightBlue, filecolor=magenta, urlcolor=cyan} \urlstyle{same} \DeclareGraphicsExtensions{.pdf,.png,.jpg} \input{define.tex} \title[Outer Galaxy Mass as a Halo Mass Proxy]{ The Outer Stellar Mass of Massive Galaxies: A Simple Tracer of Halo Mass with Scatter Comparable to Richness and Reduced Projection Effects} \author[S. Huang et al.]{ Song Huang (黄崧)\ \href{https://orcid.org/0000-0003-1385-7591}{\textcolor{orcidlogocol}{\faOrcid}}$^{1,2}$\thanks{E-mail: [email protected] (SH)}, Alexie Leauthaud\ \href{https://orcid.org/0000-0002-3677-3617}{\textcolor{orcidlogocol}{\faOrcid}}$^{2}$, Christopher Bradshaw\ \href{https://orcid.org/0000-0003-0833-573X}{\textcolor{orcidlogocol}{\faOrcid}}$^{2}$, \newauthor Andrew Hearin\ \href{https://orcid.org/0000-0003-2219-6852}{\textcolor{orcidlogocol}{\faOrcid}}$^{3}$, Peter Behroozi\ \href{https://orcid.org/0000-0002-2517-6446}{\textcolor{orcidlogocol}{\faOrcid}}$^{4}$, Johannes Lange\ \href{https://orcid.org/0000-0002-2450-1366}{\textcolor{orcidlogocol}{\faOrcid}}$^{2, 5}$, Jenny Greene\ \href{https://orcid.org/0000-0002-5612-3427}{\textcolor{orcidlogocol}{\faOrcid}}$^{1}$, \newauthor Joseph DeRose\ \href{https://orcid.org/0000-0002-0728-0960}{\textcolor{orcidlogocol}{\faOrcid}}$^{6}$, Joshua S. Speagle (沈佳士)\ \href{https://orcid.org/0000-0002-5065-9896}{\textcolor{orcidlogocol}{\faOrcid}}$^{7, 8, 9}$, Enia Xhakaj$^{2}$\\ $^{1}$Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08540, USA \\ $^{2}$Department of Astronomy and Astrophysics, University of California Santa Cruz, 1156 High St., Santa Cruz, CA 95064, USA\\ $^{3}$Argonne National Laboratory, Argonne, IL 60439, USA\\ $^{4}$Department of Astronomy and Steward Observatory, University of Arizona, Tucson, AZ 85721, USA\\ $^{5}$Kavli Institute for Particle Astrophysics and Cosmology and Department of Physics, Stanford University, CA 94305, USA\\ $^{6}$Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 93720, USA\\ $^{7}$Department of Statistical Sciences, University of Toronto, Toronto, M5S 3G3, Canada\\ $^{8}$David A. Dunlap Department of Astronomy \& Astrophysics, University of Toronto, Toronto, M5S 3H4, Canada\\ $^{9}$Dunlap Institute for Astronomy \& Astrophysics, University of Toronto, Toronto, M5S 3H4, Canada } \date{Accepted 2021. Received 2021; in original form 2021 Sep 1} \pubyear{2021} \begin{document} \begin{CJK*}{UTF8}{gbsn} \label{firstpage} \pagerange{\pageref{firstpage}--\pageref{lastpage}} \maketitle \begin{abstract} Using the weak gravitational lensing data from the Hyper Suprime-Cam Subaru Strategic Program (HSC survey), we study the potential of different stellar mass estimates in tracing halo mass. We consider galaxies with \logms{}$>11.5$ at $0.2 < z < 0.5$ with carefully measured light profiles, and clusters from the \redm{} and \camira{} richness-based algorithms. 
We devise a method (the ``\topn{} test'') to evaluate the scatter in the halo mass-observable relation for different tracers, and to inter-compare halo mass proxies in four number density bins using stacked galaxy-galaxy lensing profiles. This test reveals three key findings. Stellar masses based on \cmodel{} photometry and aperture luminosity within $R<$30 kpc are poor proxies of halo mass. In contrast, the stellar mass of the outer envelope is an excellent halo mass proxy. The stellar mass within $R=[50,100]$ kpc, \menve{50}{100}, has performance comparable to the state-of-the-art richness-based cluster finders at \logmvir{}$\gtrsim 14.0$ and could be a better halo mass tracer at lower halo masses. Finally, using N-body simulations, we find that the lensing profiles of massive halos selected by \menve{50}{100} are consistent with the expectation for a sample without projection or mis-centering effects. Richness-selected clusters, on the other hand, display an excess at $R\sim 1$ Mpc in their lensing profiles, which may suggest a more significant impact from selection biases. These results suggest that \mstar{}-based tracers have distinct advantages in identifying massive halos, which could open up new avenues for cluster cosmology. The codes and data used in this work can be found here: \href{https://github.com/dr-guangtou/jianbing}{\faGithub} \end{abstract} \begin{keywords} cosmology: observations -- gravitational lensing: weak -- galaxies: structure -- galaxies: cluster: general -- galaxies: haloes \end{keywords} \section{Introduction} \label{sec:intro} With the rapid development of multi-wavelength sky surveys, galaxy clusters have become increasingly important for studies of cosmology and the galaxy-halo connection. As the rare highest density peaks of the matter density distribution, galaxy clusters have long been recognised as powerful probes of the mean cosmic matter density ($\Omega_{\rm m}$), the amplitude of the power spectrum ($\sigma_{8}$), and the cosmic expansion (e.g., \citealt{Evrard1989, Peebles1989, White1993, Viana1996, Wang1998, Wagoner2021}; see \citealt{Allen2011, Kravtsov2012, Weinberg2013} for recent reviews). The abundance, spatial distribution, and total mass distributions of galaxy clusters encode valuable cosmological information (see, e.g., \citealt{Haiman2001, Holder2001, Vikhlinin2009b, Rozo2010, Benson2013, Mantz2014, Bocquet2019, Abbott2020, To2021a, To2021b, Wu2021}). Galaxy clusters are also promising laboratories for studying the boundaries of dark matter halos (e.g., \citealt{Diemer2014, More2015b, More2016, Chang2018, Shin2019, Zurcher2019, Tomooka2020, Xhakaj2020})\footnote{Please see \href{http://www.benediktdiemer.com/research/splashback/}{Benedikt Diemer's webpage} for a more complete list of references on this topic.}, and for investigating halo assembly bias (e.g., \citealt{Tinker2012, Miyatake2016, Zu2017}). To achieve these goals, a reliable ``cluster finder'' that can identify galaxy clusters is fundamental. In addition to the identification of clusters, it is also critical to be able to measure the halo masses of clusters, as well as to calibrate halo mass--observable scaling relations.
Thanks to the advent of large optical surveys such as the Sloan Digital Sky Survey (SDSS, \citealt{York2000, SDSSDR7, SDSSDR16})\footnote{\url{https://www.sdss.org/}}, the Dark Energy Survey (DES, \citealt{DES2016, Abbott2018, DES2021})\footnote{\url{https://www.darkenergysurvey.org/}}, and the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP, \citealt{Miyazaki2012, HSC-SSP, HSC-DR1, HSC-DR2})\footnote{\url{https://hsc.mtk.nao.ac.jp/ssp/}}, optical cluster finders are widely used to construct cluster samples (e.g., \citealt{Kepner1999, GladdersYee2000, Koester2007, Hao2010, Wen2012, Rykoff2014, Oguri2018, Aguena2021, Wen2021, Zou2021}), and weak gravitational lensing is also regarded as the most promising approach for calibrating mass-observable relations (e.g., \citealt{Leauthaud2010, Becker2011, vonderLinden2014, Applegate2014, Applegate2016, Okabe2016, Grandis2019}; also see \citealt{Umetsu2020b} for a recent review). Red-sequence based methods such as \redm{} (e.g., \citealt{Rykoff2014, Rozo2014, Rozo2015a, Rozo2015b, Rykoff2016}) and \camira{} (e.g., \citealt{Oguri2014, Oguri2018}) are among the most widely used optical cluster finders in the literature. While red-sequence cluster finders enjoy many successes, these methods are subject to numerous potential sources of systematic error, such as anisotropic selection biases (including both projection bias and orientation bias; e.g., \citealt{NohCohn2012, Dietrich2014, Osato2018, Herbonnet2019}) and mis-centering (e.g., \citealt{Saro2015, Zhang2019b}). Projection effects arising from structures surrounding the clusters in the line-of-sight direction raise a number of especially challenging difficulties (e.g., \citealt{Cohn2007, Erickson2011, Farahi2016, Zu2017, Busch2017, Costanzi2019, Sunayama2019, Sunayama2020}). In particular, projection effects can significantly complicate the calibration of the mass-richness relation, which in turn impacts cosmological inference (e.g., \citealt{Erickson2011, Costanzi2019, Sunayama2020, Wu2021}). In \citet{DES2020}, the authors conclude that projection effects alone can lead to a $\sim 20$\% over-estimate of halo mass in a given richness bin, and could lead to a ``tension'' with a Planck 2018 cosmology (e.g., \citealt{PLANCK2020}). However, it is difficult to precisely evaluate the impact of projection effects on red-sequence cluster finders, because such a quantification requires realistic mock catalogues of cluster galaxies with red-sequences that are consistent with observations, which is not an easy task (e.g., \citealt{DeRose2019, Korytov2019}). In this context, it is of great interest to study potential alternative methods that might suffer less from projection effects. One example is to use the light from massive central galaxies (or the brightest cluster galaxy, BCG). The stellar mass of the BCG follows a well-established stellar-halo mass relation (SHMR, e.g., \citealt{Leauthaud2012, Tinker2017, Kravtsov2018}; also see \citealt{Wechsler2018} for a recent review) with moderate scatter at the high-mass end (e.g., \citealt{More2009, Leauthaud2012, Reddick2013, Zu2015, Lehmann2017, Kravtsov2018}). Historically, BCG luminosity or stellar mass has not been considered a competitive halo mass proxy, but optical surveys have also struggled to accurately measure BCG total luminosity (e.g., \citealt{Bernardi2013, Huang2018b}). Recently, deep imaging surveys have shown that total BCG luminosity may correlate well with halo mass (e.g., \citealt{Huang2018c, SampaioSantos2021}).
In \citet{Huang2020}, for example, the authors showed that a simple phenomenological model based on the stellar masses of BCGs measured within two apertures further reduces the scatter in the halo mass -- observable relation. In addition, recent work has also highlighted the connection between the diffuse envelope around a BCG (often referred to as the Intra-Cluster Light, or ICL) and dark matter halo mass (e.g., \citealt{Montes2018, Montes2019, Zhang2019b, Furnell2021}). In this paper, we use data from the HSC survey to quantify the potential of using BCG light to identify massive clusters. We design a so-called \topn{} test to evaluate their relative performance with respect to the red-sequence methods. The \topn{} test compares the stacked galaxy--galaxy lensing profiles (the excess surface density profiles, or the \dsigma{} profiles) of ``clusters'' selected by different halo mass proxies in fixed number density bins (e.g., \citealt{Reyes2008}). We model these lensing signals using cosmological simulations and evaluate the scatter of the halo mass-observable relations. Section \ref{sec:method} explains the philosophy behind the \topn{} test and the methodology for estimating the scatter in mass-observable relations. Section \ref{sec:data} presents the data and Section \ref{sec:measure} presents key measurements, including different \mstar{} measurements based on 1-D mass profiles and galaxy-galaxy lensing profiles. Section \ref{sec:proxies} presents the different proxies that we test. Our results are presented in Section \ref{sec:result} and discussed in Section \ref{sec:discussion}. Finally, we summarise and conclude in Section \ref{sec:summary}. We assume $H_0$ = 70~km~s$^{-1}$ Mpc$^{-1}$, ${\Omega}_{\rm m}=0.3$, and ${\Omega}_{\rm \Lambda}=0.7$. Stellar mass (\mstar{}) is derived using a Chabrier Initial Mass Function (IMF; \citealt{Chabrier2003}). We adopt $M_{\rm vir}$ for dark matter halo mass as defined in \citealt{BryanNorman1998}. We use $\mathcal{M}\equiv \log_{10} (M_{\rm vir}/M_{\odot})$ and $\mathcal{O}\equiv \log_{10} \rm Observable$ to indicate the ten-base logarithms of halo mass and observables. We also use \sigmvir{}$\equiv \sigma_{\log_{10} M_{\rm vir}}$ for the scatter of halo mass and \sigms{}$\equiv \sigma_{\log_{10} M_{\star}}$ for the scatter of stellar mass. \begin{figure*} \includegraphics[width=\textwidth]{figure/fig_1} \caption{ \textbf{Left:} coloured hexbins and corresponding lines illustrate three mock \mvir{}--observable relations with different slope and scatter values (grey: $\alpha=1.2$ and \scatterObsSymMhalo{}$=0.2$; orange: $\alpha=0.8$ and \scatterObsSymMhalo{}$=0.4$; light green: $\alpha=0.4$ and \scatterObsSymMhalo{}$=0.2$). Hexbins with darker colours highlight the top 5000 objects for each observable. The right panel shows the number density distributions of these observables and highlights the \topn{} selections using shaded regions with corresponding colours. The bottom panel shows the \mvir{} distributions of the three \topn{} bins. We use short vertical bars to indicate the mean \mvir{} values. \textbf{Right:} stacked \rdsigma{} profiles of the three \topn{} samples using the \mdpl2{} simulation. The bottom panel shows the ratios of lensing profiles using the sample with the steepest slope (blue) as a reference. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig1.ipynb}{\faGithub}. 
} \label{fig:theory_1} \end{figure*} \section{Methodology and Modelling Framework} \label{sec:method} This section explains the basic idea of the \topn{} test and presents our modelling framework for estimating the scatter in \mvir{}-observable relations. \subsection{Philosophy of the \texorpdfstring{\topn{}}{TopN} Test} \label{sec:topn_intro} Cosmological simulations permit a precise prediction for how \dsigma{} of dark matter halos scales with true halo mass, \mvir{}. Since simulated halos can easily be rank-ordered by their mass, it is a trivial matter to use a cosmological simulation to generate a prediction for the \dsigma{} profiles of samples of dark matter halos that have been stacked according to number density. The philosophy behind the \topn{} test is to capitalise upon this predictive power. When analysing observational data, of course one does not have direct access to true halo mass, and so one must instead rely upon an observational proxy. In the analogous manner as can be done for simulated halos, observed galaxy clusters can be arranged into stacked samples according to any particular halo mass proxy, and so it is equally straightforward to measure the \dsigma{} profile of clusters as a function of the number density defined by the choice of proxy. When the halo mass proxy presents a scaling relation with \mvir{} that has low scatter and a steep slope, then the associated stacked samples will exhibit a lensing amplitude that scales steeply with number density, and the stacks will furthermore exhibit \dsigma{} profiles whose shape closely resembles the profile of \mvir{}-ranked stacks of simulated halos of the corresponding number density. For example, by comparing the stacked \dsigma{} profile of the top 100 most massive galaxies with the \dsigma{} profile of the top 100 richest clusters selected in the same survey volume, one can compare which of these proxies is more ``\mvir{}-like''. In this manner, the \topn{} test compares $\Delta\Sigma$ for cluster samples defined in bins of fixed number density, and uses such comparisons to inform the optimal choice of halo mass proxy. Figure \ref{fig:theory_1} illustrates the main idea of the \topn{} test using halos from the MultiDark Planck 2 (\mdpl2{})\footnote{\url{https://www.cosmosim.org/cms/simulations/mdpl2/}} simulation. In this exercise, each halo is characterised by its true halo mass, \haloSym{}$\equiv \log_{10}M_{\rm Vir},$ and additionally by three hypothetical observables, \obsSym{}$\equiv \log_{10}{(\rm Observable)}$. We assume that each \obsSym{} follows a $\log$-linear scaling relation with \mvir{} that is characterised by a value for the slope, $\alpha$, and by a level of Gaussian scatter in \obsSym{} at fixed \haloSym{}, \scatterObsSymMhalo{} (e.g., \citealt{Lieu2016}, \citealt{Ziparo2016}, \citealt{Evrard2014, Farahi2018}). We then select the top $N=5000$ objects using \obsSym{} to rank-order the clusters. The value of $N$ translates into a fixed volume number density threshold shown on the right sub-panel using the number density distributions of these observables. When comparing the \mvir{} distributions of the \topn{}-selected samples (bottom panel), the halo mass proxies with smaller $\alpha$ and/or larger \scatterObsSymMhalo{} result in \mvir{} distributions with larger \scatterMhaloObsSym{} and lower mean \mvir{}. This selection at fixed number density (\topn{} selection) yields a \mvir{} distribution that reflects the properties of the underlying \mvir{}-observable relation. 
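To make this construction concrete, the short sketch below (in \texttt{Python}) performs a \topn{} selection on a toy halo catalogue using mock observables drawn from $\log$-linear relations with the three $(\alpha,\ \scatterObsSymMhalo{})$ combinations quoted in the caption of Figure \ref{fig:theory_1}. The toy exponential mass function, the pivot mass, and the sample size are illustrative assumptions only; in the actual test the halo masses come from the \mdpl2{} simulation.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(42)

# Toy halo catalogue: log10(Mvir) drawn from a steeply declining
# exponential mass function (illustrative only; the real test uses
# halo masses from the MDPL2 simulation).
log_mvir = 11.0 + rng.exponential(scale=0.45, size=500_000)

def mock_observable(log_mvir, alpha, scatter, pivot=14.0):
    """Log-linear Mvir-observable relation with Gaussian scatter (dex)."""
    return alpha * (log_mvir - pivot) + rng.normal(0.0, scatter, log_mvir.size)

def top_n(values, n):
    """Indices of the n objects with the largest observable values."""
    return np.argsort(values)[::-1][:n]

n_sel = 5000
for alpha, scatter in [(1.2, 0.2), (0.8, 0.4), (0.4, 0.2)]:
    sel = log_mvir[top_n(mock_observable(log_mvir, alpha, scatter), n_sel)]
    print(f"alpha={alpha:3.1f}, sigma={scatter:3.1f} -> "
          f"<logMvir>={sel.mean():.2f}, sigma(logMvir)={sel.std():.2f}")
\end{lstlisting}
Proxies with a steeper slope and/or smaller scatter select samples with a higher mean \mvir{} and a narrower \mvir{} distribution, while combinations that share the same $\scatterObsSymMhalo{}/\alpha$ ratio select statistically identical samples.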
Figure \ref{fig:theory_1} also shows the \emph{stacked} \dsigma{} profiles of these \topn{} samples. The \topn{} sample with a higher mean \mvir{} and a lower value of \scatterMhaloObsSym{} has a \dsigma{} profile with larger overall amplitude. We can therefore use the \emph{stacked} \dsigma{} profiles of different \topn{} samples to probe their underlying \mvir{}--observable relations. \citet[][]{Reyes2008} applied a similar method to develop improved halo mass tracers of clusters. The right panel of Figure \ref{fig:theory_1} illustrates that the ratio of \dsigma{} profiles exhibits scale-dependent features that reveal subtle differences in other halo properties, and also in large-scale environment. Our use of \topn{} tests in this paper will additionally leverage the discriminating power of this scale-dependence when assessing various halo mass proxies. Finally, Figure \ref{fig:theory_1} also reveals a degeneracy between the slope, $\alpha,$ and the scatter, \scatterObsSymMhalo{}, such that different combinations of $\alpha$ and \scatterObsSymMhalo{} can produce identical \mvir{} distributions. We discuss this degeneracy in the next section (\S\ \ref{sec:comp_scatters}). \subsection{Modelling Methodology} \label{sec:model} To quantitatively interpret differences between \dsigma{} profiles, we develop a simple forward-modelling method based on data from the \mdpl2{} and the Small MultiDark Planck (\smdpl{})\footnote{\url{https://www.cosmosim.org/cms/simulations/smdpl/}} N-body simulations (e.g., \citealt{Klypin2016}). We assume a $\log$-linear \mvir{}-observable relation with a constant Gaussian scatter. We use this model to estimate the scatter in the mass-proxy relation, and also to infer the underlying \mvir{} distribution of different samples. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure/fig_2} \caption{ The \dsigma{} profiles and \mhalo{} distributions of different \topn{} samples with a wide range of scatter values ($0<$\scatterMhaloObsSym{}$<1$). This combines data from both \mdpl2{} and \smdpl{}. The top and bottom rows display the first and last \topn{} bins (the most and least massive in average \mhalo{}) used in this work (see \S\ \ref{sec:binning}). Their number density thresholds correspond to $0.2 < z < 0.5$ \redm{} clusters in HSC \texttt{S16A} area with $35 < \lambda < 150$ (Bin 1) and $6 \leq \lambda < 10$ (Bin 4). \textbf{Left}: the \rdsigma{} profiles of the \topn{} samples with different $\sigma_{\langle s \mid \mu\rangle}$ values. We highlight the profiles corresponding to $\sigma_{\langle s \mid \mu\rangle}=0.4$ (dashed line) and $=0.6$ (dot-dashed line). \textbf{Middle}: the ratios between the \dsigma{} profiles of the \topn{} samples with non-zero scatter and the ``perfect'' sample ($\Delta\Sigma_{\sigma=0}$). \textbf{Right}: $\log M_{\rm vir}$ distributions of the ``perfect'' \topn{} sample and the $\sigma_{\langle s \mid \mu\rangle}=0.4$ and $=0.6$ samples. Grey vertical lines indicate the mean $\log M_{\rm vir}$ for each distribution. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig2.ipynb}{\faGithub}. 
} \label{fig:mdpl2} \end{figure*} \subsubsection{The relationship between \scatterMhaloObsSym{}, \scatterObsSymMhalo{}, and $\alpha$} \label{sec:comp_scatters} In this section, we use a simple analytic model to explore the connection between the scatter and slope of the $\log$-linear \mvir{}-observable relation and the amplitude of the \dsigma{} profiles. The shape of the halo mass function (HMF) directly influences the characteristics of a \mvir{}--observable relation \citep[\eg{}][]{Tinker2008}. We will first use an analytic form of the HMF to demonstrate the relation between \scatterMhaloObsSym{} and \scatterObsSymMhalo{}. We approximate the HMF using the following exponential functional form suitable for the high--\mvir{} end: \begin{equation} \hmf{} \equiv \frac{dn(\haloSym{})}{d\mu{}} = \exp \left(\beta_0 - \beta_1 \haloSym{} - \frac{\beta_2}{2} \haloSym{}^2 \right). \label{eq:quadratic_hmf} \end{equation} \noindent For large values of mass, the HMF declines rapidly with \mhalo{} ($\beta_1 > 0$) with a steepening slope ($\beta_2 > 0$)\footnote{For the \mdpl2{} HMF at $z\sim 0.4$, we adopt the best--fit parameter values of $\beta_{0}=-0.558$, $\beta_{1}=0.670$, and $\beta_{2}=2.959$.}. We model the halo mass proxy \obsSym to follow a $\log$-linear relation with a constant $\log$-normal scatter value: \begin{equation} \obsSym = \mathcal{N}(\slope \haloSym + \intercept,\ \scatterObsSymMhalo). \label{eq:lognormal_obs_given_mhalo} \end{equation} The \scatterObsSymMhalo{} value here is often quoted as the ``scatter of the SHMR'' (e.g., \citealt{More2011, Leauthaud2012, Reddick2013, Behroozi2013}) and has been frequently used to infer physical information about galaxy formation (e.g., \citealt{Gu2016, Matthee2017, Tinker2017c, Wechsler2018}). Yet it is the scatter of \mvir{} at fixed observable, \scatterMhaloObsSym{}, that we estimate in observations using the \topn{} test. We now briefly discuss the connection between \scatterObsSymMhalo{} and \scatterMhaloObsSym{}. First, the probability density of the observable $\obsSym$ is given by, \begin{equation} P(\obsSym{}) \equiv \int_{0}^{\infty} \hmf{} P(\obsSym{} | \haloSym{}) d\haloSym{} \end{equation} At fixed \obsSym{}, the mean value of \haloSym{} is, \begin{equation} \begin{aligned} \langle \haloSym{} | \obsSym \rangle &= \frac{1}{P(\obsSym)} \int_{0}^{\infty} \hmf{} P(\obsSym{} | \haloSym{}) \haloSym{} d\haloSym{} \\ &= \frac{\left( \frac{\obsSym- \intercept}{\slope} - \beta_1 \left(\frac{\scatterObsSymMhalo}{\slope}\right)^2 \right)}{ 1 + \beta_2 \left(\frac{\scatterObsSymMhalo}{\slope}\right)^2} \label{eq:mean_of_mu} \end{aligned} \end{equation} \noindent The three components of $\langle \haloSym{} | \obsSym \rangle$ are: \begin{enumerate} \item The mean relation between the observable and halo mass, $(\obsSym- \intercept) / \slope$. \item A shift due to the Eddington bias caused by the linear slope of the HMF, $-\beta_1 (\frac{\scatterObsSymMhalo}{\slope})^2$. In the case of $\beta_1 > 0$, this shift is to lower \haloSym{} as there are more low \haloSym{} objects up-scattered into the selection. \item A second shift is caused by the curvature of the HMF, $(1 + \beta_2 (\frac{\scatterObsSymMhalo}{\slope})^2)^{-1}$. Again, $\beta_2 > 0$ results in more low \haloSym{} objects and thus a shift to lower \haloSym{}. 
\end{enumerate}
For the scatter in \haloSym{} at fixed \obsSym{}, we have
\begin{equation}
\begin{aligned}
\scatterMhaloObsSym{}^{2} &= \frac{1}{P(\obsSym{})} \int_{0}^{\infty} \hmf{} P(\obsSym{} | \haloSym{}) \left( \haloSym{} - \langle \haloSym{} | \obsSym{} \rangle \right)^2 d\haloSym{}, \\
\text{so that}\quad \scatterMhaloObsSym{} &= \frac{\scatterObsSymMhalo}{\sqrt{\beta_2 \scatterObsSymMhalo^2 + \slope^2}}.
\label{eq:scatter_of_mu}
\end{aligned}
\end{equation}
\noindent In the case of a power law halo mass function ($\beta_2 = 0$), this expression reduces to the commonly seen $\scatterObsSymMhalo / \slope$. The positive $\beta_2$ of the HMF decreases this scatter. Finally, the higher moments of $P(\haloSym{} | \obsSym{})$, such as the skewness and excess kurtosis, confirm that it follows a Gaussian distribution for the approximate HMF of Equation \ref{eq:quadratic_hmf}. We now rewrite Equation \ref{eq:scatter_of_mu} in a more practical form that makes it clear that \scatterMhaloObsSym{} depends on the {\em ratio} of \scatterObsSymMhalo{} and \slope. This is obvious in the case of a power law mass function ($\beta_2 = 0$) and is also true for the more general quadratic form (Equation \ref{eq:quadratic_hmf}),
\begin{equation}
\scatterMhaloObsSym{} = \frac{\scatterObsSymMhalo}{\sqrt{\beta_2 \scatterObsSymMhalo^2 + \slope^2}} = \left(\beta_2 + \left(\frac{\slope}{\scatterObsSymMhalo}\right)^2\right)^{-1/2}
\label{eq:ratio_is_what_matters}
\end{equation}
This equation shows that, for a given \topn{} selection, two \mvir{}--observable relations with the same $\scatterObsSymMhalo / \slope$ ratio will have the same value of \scatterMhaloObsSym{}. We demonstrate this in the right panel of Figure \ref{fig:theory_1} by populating \mdpl2{} halos with mock observables that follow different \mvir{}--observable relations. The two observables whose \mvir{}--observable relations share the same value of $\alpha / \scatterObsSymMhalo = 2$ (orange and light green) lead to the same \mvir{} distributions in the top $N=5000$ sample and result in almost identical stacked \dsigma{} profiles. As shown above, our \topn{} test only probes the observed \mvir{} distribution, and cannot even in principle distinguish between {\em i)} mass-observable relations with steep slopes and high scatter versus {\em ii)} mass-observable relations with shallow slopes but low scatter. We note that this degeneracy is unimportant for the present work, because here we only wish to perform \emph{relative} comparisons between different proxies for \mvir{}. In this paper, we are not concerned with inferring the specific values of either $\alpha$ or \scatterObsSymMhalo{}; rather, our principal focus is on a comparative assessment of which observable proxies \obsSym{} supply the strongest discriminating power regarding \mvir{}. For a few specific cases of \obsSym{} that are of special importance, we will estimate the value of \scatterObsSymMhalo{} by using the same value for the slope as is conventionally assumed in the literature on \mvir{}--observable relations. This allows us to inter-compare different proxies using \scatterObsSymMhalo{}, and also to make comparisons with previous results from observations or hydro simulations. However, we reiterate that in such cases, the specific value of our estimate of \scatterObsSymMhalo{} will strictly depend upon the assumed value of the slope $\alpha$, as this mathematical degeneracy cannot be evaded.
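The relations above are straightforward to verify numerically. The following toy Monte Carlo is a sketch only: \haloSym{} is measured relative to an arbitrary pivot so that the quadratic HMF can be sampled directly as a Gaussian, and the normalisation $\beta_0$ drops out.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(0)

# The quadratic log-space HMF, dn/dmu ~ exp(-beta1*mu - beta2*mu^2/2), is
# (up to normalisation) a Gaussian in mu with mean -beta1/beta2 and
# variance 1/beta2; here mu is the log halo mass relative to an arbitrary pivot.
beta1, beta2 = 0.670, 2.959
mu = rng.normal(-beta1 / beta2, 1.0 / np.sqrt(beta2), size=2_000_000)

# Log-linear observable with constant Gaussian scatter (slope alpha, sigma_s|mu).
alpha, sig = 1.0, 0.4
s = alpha * mu + rng.normal(0.0, sig, mu.size)

# Measured scatter of mu in a narrow bin of s versus the analytic prediction.
in_bin = np.abs(s - 0.0) < 0.02
measured = mu[in_bin].std()
predicted = sig / np.sqrt(beta2 * sig**2 + alpha**2)
print(f"measured = {measured:.3f}, predicted = {predicted:.3f}")  # ~0.33 dex
\end{lstlisting}
Re-running the sketch with different $(\alpha,\ \scatterObsSymMhalo{})$ pairs that share the same ratio returns the same conditional scatter, as expected from Equation \ref{eq:ratio_is_what_matters}.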
\subsubsection{Estimating \scatterMhaloObsSym{}} \label{sec:estimate_scatter} The discussion in \S\ \ref{sec:comp_scatters} demonstrates that we can compare the \scatterMhaloObsSym{} values of two \topn{} samples using their stacked \dsigma{} profiles: the sample selected by the ``better'' \mhalo{} tracer should yield a \dsigma{} profile with higher amplitude. More importantly, we can also estimate \scatterMhaloObsSym{} from \dsigma{} profiles by comparing with model profiles built from simulations using the same number density selection. We build our model by populating halos in simulations with mock observables that follow $\log$-linear relations (Equation \ref{eq:lognormal_obs_given_mhalo}) with a fixed slope of $\alpha = 1$ (see the justification in the previous section) but with different \scatterObsSymMhalo{} values. In each realisation, we derive the best-fit $\haloSym{} | \obsSym{}$ relation and estimate the \scatterMhaloObsSym{} value for the same pre-defined number density bins (\topn{} bins) used in the observations. For each \topn{} bin, we calculate the stacked \dsigma{} profiles and store the underlying \mhalo{} distributions at different \scatterMhaloObsSym{} values. We adopt a densely sampled grid of \scatterMhaloObsSym{} values between 0.0 and 1.0 dex. Observations suggest that at high-\mhalo{} and low redshift, \scatterObsSymMhalo{}$\sim 0.2$ dex (e.g., \citealt{More2011, Leauthaud2012, Reddick2013, Behroozi2013, Tinker2017}). Given the slope of the SHMR, this means that \scatterMhaloObsSym{} is expected to be in the $\sim 0.4$-0.6 dex range (e.g., Figure 5 \& 7 of \citealt{Wechsler2018}). Therefore it is essential to cover a large range in \scatterMhaloObsSym{} values. We use the \href{https://halotools.readthedocs.io/en/latest/api/halotools.mock_observables.mean_delta_sigma.html}{\texttt{mean\_delta\_sigma}} function from \texttt{halotools} \citep{Hearin2017}\footnote{ \url{https://github.com/astropy/halotools}} to calculate the stacked \dsigma{} profiles in comoving coordinates, based on the algorithm described in Appendix B of \citet{Lange2019}. We then convert them into physical coordinates before comparing to observations. We use 50 (10) million down-sampled particles from the \mdpl2{} (\smdpl{}) simulations for the calculation and choose the \texttt{Z} direction as the line-of-sight. The \mdpl2{} simulation has a large box size of 1 Gpc$/h$ that helps sample the very high-\mhalo{} end of the HMF. However, its particle mass resolution ($1.51 \times 10^{9} M_{\odot}/h$) is not sufficient to resolve the $< 10^{12.5} M_{\odot}/h$ halos present in samples with large \scatterMhaloObsSym{} values. In contrast, the \smdpl{} simulation has a much better mass resolution ($9.63 \times 10^{7} M_{\odot}/h$) for calculating accurate \dsigma{} profiles for less massive halos but does not have sufficient volume (box size $=0.4$ Gpc$/h$) to sample the very high-\mvir{} end. Therefore, we combine the predictions from the \mdpl2{} and \smdpl{} simulations. We use the \mdpl2{} simulation to cover the $0.00 <$\scatterMhaloObsSym{}$<0.65$ dex range with a 0.01 dex grid, and use \smdpl{} to cover the $0.65 <$\scatterMhaloObsSym{}$<1.0$ dex range with a 0.05 dex grid size. Using the overlapping $\scatterMhaloObsSym{}$ range, we confirm that the two simulations provide \dsigma{} profiles that are consistent within their statistical uncertainties.
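For reference, one realisation of this procedure can be sketched as follows. The function below assumes arrays of halo positions and $\log M_{\rm vir}$ values plus down-sampled particle positions from one snapshot (the variable names and the down-sampling bookkeeping are placeholders), and it assumes the \texttt{mean\_delta\_sigma} call signature of recent \texttt{halotools} releases.
\begin{lstlisting}[language=Python]
import numpy as np
from halotools.mock_observables import mean_delta_sigma

rng = np.random.default_rng(1)

def model_delta_sigma(halo_pos, log_mvir, ptcl_pos, ptcl_mass,
                      downsampling_factor, rp_bins, lbox,
                      n_top, scatter, alpha=1.0):
    """Stacked DeltaSigma (comoving units) of a top-N sample selected by a
    mock observable following a log-linear relation with slope alpha and
    Gaussian scatter (dex) around the true halo mass.

    halo_pos / ptcl_pos : (N, 3) comoving positions in the simulation box
    ptcl_mass           : mass of one simulation particle
    downsampling_factor : N_total_particles / N_downsampled_particles
    rp_bins             : projected radial bin edges
    lbox                : box size (for periodic boundary conditions)
    """
    obs = alpha * log_mvir + rng.normal(0.0, scatter, log_mvir.size)
    top = np.argsort(obs)[::-1][:n_top]
    return mean_delta_sigma(halo_pos[top], ptcl_pos, ptcl_mass,
                            downsampling_factor, rp_bins, period=lbox)

# A grid of model profiles is built by looping over scatter values, e.g.
# [model_delta_sigma(..., scatter=s) for s in np.arange(0.0, 0.66, 0.01)]
\end{lstlisting}
Each grid profile is then compared with the observed profile via the $\chi^2$ statistic defined below.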
We use the $z=0.364$ snapshot from \mdpl2{} and the $z=0.404$ snapshot from \smdpl{}, which are the closest to the mean redshift ($\sim 0.4$) of the HSC sample. Figure \ref{fig:mdpl2} shows the predicted \dsigma{} profiles as a function of \scatterMhaloObsSym{} in two number density bins. In addition to the expected decreasing \dsigma{} amplitudes with increasing \scatterMhaloObsSym{} values, we also see scale-dependent differences in the ratios of the predicted \dsigma{} profiles. We highlight the \dsigma{} profiles with \scatterMhaloObsSym{}$=0.4$ and 0.6 dex along with their \mhalo{} distributions. We estimate \scatterMhaloObsSym{} by matching the observed \dsigma{} profiles to the model predictions. For an observed lensing profile ($\Delta\Sigma_{\rm O}$) and its covariance matrix ($\boldsymbol{C}$), we use the $\chi^2$ statistic to evaluate how well a given model profile ($\Delta\Sigma_{\rm M}$) describes the observations: \begin{equation} \chi^{2}=(\Delta\Sigma_{\rm M}-\Delta\Sigma_{\rm O})^{\top} \boldsymbol{C}^{-1}(\Delta\Sigma_{\rm M}-\Delta\Sigma_{\rm O}). \label{eq:chi2} \end{equation} \noindent This ignores the uncertainties in the theoretical \dsigma{} profiles (\eg{} due to stochasticity when populating halos, or sample variance), which are negligible compared to the observed uncertainties. Appendix \ref{app:fitting} includes further details about the fitting process. In the remainder of this paper, we will use \sigmvir{}$\equiv \sigma_{\log M_{\rm vir}} \equiv $\scatterMhaloObsSym{} to refer to the scatter of halo mass at a given observable. This model only focuses on the underlying distribution of \mvir{}, which is {\em not} the only factor that determines the \dsigma{} profile. For example, our model does not account for mis-centering or baryonic physics -- these and other effects will be discussed in later sections. To evaluate the impact of satellite galaxies on the \dsigma{} profile, we also provide a special version of the model that can match the observed stellar mass function (SMF) and clustering signals of HSC massive galaxies. We provide more details of this model in Appendix \ref{app:hsc_model}. We include massive satellite galaxies in our model when comparing to HSC massive galaxies since they should be present in the observed sample. For richness-selected clusters, we remove the satellite galaxies from the model before calculating the \dsigma{} profiles. We assume the cluster finders identify the correct central galaxies even though that is not always the case. However, whether or not we include satellites in our models has no impact on our results. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure/fig_3} \caption{ \textbf{Left}: the HMF of the \mdpl2{} simulation and the \mvir{} range of our \topn{} tests for an ``ideal'' (zero scatter) \topn{} selection. The four coloured regions highlight the \mvir{} ranges of the four \topn{} bins. Dashed lines with corresponding colours label the number density boundaries of each bin. \textbf{Middle}: the richness function of the \texttt{S16A} \redm{} clusters used in this work. Coloured regions reflect the $\lambda$ ranges of all four bins. We describe the choice of these bins in \S\ \ref{sec:binning} and list their properties in Table \ref{tab:summary}. \textbf{Right}: The grey shaded region and the dashed-line contours show the distribution of HSC massive galaxies over the \maper{100}-\maper{10} plane. The dot-dash line highlights \logmmax{}$=11.5$, a conservative \maper{100} completeness limit.
The four solid-line contours with corresponding colours highlight the distributions of \menve{50}{100}-selected galaxies in the four \topn{} bins. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig3.ipynb}{\faGithub}. } \label{fig:density_bins} \end{figure*} \section{Data} \label{sec:data} In this section, we introduce the imaging data (\S\ \ref{sec:hsc}), the HSC massive galaxy sample (\S\ \ref{sec:galaxy_sample}), and the richness-selected galaxy cluster catalogues (\S\ \ref{sec:cluster_sample}). \subsection{Hyper Suprime-Cam Subaru Strategic Program} \label{sec:hsc} In this work, we use $\sim 137$ \sqdeg{} of deep optical images from the \texttt{WIDE} layer of the \texttt{S16A} release of the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP, or HSC survey; e.g., \citealt{HSC-SSP, HSC-DR1, HSC-DR2})\footnote{\url{https://hsc.mtk.nao.ac.jp/ssp/}} -- an ambitious cosmology survey using the 8.2-m Subaru Telescope. HSC multi-band ($grizY$) images have impressive depth ($\sim$3--4 mag deeper than SDSS), superb seeing conditions (the mean $i$-band seeing has a Full Width at Half Maximum, or FWHM, of $\sim 0.58$ arcsec), and fine pixel resolution (0.168 arcsec), all making it ideal for studying galaxy structure and performing galaxy-galaxy lensing measurements. We use the coadd images produced by \texttt{hscPipe 4.0.2}. \texttt{hscPipe} is a version of the Large Synoptic Survey Telescope (LSST) pipeline (e.g., \citealt{Juric2015, Axelrod2010})\footnote{\url{https://pipelines.lsst.io}} specifically modified for HSC\footnote{ The most recent version of \texttt{hscPipe} can be found here: \url{https://hsc.mtk.nao.ac.jp/pipedoc_e/}. }. Please see \citet{HSC-PIPE} for more details about the data reduction process and \citet{SynPipe} for its photometric performance. We also make use of the photometric redshift (photo-$z$) measurements of HSC galaxies from the \href{https://github.com/joshspeagle/frankenz}{\texttt{frankenz}} \citep{Speagle2019} algorithm. Please see \citet{HSC-PHOTOZ} for a summary of its performance. For galaxy-galaxy lensing measurements, we use the public shape catalogue for \texttt{S16A}\footnote{The \texttt{S16A} weak lensing shape catalogue is released here: \url{https://hsc-release.mtk.nao.ac.jp/doc/index.php/s16a-shape-catalog-pdr2/}} based on the $i$-band images and the re-Gaussianization algorithm (\citealt{HirataSeljak2003}). The HSC Y1 cosmology analyses (e.g., \citealt{Hikage2019, Hamana2020}) and other cluster lensing analyses (e.g., \citealt{Umetsu2020}) used the same catalogue. Please see \citet{HSC-PIPE}, \citet{HSC-WLCAT}, and \citet{HSC-WLCALIB} for details about the shape measurements and lensing calibration. All galaxies and clusters used in this work are filtered through the bright star masks (see \citealt{HSC-STAR} for details) used in the HSC \texttt{S18A} or \texttt{PDR2} data release to avoid contamination from saturated stars. Please refer to \citet{Huang2018b, Huang2018c, Huang2020} for further details about the HSC data. All imaging data, along with the photometric and the photo-$z$ catalogues, have been released to the public\footnote{\url{https://hsc.mtk.nao.ac.jp/ssp/data-release}}. \subsection{HSC Massive Galaxy Sample} \label{sec:galaxy_sample} Using the \texttt{S16A} data, we select a sample of massive galaxies at $0.19 < z < 0.52$.
In this redshift range, we can resolve their inner light profile ($r<10$ kpc) but also have the depth to explore the faint outskirts ($r \sim 100$ kpc). This is the same sample used in \citet{Huang2020}. Please refer to \citet{Huang2020} for a detailed description of the sample; here we only provide a brief summary. The sample contains 24926 massive galaxies selected using a cut on the \cmodel{}-based \mstar{}, $\mcmodel{} \geq 10^{11.2} M_{\odot}$. The \mcmodel{} is based on the \mlratio{} estimated by five-band SED fitting using \texttt{iSEDfit} (see \citealt{Moustakas2013}). All galaxies have valid 1-D surface brightness profiles measured in $i$-band with empirical background correction that enables non-parametric \mstar{} measurements out to $>100$ kpc. During the extraction of the 1-D profiles, $\sim 9$\% of the original sample was excluded due to contamination by nearby objects. We treat this as a small decrease of the effective survey area in this work. Among all 24051 galaxies, 15558 have useful spec-$z$ measurements. When making a cut using the 100 kpc aperture \mstar{}, at $\maper{100} \geq 10^{11.5} M_{\odot}$, 4429 of the 4848 galaxies ($\sim 91$\%) have spec-$z$. Similarly, 2190 of the 2299 galaxies ($\sim 95$\%) with $\maper{100} \geq 10^{11.6} M_{\odot}$ have spec-$z$. Using this sample, we have uncovered remarkable structural diversity in the outer stellar halo (\citealt{Huang2018b}) and shown that the stellar mass distribution has a connection with \mvir{} (\citealt{Huang2018c, Huang2020}). We have also compared these observed 1-D stellar mass density profiles with those from state-of-the-art hydro-simulations to gain insights into their assembly history (\citealt{Ardila2021}). We release the catalogue of this massive galaxy sample here: \href{https://zenodo.org/record/4902141}{\faDatabase}. \subsection{Red Sequence Cluster catalogues} \label{sec:cluster_sample} Taking advantage of the well-defined ``red-sequence'' of low-$z$ galaxy clusters and the potential low-scatter nature of the \mvir{}-richness scaling relation (e.g., \citealt{Rozo2009, Rykoff2012}), richness-based cluster finders provide a promising way to identify massive halos in imaging data. Here we will evaluate two cluster catalogues based on red-sequence algorithms using the \topn{} test by estimating the \sigmvir{} values of these two cluster samples and directly comparing their \dsigma{} profiles to those from \mstar{}-based \mvir{} proxies. The massive galaxy sample and the cluster samples are independently selected from the same HSC footprint but there is no one-to-one correspondence between the two catalogues. For example, there are massive galaxies not contained in either cluster catalogue. We will briefly discuss this in \S\ \ref{sec:discussion}. \subsubsection{\redm{} Clusters} \label{sec:cluster_redmapper} \redm{} \citep{Rykoff2014, Rozo2014, Rozo2015a, Rozo2015b} \footnote{\url{http://risa.stanford.edu/redmapper/}} is a popular cluster finding algorithm based on the richness of red-sequence galaxies. It has been applied to several large imaging surveys including SDSS (e.g., \citealt{Rykoff2014}), DES (\citealt{Rykoff2016, McClintock2019}), and HSC.
The \mvir{}--richness relation of \redm{} clusters has been investigated in multiple works (e.g., \citealt{Saro2015, Farahi2016, Simet2017, Melchior2017, Baxter2018, Murata2018, McClintock2019}). We use an internal version of the \redm{} cluster catalogue for \texttt{S16A} data (Kawinwanichakij \& Rykoff, private communication) based on the updated \texttt{Python} version of \redm{}\footnote{\url{https://github.com/erykoff/redmapper}}. The algorithm is similar to that used in \citet{Rykoff2016} with minor modifications. At $0.19 < z < 0.52$, it contains 2409 clusters with $\lambda \geq 5$ and 227 with $\lambda \geq 20$. Of these clusters, 1623 have spec-$z$ (from a variety of sources) and the rest have a high-quality photo-$z$ from their red-sequence. The sample has a median photo-$z$ bias of $\delta_{z} \sim 0.0012$ (0.0008), a scatter of $\sigma_{z}/(1 + z) \sim 0.011$ (0.007), and a 4-$\sigma$ outlier fraction of $\sim 0.7$\% (0.5\%) for $\lambda \geq 5$ ($\geq 20$) clusters, showing performance consistent with that of the DES catalogue \citep{McClintock2019}. We confirm that using only the photo-$z$ from \redm{} does not affect the relevant conclusions. Regarding the completeness of the cluster sample, \citet{McClintock2019} estimates that the DES limiting magnitude is deep enough for $0.2 L_{\star}$ galaxies at $z \sim 0.7$, and that the galaxy sample for \redm{} is $>90$--95\% complete. Given the deeper imaging in HSC, it is safe to expect even better completeness at $z<0.52$. In addition to the richness, \redm{} provides a list of central galaxy candidates along with their central probabilities ($P_{\rm cen}$). We choose the galaxy with the highest $P_{\rm cen}$ as the centre of the cluster. Using a sub-sample of X-ray detected clusters, \citet{Zhang2019b} analyses \redm{} mis-centring in DES for clusters with $\lambda > 20$. They find $\sim 83$\% of the clusters are well-centred. In the HSC \redm{} sample, 66\% (77\%) of clusters have central galaxies with $P_{\rm cen} > 0.8$ (0.7). We also use the 364 SDSS \texttt{DR8} \redm{} clusters (\citealt{Rykoff2014}) with $\lambda_{\rm SDSS} \geq 20$ in the \texttt{S16A} footprint to show that the results found for the HSC \redm{} clusters also hold for the SDSS \redm{} catalogue (see Appendix \S\ \ref{app:sdss_redm}). The SDSS sample is only complete at $z < 0.33$. In Appendix \S\ \ref{app:des_redm}, we compare the stacked \dsigma{} profiles of HSC and DES \redm{} (\eg{} \citealt{Chang2018, McClintock2019}) clusters in the same redshift ($0.2 < z < 0.5$) and richness ($20 \leq \lambda < 100$) bins and show they are consistent with each other. \subsubsection{\camira{} Clusters} \label{sec:cluster_camira} \camira{}\footnote{\url{https://www.slac.stanford.edu/~oguri/cluster/}} is a red-sequence cluster finding algorithm developed by \citet{Oguri2014}. It has been applied to SDSS (\citealt{Oguri2014}) and HSC (\citealt{Oguri2018}) data. Unlike \redm{}, \camira{} does not have a richness-dependent radius, and instead counts red galaxies with $L \geq 0.2L_{\star}$ within a fixed $R\leq 1 h^{-1}$ Mpc. Its \mvir{}-richness relation has been calibrated using a variety of methods \citep{Murata2019, Chiu2020a, Chiu2020b}. Here we use the public \texttt{S16A} \camira{} catalogue that contains 998 (263) clusters with $N_{\rm Mem} \geq 10$ ($\geq 20$) for $0.19 < z < 0.52$. Among them, 725 clusters have spec-$z$ measurements for their central galaxies.
Our \camira{} sample has a median photo-$z$ bias of $\delta_{z} \sim -0.0042$ ($-0.0036$), a scatter of $\sigma_{z}/(1 + z) \sim 0.013$ (0.009), and a 4-$\sigma$ outlier fraction of $\sim 1.4$\% (0.9\%) for the $N_{\rm Mem} \geq 10$ ($\geq 20$) clusters. As with \redm{}, using only the photo-$z$ has no impact on any key results. The \camira{} catalogue shows excellent completeness when compared to X-ray clusters ($\gtrapprox 0.8$; see \citealt{Oguri2018} \S\ 5.3) or mock galaxy catalogues ($> 0.8$ for $M_{200c} > 5 \times 10^{13} h^{-1} M_{\odot}$ clusters at $0.3 < z < 0.6$; see \citealt{Oguri2018} \S\ 6). \camira{} assigns a central galaxy to each cluster without providing a central probability. \citet{Oguri2018} investigated the off-centre distance ($R_{\rm off}$) distribution using matched X-ray clusters. While the distribution is centred at $R_{\rm off} \approx 0$ Mpc, $\sim 30$\% of the clusters are offset from the X-ray peak, with their $R_{\rm off}$ distribution described by a Gaussian component with $\sigma=0.26 \pm 0.04\ h^{-1}$ Mpc. We also test the internal \texttt{S18A}, \texttt{S19A}, and \texttt{S20A} \camira{} catalogues within the \texttt{S16A} footprint. Differences in the data reduction process (e.g., background subtraction, deblending) cause subtle differences in the cluster detection and richness measurements. However, we verify that these updates do not change any conclusions. \section{Measurements} \label{sec:measure} Here we briefly introduce the methodologies behind the key measurements used in the \topn{} tests: the 1-D surface stellar mass density profiles ($\mu_{\star}$, \S\ \ref{sec:1d_prof}) and the galaxy-galaxy lensing \dsigma{} profiles (\S\ \ref{sec:dsigma}). \subsection{1-D Surface Mass Density Profiles} \label{sec:1d_prof} Our method for extracting 1-D $\mu_{\star}$ profiles is presented in previous work \citep{Huang2018b, Huang2018c, Ardila2021}. We refer readers to these papers for full technical details and only provide a brief summary here. Using the \ellipse{} isophotal analysis function from \iraf{}, we extract 1-D $i$-band surface brightness profiles after aggressively masking out nearby contamination and empirically correcting for the local background. In addition to the mask, the strategy of taking the median of flux density values along the isophote after 3-$\sigma$ clipping makes our 1-D profile robust against the high density of faint objects around massive galaxies (\citealt{Ardila2021}). After background correction, the 1-D profile is reliable down to $\sim 28$ \smag{}, roughly corresponding to $r\sim 100$ kpc for our sample. The inner $\sim 5$-6 kpc of the profile is smeared by the seeing. We then convert the $i$-band surface brightness profile to the $\mu_{\star}$ profile using the average $i$-band \mlratio{} derived from SED fitting after applying corrections for galactic extinction and cosmological dimming. We ignore the \mlratio{} gradient in this work. Low-$z$ massive galaxies have shallow but negative colour gradients (e.g., \citealt{Huang2018b, Wang2019, Montes2021}), which suggests the average \mlratio{} will underestimate the \mstar{} in the central region and overestimate it in the outskirts. However, the lack of clear dependence of colour gradients on \mstar{} (\citealt{Huang2018b}) suggests this systematic will not influence the conclusions of this work. We release the massive galaxy catalogue along with the 1-D $\mu_{\star}$ profiles here: \href{https://zenodo.org/record/5259075}{\faDatabase}.
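As a reference for the conversion described above, the following sketch turns an $i$-band surface brightness profile into a $\mu_{\star}$ profile. The adopted solar absolute magnitude, the treatment of the $(1+z)^4$ dimming, and the assumption that K-corrections are absorbed into the SED-based \mlratio{} are illustrative choices, not the exact recipe used to build our catalogue.
\begin{lstlisting}[language=Python]
import numpy as np

def sb_to_mu_star(sb_i, redshift, a_i, log_ml_i, abs_mag_sun_i=4.53):
    """Convert an i-band surface brightness profile into a stellar mass
    surface density profile.

    sb_i          : observed surface brightness (mag / arcsec^2)
    redshift      : galaxy redshift (for cosmological dimming)
    a_i           : Galactic extinction in the i band (mag)
    log_ml_i      : log10 of the average i-band mass-to-light ratio
    abs_mag_sun_i : absolute magnitude of the Sun in the i band (assumed value)

    Returns mu_star in Msun / kpc^2.
    """
    # Correct for Galactic extinction and (1+z)^4 surface brightness dimming
    # (equivalent to 10 * log10(1+z) magnitudes).
    sb_corr = np.asarray(sb_i) - a_i - 10.0 * np.log10(1.0 + redshift)
    # mag/arcsec^2 -> Lsun/pc^2 (21.572 is the standard conversion constant).
    log_mu_lum = 0.4 * (abs_mag_sun_i + 21.572 - sb_corr)
    # Apply the mass-to-light ratio and convert pc^-2 -> kpc^-2.
    return 10.0 ** (log_mu_lum + log_ml_i + 6.0)
\end{lstlisting}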
\subsection{Galaxy-Galaxy Lensing Measurements} \label{sec:dsigma} The galaxy-galaxy (g-g) lensing measurements done here follow almost exactly those in \citet{Speagle2019} and \citet{Huang2020}, which are themselves based on the methodology presented in \citet{Leauthaud2017}. This method subtracts lensing signals around a large number of random positions to achieve unbiased measurements (\citealt{Singh2017}). The equations used to derive the \dsigma{} profile are given in Appendix \ref{app:dsigma_detail}. Compared to these earlier works, we provide a new recipe for the $f_{\rm bias}$ factor that more accurately accounts for the photo-$z$ dilution effect (see Equation \ref{eq:fbias})\footnote{The typical $f_{\rm bias}$ factor value is at the $\sim 1$-2\% level, and has no impact on the results of this work.}. We measure \dsigma{} in 11 physical logarithmic radial bins from 200 kpc to 10 Mpc using the \texttt{S16A} weak lensing shape catalogue (\citealt{HSC-WLCAT, HSC-WLCALIB}). We adopt the \texttt{frankenz}\footnote{\url{https://github.com/joshspeagle/frankenz}} photo-$z$ for source galaxies. For the photo-$z$ quality cut, we use the ``basic'' cut ($\chi^{2}_{5} < 6$) in \citet{Speagle2019} that removes about 5\% of source galaxies with unreliable photo-$z$. Most of them are at very low redshift and therefore do not contribute to the lensing signals in this work. The lens-source separation criteria are: $z_{\rm s} - z_{\rm L} \ge 0.1$ and $z_{\rm s} > z_{\rm L} + \sigma_{s,68}$, where $\sigma_{s,68}$ is the 1$\sigma$ uncertainty of the source photo-$z$. We confirm that other photo-$z$ quality cuts and slightly different lens-source separation criteria do not affect any results. We use both jackknife resampling in 40 pre-defined sub-regions and bootstrap resampling with 2000 iterations to estimate the covariance matrix and the uncertainties of the \dsigma{} profiles. The two methods lead to fully consistent results. We use \texttt{v0.2} of the \texttt{Python} g-g lensing code \texttt{dsigma} \footnote{\url{https://github.com/johannesulf/dsigma}} to calculate \dsigma{} profiles, and we release the \dsigma{} measurements for our massive galaxies and clusters here: \href{https://zenodo.org/record/5259075}{\faDatabase}. \section{Halo Mass Proxies and Bins} \label{sec:proxies} This section introduces the different \mvir{} proxies used in our \topn{} tests. We broadly group these observables into \mstar{}- and richness-based categories. For \mstar{}-based proxies, we include \mstar{} based on the default HSC photometry for extended objects (\S\ \ref{sec:mcmodel}), a series of \mstar{} measures from the 1-D $\mu_{\star}$ profiles (\S\ \ref{sec:maper} \& \S\ \ref{sec:menvelope}), and a linear combination of different aperture \mstar{} measures (\S\ \ref{sec:masap}). For richness-based methods, we include clusters from both \redm{} and \camira{}. We also describe our number density bins and show the estimated scatter by comparing to the model described in \S\ \ref{sec:estimate_scatter}. \subsection{Proxies} \subsubsection{\cmodel{} stellar mass} \label{sec:mcmodel} \cmodel{} is the default photometric model for extended objects in both the SDSS and HSC surveys, and will continue to be used in future imaging surveys. \cmodel{} attempts to describe the 2-D flux distribution of all extended objects using a combination of an exponential and a de Vaucouleurs component (e.g., \citealt{HSC-PIPE}).
It is an efficient and flexible model and can provide statistically robust colour measurements down to very faint magnitudes (e.g., \citealt{SynPipe}). However, \cmodel{} does not always provide accurate total flux measurements, especially for massive galaxies whose surface brightness profiles cannot be described by the model's underlying assumptions. In both the SDSS and HSC surveys, \cmodel{} photometry significantly underestimates the flux in the extended outskirts of massive, early-type galaxies (e.g., \citealt{Bernardi2013, Huang2018b}). In addition to the intrinsic limitations associated with the assumed model, systematics in critical steps in the data reduction process such as background subtraction and object deblending often interfere with \cmodel{} fitting, making it even more challenging to accurately recover the total flux. These issues become especially pronounced in deep imaging surveys such as HSC. Because \cmodel{} is the default photometry provided for extended objects in many imaging surveys, it is worth testing using the \topn{} methodology. We will quantify the impact of \cmodel{} photometry errors on the use of \cmodel{} masses as a halo mass proxy. The \cmodel{} stellar mass will be labelled as \mcmodel{}. We note that the \cmodel{} photometry used here is from HSC \texttt{S16A} and an old version of \hscpipe{} (\texttt{v4}). Although the updated \hscpipe{} includes multiple improvements and modifications, they do not solve the aforementioned issues for bright galaxies. We compare the \cmodel{} magnitudes of our sample using the \texttt{S16A}, {\tt S18A}, and {\tt S20A} data releases\footnote{The {\tt S18A} release applies a much improved background subtraction around bright objects. However, the well-preserved low surface brightness envelopes around massive galaxies make object deblending more challenging. This global background correction algorithm was then turned off in the subsequent {\tt S20A} release.}. We find no systematic difference between these measurements, hence our results about \mcmodel{} should apply to all HSC data releases. \subsubsection{Aperture \mstar{} From 1-D Profiles} \label{sec:maper} In \citet{Huang2018b}, we showed that the \mstar{} within a 100 kpc aperture is a better estimate of the ``total'' \mstar{} of massive galaxies than \mcmodel{}. We also demonstrated in \citet{Huang2018c, Huang2020} that changing the aperture used to measure \mstar{} changes the \mstar{}--\mvir{} relation. Here, we measure \mstar{} in apertures of 10, 30, 50, 75, 100, and 150 kpc in our \topn{} tests, and we evaluate how each performs as a proxy for \mvir{}. Throughout the paper, we will label these ``aperture \mstar{}'' measurements with \maper{10}, \maper{100}, etc. In practice, we integrate the 1-D $\mu_{\star}$ profile after accounting for the isophotal shape of the galaxy to get the ``curve-of-growth'' (CoG) of \mstar{}, which describes the relation between the semi-major axis length of an elliptical aperture and the enclosed \mstar{}. Interpolation of the CoG provides the measurements of different aperture \mstar{}. We note that the $\mu_{\star}$ profile outside 100 kpc becomes less reliable due to background subtraction issues, which affects the accuracy of aperture \mstar{} measurements at larger radii. Given the imaging depth of HSC data, we do not recover a substantial amount of \mstar{} beyond 100 kpc. The mean difference between \maper{150} and \maper{100} is only $\sim 0.02$ dex, while the maximum difference is $\sim 0.15$ dex.
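A minimal sketch of the curve-of-growth construction is given below. It assumes the 1-D profile has already been converted to a stellar mass surface density along the semi-major axis, that a single representative axis ratio describes the isophotes, and it neglects the (seeing-affected) mass interior to the first tabulated isophote; these simplifications are ours.
\begin{lstlisting}[language=Python]
import numpy as np

def curve_of_growth(sma_kpc, mu_star, axis_ratio):
    """Stellar mass enclosed within elliptical apertures of semi-major axis a.

    sma_kpc    : semi-major axis of each isophote (kpc), increasing
    mu_star    : stellar mass surface density profile (Msun / kpc^2)
    axis_ratio : representative axis ratio b/a of the isophotes

    dM = mu_star(a) * 2 * pi * (b/a) * a * da for an elliptical annulus;
    the integral is accumulated with the trapezoidal rule.
    """
    sma_kpc, mu_star = np.asarray(sma_kpc), np.asarray(mu_star)
    integrand = 2.0 * np.pi * axis_ratio * sma_kpc * mu_star
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(sma_kpc)
    return np.concatenate([[0.0], np.cumsum(steps)])

def aperture_mass(sma_kpc, cog, radius_kpc):
    """Aperture Mstar from interpolating the curve of growth."""
    return np.interp(radius_kpc, sma_kpc, cog)

# Example usage:
# cog = curve_of_growth(sma, mu_star, q)
# m100 = aperture_mass(sma, cog, 100.0)
# m50 = aperture_mass(sma, cog, 50.0)
# m_env_50_100 = m100 - m50   # stellar mass between 50 and 100 kpc
\end{lstlisting}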
The intrinsic $\mu_{\star}$ of massive galaxies certainly extends beyond the HSC surface brightness limit for individual galaxies (e.g., \citealt{Wang2019, Zhang2019, Montes2021, Kluge2021}), hence the true total \mstar{} is beyond the reach of our current aperture \mstar{} measurements. We attempt to account for the ``missing \mstar{}'' by fitting a 1-D \ser{} model to the $\mu_{\star}$ profile between 50 and 100 kpc. We use this model to predict the mass beyond the regime in which it can be measured with HSC. This model (assuming it correctly predicts the true profile) confirms that there is little \mstar{} beyond 100 kpc. Using this technique, the predicted average difference between \maper{300} and \maper{100} is only $\sim\,0.05$ dex. Using the CoG, we also measure the radii that contain 50\%, 80\%, and 90\% of the maximum \mstar{} measured by the 1-D profile (\mmax{}). We denote these radii as $R_{50}, R_{80}, R_{90}$. The ``half-mass'', or effective radius ($R_{50}$), provides another way to define apertures. For example, we can measure \mstar{} out to $2\times R_{50}$ or $4\times R_{50}$. Aperture masses using $R_{50}$ will be labelled as $M_{\star, 2R_{50}}$, $M_{\star, 4R_{50}},$ etc. We briefly explore the result of using these radii-based proxies in Appendix \ref{app:size}. \subsubsection{Stellar Mass of the Outer Envelope} \label{sec:menvelope} In \citet{Bradshaw2020}, the authors studied simulated data and noticed that the ``\exsitu{}'' component (the stellar mass that formed outside the halo of the main progenitor) of massive galaxies seems to have a tighter relation with \mvir{} than either the ``\insitu{}'' component or the total \mstar{}. This is also consistent with the modelling results from \citet{Huang2020}. While we cannot separate the \exsitu{} component from the \mstar{} distribution directly when using observational data alone, recent simulations and observations suggest that the \exsitu{} stars dominate the outskirts of massive galaxies (e.g., \citealt{Lackner2012, RodriguezGomez2016, Pulsoni2021}). It is therefore interesting to test whether the stellar mass in the outer envelope is a useful \mvir{} proxy using the \topn{} tests. Here we simply define this ``outer envelope'' (or outskirt) \mstar{} as the difference between two aperture \mstar{} measurements. For example, we will use \menve{50}{100} to denote the stellar mass between 50 and 100 kpc, and $M_{\star,\ [2,4]R_{50}}$ to denote the stellar mass between $2 \times R_{50}$ and $4 \times R_{50}$. It is not obvious a priori which combination of radial boundaries will provide the best \exsitu{} \mstar{} proxy, and so we will explore a range of different definitions of the outer envelope. Many of the massive galaxies in \citet{Huang2020} are the central galaxies (or brightest cluster galaxies, BCGs) of galaxy clusters. Their ``outer envelope'' is also sometimes called the ICL. We avoid this terminology because: 1) the photometric definition of ICL is often ambiguous and arbitrary (e.g., \citealt{Kluge2021}), and 2) not all massive galaxies in our sample live in clusters (i.e., halos with \mvir{}$\geq 10^{14} M_{\odot}$). Therefore we prefer to use the more general term -- outer envelope -- to describe the outer structure of all massive galaxies. To estimate the outer envelope \mstar{} from the 1-D surface brightness profile, we assume a fixed \mlratio{} value and isophotal shape that represents the inner region.
These low-$z$ massive galaxies on average show shallow negative optical colour (hence \mlratio{}) and axis ratio gradients. In our case, the colour gradient means we could slightly over-estimate the outer envelope stellar mass while the axis ratio gradient could lead to an under-estimation. We ignore these minor systematics here and will look into more accurate outer envelope \mstar{} measurement in future work. We have performed \topn{} tests using the luminosity of the outskirts and this does not impact any of our main conclusions. \subsubsection{\asap{} model} \label{sec:masap} In \citet{Huang2020}, we presented a phenomenological model (\asap{}) that connects a linear combination of \maper{10} and \maper{100} to the \mvir{} of the host halo of massive galaxies (\maper{100}{}$\geq 10^{11.5} M_{\odot}$). We constrained this model using the SMFs for \maper{10} and \maper{100} along with the \dsigma{} profiles of galaxies in twelve 2-D bins over the \maper{100}-\maper{10} plane\footnote{Note that \citet{Huang2020} used the notation \mmax{} to refer to \maper{100}.}. In \citet{Ardila2021}, we provided an updated \asap{} recipe to predict the \mvir{} of a massive galaxy based on its \maper{100} and \maper{10}: \begin{equation} \begin{aligned} \log M_{\mathrm{vir}} &=3.26 \times\left(\log M_{\star}^{100}-11.72\right) \\ &-2.46 \times\left(\log M_{\star}^{10}-11.34\right) \\ &+13.69. \end{aligned} \label{eq:asap} \end{equation} In \citealt{Huang2020}, we showed that the \asap{} model scaling relation summarized by Eq.~\ref{eq:asap} can predict the {\em average} \mvir{} of massive halos better than \maper{100}. Throughout this paper, we will use the label \masap{} to refer to the left-hand side of Eq.~\ref{eq:asap}, and we will investigate the scatter exhibited by individual galaxies for this \asap{} prediction. \input{table1} \subsubsection{Richness} \label{sec:proxy_richness} We compare these \mstar{}-based proxies to the cluster richness by two popular red-sequence cluster finders: \redm{} and \camira{} (introduced in \S\ \ref{sec:cluster_redmapper} and \S\ \ref{sec:cluster_camira}). Calibrations of the \mvir{}--richness relations suggest that the richness of red-sequence galaxies is a very promising \mvir{} proxy (e.g., \citealt{Melchior2017, Murata2018, McClintock2019}). Theoretically speaking, richness measurements should outperform \mstar{}-based \mvir{} proxies for massive halos if the majority of their satellites have not merged onto the central galaxies. The $\log$-linear slopes of \mvir{}--richness relations are typically $> 1.0$ (e.g., \citealt{Saro2015, Mantz2016, Farahi2016, Simet2017, Baxter2018, Melchior2017, McClintock2019}), while the slopes of \mvir{}--\mstar{} relations are usually around 0.3-0.5 at $z<0.5$ (e.g., \citealt{RodriguezPuebla2017, Tinker2017, Moster2018, Kravtsov2018, Huang2020}). Recent calibrations of \mvir{}--richness relations also suggest modest intrinsic scatter values at the high-\mvir{} end (at $\sim 25$\% or 0.1-0.2 dex level; e.g., \citealt{Rykoff2014, Saro2015, Simet2017}). Both of these observations support the idea that richness should be a superior \mvir{} proxy to the \mstar{} of the central galaxy. Therefore a \topn{} comparison between \mstar{}- and richness-based proxies can help confirm this expectation, or reveal new insights. For \redm{}, we denote its richness as $\lambda_{\rm redMaPPer}$. For \camira{}, we use $N_{\rm CAMIRA}$ to represent its richness measurement. 
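In practice, each of the proxies above reduces to a simple function of catalogue quantities. As one concrete example, the \asap{} prediction of Equation \ref{eq:asap} can be evaluated directly from the two aperture \mstar{} measurements; a minimal sketch:
\begin{lstlisting}[language=Python]
import numpy as np

def log_mvir_asap(log_m100, log_m10):
    """ASAP prediction of log10(Mvir) from the 100 kpc and 10 kpc aperture
    stellar masses (both in log10(Msun); the ASAP scaling relation above)."""
    log_m100, log_m10 = np.asarray(log_m100), np.asarray(log_m10)
    return 3.26 * (log_m100 - 11.72) - 2.46 * (log_m10 - 11.34) + 13.69

# Example usage: log_mvir_asap(11.8, 11.4) ~ 13.8
\end{lstlisting}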
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{figure/fig_4} \caption{ Ratio of \dsigma{} profiles for all galaxies (central \& satellite; $\Delta\Sigma_{\rm All}$) compared to central galaxies only ($\Delta\Sigma_{\rm Cen}$). Grey shaded regions show results from our fiducial HSC mock catalogue. Data points show results from HSC data where satellites are removed using a simple cylinder-based technique. Solid symbols correspond to galaxies selected by \maper{100}{}. Open symbols correspond to galaxies selected by \menve{50}{100} (data points are slightly offset along the X-axis). Satellites have almost no impact in the first two bins (corresponding to \maper{100}$> 10^{11.8} M_{\odot}$ and \menve{50}{100}$> 10^{11.1} M_{\odot}$). The HSC mock suggests that satellites have a maximum impact of $\sim 15$\% ($\sim 20$\%) in Bin 3 (4) at $R\sim 2$--4 Mpc. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig4.ipynb}{\faGithub}. } \label{fig:satellite} \end{figure} \subsection{Number Density Bins} \label{sec:binning} To perform the \topn{} tests using the previously mentioned proxies, we design four number density bins based on the richness of HSC \redm{} clusters ($\lambda_{\rm redMaPPer}$). These four bins correspond to the $\lambda_{\rm redMaPPer}$ ranges of $[35, 100], [20, 35), [10, 20), [6, 10)$ and contain 50, 197, 662, \& 1165 objects, respectively. We refer to these bins as Bin 1 (richest clusters or highest average \mvir{}) through Bin 4 (least rich clusters or lowest average \mvir{}). The total number (2074) of objects is slightly smaller than the number of \logmaper{100}$\geq 11.6$ galaxies (2247), which defines a \mstar{}-complete sample. The area of the \texttt{S16A} data means that we do not have enough massive clusters to sample the high $\lambda_{\rm redMaPPer}$ range. Hence, Bin 1 covers a fairly wide richness range. Figure \ref{fig:density_bins} illustrates how these bins in $\lambda_{\rm redMaPPer}$ (middle panel) correspond to bins in number density and \mvir{} using the HMF from \mdpl2{} (left panel). Assuming an ideal tracer with zero scatter, Bins 1, 2, \& 3 are well above the conventional threshold for a ``galaxy cluster'' (\logmvir{}$\geq 14.0$) while the mean \mvir{} of Bin 4 is on the boundary between a cluster and a ``massive group''. In reality, the \mvir{} distributions will shift to lower values due to the scatter of the \mvir{}--observable relation. The $N_{\rm Mem} \geq 10$ threshold for the \camira{} clusters means that it does not have enough objects for Bin 4; therefore we only consider the first three bins. Similarly, for the SDSS \redm{} catalogue, we only include Bins 1 \& 2, and we note that the richness range for Bin 4 is challenging even for deep HSC images. We must therefore take the results for \redm{} in Bin 4 with some caution. We were unable to measure \mstar{} for $\sim\,9$\% of galaxies due to excessive blending, which prevented the extraction of a 1-D profile. This reduces the effective area and volume of this sample. We do not correct for this when selecting the \topn{} galaxies. The effect of this can only reduce the \dsigma{} amplitude, but we verify that it does not affect our results. We summarize the key properties of these four bins in Table \ref{tab:summary}. \section{Results} \label{sec:result} In this section we present our main results.
We begin in \S\ \ref{sec:satellite} with a qualitative evaluation of how satellite contamination impacts measurements of \dsigma{} for \mstar{}-selected samples. We then summarize the key findings from the \topn{} tests in \S\ \ref{sec:topn_results}. First, we show how \scatterMhaloObsSym{} scales with number density for samples selected according to different choices of aperture mass (\S\ \ref{sec:m_aper}) and outskirt mass (\S\ \ref{sec:m100_outskirt}). We also compare the \dsigma{} profiles of samples selected by \maper{100} relative to samples selected by \menve{50}{100} (\S\ \ref{sec:m100_outskirt}) and \mcmodel{} (\S\ \ref{sec:m100_cmodel}). We summarize results related to \masap{} in \S\ \ref{sec:asap_result}. In \S\ \ref{sec:richness_results}, we examine the \topn{} results of richness-based cluster finders. We then show the behavior of \scatterMhaloObsSym{} for a series of different \mvir{} proxies (\S\ \ref{sec:trend}). Finally, in \S\ \ref{sec:mstar_vs_richness}, we compare the {\em shape} of the \dsigma{} profiles of \mstar{}- and richness-selected massive halos, and we assess the level of consistency of these profiles with theoretical expectations based on simulated halos that have been selected based on true halo mass. The results shown in \S\ \ref{sec:result} focus on the most interesting cases, but the \topn{} results for all proxies are made publicly available here: \href{https://github.com/dr-guangtou/jianbing/tree/master/data/results}{\faGithub}. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figure/fig_5} \caption{ Scatter in \mvir{} at fixed observable versus number density bins. Aperture masses (the mass within the indicated radius) are shown in the top panel and outskirt \mstar{} is shown in the bottom panel. The Y-axis shows the best-fit values of \scatterMhaloObsSym{} (data points) with uncertainties (shaded regions). The X-axis labels on the top indicate the HSC \texttt{S16A} \redm{} richness thresholds corresponding to the four number density bins. The \maper{100} trend (green dashed line) is used as a reference. We also slightly shift the symbols along the X-axis for visibility. This figure shows that the outer mass is an excellent \mvir{} proxy. In contrast, inner stellar mass is a very poor tracer of present-day \mvir{}. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig5.ipynb}{\faGithub}. } \label{fig:scatter_trend} \end{figure*} \subsection{Impact of Satellite Contamination on \texorpdfstring{\dsigma{}}{DSigma}} \label{sec:satellite} Although satellite galaxies only make up a small fraction of \logmaper{100}$>$11.5 galaxies (e.g., \citealt{Reid2014, Saito2016, vanUitert2016, Huang2020}), they could affect our evaluation of the performance of \mstar{}-based \mvir{} proxies. The \dsigma{} profiles of satellite galaxies show a characteristic ``bump''-like feature at $R \sim 1$ Mpc (e.g., \citealt{LiShan2014, LiShan2016, Sifon2015, Sifon2018}), which corresponds to the offset profile of the main parent halo. Here, we evaluate the impact of satellites on \mstar{}-based proxies and \sigmh{} estimates by comparing the \dsigma{} profiles of a pure central sample to that of a central$+$satellites sample in the same \topn{} bin. We use two methods: one based on a realistic mock catalogue with central/satellite assignments, and one based on satellite identification in real HSC data.
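The second method amounts to a simple cylinder cut around each massive galaxy, with the cylinder dimensions given in the following paragraphs. A minimal sketch is shown below; the flat-sky approximation, the column names, the use of half the quoted line-of-sight length on either side of the host, and the choice to skip already-flagged satellites as hosts are our assumptions.
\begin{lstlisting}[language=Python]
import numpy as np

def flag_satellites(log_mstar, ra_deg, dec_deg, d_ang_mpc, d_com_mpc,
                    r_proj_mpc=1.0, los_length_mpc=30.0):
    """Recursively flag satellites around more massive galaxies.

    A galaxy is labelled a satellite if it lies within a projected physical
    radius r_proj_mpc and within half of los_length_mpc (comoving) along the
    line of sight of a more massive galaxy that is itself not a satellite.

    d_ang_mpc : angular diameter distance of each galaxy (Mpc)
    d_com_mpc : comoving line-of-sight distance of each galaxy (Mpc)
    """
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    d_ang, d_com = np.asarray(d_ang_mpc), np.asarray(d_com_mpc)
    is_sat = np.zeros(len(log_mstar), dtype=bool)
    for i in np.argsort(log_mstar)[::-1]:        # most massive galaxy first
        if is_sat[i]:
            continue                             # satellites are not hosts
        # Small-angle, flat-sky projected separation in physical Mpc.
        dtheta = np.hypot((ra - ra[i]) * np.cos(dec[i]), dec - dec[i])
        in_cyl = ((dtheta * d_ang[i] < r_proj_mpc) &
                  (np.abs(d_com - d_com[i]) < 0.5 * los_length_mpc))
        in_cyl[i] = False                        # never flag the host itself
        is_sat |= in_cyl
    return is_sat
\end{lstlisting}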
We use a mock catalogue (see Appendix \ref{app:hsc_model}; DiMartino et al., in prep.) that was specifically designed to have realistic values for both \scatterMhaloObsSym{} and the satellite fractions of massive galaxies. This mock was constrained using the SMF and the two-point correlation functions (2PCF) of HSC massive galaxies. Using this mock, we select the pure central and the central$+$satellites samples based on the model \mstar{} and the four number density bins defined in \S\ \ref{sec:binning}. In the four \topn{} bins, the HSC mock yields satellite fractions of [5.0\%, 6.9\%, 8.9\%, \& 10.0\%], which are consistent with the expectation of low satellite fractions among massive galaxies. Next, we attempt to classify massive satellite galaxies in our HSC sample directly by recursively identifying less massive galaxies around more massive ones. We start with the galaxy with the largest \mstar{} value and label as satellites all other galaxies within a cylindrical region with a 1.0 physical Mpc radius in projection and a 30 comoving Mpc length along the line-of-sight (LOS) direction. We then turn to the next most massive galaxy and repeat this exercise. Using this simple strategy, the observed satellite fractions are [0.0\%, 3.6\%, 4.7\%, \& 8.8\%] in the four \topn{} bins. These observed fractions are slightly lower than those from the mock catalogue, but the differences are not large enough to affect any of our results. Small variations of the radius and length of the cylinder do not change any results. Figure \ref{fig:satellite} shows the ratio of \dsigma{} profiles of the central$+$satellite (\dsigma{}$_{\rm All}$) and the pure central (\dsigma{}$_{\rm Cen}$) samples using both the mock catalogue and the HSC data. For HSC data, we test both the \maper{100} and the \menve{50}{100} samples. In Bins 1 \& 2, the satellite fractions are low enough that there is no discernible impact on \dsigma{}. In Bins 3 \& 4, massive satellites lead to a small enhancement in the \dsigma{} profile at $R > 500$ kpc. In Bin 3 (4), the mock catalogue predicts a maximum $\sim 10$\% ($\sim 20$\%) enhancement at $R \sim 2$-3 Mpc. Both \mstar{}-based proxies demonstrate behaviors that are statistically consistent with the mock catalogue despite our naive central/satellite classification scheme. The details of the fiducial mock catalogue do not affect this conclusion. Even the simple $\alpha=1$ model with varying scatter values used to estimate \scatterMhaloObsSym{} leads to the same results\footnote{See the additional figure in this \texttt{Jupyter} notebook: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig4.ipynb}{\faGithub}.}. In the rest of this paper, we include massive satellite galaxies both in our HSC samples and in the models that we draw from the mock catalogues. Due to the low satellite fraction, the inclusion of satellite galaxies does not affect any of our conclusions\footnote{The removal of massive satellite candidates usually only leads to a 0.01--0.02 dex variation in \scatterMhaloObsSym{} values.}. The impact of satellites is further discussed in Appendix \ref{app:sat_cen}. \subsection{Amplitude of \texorpdfstring{\dsigma{}}{DSigma} and Inferred \texorpdfstring{\sigmvir{}}{SigMvir} Values} \label{sec:topn_results} We now present the \topn{} results with a focus on the overall amplitude of \dsigma{} for the \mvir{} proxies introduced in \S\ \ref{sec:proxies}.
We first describe the three proxies which yield the most interesting results: aperture \mstar{}, outskirt \mstar{}, and \mcmodel{}. Then we compare all the \mvir{} proxies and present the inferred \sigmvir{} values.
\subsubsection{Aperture Stellar Masses}
\label{sec:m_aper}
We start by exploring the performance of aperture \mstar{}. The upper panel of Figure \ref{fig:scatter_trend} shows the \sigmvir{} for various apertures. It is clear that the \sigmvir{} decreases with increasing aperture size. This decrease is particularly obvious in Bins 1 \& 2 (the most massive halos). In Bin 1 (2), \sigmvir{} decreases from 0.70 (0.72) dex for \maper{10}, to 0.52 (0.57) dex for \maper{30}, to 0.38 (0.51) dex for \maper{100}. Figure \ref{fig:scatter_trend} shows that the aperture size used to estimate \mstar{} has a significant impact on how well different samples trace \mvir{}. More importantly, Figure \ref{fig:scatter_trend} shows that \textbf{the inner regions of massive galaxies (10 to 30 kpc) are a very poor tracer of present-day halo mass}. This is consistent with the SHMR constraints in \citet{GoldenMarx2019}, where the authors focused more on the slope of the SHMR using different aperture \mstar{} (see their Figure 2). Using a sample of massive BCGs at $0.0 < z < 0.3$ in \mvir{}$>10^{14.0} M_{\odot}$ clusters, the authors find that the slope of the SHMR increases from $\alpha \sim 0.1$ for \maper{10} to $\alpha \sim 0.4$ for \maper{100}. Assuming a constant \sigms{}$\sim 0.2$ dex value for the SHMR, such a variation in slope corresponds to \sigmvir{}$\sim 0.6$ dex for \maper{10} and $\sim 0.4$ dex for \maper{100}, broadly consistent with the \topn{} results in Bin 1. Further enlarging the aperture size to 150 kpc does not result in much improvement in \sigmvir{} in any of the bins (see Table \ref{tab:summary}). It is unclear whether the lack of improvement with apertures larger than 100 kpc reflects the intrinsic limitation of large aperture \mstar{} as a \mvir{} proxy or the statistical uncertainty of the current imaging data in the low surface brightness regime. In the remainder of this paper, we use \maper{100} as the benchmark against which we will compare other \mvir{} proxies. In Appendix \ref{app:size}, we also explore definitions of aperture \mstar{} based on $R_{50}$, but do not find any that performs better.
\subsubsection{Outer Envelope Mass}
\label{sec:m100_outskirt}
Figure \ref{fig:scatter_trend} suggests that removing the inner portion of the galaxy and using only the outskirts could yield an improved \mvir{} proxy. From hydro-simulations or semi-empirical modelling of massive galaxy formation, we know that the accreted stellar component (or the \exsitu{} stars) dominates the \mstar{} budget, especially in the outskirts (\eg{} \citealt{RodriguezGomez2016}). We will discuss this further in \S\ \ref{sec:outskirt_discussion}, but if the \exsitu{} stars have a tighter relation with \mvir{}, we should expect outskirt \mstar{} to be a better \mvir{} proxy. We test this hypothesis here. Since there is no a priori preference for an optimal definition of what constitutes the ``outskirts'' of a massive galaxy, we empirically study a number of different values. The bottom panel of Figure \ref{fig:scatter_trend} shows one of the main findings of this paper. Namely, \textbf{the \mstar{} in the outskirts of massive galaxies is an excellent proxy of halo mass, and largely outperforms any form of aperture mass, especially relative to masses defined by the inner regions of the galaxy}.
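Before comparing specific definitions, we make the mass measurements concrete. The minimal sketch below shows one way to derive an aperture mass and an outskirt (envelope) mass from a 1-D surface mass density profile of the kind used in this work (\S\ \ref{sec:1d_prof}); it assumes azimuthal symmetry and simple trapezoidal integration, and the function and variable names are illustrative rather than the exact implementation behind our measurements.
\begin{verbatim}
import numpy as np

def aperture_mass(r_kpc, sigma_star, r_max):
    """Stellar mass within a circular aperture of radius r_max [kpc].

    r_kpc      : radii of the measured 1-D profile [kpc]
    sigma_star : surface mass density Sigma_star(R) [M_sun / kpc^2]
    Assumes azimuthal symmetry, M(<R) = 2 pi * int Sigma(r) r dr;
    light inside the innermost measured radius is neglected here.
    """
    r_grid = np.linspace(r_kpc.min(), r_max, 500)
    sig_grid = np.interp(r_grid, r_kpc, sigma_star)
    return np.trapz(2.0 * np.pi * r_grid * sig_grid, r_grid)

def envelope_mass(r_kpc, sigma_star, r_in, r_out):
    """Outskirt (envelope) mass between r_in and r_out [kpc]."""
    return (aperture_mass(r_kpc, sigma_star, r_out) -
            aperture_mass(r_kpc, sigma_star, r_in))

# For a tabulated profile (r, sigma):
#   aperture_mass(r, sigma, 100.0)        -> analogue of M_aper(100 kpc)
#   envelope_mass(r, sigma, 50.0, 100.0)  -> analogue of M_env[50, 100]
\end{verbatim}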
Figure \ref{fig:scatter_trend} shows that \menve{50}{100} is the best \mvir{} proxy among the outer masses that we tested, and that \menve{30}{100} also displays comparable performance. This figure also helps inform our understanding of the trade-off between retaining enough light to achieve a high \snratio{} measurement and removing the \insitu{} stars that lie preferentially within the inner region. For example, in Bins 2, 3, and 4, we can see that the performance of \menve{10}{100} is slightly worse than either \menve{30}{100} or \menve{50}{100}. This result, taken together with the results shown in the top panel of Figure \ref{fig:scatter_trend}, indicates that the stellar mass located within the inner 10-20 kpc does not correlate well with \mvir{}, and so should be excluded. Additionally, we can see from Figure \ref{fig:scatter_trend} that using outskirt masses that extend beyond 100 kpc (\eg{} \menve{75}{150}) does not improve the performance of the halo mass proxy. We have also explored outskirt \mstar{} defined using $R_{50}$, but failed to find one whose performance is as good as \menve{50}{100}. These alternate definitions are discussed in Appendix \ref{app:size}. We now compare the \dsigma{} profiles of the \menve{50}{100} \topn{} samples to that of \maper{100} in Figure \ref{fig:m100_mout}. In general, the overall amplitudes confirm that \menve{50}{100} performs better than \maper{100} as a \mvir{} proxy. With the exception of Bin 1, the \dsigma{} profiles of the \menve{50}{100} samples show statistically higher amplitudes at $R < 2$ Mpc than the \maper{100} ones. The difference is more pronounced for lower mass bins. The average \dsigma{}$_{100\ \rm kpc}/$\dsigma{}$_{[50,100]}$ ratios are $[0.90\pm0.17, 0.85\pm0.13, 0.82\pm0.09, 0.80\pm0.09]$ for Bins 1-4. As a result, \menve{50}{100} also has lower \sigmvir{} values than \maper{100}: \sigmvir{}$=[0.36, 0.43, 0.44, 0.48]$ for \menve{50}{100} and \sigmvir{}$=[0.38, 0.51, 0.56, 0.60]$ for \maper{100} in Bins 1-4. The right column of Figure \ref{fig:m100_mout} shows the \mvir{} distributions of these two samples. The \menve{50}{100} samples also have higher mean \mvir{} values (\logmvir{}$=[14.39, 14.04, 13.76, 13.52]$) than the \maper{100} samples (\logmvir{}$=[14.36, 13.86, 13.54, 13.32]$). These conclusions also qualitatively apply to \menve{50}{150}, and do not change if we replace \maper{100} with \maper{150} or \mmax{}. We emphasise that both \maper{100} and \menve{50}{100} based selections yield \dsigma{} profiles that are statistically consistent with predictions based on our simple ``pure scatter'' model (\S\ \ref{sec:estimate_scatter}). We can qualitatively confirm this conclusion using the left column of Figure \ref{fig:m100_mout}, as there are no systematic deviations from the predicted \dsigma{} profiles (grey shaded regions). As for goodness-of-fit statistics, the \chisq{} values of the \maper{100} samples are $[6.47, 9.56, 14.92, 9.60]$\footnote{Each \dsigma{} profile has 11 data points and the scatter of \logmvir{} is the only ``free parameter'' in our model. Therefore one can roughly estimate a reduced \chisq{} value using a degree-of-freedom of $\nu=10$. However, given the small sample size in our \topn{} bins and the simple resampling method used to estimate the covariance matrix, we do not recommend taking the reduced \chisq{} values too literally.
Relative comparison is a more meaningful way to use these \chisq{} values.} and the values for the \menve{50}{100} samples are $[6.15, 11.23, 11.94, 5.99]$. This conclusion is not only true for \menve{50}{100} and \maper{100}, but also valid for the majority of \mstar{}-based \mvir{} proxies with similar \sigmvir{} values. Removing candidates of massive satellites using the method described in \S\ \ref{sec:satellite} can lead to marginal improvements in the \chisq{} values, but does not change any of our conclusions. It is important to remember that a $\log$-normal \mvir{}-observable relation with \emph{just} Gaussian scatter can already describe the \dsigma{} profiles of some promising \mstar{}-based \mvir{} proxies at the current \snratio{}. We will come back to this point when comparing with the \topn{} results of richness-based clusters (\S\ \ref{sec:mstar_vs_richness}), and we discuss the origin and implications of this result in \S\ \ref{sec:outskirt_discussion}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure/fig_6}
\caption{
Comparison of the \topn{} results for \maper{100} and \menve{50}{100}. Rows correspond to number density bins (see \S\ \ref{sec:binning}). \textbf{Left} column: \rdsigma{} profiles of \menve{50}{100}-selected (circles) and \maper{100}-selected (hexagons) samples. Grey shaded regions show the best-fit profiles and their associated uncertainties. The overall amplitudes of the \dsigma{} profiles are similar in Bin 1, but the \menve{50}{100} samples have consistently higher lensing amplitudes than the \maper{100} samples in the other three bins. \textbf{Middle} column: ratio of \dsigma{} profiles. Samples selected by \menve{50}{100} in Bins 2-4 show $\sim 20$--30\% higher \dsigma{} amplitudes at $R<2$ Mpc than the \maper{100} samples. \textbf{Right} column: inferred \mvir{} distributions for the two samples using the model described in \S\ \ref{sec:estimate_scatter}. Grey histograms indicate the \mvir{} distributions of an ideal tracer with \scatterMhaloObsSym{}$=0$. Vertical lines indicate the average \mvir{}. For Bins 2-4, \menve{50}{100} yields an average halo mass that is $\sim 0.2$ dex higher than for \maper{100}. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig6.ipynb}{\faGithub}.
}
\label{fig:m100_mout}
\end{figure*}
\subsubsection{The Case of \mcmodel{}}
\label{sec:m100_cmodel}
Since \cmodel{} is still widely used for galaxy luminosities and masses, it is of great interest to evaluate its performance as a \mvir{} proxy. Figure \ref{fig:m100_cmod} compares the \topn{} results of the benchmark aperture stellar mass \maper{100} to that of \mcmodel{}. The \dsigma{} profiles of \mcmodel{}-selected samples have significantly lower lensing amplitudes (on average by 20-50\%) in all four bins over the entire radial range (left and middle panels of Figure \ref{fig:m100_cmod}). The best-fit \sigmvir{} values for the \mcmodel{} samples are $[0.61, 0.71, 0.87, 0.85]$ and are much worse than those of the \maper{100} samples. Such large \sigmvir{} values also mean significantly lower \mvir{} values. As shown in the right panels of Figure \ref{fig:m100_cmod}, the mean \mvir{} values of the \mcmodel{} \topn{} samples are $[0.52, 0.17, 0.79, 0.53]$ dex lower than those of the \maper{100} ones. All these results make it obvious that \cmodel{} is not a good \mvir{} proxy. The \chisq{} values of the \mcmodel{}-based \dsigma{} profiles are $[7.88, 8.76, 15.27, 15.87]$.
These values are statistically similar to those of the \maper{100} samples in Bins 1-3, indicating that they are still broadly consistent with the ``pure scatter'' model. Note that the \sigmvir{} values for Bins 2-4 are so large that they include halos with \logmvir{}$\leq 12.0$, which are unlikely to host real massive galaxies with \logmcmodel{}$\geq 11.0$. Given this, we suggest not taking the \sigmvir{} values for \mcmodel{} too literally. Instead, the main point here is that \mstar{} based on \cmodel{} photometry is not a promising \mvir{} proxy and that the \sigmvir{} values associated with \mcmodel{} are much larger than those of the other proxies.
\subsubsection{The Case of \masap{}}
\label{sec:asap_result}
As mentioned in \S\ \ref{sec:masap}, in \citet{Huang2020} we proposed the \asap{} model, which uses a linear combination of \maper{10} and \maper{100} (or \mmax{}) to improve the prediction of \mvir{} for massive galaxies. Here we briefly summarize the results. In Figure \ref{fig:scatter_trend_2}, we compare the \sigmvir{} trends with number density for a few important \mstar{}-based \mvir{} proxies, including \masap{}. As expected, \masap{} shows improvement in \sigmvir{} values when compared to large aperture \mstar{} such as \maper{100}. Meanwhile, in Bins 2-4, outskirt \mstar{} such as \menve{50}{100} still displays a small advantage in \sigmvir{} over \masap{}, especially in Bin 4. We notice that the \masap{} values of \sigmvir{}$=[0.38\pm0.03, 0.44\pm0.02, 0.48\pm0.02, 0.56\pm0.02]$ are very similar to those of \menve{10}{100}: \sigmvir{}$=[0.36\pm0.03, 0.47\pm0.02, 0.50\pm0.02, 0.56\pm0.02]$. This shows that the \asap{} model, which uses a linear combination of \maper{10} and \maper{100}, offers minimal improvement over simply using the difference between those masses (i.e., \menve{10}{100}). This, along with the fact that the preferred definition of the outskirts is between 50 and 100 kpc, strongly suggests that it is very difficult to gain additional information about \mvir{} using the inner regions ($R < 30$ kpc) of massive galaxies. We should note that the \dsigma{} profiles of the \masap{} samples are also well described by the simple ``pure scatter'' model.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure/fig_7}
\caption{\topn{} results for \maper{100} and \mcmodel{}. The format of this figure is the same as Figure \ref{fig:m100_mout}. \textbf{Left} column: the \rdsigma{} profiles for the \maper{100}- (circles) and \mcmodel{}-selected (crosses) samples. The lensing amplitude for \maper{100} is significantly higher than for \mcmodel{} in all four bins. Even without fitting a model, it is obvious that \maper{100} is a much better tracer of \mvir{} than \mcmodel{}. \textbf{Middle} column: the ratio of \dsigma{} profiles. Samples selected by \mcmodel{} have lensing amplitudes $\sim$30-50\% lower than the \maper{100}-selected samples. \textbf{Right} column: the inferred \mvir{} distributions for the two samples using the model described in \S\ \ref{sec:estimate_scatter}. Grey shaded regions indicate the \mvir{} distributions of an ideal tracer with \sigmh{}$=0$. Vertical lines indicate the mean \mvir{}. The differences in the mean \mvir{} between the \mcmodel{}- and \maper{100}-based selections range from 0.2-0.4 dex in Bins 1 \& 2 to 0.6-0.8 dex in Bins 3 \& 4. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig7.ipynb}{\faGithub}.
}
\label{fig:m100_cmod}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{figure/fig_8}
\caption{Number density--\sigmvir{} relation for six key \mvir{} proxies (similar format to Figure \ref{fig:scatter_trend}). For \camira{}, we only show \sigmvir{} in the first three bins since the cluster sample size is not large enough to include Bin 4. \menve{50}{100} shows performance comparable to the richness-based proxies, with lower \sigmvir{} values at the low-\mvir{} end. While the formal values of \sigmvir{} are better for the richness-based estimators in Bin 1, the quality of the fits for the richness-based estimators is not as good as for \menve{50}{100}. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig8.ipynb}{\faGithub}.
}
\label{fig:scatter_trend_2}
\end{figure*}
\subsubsection{Comparison with Richness-based Proxies}
\label{sec:richness_results}
We now compare the \mstar{}-based proxies with richness-based ones. Figure \ref{fig:scatter_trend_2} compares the number density--\sigmvir{} trends for a representative set of \mvir{} proxies in this work, including \mcmodel{} (default survey photometry), \maper{100} (large aperture \mstar{}), \menve{50}{100} (the best outer envelope mass), \masap{} (a combination of the inner and large aperture mass), and $\lambda_{\rm redM,\ HSC}$ and $\lambda_{\rm CAMIRA,\ HSC}$ (richness of red-sequence galaxies). We summarize the \sigmvir{} values, along with the precise cuts that define the bins for each \mvir{} proxy, in Table \ref{tab:summary}. We remind readers that the \mstar{}-based samples and the two richness-based cluster catalogues are independently selected from the HSC \texttt{S16A} dataset. While there is considerable overlap, not all the \mstar{}-selected massive galaxies in the \topn{} samples belong to identified clusters, and not all cluster centrals are included in the parent sample of massive galaxies. We will briefly discuss this in \S\ \ref{sec:perfect_finder}. For \mstar{}-based proxies, we \emph{do not exclude} massive satellite galaxies from either the observations or the mock catalogues. For richness-selected clusters, we use a central-only mock catalogue to calculate their \dsigma{} profiles and estimate their \sigmvir{} values. This assumes the cluster finders identify the correct central galaxies, which is not always the case, but as we showed in \S\ \ref{sec:satellite}, satellite contamination is not likely to affect any of the key results shown here. Judged solely by the \sigmvir{} values, the richness of red-sequence galaxies is an excellent \mvir{} proxy for massive halos. Both richness-based cluster finders show lower \sigmvir{} values in Bins 1 \& 2 than any of the \mstar{}-based \mvir{} proxies: \sigmvir{}$=[0.27, 0.38]$ for HSC \redm{} clusters (red diamonds in Figure \ref{fig:scatter_trend_2}), $[0.30, 0.36]$ dex for the \camira{} \texttt{S16A} catalogues (red open squares). These two bins have \redm{} $\lambda > 20$ and \camira{} $N_{\rm mem} > 21$, corresponding to \mvir{}$\geq 2\times 10^{14} M_{\odot}$. For typical galaxy clusters in this \mvir{} range, such low values of \sigmvir{} computed with our simple model qualitatively agree with previous calibrations (e.g., \citealt{Murata2018, Murata2019}). The \sigmvir{} increases slightly for richness-based proxies towards the lower-\mvir{} end.
The \sigmvir{} values for HSC \redm{} clusters are $[0.44, 0.62]$ dex in Bins 3 \& 4, and 0.52 dex in Bin 3 for \camira{} clusters. This trend with richness (or number density) is also qualitatively consistent with the results from \citet{Murata2018, Murata2019}. Taken at face value, the performance of the two richness-based proxies becomes comparable to that of the outskirt \mstar{} in Bins 3 \& 4. Note that the $\lambda_{\rm HSC}$ range in Bin 4 ($6 \leq \lambda_{\rm HSC} < 10$) is very low and is challenging for any richness-based cluster finder. More importantly, we underscore the fact that the shapes of the \dsigma{} profiles of richness-selected clusters show systematic deviations from the pure scatter model. This means that the inferred \sigmvir{} values for richness-based selections could be underestimated. We will return to this question in detail in \S\ \ref{sec:mstar_vs_richness}.
\subsubsection{Summary of \sigmvir{} Trends for Different \mvir{} Proxies}
\label{sec:trend}
We briefly summarize the number density--\sigmvir{} trends for different \mvir{} proxies. Note that not all \dsigma{} profiles are equally well described by the ``pure scatter'' model, but we focus only on the best-fit \sigmvir{} values here.
\begin{itemize}
\item The outer mass of massive galaxies is a promising \mvir{} proxy. The \menve{50}{100} mass outperforms large aperture \mstar{} such as \maper{100}.
\item Outer stellar mass is competitive with richness as a \mvir{} proxy and may outperform richness in the low-\mvir{} regime (e.g., $\lambda \leq 20$ or $N_{\rm Mem} \leq 20$).
\item Galaxy inner mass ($r< 10$-30 kpc) is a poor tracer of present-day halo mass. For this reason, only outskirt \mstar{} that excludes the inner 30 kpc demonstrates a clear improvement over large aperture \mstar{} (see Figure \ref{fig:scatter_trend}).
\item Empirical models such as \asap{} that attempt to take advantage of more than one aperture \mstar{} do show smaller \sigmvir{} values than a single large aperture \mstar{}. However, the decision to use \maper{10} (which we now know only adds noise) in \citet{Huang2020} limits the level of improvement.
\item Stellar masses derived from default survey photometry pipelines are likely to yield poor \mvir{} proxies, as \mcmodel{} has the worst overall performance. This not only applies to \cmodel{}, but could also be true for small-aperture photometry, \texttt{SourceExtractor} \texttt{MAG\_AUTO}, or even single-\ser{} 2-D models.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{figure/fig_9}
\caption{
\topn{} comparisons between the richness-based optical clusters (\redm{} and \camira{}) and the massive galaxies selected using outer envelope stellar mass (\menve{50}{100}). The layout is very similar to Figure \ref{fig:m100_cmod} and Figure \ref{fig:m100_mout}. \textbf{Left} column compares the \rdsigma{} profiles of the HSC \redm{} clusters (open diamonds) and the \menve{50}{100}-selected samples (solid circles). The grey shaded region shows the best-fit profile of the \menve{50}{100} samples. In Bins 1-3, while the overall lensing amplitudes are similar, there are interesting scale-dependent differences that become clearer in the ratio of the \dsigma{} profiles (\textbf{middle} column): the lensing amplitudes of the \redm{} clusters are systematically higher than those of the \menve{50}{100} samples at $\sim 1$-3 Mpc, by $\sim$20--40\%.
Meanwhile, the amplitudes of the \redm{} \dsigma{} profiles are slightly lower than or similar to the \menve{50}{100} ones in the central ($R < 0.5$ Mpc) and outer ($R>6$-8 Mpc) regions. We also show the ratio of lensing profiles using the HSC \camira{} cluster samples (squares filled with grey colour) to highlight the similar behaviour of these two richness-based cluster finders. In Bin 4, the \redm{} sample displays $\sim 20$-50\% lower lensing amplitudes than the corresponding \menve{50}{100} sample. In the \textbf{right} column, we visualize the trend of the average \mvir{} in each bin: while the \redm{} sample shows a $\sim 0.2$ dex higher average \mvir{} value in Bin 1, the differences become smaller in Bins 2 \& 3. In Bin 4, the \menve{50}{100}-selected sample instead shows a $\sim 0.2$ dex higher average \mvir{} value than the \redm{} one. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig9.ipynb}{\faGithub}.
}
\label{fig:mout_richness}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure/fig_10}
\caption{
Comparison between the shapes of the \rdsigma{} profiles and the best-fit ``pure-scatter'' models for the richness-based cluster finders. \textbf{Left two} columns: the observed \rdsigma{} profiles (symbols) and their best-fit models (grey shaded regions) for \redm{} (left; filled diamonds) and \camira{} (middle; open squares) clusters in Bins 1-3. We ignore Bin 4 because the cuts applied to the \camira{} catalogue preclude using this bin. \textbf{The third} column from the left: same as the left two columns, but for \menve{50}{100} (filled hexagons) as a reference \mstar{}-based \mvir{} proxy. \textbf{Right column}: ratio of the observed \dsigma{} profiles to their best-fit models. We also show the ratio for \menve{50}{100} (grey circles) as a reference. The simple ``pure-scatter'' model is not a good fit, and scale-dependent ``residuals'' are clearly visible for the richness-based \mvir{} proxies when compared to \menve{50}{100}. While the exact values of the ratios are different, the two richness-based cluster finders display qualitatively similar behavior: the observed \dsigma{} profiles are lower than the best-fit models at $R<1$ Mpc by $\sim 30$\% but show higher amplitudes at $1-3$ Mpc. This shape of \rdsigma{} may be a generic ``feature'' of richness-based cluster selections and could be due to mis-centering or projection effects. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig10.ipynb}{\faGithub}.
}
\label{fig:richness_residual}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=16cm]{figure/fig_11}
\caption{
Summary statistic that quantifies the overall shape of a \dsigma{} profile in each \topn{} bin. $S$ is the ratio of the \dsigma{} profile integrated over $0.8 < R \leq 3.0$ Mpc to that integrated over $0.2 < R \leq 0.8$ Mpc. Galaxies selected by \menve{50}{100} are shown with blue circles, \redm{} clusters with red hexagons, and \camira{} clusters with open red pentagons. \textbf{Left} panel shows the $S$ statistics of the observed \dsigma{} profiles. The richness-based cluster finders yield higher $S_{\rm obs}$ values than \menve{50}{100} in Bins 1-3. \textbf{Right} panel shows the ratio of the $S$ values of the observed and best-fit ``pure scatter'' model \dsigma{} profiles ($S_{\rm obs}/S_{\rm mod}$).
Galaxies selected by \menve{50}{100} have \dsigma{} profiles that are well described by a ``pure scatter'' model, whereas richness-selected clusters have \dsigma{} profiles with a ``bump''-like feature relative to a ``pure scatter'' model. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig11.ipynb}{\faGithub}.
}
\label{fig:dsigma_summary}
\end{figure*}
\subsection{Information Contained in the Shape of \texorpdfstring{$\Delta\Sigma$}{DSigma}}
\label{sec:mstar_vs_richness}
In the previous section we focused on the overall amplitude of \dsigma{} and the inferred \sigmvir{} values. Now we consider the shape of the \dsigma{} profiles. We focus in particular on two questions:
\begin{enumerate}
\item Are there differences in the shape of the \dsigma{} profiles for samples selected by \mstar{}-based \mvir{} proxies and the clusters selected by richness?
\item Which type of \mvir{} proxy can yield \dsigma{} profiles whose shapes are consistent with a ``clean selection'' (we also use the term ``pure scatter'') of massive halos? ``Clean selection'' here means a selection based on a simple $\log$-linear \mvir{}-proxy relation with Gaussian scatter.
\end{enumerate}
Figure \ref{fig:mout_richness} compares the \dsigma{} profiles of HSC \redm{} clusters, \camira{} clusters (first three bins only), and \menve{50}{100}-selected massive halos. In Bins 1 to 3, both the \redm{} and \camira{} \dsigma{} profiles show similar systematic differences in shape compared to the \menve{50}{100} profiles. The most prominent difference is that, at $1 < R < 3$ Mpc, the richness-based \dsigma{} profiles demonstrate significantly enhanced ($\sim 30$-40\%) amplitudes. On the other hand, at $R < 1$ Mpc, the \dsigma{} profiles of the \redm{} and \camira{} samples are $\sim 20$-40\% lower than the outer envelope \mstar{} ones. At larger scales ($R > 5$ Mpc), we find that the richness-based and \menve{50}{100} \dsigma{} profiles become statistically similar, but we are also limited by the low \snratio{} of the current profiles. Other \mstar{}-based proxies with similar \sigmvir{} values (\eg{} \maper{100}, \masap{}, \menve{50}{150}) also lead to qualitatively the same conclusions. In Appendix \ref{app:sdss_redm}, we show that the SDSS \redm{} clusters in the HSC \texttt{S16A} footprint display similar systematics in their \dsigma{} profiles measured with HSC lensing data. In Appendix \ref{app:des_redm}, we show that the DES \redm{} clusters in the same redshift and richness bins have lensing profiles whose shapes are consistent with the HSC ones. The DES sample is not only based on a different imaging dataset, but its lensing profile is also measured from an independent shear catalogue with different strategies for shape measurement and lensing calibration. Both of these comparisons demonstrate the robustness of our result. The shapes of the \dsigma{} profiles robustly show that the \sigmvir{} values alone cannot fully explain the difference between \mstar{}- and richness-based \mvir{} proxies. Instead of the overall higher lensing amplitude expected from the lower \sigmvir{} values for \redm{} and \camira{} clusters, we see a ``bump''-like feature at $R \sim 1$-2 Mpc. In Bin 4, the \dsigma{} profile of \redm{} clusters shows a lower amplitude than that of the \menve{50}{100} sample, consistent with its higher \sigmvir{} value in this bin. The \dsigma{} profile in Bin 4 also does not show a clear 1 Mpc ``bump'' as in the other three bins.
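Figure \ref{fig:dsigma_summary} condenses these shape differences into the summary statistic $S$ defined in its caption. The minimal sketch below shows one way to compute $S$ from a tabulated \dsigma{} profile; the interpolation scheme, the integration measure ($\int \Delta\Sigma \, \mathrm{d}R$), and the function names are illustrative assumptions rather than the exact implementation in the released notebooks.
\begin{verbatim}
import numpy as np

def s_statistic(r_mpc, delta_sigma, r_in=0.2, r_split=0.8, r_out=3.0):
    """Shape statistic S: the DeltaSigma profile integrated over
    0.8 < R <= 3.0 Mpc divided by the integral over 0.2 < R <= 0.8 Mpc.

    r_mpc, delta_sigma : tabulated profile, with R in Mpc and R increasing.
    The profile is interpolated in log10(R) before trapezoidal integration;
    integrating with respect to R is an assumption of this sketch.
    """
    log_r = np.log10(r_mpc)

    def integral(lo, hi):
        grid = np.linspace(np.log10(lo), np.log10(hi), 200)
        prof = np.interp(grid, log_r, delta_sigma)
        return np.trapz(prof, 10.0 ** grid)

    return integral(r_split, r_out) / integral(r_in, r_split)

# Ratio shown in the right panel of the summary figure (per Top-N bin):
#   s_ratio = s_statistic(r, ds_observed) / s_statistic(r, ds_best_fit_model)
\end{verbatim}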
These results make question (ii) more interesting: are these richness-selected clusters consistent with a selection from a $\log$-linear \mvir{}--richness relation with Gaussian scatter? To address this question, Figure \ref{fig:richness_residual} compares the observed \dsigma{} profiles of the \redm{} clusters, the \camira{} clusters, and the \menve{50}{100}-selected samples to their respective best-fit profiles from our ``pure scatter'' model (\S\ \ref{sec:estimate_scatter}). In the larger, left-hand part of Figure \ref{fig:richness_residual}, each column shows results for a different \topn{} selection method, with results from different bins shown in different rows. Within each panel, comparing the points with error bars to the shaded grey band allows us to assess how the {\em shape} of the \dsigma{} profiles compares to theoretical expectations based on an unbiased selection of clusters based on true \mvir{} (i.e., the ``pure scatter'' model). In the single vertical column on the right-hand side of Figure \ref{fig:richness_residual}, we show the ratio of the observed \dsigma{} to the corresponding profile based on the ``pure scatter'' model; in this column, different symbols correspond to results based on different cluster-selection methods.\footnote{We remind the reader that for the case of \menve{50}{100}, we remove possible massive satellites using the procedure described in \S\ \ref{sec:satellite}.} From Figure \ref{fig:richness_residual} it is visually apparent that clusters selected according to \menve{50}{100} exhibit a \dsigma{} profile that closely mimics the profile of cluster samples that have been selected according to \mvir{} in an unbiased fashion. Relative to \menve{50}{100}-selected clusters, we can see that neither \redm{} nor \camira{} clusters have \dsigma{} profiles that are as well described by the ``pure scatter'' model. For both \redm{} and \camira{}, the most prominent residual is the steep drop in the profile in the $R < 1$ Mpc region (by up to $\sim 50$\%) relative to the profile of the corresponding best-fit ``pure scatter'' model. In addition to this steep drop, the \dsigma{} profiles of \redm{} and \camira{} clusters present a distinct ``bump''-like feature around 1-2 Mpc in Bins 1 \& 2, with a visibly larger lensing amplitude relative to the ``pure scatter'' model at this spatial scale. Using \chisq{} to quantify the quality of the fits of the ``pure scatter'' model, we find that the HSC \redm{} clusters have values of $[15.36, 55.92, 46.23]$ for Bins 1, 2, and 3, respectively; the \camira{} \texttt{S16A} samples have values of $[34.51, 35.65, 62.11]$; the quality of these fits is thus significantly poorer than for the \menve{50}{100}-selected clusters. This exercise indicates that the \menve{50}{100}-based selection function closely resembles an unbiased selection of clusters based on \mvir{} with simple, Gaussian scatter, whereas the cluster selection function defined by \redm{} or \camira{} cannot be as well described by such a simple model.
\section{Discussion}
\label{sec:discussion}
We now discuss the implications of the results presented in the previous section. We first discuss the connection between our findings based on the \topn{} tests and the \asap{} model in \S\ \ref{sec:asap_discussion}. In \S\ \ref{sec:outskirt_discussion}, we focus on the potential of the outer envelope mass as a tool for identifying dark matter halos.
\S\ \ref{sec:perfect_finder} discusses the possibility of using the techniques developed in this paper to search for even better \mvir{} proxies.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure/fig_12}
\caption{
We explore the connections between the \asap{}-predicted \mvir{} and the outskirt \mstar{} used in the \topn{} test. \textbf{Left}: relations between the \asap{}-predicted \mvir{} and different stellar mass measurements, including \maper{100} (green; shifted up by 0.3 dex for better visibility), \menve{10}{100} (blue), and \menve{50}{100} (red). To help visualize these scaling relations, we highlight the best-fit $\log$-linear relations at \masap{}$> 10^{13.6} M_{\odot}$. \menve{50}{100} displays a steeper slope than both \maper{100} and \menve{10}{100}. \textbf{Middle}: the 2-D plane of \maper{50} vs. \menve{50}{100}, colour-coded by the \asap{}-predicted \mvir{}. Several iso-\mvir{} contours are used to highlight the relation between \mvir{} and \menve{50}{100}. \textbf{Right}: similar to the middle panel, here we show the \maper{10} and \menve{10}{100} 2-D plane, colour-coded by the \asap{}-predicted \mvir{}. The same set of iso-\mvir{} contours is used to facilitate comparison with the middle panel. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/fig12.ipynb}{\faGithub}.
}
\label{fig:outskirt_discussion}
\end{figure*}
\subsection{Relationship with Previous Work and the \asap{} model}
\label{sec:asap_discussion}
Using the \topn{} tests, we confirm that the \asap{} empirical model (\citealt{Huang2020}) has a lower \sigmvir{} value than large aperture stellar masses such as \maper{100} (see Figure \ref{fig:scatter_trend_2}). However, we also find that \asap{} is outperformed by outer mass measures such as \menve{30}{100} and \menve{50}{100}. The \asap{} model uses \maper{10} and \maper{100} as rough proxies of the \insitu{} and ``total'' \mstar{} to take advantage of the additional \mvir{}-dependence of the stellar mass profiles of massive galaxies. However, the \topn{} test clearly shows that \maper{10} alone is a very poor \mvir{} proxy with \sigmvir{}$\sim 0.8$ in all four bins (see Figure \ref{fig:scatter_trend}). Therefore, the better performance of \masap{} must be driven by a tighter \mvir{}--\menve{10}{100} relation than the \mvir{}--\maper{100} one. Figure \ref{fig:outskirt_discussion} shows the \masap{}-observable relations for \maper{100}, \menve{10}{100}, and \menve{50}{100} with their best-fit $\log$-linear relations at \masap{}$\geq 10^{13.6} M_{\odot}$\footnote{We use the \texttt{Python} implementation of the Least Trimmed Squares (LTS) algorithm (\href{https://pypi.org/project/ltsfit/}{\texttt{ltsfit}}) by \citet{Cappellari13b} for the fitting.}. Although \masap{} is not the true \mvir{}, it can be used to illustrate the underlying average SHMRs. The slope of the \masap{}-\menve{10}{100} relation is $\alpha = 0.47$, steeper than that of the \masap{}-\maper{100} relation ($\alpha = 0.34$). Both relations have similar scatter values of \sigms{}$\sim 0.15$ dex at the high-\masap{} end. Hence, the \asap{} model predictions also suggest that the outer mass \menve{10}{100} is a better \mvir{} proxy than \maper{100}. Figure \ref{fig:outskirt_discussion} also compares the \masap{}-\menve{10}{100} and \masap{}-\menve{50}{100} relations.
Although \asap{} does not explicitly use \menve{50}{100}, it shows that \menve{50}{100} is a better \mvir{} proxy than \menve{10}{100} because the \masap{}-\menve{50}{100} relation has a steeper slope ($\alpha=0.67$) and a slightly smaller scatter of \sigms{}$\sim 0.11$ dex at \masap{}$\geq 10^{13.6} M_{\odot}$. To further highlight this point, we also show the \maper{50}--\menve{50}{100} and \maper{10}--\menve{10}{100} 2-D planes. At \masap{}$>10^{13.5} M_{\odot}$, the iso-\masap{} curves run almost parallel to \menve{50}{100}. Compared to the \maper{10}--\menve{10}{100} distribution in the right panel, it becomes clear that, at fixed outskirt \mstar{}, \menve{50}{100} yields a smaller scatter in \masap{}. This comparison reaffirms the finding that the $R > 30$--50 kpc outer mass is an excellent \mvir{} proxy and that adding \mstar{} from the inner regions actually increases the scatter. This also suggests that \asap{}-like models could be improved using an optimized choice for the outer mass tracer. The idea behind the original \asap{} model was to use two different radii to trace regions dominated by \insitu{} and \exsitu{} stars. Our \topn{} tests suggest that the \insitu{} component may actually contain very little information about present-day halo mass. A key next step will be to better understand the different SHMRs of \insitu{} and \exsitu{} stars.
\subsection{Scatter in Stellar Mass at Fixed Halo Mass}
\label{sec:sigma_mstar}
In this work, we focus on the \sigmvir{} values and their trends with number density. Meanwhile, studies of the SHMR often focus on the \sigms{} values. For \maper{100}, assuming a SHMR with a slope of $\alpha \sim 0.35$ (\eg{} \citealt{GoldenMarx2019, Huang2020}), a \sigms{}$\sim 0.2$ dex scatter at the high-\mvir{} end corresponds to a \sigmvir{}$\sim 0.4$ dex scatter according to Equation \ref{eq:ratio_is_what_matters}. This is consistent with our results at the high-\mvir{} end and with some recent modelling constraints (\eg{} \citealt{Kravtsov2018, Behroozi2018}). Meanwhile, the \sigmvir{} values in lower-\mvir{} bins require \sigms{}$>0.3$ dex under the same slope. For the \camira{} clusters, if we adopt the \mvir{}-richness relation calibrated by \citet{Murata2019} with a slope of $\alpha = 0.6$, the inferred \sigms{} values are around 0.25 to 0.35 dex. For \menve{50}{100}, assuming the $\alpha \sim 0.67$ slope from the \asap{} model shown in \S\ \ref{sec:asap_discussion}, we derive a larger \sigms{} value ($\sim 0.4$ dex) for this outskirt \mstar{} than for \maper{100} and the richness measurements. As the \sigms{} value inferred here is \emph{not} the intrinsic scatter, this result may reflect the noisier measurements of the outer light profile. Since we do not directly constrain the SHMRs, the inferred \sigms{} values here depend on the assumed slope values and other systematics. They should only be used for relative comparisons within the \topn{} results. We will investigate constraints on the SHMRs of different \mvir{} proxies in future work.
\subsection{Physical Insight: Why Might the Outer Stellar Halo Trace Halo Mass Better than the Inner Mass or the Total Mass?}
\label{sec:outskirt_discussion}
Our \topn{} tests show that the outer mass of $z < 0.5$ massive galaxies is a promising halo mass proxy with superior performance compared to ``total'' \mstar{} measurements using large apertures. This result may be explained via the ``two-phase'' formation scenario of massive galaxies (e.g., \citealt{Oser2010, vanDokkum2010, Moster2020}).
According to this picture, a massive galaxy consists of stars formed within the halo of its main progenitor at high redshift (the \insitu{} component) and stars accreted through repeated mergers (the \exsitu{} component). In \citet{Bradshaw2020}, the authors explored the SHMRs of both components using the \texttt{UniverseMachine} semi-empirical model (e.g., \citealt{Behroozi2018}). They showed that the \exsitu{} component displays a much tighter correlation with the current \mvir{} than either the \insitu{} component (see their Figure~9) or the total \mstar{}. In the \texttt{UniverseMachine} model, the average \insitu{} \mstar{} at $z \sim 0.4$ is almost constant over a wide range of \mvir{} ($\sim 10^{10.9} M_{\odot}$), whereas the SHMR of the \exsitu{} component shows a steep slope. In a recent analysis of the TNG-300 simulation, it was found that at fixed \mvir{}, both cluster richness and BCG \mstar{} exhibit residual correlations with halo assembly history \citep{Anbajagane2020}; our results provide motivation to consider whether such correlations persist for the true \exsitu{} \mstar{}, and for BCG \mstar{} estimates that exclude the inner regions. Although different models and simulations display different scaling relations between \mvir{} and \exsitu{} \mstar{}, there is nonetheless compelling theoretical support for the idea that the scatter of the SHMR at the high-\mvir{} end is closely tied to the assembly of the \exsitu{} component \citep[see, e.g.,][]{Gu2016}. Whereas massive galaxies at high redshift grow primarily by \insitu{} mass buildup, these galaxies are predominantly quenched at lower redshift, implying that massive galaxies grow primarily via merging at late times. Under this picture, the \exsitu{} \mstar{} should scale with the number of accreted satellites, and can be considered a measure of the ``historical richness'' of the halo. In simulations, the \exsitu{} component dominates the outskirts of massive galaxies, and its fraction increases with both stellar and halo mass (e.g., \citealt{Lackner2012, RodriguezGomez2016, Pulsoni2021, Pillepich2017b}). For this reason, outer mass measures such as \menve{50}{100} are likely to scale better with the true \exsitu{} \mstar{} than mass estimates that include stars from the inner 30 kpc, and so \mstar{} measurements defined by the outskirts of a galaxy may be proxies of the ``historical richness'' of a parent halo. For low-$z$ galaxy clusters, multiple studies have explored the connection between halo properties and the flux, shape, and radial profile of the ICL -- essentially the extended outer envelope around the central galaxy (or the BCG) of the cluster (e.g., \citealt{Montes2018, Montes2019, Zhang2019b, Furnell2021, Kluge2021, SampaioSantos2021}). While several of these studies demonstrate a tight correlation between \mvir{} and the stellar mass/luminosity of the BCG$+$ICL component\footnote{This is sometimes referred to as the ``diffuse stellar light'' component, following the definition in the Illustris-TNG simulation (e.g., \citealt{Zhang2019b, SampaioSantos2021}).} (e.g., \citealt{Zhang2019b, Kluge2021, SampaioSantos2021}), whether the ICL alone is a promising \mvir{} proxy is still under debate (e.g., \citealt{Furnell2021}). As discussed in \citet{Huang2018b} and \citet{Kluge2020}, the definition of the ICL is often ambiguous and somewhat arbitrary, but the light between 50 and 100 kpc around a BCG is often considered part of the ICL.
Our results support the idea that the ICL correlates with halo mass, but we also generalize this finding to all massive galaxies.
\subsection{On the Possibility of Building Even Better Halo Mass Proxies}
\label{sec:better_halo_proxy}
Our work shows that a better understanding of the formation process of massive galaxies, together with high-quality imaging data, offers the exciting prospect of developing better proxies of halo mass. This work is only a first step in this direction, and follow-up work may yield even better proxies than \menve{50}{100}. Here we discuss possible improvements to the outskirt \mstar{}-based \mvir{} proxy. First, the outer stellar mass is estimated using the portion of the light profile that has low \snratio{}, which can potentially be affected by issues related to background subtraction, by contamination from other objects, and by galactic cirrus (\eg{} \citealt{Roman2020}). Deeper images and improved data-reduction strategies should help improve the accuracy of measurements of the outer mass. Second, we have ignored the radial variation of \mlratio{}, and also the fact that galaxies are three-dimensional in nature. More importantly, our results provide motivation for the quest for better proxies of \exsitu{} \mstar{}. For example, it could prove fruitful to explore improved definitions of outer stellar mass based on physical radial boundaries. In this work, we have explored boundaries based on $R_{50}$ (Appendix \ref{app:size}) and found no substantial improvement beyond results based on \menve{50}{100}, but scaling boundaries according to the total \mstar{} might still be an interesting idea. This research direction is also motivated by recent simulations which are able to reproduce HSC light profiles fairly well (e.g., \citealt{Ardila2021}). Ideally, we would like to be able to physically decompose massive galaxies into their \insitu{} and \exsitu{} components using real data. More careful approaches to this decomposition that take these additional effects into account are worthy of exploration in future work and are well motivated by our results.
\subsection{Implications for Optical Cluster Finding}
\label{sec:perfect_finder}
Our results have profound implications for cluster finding with optical surveys including the Vera Rubin Observatory's Legacy Survey of Space and Time (LSST)\footnote{\url{https://www.lsst.org}}, the \textit{Euclid} satellite\footnote{\url{https://sci.esa.int/web/euclid}}, and the Nancy Grace Roman Space Telescope (\textit{Roman})\footnote{\url{https://roman.gsfc.nasa.gov}}. These results potentially open up a new way of approaching the problem of identifying massive halos from imaging surveys. Traditionally, optical/NIR cluster finders are mostly based on the relation between \mvir{} and some estimate of cluster richness. The \redm{} and \camira{} cluster finders rely on the number of quenched member galaxies on the red sequence, while others use the over-density of galaxies within a narrow photo-$z$ range (e.g., \citealt{Wen2021, Zou2021}). The prevalence of richness-based cluster identification reflects the widely held expectation that the \mvir{}-richness relation has lower scatter than the SHMR. However, in this work we have shown that this comparative assessment has overlooked two critical aspects of the \mstar{} estimation of massive galaxies.
1) Default photometry from data reduction pipelines often provides poor fits to the light profiles of massive galaxies, and can be significantly impacted by issues related to background subtraction and deblending. Such photometry is a source of both bias and additional scatter in the \mstar{} estimates (see Figure \ref{fig:scatter_trend_2} and \S\ \ref{sec:m100_cmodel}). 2) Inner stellar mass is an {\em intrinsically} poor proxy of \mvir{} (see Figure \ref{fig:scatter_trend} and \S\ \ref{sec:m_aper}). Yet, commonly adopted photometry measures for massive galaxies focus on the bright, inner ``core'' region where the signal-to-noise is high. Our \topn{} tests demonstrate that both of these issues have a strong influence on the level of scatter in the SHMR. We have furthermore shown that, through careful consideration of how the \mstar{}-based proxy is measured and defined, cluster samples selected by \mstar{} exhibit scatter in \mvir{} that is both tighter and simpler than that of richness-selected clusters. Finally, it is important to note that customized estimates of the outer light profiles of massive galaxies are necessary to build \mvir{} proxies with \sigmvir{} values comparable to richness (at least with current versions of the data reduction pipelines). Outer galaxy mass may offer distinct advantages over richness-based cluster finders with respect to two key systematics: mis-centering and projection effects. Mis-centering bias occurs when the cluster finder identifies the wrong central galaxy, or when the central galaxy is not at the true center of the dark matter halo. Projection effects have a variety of origins, such as the anisotropic distribution of satellite galaxies within the halo, and the presence of large-scale structure along the line of sight to the cluster. While the calibration of the \mvir{}--richness relation now routinely includes mis-centering effects when modelling cluster \dsigma{} profiles (e.g., \citealt{Murata2018, Murata2019, McClintock2019}), projection bias is still a major issue (e.g., \citealt{Costanzi2019, Sunayama2020, DES2020, To2021b}). In this paper, we show that \mstar{}-based proxies using either large apertures or outer masses display stacked \dsigma{} profiles that are consistent with having negligible mis-centering effects and projection bias (see Figure \ref{fig:mout_richness} and \S\ \ref{sec:mstar_vs_richness}) -- this is very exciting as it suggests that outer mass measures such as \menve{50}{100} directly trace central galaxies and could yield a simpler selection function than richness-based methods. We have not identified the exact causes of the systematic differences in the \dsigma{} profiles between \mstar{}- and richness-selected samples (\S\ \ref{sec:mstar_vs_richness}). Recently, \citet{Sunayama2020} explored the impact of projection bias on cluster \dsigma{} profiles using mock catalogues based on N-body simulations. They show that projection effects can boost the stacked \dsigma{} profiles of clusters at $R > 2$ Mpc by up to 20\%. Also, this ``bump'' in the outer \dsigma{} profile seems to increase with the intrinsic richness. While there are qualitative similarities between their Figure 4 and the middle panels of Figure \ref{fig:mout_richness}, the \citet{Sunayama2020} model cannot fully explain the differences we see, and so a sophisticated modelling effort will be required in order to fully understand the origin of this feature in the lensing profiles of richness-selected clusters.
Another key limitation of richness-based cluster finders stems from the difficulty of generating realistic mock catalogues of galaxies. Such mocks are essential for the calibration of the \mvir{}-richness relation and for understanding the selection biases. However, considerable sophistication is required to produce these mock catalogues: red-sequence richness estimation fundamentally requires that the mock galaxies have multi-wavelength properties, such as a tight red sequence and colour bimodality, for {\em all} galaxies down to $\sim0.1L_{\star}$, and moreover that these features are realistically connected to the cosmic density field across a wide range of halo mass, redshift, and larger-scale environment. The mocks used for cluster analysis by cosmological surveys have therefore historically struggled to meet these challenges at the required level of quantitative detail (e.g., \citealt{Trayford2015, Trayford2017, Nelson2018, DeRose2019}, although see \citealt{Hearin2020, DeRose2021} for recent progress). It is also difficult to take systematics in colour measurements into account when building mocks. On the other hand, it is easier to calibrate \mstar{} estimates using the same definition of stellar mass (e.g., \citealt{Ardila2021}) and to account for uncertainties in \mstar{}. It is also easier to reproduce the observed properties of galaxy samples composed primarily of centrals (e.g., \citealt{Moster2020}). To utilise a \mstar{}-based ``cluster finder'', we would nonetheless need to account for contamination from massive satellite galaxies (see \S\ \ref{sec:satellite}), but generating the required mock catalogues for this purpose is far simpler in comparison to the multi-wavelength needs of richness-based methods. Lastly, the combination of \mstar{}- and richness-based cluster finders could yield further insight into the assembly history of massive dark matter halos. Measures of ``historical'' and ``current'' richness might have different selection effects with regard to secondary properties (e.g., concentration, merging history) at similar \mvir{}. As mentioned previously, the \mstar{}- and richness-selected samples do not fully overlap. Among the top 50 galaxies selected by \menve{50}{100}, 7 are not identified as the central of any \redm{} or \camira{} cluster. The non-overlap fraction increases with decreasing \mstar{} limits: among the top 200 (1000) \menve{50}{100}-selected galaxies, 49 (467) are not considered cluster centrals by \redm{} or \camira{}. If we limit the samples to the same \topn{} bin, the overlap fraction is even lower: within the top 50 (500) \menve{50}{100} galaxies, only 9 (158) are also included in the top 50 (500) richest clusters selected by \redm{}. This suggests that the richness--\mstar{} relation for central galaxies has considerable scatter that would be worth exploring in further detail. The numerous advantages discussed above suggest that \mstar{}-based cluster finders could not only help us to understand the systematics of richness-based cluster finders, but that they may also yield competitive constraints on the growth of structure and on galaxy-halo connection models.
\section{Summary and Conclusions}
\label{sec:summary}
Taking advantage of the deep images and unprecedented lensing capabilities of the HSC survey (\S\ \ref{sec:hsc}), we show that the outer envelope of low-redshift massive galaxies is a promising \mvir{} proxy with scatter comparable to richness.
We further show that this proxy is less affected by systematics such as projection bias and mis-centering effects. The outskirts of massive galaxies are dominated by \exsitu{} stars -- the stellar content accreted from previous satellite galaxies within the halo -- and thus the outer envelope \mstar{} provides an estimate of the ``historical richness'' of massive halos. This opens up new possibilities for tracing massive halos, studying their galaxy-halo connection, and investigating the assembly histories of massive galaxies. We have conducted our study by comparing the stacked \dsigma{} profiles (\S\ \ref{sec:dsigma} and Appendix \ref{app:dsigma_detail}) of massive halos selected by different \mvir{} proxies (\S\ \ref{sec:topn_intro} and \S\ \ref{sec:comp_scatters}) in four volume number density bins (\S\ \ref{sec:binning}). Assuming a simple $\log$-linear \mvir{}-observable relation with Gaussian scatter, we estimate the scatter in \mvir{} in each bin by matching the observed \dsigma{} profiles to models generated from N-body simulations (\S\ \ref{sec:estimate_scatter}). Using this \topn{} methodology, we evaluate different \mstar{}-based and richness-based \mvir{} proxies for massive galaxies and halos at $0.2 < z < 0.5$ (\S\ \ref{sec:proxies}). These proxies include \mstar{} based on the default survey photometry (\cmodel{}), as well as large aperture \mstar{} (\S\ \ref{sec:maper}) and outer envelope \mstar{} (\S\ \ref{sec:menvelope}) based on deep 1-D surface mass density profiles (\S\ \ref{sec:1d_prof}). We also include richness estimates from the \redm{} (\S\ \ref{sec:cluster_redmapper}) and \camira{} (\S\ \ref{sec:cluster_camira}) cluster catalogues. The main results of this work are:
\begin{itemize}
\item Outer galaxy mass is an excellent tracer of halo mass (\S\ \ref{sec:m100_outskirt}; Figure \ref{fig:scatter_trend}). The performance of \menve{50}{100} and other similar outer envelope measures is competitive with red-sequence cluster finders at the high-richness end (e.g., $\lambda > 20$), and these measures may outperform \redm{} or \camira{} in the low-richness regime (see Figure \ref{fig:scatter_trend_2}). Since the outer envelope is likely to have been built from merging processes, we suggest that the outer envelope mass serves as an estimate of the ``historical richness'' of a cluster, and so could serve as a proxy for \mvir{} that is complementary to the ``current richness'' measurements used by contemporary cluster finders.
\item While both richness-based \mvir{} proxies (\redm{} and \camira{}) have impressively low inferred \sigmvir{} values, they result in stacked \dsigma{} profiles that are not consistent with predictions based on a ``pure scatter'' model (see Figure \ref{fig:mout_richness}). Instead, the \dsigma{} profiles of richness-selected clusters have enhanced amplitudes around $R\sim 1$-2 Mpc and suppressed inner profiles at $R < 1$ Mpc when compared to ``pure scatter'' models (\S\ \ref{sec:mstar_vs_richness} and Figure \ref{fig:richness_residual}). These results indicate that the richness-based \mvir{} proxies have additional systematics (e.g., mis-centering, projection effects) that need to be accounted for. In contrast, the \dsigma{} profiles of galaxies selected according to their outer mass are very well described by a ``pure scatter'' model, suggesting that cluster samples selected by the stellar mass of the outer envelope suffer from little to no mis-centering or projection effects.
\item Inner galaxy mass (e.g.,
\mstar{} within 10 to 30 kpc) is a very poor tracer of halo mass (Figure \ref{fig:scatter_trend}). The total \mstar{} enclosed within a large aperture such as \maper{100} shows much better performance, but is still not as effective as the outer envelope mass in terms of its ability to serve as a proxy for \mvir{} (Figure \ref{fig:m100_mout}). This indicates that there is a comparatively weak physical correlation between the stars in the inner region of massive galaxies and the total mass of their dark matter halos.
\item Stellar masses based on \cmodel{} or any other default photometry from a generic data reduction pipeline do not yield good halo mass proxies (see \S\ \ref{sec:m100_cmodel} and Figure \ref{fig:m100_cmod}). \cmodel{} does not accurately account for the flux in the extended stellar halo of massive galaxies, which is the specific region of the galaxy that has the tightest connection with the underlying \mvir{}. We therefore caution against the use of \cmodel{}-like photometry in studies of the galaxy-halo connection. LSST will eventually be even deeper than HSC and currently shares a very similar data reduction pipeline. In order to realise the scientific potential afforded by accurate measurements of the masses and profiles of massive galaxies, it will be critical to improve the image deblending and galaxy modelling algorithms in \texttt{lsstPipe}. It would be highly beneficial to the wider cosmology community if \texttt{lsstPipe} could accurately perform the required measurements without the need for the custom pipelines that were required for the present work.
\item We have shown that satellite galaxies do not have a strong impact on the stacked \dsigma{} profile of the highest \mstar{} samples (due to the low satellite fraction). Satellites do impart a mild effect on \dsigma{} towards the lower-\mstar{} end (\S\ \ref{sec:satellite}) -- further work will be required in order to model this effect for cosmological applications.
\end{itemize}
Motivated by these results, we plan to further explore \mstar{}-based \mvir{} proxies for studies of the galaxy-halo connection and for cosmology. Recent HSC data releases (\texttt{S20A} or \texttt{PDR3} in 2021) have increased the sky coverage to $> 600$ deg$^2$, four times larger than that of the current sample. Not only would these larger samples reduce the statistical uncertainties of the \topn{} tests, they would also enable us to explore the high-\mvir{} regime in much finer detail than is permitted by the broad richness bins used here. Moreover, the new data releases come with improved background subtraction, which will improve measurements of the outer profiles of massive galaxies. We are also working on an improved outer envelope \mstar{} measurement using a more accurate \mlratio{} and a more sophisticated modelling approach. On the theoretical side, we will use state-of-the-art hydro-simulations and semi-empirical models to investigate the connection between the outer envelope of massive galaxies and the assembly history of their dark matter halos. It would also be interesting to compare cluster samples selected by richness- and \mstar{}-based methods. In addition to clarifying the selection biases of the different methods, this could yield further insight into the distribution of halo properties at the high-\mvir{} end.
Finally, for purposes of developing an \mstar{}-based cluster finder, it will also be fruitful to compare the properties of \mstar{}-selected clusters to samples identified by other multi-wavelength methods that are less sensitive to projection effects (e.g., samples identified in X-ray or microwave bands). As outlined in \citet{Bradshaw2020}, we also suggest that a ``hybrid'' cluster finder that combines the advantages of richness- and \mstar{}-based \mvir{} proxies may be possible by simply combining the \mstar{} of the central galaxy and a few (e.g., the top 2 or 3) massive satellite galaxies. Such a ``Cen$+N$'' method could be an excellent \mvir{} proxy, with low \sigmvir{} values in a given number density bin, while also being minimally impacted by projection effects. Both the \mstar{}-based and the ``Cen$+N$'' methods require accurate identification of massive satellite galaxies. This is a challenging task when using photometric redshifts from imaging surveys. Spectroscopic surveys such as DESI (e.g., \citealt{DESI2016}) will greatly improve the situation. Using images from the DECam Legacy Survey (DECaLS, e.g., \citealt{Dey2019})\footnote{ \url{https://www.legacysurvey.org/} }, we will measure large aperture and outer envelope \mstar{} of $z<0.5$ massive galaxies out to $\sim 100$ kpc (e.g., Li et al. in prep.). When combined with their DESI spec-$z$ in the next few years, this much larger DECaLS ($\sim 9000$ deg$^2$) survey will provide us with an ideal sample to constrain galaxy-halo connection models. We will extend our \topn{} tests to include group/cluster catalogues for DECaLS (e.g., \citealt{Yang2020, Zou2021}), and apply our ``Cen$+N$'' method to define a sample of massive halos that is suitable for unbiased cosmological analysis.
\section*{Data Availability Statements}
The data underlying this article are available in Zenodo at \url{https://doi.org/10.5281/zenodo.5259075}. The \texttt{Python} code, \texttt{Jupyter} notebooks, and the data files for reproducing the results and figures of this work can be found on \texttt{Github} at \url{https://github.com/dr-guangtou/jianbing}. The Hyper Suprime-Cam Subaru Strategic Program data used in this work are included in the Public Data Release 2 at \url{https://hsc-release.mtk.nao.ac.jp/doc/}.
\section*{Acknowledgements}
The authors would like to thank Benedikt Diemer and Matthew Becker for useful discussions and suggestions. This material is based upon work supported by the National Science Foundation under Grant No. 1714610. The authors acknowledge support from the Kavli Institute for Theoretical Physics. This research was also supported in part by the National Science Foundation under Grant No. NSF PHY11-25915 and Grant No. NSF PHY17-48958. We acknowledge use of the lux supercomputer at UC Santa Cruz, funded by NSF MRI grant AST 1828315. AL is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0019301. AL acknowledges support from the David and Lucile Packard Foundation, and from the Alfred P. Sloan Foundation. JD is supported by the Chamberlain Fellowship at Lawrence Berkeley National Laboratory. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University.
The HSC instrumentation and software were developed by National Astronomical Observatory of Japan (NAOJ), Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), University of Tokyo, High Energy Accelerator Research Organization (KEK), Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. Funding for SDSS-III has been provided by Alfred P. Sloan Foundation, the Participating Institutions, National Science Foundation, and U.S. Department of Energy. The SDSS-III website is http://www.sdss3.org. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration, including University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, University of Florida, the French Participation Group, the German Participation Group, Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. The Pan-STARRS1 surveys (PS1) have been made possible through contributions of Institute for Astronomy; University of Hawaii; the Pan-STARRS Project Office; the Max-Planck Society and its participating institutes: the Max Planck Institute for Astronomy, Heidelberg, and the Max Planck Institute for Extraterrestrial Physics, Garching; Johns Hopkins University; Durham University; University of Edinburgh; Queen's University Belfast; Harvard-Smithsonian Center for Astrophysics; Las Cumbres Observatory Global Telescope Network Incorporated; National Central University of Taiwan; Space Telescope Science Institute; National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate; National Science Foundation under Grant No. AST-1238877; University of Maryland, and Eotvos Lorand University. This research makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST project for making their code available as free software at http://dm.lsstcorp.org. The CosmoSim database used in this research is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. This research made use of: \href{http://www.stsci.edu/institute/software_hardware/pyraf/stsci\_python}{\texttt{STSCI\_PYTHON}}, a general astronomical data analysis infrastructure in Python. 
\texttt{STSCI\_PYTHON} is a product of the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA) for NASA; \href{http://www.scipy.org/}{\texttt{SciPy}}, an open source scientific tool for Python (\citealt{SciPy}); \href{http://www.numpy.org/}{\texttt{NumPy}}, a fundamental package for scientific computing with Python (\citealt{NumPy}); \href{http://matplotlib.org/}{\texttt{Matplotlib}}, a 2-D plotting library for Python (\citealt{Matplotlib}); \href{http://www.astropy.org/}{\texttt{Astropy}}, a community-developed core Python package for astronomy (\citealt{AstroPy}); \href{http://scikit-learn.org/stable/index.html}{\texttt{scikit-learn}}, a machine-learning library in Python (\citealt{scikit-learn}); \href{https://ipython.org}{\texttt{IPython}}, an interactive computing system for Python (\citealt{IPython}); \href{https://github.com/kbarbary/sep}{\texttt{sep}}, Source Extraction and Photometry in Python (\citealt{PythonSEP}); \href{https://jiffyclub.github.io/palettable/}{\texttt{palettable}}, colour palettes for Python; \href{http://dan.iel.fm/emcee/current/}{\texttt{emcee}}, Seriously Kick-Ass MCMC in Python; \href{http://bdiemer.bitbucket.org/}{\texttt{Colossus}}, COsmology, haLO and large-Scale StrUcture toolS (\citealt{Colossus}).
\bibliographystyle{mnras}
\bibliography{topn}
\appendix
\section{Derivation of the galaxy--galaxy lensing profiles}
\label{app:dsigma_detail}
Here we walk through the derivation of the final \dsigma{} profile used in the \topn{} test. As mentioned in \S\ \ref{sec:dsigma}, we adopt a slightly modified version of the methodology from \citet{Singh2017} to measure the excess surface mass density (ESD or \dsigma{}) profiles around massive galaxies or clusters. This method emphasises the importance of subtracting the lensing signals around a large number of random positions from the signals around the real lenses to achieve an unbiased measurement. The \dsigma{} signal at a physical radius $R$ is:
\begin{equation} {\Delta\Sigma}_{\rm LR}(R) = f_{\rm bias}({\Delta\Sigma}_{\mathrm{L}}(R) - {\Delta\Sigma}_{\mathrm{R}}(R)) \label{eq:ds1} \end{equation}
Here, the subscripts $\mathrm{L}$ and $\mathrm{R}$ indicate measurements around the lens galaxies and around random positions, respectively. For each \dsigma{} profile, we use a set of $1.5 \times 10^5$ random points whose redshift distribution is matched to the lenses. The number of random points is at least 100 times larger than the largest \topn{} sample. The \dsigma{} profile around lenses is:
\begin{equation} {\Delta\Sigma}_{\rm L}(R) = \frac{1}{2 \mathcal{R}(R) [1+\mathcal{K}(R)]} \frac{\Sigma_{\rm Ls} w_{\rm Ls} \gamma_{t}^{(\rm Ls)} \Sigma_{\rm crit}^{(\rm Ls)}}{\Sigma_{\rm Ls} w_{\rm Ls}} \label{eq:ds2} \end{equation}
\noindent where $\gamma_{t}$ is the tangential shear component, $\Sigma_{\rm crit}$ is the critical surface density, and $w_{\rm Ls}$ is the weight used for each lens-source pair. Following the calibration strategy outlined in \citet{HSC-WLCALIB}, we also include the shear responsivity factor $\mathcal{R}(R)$ and the correction for the multiplicative shear bias $[1+\mathcal{K}(R)]$. Here $\Sigma_{\rm Ls}$ represents the summation over all lens-source pairs. We perform the same measurements for the random points, so replacing L with R in Equation \ref{eq:ds2} gives the estimator for the randoms.
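
For readers who find code easier to parse than equations, the following schematic \texttt{numpy} snippet is a minimal sketch of how Equation \ref{eq:ds2} could be evaluated in a single radial bin. It is purely illustrative: the array names are hypothetical and it is not an excerpt from our released pipeline.
\begin{verbatim}
import numpy as np

def delta_sigma_lens(gamma_t, sigma_crit, sigma_e, sigma_rms, m_s):
    # Per lens-source pair inverse-variance weights (the w_Ls defined
    # below): noisy shapes and distant source planes are down-weighted.
    w = sigma_crit**(-2) / (sigma_e**2 + sigma_rms**2)
    # Shear responsivity and multiplicative bias corrections,
    # cf. the definitions of R(R) and K(R) below.
    resp = 1.0 - np.sum(w * sigma_e**2) / np.sum(w)
    kcorr = np.sum(w * m_s) / np.sum(w)
    # Weighted stack of gamma_t * Sigma_crit over all lens-source pairs.
    stack = np.sum(w * gamma_t * sigma_crit) / np.sum(w)
    return stack / (2.0 * resp * (1.0 + kcorr))
\end{verbatim}
\noindent The same function evaluated at the random points, together with the $f_{\rm bias}$ factor, gives the final estimator of Equation \ref{eq:ds1}. The remaining quantities ($\Sigma_{\rm crit}$, $w_{\rm Ls}$, $\mathcal{R}$, and $\mathcal{K}$) are defined below.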
The critical surface density is:
\begin{equation} \Sigma_{\rm crit}=\frac{c^2}{4\pi G} \frac{D_A(z_s)}{D_A(z_l) D_A(z_l, z_s) (1+z_l)^2} \label{eq:sigcrit} \end{equation}
\noindent where $D_A(z_l)$, $D_A(z_s)$, and $D_A(z_l, z_s)$ represent the angular diameter distances to the lens, to the source, and between the lens-source pair, respectively. The weight applied to each lens-source pair is described by:
\begin{equation} w_{\rm Ls} = \frac{\Sigma_{\rm crit}^{-2}}{\sigma^2_{e, {\rm Ls}} + \sigma^2_{\rm rms}} \equiv \frac{\Sigma_{\rm crit}^{-2}}{\sigma^2_{{\rm Ls}}} \label{eq:weight} \end{equation}
\noindent where $\sigma_{\rm rms}$ represents the intrinsic shape dispersion while $\sigma_{e, \rm Ls}$ is the per-component shape measurement error. Meanwhile, the shear responsivity factor is defined by:
\begin{equation} \mathcal{R}(R) = 1 - \frac{\Sigma_{\rm Ls} w_{\rm Ls} \sigma^2_{e, {\rm Ls}}}{\Sigma_{\rm Ls} w_{\rm Ls}} \label{eq:rfactor} \end{equation}
\noindent and the multiplicative shear bias correction is defined as:
\begin{equation} \mathcal{K}(R) = \frac{\Sigma_{\rm Ls} w_{\rm Ls} m_{\rm s}}{\Sigma_{\rm Ls} w_{\rm Ls}} \label{eq:kfactor} \end{equation}
\noindent where $m_{\rm s}$ is the multiplicative shear bias value for each source. The shape catalogue provides estimates of $\sigma_{\rm rms}$, $\sigma_{e, \rm Ls}$, and $m_{\rm s}$, while \citet{HSC-WLCALIB} provides an in-depth discussion of these calibration-related issues. In contrast to \citet{Singh2017}, we do not use a boost factor to account for the photo-$z$ dilution effect. Instead, following the strategy in \citet{Leauthaud2017}, we develop a correction factor, $f_{\rm bias}$, to account for it. We define $f_{\rm bias}$ as the ratio between the \dsigma{} profile calculated using the real redshift and the one using photo-$z$ from the COSMOS photo-$z$ calibration catalogue\footnote{ \url{https://hsc-release.mtk.nao.ac.jp/doc/index.php/s17a-wide-cosmos/}}. In practice, it is estimated from the photo-$z$ calibration sample in the COSMOS field built for this purpose (e.g., \citealt{Mandelbaum2008, Nakajima2012, Leauthaud2017}) as:
\begin{equation} f_{\rm bias} = \frac{ \sum_{\rm Ls} w_{\rm Ls} w_{\rm sys} \left(\Sigma_{{\rm crit, T}} / \Sigma_{{\rm crit, P}} \right)} {\sum_{\rm Ls} w_{\rm Ls} w_{\rm sys}} \label{eq:fbias} \end{equation}
\noindent where $\Sigma_{{\rm crit, T}}$ is the critical surface density estimated using the ``true'' redshift in the calibration catalogue (which can be a spec-$z$ or a COSMOS 30-band photo-$z$), while $\Sigma_{{\rm crit, P}}$ is the one using the \texttt{frankenz} photo-$z$. $w_{\rm sys}$ is the systematic photo-$z$ weight that matches the colour-magnitude distribution of the COSMOS photo-$z$ calibration catalogue to the same distribution of the source catalogue (e.g., \citealt{Mandelbaum2008, Nakajima2012}). Note that the estimator shown here is different from the ones in \citet{Leauthaud2017} and \citet{Speagle2019}, and it accounts for the photo-$z$ dilution effect more accurately. The $f_{\rm bias}$ level for galaxies in our sample is generally very low ($\sim 1$-2\%). To estimate the covariance matrix of a \dsigma{} profile, we use both jackknife and bootstrap resampling methods. For the jackknife case, we assign lenses and randoms to the same 45 jackknife regions, each with a similar area of around 2.5 \sqdeg{} and a regular shape.
The covariance matrix from the jackknife resampling is: \begin{equation} \mathrm{Var}_{\rm Jk}(\widehat{\Delta\Sigma}) = \frac{N_{\rm Jk} - 1}{N_{\rm Jk}} \sum\limits_{i=1}^{N_{\rm Jk}} (\Delta\Sigma_{i} - \overline{\Delta\Sigma})^2 \end{equation} \noindent here $N_{\rm Jk}=45$, $\Delta\Sigma_{i}$ represents the \dsigma{} profile from each Jackknife region, and $\overline{\Delta\Sigma}$ is the mean profile of all regions. For the \topn{} test, the small sample size in Bin 1 \& 2 sometimes make it difficult to assign jackknife regions. Therefore we also calculate the covariance matrix using bootstrap resampling with $N_{\rm Bt}=5000$ iterations: \begin{equation} \mathrm{Var}_{\rm Bt}(\widehat{\Delta\Sigma}) = \frac{1}{N_{\rm Bt} - 1} \sum\limits_{i=1}^{N_{\rm Bt}} (\Delta\Sigma_{i, \rm Bt} - \overline{\Delta\Sigma})^2 \end{equation} \noindent The two methods provide consistent measurements of covariance matrix. \begin{figure*} \centering \includegraphics[width=\textwidth]{figure/fig_A1} \caption{ Illustration of the scatter-matching process in the four \topn{} bins using the \maper{150} as an example. \textbf{Left} column shows the observed \rdsigma{} profiles (solid hexagon), the best-fit profiles using \mdpl2{} simulation (shaded regions with similar colours), and the lensing profiles of the ``perfect'' \topn{} sample. The uncertainties of the model lensing profiles are inflated to match the expected statistical uncertainties of the volume occupied by the HSC data. \textbf{Middle} column shows the reduced $\chi^{2}$ curve of the fitting process. A horizontal dashed-line highlights where $\chi^{2}=1$. The vertical dashed-line and the shaded region show the best-fit scatter value and the associated uncertainty. We also display these values on the figure. \textbf{Right} column shows the best-fit \mvir{} distributions in each bin predicted by the \mdpl2{} simulation (coloured histograms; dot-dashed lines label the average \mvir{} values). We also compare them with the ``true'' \mvir{} distributions in each number density bin (grey histograms; grey dashed-lines label the average \mvir{} values). The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figA1.ipynb}{\faGithub}. } \label{fig:fitting} \end{figure*} \section{Matching the lensing profiles} \label{app:fitting} As described in \S\ \ref{sec:estimate_scatter}, we estimate the \sigmh{} value from an observed \dsigma{} profile from the \topn{} test through matching it to a densely sampled grid of model \dsigma{} profiles that cover a wide range of \sigmh{} values. For each pair of observed and predicted \dsigma{} profiles, we define a simple $\chi^2$ statistic (Equation \ref{eq:chi2}) to describe the ``similarity'' between them. In Figure \ref{fig:fitting}, we use the \topn{} result for \maper{150} stellar mass as example to visualise the ``scatter matching'' procedure, which produce a well-behaved reduced $\chi^2$ curves with a clear minimum. For \maper{150}, the reduced $\chi^2$ values in all four \topn{} bins are reasonably close to 1.0 ([0.65, 0.88, 1.31, 0.90]). In line with this impression, the left panels show that the best-fit ``scatter only'' \dsigma{} profile is fully consistent with the observed one. As discussed in \S\ \ref{sec:mstar_vs_richness}, this is not always the case (see Figure \ref{fig:richness_residual}). 
However, even when the best-fit model is not satisfying (e.g., reduced $\chi^2 >2$), we still estimate the ``best-fit'' \sigmh{} value. Since we only calculate the $\chi^2$ on a grid of \sigmh{} values and the statistical uncertainties of the predicted \dsigma{} profiles cannot be completely ignored, we did not just report the \sigmh{} value with the lowest $\chi^2$. Instead, we interpolate the normalised cumulative distribution of the likelihood $\equiv \exp{(-0.5 \times \chi^2)}$ to derive the \sigmh{} at 50th percentile as the ``best-fit'' scatter value. We estimate the 1-$\sigma$ uncertainty range in the same way. We should note that the choice of covariance matrix (Jackknife v.s. bootstrap) does not affect any results of this work. We also attempted to include the uncertainties of the predicted \dsigma{} profile to the covariance matrix as additional diagonal term, and verify it has no impact on any conclusions. In the figure, we inflate the error bars of the model profiles to reflect the volume difference between the HSC data and the simulation used. For \mdpl2{} simulation, the volume is about $\sim 25 \times$ larger than the HSC volume. We therefore increase the error bar by a factor of 5. However, we did not include the model uncertainty during the fitting process. \section{Scaling relation model calibrated to match HSC observations} \label{app:hsc_model} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{figure/fig_B1} \caption{ \textbf{Left} panels demonstrate how well the model (red line) can fit the observed SMF (grey symbols; shaded regions are uncertainties) that combines data from HSC at high-\mstar{} end and \texttt{PRIMUS} survey at low-\mstar{} end. The bottom sub-panel shows the relative residual of the best-fit SMF. The three vertical dashed-lines highlight the \mstar{} boundaries of the three \mstar{}-bins ([$11.50$, $11.55$, $11.70$]) used for measuring the two--point correlation functions of massive HSC galaxies. \textbf{Right} panels summarise the observed clustering of HSC massive galaxies (symbols) and their best-fit models (lines). [11, 22, 33] are the auto-correlation functions of the three \mstar{} bins, while [12, 13, 23] indicate the cross-correlation functions among the three bins. The bottom sub-panel shows the ratio between the observed and the model clustering signals. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figB1.ipynb}{\faGithub}. } \label{fig:best_mock} \end{figure*} In \S\ \ref{sec:estimate_scatter}, we describe the method to predict the stacked \dsigma{} profile of a sample of number density selected halos with certain \scatterMhaloObsSym{} value based on a $\log$-normal scaling relation with fixed slope ($\alpha=1$). This simple model helps us predict the stacked \dsigma{} profile of a specific \topn{} bin (see Figure \ref{fig:mdpl2}). Meanwhile, to evaluate the impact of satellite galaxies on the \topn{} tests, we still need a mock catalogue from simulation that can fit basic HSC observations of massive galaxies and have realistic satellite fraction at high-\mstar{} end. Taking advantage of the work by (DeMartino \etal{} in prep.), we create such a mock catalogue that can reproduce the SMF and clustering statistics of HSC massive galaxies using a sub-halo abundance matching model (SHAM) based on peak halo mass ($M_{\rm Peak}$). We also use this model to constrain the SHMR and its scatter at high-\mhalo{} end. 
In particular, we model the SHMR using the functional form from \citet{Behroozi2013}, but fix the slope at the low-\mhalo{} end ($\beta$). In total, the model has five free parameters: 1) the four parameters that govern the mean SHMR at the high-\mhalo{} end from \citet{Behroozi2013}; and 2) the scatter of \mstar{} at fixed \mhalo{}. As shown in Figure \ref{fig:best_mock}, the best-fit model can reproduce the observed mass function and clustering statistics of massive galaxies reasonably well. To ensure the model can fit the SMF beyond just the high-\mstar{} end, we adopt a ``hybrid'' SMF: we use the complete sample of HSC massive galaxies at $0.2 < z < 0.5$ to cover the \logms{}$>11.5$ range, and use the \texttt{PRIMUS} $0.3 < z < 0.4$ SMF (\citealt{Moustakas2013}) for the $10.5 <$\logms{}$<11.5$ range. Both the HSC and the \texttt{PRIMUS} \mstar{} are from the \texttt{iSEDfit} code under very similar assumptions about stellar population properties. The \mstar{} of the HSC sample is based on our customised 1-D profile that captures the luminosity beyond 100 kpc, while the \texttt{PRIMUS} sample is based on small-aperture photometry. Using the \texttt{PRIMUS} galaxies that also have HSC 1-D \mstar{} measurements from \citet{Huang2018b}, we derive a simple constant offset term that helps us ``stitch'' the two SMFs together. We note that this just ensures a smooth SMF for the fitting, and does not affect any results in this work. As for the clustering signals of HSC massive galaxies, we compute the auto- and cross-correlation signals after separating the sample into three \mstar{} bins: $11.50 <$\logms{}$\leq 11.55$, $11.55 <$\logms{}$\leq 11.70$, and \logms{}$> 11.70$. The best-fit SHMR is broadly consistent with previous works, including the scatter of the \mstar{} values ($\sim 0.2$ dex). More importantly, the satellite fraction at the high-\mstar{} end is between 5 and 10\%, which is also similar to the results of previous works. We also verify that the satellite fraction value is robust to small changes in the abundance matching methodology.
\section{\texorpdfstring{$\Delta\Sigma$}{DSigma} profiles of massive satellite galaxies}
\label{app:sat_cen}
In \S\ \ref{sec:satellite}, we introduced our method for identifying candidates of massive satellite galaxies in our sample and investigated their impact on the stacked \dsigma{} profile. In Figure \ref{fig:sat_cen}, we compare the stacked \dsigma{} profiles of massive satellite galaxies within $11.6 < \log (M_{\star, 100\ \rm kpc}/M_{\odot}) < 11.8$ to that of the central galaxies in the same \mstar{} bin. As explained in \S\ \ref{sec:satellite}, for massive galaxies in our sample, we iteratively identify satellite galaxies with lower \maper{10} in a cylinder with a radius of $R=1$ Mpc and a LOS length of $L=40$ Mpc. We ignore the redshift and \mstar{} uncertainties during this procedure, so strictly speaking these are just candidate satellite galaxies. Within the \maper{100} bin, we find 161 massive satellite galaxies and 1804 central galaxies. The \maper{100} distribution of satellites clearly skews toward lower values than the one for centrals. To make a fair comparison, we match the centrals to satellites in the 2-D \maper{50}--\menve{50}{100} plane: using a k--d tree, we search for the nearest 7 centrals around each satellite and keep the unique centrals. This yields 765 central galaxies with similar \maper{100} and \maper{50}--\menve{50}{100} distributions to the satellites. They also share very similar redshift distributions.
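
To make the matching step described above concrete, a schematic version of it is sketched below. This is purely illustrative (the variable names are hypothetical and it is not an excerpt from our released code): it builds a k--d tree on the centrals in the 2-D \maper{50}--\menve{50}{100} plane and keeps the unique set of the 7 nearest centrals around each satellite.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def match_centrals_to_satellites(sat_xy, cen_xy, k=7):
    # sat_xy, cen_xy: (N, 2) arrays of the two stellar-mass measurements
    # for the satellite and central samples, respectively.
    tree = cKDTree(cen_xy)            # k-d tree over the central galaxies
    _, idx = tree.query(sat_xy, k=k)  # k nearest centrals per satellite
    return np.unique(idx.ravel())     # unique indices of matched centrals
\end{verbatim}
\noindent The 765 central galaxies quoted above are the unique centrals returned by this type of query on our sample.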
In the top panel of Figure \ref{fig:sat_cen}, we compare their \dsigma{} profiles. While the centrals and satellites share similar profiles within the inner 500 kpc, the satellites display clearly enhanced \dsigma{} signals at $R>1$ Mpc. We highlight this result in the bottom panel of Figure \ref{fig:sat_cen} using the ratio of the satellite \dsigma{} profile to that of the centrals. We also show the ratios for satellites selected using different cylinders. This comparison shows that small variations of the radius (from 1.0 to 1.5 Mpc) and length (20 to 40 Mpc) of the cylinders do not affect the results. Figure \ref{fig:sat_cen} shows that, at the same \mstar{}, massive satellite galaxies have very different \dsigma{} profiles from centrals due to the strong impact of their host dark matter halos. Despite their small impact on the stacked \dsigma{} profiles due to the low satellite fraction, the \dsigma{} profile of massive satellite galaxies alone contains valuable information about the galaxy--halo connection of massive galaxies (\eg{} \citealt{Sifon2015, Li2016, Dvornik2020}). We aim to understand this signal better so that satellites can be handled properly when using \mstar{}-based \mvir{} proxies.
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{figure/fig_D1} \caption{ The comparison of \dsigma{} profiles of centrals and satellites at similar \mstar{}. \textbf{Top} panel: Blue crosses show the \dsigma{} profile of satellites within $11.6 < \log (M_{\star, 100\ \rm kpc}/M_{\odot}) < 11.8$ selected using a cylinder with $R=1$ Mpc and $L=40$ Mpc. The red circles show the \dsigma{} profile for a sample of central galaxies with matched \mstar{} distributions. \textbf{Bottom} panel shows the ratios of the stacked \dsigma{} profiles for satellites and centrals selected using different definitions of the cylinders. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figD1.ipynb}{\faGithub}. } \label{fig:sat_cen} \end{figure}
\section{Galaxy size as \texorpdfstring{\mvir{}}{Mvir} indicator}
\label{app:size}
In this work, we have explored different aperture and outskirt stellar masses defined using fixed physical radii (\eg{} 100 kpc, 50 to 100 kpc). They provide unambiguous definitions of apertures, which is important when comparing results from different imaging data or between simulation and observation. But, for galaxies with very different sizes, \mstar{} defined using a fixed radius could have very different physical meanings. For example, while \menve{50}{100} is a good measurement of the outer envelope \mstar{} for very massive elliptical galaxies, it is not even practical to apply it to Milky Way-mass galaxies. The half-light (-mass) radius ($R_{50}$), or the effective radius ($R_{\rm e}$), is a commonly adopted galaxy size measurement. It naturally provides another way to define apertures and outskirts for galaxies. In Figure \ref{fig:scatter_trend_size}, we summarise the \topn{} results for a few different aperture (top panel) and outskirt (bottom panel) \mstar{}. In this work, $R_{50}$ is measured using the $i$-band integrated 1-D intensity profile (also known as the curve-of-growth) along the major axis (so it is not ``circularized''). It is defined as the radius that contains 50\% of the light within a 100 kpc radius.
We choose this definition because the surface brightness profile at $R>100$ kpc becomes less reliable, but replacing 100 kpc with larger radius such as 150 kpc will not change our results. Aperture \mstar{} defined using $R_{50}$ show similar performance with \maper{100}. This is expected for $M_{\star, R_{50}}$ as it represents 50\% of \maper{100} by definition. Meanwhile, none of the other larger aperture masses using $R_{50}$ show any improvement. Outskirt masses using $R_{50}$ do have lower \sigmvir{} values than \maper{100}. While this confirms the result using fixed radius, none of the outskirt \mstar{} using $R_{50}$ has performance as good as \menve{50}{100} especially in Bin 3 \& 4. We note that the stellar masses defined by $R_{50}$ directly ties to the measurement of galaxy size, which is not an easy task. Replacing the $R_{50}$ from 1-D curve-of-growth with the $R_{\rm e}$ from single \ser{} fitting could lead to different results. We will explore more \mvir{} proxies related to galaxy size in future works. \begin{figure} \centering \includegraphics[width=\columnwidth]{figure/fig_E1} \caption{ The relations between the cumulative number density of each \topn{} bin and \sigmvir{} for \mstar{}-based \mvir{} proxies defined using $R_{50}$. The format is the same with Figure \ref{fig:scatter_trend_2} and Figure \ref{fig:scatter_trend}. \textbf{Top} panel shows the \topn{} results for different aperture \mstar{} defined using $R_{50}$ while the \textbf{bottom} panel is for different outskirt \mstar{} defined using $R_{50}$. We use the \sigmvir{} trends for \maper{100} (green dashed line) and \menve{50}{100} (red dot--dashed line) as the references. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figE1.ipynb}{\faGithub}. } \label{fig:scatter_trend_size} \end{figure} \section{The comparison of \texorpdfstring{\dsigma{}}{DSigma} profile between the HSC and SDSS \texorpdfstring{\redm{}}{redMaPPer} clusters} \label{app:sdss_redm} In Figure \ref{fig:sdss_redm}, we compare the \dsigma{} profiles of \redm{} clusters from SDSS survey to those of HSC data, and also to the HSC massive galaxies selected using \menve{50}{100}. We are using the \href{http://risa.stanford.edu/redmapper/}{\texttt{v6.3} catalogue for SDSS DR8}. While the SDSS images are much shallower than HSC, they also suffer less from the over-deblending issue that affects the red--sequence cluster finders using deeper data. The $u$-band image could also help improve the red--sequence redshift of low redshift clusters. Given the redshift coverage and richness completeness of SDSS \redm{}, we do not have enough objects to perform \topn{} tests except for Bin 1. We therefore define two SDSS \redm{} samples for our comparison: 1) 55 clusters in $0.19 < z < 0.50$ and $\lambda_{\rm SDSS} \geq 50$ (top panel of Figure \ref{fig:sdss_redm}; 2) 191 clusters in $0.19 < z < 0.35$ and $\lambda_{\rm SDSS} \geq 20$ (bottom panel). We then use the same redshift bins and number density to select HSC \redm{} clusters and massive galaxies. In Figure \ref{fig:sdss_redm}, we show that the \dsigma{} profiles of SDSS \redm{} clusters are not only consistent with the HSC \redm{} ones, they also display very similar systematic differences with the \menve{50}{100}--selected massive galaxies. This reinforces our conclusions in \S\ \ref{sec:mstar_vs_richness} and \S\ \ref{sec:richness_results}. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{figure/fig_F1} \caption{ Similar to Figure \ref{fig:mout_richness}, here we compare the \dsigma{} profiles of SDSS \redm{} clusters (empty diamonds) to those of HSC \redm{} clusters (solid hexagons) and \menve{50}{100}-selected HSC massive galaxies (solid circles). Given the completeness of the SDSS \redm{} clusters, we show the comparison in two richness and redshift bins: the \textbf{top} panel is for $\lambda_{\rm SDSS} \geq 50$ clusters at $0.2 < z < 0.5$ while the \textbf{bottom} panel is for $\lambda_{\rm SDSS} \geq 20$ clusters at $0.2 < z < 0.35$. \textbf{Left} panels show the comparisons of the lensing profiles. The shaded regions display the best-fit model profiles of the \menve{50}{100} samples. \textbf{Right} panels show the ratio of the \dsigma{} profiles between the richness- and \menve{50}{100}-selected samples. The SDSS \redm{} clusters' \dsigma{} profiles show similar systematic differences from the \menve{50}{100} profile, just like their HSC counterparts. The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figF1.ipynb}{\faGithub}. } \label{fig:sdss_redm} \end{figure}
\section{The comparison of \texorpdfstring{\dsigma{}}{DSigma} profile between the HSC and DES \texorpdfstring{\redm{}}{redMaPPer} clusters}
\label{app:des_redm}
The Dark Energy Survey (DES) has adopted the \redm{} algorithm for finding galaxy clusters (\eg{} \citealt{Rykoff2016}). It is therefore interesting to compare the lensing profiles of HSC and DES \redm{} clusters. Since the overlapping area between DES Y1 and HSC \texttt{S16A} is very small, here we directly compare the stacked \dsigma{} profile of DES \redm{} clusters at $0.20 \leq z < 0.55$ and $20 \leq \lambda_{\rm DES} < 100$ presented in \citet{Chang2018} to their HSC counterparts. We ignore the small offset between richness measurements based on different data and select 285 HSC \redm{} clusters in the same richness and redshift bin. This roughly corresponds to the combination of Bins 1 \& 2 in our \topn{} tests. In \citet{Chang2018}, the authors adopted the same cosmology but used comoving coordinates instead. Therefore, we calculate a new \dsigma{} profile for the HSC clusters using comoving coordinates as well. Figure \ref{fig:des_redm} shows that the \dsigma{} profiles for the HSC and DES \redm{} clusters are broadly consistent with each other. Note that the DES \dsigma{} profile is based on a completely independent lensing catalogue using different algorithms for shear measurement, lensing calibration, and photo-$z$ estimation. While the two profiles show subtle differences at $1 < R < 4$ Mpc, they have very similar overall shapes and amplitudes. This again shows that our results about the shape of the lensing profiles of \redm{} clusters are robust against the choice of imaging dataset and lensing measurements.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figure/fig_G1} \caption{ Comparison of the stacked \dsigma{} profiles of HSC and DES \redm{} clusters within the same richness ($20 \leq \lambda < 100$) and redshift ($0.2 \leq z < 0.55$) bin. The DES \redm{} \dsigma{} profile is from \citet{Chang2018}. In contrast to the other \dsigma{} profiles in this work, we use \emph{comoving} coordinates here to be consistent with \citet{Chang2018}.
The \texttt{Jupyter} notebook for reproducing this figure can be found here: \href{https://github.com/dr-guangtou/jianbing/blob/master/notebooks/figure/figG1.ipynb}{\faGithub}. } \label{fig:des_redm} \end{figure} \bsp \label{lastpage} \clearpage\end{CJK*} \end{document}
{ "alphanum_fraction": 0.7139365129, "avg_line_length": 63.3195435093, "ext": "tex", "hexsha": "2a3ff1423aab9d189e0444d1a811e5f268521303", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0fbf82c973c6761d892115281b52ad9964c731db", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mattkwiecien/jianbing", "max_forks_repo_path": "paper/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0fbf82c973c6761d892115281b52ad9964c731db", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mattkwiecien/jianbing", "max_issues_repo_path": "paper/main.tex", "max_line_length": 191, "max_stars_count": 2, "max_stars_repo_head_hexsha": "0fbf82c973c6761d892115281b52ad9964c731db", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mattkwiecien/jianbing", "max_stars_repo_path": "paper/main.tex", "max_stars_repo_stars_event_max_datetime": "2021-08-18T20:53:40.000Z", "max_stars_repo_stars_event_min_datetime": "2021-08-14T17:33:07.000Z", "num_tokens": 48243, "size": 177548 }
\documentclass[titlepage,fleqn]{article}
\usepackage[version=3]{mhchem}
\begin{document}
\title{AP Chemistry Lab 8: Analysis of a Mixture}
\author{Bay Foley-Cox \\\\ Lab partners: \\ \\Justin Schaaf, Evan Beal}
\date{Performed on October 5th, 2017}
\maketitle
\section{Introduction}
This lab makes use of the idea that information about an unknown substance can be determined by reacting that substance and measuring its products. In this lab, the unknown substance is a mixture of a metal carbonate and a metal bicarbonate. The goal is to determine the percent by mass of the bicarbonate. When heated above 110$^\circ$C, metal bicarbonates decompose by the following reaction: \ce{2 MHCO3(s) -> M2CO3(s) + H2O(g) + CO2(g)}. Metal carbonates, on the other hand, remain stable at temperatures below 800$^\circ$C. Since both the water and the carbon dioxide in this equation are gases, they escape during heating. Therefore, their combined mass can be determined from the difference between the pre- and post-heating masses. This mass can be converted to moles of \ce{CO2} and \ce{H2O}. Then, using the stoichiometric ratio from the equation of the decomposition reaction, the moles of bicarbonate can be found as well. This can be converted to the mass of the original bicarbonate. Its percent by mass in the mixture can then be found by dividing its mass by the total mass of the initial sample.
\section{Procedure}
First, the mass of a crucible and lid was found. Then, 1 to 2 grams of the unknown mixture of a metal bicarbonate and a metal carbonate were added to the crucible. The crucible and cover were subsequently reweighed. Next, the crucible was heated on a Bunsen burner at a temperature above 110$^\circ$C and below 800$^\circ$C for five minutes with its lid ajar. Afterwards, it was allowed to cool fully and then was weighed again. This process of heating, cooling, and reweighing was repeated until the change in mass after each heating became negligible. At this point the final mass was recorded.
\section{Data and Observations}
\subsection{Data Collected}
\begin{table}[h]
\def\arraystretch{1.5}
\begin{tabular}{|c|c|}
\hline
Mass of crucible and lid & 23.359 g \\ \hline
Mass of crucible, lid, and mixture & 25.397 g\\ \hline
Mass of crucible, lid, and mixture after first heating & 24.795 g\\ \hline
Mass of crucible, lid, and mixture after second heating & 24.793 g\\ \hline
\end{tabular}
\end{table}
\subsection{Observations}
\begin{itemize}
\item The mass of the crucible and its contents only changed by 0.001 g, which is within the balance's margin of error, between the first and second weighing. This indicates that all the sodium bicarbonate was successfully reacted.
\item The contents of the crucible looked identical before and after the heating, meaning it was impossible to assess visually whether the reaction had occurred.
\end{itemize}
\section{Calculations}
$$\mbox{mass mixture} = \mbox{mass crucible, lid, contents} - \mbox{mass crucible, lid}$$
$$\mbox{mass mixture} = 25.397\,\mathrm{g} - 23.359\,\mathrm{g} = 2.038\,\mathrm{g}$$
$$\Delta \mbox{mass} = \mbox{mass mixture} - (\mbox{mass crucible, lid, contents after final heating} - \mbox{mass crucible, lid})$$
$$\Delta \mbox{mass} = 2.038\,\mathrm{g} - (24.793\,\mathrm{g} - 23.359\,\mathrm{g}) = 0.604\,\mathrm{g}$$
This lost mass is the combined mass of the \ce{H2O} and \ce{CO2} released. From the decomposition reaction, every 2 mol of \ce{NaHCO3} ($2 \times 84.01 = 168.01$ g) releases 1 mol of \ce{H2O} and 1 mol of \ce{CO2} ($18.02 + 44.01 = 62.03$ g combined). Therefore:
$$\mbox{mass } \ce{NaHCO3} = 0.604\,\mathrm{g} \times \frac{168.01}{62.03} = 1.636\,\mathrm{g}$$
$$\mbox{percent by mass } \ce{NaHCO3} = \frac{1.636\,\mathrm{g}}{2.038\,\mathrm{g}} \times 100\% \approx 80.3\%$$
\section{Conclusion}
The percentage by mass of sodium bicarbonate in the mixture of sodium bicarbonate and sodium carbonate was 80.28\%.
\end{document}
{ "alphanum_fraction": 0.759543132, "avg_line_length": 59.4107142857, "ext": "tex", "hexsha": "52a3a8e08e235f791d43d05ea1a4f11ef4d3cbd6", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c5a2d78d1e69544fb89c9d6d4779fb782db08869", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "Bafoleco/Personal-Site", "max_forks_repo_path": "public_html/lab8.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c5a2d78d1e69544fb89c9d6d4779fb782db08869", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "Bafoleco/Personal-Site", "max_issues_repo_path": "public_html/lab8.tex", "max_line_length": 1062, "max_stars_count": null, "max_stars_repo_head_hexsha": "c5a2d78d1e69544fb89c9d6d4779fb782db08869", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "Bafoleco/Personal-Site", "max_stars_repo_path": "public_html/lab8.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 852, "size": 3327 }
\documentclass[11pt]{scrartcl}
\usepackage{graphicx}
\title{\textbf{HTML5 key frame based videoplayer}}
\author{\emph{Peter Spiess-Knafl, Armin Trattnig}}
\begin{document}
\date{}
\maketitle
\section{Motivation}
We decided to choose this project because it was the most interesting one for both of us. We also thought that a good video browser could be useful in everyday life. Furthermore, neither of us had much practical experience in HTML5/JavaScript programming.
\begin{center}\includegraphics[scale=0.5]{screenshot}\end{center}
\section{Implementation}
\subsection{User interface}
We used Bootstrap\footnote{http://getbootstrap.com} for a nice-looking user interface, because neither of us is an expert in web design.
\subsection{Architecture}
We split the source code into two main parts:
\begin{itemize}
\item index.html: contains all visible HTML elements (player, timeline, keyframes, buttons).
\item videoplayer.js: contains the actual implementation of the videoplayer.
\end{itemize}
\subsection{Important functions}
There are three main functions that are important for the videoplayer:
\begin{itemize}
\item \emph{initThings()}: This function binds all variables to the HTML tags by using jQuery\footnote{http://jquery.com}
\item \emph{drawThumbs()}: Extracts random keyframes according to the zoom level.
\item \emph{newTimeline()}: Draws everything visible on the timeline (timestamps, key frame references, current playback time).
\end{itemize}
\section{Problems}
We ran into several problems during the programming phase:
\subsection{Frame extraction}
This was the first problem we ran into. How can we extract frames from future timestamps while the player is playing the video at the current playback time? The HTML5 \emph{video tag} does not have native support for that. So we decided to clone the existing video tag and use the clone for frame extraction. \\
With the \emph{seeked event}, we were able to extract future frames or even frames at a random time in the video, without disturbing the current playback.
\subsection{The timeline}
The timeline was a very challenging part, because we got no support from any existing HTML5 elements besides a plain canvas. So we had to render everything that is visible on the timeline manually:
\begin{itemize}
\item Current time resolution (blue lines)
\item Current playback spot (red line)
\item Reference to key frames (green lines)
\item Timestamps according to the current zoom level/playback time
\end{itemize}
Especially the zoom factor required a lot of thought to calculate the correct x-pixel offset on the canvas and also the corresponding timestamps (in HH:MM:SS).
\subsection{Zooming}
At first we chose a zoom factor of 1 / N (N being the number of keyframes) on each zooming step. However, especially for our test video (which was about 10 minutes long), this approach was not very satisfying, because after zooming twice you could only see familiar frames. So we decided to use a constant zoom factor of 2. This gave us far better results in the random key-frame extraction and is still sufficient for longer videos.
\section{Conclusion}
We have learned a lot about HTML5/JavaScript coding, and it was quite fun. A very nice side effect is that we actually built something that we can make use of in everyday life. It is a powerful tool.
\end{document}
{ "alphanum_fraction": 0.7906766917, "avg_line_length": 49.6268656716, "ext": "tex", "hexsha": "cef3cdd5bd366afb6b76eb29660d62df35ea4f09", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-06-27T15:04:34.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-27T15:04:34.000Z", "max_forks_repo_head_hexsha": "50c8dc6f6429e050bd89204c607d30a01579dc22", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cinemast/html5videobrowser", "max_forks_repo_path": "doc/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "50c8dc6f6429e050bd89204c607d30a01579dc22", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cinemast/html5videobrowser", "max_issues_repo_path": "doc/report.tex", "max_line_length": 266, "max_stars_count": 4, "max_stars_repo_head_hexsha": "50c8dc6f6429e050bd89204c607d30a01579dc22", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cinemast/html5videobrowser", "max_stars_repo_path": "doc/report.tex", "max_stars_repo_stars_event_max_datetime": "2020-06-24T08:33:09.000Z", "max_stars_repo_stars_event_min_datetime": "2015-04-28T16:59:18.000Z", "num_tokens": 782, "size": 3325 }
\documentclass{article} %[11pt]{amsart} \usepackage{geometry} \geometry{letterpaper} \usepackage{graphicx} \usepackage{amssymb} \usepackage{amsmath} \usepackage{siunitx} \usepackage{tikz} \usepackage{lscape} \usepackage{pgfplots} \usetikzlibrary{ arrows, calc, decorations.markings, decorations.pathreplacing, dsp, fit, positioning } \input{../util/control.tex} \newcommand{\qhat}{\hat{q}} \newcommand{\cbar}{\bar{c}} \newcommand{\qbar}{\bar{q}} \newcommand{\cmd}{\mathrm{cmd}} \newcommand{\ff}{\mathrm{ff}} \newcommand{\eff}{\mathrm{eff}} \newcommand{\app}{\mathrm{app}} \newcommand{\wind}{\mathrm{wind}} \newcommand{\kite}{\mathrm{kite}} \newcommand{\nom}{\mathrm{nom}} \newcommand{\aero}{\mathrm{aero}} \newcommand{\geom}{\mathrm{geom}} \begin{document} \section{Describing function analysis of rate limit nonlinearity} There are two modes for the rate limit nonlinearity: one where the signal is only rate limited for a portion of the period and another where the signal is rate limited for the entire duration. For simplicity, we focus on the latter case. Reference \cite{pratt2000} has a good analysis of this case, and reference \cite{ponce2003} has a more complete analysis of both cases. The describing function for a rate-limit, when it is completely rate-limiting the input signal is: \begin{eqnarray} \gamma &=& \sin^{-1} \left( \frac{\pi}{2} \frac{R}{M \omega} \right) \\ N(M, \omega) &=& \frac{4 R}{\pi M \omega} (\sin \gamma + i \cos \gamma) \end{eqnarray} This may be reduced to a function of one variable $x = M \omega / R$: \begin{eqnarray} N(x) &=& \frac{2}{x^2} + i \frac{4}{\pi x} \cos \sin^{-1} \left( \frac{\pi}{2 x} \right) \\ &=& \frac{2}{x^2} \left(1 + i \sqrt{\left(\frac{2 x}{\pi}\right)^2 - 1} \right) \end{eqnarray} \begin{equation} -1/N(x) = -\frac{\pi^2}{8} \left( 1 - i \sqrt{\left(\frac{2 x}{\pi}\right)^2 - 1} \right) \end{equation} \begin{equation} C(s) = k_p + k_d s + k_i / s \end{equation} \begin{equation} G_{\mathrm{motor}}(s) = \frac{\omega_m}{s + \omega_m} \end{equation} \begin{equation} G_{\mathrm{yaw}}(s) = \frac{1}{I_{zz} s^2} \end{equation} \begin{equation} L(s) = C(s) G_{\mathrm{motor}}(s) G_{\mathrm{yaw}}(s) \end{equation} \begin{table} \begin{center} \begin{tabular}{cc} \hline \hline Variable & Value \\ \hline $k_p$ & $2.58 \times 10^5$ \\ $k_i$ & $2.04 \times 10^4$ \\ $k_d$ & $1.31 \times 10^5$ \\ $I_{zz}$ & $3 \times 10^4$ \\ $\omega_m$ & $2 \pi \cdot 6$ \\ \hline \hline \end{tabular} \end{center} \end{table} \begin{figure}[!ht] \begin{center} \begin{tikzpicture} \begin{axis}[ scale=1.3, xlabel=Re, xmin=-5, xmax=5, ylabel=Im, ymin=-5, ymax=5, grid=both ] \def\a{(2 / (x * x))}; \def\b{(2 / (x * x) * sqrt((2 * x / 3.14)^2 - 1))}; \def\m{(\a * \a + \b * \b)}; \def\kp{2.58e5}; \def\ki{2.04e4}; \def\kd{1.31e5}; \def\Izz{3e4}; \def\omegam{(2 * 3.14 * 6)}; \def\Ga{(-\kd * \omegam * x^2 + \ki * \omegam)}; \def\Gb{(\kp * \omegam * x)}; \def\Gc{(\Izz * x^4)}; \def\Gd{(-\Izz * \omegam * x^3)}; \def\Gm{(\Gc * \Gc + \Gd * \Gd)}; \addplot [smooth, color=blue, mark=none, domain=0:10] ({-\a / \m}, {\b / \m}); \addplot [smooth, color=green, mark=none, domain=-15:-1] ({(\Ga * \Gc + \Gb * \Gd) / \Gm}, {(\Gb * \Gc - \Ga * \Gd) / \Gm}); \addplot [smooth, color=green, mark=none, domain=1:15] ({(\Ga * \Gc + \Gb * \Gd) / \Gm}, {(\Gb * \Gc - \Ga * \Gd) / \Gm}); \end{axis} \end{tikzpicture} \caption{$L(j \omega) = -1 / N(M \omega / R)$ Intersection occurs at $x = 2.44$ and $\omega = -2.77$ rad/s.} \label{fig:rlocus} \end{center} \end{figure} \begin{thebibliography}{1} \bibitem{pratt2000} Pratt, 
Roger. ``Flight Control Systems: Practical Issues in Design and Implementation.'' {\it IET}. 2000.\ \bibitem{ponce2003} Ponce, E. and Roman, M. ``The describing function method accuracy in first order plants with rate-limited feedback.'' {\it Proceedings of the European Control Conference}. 2003. \end{thebibliography} \end{document}
{ "alphanum_fraction": 0.6253723932, "avg_line_length": 27.033557047, "ext": "tex", "hexsha": "fe0842fba02f56eb4e697421f1069263e782f2dd", "lang": "TeX", "max_forks_count": 107, "max_forks_repo_forks_event_max_datetime": "2022-03-18T09:00:14.000Z", "max_forks_repo_forks_event_min_datetime": "2020-09-10T17:29:30.000Z", "max_forks_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "leozz37/makani", "max_forks_repo_path": "documentation/control/hover/hover_angles.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_issues_repo_issues_event_max_datetime": "2020-05-22T05:22:35.000Z", "max_issues_repo_issues_event_min_datetime": "2020-05-22T05:22:35.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "leozz37/makani", "max_issues_repo_path": "documentation/control/hover/hover_angles.tex", "max_line_length": 83, "max_stars_count": 1178, "max_stars_repo_head_hexsha": "c94d5c2b600b98002f932e80a313a06b9285cc1b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "leozz37/makani", "max_stars_repo_path": "documentation/control/hover/hover_angles.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-31T14:59:35.000Z", "max_stars_repo_stars_event_min_datetime": "2020-09-10T17:15:42.000Z", "num_tokens": 1559, "size": 4028 }
\chapter{Odem Perks}\label{ch:odemPerks}
Odem is a strange and mystical power that some people are born with. It usually lies dormant for a long time, until the person is in emotional or physical distress and it manifests as a psychic burst of energy that harms everyone in their surroundings. Most people consider Odem to be a curse, and thus many Odem-wielders are enslaved, ostracized, or outright banned from civilized countries. In order to combat this, the Church of Four has created an elite troupe of hunters called ``The Seekers'', who are attuned to Odem and can feel it manifest in others. Those found by the Seekers are usually captured and brought to a monastery, temple or other holy organization, where they receive a Sigil that suppresses their powers. Oftentimes, these holy places will then keep the person prisoner.\\
Some Odem-wielders have been able to manifest their Odem in the form of coloured flames, each of which has different effects. These so-called ``Dervishes'' can summon their flames and invoke powerful effects with them. However, mastering a flame is difficult, and takes many years of training, meditation and experience.\\
To reflect this, the already acquired levels in one flame perk accumulate and increase the cost of the next flame perk level. This results in the following perk costs:\\
\\
Level Progression:\\
\\
\begin{minipage}{0.30\textwidth}
\rowcolors{2}{lightgray}{white}
\begin{tabular}{l | l}
Total Flame Level & Cost\\
\hline
I & 100\\
II & 200\\
III & 400\\
IV & 700\\
V & 1,100\\
\end{tabular}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\rowcolors{2}{lightgray}{white}
\begin{tabular}{l | l}
Total Flame Level & Cost\\
\hline
VI & 1,600\\
VII & 2,200\\
VIII & 2,900\\
IX & 3,700\\
X & 4,600\\
\end{tabular}
\end{minipage}
\begin{minipage}{0.30\textwidth}
\rowcolors{2}{lightgray}{white}
\begin{tabular}{l | l}
Total Flame Level & Cost\\
\hline
XI & 5,600\\
XII & 6,700\\
XIII & 7,900\\
XIV & 9,200\\
XV & 10,600\\
\end{tabular}
\end{minipage}
\input{perks/odem/odemcurse.tex}
\input{perks/odem/odemsigil.tex}
\input{perks/odem/redOdemFlame.tex}
\input{perks/odem/blueOdemFlame.tex}
\input{perks/odem/greenOdemFlame.tex}
{ "alphanum_fraction": 0.6953025815, "avg_line_length": 40.7413793103, "ext": "tex", "hexsha": "ddd15864e115ffe83d47f5d514e662cb04ffd246", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_forks_repo_path": "perks/odem.tex", "max_issues_count": 155, "max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_issues_repo_path": "perks/odem.tex", "max_line_length": 185, "max_stars_count": 6, "max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_stars_repo_path": "perks/odem.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z", "num_tokens": 678, "size": 2363 }
\chapter{One-dimensional spaces} We can divide one-dimensional spaces into two categories: \emph{lines} and \emph{curves}. Lines are straight. Curves may bend. Thus every line is a curve. \ResearchQuestion{% Drawing a line is easy, but how do we describe a line \emph{algebraically}? } \ResearchQuestion{% A line is a straight one-dimensional space. \emph{Straight} is defined by the ambient space that contains the line. How do we define \emph{straight}? } \section{Defining a line} If two points \(a\) and \(b\) are on a line, then their midpoint \((a + b) / 2\) is also on the line. (These aren't obvious to the uninitiated?) If both \(a\) and \(b\) are on a line, then the point \(k \cdot (b - a) + a\) is also on the line, for every \(k:\Real\). If three points \(a,b,c\) are on a line, then the displacements \(b-a\) and \(c-b\) are parallel. \paragraph{Infinite extension of a line segment} A \emph{line segment} is what is drawn using a straightedge. A \emph{line} is obtained by infinitely extending a line segment in both directions. \paragraph{Embedding of \(\Real\)} \emph{A line is a straight embedding of \(\Real\).} A line is something straight-shaped and isomorphic to \(\Real\). But there is a problem with that definition. That definition may include a pathological sheet, which should be a two-dimensional object. This is a mapping from \(\Real^2\) to \(\Real\): Let there be two real numbers \(a\) and \(b\). Define the number \(c\) as \(\ldots a_1 b_1 a_0 b_0 . a_{-1} b_{-1} a_{-2} b_{-2} \ldots\).% \footnote{\url{https://math.stackexchange.com/questions/75107/injective-map-from-mathbbr2-to-mathbbr}}% \footnote{\url{https://math.stackexchange.com/questions/183361/examples-of-bijective-map-from-mathbbr3-rightarrow-mathbbr}} We can define a line by a parametric equation: \( x(k) = k g + p \). We can define a line by an algebraic equation: \( a \cdot x + b = 0 \). \subsection{Defining a line as a curve with constant velocity} The \emph{velocity} of the curve \( x : \Real \to \Real^n \) is the derivative of \(x\). The velocity of \(x\) is the rate of change of \(x\). \emph{A line is a curve whose velocity is constant.} \subsection{Defining a line as a geodesic} \section{Describing lines in a two-dimensional ambient space} Every line can be described as the set \( \{ (x,y) ~|~ (x,y) \in \Real^2, ~ a x + b y = c \} \). (Why?) \subsection{Describing a line that passes two points} Describe a line that passes \((x_1,y_1)\) and \((x_2,y_2)\). The description is \begin{align*} a x_1 + b y_1 &= c \\ a x_2 + b y_2 &= c \end{align*} Rearrange: \begin{align*} x_1 a + y_1 b &= c \\ x_2 a + y_2 b &= c \end{align*} Solve for \(a,b,c\). \begin{align*} \Matrix{x_1 & y_1 \\ x_2 & y_2} \Matrix{a \\ b} = \Matrix{c \\ c} \end{align*} We can solve it using GNU Octave by typing \verb@[x1,y1;x2,y2] \ [c;c]@ but we have to substitute the variables with numbers first. \subsection{Finding the angle formed by two lines} \subsection{Translating lines and describing parallel lines} Translating a line produces another line that is parallel to the original line. Two lines \(ax+by=c\) and \(a'x+b'y=c'\) are parallel iff \(\abs{a/b} = \abs{a'/b'}\)? \subsection{Describing orthogonal lines} This is important for tangents, normals, and osculating circles. \section{Describing higher-dimensional lines} Every \(n\)-dimensional line can be described as \( \{ k g + p ~|~ k \in \Real \} \) if \(g, p : \Real^n\). Describe a line that passes \(x_1\) and \(x_2\). 
The description is \begin{align*} x_1 &= k_1 g + p \\ x_2 &= k_2 g + p \end{align*} \begin{align*} x_i - p &= k_i g \end{align*} How do we solve the equation \(a = kb\) if \(a,b\) are vectors and \(k\) is a scalar?
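
As a worked illustration of the two-dimensional case above (the two points below are my own example, not taken from any particular source): to describe the line through \((1,2)\) and \((3,5)\), fix \(c = 1\) (any nonzero value works, provided the line does not pass through the origin) and solve for \(a\) and \(b\) in GNU Octave:
\begin{verbatim}
% Line a*x + b*y = c through (1,2) and (3,5), with c fixed to 1.
A  = [1 2; 3 5];     % each row is [x_i y_i]
ab = A \ [1; 1];     % solves A * [a; b] = [c; c]
% ab = [-3; 2], i.e. -3*x + 2*y = 1, equivalently y = 1.5*x + 0.5.
\end{verbatim}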
\section{Used technologies}\label{sec:used-technologies}

Now, I would like to introduce the reader to the technologies used to build the migration tool. As already stated in the non-functional requirements, the tool needs to be fully compatible and interoperable with the current application.

\subsection{Server application}

I described the tools currently used in the \gls{dsw} in the analysis in chapter \ref{cptr:analysis}. Here, I will briefly highlight the main parts.

\subsubsection*{Haskell programming language}

The server part of the application is fully implemented in the Haskell programming language. Haskell is a purely functional programming language. This means that all functions are \textit{pure} and all data are immutable\cite{haskell-web}. By a pure function, we understand a function that is free of side effects. Thanks to the elimination of side effects, functions become simpler and easier to reason about -- a function returns the same output for the same input every time.

Because the entire server application was already built in Haskell, there was not much space for choosing a programming language. The migration tool could be built as an individual service (often called a microservice\cite{micros-web}) in an arbitrary programming language. This would, however, introduce unnecessary complexity not only in the development itself but also in deployment and application management. Because building a microservice would mean rebuilding a significant part of the existing application (most of the model layer and the \gls{api} layer), I decided to implement the migrator as a new module which is part of the existing code base.

\subsubsection*{Integrated Development Environment}

An \gls{ide} is a software application integrating numerous tools that help speed up development\cite{ssq-ide}. Such an application helps with code syntax highlighting, compiling, testing or even deploying the developed application. To name a few, applications like Atom, Visual Studio Code or IntelliJ IDEA support development in Haskell\footnote{Haskell is usually not supported out of the box by \gls{ide}s. Instead, a plugin with language support and advanced features needs to be installed. Those plugins are usually based on either \texttt{ghc-mod} or \texttt{Intero} libraries.}.

For a project as extensive as the \gls{dsw}, none of those applications worked correctly. All of them suffered from poor performance, incorrect symbol recognition, and invalid error reporting. After consultation with team members of the \gls{dsw} maintainers, I decided to turn off all advanced language support and used only syntax highlighting in IntelliJ IDEA. This limitation unfortunately had a significant impact on the development time and on orientation in the project.

\subsubsection*{Scotty web framework}

The communication between the server and the client application is done using the \gls{rest} \gls{api}. The \gls{api} interface is built on top of the \textit{Scotty web framework}. Scotty is a framework written in Haskell which allows creating type-safe \gls{api} routes and provides convenient helper functions for parsing \gls{http} requests.

Most of the work on integrating Scotty with the \gls{dsw} was already done when I joined the project. My only interaction with the framework was to register all supported routes for the migration tool and to convert data between the internal representation and the public \gls{json} format.
\subsection{Client application}

Similarly to the server tooling, the technologies used on the client were already decided by the \gls{dsw} maintainers at the beginning of the project. In the application analysis in chapter \ref{cptr:analysis}, I described in depth the client-side application architecture and its base modules. In this section, I would like to briefly summarize the tools and technologies used.

\subsubsection*{Elm programming language}

The entire client application is currently written in the Elm programming language (described in \ref{sec:frontend-application}). Elm is a functional language with a syntax similar to Haskell. Its approach to building frontend applications lies in a unidirectional architecture called the \texttt{Elm architecture}. The term unidirectional describes how data are passed through the application. In Elm, the data are always passed one way, through the base architectural pattern represented by the \texttt{model}, \texttt{update} and \texttt{view}. The application state is represented by the \texttt{model}, changed using the \texttt{update} function, and then presented in the \texttt{view}.

The great advantage of using Elm is that its functional approach may be shared and discussed with the server-side team because of the syntax similarity. Even though both languages are functional, there are a few differences\cite{mmh-elm-func-fe}. I would like to point out the two most significant ones. The first one is the lack of \textit{Typeclasses} in Elm. This makes it harder to create the generic constraints which are widely used in Haskell, and it makes the Elm \texttt{core} library more verbose, as functions such as \texttt{map} must be implemented for each type individually. On the other hand, Elm's record syntax is much more powerful than Haskell's. In Elm, record field accessors may be used as free functions (taking a record as an argument and returning the field value) out of the box. These accessors may thus be used in function composition, in combination with functions like \texttt{Maybe.map}. This makes the code easy to read and maintain. Haskell offers a similar feature through the \texttt{lens} library; it, however, requires records to be structured in a specific way, which makes them harder to read.

\subsubsection*{Integrated Development Environment}

After the struggle I had with choosing the right Haskell \gls{ide}, I decided to keep the Elm environment as simple as possible. I chose Visual Studio Code (or \textit{VS Code}, for short) for development as it is lightweight and, based on my own experience, faster and more responsive. I used VS Code together with the \textit{elm}\cite{gh-elm-pg} plugin, which enables features like syntax highlighting, error reporting, type definitions and ``jump to definition''. This made my onboarding on the project much faster and the development more convenient than doing the same things on the server side.

\subsubsection*{Node environment and Webpack}

Because Elm runs in an internet browser, it needs some kind of a web server to handle requests and serve the application to the user. In production, open-source servers like NGINX\footnote{NGINX homepage: \url{https://www.nginx.com}.} or Apache\footnote{Apache HTTP server homepage: \url{https://httpd.apache.org}.} are often used\cite{nc-webservers}. For development purposes, Elm offers a simple \gls{http} server called \textit{reactor}. With the reactor, the developer is able to run arbitrary Elm source code in a browser.
The reactor is, however, shipped with its own \gls{html} template, which means that the code referenced from the application's \gls{html} will not be loaded. This includes stylesheets, custom JavaScript scripts and \textit{ports} (the Elm-to-JavaScript interoperability \gls{api}). Together with the Elm code, the application sources also contain \glsentryshort{sass}, which is compiled into standard \glsentryshort{css}. Therefore, building the application is a multistep process -- a process which cannot be handled by the reactor.

There are many solutions to this problem; the maintaining team of the \gls{dsw} chose the \texttt{Node.js}\footnote{Node.js$^\textrm{\textregistered}$ is a JavaScript runtime built on Chrome's V8 JavaScript engine.} environment together with the \texttt{Webpack} tool. Webpack is used to compile all application sources with the appropriate compilers and to create an application bundle from the output. Webpack also ships with a development server (as a separate tool) which is able to watch the application sources and run the compilers whenever a source file changes. In addition to that, it is capable of refreshing the browser window, so the changes are visible immediately. The production build is also done using Webpack, which applies the production configuration to all bundled sources.

\subsubsection*{Text Difference}

The functional requirements from section \ref{sec:functional-requirements} say that the application should show the exact text difference for all migrated questionnaire nodes. Because there is no information about the migration differences stored on the server, this feature is fully implemented on the client side.

The problem of measuring the difference between two given texts is known as a \textit{string metric} (also known as \textit{similarity metric} or \textit{string distance})\cite{wiki-string-metric}. There are multiple algorithms solving the string-distance problem. To name a few, there are Myers' algorithm\cite{art-myer} and Wu's algorithm\cite{art-wu}, which are based on the same idea. Wu's algorithm, however, achieves up to four times better performance for strings which share most of their characters. In migrations, I assume the string differences will in most cases be rewordings or typo corrections. Therefore, Wu's algorithm is an excellent fit for this problem.

This algorithm is implemented in the \texttt{elm-diff}\cite{epkg-elm-diff} open-source library. For any given collection of elements, it will return a new collection with the difference information for each element. Such elements may be in the following states:

\vbox{%
\begin{itemize}
	\item NoChange,
	\item Added,
	\item Removed.
\end{itemize}
}

With a slight modification, it can be used to receive strings (which are not collections in Elm) directly. The output can then be used to render the string character by character, highlighting each character with the appropriate color. The usage of the string difference (built on top of \texttt{elm-diff}) is shown in listing \ref{code:elm-diff}.

\elmcode{code:elm-diff}{String difference using \texttt{elm-diff}}{elm-diff.elm}
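For readers who do not follow Elm, the following short, stand-alone Python sketch (not part of the \gls{dsw} code base; the function name \texttt{char\_diff} and the sample strings are made up) illustrates the same three difference states using the standard \texttt{difflib} module:

\begin{verbatim}
from difflib import SequenceMatcher

def char_diff(old, new):
    """Return (state, text) pairs with states mirroring elm-diff:
    'NoChange', 'Added', 'Removed'."""
    parts = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if op == "equal":
            parts.append(("NoChange", old[i1:i2]))
        elif op == "insert":
            parts.append(("Added", new[j1:j2]))
        elif op == "delete":
            parts.append(("Removed", old[i1:i2]))
        else:  # "replace" is a removal followed by an addition
            parts.append(("Removed", old[i1:i2]))
            parts.append(("Added", new[j1:j2]))
    return parts

print(char_diff("What is your name?", "What is your full name?"))
# [('NoChange', 'What is your '), ('Added', 'full '), ('NoChange', 'name?')]
\end{verbatim}

Decomposing a replacement into a removal followed by an addition corresponds to how the rendered diff highlights removed and added characters with different colors.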
\section{Attack Tree}

\newlist{myEnumerate}{enumerate}{9}
\setlist[myEnumerate,1]{label*=\arabic*.}
\setlist[myEnumerate,2]{label*=\arabic*.}
\setlist[myEnumerate,3]{label*=\arabic*.}
\setlist[myEnumerate,4]{label*=\arabic*.}
\setlist[myEnumerate,5]{label*=\arabic*.}
\setlist[myEnumerate,6]{label*=\arabic*.}
\setlist[myEnumerate,7]{label*=\arabic*.}
\setlist[myEnumerate,8]{label*=\arabic*.}
\setlist[myEnumerate,9]{label*=\arabic*.}

\begin{myEnumerate}
\item Compromise the software/firmware update
\begin{myEnumerate}[label*=\arabic*.]
  \item Gather knowledge (and)
  \begin{myEnumerate}[label*=\arabic*.]
    \item Find the IP address or URL of the webserver (or)
    \begin{myEnumerate}[label*=\arabic*.]
      \item Google the website of the target USB
      \item Collect the static contents of the website (e.g. CSS, JavaScript, images, \dots).
    \end{myEnumerate}
    \item Get the same model as the target USB
    \begin{myEnumerate}[label*=\arabic*.]
      \item Try to see if it is easy to feed in any malware/keylogger
    \end{myEnumerate}
    \item Find the place (IP range) where he/she uses the target USB
    \begin{myEnumerate}[label*=\arabic*.]
      \item Use a port scanner in the same subnet
      \item Tap his/her network cable
      \item Find the public key of the target USB
    \end{myEnumerate}
  \end{myEnumerate}
  \item Build a fake website which looks like the real one (and)
  \begin{myEnumerate}[label*=\arabic*.]
    \item Prepare the MITM attack
    \begin{myEnumerate}[label*=\arabic*.]
      \item Do the reverse engineering to build a malicious app to feed into the target USB
    \end{myEnumerate}
  \end{myEnumerate}
  \item Gain access (and)
  \begin{myEnumerate}[label*=\arabic*.]
    \item Use the DNS spoofing trick to redirect the victim's app-update access to the fake website
    \begin{myEnumerate}[label*=\arabic*.]
      \item Wait until the target USB tries to access the website to get updated (or)
      \begin{myEnumerate}[label*=\arabic*.]
        \item Intercept the connection to the real website (or)
        \begin{myEnumerate}[label*=\arabic*.]
          \item Push the malicious app/firmware (and)
          \begin{myEnumerate}[label*=\arabic*.]
            \item Collect all the passwords of the target USB
          \end{myEnumerate}
        \end{myEnumerate}
      \end{myEnumerate}
      \item Mail a fake notification of a new release of the app with the URL of the fake website
      \begin{myEnumerate}[label*=\arabic*.]
        \item Intercept the connection to the real website (or)
        \begin{myEnumerate}[label*=\arabic*.]
          \item Push the malicious app/firmware (and)
          \begin{myEnumerate}[label*=\arabic*.]
            \item Collect all the passwords of the target USB
          \end{myEnumerate}
        \end{myEnumerate}
      \end{myEnumerate}
    \end{myEnumerate}
    \item Use the IP spoofing trick to intercept the victim's app-update access and direct it to the fake website
    \begin{myEnumerate}[label*=\arabic*.]
      \item Wait until the target USB tries to access the website to get updated (or)
      \begin{myEnumerate}[label*=\arabic*.]
        \item Intercept the connection to the real website (or)
        \begin{myEnumerate}[label*=\arabic*.]
          \item Push the malicious app/firmware (and)
          \begin{myEnumerate}[label*=\arabic*.]
            \item Collect all the passwords of the target USB
          \end{myEnumerate}
        \end{myEnumerate}
      \end{myEnumerate}
      \item Mail a fake notification of a new release of the app with the URL of the fake website
      \begin{myEnumerate}[label*=\arabic*.]
        \item Intercept the connection to the real website (or)
        \begin{myEnumerate}[label*=\arabic*.]
          \item Push the malicious app/firmware (and)
          \begin{myEnumerate}[label*=\arabic*.]
\item Collect all the passwords of the target USB \end{myEnumerate} \end{myEnumerate} \end{myEnumerate} \end{myEnumerate} \end{myEnumerate} \end{myEnumerate} \end{myEnumerate}
\part{}

In this first half of the paper, I will consider several semantic paradigms: Conceptual Role Semantics, Conceptual Space Theory, Truth-Theoretic Semantics, and ultimately the Interface Theory of Meaning (a term due to Orlin Vakarelov).
I approach this material as, hopefully, a mediator: I'd like to negotiate the differences and aggregate the best parts of these analyses, while avoiding reductionism and remaining true to phenomenology.
I will eventually tie the pieces together into a \q{Cognitive State} Semantics \mdash{} but in this case semantic analysis overlaps with theories of syntax, so I will develop this aggregative perspective more in Part II, which addresses syntax more head-on.

\section{Conceptual Role Semantics and Externalism}

\p{Conceptual Role Semantics is often discussed together with a particular internalism/externalism debate which it tends to engender.
Here I want to defend a kind of Conceptual Role Semantics (hereafter CRS), but I will first outline an account of compromise between externalism and internalism.
I will suggest a compromise different, I believe, from Ned Block's \q{two factor} model, which seems to be considered the leading example of an externalist/internalist hybrid.
}

\p{The basic CRS picture is that linguistic meanings should be associated with conceptual roles in our understanding of situations more than in terms of their reference to external objects.
Given sentences like
\begin{sentenceList}\sentenceItem{} \label{itm:ornate} He opened the wine bottle with an ornate corkscrew.
\sentenceItem{} \label{itm:butterfly} He opened the beer bottle with a butterfly corkscrew.
\sentenceItem{} He collects antique corkscrews and just bid on one online.
\sentenceItem{} \label{itm:screwtop} I thought this was a screw-top but it turns out I need a corkscrew.
\sentenceItem{} \label{itm:x3d} This X3D file shows a very realistic corkscrew created with NURBS surfaces.
\sentenceItem{} \label{itm:sendx3d} Could you send me the corkscrew (the X3D file you just mentioned)?
\end{sentenceList}
we should interpret \q{corkscrew}, first, as a concept in a kind of functional organization.
In some of these sentences there is also a specific corkscrew (qua physical object) on hand as a referent, but its actual physical properties \mdash{} or even identity \mdash{} are not decisive for the meaning of the sentence.
After all, in (\ref{itm:screwtop}) the speaker is not thinking of any corkscrew in particular (probably \mdash{} more on that later) and in (\ref{itm:x3d}) and (\ref{itm:sendx3d}) the corkscrew is not real (at least not real qua corkscrew).
But the conceptualization associated with \q{corkscrew} does not seem markedly different in (\ref{itm:ornate}) or (\ref{itm:butterfly}) versus (\ref{itm:screwtop}), at least (more on the other three later).
}

\p{Not only physical details but even lexical identity seems tangential to the important conceptual meanings.
Suppose I am hosting two guests, one with a magnum of ale and one with a bottle of Malbec.
They ask, respectively:
\begin{sentenceList}\sentenceItem{} Do you have a bottle opener?
\sentenceItem{} Could you get me a corkscrew?
\end{sentenceList}
and I give the first guest a butterfly corkscrew and the second a folding multi-knife.
What I gave them is different from their request, but they should think nothing of it insofar as the winged corkscrew has a gap on its handle suitable for beer bottles and the multi-knife has a fold-out corkscrew helix.
I have not violated any conversational maxims, because I reasonably assume that the instruments I gave them are suitable for the desired goals of opening their bottles.
Semantically, \q{corkscrew} really means \q{something that can be used to open a wine bottle}, and in that sense the lexeme gets its principal content from this operational role, not from some list of attributes (like spirally and graspable) or prototypes.
}

\p{Granted, a suitably designed winged corkscrew can be construed as a kind of bottle opener, and a multi-knife a kind of corkscrew, respectively.
We are prepared to accept these tools as examples of the respective concepts if they are functionally designed to support those tasks, even if that is not their primary function.
But our inclination to allow concepts to dilate modulo functional criteria suggests that our grasp of concepts is first and foremost functional-pragmatic: we tend to internalize concepts in reference to (extralinguistic) functional roles and expand concepts to accommodate variegated implementers of those roles.
}

\p{We can indeed accept sentences like:
\begin{sentenceList}\sentenceItem{} \label{itm:hammer} He opened the bottle of beer with a hammer.
\sentenceItem{} He pounded the nail with a lever corkscrew.
\end{sentenceList}
Of course here we are inserting objects into a conceptual nexus where they are not usually found.
Winged corkscrews are often \i{designed} to double as bottle-openers, but lever corkscrews are not designed to double as hammers.
Nevertheless we have no trouble imagining the scenarios being described, where someone uses the thick part of a corkscrew to pound a nail, or a hammer's handle/claw gap to pry off a bottle cap.
We have schemata for \q{a tool to open a capped bottle} and \q{a tool to pound a nail}, and the concepts of bottle-opener and hammer occupy that conceptual niche insofar as they are artifacts designed for those purposes.
But the conceptual \q{slot} for, say, \q{a tool to open a capped bottle} is more general than the specific tools designed for those purposes.
}

\p{We nonetheless \i{would} presumably be violating conversational maxims if we handed our friend who wanted to open a beer bottle a hammer.
Even if there's a way to make the hammer work for that purpose, it's further outside the norm than, referring back to (\ref{itm:butterfly}), proposing to use a winged corkscrew.
So the implicature in (\ref{itm:butterfly}) is satisfied, let's say, by bringing my guest a winged corkscrew, but not a hammer.
But we can entertain the \i{thought} of using a hammer as a bottle-opener, and even this possibility presents problems for simplistic theories of language acquisition as essentially learning a static set of word correspondences, like \q{a hammer is used to pound nails} or \q{a corkscrew is used to open wine} \mdash{} after all, you cannot conclude from
\sentenceexamples{\sentenceexample{A hammer is something used to pound nails, \i{and}}
\sentenceexample{A lever corkscrew is something used to open wine, \i{and}}
\sentenceexample{A lever corkscrew can be used to pound nails}
}
that a hammer is a kind of lever corkscrew and can therefore open wine.
What we \i{do} have are conceptual slots available encapsulating ideas like \q{that which can open bottles} or \q{that which can pound nails}, and we \q{fill} these conceptual slots with different lexical content in different situations.
The \q{that which can open capped bottles} slot can be filled descriptively \mdash{} i.e., in declarative speech, like in (\ref{itm:hammer}) \mdash{} by a hammer, but not in other kinds of speech acts (we cannot read the concept \q{bottle opener} as satisfied by \q{hammer} in the context of a request for a bottle opener).
Note that the scope of conceptual roles can change merely by switching between locutionary modalities.
}

\p{The takeaway from this discussion in the internalism/externalism setting is that conceptual roles have a linguistic priority over and against both lexical and physical realizers, and the scope for things inside and outside of language to play (or not play) such roles varies with context.
I have introduced these issues via tool artifacts (like corkscrews) but it would be closer to the spirit of the CRS internalism/externalism debate to discuss natural-kind concepts.
Suppose I am building a sand castle on a beach and ask someone one of:
\begin{sentenceList}\sentenceItem{} \label{itm:bucket} Can you bring me a bucket of water?
\sentenceItem{} \label{itm:glass} Can you bring me a glass of water?
\end{sentenceList}
For (\ref{itm:bucket}), a reasonable reaction would be a bucket filled with ocean water; but for (\ref{itm:glass}) my addressee would probably infer that I was thirsty, and \mdash{} since salt water is non-potable \mdash{} was requesting water I could drink.
But \q{\i{glass of} water} probably figures here just to establish my intention to drink it: you are entitled to bring me a bottle of water instead.
In other words, my request has implied content which in some aspects loosens and in some aspects restricts the conceptual scope of semantic entries in my utterance.
Thus oceans are composed of water, and near a beach I can say:
\begin{sentenceList}\sentenceItem{} The ocean is over there.
\sentenceItem{} The water is over there.
\sentenceItem{} You can see the ocean from here.
\sentenceItem{} You can see the water from here.
\end{sentenceList}
Each pair is almost identical.
But ocean-water ceases to fall under the conceptual role of \q{water} when we are in the context of drinking things instead of the context of geography.
This suggests that water does not \q{mean} \htwoo{} or other saline or non-saline water: the meaning is not fixed to any particular chemical composition but adapts to the situational context, including what the water is used for \mdash{} e.g. as a drink or as a binder for a sand castle.
}

\p{The most-discussed \q{water} analysis in the literature is less earthly than this: Putnam's \q{twin earth} argument about a planet whose substance, with chemical makeup XYZ, is functionally indistinguishable from our (\htwoo{}) water.
Externalists and internalists use this thought-experiment to express their differences as disagreements over whether twin-earthers' XYZ concept is the same as our \htwoo{} concept.
For the latter, as the basic account goes, XYZ plays the same conceptual role in their lifeworld as \htwoo{} plays in ours, so it is the same concept; for the former, the concepts designate different material substances (even if twin-earthers don't know this), so they can't mean the same thing, even if there is some sort of analogy or resemblance between them (concepts can be analogous or similar while still being different concepts).
}

\p{Before making a case for one alternative here over the other, let me note the following: it is unfortunate that the case-study is formulated in terms of XYZ vs.
\htwoo{}, because at the level of molecular composition it is hard for us to conceive that XYZ is \i{really} indistinguishable from water.
After all, our conceptual understanding of water includes things like electrolysis \mdash{} if XYZ does not emit hydrogen and oxygen when electrically charged under certain controlled conditions, it is not behaving like water and cannot be (even internalistically) construed as conforming to our concept of water.
Of course, we are free to expand our water-concept, just as we contract it when switching from geology/geography to drinking.
But here we expand it with full recognition that finer-grained conceptual distinctions are possible, just that there are many contexts where they are unnecessary.
}

\p{We do not need to contemplate far-fetched twin-earth scenarios to see this in practice: here on earth we have deuterium water, which is chemically different from normal water (but both have the \htwoo{} signature, although heavy water is also described as \dtwoo{}).
We are free to let \q{X} mean normal hydrogen, \q{Y} mean deuterium, and \q{Z} mean oxygen, so XYZ becomes what chemists call HDO \mdash{} semi-heavy water.
Most people would probably say that HDO is just a kind of water, and so can be subsumed under the concept \q{water}, but this is not conclusive.
In reality, I don't think the English-speaking community has needed to establish whether \q{water} should mean ordinary \htwoo{} or should include variations containing different hydrogen isotopes \mdash{} whether heavy and semi-heavy and other variants of water should be considered \q{water} or some other concepts.
}

\p{In practice, a fraction of ocean water has deuterium, which might argue for \q{water} subsuming heavy water \mdash{} we don't point to the ocean and say
\begin{sentenceList}\sentenceItem{} The water and the deuterium oxide is over there.
\end{sentenceList}
But this can alternatively be explained by the principle that referring to an impure sample of a substance is still a valid use of the concept:
\begin{sentenceList}\sentenceItem{} Here's a glass of water (even though tap water is mixed with fluoride).
\sentenceItem{} Bing cherries are dark red (even though the stem is brown).
\end{sentenceList}
In the second case, we can validly call something red even if something less than its whole surface shows a red color.
Applying a similar rule, we can call a solution \q{water} if there are only \q{sufficiently small} amounts of solutes.
Clearly we use \q{water} to designate many substances other than pure \htwoo{}.
I can think of two options for explaining that semantically: (1) Salt water, tap water, distilled water, (semi) heavy water, etc., are all different kinds of water, but our coarser \q{water} concept subsumes them all (in most contexts).
(2) There is only one water concept, pure \htwoo{}, but impure samples of liquid that are mostly water can be called \q{water} by the same principle that a mostly red-colored object can be called just \q{red}.
}

\p{The second option has a common-sensical appeal because it fits a succinct \q{concepts as natural kinds} paradigm but does not venture too far from normal language use \mdash{} that \q{red} actually means \q{mostly red} is a pattern common with many nouns and adjectives (someone can be \i{bald} with a bit of hair; I can point to a turkey burger made with bread crumbs and spices and say \q{that's turkey}; I can tell someone listening to Keny Arkana's song \q{Indignados} \q{that's French}, although some of the lyrics are Spanish).
However, the \q{mostly water} reading has a couple of problems: first, what about cases like a \q{glass of water} where \q{mostly water} is not \q{mostly} enough to drink?
And, second, why can't we refer to plasma, say \mdash{} which is 92\% water \mdash{} as water?
This is not just a matter of numbers: the Dead Sea's water is much less pure than plasma in the hospital (in terms of percentage \htwoo{} in solution), yet we are authorized to call the former \q{water} but not the latter.
This certainly seems to be a matter of conceptual roles \mdash{} plasma occupies a certain place in our conceptual systems about blood and medicine (largely because it plays a specific role in biology and medicine) which does not fit the profile of \q{water}, while the stuff in lakes \i{does} fit that profile, even if the lakes are hypersaline.
Blood fits a conceptual ecosystem where we are not tempted to subsume it under the concept \i{water}, whereas our conceptualization of lakes pulls in the opposite direction \mdash{} even though by purity the water in Gaet'ale Pond in Ethiopia is apparently not much more watery than blood.
Our disposition to either contract or dilate the sense \q{water} seems to be determined by context \mdash{} by the conceptual role water plays in different contexts \mdash{} rather than by actual hydrological properties.
}

\p{What about the hypothetical twin-earth XYZ that Putnam imagines is indistinguishable from our \htwoo{}?
Well, for this hypothesis to even make sense we have to assume that XYZ is scientifically indistinguishable from water, which is a matter not just of pure \htwoo{} but of all solutions and deuterium- or tritium-related variants of water, and so forth.
As a thought experiment, where we are free to conceive almost anything, this is not impossible.
Let's imagine that there is an undiscovered subatomic particle that on some planets clings to atomic nuclei without affecting them in almost any way.
We can call nuclei harboring these particles \q{twin nuclei}, so hydrogen becomes \q{twin hydrogen}, oxygen becomes \q{twin oxygen}, and presumably water becomes \q{twin water}.
This twin water would essentially retrace the compositional structure of water \mdash{} since it would have to form (and unform, under electrolysis) just like \q{our} water.
If we plug this \q{twin water} into Putnam's scenario, I can't see why we don't just call this a variant kind of water, water with some extra (but observationally negligible) particles, just like heavy water is water with extra neutrons.
}

\p{This does not do perfect justice to \q{twin earth} discussions, because I am describing \q{twin} water as something whose composition is almost identical to \q{our} water.
In the original story, \q{twater} is XYZ, which as written suggests something whose physical constituents are much different than water, even if all the propensities that influence our \q{water} conceptualizations are exactly the same as our water's.
But something compositionally different than water \i{can't} be functionally identical to water, at least if any of the actions we can take that reveal water's composition come out different.
In short, whatever XYZ is, it must have a capability to \i{become} hydrogen and oxygen, because XYZ's emulating water means it emits hydrogen and oxygen under electrolysis.
Meanwhile there is no action that could \q{release} the \q{X} (or whatever), because that would also behaviorally differ from water.
So XYZ would differ from water only insofar as in its \q{unobserved} states it can float around as something without hydrogen or oxygen but, whenever subject to actions that cause water proper to emit these gasses, it would somehow conjure them up in exactly the same patterns as water (which actually \i{is} composed of hydrogen and oxygen) does.
}

\p{By dictum, then, XYZ is not actually composed of hydrogen and oxygen, but whatever it \i{is} composed of can act \i{as if} it \i{does} contain these gasses so as to emit them.
In that case I'd question the argumentative force of claiming that XYZ does not contain hydrogen and oxygen to begin with.
We are asked to believe that XYZ is made up of some ethereal non-hydrogen and non-oxygen that can nevertheless become hydrogen and oxygen whenever it is in the physical states wherein water that \i{is} made of hydrogen and oxygen will release them.
I am inclined to say that this is just another way of being made of hydrogen and oxygen.
After all, atoms are not little ping-pong balls: what we picture as a water molecule is actually apparently much more ethereal, suspended in quantum indeterminacy.
I take it there is some Schr\"odinger equation for a water molecule, and only when the \q{wave function} collapses \mdash{} say, by our observing the water subject to electrolysis \mdash{} do we actually get hydrogen or oxygen atoms.
So \q{our} water isn't really \q{composed} of hydrogen or oxygen in its pure quantum state.
Maybe XYZ \q{collapses} to hydrogen or oxygen in different ways than earthly water (but with no way to measure the difference), but this is still not divergent enough for me to feel compelled to call XYZ anything other than some variant form of water.
}

\p{Of course, I am assuming that twin earthers have \i{the same} water-concept that we do, \i{in all respects}.
Maybe a more faithful review would consider that twin earthers might have a related but more primitive water-concept than ours \mdash{} maybe some subset of our concept in terms of the scientific knowledge embedded in our concept.
Before we earthers knew about hydrogen, oxygen, or electrolysis, the behavior of water under electrolysis was not a factor in our concept of water.
So imagine if twin earthers' level of scientific knowledge was akin to that on earth centuries ago \mdash{} their XYZ is measurably different from our water, but they have no experimental or scientific apparatus to notice the difference.
But this is \i{contingent}: the twin earthers \i{could} some day discover hydrogen and oxygen.
Then, if XYZ really is not composed of hydrogen and oxygen (or acts as if composed of them when not in a nonobservable ethereal state), their scientific theory of water, and accordingly their conceptualization, would diverge from ours.
}

\p{We can imagine a non-water XYZ that is water-like enough to play an identical role to (our) water, but this story can go in two directions: either XYZ is \i{absolutely} identical to water, its differences from water so obscure as to be observationally and causally meaningless; or it has legitimate differences from water that \i{could} be conceptually significant but in some contexts are not (at least not yet).
These are two different thought experiments.
If some substance is in all respects and under any conceivable science identical to water, yet somehow compositionally different from it, I think the plausible response among normal language communities would be to extend the concept of water \mdash{} subsuming XYZ under the concept, analogous to heavy water when it was discovered.
We are generally prepared to expand the reach of concepts when there is no compelling reason not to do so.
Whether a potential expansion takes hold probably varies by context.
We are \mdash{} a point that generally fits on the externalist side of the ledger \mdash{} more willing to accept expansion when the revised conceptualization would not deviate too far from a basic alignment of natural-kind concepts to scientifically reasonable classifications.
We can readily extend \q{water} to \dtwoo{} because the two substances are compositionally very similar.
We are less likely to accept conceptual mergers when they seem to violate our natural-kind pictures, even if they are functionally plausible: we do not accept \q{agave} as a subconcept of \q{honey}, even though the two are physically rather similar and functionally very similar.
Nor does physical form alone drive conceptual boundaries: we know full well that water vapor and ice are the same stuff as liquid water, but we recognize a conceptual distinction between them.
}

\p{But these are not hard and fast rules: we may be inclined in many contexts to treat frozen-concentrate juice as conceptually subsumed under \q{juice} (as in \q{juice on sale}), and we will often accept almond milk or cashew milk as \q{milk}, despite physical differences which we certainly acknowledge.
In short, conceptual boundaries tend to be drawn to honor, albeit without excess granularity, both physical and functional factors \mdash{} neither physical/compositional similitude alone, in the absence of functional resemblance (see water/ice), nor functional similitude alone tends to earn concept dilation; rather, a mixture of functional and physical similarity, even with \i{some} differences in both aspects, tends to be the likelier driver of concept-expansion (see water vs. chlorinated water, or red wine vs. white wine).
By these rules, expanding \q{water} to include XYZ \mdash{} if XYZ is functionally identical to \i{but} compositionally different from water \mdash{} would be abnormal, like expanding \q{milk} to \mdash{} without any qualification \mdash{} include almond milk.
But these rules are approximate, and in the idiosyncratic case where XYZ is \i{completely} functionally like water but (stipulated to be) physically different (though given the functional identity we could not detect as much), I think the normal \q{conceptual dilation} rules would side with the functional identity and ignore the physical differences.
}

\p{On the other hand, if XYZ has real discoverable differences from water, then the potential exists for twin earthers' concept of water to diverge from our own, even if at any point in time the concepts are identical.
The time \q{points} don't need to be simultaneous: we can compare one country's concept of water in the year 1800 with a different country's in the 16th century.
It is plausible that different people at different times have effectively the same conceptual attitudes toward concepts that, with the benefit of hindsight and more science, we know have potential for differentiation.
I think the mere potential for differentiation warrants our identifying conceptual differences even if the parties involved are not aware of this potential.
I am prepared, for example, to accept that a child's water-concept in our time can be different from a medieval child's water-concept merely by virtue of the modern child potentially learning about deuterium, hypersalinity, and other scientific nuances that complicate the modern conception of water relative to that of our forebears.
}

\spsubsectiontwoline{\q{Divorce or Dilate}? On Widening or Narrowing Concepts}

\p{We certainly accept that people may have different understandings of a concept and, on that basis, may judge that what two people are entertaining are two different concepts \mdash{} though we may also feel that they entertain two variations of \i{the same} concept.
There's room for most concepts to \q{diversify}, subsuming subconcepts and variations; hence there's room for a concept to expand (see water to heavy water) without fragmenting.
But sometimes we \i{do} insist on splitting concepts \mdash{} or, equivalently, refuse to accept a concept-enlargement \mdash{} and \i{the reasons for this refusal may be external to some people's use of the concepts}.
Current political discourse in the United States, for example, is driven by turns of phrase that are rather haphazardly defined: \i{Border Wall}, \i{Green New Deal}, \i{Free Tuition}, etc.
Suppose a health policy expert observes that Bernie Sanders's use of the term \q{Medicare for All} is different from Kamala Harris's.
She may conclude that Sanders's concept \q{Medicare for All} is different from Harris's concept \mdash{} and the rationale for this conclusion need not take into account whether the two candidates are aware of the differences.
Suppose, as an expert, she has to mentally track the differences \mdash{} she has a well-informed judgment that each of the \q{Medicare for All} plans has different ramifications due to policy differences; as a result, when discussing \q{Medicare for All} she needs to note in her own mind which version of that idea is under discussion at any moment in a discourse.
That is to say, she needs to subsume them under different concepts.
Moreover, we endorse that she \i{should} do so, even if she thereby makes a distinction that the politicians or their supporters themselves do not realize.
In this kind of case we may defer to expert opinion when adjudicating a potential conceptual divorce, even if there are only minimal differences in the role of the concepts \visavis{} the conceptual systems of many relevant parties.
}

\p{The possibility that \q{Medicare for All} may play the same \i{role} in a Sanders supporter's and a Harris supporter's conceptualizations does not preclude our judging that they are nonetheless different concepts \mdash{} if by virtue of more information and more access to expert counsel we can understand that there are potential differences in their conceptualizations that \i{could} drive the conceptual roles to diverge.
I think this is analogous to a \q{twin earth XYZ} scenario in that the thought experiment is set up as if we have access to expert confirmation that twin earth's XYZ is not physically the same substance as water.
Projecting from earthly practice, we accordingly accept that \q{externalist} considerations may need to come to bear, and \q{XYZ} may need to be classified as a different concept than water, \i{notwithstanding} the lack of any conceptual role difference between XYZ for twin earthers as compared to water for us.
This is consistent with our tolerance for including factors beyond just conceptual roles in more mundane circumstances: we accept that sufficiently divergent notions of \q{Medicare for All} \i{could} be most appropriately classified as two different concepts.
Such is not mandated \mdash{} we could certainly describe the Sanders and Harris platforms as \q{two different Medicare for All plans}, subsuming them under one concept but acknowledging their differences \mdash{} as token differences, like the conceptual difference between this apple and that apple, rather than concept-differences like apple vs. cherry.
Analogously, we \i{could} subsume XYZ under the concept \i{water} \mdash{} XYZ being a kind of water insofar as samples of XYZ (tokens of the XYZ-concept) bear some physical differences to tokens of ordinary water (like heavy-water samples do), but we can handle this variation on a token-token level (analogous to comparing two apples).
But we can \i{also} split rather than expand the concepts \mdash{} \i{divorce} rather than \i{dilate} \mdash{} making XYZ a different concept than water, just as we can make Sanders supporters' Medicare for All a different concept than Harris supporters'.
The key point is that our choice of \q{divorce or dilate} may be driven by factors wholly external to some concept-bearers' internal concept-uses.
Two different concepts \mdash{} recognized by us as different \mdash{} may play identical conceptual roles for some people.
}

\p{This stance is at least minimally Externalist in that I don't insist on internal conceptual-role similitude being an immovable criterion selecting \q{dilate} over \q{divorce}.
We as a language community can and sometimes should override the tendency for concepts to expand under role considerations.
As I pointed out earlier, a corkscrew and even a hammer can sometimes satisfy the role \q{bottle opener} in specific contexts.
Usually we distinguish context-specific conceptual role-playing from general concept dilation \mdash{} I think this is the gist of Zhaohui Luo's analysis of \q{situations} and \q{Manifest entries}.
We can adopt a temporary frame of reference wherein, say, hammers are bottle openers \mdash{} or, in Luo's example, (in a single zoo exhibit) all animals are snakes \mdash{} without mutating the concepts so wildly that \q{hammers} becomes expanded to include anything that may open a capped bottle, or \q{snakes} becomes all animals.
Yet such situational dilations can recur and eventually spill beyond their situational guard rails.
In a vegan cafe I can imagine the staff converging on a usage in which soy, almond, and cashew milks are collectively called just \q{milk}.
If veganism becomes entrenched in some English-speaking community, I can similarly imagine that in their dialect \q{milk} will mean anything that can be used like milk in a culinary context.
The warrants for such expansions seem to be driven by conceptual roles \mdash{} situations present \q{slots}, like \i{that which opens this bottle} or \i{that which I pour on cereal}, and existing concepts tend to expand to fit these slots.
}

\p{These considerations follow the \i{internalist} line: we take attitudes based on conceptual role more than on external natural kinds when adjudicating conceptual boundaries.
Thus situationally we may present almond milk and agave to satisfy a request for milk and honey.
But superimposed on such a \q{centrifugal} tendency for concepts to expand into \q{under-lexified} conceptual niches we have a counter-tendency to question conceptual uses where functional resemblance strays \i{too far} from common sense.
Someone may accept agave in lieu of honey, or a hammer as a bottle opener, in the context of how one situation plays out; but they are less likely to accept these uses becoming entrenched, compared to, say, refiguring \q{milk} to include almond and cashew milk.
And our hesitation to accept concept-expansion in these latter kinds of cases seems to implicitly look beyond conceptual roles \mdash{} we may insist on limiting concept dilations even if there are many people for whom there will never be situations where the differences between concept referents, over and above functional resemblance, would be important.
In short, even if a community could do just fine with some dialect idiosyncrasy that ignores a conceptual distinction we would ordinarily make, we don't tend to take this as evidence that our multiple concepts can be merged into one more diverse concept.
}

\p{Of course we \i{can} merge concepts, and the fact that many people can live their lives without a conceptual coarsening may render such a merger likelier, but it seems we evaluate potential mergers more by reference to entire speech-communities, not isolated parts.
Note that I am specifically talking here about merging or splitting concepts, not word-senses or lexemes or any purely linguistic artifacts.
Certainly we have variegated \q{water} concepts \mdash{} salt, tap, distilled, heavy \mdash{} but we have an overarching water concept that includes these as subconcepts.
We can make a conscious decision to modify concept/subconcept relations \mdash{} which is different from changing how concepts are mapped to lexemes.
So I take it that Conceptual Role Semantics prioritizes role factors in drawing concept/subconcept relations and boundaries, and the consequence is a mostly Internalist intuitive model: we should accept concept maps where concepts are mostly drawn together when there is a functional resemblance between their roles; our concept/subconcept renderings should witness and help us exploit functional analogies.
}

\p{At the same time, however, I think we instinctively project notions of conceptual role outward from individual people or subcommunities to the social totality.
Even if technically distinct Medicare for All plans play similar conceptual roles in different voters' conceptions, we understand that such similarity may break down as we expand the community outward.
Sanders and Harris supporters don't live on their own islands.
There are factors outside their own minds that weigh on whether their functionally similar Medicare for All concepts are indeed \i{the same concept} from the larger community's point of view.
But these external factors are not necessarily \i{extramental}: we can zoom outside the conceptual patterns of one subcommunity and argue that conceptual differences appear in the overall speech community that supersede functional resemblance in some subcommunity.
Conceptual roles are not solipsistic: the role of the concept Medicare for All for a Sanders supporter is not just a role in \i{her} mind, but it becomes a role in \i{our} minds if we dialogically interact with her.
}

\p{Insofar as people can make inferences about other people's conceptual role \q{system} \mdash{} we can figure out the role which a concept plays in someone else's mind, to some approximation, even if analogous concepts play a different role in our own minds \mdash{} conceptual roles are not private affairs; they have some public manifestation, and there is a need for collective reconciliation of role differences, just as we need to identify when different people are using the same words in different ways and use lexical conventions to diminish the chance of confusion.
To the extent that they have this public dimension, conceptual roles are not \i{internal}.
But \q{externalism} in this sense is warranted because we want to look philosophically at entire speech or cognitive communities \mdash{} it is not automatically a philosophy of conceptual content being external to \q{mind in general}.
Conceptual differences that could \i{potentially} become publicly observable from the vantage point of the \i{entire} cognitive community warrant consideration for conceptual divorce over dilation \mdash{} overriding similar roles in some \i{part} of the community.
}

\p{In the case of XYZ, insofar as the twin earth cognitive community and our own could \i{potentially} become part of a single overarching cognitive community, we have potential grounds for drawing comparisons between water and XYZ.
Merely by contemplating their planet here on earth we are performatively drawing twin earthers into our cognitive community.
By postulating that twin earthers think about XYZ the same way that we think about water \mdash{} and that we know this \mdash{} we implicitly assume that their conceptual role patterns are public observables in the context of our own community.
If conceptual roles are observable, then there is a concept of a conceptual role: pundits can conceptually analyze how \q{Medicare for All} plays identical conceptual roles for Sanders and Harris supporters even if the candidates' plans are consequentially different.
But this merely says that there are latent differences in two people's conceptual roles that they themselves may not actually experience.
The public facet of conceptual roles complicates the notion of conceptual role similarity \mdash{} two people's patterns of conceptual roles may be observably different as public phenomena even if they lack the resources to realize the difference.
Conceptual roles are therefore external to individual minds \mdash{} but this is by scoping outside individual minds to holistic cognitive communities who can publicly observe our cognitive tendencies.
We are still reasoning \q{internalistically} in the sense of considering cognitive patterns at the scale of an overall cognitive community.
}

\p{In short, I will take the mantra of an \q{Externalist} when passing from individual minds and subcommunities to the public nature of conceptual roles and overarching cognitive communities.
Once we get to the maximal possible community, however, I am inclined to revert to Internalism: if there is no broadening of communal scope that could make putative external differences meaningful to \i{anyone's} conceptual roles, I see no reason to account for \i{these} erstwhile externalities in a theory of concepts.
If XYZ has \i{some} not-water-like qualities that a sufficiently large cognitive community could confront \mdash{} even if the XYZ-conceptual-role and the earthly-water-conceptual-role are identical for the two isolated communities \mdash{} I am happy to accept that twin earthers' XYZ-concept is a different concept than earthers' water-concept.
Similarly, I accept that Sanders supporters' Medicare for All concept may be a different concept than Harris supporters'.
But in both cases I accept concept splitting to override role-similarity because I believe in an overarching cognitive community which has an interest in detecting differences or potential differences in conceptual roles qua public observables, which transcends our own internal awareness of what our conceptual roles entail.
The fact that earthers and twin-earthers might never \q{discover} a water/XYZ difference is a contingent fact, not an essential structure in policing conceptual maps.
When establishing how we should consider redrawing these maps, we should work from the picture of an overarching community \mdash{} one that can subsume isolated communities \mdash{} as an abstract posit; the parts of the twin earth story that imply earthers and twin earthers could never actually discover their differences are not, I think, compelling as intrinsic features of the analysis.
In short, if water and XYZ have some potentially observable differences, then we need to proceed as the community which is aware that these differences exist and that therefore, for us, water and XYZ need different conceptual slots.
The only analysis then is how to reconcile the fact that we have multiple conceptual slots whereas twin earthers (and earthers who have not read Hilary Putnam) have just one.
}

\p{But if we take a \i{maximal} cognitive community \mdash{} the sum total of earthers and twin earthers and philosophers \mdash{} this community \i{does} distinguish XYZ from water (surely XYZ plays a different role in Putnam's mind than water).
And we should scope to the maximal community when determining whether smaller communities' conceptual roles are truly identical, because conceptual roles are, in part, potential public observables for any possible supercommunity.
}

\p{On the other hand, if XYZ is so much like water that \i{no} community would \i{ever} have reason to contrast twin-earthers' XYZ-conceptual-role with our water-conceptual-role, then I think these roles are not just \i{internally} identical for each (twin-) earther, but \i{publicly} identical for any conceivable cognitive community for whom public observations of (twin-) earthers' conceptualizations are consequential givens.
And in \i{that} case I think XYZ is the same concept as water notwithstanding putative compositional differences.
}

\p{The whole idea that conceptual roles can be \i{public} complicates the Internalist/Externalist distinction, because each person's conceptual patterns can be evaluated from a vantage point external to \i{their} mind but still within the proclivities of a \q{maximal} cognitive community.
Conceptual roles are not private to each person, but are private inclinations that get reshaped, corrected, influenced, or reinterpreted by a larger community.
If we understand conceptual roles to include not just the totality of each person's conceptual role attitudes but also the totality of how these attitudes are observed by others, then we should consider that concepts are not \q{external} to the \i{maximal} cognitive community.
Externalism about \i{individual} minds can be wrapped inside Internalism at the \i{maximal} inter-cognitive level. } \p{But, complicating matters further, the maximal community's observations of conceptual-role attitudes are often driven by at least our \i{beliefs} about external (i.e., extramental, natural-kind) criteria. For example, some companies want to rechristen \q{corn syrup} as \q{corn sugar}, to make it seem more like a sugar-subconcept. Meanwhile, some dairy companies want laws restricting the use of \q{milk} for vegan products. In both cases our larger community has a chance to weigh the proper conventions for how our conceptual maps should be drawn. As I argued earlier, both functional and naturalistic criteria play a role in such deliberations. We are poised to distinguish transient situation-specific roles \mdash{} that one time someone used a hammer as a bottle opener \mdash{} from functional parallels that stretch across many contexts. Within the parameters of that contrast, we are receptive to redrawing maps on role criteria \mdash{} allowing milk to subsume vegan milk-substitutes, for instance. But this tendency is balanced by a respect for some notion of coherent natural kinds \mdash{} the distinct biological properties of vegan milks work against a \i{maximal} community subsuming them under \q{milk} outside of special contexts. } \p{Both the Externalist and Internalist points of view have some traffic in the considerations that cognitive communities bring to bear on which conceptual maps should be endorsed by convention. Because ad-hoc conceptual roles can be established for particular situations, we can be conservative about \i{conventionalizing} concept maps driven by functional correspondences too far removed from (what we think to be) scientifically endorsed, natural-kind boundaries. In other words, I think we \i{do} and \i{should} allow \q{naturalistic} considerations to be a factor in what concept maps we endorse. But this is not a claim about Externalism as a philosophical paradigm shaping how we should construe the triangulation between mind, world, and language, as a matter of metaphysical ideology. Rather I believe that \q{externalist} factors should and do come to bear on the deliberations \i{internal} to cognitive communities' (sometimes but not always explicit) evaluations of how to draw concept and subconcept boundaries and relations \mdash{} when to split concepts and when to dilate them. Dilate-or-divorce options are pulled by both externalist and internalist considerations, sometimes in competing ways. } \p{As a case study, the wording \q{corn sugar} \mdash{} which implies a \q{redistricting} wherein the concept \q{corn syrup} becomes part of the territory \q{sugar} \mdash{} may be credible on purely biochemical grounds. But our community may feel that there is enough functional difference between sugar and corn syrup in a commercial and nutritional sense to reject a proposed merger \mdash{} here functional considerations trump natural-kind ones. Conversely, the community may be sympathetic to claims that milk substitutes should be labeled to clearly indicate how they are not \i{literally} milk \mdash{} here natural-kind considerations trump functional ones. } \p{If we consider language \mdash{} and communally-endorsed conceptualizations \mdash{} evolving in practice, then, in light of my claims thus far, there is material for both Externalist and Internalist readings.
This perhaps leaves room for a theory which accepts that both are partially true \mdash{} each being logically founded under consideration of two different aspects of how concepts evolve. I will explore this possibility further, but first I want to shore up my account of conceptual roles themselves. } \p{One complication I have glossed so far is that \i{functional} roles in the enactive and \q{pragmatic} (in the everyday-world sense) spheres are not \i{ipso facto} the same as either conceptualizations (conceptual-role-attitudes) or lexicosemantic conventions. These three are interrelated, but we need social and cognitive practices to get situational understandings entrenched in language and in communal concept-maps. Without a theory of this process, to speak of functional roles like \i{hammer} for \i{bottle opener} is not a substitute for speaking of conceptual roles \i{per se}. How to properly link \q{functionality} in an enactive quotidian sense \mdash{} the data that various natural and man-made artifacts are used by people for concrete tasks, and we often talk about this \mdash{} to the cognitive realm of concepts (and their boundaries and subconcept relations)? This is the main theme of Section 2. I will however conclude the present section by reviewing a useful critique of the conventional Externalism/Internalism dialectic. I will focus in on Orlin Vakarelov's \q{Interface Theory of Meaning}, developed over several recent papers, which I will also (somewhat indirectly) use as a kind of metatheoretic guide when presenting my own theoretical attachments later on. } \spsubsectiontwoline{Orlin Vakarelov's Interface Theory of Meaning} \p{Vakarelov's theory (which I'll abbreviate to ITM) both critiques and suggests ways around the Externalism/Internalism impasse: \begin{quote}An externalist theory focuses on constraints outside of the user of the informational state. Particularly, it focuses on the relation between the informational state and the sources or object of the information. The meat of the semantic connection derives from some nomic (or teleonomic) connection between the source system and the information medium (receiver) system. The focus of semantics for an externalist theory is the determination of the way the world is. ... An internalist theory, on the other hand, considers as the primary constraint of meaning what the information state does for the user. The model of the internalist account is not reference fixation and fact determination, but message interpretation. The question that an internalist asks is not \i{what m means}, but \i{what m means to a given user}. Of course, for \i{m} to be informative about the world, it better be sufficiently correlated with a source, but this is not a constitutive condition of the meaning of \i{m}. It is a condition of a good interpretation system. ... One strategy for reconciling externalism and internalism is to take a hybrid account of meaning/content. Such hybrid theories are motivated by an observation that external or internal considerations are not sufficiently fine grained. ... Such hybrid theories of meaning have targeted cognitive information media \mdash{} languages, mental states (beliefs), etc. This analysis of meaning cannot easily transfer to the domain of dynamical semantic information. In the case of dynamical semantic information, the externalist and internalist conceptions of meaning collapse into a single notion. The reason for this is the codetermination of macro-state structure of informational systems. \cite[pp.
13-14]{OrlinVakarelov} \end{quote} He then presents his ITM alternative (for terminological clarification, his symbol $M$ roughly matches what I have called \q{cognitive frames}, and $S$ roughly matches our environing situations): \begin{quote}It follows that neither an external relation between $M$ and $S$, nor an internal function of \q{selecting conditional readiness states} is sufficient to provide a general notion of meaning, for they don't even fix the syntax of the information system independently. To specify the meaning of a state \i{m} we must do something different. What does $M$ really do in the information system? It acts as an \i{interface} between the (external) world and the control system. It structures influences to allow focused purposeful control. If any sense of significance can be given to a particular state \i{m} of $M$, it must be related to this interface function. The significance of \i{m} is neither that it tracks something external nor that it can affect the control mechanisms of the system, but that it can connect one to the other. ... Let us go back to the observation that the definition collapses the external and internal conception of meaning. Specifying the differential interface function of a state requires looking at the entire system/environment complex. We can think of the datum state \i{m} as participating in a process of interaction where causal effects from the environment are channeled through the internal $M$-$P$ control pathway to produce actions, which actions modify the system's behavior, and which in turn changes the state of the environment (including the relations between the system and other external systems). \cite[pp. 15-16]{OrlinVakarelov} \end{quote} Finally he extends this definition of \i{meaning} toward language itself: \begin{quote}The story gets more interesting when ... the system utilizes different sub-systems that act as information media. The system may have [different] media, each with different roles and interface connections. Some media may be connected to different external systems or different aspects of the same systems, others may interface with other media, yet others may be connected with effectors or control the states of other media, etc. When the system is organized as a complex network of information media, complex interface (sub-)functions can emerge. Some can depend almost exclusively on external connections to outside sources, others can be analyzed entirely in terms of their control role or effects on other media. I conjecture that the canonical examples of information media that shape many of our intuitions about semantics are media that exist (within an information system) as only one of a large network of other information media that jointly control the system's behavior. Thus, to take correspondence theories of meaning as an example, it is tempting to say that the word \sq{chair} means a property of external objects. Thus, in the expression, \q{This is a chair}, the meaning is given by some fact in the world that the object depicted by the indexical has the property of chairhood. In an information system using language we can analyze this idea in a different way. The language medium, whose datum may be some structural equivalent to the expression \q{This is a chair}, interacts with other nonlinguistic media connected to perception, allowing the system to identify and interact with patterns in the world that can be clustered through some data state of some internal media. 
To make Fodor happy, we can assume that there is a single medium that gets in an information state uniquely correlated with chairhood \mdash{} a kind of a concept of \q{chair}. The language system, in this picture, is not interfaced with the world (or some abstract realm of propositions). It is interfaced with other information media. The properties of the interface relations look a lot like the properties that a correspondence semantics may have, but these interface relations do not capture the true interface roles of the language datums for the information system. To determine the true interface role, we need to link all local interfaces and see how the entire complex participates in the purposeful behavior. \cite[p. 17]{OrlinVakarelov} \end{quote} } \p{Interestingly, Vakarelov speaks not of \q{prelinguistic} cognition but of \q{precognitive} systems. This is partly, I believe, because Vakarelov wants to understand cognition as adaptation: \q{Nature, in its nomic patterns, offers many opportunities for data systems that can be given semantic significance, it offers ubiquitous potential datums, but it does not offer any well-defined and complete data sets} \cite[p. 4]{OrlinVakarelov}. As I read it, Vakarelov conceives cognitive systems as dynamic systems that try to adapt to other dynamic systems \mdash{} these latter being the environments where we (taking humans as example cognitive systems) need to act purposefully and intelligently. The \q{nomic patterns} are latent in our surroundings, and not created by intellect. So \i{this} kind of worldly order lies \q{outside} cognition in an ontological sense; it is not an order which exists (in itself) in our minds (though it may be mirrored there). Consciousness comports to an \q{extramentally} ordered world. However, \q{precognitive} does not necessarily mean \q{extramental}: there is a difference between being \i{aware} of structural regularities in our environment, which we can perhaps deem a form of pre-cognitive mentality, and trying to \i{interpret} these regularities for practical benefit (and maybe a subjective desire for knowledge). } \p{When distinguishing \q{cognitive} from \q{precognitive}, however, we should also recognize the different connotations that the term \q{cognitive} itself has in different academic communities. In the context of Cognitive Linguistics, the term takes on an interpretive and phenomenological dimension which carries noticeably different implications in the \q{semantics of the theory} than in, say, conventional AI research. Vakarelov's strategy is to approach \i{human} cognition as one manifestation of structured systems which we can visualize as concentric circles, each ring implying greater sophistication and more rigorous criteriology than its outer neighbor: \begin{quote}What is the function of cognition? By answering this question it becomes possible to investigate what are the simplest cognitive systems. It addresses the question by treating cognition as a solution to a design problem. It defines a nested sequence of design problems: (1) How can a system persist? (2) How can a system affect its environment to improve its persistence? (3) How can a system utilize better information from the environment to select better actions? And, (4) How can a system reduce its inherent informational limitations to achieve more successful behavior?
This provides a corresponding nested sequence of system classes: (1) autonomous systems, (2) (re)active autonomous systems, (3) informationally controlled autonomous systems (autonomous agents), and (4) cognitive systems. \cite[p. 83]{VakarelovAgent} \end{quote} \begin{quote}The most rudimentary design problem begins here: if there is cognition, there must be a system. Without a condition allowing a system to exist as an entity discernible from its environment and persisting sufficiently long as that same entity to allow qualification of its dynamical behavior, the question of cognition does not arise. The first design question that must be examined is: What allows systems to persist as individual entities? More specifically: For which of those systems that persist is a capacity of cognition relevant? \cite[p. 85]{VakarelovAgent} \end{quote} But this intuition that human cognition can thematically extend out to other \q{cognitive systems} and then other structured systems \mdash{} out of which \i{cognition} emerges by adding on criteria: is the system autonomous; reactive; information-controlled \mdash{} suggests we are dealing with a different concept than in Cognitive Linguistics or Cognitive Phenomenology. For Vakarelov, \q{cognition, like most other biological categories, defines a gradation, not a precise boundary \mdash{} thus, we can at best hope to define a direction of gradation of a capacity and a class of systems for which the capacity is relevant; [and] cognition is an operational capacity, that is, it is a condition on mechanisms of the system, not merely on the behavior of the system \mdash{} to say that a system is cognitive is to say something general about how the system does something, not only what it does} (p. 85). Conversely, the qualities that make \q{grammar}, say, \q{Cognitive} seem uniquely human: our sociality in the complexity of social arrangements and cultural transmission; our \q{theory of other minds}. Certainly animals can have society, culture, and empathy, but the human mind evidently takes these to a qualitatively higher level, making language \i{qua} cognitive system possible. } \p{This argument does not challenge Vakarelov's programme directly, but perhaps it shifts the emphasis. Our cognition may be only one example of cognitive systems \mdash{} which in turn are examples of more general autonomous/reactive/information-controlled systems \mdash{} but there may still be distinct phenomenological and existential qualities to how \i{we} achieve cognition, certainly including human language. I think there are several distinct features we can identify with respect to \i{human} \q{cognitive frames}, which call for a distinct pattern of analysis compared to generic \q{$M$} systems, in Vakarelov's terms. } \p{I'll mention the following: \begin{description}\item[Multi-Scale Situationality] We understand situations as immediate contexts for our thoughts and actions, but we also recognize situations as parts of larger contexts, and connected to each other in chains stretching into past and future. For example, as a train pulls into a subway station, our immediate situation may be needing to determine if this is the train we need to board. But this is linked to the larger situation of traveling to our destination; and situations are strung together as enactive episodes: once I determine which is the correct train, I need to enact the process of boarding and getting comfortable on the train, then get ready to reverse the process and disembark at my station.
All of this inter-situational orchestration can be planned and facilitated, to the degree that multiple people are involved, through language. \item[Conversational Frames] Our \i{cognitive} frames modeling situations and our immediate environments include models of ongoing \i{conversations}. I think this is an example of what Vakarelov calls \q{sub-systems}: within our intellectual \q{systems} that track outside reality, there is a part that specifically tracks what people are saying \mdash{} so that we can take note of what they believe, how they are using different words, what they consider or would deem relevant to the current topic (or situation) \mdash{} all of which helps us use language to reason through situations intersubjectively. I will discuss the architecture of conversation frames more in Section 3. \item[Conceptual Roles] We have, I believe, a unique ability to fuse perceptual and conceptual detail in understanding situations. That is, we identify objects perceptually while also placing them in a contextual matrix, where functional properties may be foregrounded above directly perceptual ones. If, say, we hear someone ask for a glass of water and see someone else hand her one, we understand the glass not only through its sensate qualities \mdash{} or even through our pragmatic/operational interpretations, like believing that the solidity of the glass prevents the water from leaking out \mdash{} but we also interpret people's practical intentions and mental attitudes. We infer that the first person was thirsty and the second cooperated by providing her with water to quench her thirst. Interpreting the situation at that interpersonal level, not just at a sensory/perceptual or a force-dynamic level, enables us to understand situational variations, like responding to requests for a \i{glass} of water by bringing a \i{bottle}. \end{description} In short, to understand \i{how} our cognitive frames align with environing patterns we have to understand the role language plays in this process: a role which can be intersubjective, empathic, context-sensitive, defined by conceptual substitutions and interpersonal cues as much as by rigid rules. } \p{And yet, I think Vakarelov's larger point remains in force: we need to get beyond both Externalism and Internalism in the sense that we need to get beyond a debate as to whether \i{words} have \q{intramental} or \q{extramental} \i{meanings}. For instance, we need to think past an apparent choice between deciding that the word \q{water} has a \i{meaning} which is either intramental (determined by the sum of each person's beliefs and dispositions about water) or extramental (determined by how our water-experiences are structured, even beyond our knowledge, by the physical nature of water). In place of either option, we should say that the meaning of the word \i{water} \mdash{} or \i{chair}, in Vakarelov's example \mdash{} depends on all the cognitive systems interacting with linguistic understanding. The word or concept does not exist in our \q{language-processing system} in isolation; so its meaning is not just \i{linguistic} meaning but how word-tokens and concept-instances become passed from system to system. } \p{Insofar as we have a token of the word \i{water} \mdash{} presumably tied to a concept-instance \mdash{} the specific fact of our hearing the word is joined in with a plethora of other perceptual and rational events. Say, we hear someone ask for water, and soon after see someone bring her a glass.
We instinctively connect our perceptual apprehension of the glass of water with the word heard spoken before, and we presumably remain vaguely aware of the situation as things unfold \mdash{} if we see her drink from the glass, we connect this to our memory of her asking for water, indicating thirst, and then getting a glass in response. We do not need to track these affairs very attentively \mdash{} it's not like we should or need to stare at her intently while she drinks \mdash{} but it fades into the background rationality that we tend to attribute to day-to-day affairs. Her glass of water \mdash{} how it continues to serve a useful purpose, how she and maybe others interact with it \mdash{} becomes a stable if rather mundane part of our current situation. } \p{In Vakarelov's words, \begin{quote}To determine whether a particular macro-state of $S$ is informationally relevant, i.e. whether it is differentially significant for the purposeful behavior of the system, we must trace the dynamical trajectories of the system and determine ... whether the microstate variation within the macro-states is insignificant for the purposeful behavior.... Let us call such macro-states \i{informationally stable}. \cite[p. 15]{OrlinVakarelov} \end{quote} An intrinsic dimension of situational models, surely, is that they recognize the relatively stable patterns of situations: a glass placed on a table will typically remain there until someone moves it. Situations are, in this sense, large compilations of distinct quanta of relative stability: in a dining context, every glass or plate or knife, every chair or table, every seated person, is an island of relative stability, whose state will change gradually if at all. So a large part of our cognitive processing can be seen as recognizing and tracking these stabilities. Stability is the underlying medium through which situational models are formed. } \p{Ultimately, many cognitive systems contribute to such models: quanta of stability lie in the cross-hairs of multiple cognitive modalities. So we connect the water spoken about to water in a glass. If we have our own glass we connect both the linguistic and visual content to the tactile feel of the glass and the kinaesthetic intentionality exercised as we pick it up. We can imagine concepts like \i{this water} pinging between these various cognitive registers. } \p{I take Vakarelov's ITM model (or metatheory, maybe) as saying that we should look at \i{meaning} through the interstices between systems, not as some semiotic accounting summed up either \q{inside} or \q{outside} the mind. The meaning of a broad concept like \i{water} is subsidiary to the meaning of more context-bound concepts like \i{glass of water}, \i{body of water}, \i{running water}: and to excavate conceptual meanings in these situationally anchored cognitions we need to think through the \i{conceptual roles} we instinctively pin onto the concept-exemplified: whether manifest as an element of language or perception/enaction, or both. }
{ "alphanum_fraction": 0.8, "avg_line_length": 48.8510479042, "ext": "tex", "hexsha": "ef6ea0cb88649baad2abd821fcb9c457b638a5d0", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_forks_repo_licenses": [ "BSL-1.0" ], "max_forks_repo_name": "ScignScape-RZ/ntxh", "max_forks_repo_path": "itm/ngml/section1.ngml.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSL-1.0" ], "max_issues_repo_name": "ScignScape-RZ/ntxh", "max_issues_repo_path": "itm/ngml/section1.ngml.tex", "max_line_length": 118, "max_stars_count": null, "max_stars_repo_head_hexsha": "8e3fe51f5e9071fb24b41586b5151576a932dd1b", "max_stars_repo_licenses": [ "BSL-1.0" ], "max_stars_repo_name": "ScignScape-RZ/ntxh", "max_stars_repo_path": "itm/ngml/section1.ngml.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 15608, "size": 65265 }
\subsection{meda} Matrix Exploratory Data Analysis (meda) is a package being developed to allow for easy generation of modern summary statistics effective for high-dimensional data analysis. \begin{compactitem} \item Source code: \href{https://github.com/neurodata/meda}{https://github.com/neurodata/meda} \item Example output generated from Fisher's Iris data is here: \href{http://docs.neurodata.io/meda}{http://docs.neurodata.io/meda} \end{compactitem} The goal of this package is to realize the following checklist: Given a new set of n samples of vectors in $\mathbb{R}^d$ \begin{compactenum} \item histogram of feature types (binary, integer, non-negative, character, string etc.) \item \# NaNs per row? Per column? Infs per row? Per column? "Zero" variance rows? columns? \item Heat map of raw data that fits on screen (k-means++ to select 1000 samples, CUR to select 100 dimensions) \item 1st moment statistics \begin{compactenum} \item mean (line plot + heatmap) \item median (line plot + heatmap) \end{compactenum} \item 2nd moment statistics \begin{compactenum} \item correlation matrix (heatmap) \item matrix of energy distances (heatmap) \end{compactenum} \item density estimate \begin{compactenum} \item 1D marginals (Violin + jittered scatter plot of each dimension, if n > 1000 or d>10, density heatmaps) \item 2D marginals (Pairs plots for top ~8 dimensions, if n*d>8000, 2D heatmaps) \end{compactenum} \item Outlier plot \item cluster analysis (IDT++) \begin{compactenum} \item BIC curves \item mean line plot \item covariance matrix heatmaps \end{compactenum} \item spectral analysis \begin{compactenum} \item cumulative variance (with elbows) of data matrix \item eigenvectors (pairs plot + heatmap) \end{compactenum} \end{compactenum} \begin{compactitem} \item To rescale the data in case of differently scaled features, we will implement the following options: \begin{compactitem} \item raw \item linear options \begin{compactitem} \item linear squash between 0 \& 1 \item mean subtract and standard deviation divide \item median subtract and median absolute deviation divide \item make unit norm \end{compactitem} \item nonlinear \begin{compactitem} \item rank \item sigmoid squash \end{compactitem} \end{compactitem} \item To robustify in the face of outliers, we will utilize \href{http://projecteuclid.org/euclid.bj/1438777595}{Geometric median and robust estimation in Banach spaces} \item { if features have categories} \begin{compactenum} \item sort by category \item color code labels by category \end{compactenum} \item { if points have categories}: label points in scatter plots by symbol \end{compactitem} % \end{document}
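To make a few of the checklist items above concrete, here is a minimal sketch of the kind of first-pass computation involved, written in Python with \texttt{pandas} purely for illustration and assuming an all-numeric $n \times d$ data matrix. It is independent of meda's actual implementation, and none of the function names below are part of its API; it covers the missing-data screening of item 2, the first-moment statistics of item 4, and the ``median subtract, median absolute deviation divide'' rescaling option listed above.

\begin{verbatim}
import pandas as pd

def first_pass_summary(X: pd.DataFrame) -> dict:
    """Illustrative subset of the checklist: items 2 and 4 (not meda's API)."""
    variances = X.var()
    return {
        "nan_per_row": X.isna().sum(axis=1),                        # item 2
        "nan_per_col": X.isna().sum(axis=0),                        # item 2
        "zero_var_cols": variances[variances == 0].index.tolist(),  # item 2
        "col_mean": X.mean(),                                       # item 4(a)
        "col_median": X.median(),                                   # item 4(b)
    }

def robust_rescale(X: pd.DataFrame) -> pd.DataFrame:
    """Rescaling option: median subtract and median absolute deviation divide."""
    med = X.median()
    mad = (X - med).abs().median()
    return (X - med) / mad.replace(0, 1.0)  # guard against constant columns
\end{verbatim}

The remaining items (density estimates, outlier plots, cluster and spectral analysis) build on the same matrix view of the data; the sketch above is only meant to show the shape of the first-pass checks.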
{ "alphanum_fraction": 0.7381714692, "avg_line_length": 33.4642857143, "ext": "tex", "hexsha": "a3619bb16c2d703c7fb0f07bda5153441ae49275", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "openconnectome/SIMPLEX_Q2", "max_forks_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "openconnectome/SIMPLEX_Q2", "max_issues_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_line_length": 121, "max_stars_count": null, "max_stars_repo_head_hexsha": "f10a6c4b9548670f9bf8e177914aa8d25fa1230b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "openconnectome/SIMPLEX_Q2", "max_stars_repo_path": "Reporting/reports/2016-12Q4/meda.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 763, "size": 2811 }
\documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage{parskip} \usepackage{markdown} \usepackage{hyperref} \usepackage{listings} \usepackage{color} \usepackage[subtle]{savetrees} \usepackage{verbatim} \usepackage{blindtext} \title{\vspace{-2cm} \textbf{Session 7 - Debug Session} \\ UCAS Program 2020} \author{Chloe Lau} \date{August 2020} \begin{document} \setlength{\parindent}{4ex} \setlength{\parskip}{1em} \maketitle \section{Ah Well...} ``It's the last group seminar, oh no I still have many questions! I need help with university selection! Tell me more about university life! How do I strive through A Levels?'' This is why this is called a Debug Session: we will fix the problems with you! Bombard me with questions and I'm here to solve them :) There are no general notes for this seminar; it is mostly questions by you, answered by me. \section{Final Words} It has been a pleasure to teach this cohort, and I am surprised by how many Computer Scientists CSFC is producing now, more than double that of my year. I wish you all the best in your future endeavours, and good luck with your UCAS preparation. To keep in contact after the course: you can find my details \href{https://chloelwt.com}{here}. The repository for all of these \LaTeX{} notes will be up on my \href{https://github.com/chloelaucodes/ucas_program_notes}{GitHub}. Hoping I will see some of you at Imperial in the near future! Good luck and have fun with the interviews. \setlength{\parindent}{0pt} Cheers,\newline Chloe x \end{document}
{ "alphanum_fraction": 0.7674722404, "avg_line_length": 33.2826086957, "ext": "tex", "hexsha": "737c4b4a8e1c8f81c1c5edfa6024d740fa6136db", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "chloelaucodes/ucas_program_notes", "max_forks_repo_path": "Seminar7/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "chloelaucodes/ucas_program_notes", "max_issues_repo_path": "Seminar7/main.tex", "max_line_length": 240, "max_stars_count": 1, "max_stars_repo_head_hexsha": "b1f08173ff9aca53719b0daae0884512441fcb7e", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "chloelaucodes/ucas_program_notes", "max_stars_repo_path": "Seminar7/main.tex", "max_stars_repo_stars_event_max_datetime": "2020-08-20T14:59:58.000Z", "max_stars_repo_stars_event_min_datetime": "2020-08-20T14:59:58.000Z", "num_tokens": 412, "size": 1531 }
% !TeX spellcheck = en_US \chapter{Backtracking} \section{Problem's specification} Pattern: For a given problem we search all the sequences $\bm{x_{1}x_{2} \ldots x_{n}}$ for which some property $\bm{P_n(x_{1},x_{2}, \ldots, x_{n})}$ holds \bigskip where: $\bm{x_k \in D_k}$ (some given domain of integers) \\ The backtrack method consists in designing ``cutoff''/``bounding'' properties $\bm{P_l(x_{1},x_{2}, \ldots, x_{l})}$ for $\bm{1\leq l < n}$ such that: \begin{itemize} \item $\bm{P_l(x_{1},x_{2}, \ldots, x_{l})}$ is true whenever $\bm{P_{l+1}(x_{1},x_{2}, \ldots, x_{l+1})}$ is true; \item $\bm{P_l(x_{1},x_{2}, \ldots, x_{l})}$ is simple to test, if $\bm{P_{l-1}(x_{1},x_{2}, \ldots, x_{l-1})}$ holds. \end{itemize} A generic implementation sketch of this search pattern is given at the end of this chapter. \section{References} Backtracking from \cite{KnuthArtOfCompProg4-5b}
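\section{Implementation sketch}
To make the pattern above concrete, the following is a minimal, generic sketch in Python (not taken from \cite{KnuthArtOfCompProg4-5b}; the predicate \texttt{P} and the domains $D_k$ are placeholders to be supplied by the concrete problem). A single predicate applied to every prefix plays the role of the cutoff family $P_l$: a partial sequence $x_{1} \ldots x_{l}$ is extended only while the predicate holds.

\begin{verbatim}
from typing import Callable, Iterator, List, Sequence

def backtrack(domains: Sequence[Sequence[int]],
              P: Callable[[List[int]], bool]) -> Iterator[List[int]]:
    """Enumerate all x_1..x_n with x_k in D_k such that the cutoff
    property P holds for every prefix x_1..x_l, 1 <= l <= n."""
    n = len(domains)

    def extend(prefix: List[int]) -> Iterator[List[int]]:
        if len(prefix) == n:
            yield list(prefix)          # full sequence found
            return
        for x in domains[len(prefix)]:
            prefix.append(x)
            if P(prefix):               # cutoff: prune as soon as P_l fails
                yield from extend(prefix)
            prefix.pop()                # undo the choice (backtrack)

    return extend([])

# Example: all length-3 sequences over {0, 1, 2} with strictly increasing entries.
increasing = lambda xs: all(a < b for a, b in zip(xs, xs[1:]))
solutions = list(backtrack([range(3)] * 3, increasing))   # -> [[0, 1, 2]]
\end{verbatim}

Because the cutoff property is checked on every prefix, branches that cannot lead to a full solution are abandoned as early as the property allows, which is exactly where the method gains over exhaustive enumeration.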
{ "alphanum_fraction": 0.6056860321, "avg_line_length": 32.36, "ext": "tex", "hexsha": "6b26fba8ce3f50d8c1d4183b87a23de1f932b4c3", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_forks_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_issues_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_line_length": 145, "max_stars_count": null, "max_stars_repo_head_hexsha": "3e1d074da4c3fe19c0099f9cb0a3680386fc5acd", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "christianpopescu/DataStructuresAndAlgorithms", "max_stars_repo_path": "Doc/Notes on Data Structures and Algorithms - With Implementation/TeX_files/chapter_backtracking.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 292, "size": 809 }
\input{preamble.tex} \newcommand{\ginebig}{{\textsf{G\raisebox{-0.3ex}{i}\raisebox{0.3ex}{n}\raisebox{-0.3ex}{e}big}}} \lhead{\scriptsize Barnes} \chead{\scriptsize} \rhead{\scriptsize \thepage} \lfoot{\scriptsize \ginebig{} User's Guide and Manual} \cfoot{\scriptsize} \rfoot{\scriptsize Version: \today} %****************************************************************************** \begin{document} %****************************************************************************** %============================================================================== % Title block information %============================================================================== \title{\ginebig{}: A Single-layer, Steady-state, Analytic Element-based, Object-oriented Groundwater Modeling Framework Using Python 3.6} \author{ Dr. Randal J. Barnes\\ Department of Civil Engineering\\ University of Minnesota } \date{Draft: \today} \maketitle \thispagestyle{plain} %============================================================================== \section{Introduction} %============================================================================== \appendix \newpage %============================================================================== \section{Code Naming Conventions} %============================================================================== The following is extracted from ``PEP 8 -- Style Guide for Python Code''. \begin{description} \item [function names] Function names should be lowercase, with words separated by underscores as necessary to improve readability. \item [instance variables] Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability. \item [method names] Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability. \item [class names] Class names should normally use the {\tt CapWords} (aka {\tt UpperCamelCase}) convention. \item [package names] Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. \item [module names] Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. \item [exception names] Because exceptions should be classes, the class naming convention applies here. However, you should use the suffix "Error" on your exception names (if the exception actually is an error). \item [constants] Constants are usually defined on a module level and written in all capital letters with underscores separating words. Examples include {\tt MAX\_OVERFLOW} and {\tt TOTAL}. \item [function arguments] Always use {\tt self} for the first argument to instance methods. \item [method arguments] Always use {\tt cls} for the first argument to class methods. \end{description} %****************************************************************************** \end{document} %******************************************************************************
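The naming rules in the appendix are perhaps easiest to absorb from a small example. The following sketch is purely illustrative: the class, method, and constant names are invented for this guide and are not part of the Ginebig code base.

\begin{verbatim}
"""Hypothetical module illustrating the PEP 8 naming rules listed above."""

MAX_ITERATIONS = 100                      # constant: all caps with underscores


class AnalyticElement:                    # class name: CapWords
    """A made-up class used only to illustrate the naming conventions."""

    def __init__(self, strength):
        self.strength = strength          # instance variable: lowercase

    def complex_potential(self, z):       # method: lowercase with underscores
        return self.strength * z          # 'self' is always the first argument

    @classmethod
    def from_strength(cls, strength):     # 'cls' is always the first argument
        return cls(strength)


class SolverError(Exception):             # exception: class name + "Error" suffix
    pass


def solve_system(elements):               # function: lowercase with underscores
    if not elements:
        raise SolverError("no elements to solve")
    return sum(e.strength for e in elements)
\end{verbatim}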
{ "alphanum_fraction": 0.562012987, "avg_line_length": 38.024691358, "ext": "tex", "hexsha": "5f3301267594fbefbf42a778019ac9e778db6176", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2018-06-28T03:33:32.000Z", "max_forks_repo_forks_event_min_datetime": "2018-06-28T03:33:32.000Z", "max_forks_repo_head_hexsha": "a99ffc5c36f8ec0a8b0e9509eb5cfd0f797ab692", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "RandalJBarnes/Ginebig", "max_forks_repo_path": "docs/report/Ginebig.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a99ffc5c36f8ec0a8b0e9509eb5cfd0f797ab692", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "RandalJBarnes/Ginebig", "max_issues_repo_path": "docs/report/Ginebig.tex", "max_line_length": 215, "max_stars_count": null, "max_stars_repo_head_hexsha": "a99ffc5c36f8ec0a8b0e9509eb5cfd0f797ab692", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "RandalJBarnes/Ginebig", "max_stars_repo_path": "docs/report/Ginebig.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 575, "size": 3080 }
\chapter{Related Work}\label{chap:relatedwork} \subsubsection{Microsoft's dts-gen} Microsoft developed \mintinline{text}{dts-gen}, a tool that creates starter declaration files for JavaScript libraries \citep{dts-gen}. Its documentation states, however, that the result is intended to be used only as a starting point. The outcome needs to be refined afterwards by the developers. The tool analyzes the shape of the objects at runtime after initialization, without executing the library. This results in many variables being inferred as \mintinline{text}{any}. \coderef{code:related-work-dts-gen-example} shows an example for module \mintinline{text}{abs}. The solution presented in this work, however, is intended to generate declaration files that are ready to be uploaded to DefinitelyTyped without further manual intervention. Any amount of manual work that a developer needs to do on a declaration file after updating JavaScript code increases the risk of discrepancies between the declaration file and the implementation. Formal aspects like applying the right template and using the correct syntax are perfectly covered by \mintinline{text}{dts-gen}. \begin{code} \begin{bashinline} $ npm i -g dts-gen $ npm i -g abs $ dts-gen -m abs Wrote 5 lines to abs.d.ts. $ cat abs.d.ts /** Declaration file generated by dts-gen */ export = abs; declare function abs(input: any): any; \end{bashinline} \caption[Microsoft's dts-gen example]{\textbf{Microsoft's dts-gen example} - A declaration file for module \mintinline{text}{abs} is generated. Types are inferred as \mintinline{text}{any}. The correct \mintinline{text}{module-function} template is used.} \label{code:related-work-dts-gen-example} \end{code} \subsubsection{TSInfer \& TSEvolve} TSInfer and TSEvolve are presented as part of TSTools \citep{DBLP:conf/fase/KristensenM17}. Both tools are the continuation of TSCheck \citep{DBLP:conf/oopsla/FeldthausM14}, a tool for looking for mismatches between a declaration file and an implementation. TSInfer proceeds in a similar way to TSCheck. It initializes the library in a browser, records a snapshot of the resulting state, and then performs a lightweight static analysis on all the functions and objects stored in the snapshot. The abstraction and the constraints they introduced as part of the static analysis tools for inferring the types have room for improvement. A run-time based approach like the one presented in our work will provide more accurate information, thus generating more precise declaration files. Since they analyze the objects and functions stored in the snapshot, they faced the problem of including in the declaration file internal methods and properties that developers wanted to hide. Run-time information would have revealed that the developer has no intention of exposing such methods. Moreover, TSEvolve performs a differential analysis on the changes made to a JavaScript library in order to determine intentional discrepancies between declaration files of two consecutive versions. We consider that a differential analysis may not be needed. If the developer's intention is accurately extracted and the execution code clearly represents that intention, then the generated declaration file would already describe the newer version of a library without the need for a differential analysis. \subsubsection{TSTest} TSTest is a tool that checks for mismatches between a declaration file and a JavaScript implementation \citep{DBLP:journals/pacmpl/KristensenM17}.
It applies feedback-directed random testing to generate type test scripts. These scripts execute the library in order to check whether it behaves the way it is described in the declaration file. TSTest also provides concrete executions that demonstrate the mismatches. We evaluated the generated declaration files by comparing them to the declaration files uploaded to DefinitelyTyped. The disadvantage of this approach is that, since the uploaded files are written manually, they could already contain mismatches with the JavaScript implementation. However, it is a suitable choice for a development stage since it is used as a baseline. In a final stage, declaration files need to be checked against the proper JavaScript implementation, and TSTest should definitely be taken into account for that purpose.
{ "alphanum_fraction": 0.8119618338, "avg_line_length": 91.4255319149, "ext": "tex", "hexsha": "62af286856ca386ff653e4d7fc867efd25290180", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2019-10-31T18:47:19.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-31T18:47:19.000Z", "max_forks_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "proglang/tsd-generation-report", "max_forks_repo_path": "chapters/55-related.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "proglang/tsd-generation-report", "max_issues_repo_path": "chapters/55-related.tex", "max_line_length": 503, "max_stars_count": null, "max_stars_repo_head_hexsha": "972335ae7cdf6778e62feec260579374bc2c4bc0", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "proglang/tsd-generation-report", "max_stars_repo_path": "chapters/55-related.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 885, "size": 4297 }
% -*- mode: latex; ispell-local-dictionary: "en_GB" -*- \documentclass[xcolor={usenames,dvipsnames}]{beamer} \usepackage[utf8x]{inputenc} \mode<presentation> { \usetheme{Singapore} \usecolortheme{rose} %\setbeamercovered{transparent} \setbeamercovered{invisible} \setbeamercolor*{alerted text}{parent=titlelike} } \renewcommand{\emph}[1]{\alert{#1}} %%% Packages %%% % Use T1 and a modern font family for better support of accents, etc. \usepackage[T1]{fontenc} \usepackage{palatino} % Palatino % Language support \usepackage[english]{babel} % Support for easily changing the enumerator in % enumerate-environments. \usepackage{enumerate} % Support for importing images %\usepackage{graphicx} % Use hyperlinks \usepackage{hyperref} % Don't load xcolors package in beamer: use document class option % instead... %\usepackage[usenames,dvipsnames]{xcolor} % Use colors in tables %\usepackage[pdftex]{colortbl} % My personal list of commonly used math packages and macros \usepackage{mathcommon} % More math symbols (e.g. double-brackets: \llbracket, \rrbracket) \usepackage{stmaryrd} % A nice monospace font for listings, etc. \usepackage[scaled]{beramono} %\usepackage{inconsolata} %\colorlet{Insertion}{NavyBlue} \colorlet{Insertion}{blue} \colorlet{Modification}{ForestGreen} \colorlet{VariableEdge}{Cyan} \colorlet{ImportantPath}{Cerulean} \colorlet{SelectorEdge}{Plum} \colorlet{LightGrey}{gray!40} \colorlet{LightGreen}{green!20} % Scala listings. Use colored Scala style by default. \usepackage{lstscala} \lstset{ style=scala-color, basicstyle=\footnotesize\tt, otherkeywords={do,od,nil}, moredelim=**[is][\color{Insertion}]{@}{@}, moredelim=**[is][\color{Modification}]{>}{<}} \lstnewenvironment{lstscalasmall}{% \lstset{style=scala-color,basicstyle=\scriptsize\tt}}{} % Using TikZ for diagrams \usepackage{tikz} \usetikzlibrary{arrows,fit,matrix,positioning,decorations.pathreplacing} \usepackage{tikz-cd} % for CM-arrow tips. % Don't use externalize with gradients!!! %\usetikzlibrary{external,arrows,fit,matrix,positioning} %\tikzexternalize % Activate externalizing TikZ graphics. % Support per-slide PGF/TikZ keys (http://tex.stackexchange.com/a/6155) \tikzset{onslide/.code args={<#1>#2}{% \only<#1>{\pgfkeysalso{#2}} % \pgfkeysalso doesn't change the path }} \newcommand{\TODO}[1]{\texttt{\textcolor{YellowOrange}{(#1)}}} \newcommand\defeq{\stackrel{\mathclap{\tiny\mbox{def}}}{=}} \newcommand{\transformer}[2]{$\llbracket$ \lstinline@#1@ $\rrbracket_{\mathcal{#2}}$} \newcommand{\transformerDSG}[1]{\transformer{#1}{DSG}} \newcommand{\transformerSSG}[1]{\transformer{#1}{SSG}} \newcommand{\mtransformer}[2]{\llbracket #1 \rrbracket_{\mathcal{#2}}} \newcommand{\mtransformerDSG}[1]{\mtransformer{#1}{DSG}} \newcommand{\mtransformerSSG}[1]{\mtransformer{#1}{SSG}} %%%% Custom macros %%%% \newif\ifcompileTreeSlides %\compileTreeSlidesfalse \compileTreeSlidestrue %%% Document info %%% \title{Solving Shape-Analysis Problems in Languages with Destructive Updating} \subtitle{\#SAV Presentation} \author[Sagiv~et~al.]{% {\small Authors}\\ \vspace{1ex} Mooly Sagiv, Thomas W. 
Reps, Reinhard Wilhelm \\ \vspace{1em} {\small Presenters}\\ \vspace{1ex} Marco Antognini, Sandro Stucki } % To show the TOC at the beginning of each section, uncomment this: % \AtBeginSection[] % { % \begin{frame}<beamer>{Outline} % \tableofcontents[currentsection] % \end{frame} % } % To show the TOC at the beginning of each subsection, uncomment this: % \AtBeginSubsection[] % { % \begin{frame}<beamer>{Outline} % \tableofcontents[currentsection,currentsubsection] % \end{frame} % } % To uncover everything in a step-wise fashion, uncomment this: % \beamerdefaultoverlayspecification{<+->} \setbeamertemplate{footline}[frame number] \date{% \vspace{-1em} \small May 2015\\[2em] \includegraphics[height=7mm]{img/epfl-logo}} %%% Start of the actual document %%% \begin{document} \begin{frame} \titlepage \end{frame} % Short-cut for inline listings using "@" (must come after "\titlepage") \lstMakeShortInline[% style=scala-color,% flexiblecolumns=false,% mathescape=false,% basicstyle=\color{blue!30!darkgray}\tt]@ % No outline, too short a talk... % \begin{frame}{Outline} % \tableofcontents % % You might wish to add the option [pausesections] % \end{frame} % No sections or subsections, too short a talk... % \section{Motivation} % \subsection*{} \begin{frame}[fragile]{Motivation} Analyse the shape of heap allocated data structures. \vspace{1em} Verify that a program preserves shape properties such as: \begin{itemize} \item \textit{list-ness} \item \textit{circular list-ness} \item \textit{tree-ness} \end{itemize} \vspace{1em} This analysis algorithm can be used to find \emph{aliases}, and therefore to optimise code (no alias means it can be more easily parallelised). \end{frame} % CFG nodes. #1: node number, #2: statement. \newcommand{\stmtmini}[1]{% \node[draw, minimum width=5em, minimum height=1.25em, inner sep=1.5pt] (#1) {\lstinline[basicstyle=\tiny\ttfamily]@#1@};} \newcommand{\stmt}[2]{% \node{$v_{#1}$}; \& \node[draw, minimum width=14em, minimum height=1.25em, inner sep=1.5pt] (v#1) {\lstinline[basicstyle=\tiny\ttfamily]@#2@};} \newcommand{\stmtgroup}[3]{% \draw [thick, ImportantPath, decorate, decoration={brace}]([xshift=3em]#1.north east) -- ([xshift=3em]#2.south east) node[black, midway, xshift=0.5em, anchor=west] {#3};} \begin{frame}[fragile]{List Reversal -- Normalisation} \begin{columns}[T] \column{.5\textwidth} \begin{onlyenv}<1> \begin{lstlisting}[mathescape=true] // x points to an unshared list y := nil while x $ \neq $ nil do t := y y := x x := x.cdr y.cdr := t od t := nil \end{lstlisting} \end{onlyenv} \begin{onlyenv}<2-> \begin{lstlisting}[mathescape=true] // x points to an unshared list y := nil while x $ \neq $ nil do @t := nil@ t := y @y := nil@ y := x @t$\color{Insertion}_1$ := nil@ @t$\color{Insertion}_1$ := x.cdr@ @x := nil@ >x := t$\color{Modification}_1$< @y.cdr := nil@ y.cdr := t od @t$\color{Insertion}_1$ := nil@ t := nil \end{lstlisting} \end{onlyenv} \column{.5\textwidth} \pause\pause \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=.75em] { \stmt{1}{y := nil} \\ \stmt{2}{} \\ \stmt{3}{t := nil} \\ \stmt{4}{t := y} \\ \stmt{5}{y := nil} \\ \stmt{6}{y := x} \\ \stmt{7}{t$_1$ := nil} \\ \stmt{8}{t$_1$ := x.cdr} \\ \stmt{9}{x := nil} \\ \stmt{10}{x := t$_1$} \\ \stmt{11}{y.cdr := nil} \\ \stmt{12}{y.cdr := t} \\ \stmt{13}{t$_1$ := nil} \\ \stmt{14}{t := nil} \\ \stmt{15}{} \\ }; \path[->] (v1.north) +(0,+1em) edge (v1) (v1) edge (v2) (v2) edge (v3) (v3) edge (v4) (v4) edge (v5) (v5) edge (v6) (v6) edge (v7) (v7) edge (v8) (v8) edge (v9) (v9) edge 
(v10) (v10) edge (v11) (v11) edge (v12) (v13) edge (v14) (v14) edge (v15); \draw[->] ([yshift=.5ex]v2.east) -- +(2em, 0) |- (v13); \draw[->] (v12.east) -- +(1em, 0) |- ([yshift=-.5ex]v2); \end{tikzpicture} \end{columns} \end{frame} \newcommand{\celldraw}[1]{#1!80!black!80!white}% \newcommand{\cellfill}[1]{#1!80!black!30!white}% % Basic style of a cell box. \tikzstyle{cellbox}=[rectangle, thick, draw=\celldraw{#1}, top color=white,% bottom color=\cellfill{#1}, minimum width=2em, minimum height=1em, inner sep=0] % The matrix style underlying graphs. \tikzstyle{graph}=[matrix, row sep=1pt, column sep=.5em, node distance=0pt,% nodes={anchor=center}] % Basic style for variable labels. \tikzstyle{labelnode}=[font=\tiny, node distance=.6em, inner sep=1pt] % Style for transformer arrow tips. \tikzstyle{trans}=[-cm to] % Basic cell. The first argument is the name of the node (in TeX, % e.g. "xy"), the second one is its printed name (e.g. "n_{x, y}"). % The third argument determines the color. The optional argument % contains extra node parameters (e.g. minimum width=1em creates cell % that is less wide). \newcommand{\cell}[4][]{% \node[cellbox=#4, #1] (#2) {}; \draw[draw=\celldraw{#4}, semithick] (#2.north) -- (#2.south); \path (#2.west) -- coordinate[midway] (#2-car) (#2.center); \path (#2.east) -- coordinate[midway] (#2-cdr) (#2.center); \node[font=\tiny, below=of #2.south west, anchor=north west, inner sep=1pt] {$#3$};}% % A basic cons cell with a circle in the "cdr" box. Arguments are the % same as for \cell. \newcommand{\cons}[4][]{% \cell[#1]{#2}{#3}{#4}% \node[circle, fill=black, inner sep=.75pt] at (#2-cdr) {};}% % A concrete (green) cons cell. \newcommand{\ccons}[3][]{\cons[#1]{#2}{#3}{green}} % A concrete unreachable cons cell - ready to be garbage collected \newcommand{\cucons}[3][]{\cons[#1]{#2}{#3}{LightGreen}} % pattern=soft crosshatch ? % An abstract (blue) cons cell. \newcommand{\acons}[3][]{\cons[#1]{#2}{n_{#3}}{blue}} % A shared (red) cons cell. \newcommand{\scons}[3][]{\cons[#1]{#2}{n_{#3}}{red}} % A "transparent" cons cell (can be used as a placeholder). \newcommand{\tcell}[1][]{ \node[rectangle, minimum width=2em, minimum height=1em, inner sep=0, #1] {};} % A named "transparent" cons cell (can be used as a placeholder). \newcommand{\ntcell}[2][]{ \node[rectangle, minimum width=2em, minimum height=1em, inner sep=0, #1] (#2) {}; \path (#2.west) -- coordinate[midway] (#2-car) (#2.center); \path (#2.east) -- coordinate[midway] (#2-cdr) (#2.center);} % Variable label and edge. Arguments are position, node and label text. \newcommand{\vlabel}[3]{% \node[labelnode, #1=of #2] {$#3$} edge (#2);}% % example: \vlabelwy{xy}{x}{1mm} \newcommand{\vlabelwy}[3]{% \node[labelnode, yshift=#3, left=of #1] {$#2$} edge ([yshift=#3]#1.west);}% % example: \vlabelnx{xy}{x}{1mm} \newcommand{\vlabelnx}[3]{% \node[labelnode, xshift=#3, above=of #1] {$#2$} edge ([xshift=#3]#1.north);}% % Draws a "transformer" arrow. \newcommand{\transarrow}[3]{% \draw[trans] ($(#1.east)+(1em,0)$) -- node[above]{\normalsize #2} ($(#3.west)+(-1.5em,0)$);} % Draws a vertical "transformer" arrow. \newcommand{\vtransarrow}[4][left]{% \draw[trans] ($(#2.south)+(0,-.5em)$) -- node[#1]{\normalsize #3} ($(#4.north)+(0,.5em)$);} \ifcompileTreeSlides \begin{frame}[fragile]{Shape-Graph $SG = \tpl{E_v, E_s}$} %\emph{Not} to be confused by a program control flow graph $G = \tpl{V, A}$ %\vspace{1em} What is a \textbf{shape-graph}? 
\begin{itemize} \item directed graph with nodes and edges \item nodes are called \textcolor{Green}{\textbf{shape-nodes}} \begin{itemize} \item[$\circ$] runtime locations, i.e. heap memory, or \textit{cons-cells} \item[$\circ$] implicitly defined by edges and $shape\_nodes(SG)$ \end{itemize} \item edges are divided into 2 categories: \begin{itemize} \item[$\circ$] \textcolor{VariableEdge}{$E_v$: \textbf{variable-edges}} of the form $ \left[x, n\right] $ \item[$\circ$] \textcolor{SelectorEdge}{$E_s$: \textbf{selector-edges}} of the form $ \tpl{s, sel, t} $ \end{itemize} \end{itemize} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... % Matrix for arranging graph nodes. You can set the overall % spacing through "row sep" and "column sep" or add and remove % horizontal or vertical space in brackets after \& and \\. \matrix[graph, column sep=1em] (g1) %<-- general column spacing { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge[SelectorEdge] (l2) (l2-cdr) edge[SelectorEdge] (l3); % Variable edges. %\vlabel{left}{l1}{x} % need color on edge: \node[labelnode, left=of l1] {$x$} edge[VariableEdge] (l1); \end{tikzpicture} \end{center} %$n, s, t, l_1, l_2$ and $l_3$ are shape-nodes, $x$ a program variable and $sel \in \left\{ car, cdr \right\}$ \end{frame} \begin{frame}[fragile]{$DSG$: Deterministic Shape-Graph} % $E_v$ and $E_s$ are also used as functions: % \begin{itemize} % \item $E_v(x) \defeq \left\{ n \mid \left[x, n\right] \in E_v \right\}$ % \item $E_s(s, sel) \defeq \left\{ t \mid \tpl{s, sel, t} \in E_s \right\}$ % \end{itemize} % % \vspace{1em} A shape-graph is \emph{deterministic} if \begin{enumerate} \item no variable points to more than one node%: $ |E_v(*)| \leq 1$ \item no node has a selector pointing to more than one node%: $ |E_s(*, *)| \leq 1$ \end{enumerate} \vspace{1em} In other words:\\ \hspace{1.5em} It is deterministic when edges behave like \emph{pointers}. \end{frame} \begin{frame}[fragile]{$gc$: Garbage Collection} The $gc$ function removes runtime locations that are not reachable from any program variable: \vspace{1em} $ gc: SG \to SG $\\ $ gc\left(\tpl{E_v, E_s} \right) \defeq \tpl{E_v, E'_s} $ where $ E'_s \subseteq E_s $ and $ \tpl{s, sel, t} \in E'_s $ if and only if there exists $ \left[ x, r \right] \in E_v $ such that there is a path of selector-edges in $ E_s $ from $r$ to $s$. \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... % Initial graph \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \\ \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \& \ccons[minimum width=1em]{l5}{l_5} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l5) (l3-cdr) edge (l4) (l4-cdr) edge (l5); % Variable edges. \vlabel{left}{l1}{x} % After applying gc \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \\ \& \& \ccons[minimum width=1em]{l5}{l_5} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l5); % Variable edges.
\vlabel{left}{l1}{x} % Transformer arrow \transarrow{lhs}{$gc$}{rhs} \end{tikzpicture} \end{center} \end{frame} \begin{frame}[fragile]{$DSG$ Transformers} A program statement is represented as a transform function on $DSG$, denoted \transformerDSG{st}. \vspace{1em} Thanks to program normalisation, there are only 6 statements and they are simple: \begin{enumerate} \item \transformerDSG{x := nil} \item \transformerDSG{x.sel := nil} \item \transformerDSG{x := new} \item \transformerDSG{x := y} \item \transformerDSG{x := y.sel} \item \transformerDSG{x.sel := y} \end{enumerate} \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 1} \begin{flalign*} & \text{\transformerDSG{x := nil}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v - \left\{\left[ \texttt{x}, * \right]\right\}, E_s} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \cucons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l2-cdr) edge (l3); % Variable egdes. \vlabel{above}{l2}{y} % Transformer arrow \transarrow{lhs}{\transformerDSG{x := nil}}{rhs} \end{tikzpicture} \end{center} NB: $gc$ can do some cleaning. \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 2} \begin{flalign*} & \text{\transformerDSG{x.sel := nil}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v, E_s - \left\{ \tpl{s, \texttt{sel}, *} \mid \left[\texttt{x},s\right] \in E_v \right\}} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... % BEFORE \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} % AFTER \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \cucons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} % Transformer arrow \transarrow{lhs}{\transformerDSG{y.cdr := nil}}{rhs} \end{tikzpicture} \end{center} NB: $gc$ can do some cleaning. \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 3} \begin{flalign*} & \text{\transformerDSG{x := new}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v \cup \left\{ \left[ \mbox{x}, \mbox{n}_{new} \right] \right\}, E_s} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... 
\matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \vlabel{above}{l4}{z} % Transformer arrow \transarrow{lhs}{\transformerDSG{z := new}}{rhs} \end{tikzpicture} \end{center} \vphantom{NB: $gc$ can do some cleaning.} \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 4} \begin{flalign*} & \text{\transformerDSG{x := y}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v \cup \left\{ \left[ \mbox{x}, \mbox{n} \right] \mid \left[ \mbox{y}, \mbox{n} \right] \in E_v \right\}, E_s} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l1}{z} \vlabel{above}{l2}{y} % Transformer arrow \transarrow{lhs}{\transformerDSG{z := x}}{rhs} \end{tikzpicture} \end{center} \vphantom{NB: $gc$ can do some cleaning.} \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 5} \begin{flalign*} & \text{\transformerDSG{x := y.sel}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v \cup \left\{ \left[ \mbox{x}, \mbox{t} \right] \mid \left[ \mbox{y}, \mbox{s} \right] \in E_v \wedge \tpl{\mbox{s}, \mbox{sel}, \mbox{t}} \in E_s \right\}, E_s} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. 
\vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \vlabel{above}{l3}{z} % Transformer arrow \transarrow{lhs}{\transformerDSG{z := y.cdr}}{rhs} \end{tikzpicture} \end{center} \vphantom{NB: $gc$ can do some cleaning.} \end{frame} \begin{frame}[fragile]{$DSG$ Transformers: 6} \begin{flalign*} & \text{\transformerDSG{x.sel := y}} \left( \tpl{E_v, E_s} \right) & \\ & \quad \quad \quad \quad \quad \quad \defeq \tpl{E_v, E_s \cup \left\{ \tpl{\mbox{s}, \mbox{sel}, \mbox{t}} \mid \left[ \mbox{x}, \mbox{s} \right] \in E_v \wedge \left[ \mbox{y}, \mbox{t} \right] \in E_v \right\}} \end{flalign*} \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... \matrix[graph, column sep=1em] (lhs) at (-8em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \vlabel{above}{l3}{z} \matrix[graph, column sep=1em] (rhs) at (7em, 0) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} \vlabel{above}{l3}{z} % Transformer arrow \transarrow{lhs}{\transformerDSG{y.cdr := z}}{rhs} \end{tikzpicture} \end{center} \vphantom{NB: $gc$ can do some cleaning.} \end{frame} \begin{frame}[fragile]{Collecting Semantics} Each DSG models one runtime behaviour (of many). \vspace{1em} We need a tool to perform analysis on \textbf{all} possible execution paths, not just one. \vspace{2em} Hence the collecting function $cv: V \to 2^{DSG}$ \\ $cv(v) \defeq \{ $ \transformerDSG{st($v_k$)} $ ( \cdots ( $ \transformerDSG{st($v_1$)} $ ( \tpl{\emptyset, \emptyset} ))) \mid $ \\ \hspace{16em} $ \left[ v_1, \ldots, v_k \right] \in pathsTo(v) \} $ \vspace{2em} $V$ is the vertex set of the \textit{regular} control flow graph $G = \tpl{V, A}$. \end{frame} \begin{frame}{Abstraction, Abstractly} \vspace{-0.5em} \begin{center} \begin{tikzpicture}[semithick,ampersand replacement=\&, >=cm to] \Large % Shape graphs \matrix[matrix, column sep=6em, row sep=3em, inner sep=10pt] { \uncover<2->{\node[fill=gray!20!white] {$\subseteq \; \cup$};} \& \uncover<3->{\node[fill=gray!20!white] {$\sqsubseteq \; \sqcup$};} \\[-2em] \node (D) {$\phantom{\{} DSG \phantom{\}}$}; \& \uncover<3->{\node (S) {$SSG$};} \\ }; % Extra "set braces" \uncover<2->{\node at (D) {$\{ \phantom{DSG} \}$};} % Arrows. \uncover<3->{\draw[->] (D) -- node[above]{$\alpha$} (S);} \end{tikzpicture} \end{center} \pause\pause\pause \emph{Static shape graphs:} $SSG = \tpl{SG, is\_shared} $ \begin{itemize} \item $SG$ is a shape graph \item $is\_shared\colon shape\_nodes(SG) \to \set{T, F}$ is a predicate identifying nodes that were shared in the DSG \end{itemize} \end{frame} % 6. 
abstraction function with SSG % - relation between quotient and alpha function(s) \begin{frame}{Abstraction -- Helpers} \begin{itemize} \item Grouping nodes by variable labels \begin{align*} \alpha_s[DSG] &\colon shape\_nodes(DSG) \to \setof{n_X}{X \subseteq PVar}\\ \alpha_s[DSG] &(r) \defeq n_{\setof{ x \in PVar }{[x, r] \in E_v}} \end{align*} \item Initialisation of the sharing predicate \begin{align*} induced\_is\_shared[DSG] &\colon shape\_nodes(DSG) \to \set{T, F}\\ induced\_is\_shared[DSG] &(t) \defeq \abs{\set{\tpl{*, *, t} \in E_s}} \geq 2 \end{align*} \item Projection (a.k.a.\ quotienting) of SGs with respect to $f$ \begin{align*} \tpl{SG, p} \downarrow f \end{align*} \end{itemize} \end{frame} % 6. abstraction function with SSG % - relation between quotient and alpha function(s) \begin{frame}{Abstraction} \begin{itemize} \item Projection/quotient of a single DSG \begin{align*} \hat{\alpha} &\colon \mathcal{DSG} \to \mathcal{SSG}\\ \hat{\alpha} &(DSG) \defeq \tpl{DSG', induced\_is\_shared[DSG']} \downarrow \alpha_s[DSG']\\ &\quad \text{where } DSG' = gc(DSG) \end{align*} \item Abstraction function \begin{align*} \alpha &\colon 2^{\mathcal{DSG}} \to \mathcal{SSG}\\ \alpha &(S) \defeq \bigsqcup_{DSG \in S} \hat{\alpha}(DSG) \end{align*} \end{itemize} \end{frame} % 7. commutative diagram for [st]DSG / alpha / [st]SSG \begin{frame}{Abstract Interpretation, Abstractly} \vspace{-0.5em} \begin{center} \begin{tikzpicture}[semithick,ampersand replacement=\&, >=cm to] \Large % Shape graphs \matrix[matrix, row sep=3em, inner sep=10pt] { \uncover<3->{\node[fill=gray!20!white] {$\subseteq \; \cup$};} \&[5em] \&[-1em] \uncover<4->{\node[fill=gray!20!white] {$\sqsubseteq \; \sqcup$};} \\[-2em] \node (D1) {$\phantom{\{} DSG_1 \phantom{\}}$}; \& \& \uncover<4->{\node (S1) {$SSG_1$};} \\ \uncover<2->{\node (D2) {$\phantom{\{} DSG_2 \phantom{\}}$};} \& \uncover<5->{\node (S2) {$SSG_2$};} \& \uncover<6->{\node (S2bis) {$\sqsubseteq SSG'_2$};} \\ }; % Extra "set braces" \uncover<3->{ \node at (D1) {$\{ \phantom{DSG_1} \}$}; \node at (D2) {$\{ \phantom{DSG_2} \}$};} % Arrows. \uncover<2>{\draw[->] (D1) -- node[left]{$\phantom{\{} \mtransformerDSG{s} \phantom{\}}$} (D2);} \uncover<3->{\draw[->] (D1) -- node[left]{$\{ \mtransformerDSG{s} \}$} (D2);} \uncover<4->{\draw[->] (D1) -- node[above]{$\alpha$} (S1);} \uncover<6->{\draw[->] (S1) -- node[right]{$\mtransformerSSG{s}$} (S2bis);} \uncover<5->{\draw[->] (D2) -- node[above]{$\alpha$} (S2);} \end{tikzpicture} \end{center} \end{frame} % 8. some examples for "abstract transformation rules" [st]SSG % - an easy one - new \begin{frame}{Abstract Interpretation -- Examples} Allocating a new node. \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph] (d1) at (-5em, 3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable edges. \vlabel{left}{l1}{x} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 3em) { \acons{x}{\{x\}} \& \acons{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable edges.
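% Worked reading of the example above (comment only): in DSG 1 the cell l_1
% carries the variable x, so alpha_s maps it to the abstract node n_{x};
% l_2 and l_3 carry no variables and are both mapped to the summary node
% n_{emptyset}. The cdr self-loop on n_{emptyset} records that the summary
% node may point to itself, i.e. it can stand for a chain of concrete cells
% of any length.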
\vlabel{left}{x}{x} %%% DSG 2 (bottom left) %%% \matrix[graph] (d2) at (-5em, -3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \\ \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{left}{l4}{y} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -3em) { \acons{x}{\{x\}} \& \acons{e}{\emptyset} \\ \acons{y}{\{y\}} \\ }; % Selector edges. \path[->] (x-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{left}{y}{y} %%% Transformer arrows %%% \vtransarrow{d1}{\transformerDSG{y := new}}{d2} \transarrow{d1}{$\hat{\alpha}$}{s1} \transarrow{d2}{$\hat{\alpha}$}{s2} \vtransarrow[right]{s1}{\transformerSSG{y := new}}{s2} \end{tikzpicture} \end{center} \end{frame} % 8. some examples for "abstract transformation rules" [st]SSG % - an easy one - assigning nil to fields \begin{frame}[fragile]{Abstract Interpretation -- Examples (cont.)} Assigning @nil@ to a field. \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph] (d1) at (-5em, 3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 3em) { \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (y) (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} %%% DSG 2 (bottom left) %%% \matrix[graph] (d2) at (-5em, -3em) { \tcell[minimum width=1em] \\ \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -3em) { \tcell[minimum width=1.5em] \\ \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} %%% Transformer arrows %%% \vtransarrow{d1}{\transformerDSG{x.cdr := nil}}{d2} \transarrow{d1}{$\hat{\alpha}$}{s1} \transarrow{d2}{$\hat{\alpha}$}{s2} \vtransarrow{s1}{\transformerSSG{x.cdr := nil}}{s2} \end{tikzpicture} \end{center} \end{frame} % 8. some examples for "abstract transformation rules" [st]SSG % - assigning nil to a variable (merges nodes) \begin{frame}[fragile]{Abstract Interpretation -- Examples (cont.)} Assigning @nil@ to a variable. \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph] (d1) at (-5em, 3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. 
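% Worked reading of this example (comment only): after y := nil the cell that
% was also named by y carries no variable any more, so the abstraction maps
% it to the summary node n_{emptyset} together with the rest of the tail; the
% abstract node n_{y} simply disappears. This is the sense in which assigning
% nil to a variable merges shape-nodes.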
\path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 3em) { \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (y) (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} %%% DSG 2 (bottom left) %%% \matrix[graph] (d2) at (-5em, -3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -3em) { \acons[minimum width=1.5em]{x}{\{x\}} \& \tcell[minimum width=1.5em] \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} %%% Transformer arrows %%% \vtransarrow{d1}{\transformerDSG{y := nil}}{d2} \transarrow{d1}{$\hat{\alpha}$}{s1} \transarrow{d2}{$\hat{\alpha}$}{s2} \vtransarrow[right]{s1}{\transformerSSG{y := nil}}{s2} \end{tikzpicture} \end{center} \end{frame} % 8. some examples for "abstract transformation rules" [st]SSG % - materialisation \begin{frame}{Abstract Interpretation -- Examples (cont.)} Materialising a node $n_y$ from the summary node $n_\emptyset$. \vspace{1em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph] (d1) at (-5em, 3em) { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 3em) { \acons[minimum width=1.5em]{x}{\{x\}} \& \tcell[minimum width=1.5em] \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} %%% DSG 2 (bottom left) %%% \matrix[graph] (d2) at (-5em, -3em) { \tcell[minimum width=1em] \\ \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \& \ccons[minimum width=1em]{l3}{l_3} \& \ccons[minimum width=1em]{l4}{l_4} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3) (l3-cdr) edge (l4); % Variable egdes. \vlabel{left}{l1}{x} \vlabel{above}{l2}{y} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -3em) { \tcell[minimum width=1em] \\ \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (y) (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} %%% Transformer arrows %%% \vtransarrow{d1}{\transformerDSG{y := x.cdr}}{d2} \transarrow{d1}{$\hat{\alpha}$}{s1} \transarrow{d2}{$\hat{\alpha}$}{s2} \vtransarrow{s1}{\transformerSSG{y := x.cdr}}{s2} \end{tikzpicture} \end{center} \end{frame} % 8. 
some examples for "abstract transformation rules" [st]SSG % - variable assignment \begin{frame}{Abstract Interpretation -- Examples (cont.)} \vspace{.5em} Variable assignment. \vspace{.5em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph, row sep=1ex, left delimiter=\{, right delimiter=\}] (d1) at (-6em, 4em) { \tcell[minimum width=.25em, minimum height=.25em] \\ \& \ccons[minimum width=1em]{l1-1}{l_1} \& \ccons[minimum width=1em]{l2-1}{l_2} \& \ccons[minimum width=1em]{l3-1}{l_3} \& \ccons[minimum width=1em]{l4-1}{l_4} \\ \& \ccons[minimum width=1em]{l1-2}{l_1} \& \ccons[minimum width=1em]{l2-2}{l_2} \& \ccons[minimum width=1em]{l3-2}{l_3} \& \ccons[minimum width=1em]{l4-2}{l_4} \\ }; % Selector edges. \path[->] (l1-1-cdr) edge (l2-1) (l2-1-cdr) edge (l3-1) (l3-1-cdr) edge (l4-1) (l1-2-cdr) edge (l2-2) (l2-2-cdr) edge (l3-2) (l3-2-cdr) edge (l4-2); % Variable egdes. \vlabel{left}{l1-1}{x} \vlabel{above}{l2-1}{y} \vlabelwy{l1-2}{x}{.5ex} \vlabelwy{l1-2}{y}{-.5ex} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 4em) { \acons[minimum width=1.5em]{x}{\{x\}} \\ \acons[minimum width=1.5em]{xy}{\{x,y\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (y) (xy-cdr) edge (y) (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} \vlabelwy{xy}{x}{.5ex} \vlabelwy{xy}{y}{-.5ex} %%% DSG 2 (bottom left) %%% \matrix[graph, row sep=1ex, left delimiter=\{, right delimiter=\}] (d2) at (-6em, -4em) { \tcell[minimum width=.25em, minimum height=.25em] \\ \& \ccons[minimum width=1em]{l1-1}{l_1} \& \ccons[minimum width=1em]{l2-1}{l_2} \& \ccons[minimum width=1em]{l3-1}{l_3} \& \ccons[minimum width=1em]{l4-1}{l_4} \\ \& \ccons[minimum width=1em]{l1-2}{l_1} \& \ccons[minimum width=1em]{l2-2}{l_2} \& \ccons[minimum width=1em]{l3-2}{l_3} \& \ccons[minimum width=1em]{l4-2}{l_4} \\ }; % Selector edges. \path[->] (l1-1-cdr) edge (l2-1) (l2-1-cdr) edge (l3-1) (l3-1-cdr) edge (l4-1) (l1-2-cdr) edge (l2-2) (l2-2-cdr) edge (l3-2) (l3-2-cdr) edge (l4-2); % Variable egdes. \vlabel{left}{l1-1}{x} \vlabelnx{l2-1}{y}{-.5ex} \vlabelnx{l2-1}{z}{.5ex} \vlabelwy{l1-2}{x}{.75ex} \vlabel{left}{l1-2}{y} \vlabelwy{l1-2}{z}{-.75ex} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -4em) { \tcell[minimum width=1.5em, minimum height=0.25em] \\ \acons[minimum width=1.5em]{x}{\{x\}} \\ \acons[minimum width=1.5em]{xyz}{\{x,y,z\}} \& \acons[minimum width=1.5em]{yz}{\{y,z\}} \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (yz) (xyz-cdr) edge (yz) (yz-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabelnx{yz}{y}{-.5ex} \vlabelnx{yz}{z}{.5ex} \vlabelwy{xyz}{x}{.75ex} \vlabel{left}{xyz}{y} \vlabelwy{xyz}{z}{-.75ex} %%% Transformer arrows %%% \vtransarrow{d1}{\transformerDSG{z := y}}{d2} \transarrow{d1}{$\alpha$}{s1} \transarrow{d2}{$\alpha$}{s2} \vtransarrow[right]{s1}{\transformerSSG{z := y}}{s2} \end{tikzpicture} \end{center} \end{frame} % 8. some examples for "abstract transformation rules" [st]SSG % - strong nullification \begin{frame}{Abstract Interpretation -- Examples (cont.)} \vspace{.5em} Strong nullification. 
\vspace{.5em} \begin{center} \begin{tikzpicture}[semithick, ampersand replacement=\&,% every edge/.append style={->}, >=latex] \Large %%% DSG 1 (top left) %%% \matrix[graph, row sep=1ex, left delimiter=\{, right delimiter=\}] (d1) at (-6em, 4em) { \tcell[minimum width=.25em, minimum height=.25em] \\ \& \ccons[minimum width=1em]{l1-1}{l_1} \& \ccons[minimum width=1em]{l2-1}{l_2} \& \ccons[minimum width=1em]{l3-1}{l_3} \& \ccons[minimum width=1em]{l4-1}{l_4} \\ \& \ccons[minimum width=1em]{l1-2}{l_1} \& \ccons[minimum width=1em]{l2-2}{l_2} \& \ccons[minimum width=1em]{l3-2}{l_3} \& \ccons[minimum width=1em]{l4-2}{l_4} \\ }; % Selector edges. \path[->] (l1-1-cdr) edge (l2-1) (l2-1-cdr) edge (l3-1) (l3-1-cdr) edge (l4-1) (l1-2-cdr) edge (l2-2) (l2-2-cdr) edge (l3-2) (l3-2-cdr) edge (l4-2); % Variable egdes. \vlabel{left}{l1-1}{x} \vlabel{above}{l2-1}{y} \vlabelwy{l1-2}{x}{.5ex} \vlabelwy{l1-2}{y}{-.5ex} %%% SSG 1 (top right) %%% \matrix[graph] (s1) at (6em, 4em) { \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \\ \acons[minimum width=1.5em]{xy}{\{x,y\}} \& \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (x-cdr) edge (y) (xy-cdr) edge (e) (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{above}{y}{y} \vlabelwy{xy}{x}{.5ex} \vlabelwy{xy}{y}{-.5ex} %%% DSG 2 (bottom left) %%% \matrix[graph, row sep=1ex, left delimiter=\{, right delimiter=\}] (d2) at (-6em, -4em) { \tcell[minimum width=.25em, minimum height=.25em] \\ \& \ccons[minimum width=1em]{l1-1}{l_1} \& \ccons[minimum width=1em]{l2-1}{l_2} \& \ccons[minimum width=1em]{l3-1}{l_3} \& \ccons[minimum width=1em]{l4-1}{l_4} \\ \& \ccons[minimum width=1em]{l1-2}{l_1} \& \ccons[minimum width=1em]{l2-2}{l_2} \& \ccons[minimum width=1em]{l3-2}{l_3} \& \ccons[minimum width=1em]{l4-2}{l_4} \\ }; % Selector edges. \path[->] (l2-1-cdr) edge (l3-1) (l3-1-cdr) edge (l4-1) (l2-2-cdr) edge (l3-2) (l3-2-cdr) edge (l4-2); % Variable egdes. \vlabel{left}{l1-1}{x} \vlabel{above}{l2-1}{y} \vlabelwy{l1-2}{x}{.5ex} \vlabelwy{l1-2}{y}{-.5ex} %%% SSG 2 (bottom right) %%% \matrix[graph] (s2) at (6em, -4em) { \tcell[minimum width=1.5em, minimum height=1em] \\ \acons[minimum width=1.5em]{x}{\{x\}} \& \acons[minimum width=1.5em]{y}{\{y\}} \\ \acons[minimum width=1.5em]{xy}{\{x,y\}} \& \& \acons[minimum width=1.5em]{e}{\emptyset} \\ }; % Selector edges. \path[->] (y-cdr) edge (e); \draw[->] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. 
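% Worked reading of this example (comment only): every concrete cell
% represented by n_{x} or n_{x,y} is pointed to by x, so x.cdr := nil nulls
% the cdr of all of them; the abstract transformer can therefore delete the
% outgoing cdr edges of both nodes outright instead of keeping them as mere
% possibilities, which is what makes the nullification "strong".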
\vlabel{left}{x}{x} \vlabel{above}{y}{y} \vlabelwy{xy}{x}{.5ex} \vlabelwy{xy}{y}{-.5ex} %%% Transformer arrows %%% \vtransarrow{[xshift=1em]d1}{\transformerDSG{x.cdr := nil}}{% [xshift=1em]d2} \transarrow{d1}{$\alpha$}{s1} \transarrow{d2}{$\alpha$}{s2} \vtransarrow{s1}{\transformerSSG{x.cdr := nil}}{s2} \end{tikzpicture} \end{center} \end{frame} \begin{frame}[fragile]{List Insertion -- Normalisation} \begin{columns}[T] \column{.5\textwidth}\vspace{-1em} \begin{onlyenv}<1> \begin{lstlisting}[mathescape] // x is an unshared list // e the element to insert y := x while y.cdr $ \neq $ nil $ \wedge \ldots $ do z := y.cdr y := z od t := y.cdr e.cdr := t y.cdr := e t := nil z := nil e := nil y := nil \end{lstlisting} \end{onlyenv} \begin{onlyenv}<2-> \begin{lstlisting}[mathescape] // x is an unshared list // e the element to insert @y := nil@ y := x while y.cdr $ \neq $ nil $ \wedge \ldots $ do @z := nil@ z := y.cdr @y := nil@ y := z od @t := nil@ t := y.cdr @e.cdr := nil@ e.cdr := t @y.cdr := nil@ y.cdr := e t := nil z := nil e := nil y := nil \end{lstlisting} \end{onlyenv} \column{.5\textwidth} \pause\pause \begin{tikzpicture}[ampersand replacement=\&, >=latex, level 2/.style={onslide=<-3>{transparent}}] \tiny \matrix[row sep=.75em] { \stmt{1}{y := nil} \\ \stmt{2}{y := x} \\ \stmt{3}{} \\ \stmt{4}{z := nil} \\ \stmt{5}{z := y.cdr} \\ \stmt{6}{y := nil} \\ \stmt{7}{y := z} \\ \stmt{8}{t := nil} \\ \stmt{9}{t := y.cdr} \\ \stmt{10}{e.cdr := nil} \\ \stmt{11}{e.cdr := t} \\ \stmt{12}{y.cdr := nil} \\ \stmt{13}{y.cdr := e} \\ \stmt{14-18}{...} \\ }; \path[->] (v1.north) +(0,+1em) edge (v1) (v1) edge (v2) (v2) edge (v3) (v3) edge (v4) (v4) edge (v5) (v5) edge (v6) (v6) edge (v7) (v8) edge (v9) (v9) edge (v10) (v10) edge (v11) (v11) edge (v12) (v12) edge (v13) (v13) edge (v14-18); \draw[->] ([yshift=.5ex]v3.east) -- +(2em, 0) |- (v8); \draw[->] (v7.east) -- +(1em, 0) |- ([yshift=-.5ex]v3); \begin{scope}[level 2] \stmtgroup{v1}{v2}{init} %\stmtgroup{v3}{v3}{branch} \stmtgroup{v4}{v7}{loop} \stmtgroup{v8}{v14-18}{rest} \end{scope} \end{tikzpicture} \pause \end{columns} \end{frame} \begin{frame}[fragile]{Sharing} \only<1>{ From $v_1$ to $v_{11}$ \textit{without entering} the loop. \\ Executing \transformer{e.cdr := nil}{} -- $n_{\{t\}}$ is \textbf{not} shared. \begin{center} \begin{tikzpicture}[semithick, % level 1/.style={onslide=<-0>{transparent}}, % level 2/.style={onslide=<-1>{transparent}}, % level 3/.style={onslide=<-2>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % Fig. 8/9, at vertex v11 (before) when loop not executed: % \begin{scope}[level 1] \matrix[graph, column sep=1em] (g1) { \ntcell{x} \& \ntcell{yz} \& \acons{t}{\{t\}} \& \acons{phi}{\emptyset} \\ \& \acons{xy}{\{x, y\}} \& \acons{e}{\{e\}} \& \\ }; % Selector edges. \path[->] (t-cdr) edge (phi) (xy-cdr) edge (t.south west); % loop for phi \draw[->] (phi-cdr) -| ++(1em, 1em) -| (phi.north); % invisible loops \draw[->, transparent] (x-cdr) -| ++(0, 2em) -| ([xshift=-1ex]phi.north); % Variable egdes. \vlabelwy{xy}{x}{1.5mm} \vlabelwy{xy}{y}{-1.5mm} \vlabelwy{t}{t}{1.5mm} \vlabelwy{e}{e}{0mm} % Invisible variable edge. 
\node[labelnode, left=of x] {$\phantom{x}$}; % \end{scope} \end{tikzpicture} \end{center} \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=1.5em] { \stmtmini{init} \\ \stmtmini{loop} \\ \stmtmini{rest} \\ }; \path[ImportantPath, ->, thick] (init.north) +(0,+2em) edge (init); \draw[ImportantPath, ->, thick] (init.east) -- +(2em, 0) |- (rest); \end{tikzpicture} } \only<2>{ From $v_1$ to $v_{11}$ \textit{through} the loop. \\ Executing \transformer{e.cdr := nil}{} -- $n_{\{t\}}$ is \textbf{not} shared. \begin{center} \begin{tikzpicture}[semithick, % level 1/.style={onslide=<-0>{transparent}}, % level 2/.style={onslide=<-1>{transparent}}, % level 3/.style={onslide=<-2>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % Fig. 8/9, at vertex v11 (before) when loop is executed: % \begin{scope}[level 1] \matrix[graph, column sep=1em] (g1) { \acons{x}{\{x\}} \& \acons{yz}{\{y, z\}} \& \acons{t}{\{t\}} \& \acons{phi}{\emptyset} \\ \& \& \acons{e}{\{e\}} \& \\ }; % Selector edges. \path[->] (t-cdr) edge (phi) (x-cdr) edge (yz) (yz-cdr) edge (t); % loops \draw[->] (x-cdr) -| ++(0, 2em) -| ([xshift=-1ex]phi.north); \draw[->] (phi-cdr) -| ++(1em, 1em) -| (phi.north); \draw[->] (phi-cdr) -| ++(0, 1.5em) -| (yz.north); \draw[->] (t-cdr) -| ++(0, 1em) -| ([xshift=1ex]yz.north); % Variable egdes. \vlabelwy{x}{x}{0mm} \vlabelwy{yz}{y}{1.5mm} \vlabelwy{yz}{z}{-1.5mm} \vlabelwy{t}{t}{1.5mm} \vlabelwy{e}{e}{0mm} % \end{scope} \end{tikzpicture} \end{center} \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=1.5em] { \stmtmini{init} \\ \stmtmini{loop} \\ \stmtmini{rest} \\ }; \path[ImportantPath, ->, thick] (init.north) +(0,+2em) edge (init) (init) edge (loop) (loop) edge (rest); \draw[ImportantPath, ->, thick] (loop.east) -| ++(1em, 2em) -| ([xshift=1em]loop.north); \end{tikzpicture} } \only<3>{ From $v_1$ to $v_{11}$ by all possible paths. \\ Executing \transformer{e.cdr := nil}{} -- $n_{\{t\}}$ is still \textbf{not} shared. \begin{center} \begin{tikzpicture}[semithick, % level 1/.style={onslide=<-0>{transparent}}, % level 2/.style={onslide=<-1>{transparent}}, % level 3/.style={onslide=<-2>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % Fig. 8/9, at vertex v11 (before) for all paths: % \begin{scope}[level 1] \matrix[graph, column sep=1em] (g1) { \acons{x}{\{x\}} \& \acons{yz}{\{y, z\}} \& \acons{t}{\{t\}} \& \acons{phi}{\emptyset} \\ \& \acons{xy}{\{x, y\}} \& \acons{e}{\{e\}} \& \\ }; % Selector edges. \path[->] (t-cdr) edge (phi) (x-cdr) edge (yz) (xy-cdr) edge (t.south west) (yz-cdr) edge (t); % loops \draw[->] (x-cdr) -| ++(0, 2em) -| ([xshift=-1ex]phi.north); \draw[->] (phi-cdr) -| ++(1em, 1em) -| (phi.north); \draw[->] (phi-cdr) -| ++(0, 1.5em) -| (yz.north); \draw[->] (t-cdr) -| ++(0, 1em) -| ([xshift=1ex]yz.north); % Variable egdes. 
\vlabelwy{x}{x}{0mm} \vlabelwy{xy}{x}{1.5mm} \vlabelwy{xy}{y}{-1.5mm} \vlabelwy{yz}{y}{1.5mm} \vlabelwy{yz}{z}{-1.5mm} \vlabelwy{t}{t}{1.5mm} \vlabelwy{e}{e}{0mm} % \end{scope} \end{tikzpicture} \end{center} \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=1.5em] { \stmtmini{init} \\ \stmtmini{loop} \\ \stmtmini{rest} \\ }; \path[ImportantPath, ->, thick] (init.north) +(0,+2em) edge (init) (init) edge (loop) (loop) edge (rest); \draw[ImportantPath, ->, thick] (init.east) -- +(2em, 0) |- (rest); \draw[ImportantPath, ->, thick] (loop.east) -| ++(1em, 2em) -| ([xshift=1em]loop.north); \end{tikzpicture} } \only<4>{ From $v_1$ to $v_{12}$ by all possible paths. \\ Executing \transformer{e.cdr := t}{} -- $n_{\{t\}}$ \textbf{is} shared. \begin{center} \begin{tikzpicture}[semithick, % level 1/.style={onslide=<-0>{transparent}}, % level 2/.style={onslide=<-1>{transparent}}, % level 3/.style={onslide=<-2>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % Fig. 8/9, at vertex v12 (before) for all paths: % \begin{scope}[level 1] \matrix[graph, column sep=1em] (g1) { \acons{x}{\{x\}} \& \acons{yz}{\{y, z\}} \& \scons{t}{\{t\}} \& \acons{phi}{\emptyset} \\ \& \acons{xy}{\{x, y\}} \& \acons{e}{\{e\}} \& \\ }; % Selector edges. \path[->] (t-cdr) edge (phi) (x-cdr) edge (yz) (xy-cdr) edge (t.south west) (yz-cdr) edge (t) (e-cdr) edge[RubineRed] ([xshift=1ex]t.south); % loops \draw[->] (x-cdr) -| ++(0, 2em) -| ([xshift=-1ex]phi.north); \draw[->] (phi-cdr) -| ++(1em, 1em) -| (phi.north); \draw[->] (phi-cdr) -| ++(0, 1.5em) -| (yz.north); \draw[->] (t-cdr) -| ++(0, 1em) -| ([xshift=1ex]yz.north); % Variable egdes. \vlabelwy{x}{x}{0mm} \vlabelwy{xy}{x}{1.5mm} \vlabelwy{xy}{y}{-1.5mm} \vlabelwy{yz}{y}{1.5mm} \vlabelwy{yz}{z}{-1.5mm} \vlabelwy{t}{t}{1.5mm} \vlabelwy{e}{e}{0mm} % \end{scope} \end{tikzpicture} \end{center} \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=1.5em] { \stmtmini{init} \\ \stmtmini{loop} \\ \stmtmini{rest} \\ }; \path[ImportantPath, ->, thick] (init.north) +(0,+2em) edge (init) (init) edge (loop) (loop) edge (rest); \draw[ImportantPath, ->, thick] (init.east) -- +(2em, 0) |- (rest); \draw[ImportantPath, ->, thick] (loop.east) -| ++(1em, 2em) -| ([xshift=1em]loop.north); \end{tikzpicture} } \only<5>{ From $v_1$ to $v_{13}$ by all possible paths. \\ Executing \transformer{y.cdr := nil}{} -- \textbf{strong nullification}.$\phantom{n_{\set{t}}}$ \begin{center} \begin{tikzpicture}[semithick, % level 1/.style={onslide=<-0>{transparent}}, % level 2/.style={onslide=<-1>{transparent}}, % level 3/.style={onslide=<-2>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % Fig. 8/9, at vertex v13 (before) for all paths: % \begin{scope}[level 1] \matrix[graph, column sep=1em] (g1) { \acons{x}{\{x\}} \& \acons{yz}{\{y, z\}} \& \acons{t}{\{t\}} \& \acons{phi}{\emptyset} \\ \& \acons{xy}{\{x, y\}} \& \acons{e}{\{e\}} \& \\ }; % Selector edges. \path[->] (t-cdr) edge (phi) (x-cdr) edge (yz) (xy-cdr) edge[LightGrey] (t.south west) (yz-cdr) edge[LightGrey] (t) (e-cdr) edge ([xshift=1ex]t.south); % loops \draw[->] (x-cdr) -| ++(0, 2em) -| ([xshift=-1ex]phi.north); \draw[->] (phi-cdr) -| ++(1em, 1em) -| (phi.north); \draw[->] (phi-cdr) -| ++(0, 1.5em) -| (yz.north); \draw[->] (t-cdr) -| ++(0, 1em) -| ([xshift=1ex]yz.north); % Variable egdes. 
\vlabelwy{x}{x}{0mm} \vlabelwy{xy}{x}{1.5mm} \vlabelwy{xy}{y}{-1.5mm} \vlabelwy{yz}{y}{1.5mm} \vlabelwy{yz}{z}{-1.5mm} \vlabelwy{t}{t}{1.5mm} \vlabelwy{e}{e}{0mm} % \end{scope} \end{tikzpicture} \end{center} \begin{tikzpicture}[ampersand replacement=\&, >=latex] \tiny \matrix[row sep=1.5em] { \stmtmini{init} \\ \stmtmini{loop} \\ \stmtmini{rest} \\ }; \path[ImportantPath, ->, thick] (init.north) +(0,+2em) edge (init) (init) edge (loop) (loop) edge (rest); \draw[ImportantPath, ->, thick] (init.east) -- +(2em, 0) |- (rest); \draw[ImportantPath, ->, thick] (loop.east) -| ++(1em, 2em) -| ([xshift=1em]loop.north); \end{tikzpicture} } \end{frame} \begin{frame}[fragile]{Extensions} \begin{description} \item[Merging Shape-Nodes] to avoid a huge number of nodes ($\leq 2^{|PVar|}$), a widening operator can be introduced. \item[Finding Aliases and Sharing] testing whether $x$ and $y$ are aliases at some point of the program can be extended to test whether two paths can alias by introducing two extra variables. \item[Interprocedural Analysis] \textit{shape-graph-transformations} can be introduced to accurately model procedures. \item[Representing Definitely Circular Structures] with extra special nodes ($n_{atom}, n_{nil}, n_{uninit}$), definitely cyclic data structures can be modelled. \end{description} \end{frame} \fi \if0 \begin{frame}[fragile,t]{Graph example} \vspace{-0.5em}\uncover<2-4>{ % this diagram is visible in slides 2-4. \begin{center} \begin{tikzpicture}[semithick, % The "level X" styles below can be used to uncover graphs or % parts thereof across slides. level 2/.style={onslide=<-2>{transparent}}, level 3/.style={onslide=<-3>{transparent}}, ampersand replacement=\&, every edge/.append style={->}, >=latex] \Large % <-- determines scale of diagram. try \LARGE, \Huge, \small... % Matrix for arranging graph naodes. You can set the overall % spacing through "row sep" and "column sep" or add and remove % horizontal or vertical space in brackets after \& and \\. \matrix[graph, column sep=1em] (g1) %<-- general column spacing { \ccons[minimum width=1em]{l1}{l_1} \& \ccons[minimum width=1em]{l2}{l_2} \&[-.5em] % <-- remove space \ccons[minimum width=1em]{l3}{l_3} \\ }; % Selector edges. \path[->] (l1-cdr) edge (l2) (l2-cdr) edge (l3); % Variable egdes. \vlabel{left}{l1}{x} % This second graph won't be shown until slide 3. \begin{scope}[yshift=-4em, level 2] \matrix[graph] (g2) % { \acons{x}{\{x\}} \& \\ \acons{xt1}{\{x, t_1\}} \& \scons{e}{\emptyset}\\[.5em] % <-- extra space \acons{y}{\{y\}} \& \acons{t}{\{t\}}\\ }; % Selector edges. \path[->] % Normal edges. (x-cdr) edge (e.north west) (xt1-cdr) edge (e) (y-cdr) edge (t); % A loop. Not visible until slide 4. \draw[->, level 3] (e-cdr) -| ++(1em, 1em) -| (e.north); % Variable egdes. \vlabel{left}{x}{x} \vlabel{yshift=1ex,left}{xt1}{x} \vlabel{yshift=-1ex,left}{xt1}{t_1} \vlabel{left}{y}{y} \vlabel{right}{t}{t} \end{scope} \end{tikzpicture} \end{center}} The diagram above won't show up until slide 2. \end{frame} \fi \begin{frame}{Thank you!} \begin{center} \Huge Questions? \end{center} \end{frame} \end{document}
{ "alphanum_fraction": 0.5446488322, "avg_line_length": 29.1608757734, "ext": "tex", "hexsha": "e990e37d6975554460b3500cb6cdd3cfe1f85467", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "7447004719bcfd6ce2b36cffc2476913cc6a6c0c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "mantognini/ssapldu", "max_forks_repo_path": "slides.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7447004719bcfd6ce2b36cffc2476913cc6a6c0c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "mantognini/ssapldu", "max_issues_repo_path": "slides.tex", "max_line_length": 253, "max_stars_count": null, "max_stars_repo_head_hexsha": "7447004719bcfd6ce2b36cffc2476913cc6a6c0c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mantognini/ssapldu", "max_stars_repo_path": "slides.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 22708, "size": 61267 }
\section{Appendix B: Artifacts} Our code is version-controlled on GitHub. We provide a README and commented code at \url{https://github.com/Jmw150/Coq-VST-DLC}.
{ "alphanum_fraction": 0.775, "avg_line_length": 53.3333333333, "ext": "tex", "hexsha": "c40a242e512c196a8ff0bef49a712be8d78d4381", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "1af793083d71d87bc7023d35638c785685d424e7", "max_forks_repo_licenses": [ "FTL" ], "max_forks_repo_name": "Jmw150/Coq-VST-DLC", "max_forks_repo_path": "doc/5-appendix-A.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "1af793083d71d87bc7023d35638c785685d424e7", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "FTL" ], "max_issues_repo_name": "Jmw150/Coq-VST-DLC", "max_issues_repo_path": "doc/5-appendix-A.tex", "max_line_length": 80, "max_stars_count": null, "max_stars_repo_head_hexsha": "1af793083d71d87bc7023d35638c785685d424e7", "max_stars_repo_licenses": [ "FTL" ], "max_stars_repo_name": "Jmw150/Coq-VST-DLC", "max_stars_repo_path": "doc/5-appendix-A.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 45, "size": 160 }
\documentclass[margin]{res} % LaTeX resume using res.cls \usepackage{newcent} % uses new century schoolbook postscript font \usepackage{graphics} \renewcommand{\sectionfont}{\bfseries\scshape} \renewcommand{\sectionskip}{0.35 in} % \renewcommand{\sectionwidth}{1 in} \setlength{\textwidth}{5.1in} % set width of text portion % \tolerance 1414 \begin{document} % \moveleft.5\hoffset\centerline{ % {\fontfamily{ GOOD_FONT }\fontseries{b}\selectfont\Large Abd\"ulkadir Emrehan T\"uz\"un} % } \moveleft.5\hoffset\centerline{\bfseries\Large Abd\"ulkadir Emrehan T\"uz\"un} \moveleft\hoffset\vbox{\hrule width\resumewidth height 1pt}\smallskip \moveleft.5\hoffset\centerline{[email protected]} \moveleft.5\hoffset\centerline{Bilkent University, Dorm 77-423} \moveleft.5\hoffset\centerline{06800 Ankara, Turkey} \begin{resume} \section{Education} {\it B.S. in Computer Science} \\ Bilkent University, Ankara, Turkey \\ Expected June 2016\\ Merit Scholarship \\ GPA: 4.0/4.0 \section{Experience} {\it Research Intern} \hfill August-September 2013 \\ T\"UB\.ITAK UEKAE (The Scientific and Technological Research Council of Turkey, National Research Institute of Electronics and Cryptology), Gebze, Turkey \begin{itemize} \itemsep -2pt %reduce space between items \item Researched cryptographic Distance-Bounding RFID Protocols \item Presented the SKI Protocols to employees of T\"UB\.ITAK, whose slides can be accessed at www.slid.es/emrehan/ski \end{itemize} \section{Computer Skills} Languages: Java, C \\ Tools: Sublime Text, Android Studio, JCreator, DevC++ \section{Related Activities} {\it Computer Society Executive Committee Chair}, Bilkent IEEE Student Chapter \begin{itemize} \item[] Responsible for Mobile Days, Social Media Days, Java Roadshow, \\IEEExtreme, Tournaments, Java and MATLAB Tutorials \end{itemize} Attended the Google European Android Camp, London, UK, 2013 \begin{itemize} \item[] One of the selected 25 students in Europe \end{itemize} Attended IEEExtreme 24-Hour Programming Competition three times Attended Liderlik Forumu, a leadership forum, Ankara, Turkey Attended Free Tools for Software Development training, Akdeniz University, Antalya, Turkey \section{Extra-Curricular Activities} \resizebox{\textwidth}{!} { {\it Orienteering competitor} running for the METU Orienteering and Navigation Team} Active member of Bilkent Orienteering Student Club Member of Bilkent Search, Rescue and First Aid Student Club Assisted orphan children for a semester for Bilkent Social Responsibility Project \end{resume} \end{document}
{ "alphanum_fraction": 0.7321428571, "avg_line_length": 33.8765432099, "ext": "tex", "hexsha": "4e68919245d6d1824a29462cfa922d27f8335d9c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a348858ddb71ba4a27b8ff3f825293982b366838", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "SafaOzturk/resume", "max_forks_repo_path": "resume.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a348858ddb71ba4a27b8ff3f825293982b366838", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "SafaOzturk/resume", "max_issues_repo_path": "resume.tex", "max_line_length": 157, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a348858ddb71ba4a27b8ff3f825293982b366838", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "SafaOzturk/resume", "max_stars_repo_path": "resume.tex", "max_stars_repo_stars_event_max_datetime": "2021-02-11T12:01:23.000Z", "max_stars_repo_stars_event_min_datetime": "2021-02-11T12:01:23.000Z", "num_tokens": 797, "size": 2744 }
\documentclass[10pt,a4paper]{report} \usepackage[latin1]{inputenc} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{graphicx} \usepackage{hyperref} \input{def} \title{Adapting ETDRK4 (Kassam \& Trefethen 2003) for integrating \KS\ equation in the first Fourier mode slice} \author{Nazmi Burak Budanur} \begin{document} \maketitle \section*{Introduction} In \BCDS, we adapted the exponential time differencing fourth-order Runge-Kutta (ETDRK4) method for integrating the 1D \KS\ system within the first Fourier mode slice hyperplane. In this short document, we present details of this implementation which are not immediately related to the content of \BCDS. \section*{Implementation} The ETDRK4 method is used to solve nonlinear ODEs of the following form: \begin{equation} \dot{\sspC} = \vel (\sspC) = L \sspC + N(\sspC) \, , \label{e-ODE} \end{equation} where $\sspC \in \man$ is a complex-valued state space vector, $L$ is a matrix and $N(\sspC)$ is a nonlinear function. The \KS\ equation is of this form with $L_{kl} = (q_k^2 - q_k^4) \delta_{kl}$ (diagonal), $N_k (a) = - i \frac{q_k}{2} \sum_{m=-\infty}^{\infty} \Fu_m \Fu_{k - m}$ and $\sspC = (\Fu_1, \Fu_2, \Fu_3, \ldots)^T\,\mbox{ where } \Fu (\zeit) = \Fourier \{u(x, \zeit)\} $. Numerical integration of the \KS\ equation is explained in detail, with a Matlab code, in \KasTre . This is the code that our implementation \texttt{ksETDRK4red.m} is based on. In the complex representation, the dynamical equations with respect to the first Fourier mode slice time yield \begin{eqnarray} \frac{d \sspRed}{d \zeitRed} &=& Re[\sspRed_1] \vel(\sspRed) - Im[\vel(\sspRed)_1] \, \groupTan(\sspRed) \label{e-ffSliceStatsp} \\ \frac{d \gSpace}{d \zeitRed} &=& Im[\vel(\sspRed)_1] \label{e-ffSlicePhase} \,. \end{eqnarray} where $d \zeitRed = Re[\sspRed_1]^{-1} d \zeit$ is the in-slice time and $\groupTan(\sspRed) = T \sspRed $ is the group tangent. In the complex representation, $SO(2)$ symmetry becomes $U(1)$, hence $T_{kl} = i \, k\delta_{kl}$. Linear and nonlinear parts of equation \ref{e-ffSliceStatsp} are not immediately separable; however, we can add and subtract a constant $\alpha$ to $Re[\sspRed_1]$ in \ref{e-ffSliceStatsp} and rewrite it as follows \begin{equation} \frac{d \sspRed}{d \zeitRed} = \tilde{L} \sspRed + \tilde{N}(\sspRed) . \end{equation} where \begin{eqnarray} \tilde{L} &=& \alpha L , \\ \tilde{N}(\sspRed) &=& \alpha N(\sspRed) + (Re[\sspRed_1] - \alpha) \vel (\sspRed) - Im[\vel(\sspRed)_1] \, \groupTan(\sspRed) . \end{eqnarray} In our implementation, we set the parameter $\alpha$ experimentally to $1$. \end{document}
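% The following comment block sketches how the splitting above can be
% evaluated numerically; it is an illustration only, not the author's
% ksETDRK4red.m. The names are made up for the example: a is the vector of
% complex Fourier modes (a[0] being the first mode), L the diagonal of the
% linear operator, and N a callable returning the nonlinear term N(a).
%
%   import numpy as np
%
%   def inslice_nonlinear(a, L, N, alpha=1.0):
%       """In-slice 'nonlinear' part Ntilde(a); illustrative sketch."""
%       k = np.arange(1, a.size + 1)   # mode indices: a[0] is the first mode
%       na = N(a)                      # nonlinear term N(a)
%       v = L * a + na                 # full vector field v(a) = L a + N(a)
%       t = 1j * k * a                 # group tangent (T a)_k = i k a_k
%       phidot = v[0].imag             # Im[v_1(a)], the phase velocity
%       return alpha * na + (a[0].real - alpha) * v - phidot * t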
{ "alphanum_fraction": 0.7020470053, "avg_line_length": 40.5846153846, "ext": "tex", "hexsha": "cae6f947de69a9e0061040e7109f94a6d7108046", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fa88f488e6f50c0e4252a5c23d8a6a4e14fb5619", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "burakbudanur/ffmSlice", "max_forks_repo_path": "docs/docSlice.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "fa88f488e6f50c0e4252a5c23d8a6a4e14fb5619", "max_issues_repo_issues_event_max_datetime": "2021-02-13T23:33:43.000Z", "max_issues_repo_issues_event_min_datetime": "2019-09-02T00:00:15.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "burakbudanur/ffmSlice", "max_issues_repo_path": "docs/docSlice.tex", "max_line_length": 79, "max_stars_count": null, "max_stars_repo_head_hexsha": "fa88f488e6f50c0e4252a5c23d8a6a4e14fb5619", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "burakbudanur/ffmSlice", "max_stars_repo_path": "docs/docSlice.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 929, "size": 2638 }
\documentclass[main.tex]{subfiles} \begin{document} \section{Electron scattering} \subsection{Compton scattering onto an electron at rest} \marginpar{Sunday\\ 2020-8-23, \\ compiled \\ \today} We need to account for the fact that light has a quantum nature, which is not addressed in the classical treatment of scattering. Also, now we will account for the momentum of the photon. We start off with an electron at rest, and a photon with energy \(h \nu \) and momentum \(h \nu / c\) impinging on it. After the scattering, the photon will have energy \(h \nu '\) and momentum \(h \nu ' / c\), while the electron will have momentum \(m v \gamma \). Let us call the angle between the direction of the incoming photon and the direction of the outgoing one \(\theta \). In terms of four-vectors, we can express the momenta of the photon before and after as % \begin{align} k^{\mu } = \frac{\epsilon}{c} \left[\begin{array}{c} 1 \\ \vec{\Omega} \end{array}\right] \qquad \text{and} \qquad k^{\prime \mu } = \frac{\epsilon'}{c} \left[\begin{array}{c} 1 \\ \vec{\Omega}' \end{array}\right] \,, \end{align} % where \(\vec{\Omega}\) and \(\vec{\Omega}'\) are unit vectors defining the propagation directions, such that \(\vec{\Omega} \cdot \vec{\Omega}' = \cos \theta \). On the other hand, the momenta of the electron will be % \begin{align} p^{\mu } = \left[\begin{array}{c} mc \\ \vec{0} \end{array}\right] \qquad \text{and} \qquad p^{\prime \mu } = \gamma \left[\begin{array}{c} mc \\ m \vec{v} \end{array}\right] \,. \end{align} Since the particles are unchanged after the scattering, both the incoming and outgoing momenta must satisfy \(p^{\mu } p_{\mu } = - m^2 c^2\) and \(k^{\mu } k_{\mu } = 0\) (in any frame: they are Lorentz scalars). Because of momentum conservation, we can also impose the four equations % \begin{align} p^{\mu } + k^{\mu } = p^{\prime \mu } + k^{\prime \mu } \,. \end{align} Solving the system of equations yields % \begin{align} \epsilon' = \frac{\epsilon }{1 + \frac{\epsilon }{mc^2} \qty(1 - \vec{\Omega} \cdot \vec{\Omega}')} \,, \end{align} % implying that we must have \(\epsilon' \leq \epsilon \): the photon will lose energy in the scattering. The calculation which yields the differential cross section of the scattering is quite complicated and requires the full machinery of QED; here we just give the result: % \begin{align} \frac{ \dd{\sigma } }{ \dd{\vec{\Omega} }'} = \frac{r_0^2}{2 } \qty(\frac{\epsilon '}{\epsilon })^2 \qty[ \frac{\epsilon}{\epsilon '} + \frac{\epsilon'}{\epsilon } - \qty(1 - (\vec{\Omega} \cdot \vec{\Omega}')^2)] \,, \end{align} % where \(\epsilon \) is the energy divided by \(m c^2\) and \(r_0 \) is the classical electron radius. This is the Klein-Nishina cross section; see \cite[eq.\ 7.4]{rybickiRadiativeProcessesAstrophysics1979}, where the last term in the brackets is written as \(\sin^2 \theta \). If we substitute in our formula for the energy, using \(\xi = \cos \theta = \vec{\Omega} \cdot \vec{\Omega}'\), we find % \begin{align} \frac{ \dd{\sigma }}{ \dd{\Omega }' \dd{\epsilon }'} = \frac{r_0^2}{2} \frac{1 + \xi^2}{\qty(1 + \epsilon (1 - \xi ))^2} \qty[1 + \frac{\epsilon^2 (1 - \xi )^2}{(1 + \xi^2) (1 + \epsilon (1 - \xi ))}] \delta \qty(\epsilon ' - \frac{\epsilon }{1 + \epsilon (1 - \xi )}) \,.
\end{align} We have inserted a \(\dd{\epsilon '}\) differential for the outgoing photon energy: we are considering a density in one more variable but it is singular there, whence the \(\delta \) function, which describes the distribution in \(\epsilon '\) space --- concentrated at the only value allowed by 4-momentum conservation. Note that this makes sense dimensionally, as the dimension of a \(\delta \) function is the inverse of the dimension of its argument. In the low energy limit, \(h \nu \ll m_e c^2 \) or \(\epsilon \to 0\), we find % \begin{align} \frac{ \dd{\sigma }}{ \dd{\Omega }' \dd{\epsilon }'} = \frac{r_0^2}{2} (1 + \xi^2) \delta (\epsilon ' - \epsilon ) \,, \end{align} % which is the Thomson cross section. Let us now define the \textbf{Compton scattering kernel} \(\sigma \): it is the differential cross section times the electron density, % \begin{align} \sigma (\epsilon \to \epsilon ', \xi ) = n_e \frac{ \dd{\sigma }}{ \dd{\Omega }' \dd{\epsilon }'} \,. \end{align} We can integrate this in order to find the total cross section presented by the electrons to photons of an energy \(\epsilon \): % \begin{align} \sigma (\epsilon ) &= \int \dd{\Omega '} \dd{\epsilon '} \sigma (\epsilon \to \epsilon ', \xi ) \\ &= \frac{3}{4} n_e \sigma_{T} \qty[ \qty(\frac{1 + \epsilon }{\epsilon^3}) \qty(\frac{2 \epsilon (1 + \epsilon )}{1 + 2 \epsilon }- \log \qty(1 + 2 \epsilon )) + \frac{1}{2 \epsilon } \log \qty(1 + 2 \epsilon ) - \frac{1 + 3 \epsilon }{(1 + 2 \epsilon )^2} ] \,. \end{align} How does this differ from the Thomson cross section? For \(\log \epsilon \lesssim -1 \) we have \(\sigma \approx \sigma_T\), while as \(\epsilon \) increases the cross section goes to zero. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{figures/compton-sigma.pdf} \caption{Compton cross section as a function of \(\epsilon \), the photon energy in units of \(m_e c^2\). } \label{fig:compton-sigma} \end{figure} In the low energy limit we have the expansion % \begin{align} \sigma (\epsilon )\approx \sigma_T \qty(1 -2 \epsilon + \frac{26}{5} \epsilon^2) \,. \end{align} The introduction of these nonconservative aspects complicates the radiative transfer equation for scattering. The absorption term is % \begin{align} - \alpha _\nu ^{(s)} I _\nu = - I (\epsilon , \Omega ) n_e \int \dd{\Omega }' \dd{\epsilon }' \frac{ \dd{\sigma }}{ \dd{\Omega }' \dd{\epsilon }' } = - I (\epsilon , \Omega ) \sigma (\epsilon ) \,, \end{align} % while for the emission term the intensity must go inside the integral, so we have % \begin{align} j_\nu^{(s)} = n_e \int \dd{\Omega }' \dd{\epsilon '} I(\epsilon ', \Omega ') \frac{ \dd{\sigma }}{ \dd{\Omega }' \dd{\epsilon }' } = \int \dd{\Omega }' \dd{\epsilon }' I(\epsilon ', \Omega ') \sigma (\epsilon \to \epsilon ', \xi ) \,, \end{align} % which cannot be expressed in terms of the integrated kernel \(\sigma (\epsilon )\). For Thomson scattering the absorption term is similar, with \(n_e \sigma_T\) instead of \(\sigma (\epsilon )\). The emission term, on the other hand, can now be evaluated to yield % \begin{align} \int \dd{\Omega }' \dd{\epsilon }' I(\epsilon ', \Omega ') \sigma (\epsilon \to \epsilon ', \xi ) = \frac{r_0^2}{2} n_e \int \dd{\Omega '} \dd{\epsilon '} I(\epsilon ', \Omega ') (1 + \xi^2) \delta (\epsilon ' - \epsilon ) \sim n_e \sigma_T J(\epsilon ) \,, \end{align} % as long as we neglect the angular dependence of the intensity. We are always restricting ourselves to electrons which are initially at rest.
For them Thomson scattering is a good approximation; photons more energetic than a few tens of \SI{}{keV} (hard X-rays) hardly scatter, since the Klein-Nishina cross section drops at high energies. So, by using the Thomson limit we do not get it wrong by much. The main conclusions we can draw from this are that we expect no spectral modifications (since Thomson scattering is conservative) but we do expect angular redistribution (since Thomson scattering is essentially isotropic). \subsection{Scattering in plasmas} Electrons are not at rest in any realistic astrophysical setting. In order to have free electrons we need a plasma, which is made of (at least partially) ionized gas. So, we need a way to describe the fractional ionization. Let us consider only hydrogen for simplicity: we define the collisional ionization fraction \(x\) (ionized atoms divided by total atoms), which can be expressed in terms of the temperature as % \begin{align} x = \frac{F}{1 + F} \qquad \text{where} \qquad F = 2T \exp(-\frac{\SI{1.58e5}{K}}{T}) \,. \end{align} The way \(x\) looks as a function of temperature is a kind of sigmoid: it is close to zero, then at \(T\) around \SI{e4}{K} it quickly rises, and becomes close to 1 at a few times \SI{e4}{K}. At temperatures larger than \SI{e5}{K} the plasma is basically fully ionized (which makes sense: the first ionization energy of hydrogen is \(\SI{13.6}{eV} \approx \SI{1.6e5}{K}\)). At these temperatures, the electrons will be moving quite a lot because of thermal motion. \subsection{Inverse Compton scattering} Scattering onto moving electrons is often called \textbf{inverse Compton scattering}, since in this case the photon might gain energy instead of losing it. We start off with a photon with momentum \(k^{\mu } = (\epsilon /c) (1, \vec{\Omega})\) and an electron with momentum \(p^{\mu } = \gamma (mc, m \vec{v})\); the outgoing photon and electron momenta, energies and velocities will be denoted with a prime. We know how to deal with scattering off a stationary electron, and stationarity is relative: if we boost to the electron's rest frame we can apply the results from regular Compton scattering, and then we will need to boost back to our frame. We shall denote quantities calculated in the ERF (electron rest frame) with a subscript \(e\): for example the energy of the photon in the ERF before the scattering will be \(\epsilon_e\). This means that in the ERF we will have the equality % \begin{align} \epsilon_e' = \frac{\epsilon_e}{1 + \frac{\epsilon_e}{mc^2} (1 - \cos \Theta )} \approx \epsilon_e \qty(1 - \frac{\epsilon_e}{mc^2} (1 - \cos \Theta )) \approx \epsilon_e \qquad \text{if } \epsilon_e \ll m c^2 \,. \end{align} We are saying that in the rest frame the scattering will basically be conservative; this is for sure an approximation but, because of what we discussed earlier about the Klein-Nishina cross section dropping off at high energies, not a large one. The Lorentz transform from the LAB frame to the ERF for the energy of the photon is: % \begin{align} \epsilon_e = \gamma \epsilon \qty(1 - \beta \cos \theta ) \,, \end{align} % while the transform back from the ERF to the LAB frame is % \begin{align} \epsilon' = \gamma \epsilon_e' \qty(1 + \beta \cos \theta '_e ) \,.
\end{align} Note that the quantities \(\beta \) and \(\gamma \) refer to the velocity of the electron: since we are approximating the scattering as conservative in the ERF calculating them before or after the scattering is the same (the direction of the electron can change: this is accounted for by letting \(\theta\) and \(\theta '\) be different). So, we can just insert these two multiplicative factors one after another to find the total change in energy of the photon: % \begin{align} \epsilon ' &\approx \gamma^2 \epsilon \qty(1 - \beta \cos \theta ) \qty(1 + \beta \cos \theta'_e ) \,. \end{align} \todo[inline]{Inaccuracy in the slides: the first formula does not really make sense, since the approximation of the nonconservativeness factor being equal to one has been made in part of it (\(\beta \) and \(\gamma \) being the same before and after), so it does not make sense to write it out.} Let us neglect the angular terms for now: generally they are of order unity. Instead, the main factor in the formula is \(\gamma^2>1\): this means that in general the energy of the scattered photon is of the order \(\epsilon ' = \gamma^2 \epsilon > \epsilon \). The boost of the energy of the photon depends on how relativistic the electron is; if the electron is very relativistic the boost in energy for the photon can be quite large. In order to make predictions about the effect of this for a population of electrons and photons, we can Lorentz transform the Compton Scattering Kernel: the final result of the manipulation is % \begin{align} \sigma (\epsilon \to \epsilon ', \xi ) = \frac{D}{D'} \sigma_e \qty(\epsilon _e \to \epsilon _e', \xi _e) \,, \end{align} % where \(D = 1 - \vec{\Omega} \cdot \vec{v} / c = 1 - \beta \cos \theta \) and \(D' = 1 - \vec{\Omega}' \cdot \vec{v}' / c \approx 1 + \beta \cos \theta '_e\) are the factors in the Lorentz transforms, which allow us to write \(\epsilon _e = \gamma D \epsilon \) and \(\epsilon _e' = \gamma D' \epsilon '\). The transformation law for \(\xi = \vec{\Omega} \cdot \vec{\Omega}\) is % \begin{align} 1 - \xi = \frac{1 - \xi _e}{\gamma^2 D D'} \,, \end{align} % while the electron energy density transforms as \(n = \gamma n_e\). This allows us to write down an explicit expression for \(\sigma (\epsilon \to \epsilon ', \vec{\Omega}, \vec{\Omega}')\) in the lab frame for a population of single-speed electrons: % \begin{align} \sigma (\epsilon \to \epsilon ', \vec{\Omega}, \vec{\Omega}', v) = \frac{n r_0^2}{2 \epsilon \nu \gamma } \qty[1 + \qty(1 + \frac{1 - \xi }{\gamma^2 D D'})^2 + \frac{\epsilon \epsilon ' (1 - \xi )^2}{\gamma^2 D D'}] \delta \qty(\xi -1 + \frac{\gamma D}{\epsilon'} - \frac{\gamma D'}{\epsilon }) \,, \end{align} % but we must also consider the fact that the electrons are distributed with different velocities: if their distribution is isotropic, so that \(\dd{n} = n f(v) \dd[3]{v}\) then we can integrate across all velocity space, getting an expression like % \begin{align} \sigma (\epsilon \to \epsilon ', \xi ) = \int \dd{v} \sin \theta_v \dd{\theta _v} \dd{\phi _v} v^2 n f(v) \sigma (\epsilon \to \epsilon ', \vec{\Omega}, \vec{\Omega}', v) \,. \end{align} \end{document}
{ "alphanum_fraction": 0.6839011155, "avg_line_length": 46.8833922261, "ext": "tex", "hexsha": "911b5909d6741bfa5b2e48739eac4902a33b44f6", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2021-08-06T16:11:07.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-03T16:20:19.000Z", "max_forks_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jacopok/notes", "max_forks_repo_path": "ap_second_semester/radiative_processes/apr09.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jacopok/notes", "max_issues_repo_path": "ap_second_semester/radiative_processes/apr09.tex", "max_line_length": 374, "max_stars_count": 6, "max_stars_repo_head_hexsha": "805ebe1be49bbd14c6b46b24055f9fc7d1cd2586", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jacopok/notes", "max_stars_repo_path": "ap_second_semester/radiative_processes/apr09.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-13T14:52:50.000Z", "max_stars_repo_stars_event_min_datetime": "2019-10-10T13:10:57.000Z", "num_tokens": 3970, "size": 13268 }
\chapter{Prior Work in Interface Prediction} \label{chap:relatedwork} %In silico prediction of interfaces began in the As previously mentioned, the experimental determination of protein complexes is time and resource intensive, prompting computational modeling approaches. Esmaielbeiki, Krawczyk, Knapp, Nebel, \& Deane ~\cite{esmaielbeiki2015} describe three slightly different computational problems: protein interaction prediction, protein interface prediction, and protein-protein docking. The first problem seeks to identify pairs of proteins which interact, elucidating the complicated protein interaction networks that give rise to cellular processes. The second problem, and the focus of this thesis, is concerned with identifying the specific residues or pairs of residues which make up the interface. The third problem considers two specified proteins and seeks the bound 3D structure of their complex. \section{Docking} Docking begins with the known unbound structures of two proteins known to interact, and conducts two main steps: search and scoring. During search, the proteins are translated and rotated relative to each other and brought into contact to create a several putative 3D bound structures for the complex. The putative structures are then evaluated by a scoring function to identify the most likely conformation of the complex. Different docking methods differ in their the search algorithms and scoring functions. Scoring functions may incorporate complementarity in geometry, chemistry, and electrostatics, or incorporate van der Walls forces or evidence based (i.e. statistical) potentials~\cite{tuncbag2011}\cite{janin1995}. It is worth noting that docking methods can also be used to predict interfaces, by first solving for the 3D structure of the complex and then extracting the interface from the complex. Indeed, docking methods were some of the earliest computational approaches developed for modeling protein interactions~\cite{janin1995}. One of the major advantages of docking is its ability to produce interface predictions \emph{ab initio} without requiring examples of known interfaces, which is particularly useful when experimental complex data are sparse or absent. Unfortunately, docking methods traditionally suffer from relatively high false positive rates, are considerably less effective for complexes which undergo conformational change when binding than those that do not, and are computationally expensive because of the vast search space~\cite{janin1995}\cite{tuncbag2011}. \section{Other Early Methods} Some early alternatives to docking used sequence information, residue properties, and unbound structures for each protein in the complex to directly predict the interface without predicting the structure of the whole complex. Lichtarge, Bourne, \& Cohen~\cite{lichtarge1996} used inferred evolutionary relationships between different proteins to identify conserved residues and then identified those conserved residues which lay on the surface of the protein. This method was based on the hypothesis that conserved surface residues must be vital to a protein's function and therefore probably constitute an interface. Pazos, Helmer-Citterich, Ausiello, \& Valencia~\cite{pazos1997} took a similar approach, but instead looked at evolutionary relationships between protein complexes and identified pairs of residues between the proteins in the complex which have co-evolved. 
This method requires only sequence information and therefore is applicable even in cases where the protein structures are unknown, but relies on having sufficient data to infer evolutionary relationships. Additionally, this method is partner-specific since it identifies residue pairs which show correlated changes. Gallet, Charloteaux, Thomas, \& Brasseur~\cite{gallet2000} used a sliding window on a protein sequence and calculated measures of hydrophobicity in a region, which can easily be calculated knowing the residue identities and secondary structure. This method requires no phylogenetic information so is applicable even when no close evolutionary relatives can be identified. Early methods such as those listed above were crafted for the available data and computational resources of the time, but were unable to fully account for the growing body of research surrounding protein interfaces and their properties, as in Jones \& Thornton~\cite{jones1996}. It was Jones \& Thornton~\cite{jones1997} that proposed a method which incorporates multiple structural features such as surface planarity, protrusion, and accessible surface area, with residue level features such as solvation potential, hydrophobicity, and interface residue propensity. They constructed a manual scoring function whose inputs are the aforementioned features and output is a score, where higher scores are intended to correspond with members of the interface. They constructed a different scoring function for each of three different categories of complex, reflecting observations made into the characteristics of different complex types. These categories were homomeric and small heteromeric proteins, larger heteromeric proteins, and antibody/antigen complexes. Prediction was performed on small patches of residues. Like docking, these methods avoid using examples of known interfaces when making predictions. Evaluation and comparison of early methods was challenging due to the paucity of experimentally determined protein interfaces~\cite{esmaielbeiki2015}. Thankfully, the turn of the twenty first century coincided with an increase in the number of experimentally determined structures added to databases such as the Protein Data Bank~\cite{berman2000}. Curated subsets also emerged which focused on evaluating protein-protein docking methods, such as the Critical Assessment of Predicted Interactions (CAPRI)~\cite{janin2003} and the Docking Benchmark Dataset (DBD)~\cite{chen2002}. These datasets also became useful in the evaluation (and sometimes training) of interface prediction methods. \section{Data Driven Methods} The increasing availability of data and increased interest in interface prediction led to a growing number and diversity of approaches. Template based methods emerged which utilize a non-redundant library of known protein interfaces to make predictions about unknown proteins. For a given query protein, a search is made in the library for known complexes where a partner is similar to the query protein. The interface of the query protein is then inferred from the interfaces of the most similar query results. Similarity may be measured via sequence or structural similarity~\cite{esmaielbeiki2015}. Other data-driven methods have appeared which are based on either machine learning or statistical methods. Some early machine learning based approaches used a support vector machine (SVM) to classify residues as either belonging to an interface or not. %TODO: explain SVMs? 
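Since kernels and trained scoring functions recur throughout this chapter, it is worth recalling (in completely standard form, independent of any particular method discussed here) that a trained SVM assigns a feature vector $x$ the score
\[
f(x) = \sum_{i} \alpha_i y_i K(x_i, x) + b,
\]
where the $x_i$ are training examples with class labels $y_i = \pm 1$, $K$ is a kernel function measuring similarity between examples, and the coefficients $\alpha_i$ and offset $b$ are learned from the training data so as to separate the two classes with maximum margin. The sign of $f(x)$ gives the predicted class, and the magnitude of $f(x)$ can be read as a confidence score.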
An SVM essentially provides a scoring function which is dependent on training data rather than being manually constructed. Koike \& Takagi~\cite{koike2004} trained an SVM classifier to perform partner-independent prediction of interfaces. They represented a residue by its profile, a vector of relative abundances of each amino acid type among homologous proteins at that location. They experimented with different feature representations to make predictions at a particular residue, finding that incorporating profiles from sequential or spatially neighboring residues improves performance, as does incorporating accessible surface area and accounting for the relative interface size. Bradford \& Westhead~\cite{bradford2004} also performed partner-independent prediction with an SVM classifier, but instead made predictions for surface patches rather than individual residues. Zhou \& Shan~\cite{zhou2001} were among the first to train a neural network for partner-independent prediction. They incorporated profile and solvent exposure of residues and their neighbors to make predictions at the residue level. Their method uses residue profiles. In a follow up paper, Chen \& Zhou~\cite{chen2005} used an ensemble of neural networks to make a consensus prediction concerning a residue of interest. %TODO: talk about proto-convolution of chen2005? Various statistical approaches to interface prediction have also been proposed, many of which attempt to model the interdependence between different residues and between residue features. Bradford, Needham, Bulpitt, \& Westhead~\cite{bradford2006} compared a naive Bayes approach to a Bayesian network, which accounts for observed correlations between features, and found that both perform equivalently when predicting interface patches. They also found that these methods perform well even when some data are missing, particularly when conservation scores can't be determined due the absence of homologs. Friedrich, Pils, Dandekar, Schultz, \& M{\"u}ller~\cite{friedrich2006} adapted a hidden Markov model (HMM) originally used for homology detection~\cite{eddy1998} in order to detect interacting residues. The advantage of an HMM is the ability to jointly model all residues in a sequence at once. Li, Lin, Wang, \& Liu~\cite{li2007} generalized this joint modeling to an undirected graphical model using conditional random fields (CRFs) which performs comparably to other data based approaches. Early machine learning and statistical methods for interface prediction provide predictions at the individual residue or patch level, in contrast to docking methods which generate global solutions for the complex. These methods also typically incorporate both sequence and structural information in order to make partner-independent predictions. However, in a 2007 review paper, Zhou \& Qin\cite{zhou2007} identified the need for partner-specific methods in order to improve prediction specificity. \section{Partner Specific Methods} Whereas early partner-specific interface prediction methods are based on sequence co-evolution or derived from docking solutions, recent approaches have also included machine learning based methods. Notably, two such methods have incorporated the same types of features as the partner-independent machine learning methods, but have instead considered pairs of residues from separate proteins when making predictions. 
In Prediciton of Protein-protein Interacting Position Pairs (PPiPP), Ahmad \& Mizuguchi~\cite{ahmad2011} used a neural network which only uses sequence based features. They experimented with two types of sequence based features, a sparse encoding and a position specific scoring matrix (PSSM) encoding. %TODO: describe PSSM? refer to appendix? The sparse encoding is a one-hot binary array of length 20 indicating the amino acid type. The PSSM encoding instead represents each amino acid type by its log-odds frequency in iterative multiple-sequence-alignment results. Using these features, they also experimented with different sized sequence-windows, where a residue of interest is represented by the concatenation of feature vectors of all residues inside a sequence window. Both the feature representations and the sequence windows are in keeping with prior work in machine learning based partner-independent predictors. Residues whose windows extended past the end of the bounds of the sequence were excluded from the training set. The data used are from version 3.0 of the Docking Benchmark Dataset~\cite{hwang2008}, which includes 124 unbound and bound structures of both proteins. A pair of residues, one from each protein was considered positive (part of the interface) if they are within 6\AA~of each other and negative otherwise. Training examples were created by concatenating the feature representation of each residue in a pair together. Due to the inherent asymmetry of this concatenation, two examples were produced for each pair by concatenating the representations in both orders (AB and BA). Predictions for pairs were made by taking the average prediction of the two orderings. There are significantly fewer positive than negative examples, so negative examples were sampled to prevent extreme class imbalance. Specifically, either 2\% or 1000 negative examples were randomly selected, whichever was smaller. The model consists of an ensemble of 24 neural networks, each using different window sizes for the sparse and PSSM encodings. The predictions from each of these models are averaged to produce a final prediction for a residue pair. Networks were evaluated in a leave-one-out fashion (train on all but one complex and test performance on the omitted complex). Performance was measured using area under the receiver operating characteristic curve (AUC) for each left out complex and averaged. The ensemble achieved an AUC of 72.9\% compared to 67.9\% for a single neural network with windows of size 7 for both encodings. The authors compared this to an analogously constructed neural network ensemble which performs partner-independent prediction. Examples consist of a single residue which is positive if it is part of an interface and negative otherwise. Partner independent predictions were naively converted to partner-specific predictions by averaging the scores of each residue in the pair, yielding an AUC of 71.0\%, worse than the partner-specific predictor. Conversely, partner-specific predictions were converted to partner-independent predictions by taking the max over all potential neighbors. This yielded an AUC of 66.1\% for the partner-independent prediction problem, better than the 63.8\% AUC of the partner-independent model. Thus, the partner-specific model outperformed the partner-independent model on both partner-specific and partner-independent predictions. In PAIRPred, Minhas, Geiss, \& Ben-Hur~\cite{minhas2014} incorporated custom symmetric kernels into an SVM formulation~\cite{minhas2014}. 
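Although the precise kernel definitions are given in the original paper, the general idea can be made concrete with a standard symmetric construction (shown here purely for illustration; it is not necessarily the exact form used in PAIRPred). Given a kernel $k$ that compares two individual residues, a pairwise kernel over residue pairs can be defined as
\[
K\big((a, b), (c, d)\big) = k(a, c)\, k(b, d) + k(a, d)\, k(b, c),
\]
which is invariant to swapping the order within either pair, so the resulting classifier scores a residue pair identically regardless of an arbitrary AB versus BA ordering.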
In addition, this method includes structural information for each residue as well. The structure based features consisted of the relative accessible surface area (rASA), residue depth, half sphere amino acid composition, and protrusion index. The sequence based features included the same PSSM encoding as PPiPP, a position frequency scoring matrix (PSFM) encoding (like the PSSM encoding but with raw frequencies instead of log-odd frequencies), and predicted rASA (prASA). The authors used complexes from DBD version 3.0~\cite{hwang2008} for comparison to PPiPP. They also utilized the updated DBD version 4.0~\cite{hwang2010}, which is a superset of the complexes in version 3.0, and complexes from the CAPRI~\cite{janin2013} experiment. The authors constructed specialized symmetric pairwise kernels which compute a similarity between any two pairs of residues, independent of the ordering of each pair. Each pairwise kernel is constructed by taking a symmetric combination of kernels for individual residues, where residue kernels were themselves sums of radial basis function kernels for each feature type. %TODO: explain this better and write out some math? Several individual pairwise kernels are summed and the result is normalized to produce the final pairwise kernel. The authors also investigated a postprocessing step where pair scores are smoothed based on the scores in a neighborhood around each residue. Cross validation was used to tune the soft margin parameter C and the residue kernels and pairwise kernels were optimized in similar fashion. Following the same leave-one-out procedure as Ahmad \& Mizuguchi, PAIRPred achieved an AUC of 87.3\% before any postprocessing, and 88.7\% after post processing. When using only sequence based features, PAIRPred achieved an AUC of 80.9\%, which demonstrates a significant improvement over PPiPP even in the absence of structural features. Partner-independent prediction was performed by taking the maximum score across all potential neighbors, and achieved an AUC of 70.8\% and 77.0\% using only sequence features and all features respectively, which also outperforms PPiPP's best partner-independent performance. Minhas et al.~\cite{minhas2014} note that the extreme class imbalance inherent to the partner-specific classification problem means AUC is not as easy to interpret. Therefore they also calculate the rank of the first positive prediction (RFPP) at a given percentile, where $\textsc{RFPP}(p) = q$ means that $p$ of the complexes in the test set have at least one true positive pair among the top $q$ predictions. Low values of $q$ corresponding to high values of $p$ indicate better classifier performance because a higher percentage of complexes have a true positive near the top predictions. The authors argue that this is more relevant to a biologist because it indicates the trustworthiness of the top few predictions of the classifier. PAIRPred outperforms PPiPP for values of $p$ equal to 10, 25, 50, and 75, but is worse at the 90\% level. The authors conclude by noting that there is much room for improvement in partner-specific interface prediction, particularly for complexes with a high degree of conformational change. They propose adding new features to PAIRPred which capture shape complementarity between binding interfaces, co-evolution, and protein flexibility. 
However, PAIRPred's improvement over PPiPP suggests that not only is the choice of features important to classifier performance, but also their representation and the construction of the predictor itself. This supports the investigations of this thesis into the graph representation of proteins and new convolution methods which operate on graphs. Chapter \ref{chap:neuralnetworks} gives a primer on the convolutional neural networks which helped inspire these new methods.
{ "alphanum_fraction": 0.8258908435, "avg_line_length": 122.3172413793, "ext": "tex", "hexsha": "9d28d73f182f6ae27b466af31bd3bf51b2b0c112", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_forks_repo_licenses": [ "Unlicense" ], "max_forks_repo_name": "fouticus/msthesis", "max_forks_repo_path": "relatedwork.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Unlicense" ], "max_issues_repo_name": "fouticus/msthesis", "max_issues_repo_path": "relatedwork.tex", "max_line_length": 316, "max_stars_count": null, "max_stars_repo_head_hexsha": "50362de8bb633e7cc3737936b5c4ba2920423e19", "max_stars_repo_licenses": [ "Unlicense" ], "max_stars_repo_name": "fouticus/msthesis", "max_stars_repo_path": "relatedwork.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3500, "size": 17736 }
% This LaTeX file is written by Zhiyang Ong to record notes for his digital biology class regarding the basics of software engineering and the UNIX environment. % The MIT License (MIT) % Copyright (c) <2014> <Zhiyang Ong> % Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: % The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. % THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. % Email address: echo "cukj -wb- 23wU4X5M589 TROJANS cqkH wiuz2y 0f Mw Stanford" | awk '{ sub("23wU4X5M589","F.d_c_b. ") sub("Stanford","d0mA1n"); print $5, $2, $8; for (i=1; i<=1; i++) print "6\b"; print $9, $7, $6 }' | sed y/kqcbuHwM62z/gnotrzadqmC/ | tr 'q' ' ' | tr -d [:cntrl:] | tr -d 'ir' | tr y "\n" %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{Software Engineering Basics} \label{chp:SWEngrBasics} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{UNIX Basics} \label{sec:UNIXBasics} My TA, Ricardo, suggested using Guake (\url{http://en.wikipedia.org/wiki/Guake}) as a substitute for the common/normal {\tt Terminal} application. \\ We will be using the {\tt Terminal} to do a lot of our work in this class. Prof. Rodolfo Aramayo briefly talked about the history of UNIX and its derivatives, such as {\it Linux}, {\it BSD}, {\it Oracle/SUN Solaris}, and {\it Mac OS X}. UNIX was started at {\it Bell Labs}. He also talked about the UNIX philosophy. \\ We shall operate in the UNIX environment via text files. Everything (including directories) is a file in UNIX. Some files can be read visually (i.e., text files), while others (i.e., binary files) cannot. \\ The kernel is the heart of the operating system. The UNIX shell (accessed via applications, such as the {\tt Terminal}) is an application that allows users to interact with the kernel indirectly. \\ Anatomy of UNIX commands: {\tt command\_name [options] [arguments]}. Double dashes for options of UNIX commands cannot be combined. However, for options for single dash lines, they can be combined. \\ The ``{\tt man}'' page is the UNIX manual. To find documentation of a UNIX command, use the {\tt man} command. \\ UNIX commands to learn: \vspace{-0.3cm} \begin{enumerate} \itemsep -4pt \item alias: ``alias ll'' \item apropos: ``apropos copy'' would search the UNIX ``man'' pages for the keyword ``copy''. \item cat: conCATenate \item cd \item chmod: Change mode \item clear \item cp: cp --version \item dir -l \item date \item du: ``du -hd 0 .'' list the size of the directory in KB, and ``du -hd 1 .'' list the size of the directory and its files. ``df -h'' indicates the size of the directory and its contents. 
\item echo: ``echo -e'' refers to {\tt echo} enhanced, which redirects the output in the UNIX pipeline to a file. {\it echo -e `` `date`'' $>$ tata1}. ``echo \$PATH'' \item file \item history \item info cp \item less \item ls [-al] \item more \item mkdir \item mv \item nohup: Used in conjunction as a prefix command to allow the script to run, even when the process has already terminated. \item pwd \item rm \item rmdir \item rsync: \vspace{-0.3cm} \begin{enumerate} \itemsep -2pt \item An example of how the command can be used is: ``rsync -v username@host:$\sim$/path/to/file .''. This commands copies the file at the specified path to the current working directory. The ``-v'' option runs the UNIX command in verbose mode. \item Its ``-vr'' option runs the command recursively in verbose mode. \item Prof. Aramayo mentioned something about an option that transfers files with automatic compression and decompression. Is this option ``-a'', or something else? Use the ``tar'' command to compress/uncompress files. \end{enumerate} \item script: \vspace{-0.3cm} \begin{enumerate} \itemsep -2pt \item Use this UNIX command to keep a log of the terminal session. That is, use it to record the terminal session. Some {\tt Terminal} applications allow people to save the terminal session as a text file. \item Prof. Aramayo suggested redirecting the standard output stream to a file to keep a record of the commands that are executed and their standard output. However, this requires appending each UNIX command with the redirection symbols. \item {\tt unix-command} $>$ zlog \&. This creates a new file, if the file {\tt zlog} does not exist, and redirects the standard output to the logfile ({\tt zlog}). Using the ampersand symbol allows the command to be run in the background while allowing me to continue using the {\tt Terminal} application. \item {\tt unix-command} $>>$ zlog \&. This creates a new file, if the file {\tt zlog} does not exist, and redirects the standard output to the logfile ({\tt zlog}). However, if the file {\tt zlog} already exist, the redirected standard output would be appended to the end of the logfile ({\tt zlog}). \item Put {\tt \&$>$ slog \&} at the end of each command??? \end{enumerate} \item touch \item tree \item type: type zrio \item whatis \item which: which blastn \end{enumerate} Use ``tab'' to autocomplete filenames and directory names. Avoid using spaces in filenames and directories to keep file and directory access simple. \\ Directory access: The ``.'' file is the current working directory, and the ``..'' is the parent directory. A directory can also be called a folder. By using the {\tt cd} command, I can return to my home directory. \\ You cannot undo operations in UNIX. Hence, save and backup files before performing removal operations in UNIX. There is also no ``trash can'' or ``recycle bin''. \\ Microsoft Excel has a maximum limit of 65,000 rows in the spreadsheet. Hence, this limits the amount of information that I can process with Microsoft Excel. To process more data, such as GBs or TBs of data, I need other software applications or develop my own computer program. \\ Symbolic links in UNIX are like shortcuts or aliases in Windows. An example of creating a symbolic link is: ``ln -s ../01/test01''. \\ The human genome has been decoded into a file about 7 TB. \\ The colon ``:'' serves as a dummy placeholder to remove the contents of a file; ``: $>$ filename'' \\ Standard output stream, {\tt stdout}, is described along with exit signals of UNIX processes. 
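To make the redirection notes above and the standard error discussion below concrete, here are a few sample command lines (the command name {\tt myprog} is only a placeholder, not a program from the class):

\begin{verbatim}
myprog > zlog &          # overwrite zlog with stdout, run in background
myprog >> zlog &         # append stdout to zlog
myprog > zlog 2> zerr &  # stdout to zlog, stderr to zerr
myprog &> zlog &         # stdout and stderr to the same file (bash)
: > zlog                 # empty the file zlog
\end{verbatim}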
Standard error output stream will write to the standard error output file. UNIX redirection for standard output and error streams are described. \\ Use {\it tree} to show contents of a directory as a tree. \\ Discussed UNIX path redirection, pipelining of UNIX commands, and separate execution of UNIX commands (using the semicolon ``;'' symbol). \\ Covered special/escape characters to use tabs and newlines to print information. \\ Covered information on how to go to the ``home'' directory. ``$\sim$'' refers to the home directory. \\ Covered absolute paths and relative paths in UNIX. \\ Detailed explanation of the ``ls'' command. It indicates when the file has been created/modified. It also indicates the size of the file in bytes. It also indicates the username (``db0015'') and the group (``student'') that I belong to. Permissions to access files are determined by the group that I belong to. File permissions are indicated for read, write, and execute. They are set for individual users, groups, and everybody with access to the computer network/system. File types are indicated for directories (``d''), regular/normal files (``-''), and symbolic links (``l''). \\ Most files have the file permissions set as 755. \\ Discussed how to create aliases in UNIX. \\ Configure my UNIX environment with the ``.bashrc'' (or ``.bash\_profile'') file. \\ The {\tt .profile} is used by {\tt shell}, and is equivalent to {\tt .bashrc} for Bash. \\ GUI-based {\it Galaxy} is used for this class. \\ Regarding file transfer, avoid unencrypted file transfer that can be accessed by others. People can listen or snoop on the packet transmission of files, and find out what you are doing. An aside: Email service providers, such as Google, transmit emails between their servers without encryption. \\ There are many applications for downloading files from the Internet. The applications {\tt curl} and {\it wget} are more common for downloading files. \\ The UNIX command {\tt ifconfig} gives you information about computer networking for your computer or computing account (if you are connected to a remote computer). \\ Further references in my research database about UNIX include the following: \cite{Apple2011,Kernighan1984,Kerrisk2010,Mitchell2001,Petersen2008,Raymond2004,Raymond2004a,Rochkind2004,Rosen2007a,Stallings2005,Stevens2013,Storimer2012,VibrantPublishers2010a,VibrantPublishers2011b,VibrantPublishers2011c,VibrantPublishers2011h}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{SSH Basics} \label{ssec:SSHBasics} {\tt SSH} is an application that allows me to connect securely to another computer that is connected to the same computer network, or to the Internet. It uses encryption for network connection, including file transfer between computers in the same network, or between different networks. Its various levels of encryption correspond to various levels of simplicity in the encryption. \\ The actual/real {\tt SSH} application requires paid subscription. However, its open source variant is FREE!!! \\ SSH key generation creates a pair of private and public keys. Keep the private key private to myself (only). Allow others to have the public key, so that a valid authentication of myself can be made. \\ {\tt rsync} is an application for file copying and synchronization between different computer accounts. It does not copy all files in your directory, but copy modifications to existing files and copies only new files. It transfers files in compressed format. 
That is, it transfer files between different computers by synchronizing them via delta modifications. This is because copying entire directories of huge files take a lot of time. Hence, use {\tt rsync} to carry out file transfer to save time. {\tt rsync} uses the public key of SSH (from SSH key generation) to connect the local machine to the remote machine. For example, I can create the authentication file (SSH public key) and transfer the public key to the remote machine. \\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Shell Scripting Basics} \label{sssec:ShellScriptingBasics} To review the basics of UNIX shell script, see my internship report (and associated material) for my internship at the Institute of Microelectronics, Singapore \cite{Ong2004a}. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsubsection{Related Issues} \label{sssec:ShellScriptingBasics:RelatedIssues} Download data to group directory on ``Geiger'', so that I do not corrupt the local machine. \\ Use ``tree'' to find out the directory structure of the specified directory. \\ For class on June 17, 2014, clone the repository from Prof. Aramayo, \url{https://geiger.tamu.edu/gitlab/raramayo/digitalbiology_project_summer2014}. Work on this directory to practise the UNIX sub-lesson for today. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \subsection{Version Control (Or Revision Control)} \label{ssec:Version Control} Revision control is also known as version control or source control. It is an aspect of software configuration management (SCM). For this class, {\tt Git} \cite{Chacon2009,Swicegood2010,Humble2011,VibrantPublishers2012b,Fox2013} will be our revision control tool. \\ {\tt git status} tells me the status of my {\tt Git} repository. {\tt git diff} tells me the difference between different commits/stages of my repository. Watch videos about {\tt Git} to learn more about {\tt Git}, via hyperlinks provided on the class Wiki. Also, read ``{\tt Git} in the Trenches.'' \\ While adding files to my {\tt Git} repository, use the {\tt Markdown} language to provide some structure to the presentation of information for my project repository. Save files in the {\tt Markdown} language as {\tt filename.md}. {\tt Markdown} is a document markup language, just like \LaTeX, HTML, and XML. \\ %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Other Computing Issues} \label{sec:OtherComputingIssues} % genomics01.bio.tamu.edu % usr: dbiology % passwd (integralmente) Launch system monitor to track how much CPU time are processes taking.
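As a companion to the version control notes above, a minimal {\tt Git} session might look as follows (the file name and commit message are only examples):

\begin{verbatim}
git status                     # which files have changed?
git add README.md              # stage a file
git commit -m "Update notes"   # record the change locally
git diff HEAD~1                # compare with the previous commit
git push                       # upload the commits to the remote
\end{verbatim}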
{ "alphanum_fraction": 0.7340441621, "avg_line_length": 54.6446280992, "ext": "tex", "hexsha": "e12bd39206265d71767ffe1cce04a8913317488b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5206958bb516d321edb7178d29a841e53f181e74", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eda-ricercatore/Calabria-Digital-Bio", "max_forks_repo_path": "notes/sw_engr_env/sw_engr_env.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "5206958bb516d321edb7178d29a841e53f181e74", "max_issues_repo_issues_event_max_datetime": "2021-06-25T15:16:14.000Z", "max_issues_repo_issues_event_min_datetime": "2018-10-19T20:55:14.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eda-globetrotter/Calabria-Digital-Bio", "max_issues_repo_path": "notes/sw_engr_env/sw_engr_env.tex", "max_line_length": 739, "max_stars_count": 1, "max_stars_repo_head_hexsha": "5206958bb516d321edb7178d29a841e53f181e74", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eda-ricercatore/Calabria-Digital-Bio", "max_stars_repo_path": "notes/sw_engr_env/sw_engr_env.tex", "max_stars_repo_stars_event_max_datetime": "2015-12-02T00:38:35.000Z", "max_stars_repo_stars_event_min_datetime": "2015-12-02T00:38:35.000Z", "num_tokens": 3159, "size": 13224 }
\subsection*{Name} series -- generate an additive series of numbers \subsection*{Usage} {\bf series} start end [stepsize] \subsection*{Description} {\bf series} prints the real numbers from {\bf start} to {\bf end}, one per line. {\bf series} begins with {\bf start} to which {\bf stepsize} is repeatedly added or subtracted, as appropriate, to approach, possibly meet, but not pass {\bf end}. If all arguments are integers, only integers are produced in the output. The {\bf stepsize} must be nonzero; if it is not specified, it is assumed to be of unit size (1). In all other cases, {\bf series} prints an appropriate error message. \subsection*{Example} To count from 1 to 100: \begin{verbatim} series 1 100 \end{verbatim} To do the same, but backwards: \begin{verbatim} series 100 1 \end{verbatim} \subsection*{Limitations} The reported number of significant digits is limited. If the ratio of the series range to the {\bf stepsize} is too large, several numbers in a row will be equal. The maximum length of a series is limited to the size of the maximum long integer that can be represented on the machine in use. Exceeding this value has undefined results. \subsection*{Author} Gary Perlman
{ "alphanum_fraction": 0.7228820269, "avg_line_length": 26.3125, "ext": "tex", "hexsha": "63904756e647939e04edc5169d34bca0df5b9688", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "chrisinmtown/defect-detect-expt", "max_forks_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "chrisinmtown/defect-detect-expt", "max_issues_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_line_length": 70, "max_stars_count": null, "max_stars_repo_head_hexsha": "0416e211c7e97c66275e43c339b67450805619f1", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "chrisinmtown/defect-detect-expt", "max_stars_repo_path": "t-series/ts2-spec/ts2-e.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 310, "size": 1263 }
% compile 3 times: latex tex4ht-info-javahelp % or htlatex tex4ht-info-javahelp "html,sections+" % or ht latex tex4ht-info % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % tex4ht-info-javahelp.tex % % Copyright (C) 2006-- Eitan M. Gurari % % % % This work may be distributed and/or modified under the % % conditions of the LaTeX Project Public License, either % % version 1.3 of this license or (at your option) any % % later version. The latest version of this license is % % in % % http://www.latex-project.org/lppl.txt % % and version 1.3 or later is part of all distributions % % of LaTeX version 2003/12/01 or later. % % % % This work has the LPPL maintenance status "maintained".% % % % This Current Maintainer of this work % % is Eitan M. Gurari. % % % % If you modify this file your changing the signature % % in \message{(signature)} below will be appreciated. % % % % [email protected] % % http://www.cse.ohio-state.edu/~gurari % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \message{(<signature>)} \ifx \HTML\UnDef \def\HTML{infojh} \def\CONFIG{\jobname} \def\MAKETITLE{\author{Eitan M. Gurari}} \def\next{\input mktex4ht.4ht \endinput} \expandafter\next \fi \input{common-info} \input{common} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{INFO} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \<infojh\><<< %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % infojh.4ht |version % % Copyright (C) |CopyYear.2000. Eitan M. Gurari % % % % This program can redistributed and/or modified under % % the terms of the LaTeX Project Public License % % Distributed from CTAN archives in directory % % macros/latex/base/lppl.txt; either version 1 of the % % License, or (at your option) any later version. % % % % If you modify this program your changing its signature % % with a directive of the following form will be % % appreciated. % % \message{signature} % % % % [email protected] % % http://www.cse.ohio-state.edu/~gurari % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \immediate\write-1{version |version} { \catcode`\@=0 \catcode`\\=11 @relax @gdef@infoIVht[#1]#2//{% @ifnum #1>1 @def@infoIVht[##1]##2//{% @ifnum ##1>1 @ifnum ##1<#1 @bgroup @no:catcodes0{255}{11}% @no:catcodes{91}{91}{12}% [ @no:catcodes{47}{47}{12}% / @newlinechar13 % @long@def@infoIVht####1\ifx\infoIVht####2infoIVht[####3//{% @def@infoIVht{******************************************}% @immediate@write-1{@infoIVht}% @immediate@write-1{****** @csname :[email protected]}% @immediate@write-1{@infoIVht}% @bgroup @def@infoIVht{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*}% @let~=@space @immediate@write-1{@infoIVht}% @egroup @immediate@write-1{####1}% @bgroup @def@infoIVht{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~*}% @let~=@space @immediate@write-1{@infoIVht}% @egroup @immediate@write-1{@infoIVht}% @egroup}% @expandafter@expandafter@expandafter@infoIVht @fi@fi }% @fi } } >>> \chapter{The Code} \section{tex4ht} \<configure infojh tex4ht\><<< \Configure{mapIdTarget}....................2 #1 target #2 definitions Given: \sectionType, \sectionId, \sectionName Examples: \Configure{mapIdTarget} {\sectionName} {} \Configure{mapIdTarget} {\spacelessName} {\immediate\openout15=\jobname .tmp \immediate\write15{\def\string\spacelessName{\sectionName}}% \immediate\closeout15 \catcode`\ =9 \input \jobname .tmp \catcode`\ =10 } >>> \endinput
{ "alphanum_fraction": 0.4200081833, "avg_line_length": 33.7103448276, "ext": "tex", "hexsha": "dee37156a4befeb1d7830bdb98f8a4959138b27c", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a256a7136d6638e90f07799892c005f2eb20730a", "max_forks_repo_licenses": [ "LPPL-1.3c" ], "max_forks_repo_name": "dgalcius/tex4ht-sync", "max_forks_repo_path": "lit/tex4ht-info-javahelp.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a256a7136d6638e90f07799892c005f2eb20730a", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "LPPL-1.3c" ], "max_issues_repo_name": "dgalcius/tex4ht-sync", "max_issues_repo_path": "lit/tex4ht-info-javahelp.tex", "max_line_length": 74, "max_stars_count": null, "max_stars_repo_head_hexsha": "a256a7136d6638e90f07799892c005f2eb20730a", "max_stars_repo_licenses": [ "LPPL-1.3c" ], "max_stars_repo_name": "dgalcius/tex4ht-sync", "max_stars_repo_path": "lit/tex4ht-info-javahelp.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1158, "size": 4888 }
\documentclass[12pt,oneside,a4paper]{article}
\usepackage{graphicx}
\usepackage{hyperref}
\begin{document}
% title (fold)
\begin{center}
{\LARGE \bfseries DIGIEZ --- Repair a corrupted video \\[0.3cm] }
{\large \textsc{CentraleSupélec} --- Max Spahn --- P2018\\[0.7cm] }
\end{center}
% title (end)

\section{Idea}
\label{sec:Idea}
Given the task of reconstructing the video, my first step was to split the video into individual frames (images) so that I can treat them individually. The main idea of my program is to compare individual pixels of all the pictures created that way. It is up to the user how many pixels are compared. Increasing the number of pixels increases the accuracy of the program but causes a longer execution time. Once the images are put in the right order, you need to rebuild the video.

\section{Technical details and problems}
\label{sec:Technical details and problems}
The programming language I am the most familiar with is C++, but I couldn't find a good library for my purpose. That was the reason to choose \textsc{ffmpeg} as an external program that needed to be combined with the C++ code in a bash file. On the other hand, the bash file gave me many opportunities to automate the program. For example, it is somewhat difficult to compute the number of images created by \textsc{ffmpeg}, but a simple line in the bash file counts the number of images. The number is then used for the iteration in the C++ program.
The images are stored in the folder "images/" and they are named \textit{outxxx.bmp}, where xxx is a placeholder for the number of a particular image. I decided to use the bitmap format because it is one of the easiest to work with. The drawback is that there is no compression, so storage might become problematic if the video becomes too long.
Then the C++ program is executed with two command-line arguments: the first is the number of pixels in each picture direction, the second is the number of images previously counted.

\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{explication_image.png}
\end{center}
\caption{Explanation of how two images are compared, for division factor div=4}
\label{fig:image}
\end{figure}

An array of changes is created to keep track of the order of the images. When every picture is put into the right position, the image files are renamed according to this array. They are now called \textit{newxxx.bmp}. \textsc{ffmpeg} is used a second time to rebuild the video, now in order.

\section{Problems}
\label{sec:Problems}
The major problem might be compatibility with operating systems other than Linux, which I used for the program. On top of that, you need to preinstall \textsc{ffmpeg} and the C++ library for handling bitmaps. I put all the files needed to use \textsc{EasyBMP} in my GitHub repository, \url{https://github.com/maxspahn/digiez}, together with the other files, and \textsc{ffmpeg} can be downloaded from \url{https://ffmpeg.org/}. In case of problems I would be happy to talk about my solution.

\end{document}
{ "alphanum_fraction": 0.7719880518, "avg_line_length": 51.9482758621, "ext": "tex", "hexsha": "3eaf58ac165ea6e0b613f7223e3fd488f90b13a4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "9e37becc08450d112b9f394c03b6d47b36684f6b", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "maxspahn/digiez", "max_forks_repo_path": "report/digiez.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9e37becc08450d112b9f394c03b6d47b36684f6b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "maxspahn/digiez", "max_issues_repo_path": "report/digiez.tex", "max_line_length": 144, "max_stars_count": null, "max_stars_repo_head_hexsha": "9e37becc08450d112b9f394c03b6d47b36684f6b", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maxspahn/digiez", "max_stars_repo_path": "report/digiez.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 758, "size": 3013 }
\section{Results} \label{sec:results} \subsection{\fred as a Teaching Aid} \fred (v0.0.3)\cite{stephen_thompson_2020_3946090} was used at our Medical Summer School in 2020 with a cohort of 5 students. Informal feedback indicated that it had improved their understanding of fiducial based registration. \fred also forms part of the \gls{BARD} which was a finalist at the MICCAI 2020 educational challenge \footnote{\href{https://miccai-sb.github.io/materials.html}{https://miccai-sb.github.io/materials.html}}. \subsection{Extension to Anisotropic Errors} We simulated an anisotropic independent \gls{FLE}, with \gls{FLE} in the x direction being 3 times that in the y and z, as described in Section \ref{sec:anis_method}. Errors were scaled so that the expected absolute value of the \gls{FLE} was the same as for the isotropic case. \fred was then used to perform at least 200 simulated registrations, the results of which are shown in Fig. \ref{fig:anis_error}. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{images/anisitropic_error.eps} \caption{\label{fig:anis_error}Results of 213 registrations using an anisotropic model of {FLE}} \end{center} \end{figure} It was apparent when performing the registrations that the majority of target registration error was in the x direction. However this is not communicated in the statistics of Figure \ref{fig:anis_error}. It would be an interesting extension exercise to use \fred to explore ways of communicating anisotropic errors during treatment. \subsection{Addition of Systematic Errors} We added a systematic \gls{FLE} as an isotropic uniform random variable, in the range -0.5 to 0.5, as described in Section \ref{sec:sys_method}. This error will be applied to all fiducial markers for a given registration. We performed at least 200 simulated registrations using \fredns, the results of which are shown in Fig. \ref{fig:sys_error}. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{images/systematic_error.eps} \caption{\label{fig:sys_error}Results of 201 registrations with systematic error.} \end{center} \end{figure} It is noticeable that average \gls{TRE}s are higher than in cases where there is no systematic error, while \gls{FRE} remains similar. This is as expected as \gls{FRE} will not account for systematic errors. This is a useful demonstration of this effect, though it might be more instructive to implement systematic errors in the game based method, see Section \ref{sec:game_method}, to investigate the likely clinical impact of these systematic errors. \subsection{Statistical Significance} As discussed in Section \ref{sec:methods} it is useful to perform tests of statistical significance on \fredns's registration results. We used the function \href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html}{stats.linregress} from {SciPy}\cite{2020SciPy-NMeth} to perform a Wald Test against the null hypothesis that the slope is zero. For the results shown in Figures \ref{fig:correlation}, \ref{fig:anis_error}, and \ref{fig:sys_error} there was no significant relationship between actual \gls{TRE} and either actual or expected \gls{FRE}. There was a significant relationship between actual \gls{TRE} and the expected \gls{TRE}. All of these results are as expected. Tests against the expected value of \gls{FLE} and the number of fiducial markers are more problematic, as they are not continuous variables, which should be apparent when looking at the results charts. 
The reason for this is obvious for the number of fiducial markers. The expected value of \gls{FLE} is clustered into groups as this is only set once for a given target; however, the registration for this target will be repeated each time a new fiducial marker is added beyond the minimum of 3. Hence, if the user adds a total of 10 fiducial markers, this will create 8 registration results, all with different \gls{TRE} and \gls{FRE}, but with a single value of expected \gls{FLE}.

\subsection{\fred for Research}
The results of the game based study described in Section \ref{sec:game_method} are shown in Figure \ref{fig:usability}. As expected, scores are highest when the actual \gls{TRE} is known. Interestingly, it appears that when told only the expected value of the \gls{FLE}, the students tended to under-treat the target more, resulting in lower overall scores.

\begin{figure}
\begin{center}
\includegraphics[width=0.5\linewidth]{usability.eps}
\caption{\label{fig:usability}Average scores and treatment failure rates for the game based study.}
\end{center}
\end{figure}

The statistical significance of the results was tested using a two-sided, unpaired t-test implemented in {SciPy}'s\cite{2020SciPy-NMeth} \href{https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html}{stats.ttest{\textunderscore}ind} function. At a significance threshold of 0.05, none of the results were statistically significant. Currently there are only 100 data points (20 for each category), so this is not surprising.
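For reference, the statistic computed by \texttt{stats.ttest\_ind} with its default settings is the standard pooled-variance form (recorded here simply as a reminder of the test applied, not as an additional result):
\[
t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},
\qquad
s_p^2 = \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2},
\]
with $n_1 = n_2 = 20$ for each pairwise comparison between conditions.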
{ "alphanum_fraction": 0.7829912023, "avg_line_length": 54.414893617, "ext": "tex", "hexsha": "2c8d46a2a12fc94ccd866af5da6ebf11990fc736", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bc72551650facf1adf0e5954810f6ac2aec81080", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "SciKit-Surgery/scikit-surgeryfred-paper", "max_forks_repo_path": "results.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bc72551650facf1adf0e5954810f6ac2aec81080", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "SciKit-Surgery/scikit-surgeryfred-paper", "max_issues_repo_path": "results.tex", "max_line_length": 180, "max_stars_count": null, "max_stars_repo_head_hexsha": "bc72551650facf1adf0e5954810f6ac2aec81080", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "SciKit-Surgery/scikit-surgeryfred-paper", "max_stars_repo_path": "results.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1317, "size": 5115 }
\documentclass[10pt,landscape,letterpaper]{article} \usepackage{multicol} \usepackage{wrapfig} \usepackage[colorlinks]{hyperref} \usepackage{microtype} \usepackage{calc} \usepackage{ifthen} \usepackage[margin=16pt]{geometry} \usepackage{amsmath, amssymb, amsfonts, amsthm} \usepackage{subcaption} \usepackage{enumitem} \usepackage{mathrsfs} \usepackage{textcomp} \usepackage{verbatim} \usepackage{graphicx} \renewcommand{\baselinestretch}{.85} \pagestyle{empty} \usepackage{listings} \usepackage[utf8]{inputenc} \def\version{0.1} \newcommand{\header}{ \bgroup \centering \scalebox{0.925}[1.0]{\LARGE\scshape Typesetting in \LaTeXe} \par\vskip 6pt \hrule \par\vskip 6pt \parbox{.25\columnwidth}{\sc Version \,\version\hfill}\parbox{.5\columnwidth}{\hfil\centering\large\bfseries CHEAT SHEET\hfil}\parbox{.25\columnwidth}{\hfill\sc Jacob House} \par\vskip 6pt \hrule\par\vskip 10pt \egroup } \makeatletter \def\laTeX{% L\kern-.36em{\sbox\z@ T\vbox to\ht\z@{\hbox{\check@mathfonts \fontsize\sf@size\z@\math@fontsfalse\selectfont A}}}\kern-.12em(\kern-.12em\TeX\kern-.12em)% } \lstdefinestyle{macrocode}{ name=macrocode, % language=[LaTeX]TeX, basicstyle=\fontsize{9}{9.2}\selectfont\ttfamily\SetTracking{encoding=*}{-75}\lsstyle, columns=fullflexible, numbers=left, breakatwhitespace=true, numberfirstline=1, firstnumber=auto, numberstyle=\scriptsize, numbersep=5pt, frame=single, breaklines=true, breakindent=12pt, aboveskip=0.5 \baselineskip, belowskip=0.5 \baselineskip, firstnumber=last, prebreak=\mbox{$\hookleftarrow$}, } \lstnewenvironment{macrocode}[1][]{% \lstset{ style=macrocode, #1 } % \csname\@lst @SetFirstNumber\endcsname }{% % \csname \@lst @SaveFirstNumber\endcsname } % Redefine section commands to use less space \renewcommand{\section}{\@startsection{section}{1}{0mm}% {.2ex}% {.2ex}%x {\sffamily\bfseries}} \renewcommand{\section}{\@startsection{section}{1}{0mm}% {-1ex plus -.5ex minus -.2ex}% {0.5ex plus .2ex}%x {\normalfont\large\bfseries}} \renewcommand{\subsection}{\@startsection{subsection}{2}{0mm}% {-1explus -.5ex minus -.2ex}% {0.5ex plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand{\subsubsection}{\@startsection{subsubsection}{3}{0mm}% {-1ex plus -.5ex minus -.2ex}% {1ex plus .2ex}% {\normalfont\small\bfseries}} % Don't print section numbers \setcounter{secnumdepth}{0} \parindent=0.75\parindent \columnsep=36pt \columnseprule=0.4pt \newcommand{\bs}{\symbol{92}} % backslash \newwrite\example@out \newcounter{exacnt} \setcounter{exacnt}{1} \newlength{\savefboxrule} \newlength{\savefboxsep} \newlength{\outdent} \setlength{\outdent}{0cm} %\addtolength{\headwidth}{\outdent} \newenvironment{example}% {\begingroup% Lets Keep the Changes Local \@bsphack \immediate\openout \example@out \jobname.exa \let\do\@makeother\dospecials\catcode`\^^M\active \def\verbatim@processline{% \immediate\write\example@out{\the\verbatim@line}}% \verbatim@start}% {\immediate\closeout\example@out\@esphack\endgroup% % % And here comes my part. 
:- % \stepcounter{exacnt}% \setlength{\parindent}{0pt}% \par%\addvspace{3.0ex plus 0.8ex minus 0.5ex}\vskip -\parskip % Page \lsspageref{exa:\theexacnt} \expandafter\ifx\csname r@exa\theexacnt\endcsname\relax\else %\ifx\pdfoutput\undefined % We're not running pdftex % \ifodd\lsspageref{exa\theexacnt}\hspace*{0pt}\else\hspace*{-\outdent}\fi% %\else %% HyPsd@pageref internal hyperref command v6.69c \ifodd\HyPsd@pageref{exa\theexacnt}\hspace*{0pt}\else\hspace*{-\outdent}\fi% %\fi \fi \makebox[\linewidth][l]{% %\raisebox{-\height}[0pt][\totalheight]{% \begin{minipage}[c]{0.445\linewidth}% \small\lstinputlisting[style=macrocode]{\jobname.exa} \end{minipage}% %}% \hfill\hfill% \setlength{\savefboxrule}{\fboxrule}% \setlength{\fboxrule}{0.1pt}% \setlength{\savefboxsep}{\fboxsep}% % \setlength{\fboxsep}{3mm}% % \raisebox{-\height}[0pt][\totalheight]{% \setlength{\fboxsep}{4pt}% \fbox{% \begin{minipage}{0.48\linewidth}% % \setlength{\fboxrule}{\savefboxrule}% % \setlength{\fboxsep}{0pt}% % \setlength{\fboxrule}{0.5pt}% \setlength{\parskip}{1ex plus 0.4ex minus 0.2ex}% \begin{trivlist}\item\small\input{\jobname.exa} \end{trivlist} \end{minipage} }% % }% }\label{exa\theexacnt}% \newline % \addvspace{3ex plus 0.8ex minus 0.5ex}\vskip -\parskip } \def\R{\ifmmode\mathbb{R}\else$\mathbb{R}$\fi} \makeatother \begin{document} \small \begin{multicols*}{3} \header \section{Paragraphs and Alignment} To tell \laTeX\ that you would like to create a new paragraph, press the return key twice. That is, leave a blank line between the paragraphs in your \verb|.tex| or \verb|.ltx| file. Similarly, one may use the \TeX\ primitive \verb|\par| at the end of a paragraph. This removes the need for a blank line in the source code. For example, \begin{macrocode} The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. \par The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. \end{macrocode} Two consecutive empty lines generate two \verb|\par| tokens. For all practical purposes this is equivalent to one \verb|\par|, because after the first one \TeX\ enters vertical mode, and in vertical mode a \verb|\par| only exercises the page builder, and clears the paragraph shape parameters. By default, \LaTeX\ uses full justification; both the left and right edges of the text are smooth. One can disable full justification in favour of left-aligned text by using the \verb|\raggedright| command. Similarly, default vertical alignment settings attempt to avoid large white spaces at the bottom of pages by stretching page contents. This may be disabled with the \verb|\raggedbottom| command. Additionally, \LaTeX\ provides \verb|center|, \verb|flushleft|, and \verb|flushright| environments which do exactly what one may expect. \subsection{Line Breaks} To manually break a line, one may use the \verb|\newline| command. Note that \verb|\\| is an alias of \verb|\newline|; the two have identical meaning. As we can see, when a line \newline is manually broken, the following line is not indented. Also notice that the broken line was not stretched to align with the paragraph. We can also use \verb|\hfill\break| to create a non-stretching linebreak.
This is contrasted by the \verb|\linebreak| command, which \textit{will} stretch or shrink whitespace between text when \linebreak breaking lines to compensate for the text that has been forced out to the next line. The adjustable white space between pieces of text is called glue, and \TeX\ may find the required stretch intolerable and deny the requested line break. To suppress \TeX's fussiness over line breaking badness, use \verb|\sloppy|. Issuing a \verb|\fussy| returns \TeX\ to its ordinary compulsive self. Do \textit{not}, under any circumstances, use \verb|\newline|, \verb|\\|, \verb|\hfill\break|, or \verb|\linebreak| to insert line breaks between paragraphs. Always insert a blank line or use \verb|\par|. Control sequences \verb|\newpage| and \verb|\pagebreak| behave similarly to their horizontal equivalents; \verb|\newpage| will immediately switch to the next page whereas \verb|\pagebreak| will stretch page contents then break. \section{Boxes and Glue} A box is the \TeX\ term for an invisible container that can hold a visible element, nothing, or other boxes. Glue is the \TeX\ term for an invisible connector that determines the separation between boxes. Each separate visible element contained within a \TeX\ document is contained within a box. A visible element can be a letter, image, geometric shape, etc. \TeX\ builds pages by gluing boxes together according to the default \TeX\ rules, default \LaTeX\ rules, or document commands. In a typical document, letter boxes are glued to other letter boxes to form words, which are then elastically glued to other words to form sentences. Sentences are broken into lines and placed in paragraph boxes. Elastic glue is squeezed or stretched to fully justify lines within paragraph boxes. Paragraph boxes are glued to diagram boxes, and so on. \subsection{Producing Boxes} The \verb|\makebox| control sequence may be used to create a box whose contents will not be broken, so it is often used to prevent hyphenation or to group text that should not be broken across several lines. It takes two optional parameters, width and position: \begin{macrocode} \makebox[width][pos]{text} \end{macrocode} These parameters allow \verb|\makebox| to be used in many ways, for example, \begin{example} \makebox[9ex][s]{Bad text}% \hskip-9ex% \makebox[9ex][s]{X X X X} Text \makebox[1.5\width][r]% {running away} \end{example} The control sequence \verb|\mbox{text}| is the shorthand no-option version of \verb|\makebox|. \subsubsection{Framed Boxes} The command \verb|\framebox| behaves identically to \verb|\makebox| except that it additionally draws a box around its contents. So we have \begin{macrocode} \framebox[width][pos]{text} \fbox{text} \end{macrocode} \subsection{Inserting Vertical and Horizontal Glue} The general form to express a glue is: {\ttfamily <fixed part> plus <stretchable part> minus <shrinkable part>}. Each of these parts can be expressed in any of the \LaTeX\ units (mm, cm, pt, pc, em, etc.). For example {\ttfamily 2cm plus 2mm minus 1mm}. When composing a box which contains glue, \TeX\ first uses its ``natural dimensions'', which is the fixed part (2cm in the above example). If the resulting box is underfull, then \TeX\ expands all glue which has a non-zero stretchable part, up to the amount specified in that glue. In our example, the glue can stretch 2mm at maximum. If the box contains several glues with different stretchability, each one is stretched proportionally to the given stretchability.
If the box is still underfull after stretching all glue to its maximum, a warning about ``Underfull box'' is issued. Analogously, if the box is overfull, \TeX\ tries to reduce the space by shrinking that glue. So, in our example, the final inserted glue can vary between 1.9cm and 2.2cm, depending on the size of the box which contains that glue. The \verb|plus| part in the glue can specify the value ``infinite'', through one of the following keywords: \verb|fil|, \verb|fill| or \verb|filll|. Each of these infinites is infinitely greater than the preceding one. Now that we have a basic understanding of glue, we can describe its insertion in a document. We use the control sequence \begin{macrocode} \hspace{<length>} \end{macrocode} to insert a horizontal rubber length of \verb|<length>|. We can utilize infinite stretch with the control sequences \verb|\hfil| and \verb|\hfill|, defined as \begin{macrocode} \hfil = \hskip 0pt plus 1fil minus 0pt \hfill = \hskip 0pt plus 1fill minus 0pt \end{macrocode} and, though it is not predefined as a macro, we can also use \begin{macrocode} \hskip 0pt plus 1filll \end{macrocode} to get that third level of infinity. If glue is to be inserted at the beginning of a line, the starred variant of \verb|\hspace{}|, \verb|\hspace*{}|, is to be used. Similarly, we use \verb|\vspace{}| when inserting a vertical rubber length. Like its horizontal cousin, \verb|\vspace{}| accepts standard glue lengths as an argument. For example, \begin{macrocode} \vspace{2in plus 1in minus 0.5in} \end{macrocode} produces a vertical space ranging between 1.5 and 3 inches, depending on surrounding text. As before, there is also a \verb|\vspace*{}| command. This is because the command \verb|\vspace{}| has no effect at the top of a page or at the bottom. Why would you want space when you are about to move to a new page? If you insist, you must use \verb|\vspace*{}| to force \LaTeX\ to make space. \subsection{Indentation} The horizontal distance by which the first line of paragraphs is indented is stored in \verb|\parindent|, which can be set to a constant or to a multiple of another length. \begin{macrocode} \parindent=0pt \parindent=1.5\parindent \end{macrocode} To produce a zero \verb|\parindent|, one may also load the {\sffamily parskip} package. \section{Typography} \LaTeX, being a markup language, uses special syntax to denote text styles such as {\itshape italicised}, {\bfseries bold-face}, and \textsf{sans-serif}. In particular, we have the commands shown below. \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|lll|} \hline \textit{italic} & \verb|\textit{...}| & \verb|{\itshape ...}| \\ \textbf{bold-face} & \verb|\textbf{...}| & \verb|{\bfseries ...}| \\ \textsl{slanted} & \verb|\textsl{...}| & \verb|{\slshape ...}| \\ \textsc{Small Caps} & \verb|\textsc{...}| & \verb|{\scshape ...}| \\ \textrm{roman} & \verb|\textrm{...}| & \verb|{\rmfamily ...}| \\ \textsf{sans-serif} & \verb|\textsf{...}| & \verb|{\sffamily ...}| \\ \texttt{monospaced} & \verb|\texttt{...}| & \verb|{\ttfamily ...}| \\ \emph{emphasised} & \verb|\emph{...}| & \verb|{\em ... }| \\ \hline \end{tabular} \end{center} Notice that there are two ways to encode each font style: as a command with an argument (e.g., \verb|\textit{...}|) and as a group (\verb|{}|) containing a switch (e.g., \verb|\itshape|). For short texts, the command variant is often more useful whereas for longer texts the switch variant is more aptly suited (see Defining Macros).
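For instance, the two lines below typeset the same words in bold face; the first passes the text as an argument to a command, while the second places a switch inside a group:
\begin{macrocode}
\textbf{some bold text}
{\bfseries some bold text}
\end{macrocode}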
\LaTeX\ uses braces (\verb|{}|) to denote what are referred to as \textit{groups}. Looking at the command variant of the font control sequences above, one may mistake the \verb|...| for the argument passed to the command. In fact, the argument is what immediately follows the command; that is, the entire group \verb|{...}|. What this means in terms of how commands behave, however, is that a command such as \verb|\textbf| affects only the one argument that follows it. So, \begin{macrocode} \textit italics \end{macrocode} would in fact only print ``\textit italics,'' not \textit{italics}. Conversely, a switch will affect \textit{all} text that follows it, up to the end of the group. For this reason, we may also use \verb|\bgroup| and \verb|\egroup| rather than opening and closing braces, \begin{example} \textit\bgroup some italics text\egroup \end{example} though this is shown here as an example and rarely used in practice. It is recommended to use \verb|\emph| to emphasise text, not \verb|\textit|. \verb|\emph| will italicise normal font text and, when invoked from an already-italicised context, convert text to a normal font for emphasis. Older \LaTeX2.09 documents may contain switches such as \verb|\it|, \verb|\bf|, \verb|\sl|, \verb|\sc|, \verb|\rm|, \verb|\sf| and \verb|\tt| to denote the above font shapes. These commands are obsolete in \LaTeXe\ and should not be used. \subsection{Legacy Support} The obsolescent \TeX\ commands \verb|\rm|, \verb|\it|, \verb|\bf|, etc. are declared in class files to function as their modern equivalents. \begin{macrocode} \DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm} \DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf} \DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit} \end{macrocode} So, writing \begin{macrocode} {\bf\it some text} \end{macrocode} is equivalent to writing \begin{macrocode} {\normalfont\bfseries\normalfont\itshape some text} \end{macrocode} As a result, the above code produces {\normalfont\bfseries\normalfont\itshape some text}, not {\bfseries\itshape some text}, as intended. This is because \verb|\normalfont| negated the effect \verb|\bfseries| had. The moral here is to \textit{never} use the old font commands. You gain nothing and lose much of the flexibility of the new ones. (Well, after \newcount\yearsSinceTex \yearsSinceTex=\numexpr\the\year-1994\relax \the\yearsSinceTex\ years they aren't \textit{really} new.) We also have the following sizing commands, each of which has a corresponding environment. \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|ll|} \hline {\tiny tiny} & \verb|{\tiny ...}| \\ {\scriptsize scriptsize} & \verb|{\scriptsize ...}| \\ {\footnotesize footnotesize} & \verb|{\footnotesize ...}| \\ {\small small} & \verb|{\small ...}| \\ {\normalsize normalsize} & \verb|{\normalsize ...}| \\ {\large large} & \verb|{\large ...}| \\ {\Large Large} & \verb|{\Large ...}| \\ {\LARGE LARGE} & \verb|{\LARGE ...}| \\ {\huge huge} & \verb|{\huge ...}| \\ {\Huge Huge} & \verb|{\Huge ...}| \\ \hline \end{tabular} \end{center} \subsection{Special Characters} \LaTeX\ defines several special symbols and characters using combinations of other keyboard characters, either because the actual symbol is reserved for \LaTeX\ syntax or keyboards do not have the symbol in question. Opening quotation marks are produced using the backtick (\textasciigrave) key while closing quotation marks are produced using the vertical quote (\textquotesingle) key. The double quote character is never used.
Rather, to produce double quotation marks, use two backticks or vertical quotes in succession. A \verb|\thinspace| (aliased to \verb|\,|) can be used to separate double and single quotation marks that come one after another. \begin{macrocode} ``\,`I'm positive,' he said, `I can do it.'\,'' \end{macrocode} To produce dashes of varying sizes, different numbers of hyphens must be used in the \TeX\ source. One hyphen (\verb|-|) is used to typeset a hyphen, used in compound words like `over-the-counter.' Two hyphens (\verb|--|) are used to create an en-dash that may be used to denote a range of numbers (1994--\the\year, for example), and three hyphens (\verb|---|) are used to typeset em-dashes --- used in paragraphs for interjections. The tilde (\verb|~|) character is used in \TeX\ to denote what is referred to as a non-breaking space --- known as a {\itshape tie}. That is, a space that cannot be used as a place to break a line. \begin{macrocode} \catcode`\~=\active \def~{\penalty10000\ } \end{macrocode} This should be used whenever a label and a number follow one another, as well as other situations such as phone numbers. \begin{macrocode} See Section~3.1 or call (234)~555-6789 for more details. \end{macrocode} Ellipsis points may be produced using the \verb|\ldots| macro and letter accents may be produced as in the following example. \begin{example} H\^otel, na\"ive, \'el\=eve,\\ sm\o rrebr\o d, !`Se\~norita!,\\ Sch\"onbrunner Schlo\ss{} Stra\ss e \end{example} The below table shows a more comprehensive list of accents and non-English characters with their control symbols. \vspace{0pt minus 20pt} \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|*{3}{lp{0.95cm}}ll|} \hline \`o & \verb|\`o| & \'o & \verb|\'o| & \^o & \verb|\^o| & \~o & \verb|\~o| \\ \=o & \verb|\=o| & \.o & \verb|\.o| & \"o & \verb|\"o| & \c c & \verb|\c c| \\ \d o & \verb|\d o| & \b o & \verb|\b o| & \t oo & \verb|\t oo| & \ss & \verb|\ss| \\[0.5ex] \oe & \verb|\oe| & \OE & \verb|\OE| & \ae & \verb|\ae| & \AE & \verb|\AE| \\ \aa & \verb|\aa| & \AA & \verb|\AA| & \i & \verb|\i| & \j & \verb|\j| \\ \o & \verb|\o| & \O & \verb|\O| & \l & \verb|\l| & \L & \verb|\L| \\ \hline \end{tabular} \end{center} We also note that the ten special characters \begin{center} \# \hspace{0.5em} \$ \hspace{0.5em} \% \hspace{0.5em} \& \hspace{0.5em} \~\ \hspace{0.5em} \_ \hspace{0.5em} \^\ \hspace{0.5em} \textbackslash \hspace{0.5em} \{ \hspace{0.5em} \} \end{center} are produced by preceding the symbol with a backslash (with the exception of `\textbackslash', which is typeset using \verb|\textbackslash|.) \subsection{Ligatures} \TeX\ also produces ligatures for certain characters (shown below). \begin{center} \large ff \quad fi \quad fl \quad ffi \end{center} This may be suppressed by inserting an empty group \verb|{}| between the characters: \begin{macrocode} f{}f \quad f{}i \quad f{}l \quad f{}f{}i \end{macrocode} produces \begin{center} \large f{}f \quad f{}i \quad f{}l \quad f{}f{}i. \end{center} \section{Modes in \TeX} \section{Math Mode} \subsection{Inline v. Display Math Mode} \subsection{Equation Environments} \subsection{Math Symbols} \subsection{Theorems} \section{Floating Bodies} \section{Graphics} \subsection{Graphic Formats} \subsubsection{Encapsulated PostScript} \subsubsection{Portable Document Format, JPEG, and PNG} \section{Cross-Referencing} \subsection{Indexing} \section{Sectioning} \TeX\ uses a counter for each of its headings. 
Due to the hierarchical nature of document headings, the counter for a given heading is reset each time the next higher-level number is incremented. The nesting of headers is as follows: \begin{macrocode} \newcounter{part} \newcounter{chapter} %% Book and report classes only \newcounter{section}[chapter] \newcounter{subsection}[section] \newcounter{subsubsection}[subsection] \newcounter{paragraph}[subsubsection] \newcounter{subparagraph}[paragraph] \end{macrocode} What this means is that each time the \texttt{chapter} counter is incremented, the \texttt{section} counter resets; each time the \texttt{section} counter is incremented, the \texttt{subsection} counter is reset, and so on. \section{Defining Macros} % TeX internals, @, \let v. \def, etc % checking newcommand vs renewcommand VS just def The words `control sequence,' `macro,' and `command' all reference the same thing in \LaTeX. That is, a shorthand way of repeating much longer and more cumbersome code very easily. For example, \verb|\TeX| is defined in \verb|latex.ltx| (from \verb|ltlogos.dtx|) as \begin{macrocode} \def\TeX{T\kern-.1667em\lower.5ex\hbox{E}\kern-.125emX\@} \end{macrocode} Users of \TeX\ can also define macros. Suppose we wish to use \R\ to denote the real numbers. Ordinarily this would need to be typeset as \verb|\mathbb{R}| in math mode or even more complicatedly as \verb|$\mathbb{R}$| in text mode. Instead, define \verb|\R| as follows. \begin{macrocode} \def\R{\ifmmode\mathbb{R}\else$\mathbb{R}$\fi} \end{macrocode} Then we can write simply \verb|\R|, either in math mode or horizontal mode, to get \R. \makeatletter \the\year/\two@digits{\the\month}/\two@digits{\the\day}:% \two@digits{\the\count@}:\two@digits{\the\count2} \makeatother \subsection{Delimited Arguments} User-defined control sequences can be even more flexible than the example above through the use of delimited arguments. \subsection{The {\ttfamily \bs makeatletter} and {\ttfamily \bs makeatother} Macros} The \verb|\makeatletter| macro changes the category code of the `@' character to 11 (which is the catcode of ordinary characters a-z, A-Z). \verb"\makeatother" reverts this to its original catcode of 12. Knuth assigns a category code to each and every character: 0 for escape `\textbackslash', 1 for beginning of a group `\{', 2 for end of group `\}', 3 for math shift `\$', 4 for alignment tab `\&', 5 for end of line, 6 for parameter `\#', 7 for superscript `\^', 8 for subscript `\_', 9 for ignored character, 10 for space, 11 for letters, 13 for active character `\~', 14 for comment character `\%', 15 for invalid character and 12 for characters other than the above. Knuth gives the freedom to change the catcode of any character anywhere. One could change the catcode of \textbackslash{} to 11 (i.e., a letter) and assign the catcode 0 to $|$ so that \verb+|section+ becomes a function or control sequence. You may have noted that an escape character combined with the characters of catcode 11 becomes a control sequence. As such, all user-defined control sequences or macros will be of this nature. This raises the issue of a user-defined macro having the same name as that of a macro in a package or even the \TeX{} kernel. This can break packages and cause unpredictable behaviour. In order to circumvent this foreseeable problem, package writers always use the character `@' in their control sequences by using \verb|\makeatletter| to change the catcode of the `@' character to 11, which is the catcode of alpha characters.
At the end of the package, the author will revert the catcode of `@' to 12 with the command \verb|\makeatother|. As a result, these macros cannot be redefined within the document without changing the catcode of `@' to 11, and novice users cannot accidentally create macros that might clash with kernel macros. For completeness we also remark that \verb|\makeatletter| and \verb|\makeatother| are defined as below and hence the following definitions may be used instead of the control sequences. \begin{macrocode} \def\makeatletter{\catcode`\@11\relax} \def\makeatother{\catcode`\@12\relax} \end{macrocode} For example, suppose we wish to make a counter that the user can increment and get the value of, but not redefine or decrement. We might use \begin{macrocode} \makeatletter \newcount\@counta \@counta=0 \def\addtocounta{\advance\@counta by 1\relax} \def\countaval{\the\@counta} \makeatother \end{macrocode} \subsection{Registers and Tokens} \section{Flow Control} Like any programming language, \TeX\ provides all the standard mechanisms for flow control. \subsection{If \ldots\ Then \ldots\ Else \ldots} Of these flow-control mechanisms, the most commonly used is likely the {\em if-then} and {\em if-then-else} construct. \TeX nicians creating documents designed for modularity and flexibility in reuse will find \TeX's {\em if} construct of particular interest. \subsection{The {\sffamily ifthen} package} \subsection{The {\sffamily optional} package} \section{Troubleshooting} \subsection{Overfull and Underfull {\ttfamily hbox}es and {\ttfamily vbox}es} \subsection{File ended errors} \subsection{Unresolved References} \subsection{Undefined Control Sequence} There are two likely scenarios where one may encounter an `undefined control sequence' error. The first is that not all of the correct macro packages have been loaded. For example, the \verb|\mathscr{}| control sequence will throw this error if the {\sffamily mathrsfs} package has not been loaded. The second and more complex reason that one may encounter this error is that a macro or definition was created within a group and \TeX\ is now outside that group. For example, \begin{macrocode} {\def\a{b}}\a \end{macrocode} will produce such an error because \verb|\a| is only defined within the group. For completeness, we also note that \begin{macrocode} \count0=1 {\count0=2 } \showthe\count0 \end{macrocode} will display the value 1; the assignment made inside the group is undone at the end of the group. Moreover, the choice of the brace characters for the beginning and end of group characters is not hard-wired in \TeX. It is arranged like this in the plain format: \begin{macrocode} \catcode`\{=1 % left brace is begin-group character \catcode`\}=2 % right brace is end-group character \let\bgroup={ \let\egroup=} \end{macrocode} \subsection{Paragraph Ended Before \ldots Was Complete} \vfill\copyright\ \uppercase\expandafter{\romannumeral \the\year} Jacob House \end{multicols*} \end{document}
{ "alphanum_fraction": 0.7381355309, "avg_line_length": 50.4304267161, "ext": "tex", "hexsha": "2ff63238c0e57dea64286a15f515f7eba04deec9", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "087a0ce6c6f7f414ef81ed200e5a91f9699e178e", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "jwfh/tex-support", "max_forks_repo_path": "how-to-tex/how-to-tex.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "087a0ce6c6f7f414ef81ed200e5a91f9699e178e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "jwfh/tex-support", "max_issues_repo_path": "how-to-tex/how-to-tex.tex", "max_line_length": 909, "max_stars_count": null, "max_stars_repo_head_hexsha": "087a0ce6c6f7f414ef81ed200e5a91f9699e178e", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "jwfh/tex-support", "max_stars_repo_path": "how-to-tex/how-to-tex.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 8080, "size": 27182 }
\subsection{Pans}
{ "alphanum_fraction": 0.7, "avg_line_length": 5, "ext": "tex", "hexsha": "2871104e25531347efd2017b4b10d43453ccca65", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "adamdboult/nodeHomePage", "max_forks_repo_path": "src/pug/theory/culture/methods/02-01-Pans.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_issues_repo_issues_event_max_datetime": "2022-01-01T22:16:09.000Z", "max_issues_repo_issues_event_min_datetime": "2021-03-03T12:36:56.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "adamdboult/nodeHomePage", "max_issues_repo_path": "src/pug/theory/culture/methods/02-01-Pans.tex", "max_line_length": 17, "max_stars_count": null, "max_stars_repo_head_hexsha": "266bfc6865bb8f6b1530499dde3aa6206bb09b93", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "adamdboult/nodeHomePage", "max_stars_repo_path": "src/pug/theory/culture/methods/02-01-Pans.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 20 }
\documentclass[hidelinks, 12pt]{style} \usepackage[utf8]{inputenc} \usepackage{fancyhdr} \pagestyle{fancy} \usepackage{amssymb} \usepackage{caption} \usepackage{subcaption} \usepackage{float} \graphicspath{{Images/}} \usepackage[backend=biber,style=phys]{biblatex} \newtheorem{definition}{Definition} \usepackage{tocbibind} \usepackage{tabularx} \usepackage{amsmath} \usepackage[toc,title,page]{appendix} \usepackage{hyperref} \usepackage{minted} \usepackage{blindtext} \usepackage{url} \usepackage{epigraph} \usepackage{tikz} \usepackage[intoc]{nomencl} \usepackage[makeroom]{cancel} \usepackage{algpseudocode} \usepackage{algorithm} \usepackage{chemfig} \usetikzlibrary{quotes,arrows.meta} \usetikzlibrary{decorations.markings} \makenomenclature \addbibresource{references.bib} \usepackage{etoolbox} \renewcommand\nomgroup[1]{% \item[\bfseries \ifstrequal{#1}{A}{Greek Letters}{} \ifstrequal{#1}{B}{Roman Letters}{} ]} % This will add the units %---------------------------------------------- \newcommand{\nomunit}[1]{% \renewcommand{\nomentryend}{\hspace*{\fill}#1}} %---------------------------------------------- \title{AMSIMP: An Open Source Ensemble Prediction Scheme Using Recurrent Neural Networks to Improve Numerical Weather Prediction} \author{Conor Casey} \college{Teacher: Ms. Abbott} \degree{Physical Sciences} \degreedate{SciFest@College - IT Tralee (Online)} \begin{document} \pagenumbering{roman} \maketitle \chapter*{ \centering ``Although the infinitesimal calculus has been a splendid success, yet there remain problems in which it is cumbrous or unworkable. When such difficulties are encountered, it may be well to return to the manner in which they did things before the calculus was invented, postponing the passage to the limit until after the problem has been solved for a moderate number of moderately small differences'' \\[5pt] \rightline{{\rm --- Lewis Fry Richardson}} } \chapter*{Abstract} \addcontentsline{toc}{chapter}{Abstract} This report hypothesises that it is possible to create an open source software implementation for simulating atmospheric dynamics, with a recurrent neural network (RNN) and an ensemble prediction system (EPS) used in combination with a physical model; that such a software implementation has a reasonable execution time to forecast length ratio; and that such an implementation has a statistically significant accuracy improvement over a traditional deterministic physical model. Firstly, this report explains and derives a series of relevant equations for simulating atmospheric dynamics on a synoptic scale. Following this process, the software was developed and released on the open-source platform GitHub. To prove the hypothesis, it was determined that a series of appropriate benchmarks would be carried out in the areas of performance and accuracy. The performance benchmark would demonstrate whether or not the software has a reasonable execution time to forecast length ratio, and the accuracy benchmark would highlight whether or not the forecasts produced by the software had a reasonable level of accuracy. Considering there is no open source software to which one can compare the software, it was determined that the software would be compared against a proprietary implementation. In regards to performance benchmarking, the ratio between the runtime of the forecast and the fixed length of the forecast was determined.
For the physical model, the ratio between the two parameters was $0.0076$; for the physical model with the RNN enabled, the ratio was $0.0799$; and for the physical model with the ensemble forecast system enabled, the ratio was $0.1147$. This corresponds to a usable forecast length of 119 hours (99\%), 110 hours (92\%), and 106 hours (89\%) respectively. Hence, the forecast would be regarded as highly usable. In regards to accuracy benchmarking, three distinct benchmarks were carried out. The purpose of the first benchmark was to determine whether the forecast produced by a given scheme of the software was more accurate than the naive forecasting model by using the mean absolute scaled error. Three parameters, geopotential height, zonal wind, and meridional wind, were consistently more accurate than the naive forecasting model across the board. The mean absolute scaled error across the entire forecasting period was $0.9399$, $0.9192$, and $0.9933$ respectively. In regards to the virtual temperature, the physical model and the physical model with the ensemble forecast system under-performed in comparison to the naive forecasting model. The physical model with the RNN enabled, however, performed better than the naive forecasting model with a mean absolute scaled error of $0.9511$. In regards to air temperature, a similar story applies; however, the physical model with the RNN in this case was unable to beat the naive forecasting model, with a mean absolute scaled error of $1.51$. In regards to relative humidity, the physical model with the RNN enabled and the physical model with the EPS enabled performed dramatically worse than the physical model. The physical model was also unable to match the accuracy of the naive forecasting model with a mean absolute scaled error of $1.17$. That being said, the majority of the forecast schemes within the software had a higher accuracy than the naive forecasting model; hence, the software as a whole has a reasonable level of accuracy. The purpose of the second benchmark was to determine whether a particular scheme had a statistically significant increase in accuracy over the physical model. In regards to the physical models with the RNN enabled, a statistically significant increase in accuracy was seen in two out of the six atmospheric parameters benchmarked, with a significance level of $0.1$. These were air temperature and virtual temperature, with p-values of approximately $0.0543$ and $0.0115$ respectively. In regards to relative humidity, a statistically significant decrease in performance was observed. In regards to geopotential height, an RNN was not developed due to the restrictions created by the COVID-19 pandemic. Hence, a statistically significant increase or decrease in accuracy was not observed. As zonal wind and meridional wind are determined using geopotential height, an increase or decrease in accuracy was not observed for the aforementioned parameters. Overall, using an RNN in combination with a physical model shows significant promise. In regards to the physical models with the EPS enabled, a statistically significant increase or decrease in accuracy was not observed in any of the six atmospheric parameters benchmarked. While an increase in performance was observed across the board, except in the case of relative humidity, it was not statistically significant. The purpose of the third benchmark was to compare the accuracy of the software's most accurate scheme against a proprietary implementation.
The results show that there is a statistically significant increase across all of the five atmospheric parameters benchmarked in favour of the forecast from the proprietary implementation. Hence, against a proprietary implementation, the software significantly under-performs. Overall, the hypothesis that was proposed has been partially proven and can be accepted as such. \chapter*{Acknowledgements} \addcontentsline{toc}{chapter}{Acknowledgements} \input{Chapters/Acknowledgements} \tableofcontents \listoffigures % Nomenclatures \mbox{} \nomenclature[A, 01]{$\beta$}{Rossby Number \nomunit{$rad \cdot s^{-1} m^{-1}$}} \nomenclature[A, 03]{$\theta$}{Potential Temperature \nomunit{$K$}} \nomenclature[A, 04]{$\lambda$}{Longitude \nomunit{$^{\circ}$}} \nomenclature[A, 05]{$\Pi$}{Exner Function \nomunit{$\frac{K}{K}$}} \nomenclature[A, 06]{$\Phi$}{Geopotential Height \nomunit{$m$}} \nomenclature[A, 07]{$\phi$}{Latitude \nomunit{$^{\circ}$}} \nomenclature[A, 08]{$\varphi$}{Geopotential \nomunit{$m^2 \cdot s^{-2}$}} \nomenclature[A, 09]{$\rho$}{Density \nomunit{$kg \cdot m^{-3}$}} \nomenclature[A, 10]{$\sigma$}{Static Stability \nomunit{$J \cdot hPa^{-2} \cdot kg^{-1}$}} \nomenclature[A, 11]{$\omega$}{Vertical Velocity \nomunit{$Pa \cdot s^{-1}$}} \nomenclature[A, 12]{$\Omega$}{Angular Rotation Rate of Earth \nomunit{$7.29246206 \cdot 10^{-5}\, rad \cdot s^{-1}$}} \nomenclature[B, 01]{$a$}{Earth mean radius \nomunit{$6378100 m$}} \nomenclature[B, 02]{$c_p$}{Specific Heat Capacity on a Constant Pressure Surface \nomunit{$1004\, J \cdot kg^{-1} \cdot K^{-1}$}} \nomenclature[B, 03]{$e$}{Vapour Pressure \nomunit{$hPa$}} \nomenclature[B, 04]{$F$}{Coriolis Force \nomunit{$rad \cdot s^{-1}$}} \nomenclature[B, 04]{$f$}{Coriolis Parameter \nomunit{$rad \cdot s^{-1}$}} \nomenclature[B, 05]{$g$}{Gravitational Acceleration \nomunit{$m \cdot s^{-2}$}} \nomenclature[B, 06]{$m$}{Mixing Ratio \nomunit{$\frac{kg}{kg}$}} \nomenclature[B, 07]{$h$}{Pressure Thickness \nomunit{$m$}} \nomenclature[B, 08]{$p$}{Pressure \nomunit{$hPa$}} \nomenclature[B, 09]{$r$}{Relative Humidity \nomunit{$\%$}} \nomenclature[B, 10]{$R$}{Specific Gas Constant for Dry Air \nomunit{$287\, J \cdot kg^{-1} \cdot K^{-1}$}} \nomenclature[B, 11]{$T$}{Temperature \nomunit{$K$}} \nomenclature[B, 12]{$T_v$}{Virtual Temperature \nomunit{$K$}} \nomenclature[B, 13]{$t$}{Time \nomunit{$s$}} \nomenclature[B, 14]{$u$}{Zonal Wind \nomunit{$m \cdot s^{-1}$}} \nomenclature[B, 15]{$v$}{Meridional Wind \nomunit{$m \cdot s^{-1}$}} \nomenclature[B, 16]{$W$}{Precipitable Water \nomunit{$mm$}} \nomenclature[B, 17]{$x, y$}{Horizontal Distances \nomunit{$m$}} \printnomenclature \chapter{Introduction} \pagenumbering{arabic} \input{Chapters/Introduction} \chapter{Atmospheric Science}\label{2} \input{Chapters/Atmospheric_Science.tex} \chapter{Dynamical Core}\label{3} \input{Chapters/Dynamical_Core.tex} \chapter{Simulating Dynamics}\label{4} \input{Chapters/Simulating_Dynamics.tex} \chapter{Implementation Details}\label{5} \input{Chapters/Implementation_Details.tex} \chapter{Benchmarking}\label{6} \input{Chapters/Benchmarking.tex} \chapter{Results}\label{7} \input{Chapters/Results.tex} \chapter{Conclusions}\label{8} \input{Chapters/Conclusions.tex} \newpage \pagenumbering{Roman} \appendix \renewcommand{\thesection}{\Alph{section}.\arabic{section}} \setcounter{section}{0} \input{Chapters/Appendices.tex} \pagenumbering{Roman} \printbibliography[heading = bibintoc] \end{document}
{ "alphanum_fraction": 0.7635913549, "avg_line_length": 54.1391752577, "ext": "tex", "hexsha": "ccd45d3d59b1d7f121849ec55c5dc17e67a142d4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "amsimp/papers", "max_forks_repo_path": "scifest/online/project-book/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "amsimp/papers", "max_issues_repo_path": "scifest/online/project-book/main.tex", "max_line_length": 1591, "max_stars_count": 1, "max_stars_repo_head_hexsha": "a212b3f65140f0292d51055be324a7c1b084e121", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "amsimp/papers", "max_stars_repo_path": "scifest/online/project-book/main.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-15T10:06:17.000Z", "max_stars_repo_stars_event_min_datetime": "2020-05-15T10:06:17.000Z", "num_tokens": 2716, "size": 10503 }
\chapter{Conclusions} We presented a visual debugging and exploration tool capable of showing how paths shot by a path tracer interact with the scene. We showed how the tool is structured into a data-gathering part and a visualization client; the gatherer can be plugged into the code of pre-existing path tracers as a library and dumps the needed data to disk during rendering, while the client lets users explore the gathered data. Due to the sheer number of paths a tracer has to shoot, we showed how the provided filtering tools, combined with the visualization widgets and options, help users explore the datasets with relative ease. We dived into the implementation details of the parts that make our tool unique and into the performance optimizations that speed up operations that would otherwise not be possible, such as the multi-pass rendering pipeline that lets users visually explore hundreds of thousands of paths on a personal computer. We also detailed two pairs of datasets that show the capabilities of the previously introduced features combined, partially demonstrating how the tool can be used both as a valid debug framework for path tracers and as an educational tool to explain with practical examples how a tracer works under the hood. \section{Future developments} The tool we presented is far from complete. Several improvements could be made throughout, and many have already been noted and described in this document. Many others, not tied to any particular feature, are gathered here. \begin{description} \item[More filters] Several path selection filters have been imagined during development, such as one excluding paths carrying less than a threshold radiance or another selecting only paths shot from a set of pixels selected via the \textbf{“Image”} panel. Also interesting was the idea of extending the sphere filter with a sort of \textit{sub-filter}, such as one selecting the paths both bouncing in the sphere and bouncing in a user-controlled cone of directions; that would have been helpful, for example, for focusing on paths going toward a light after bouncing on an interesting surface patch. Another useful sphere filter sub-filter would have been one splitting between the bounces that have their direction sampled from the surface material and the ones that sampled scene lights; it would have been helpful in both examples of chapter~\ref{results}, but implementing it would have meant almost doubling the memory footprint of each bounce. \item[Save and load status] The visualization client has several parameters whose settings are lost every time the application is closed. A save and load feature for the parameter settings would no doubt be useful to every user. Since most parameters are controlled by ImGui widgets, it may be sensible to use the state preservation feature of the library. \item[Disk streaming] Every dataset we tested hardly went over 1 billion bounces, but production renders tend to have far more: a Full HD ($1920 \times 1080$ pixels) render with 1024 spp has roughly double the bounces, which means a memory footprint twice as big. So much data will not fit entirely in the primary memory of most personal computers, which makes data streaming from disk an essential feature in those cases. Even with the most careful implementation, streaming would impact performance, but having something working slowly is better than something not working at all.
\item[Spatial acceleration structures] Talking about performance brings up a somewhat obvious possible improvement: spatial acceleration structures. Embedding every bounce in a data structure such as a \textit{k-d tree} \cite{bentley1975multidimensional} and then storing the paths' topology --- that is, which bounces make up each path --- in a separate data structure would improve the performance of many parts of the visualization tool. Consider how the current filtering options are all based upon spatial queries and how these could be immensely faster if bounces were stored in an acceleration structure. As already mentioned in section \ref{heatmap}, the same goes for the bounce density heatmap generation, which is currently the slowest process in our tool. \item[User testing] Assessing the usability and usefulness of a piece of software is no easy task and is canonically delegated to user testing. Gathering people who work with path tracing and performing usability tests with them would surely give great insight into which features could be improved, added, or discarded altogether. \end{description}
{ "alphanum_fraction": 0.8169906093, "avg_line_length": 286.1875, "ext": "tex", "hexsha": "c10d8e2cfcb264f6532471e1c7c6818cab6ede8b", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "68f0900281fbb7c36fdfa34d6b86ec6099f9e274", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "giuliom95/msc-thesis", "max_forks_repo_path": "chapters/chapter_conclusions/text.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "68f0900281fbb7c36fdfa34d6b86ec6099f9e274", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "giuliom95/msc-thesis", "max_issues_repo_path": "chapters/chapter_conclusions/text.tex", "max_line_length": 947, "max_stars_count": null, "max_stars_repo_head_hexsha": "68f0900281fbb7c36fdfa34d6b86ec6099f9e274", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "giuliom95/msc-thesis", "max_stars_repo_path": "chapters/chapter_conclusions/text.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 876, "size": 4579 }
% chapter included in forwardcom.tex \documentclass[forwardcom.tex]{subfiles} \begin{document} \RaggedRight \chapter{Copyright notice} This document is copyrighted 2016-2021 by Agner Fog with a Creative Commons license. \href{https://creativecommons.org/licenses/by/4.0/legalcode}{creativecommons.org/licenses/by/4.0/legalcode}. \end{document}
{ "alphanum_fraction": 0.8011527378, "avg_line_length": 28.9166666667, "ext": "tex", "hexsha": "82da6b9a4c6fd564f422f77e53b14157231d6821", "lang": "TeX", "max_forks_count": 5, "max_forks_repo_forks_event_max_datetime": "2021-05-25T13:18:36.000Z", "max_forks_repo_forks_event_min_datetime": "2016-06-26T09:50:23.000Z", "max_forks_repo_head_hexsha": "3330fc4cdece5417775a4357151576402b146f6a", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "ForwardCom/manual", "max_forks_repo_path": "fwc_copyright_notice.tex", "max_issues_count": 11, "max_issues_repo_head_hexsha": "3330fc4cdece5417775a4357151576402b146f6a", "max_issues_repo_issues_event_max_datetime": "2021-11-12T16:38:55.000Z", "max_issues_repo_issues_event_min_datetime": "2016-06-26T11:28:50.000Z", "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "ForwardCom/manual", "max_issues_repo_path": "fwc_copyright_notice.tex", "max_line_length": 108, "max_stars_count": 138, "max_stars_repo_head_hexsha": "3330fc4cdece5417775a4357151576402b146f6a", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "ForwardCom/manual", "max_stars_repo_path": "fwc_copyright_notice.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-19T03:01:51.000Z", "max_stars_repo_stars_event_min_datetime": "2016-06-26T08:31:44.000Z", "num_tokens": 87, "size": 347 }
\chapter{Loop Optimization} This section is specific to loop optimization and shows several tutorial examples using the optimization mechanisms within ROSE. \fixme{We might want to reference Qing's work explicitly since this is really just showing off her work.} \section{Example Loop Optimizer} Simple example translator showing use of pre-defined loop optimizations. \fixme{We are not running performance tests within this tutorial, but perhaps we could later.} Figure~\ref{Tutorial:exampleLoopOptimization} shows the code required to call some loop optimizations within ROSE. The translator that we build for this tutorial is simple and takes the following command line options to control which optimizations are done. \begin{verbatim} -ic1 :loop interchange for more reuses -bk1/2/3 <blocksize> :block outer/inner/all loops -fs1/2 :single/multi-level loop fusion for more reuses -cp <copydim> :copy array -fs0 : loop fission -splitloop: loop splitting -unroll [locond] [nvar] <unrollsize> : loop unrolling -bs <stmtsize> : break up statements in loops -annot <filename>: Read annotation from a file which defines side effects of functions -arracc <funcname> : Use special function to denote array access (the special function can be replaced with macros after transformation). This option is for circumventing complex subscript expressions for linearized multi-dimensional arrays. -opt <level=0> : The level of loop optimizations to apply (By default, only the outermost level is optimized). -ta <int> : Max number of nodes to split for transitive dependence analysis (to limit the overhead of transitive dep. analysis) -clsize <int> : set cache line size in evaluating spatial locality (affect decisions in applying loop optimizations) -reuse_dist <int> : set maximum distance of reuse that can exploit cache (used to evaluate temporal locality of loops) \end{verbatim} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/loopOptimization.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/loopOptimization.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code showing use of loop optimization mechanisms.} \label{Tutorial:exampleLoopOptimization} \end{figure} \clearpage \section{Matrix Multiply Example} Using the matrix multiply example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopOptimization} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization}.
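As a rough illustration of how such a translator might be invoked (the executable name below is an assumption based on the source file name, and the exact usage depends on how the tutorial examples are built in your ROSE tree), the transformation shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization} corresponds to a command line of roughly the following form, using the options documented above:
\begin{verbatim}
   loopOptimization -bk1 <blocksize> -fs0 inputCode_LoopOptimization_blocking.C
\end{verbatim}
Here {\tt <blocksize>} is the block size argument accepted by {\tt -bk1}, as listed in the option summary.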
\begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_blocking.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_blocking.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code used as input to loop optimization processor.} \label{Tutorial:exampleInputCode_LoopOptimization} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_blocking.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_blocking.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Output of loop optimization processor showing matrix multiply optimization (using options: {\tt -bk1 -fs0}).} \label{Tutorial:exampleOutput_LoopOptimization} \end{figure} \clearpage \section{Loop Fusion Example} Using the loop fusion example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization_fusion}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopOptimization} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization_fusion}. \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_fusion.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_fusion.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code used as input to loop optimization processor.} \label{Tutorial:exampleInputCode_LoopOptimization_fusion} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_fusion.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_fusion.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Output of loop optimization processor showing loop fusion (using options: {\tt -fs2}).} \label{Tutorial:exampleOutput_LoopOptimization_fusion} \end{figure} \section{Example Loop Processor (LoopProcessor.C)} This section contains a more detailed translator which uses the command-line for input of specific loop processing options and is more sophisticated than the previous translator used to handle the previous two examples. Figure~\ref{Tutorial:exampleLoopProcessor} shows the code required to call the loop optimizations within ROSE. The translator that we build for this tutorial is simple and takes command line parameters to control which optimizations are done.
\begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/loopProcessor.C.aa} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/loopProcessor.C.aa} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Detailed example source code showing use of loop optimization mechanisms (loopProcessor.C part 1).} \label{Tutorial:exampleLoopProcessor} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/loopProcessor.C.ab} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/loopProcessor.C.ab} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{loopProcessor.C source code (Part 2).} \label{Tutorial:exampleLoopProcessor2} \end{figure} \clearpage \section{Matrix Multiplication Example (mm.C)} Using the matrix multiplication example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization_mm}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopProcessor} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization_mm}. \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_mm.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_mm.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code used as input to loopProcessor, shown in figure~\ref{Tutorial:exampleLoopProcessor}.} \label{Tutorial:exampleInputCode_LoopOptimization_mm} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_mm.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_mm.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Output of loopProcessor using input from figure~\ref{Tutorial:exampleInputCode_LoopOptimization_mm} (using options: {\tt -bk1 -fs0}).} \label{Tutorial:exampleOutput_LoopOptimization_mm} \end{figure} \clearpage \section{Matrix Multiplication Example Using Linearized Matrices (dgemm.C)} Using the matrix multiplication example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization_dgemm}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopProcessor} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization_dgemm}.
\begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_dgemm.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_dgemm.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code used as input to loopProcessor, shown in figure~\ref{Tutorial:exampleLoopProcessor}.} \label{Tutorial:exampleInputCode_LoopOptimization_dgemm} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_dgemm.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_dgemm.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Output of loopProcessor using input from figure~\ref{Tutorial:exampleInputCode_LoopOptimization_dgemm} (using options: {\tt -bk1 -unroll nvar 16}).} \label{Tutorial:exampleOutput_LoopOptimization_dgemm} \end{figure} \clearpage \section{LU Factorization Example (lufac.C)} Using the LU factorization example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization_lufac}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopProcessor} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization_lufac}. \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_lufac.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_lufac.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Example source code used as input to loopProcessor, shown in figure~\ref{Tutorial:exampleLoopProcessor}.} \label{Tutorial:exampleInputCode_LoopOptimization_lufac} \end{figure} \begin{figure}[!h] {\indent {\mySmallFontSize % Do this when processing latex to generate non-html (not using latex2html) \begin{latexonly} \lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_lufac.C} \end{latexonly} % Do this when processing latex to build html (using latex2html) \begin{htmlonly} \verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_lufac.C} \end{htmlonly} % end of scope in font size } % End of scope in indentation } \caption{Output of loopProcessor using input from figure~\ref{Tutorial:exampleInputCode_LoopOptimization_lufac} (using options: {\tt -bk1 -fs0 -splitloop -annotation}).} \label{Tutorial:exampleOutput_LoopOptimization_lufac} \end{figure} \clearpage \section{Loop Fusion Example (tridvpk.C)} Using the loop fusion example code shown in figure~\ref{Tutorial:exampleInputCode_LoopOptimization_tridvpk}, we run the loop optimizer in figure~\ref{Tutorial:exampleLoopProcessor} and generate the code shown in figure~\ref{Tutorial:exampleOutput_LoopOptimization_tridvpk}.
\begin{figure}[!h]
{\indent
{\mySmallFontSize

% Do this when processing latex to generate non-html (not using latex2html)
\begin{latexonly}
\lstinputlisting{\TutorialExampleDirectory/inputCode_LoopOptimization_tridvpk.C}
\end{latexonly}

% Do this when processing latex to build html (using latex2html)
\begin{htmlonly}
\verbatiminput{\TutorialExampleDirectory/inputCode_LoopOptimization_tridvpk.C}
\end{htmlonly}

% end of scope in font size
}
% End of scope in indentation
}
\caption{Example source code used as input to loopProcessor (shown in figure~\ref{Tutorial:exampleLoopProcessor}).}
\label{Tutorial:exampleInputCode_LoopOptimization_tridvpk}
\end{figure}

\begin{figure}[!h]
{\indent
{\mySmallFontSize

% Do this when processing latex to generate non-html (not using latex2html)
\begin{latexonly}
\lstinputlisting{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_tridvpk.C}
\end{latexonly}

% Do this when processing latex to build html (using latex2html)
\begin{htmlonly}
\verbatiminput{\TutorialExampleBuildDirectory/rose_inputCode_LoopOptimization_tridvpk.C}
\end{htmlonly}

% end of scope in font size
}
% End of scope in indentation
}
\caption{Output of loopProcessor using input from figure~\ref{Tutorial:exampleInputCode_LoopOptimization_tridvpk} (using options: {\tt -fs2 -ic1 -opt 1}).}
\label{Tutorial:exampleOutput_LoopOptimization_tridvpk}
\end{figure}
{ "alphanum_fraction": 0.7906545904, "avg_line_length": 31.9605263158, "ext": "tex", "hexsha": "ef0df54470a217cfb7016cd45cbd0160b350ef3c", "lang": "TeX", "max_forks_count": 146, "max_forks_repo_forks_event_max_datetime": "2022-03-04T07:32:53.000Z", "max_forks_repo_forks_event_min_datetime": "2015-04-27T02:48:34.000Z", "max_forks_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "sujankh/rose-matlab", "max_forks_repo_path": "docs/Rose/Tutorial/loopOptimization.tex", "max_issues_count": 174, "max_issues_repo_head_hexsha": "7435d4fa1941826c784ba97296c0ec55fa7d7c7e", "max_issues_repo_issues_event_max_datetime": "2022-03-31T16:51:05.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-28T18:41:32.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "sujankh/rose-matlab", "max_issues_repo_path": "docs/Rose/Tutorial/loopOptimization.tex", "max_line_length": 113, "max_stars_count": 488, "max_stars_repo_head_hexsha": "7597292cf14da292bdb9a4ef573001b6c5b9b6c0", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "maurizioabba/rose", "max_stars_repo_path": "docs/Rose/Tutorial/loopOptimization.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-30T07:15:46.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-09T08:54:48.000Z", "num_tokens": 3676, "size": 14574 }
\subsection{Rings}\label{subsec:rings} \subsubsection{Ring of Protection} Size: T\\ Price: 500G\\ This ring increases the wearer's resistance to blunt, piercing and cutting by 1. \subsubsection{Fox Ring} Size: T\\ Price: 1000G\\ This ring increases the wearer's agility by 1. \subsubsection{Ring of Health} Size: T\\ Price: 2500G\\ This ring increases the wearer's health by 4 for every level of Increase Health that they have. \subsubsection{Ring of Might} Size: T\\ Price: 1000G\\ This ring increases the wearer's strength by 1. \subsubsection{Ring of the Mage} Size: T\\ Price: 2500G\\ This ring increases the wearer's mana by 4 for every level of Increase Mana that they have. \subsubsection{Ring of Stars} Size: T\\ Price: 5000G\\ This ring can be activated by the wearer by taking 4 AP to speak the ring's command phrase. When doing so, the wearer gains +3 on Intellect, Perception and Empathy for the next 10 minutes. The ring then ceases to function for the next 24 hours.
{ "alphanum_fraction": 0.7606490872, "avg_line_length": 29.8787878788, "ext": "tex", "hexsha": "2025dc29c699950ac2f0851d8fa02f954693fc43", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_forks_repo_path": "items/equipment/rings.tex", "max_issues_count": 155, "max_issues_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_issues_repo_issues_event_max_datetime": "2022-03-03T13:49:05.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-18T13:19:57.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_issues_repo_path": "items/equipment/rings.tex", "max_line_length": 96, "max_stars_count": 6, "max_stars_repo_head_hexsha": "73781f7cd7035b927a35199af56f9da2ad2c2e95", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "NTrixner/RaggedLandsPenAndPaper", "max_stars_repo_path": "items/equipment/rings.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-03T09:32:08.000Z", "max_stars_repo_stars_event_min_datetime": "2018-03-13T09:33:31.000Z", "num_tokens": 276, "size": 986 }
\chapter{ICT Skills in Higher Education} \section{Use of Technology for Teaching} \section{Audio-Visual Tools Available for Teaching} \section{My Experiences in using Technology for Teaching/Research}
{ "alphanum_fraction": 0.8208955224, "avg_line_length": 40.2, "ext": "tex", "hexsha": "3b15567d81a3ab1a63fc7831f375cac2446a80d0", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-09-01T04:21:21.000Z", "max_forks_repo_forks_event_min_datetime": "2021-09-01T04:21:21.000Z", "max_forks_repo_head_hexsha": "5268ecb57ef96d54ab09054aedd99fece1232619", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "GayashanNA/USJ_CTHE_Template", "max_forks_repo_path": "chapters/chapter07.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "5268ecb57ef96d54ab09054aedd99fece1232619", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "GayashanNA/USJ_CTHE_Template", "max_issues_repo_path": "chapters/chapter07.tex", "max_line_length": 66, "max_stars_count": null, "max_stars_repo_head_hexsha": "5268ecb57ef96d54ab09054aedd99fece1232619", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "GayashanNA/USJ_CTHE_Template", "max_stars_repo_path": "chapters/chapter07.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 43, "size": 201 }
\section{Security}

This hybrid solution allows us to use the parent chain as a reliable protector against most of the attacks that target PoS systems~\cite{pos_attacks}. In this section, we assume that the parent chain is well secured and that every user has reliable access to it.

\subsection{Nothing at stake}
This attack splits into two cases: microforks and generational forks.

\subsubsection{Microforks}

When a malicious leader is elected, they can produce blocks without any cost and are free to create conflicting branches within a single generation. This case is very similar to BitcoinNG's. We introduce the Proof of Fraud (PoF) mechanism to punish malicious leaders, but in this situation we can make the penalties much more severe by acting not only on the transaction fees but also on the staked tokens. On the other hand, this kind of forking is not dangerous at all – it may introduce some mess, but it is resolved instantly with the next keyblock.

\begin{figure}[h]
\caption{Microfork}
\centering
\includegraphics[scale=0.35]{microfork}
\end{figure}

\subsubsection{Generational forks}

This kind of attack is not a problem in PoW-based BitcoinNG, as producing keyblocks is a hard task. On hyperchains, by contrast, keyblocks are extremely cheap – a malicious leader might flood the network with conflicting keyblocks:

\begin{figure}[h]
\caption{Generational Fork}
\centering
\includegraphics[scale=0.35]{genfork}
\end{figure}

The keyblocks $K1a$, $K1b$, etc.\ are emitted on the child chain by a malicious leader over different microblocks. This effectively splits the network into many parts, as it becomes unclear to which of the forks the delegates should commit. To resolve this issue, a single leader must be agreed upon using some additional mechanism – a new election should be performed, excluding the compromised delegate.

To notify the network about the generational fork, the peers need to announce this fact on the parent chain by publishing fraud commitments and the cryptographic proofs of generational fraud (PoGF). Each commitment has to point to the latest child keyblock considered by the delegates to be valid, that is, the generation where the fork starts. The committer also needs to declare which fork they want to contribute to – if they get elected, they will be required to build on it (we disallow rollbacks). The voting power should be calculated based on the latest block before the fraud was detected.

\begin{figure}[h]
\caption{Generational Fork Solving}
\centering
\includegraphics[scale=0.45]{genfraud}
\end{figure}

Due to network propagation delays and connectivity failures, some nodes might not notice that a generational fork was created and might therefore fail to respond accordingly. New nodes, or nodes catching up after downtime, need to be aware that a generational fork was created and apply the appropriate resolution. Some of them may commit to one of the forks and receive a fraud notification only after they have started mining. Therefore, we introduce a metric over the branches which, by convention, we call "difficulty".
The rule to resolve conflicts caused by such data races is similar to the existing PoW strategy of "follow the most difficult chain," and the formula goes as follows:

\begin{minipage}{\linewidth}
\begin{lstlisting}
difficulty : Block -> Int
difficulty(Genesis) = 0
difficulty(block) =
  if exists proof of generational fraud on /block/
  then sum of the voting power of delegates
         that committed to /prev(block)/
     + sum of the voting power of delegates
         that committed to solve the genfork
  else sum of the voting power of delegates
         that committed to /prev(block)/
\end{lstlisting}
\end{minipage}

If two forks have the same difficulty, then we prefer the one with the lower block hash. This formula ensures that keyblocks pointing to a PoGF are always strongly preferred over ordinary keyblocks – delegates who detect generational forks take priority over poorly connected, bootstrapping or syncing ones.

This solution may look vulnerable to situations where the leader does not respond, or responds with significant delay. To handle these cases, we simply elect a new leader, stalling the network for a while. To dispel the doubts that arise when the previous leader reappears, we can assume finalization after $f$ (implementation-dependent) generations.

There is a compelling case where the malicious leader submits a generational fork by publishing keyblocks on conflicting microblocks. Here we want to prioritize the PoGF, because the consequences of forks on keyblocks are much more severe than those on microblocks, and we would need to resolve them anyway.

\subsection{Stake grinding}

Since the RNG ultimately depends on the keyblock hash on the PoW chain, it is impossible to predict its outcomes. One could try to mine the parent chain in a special way, but it would require so much computational power that in most cases it would be easier to take control of that chain via a 51\% attack.

\subsection{Long range attack}

While this attack is still possible to perform, it would be impossible to carry out silently and without preparation from the very beginning. The commitments guarantee that the delegates' information is stored on an immutable chain, so an attacker would need to announce their intention to mine suspicious blocks for the full duration of the attack. This would quickly expose the attacker's intentions and let the others prepare for an eventual surprise (by blacklisting them, for example).

\subsection{Avoiding punishments}

Depending on the circumstances on the network, the transaction fees may vary – this makes the original penalty system from BitcoinNG insufficient in some cases, as the expected loss upon fraud detection may be much smaller than the expected profit. One of the most natural ideas is to freeze the stake for some period before the election. In this scenario, the protocol is able to painfully slash malicious leaders by burning or redistributing their stake. However, this may raise problems when delegated voting is used – a malicious leader may vote for a second, empty account that then commits the fraud, losing potentially nothing if it gets compromised. This scenario can be dealt with by allowing only the top $k$ stakers to be voted on, or by slashing \textit{everyone} who supported the malicious leader. We leave this implementation-dependent, as different solutions require different security approaches.
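As an illustration only, the following is a minimal, runnable sketch of the fork-choice rule described in the "Nothing at stake" subsection above. The \texttt{Block} record, its fields, and the assumption that a chain's total difficulty is the sum of its blocks' difficulties are simplifications made purely for presentation and are not part of the protocol specification:

\begin{minipage}{\linewidth}
\begin{lstlisting}[language=Python]
# Illustrative sketch of the "follow the most difficult chain" rule.
# The data model below is a simplification chosen for readability.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    hash: str                # block hash (hex string)
    prev: Optional["Block"]  # None for the genesis block
    committed_power: int     # voting power committed to this block
    has_pogf: bool = False   # proof of generational fraud present?
    genfork_power: int = 0   # power committed to solving the genfork

def block_difficulty(block: Block) -> int:
    if block.prev is None:   # genesis
        return 0
    d = block.prev.committed_power
    if block.has_pogf:
        d += block.genfork_power
    return d

def chain_difficulty(tip: Block) -> int:
    # Assumption: a chain is as difficult as the sum of its blocks.
    total, b = 0, tip
    while b is not None:
        total += block_difficulty(b)
        b = b.prev
    return total

def prefer(a: Block, b: Block) -> Block:
    # Ties are broken in favour of the lower block hash.
    da, db = chain_difficulty(a), chain_difficulty(b)
    if da != db:
        return a if da > db else b
    return a if a.hash < b.hash else b
\end{lstlisting}
\end{minipage}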
{ "alphanum_fraction": 0.7984401285, "avg_line_length": 47.384057971, "ext": "tex", "hexsha": "71b1e3757fa40835d7465cd986d96b75b1d41dfd", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "fcde74d0f120bcaf9479d151c9f0a5bd855cba99", "max_forks_repo_licenses": [ "ISC" ], "max_forks_repo_name": "akovari/hyperchains-whitepaper", "max_forks_repo_path": "security.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "fcde74d0f120bcaf9479d151c9f0a5bd855cba99", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "ISC" ], "max_issues_repo_name": "akovari/hyperchains-whitepaper", "max_issues_repo_path": "security.tex", "max_line_length": 91, "max_stars_count": null, "max_stars_repo_head_hexsha": "fcde74d0f120bcaf9479d151c9f0a5bd855cba99", "max_stars_repo_licenses": [ "ISC" ], "max_stars_repo_name": "akovari/hyperchains-whitepaper", "max_stars_repo_path": "security.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 1427, "size": 6539 }
% $Id: SDTools.tex,v 1.1 2008/01/31 18:04:16 dconway Exp $ \chapter{GMAT Software Development Tools} \chapauthor{Darrel J. Conway}{Thinking Systems, Inc.} GMAT is a cross-platform mission analysis tool under development at Goddard Space Flight Center and Thinking Systems, Inc. The tool is being developed using open source principles, with initial implementations provided that run on 32-bit Windows XP, Linux, and the Macintosh (OS X). This appendix describes the build environment used by the development team on each of these platforms. The GMAT code is written using ANSI-standard C++, with a user interface developed using the wxWindows toolkit available from http://www.wxwidgets.org. Any compiler supporting these standards should work with the GMAT code base. The purpose of this document is to describe the tools that were actually used in the development process. Source code control is maintained using the Concurrent Versions System (CVS 1.11) running on a server at Goddard. Issues, bugs, and enhancements are tracked using Bugzilla 2.20 running on a server at Goddard. \section{Windows Build Environment} \begin{itemize} \item Compiler: gcc version 3.4.2 (mingw special) \item IDE Tool: Eclipse 3.1.1, with CDT 3.0.1 plug-in \item wxWindows Version: wxMSW 2.6.2 \end{itemize} On Windows, GMAT has also been built using the Dev-C++ environment. \section{Macintosh Build Environment} \begin{itemize} \item Compiler: gcc 4.0.1, XCode v. 2.2 \item IDE Tool: Eclipse 3.1.2, with CDT 3.0.1 plug-in \item wxWindows Version: wxMac 2.6.2 \end{itemize} \section{Linux Build Environment} GMAT is regularly built on two different Linux machines at Thinking Systems, one running Mandriva Linux, and the second running Ubuntu Linux. Both build environments are listed here. \paragraph{On Mandriva 2006} \begin{itemize} \item Compiler: gcc version 4.0.1 (4.0.1-5mdk for Mandriva Linux release 2006.0) \item IDE Tool: Eclipse 3.1.1, with CDT 3.0.1 plug-in \item wxWindows Version: wxGTK 2.6.2 \end{itemize} \paragraph{On Ubuntu 5.10, Breezy Badger} \begin{itemize} \item Compiler: gcc version 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9) \item IDE Tool: Eclipse 3.1.2, with CDT 3.0.2 plug-in \item wxWindows Version: wxGTK 2.6.2 \end{itemize}
{ "alphanum_fraction": 0.7605388961, "avg_line_length": 42.6111111111, "ext": "tex", "hexsha": "fa413a98baaab0acf41d53de15b3c679cc5af04a", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-12-09T07:06:55.000Z", "max_forks_repo_forks_event_min_datetime": "2019-10-13T10:26:49.000Z", "max_forks_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_forks_repo_licenses": [ "NASA-1.3" ], "max_forks_repo_name": "ddj116/gmat", "max_forks_repo_path": "doc/SystemDocs/ArchitecturalSpecification/SDTools.tex", "max_issues_count": 1, "max_issues_repo_head_hexsha": "39673be967d856f14616462fb6473b27b21b149f", "max_issues_repo_issues_event_max_datetime": "2018-03-20T20:11:26.000Z", "max_issues_repo_issues_event_min_datetime": "2018-03-15T08:58:37.000Z", "max_issues_repo_licenses": [ "NASA-1.3" ], "max_issues_repo_name": "ddj116/gmat", "max_issues_repo_path": "doc/SystemDocs/ArchitecturalSpecification/SDTools.tex", "max_line_length": 99, "max_stars_count": 2, "max_stars_repo_head_hexsha": "d6a5b1fed68c33b0c4b1cfbd1e25a71cdfb8f8f5", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Randl/GMAT", "max_stars_repo_path": "doc/SystemDocs/ArchitecturalSpecification/SDTools.tex", "max_stars_repo_stars_event_max_datetime": "2020-12-09T07:05:07.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-01T13:14:57.000Z", "num_tokens": 687, "size": 2301 }
\documentclass[10pt,letterpaper]{article}
\usepackage[margin=1in]{geometry}
\usepackage{setspace}
\usepackage{fancyhdr}
\usepackage{lastpage}
\pagestyle{fancyplain}

% Put watermark on
\usepackage{draftwatermark}
\SetWatermarkText{Draft}
\SetWatermarkScale{7}

\lhead{}
\chead{Central Massachusetts Amateur Radio Association}
\rhead{}

\lfoot{\texttt{https://github.com/mide/cmara-meeting-minutes/}}
\cfoot{}
\rfoot{Page \thepage\ of \pageref{LastPage}}

\begin{document}
\begin{center}
{\huge August 2018 Board of Directors Meeting}\\
\emph{of the}\\
{\Large Central Massachusetts Amateur Radio Association}\\
\emph{Notes by Dan Rau (\texttt{K1RAU}), prepared by Mark Ide (\texttt{W1IDE}), Secretary}
\end{center}

\section{Meeting Called to Order}
The CMARA August 2018 board meeting was called to order on August 21, 2018 at 6:55 PM by CMARA president Bob Peloquin (\texttt{W1TAB}).\\
This meeting was \textbf{NOT} held on the standard third Thursday. Instead, it was held on August 21, 2018 at Dan Rau's home.

\section{Attendance}
\subsection{Officers Present}
\begin{tabular}{|l|l|l|c|}
\hline
\textbf{Position} & \textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline
President & Bob Peloquin & \texttt{W1TAB} & Yes \\
Vice President & Brian Loverro & \texttt{K1BML} & No \\
Secretary & Mark Ide & \texttt{W1IDE} & No \\
Treasurer & Randolph Dore & \texttt{W4FEB} & No \\
Webmaster & Lyn Glagowski & \texttt{WB1CCL} & No \\
\hline
\end{tabular}

\subsection{Board of Directors Present}
\begin{tabular}{|l|l|c|}
\hline
\textbf{Name} & \textbf{Callsign} & \textbf{Present} \\ \hline
Adrian Zeffert & \texttt{AB2IX} & Yes \\ \hline
George Gumbrell & \texttt{KA3RLZ} & Yes \\ \hline
L. Greg Algieri & \texttt{WA1JXR} & Yes \\ \hline
Terry Glagowski & \texttt{W1TR} & No \\ \hline
Dan Rau & \texttt{K1RAU} & Yes \\ \hline
Scott Olsen & \texttt{KB1EZF} & No \\ \hline
\end{tabular}\\

% \subsection{Members Present}
% \texttt{WI1Y}, \texttt{WW2JS}

\subsection{Guests \& Visitors}
\begin{enumerate}
\item Patrick Faucher
\item Herb Gilbert (\texttt{KC1GIB})
\item Chris Wentworth
\end{enumerate}

\section{Primary Discussions}
\subsection{Planning for 2018/2019 Season}
\begin{itemize}
\item \textbf{September:}\\ Dan Pedtke on Impedance.
\item \textbf{October:}\\ Bob Peloquin on Inverter Generators
\item \textbf{November:}\\ Adrian Zeffert on 3-D printers. Nominations for officers.
\item \textbf{December:}\\ Christmas Party\\ Election of officers
\item \textbf{January:}\\ Open
\item \textbf{February:}\\ Open
\item \textbf{March:}\\ Mickey Westover on building an off-center dipole.
\item \textbf{April:}\\ Adrian Zeffert and the Club Swap Meet to benefit new hams
\item \textbf{May:}\\ Field Day Planning
\item \textbf{June:}\\ N1MM training for Field Day.
\end{itemize}

\subsection{Other Business}
Greg Algieri brought information from the trustees about the repeater equipment situation. The trustees will be ready to make a recommendation at either the September or the October meeting.\\

\noindent Preliminary cost estimates for the repeater controller, preamp, amplifier and antenna are between \$3,000 and \$4,000. Greg feels that unless we resolve the transmit antenna issue soon, a good fix for the transmit coverage is a 200-watt amplifier running at 100 watts. That should restore the transmit coverage to roughly the same contour as the receive coverage. There is now internet access at the W1BIM site, which could allow us to have telemetry and full multiple-access remote control.
\section{Next Month's Presentation}
Dan Pedtke on Impedance.

\section{Meeting Adjourned}
The meeting was adjourned on August 21, 2018 at 8:00 PM by CMARA president Bob Peloquin (\texttt{W1TAB}).
\end{document}
{ "alphanum_fraction": 0.7209850668, "avg_line_length": 34.3873873874, "ext": "tex", "hexsha": "5baa5e0ae323c5f8fb772f2197072cb6070e3ac9", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-17T09:20:26.000Z", "max_forks_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "cmara/meeting-minutes", "max_forks_repo_path": "minutes/2018-08-16-board-meeting.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "cmara/meeting-minutes", "max_issues_repo_path": "minutes/2018-08-16-board-meeting.tex", "max_line_length": 461, "max_stars_count": 1, "max_stars_repo_head_hexsha": "e1f7e3debca5145a668321f75d12ce3db418eb5c", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "cmara/meeting-minutes", "max_stars_repo_path": "minutes/2018-08-16-board-meeting.tex", "max_stars_repo_stars_event_max_datetime": "2020-01-27T17:33:16.000Z", "max_stars_repo_stars_event_min_datetime": "2020-01-27T17:33:16.000Z", "num_tokens": 1199, "size": 3817 }
\section{Material Testing Example}
{ "alphanum_fraction": 0.8529411765, "avg_line_length": 34, "ext": "tex", "hexsha": "4b8d8fc876492fda5a1bb2aa8febec1f7d8543b5", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_forks_event_min_datetime": "2020-06-29T23:14:09.000Z", "max_forks_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yetisir/up-scaling-dem-simulations", "max_forks_repo_path": "unused/section_materialTestingExample.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yetisir/up-scaling-dem-simulations", "max_issues_repo_path": "unused/section_materialTestingExample.tex", "max_line_length": 34, "max_stars_count": null, "max_stars_repo_head_hexsha": "9c9043effdb72a608ffec11726af97154751722e", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yetisir/up-scaling-dem-simulations", "max_stars_repo_path": "unused/section_materialTestingExample.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 7, "size": 34 }
% Author: Christian Ruppert <[email protected]> % $Id: insert.tex,v 1.1 2003/05/13 07:58:40 ruppert Exp $ % Copyright (C) 2003, Berlin University of Technology \documentclass{webpage} \begin{document} \title{MMTex insert samples} \subtitle{A sample of the insert command} \tableofcontents \section{The Source Code} \verbatim This is some \emph{normal} code, followed by the \emph{insert} command. \insert{insert.tex.code} \endverbatim \section{The Result} This is some \emph{normal} code, followed by the \emph{insert} command. \insert{insert.tex.code} \end{document}
{ "alphanum_fraction": 0.7491467577, "avg_line_length": 20.9285714286, "ext": "tex", "hexsha": "f4e1c3ee2ee5e1768ff6f719213fcf189502937f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "322f6772f5bf1ce42fc00d0c6f8c3eba27ecc010", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "TU-Berlin/Mumie", "max_forks_repo_path": "mmtex/samples/tex/insert.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "322f6772f5bf1ce42fc00d0c6f8c3eba27ecc010", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "TU-Berlin/Mumie", "max_issues_repo_path": "mmtex/samples/tex/insert.tex", "max_line_length": 71, "max_stars_count": null, "max_stars_repo_head_hexsha": "322f6772f5bf1ce42fc00d0c6f8c3eba27ecc010", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "TU-Berlin/Mumie", "max_stars_repo_path": "mmtex/samples/tex/insert.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 180, "size": 586 }
\documentclass[a4paper,onecolumn,11pt,accepted=2018-05-04]{quantumarticle} \usepackage[utf8]{inputenc} \pdfoutput=1 \usepackage[T1]{fontenc} \usepackage[UKenglish]{babel} %\usepackage{lmodern} \usepackage{hyperref} %[colorlinks] %\usepackage{url}Gleason-typequantum \usepackage{graphicx} \usepackage{amsmath,amssymb,amsthm} \usepackage{xspace} \usepackage{mathtools} \usepackage{dsfont} \usepackage{xcolor} \usepackage{paralist} \usepackage{array} \usepackage{lipsum} \usepackage{subfig} %\usepackage[top=2cm, bottom=2.5cm, left=3.5cm, right=2.5cm]{geometry} \usepackage{cite} %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%COMMANDS%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%% %define \newcommand{\defeq}{\vcentcolon=} %trace \DeclareMathOperator{\tr}{Tr} %Probabilities \DeclareMathOperator{\pr}{Pr} \DeclareMathOperator{\doo}{do} %span (linear space) \DeclareMathOperator{\Span}{Span} %identity matrix \newcommand{\id}{\mathds{1}} %bra-ket notation \newcommand{\bra}[1]{\left\langle #1 \right|} \newcommand{\ket}[1]{\left| #1 \right\rangle} \newcommand{\braket}[2]{\left\langle #1 \middle| #2 \right\rangle} \newcommand{\ketbra}[2]{\left|#1\middle\rangle\middle\langle#2\right|} %\newcommand{\proj}[1]{\left|#1\middle\rangle\middle\langle#1\right|} \newcommand{\proj}[1]{[#1]} %double ket notation \newcommand{\Bra}[1]{{ \langle \! \langle{#1}\vert }} \newcommand{\Ket}[1]{{ \vert {#1} \rangle \! \rangle}} \newcommand{\KetBra}[2]{{\Ket{#1}\!\Bra{#2} }} \newcommand{\BraKet}[2]{{\langle \! \langle {#1}\vert {#2} \rangle \! \rangle}} %\newcommand{\Proj}[1]{\left|#1\middle\rangle \!\rangle\middle\langle\! \langle#1\right|} \newcommand{\Proj}[1]{[[#1]]} %smaller overbar \newcommand{\overbar}[1]{\mkern 1.5mu\overline{\mkern-2.5mu#1\mkern-1.5mu}\mkern 1.5mu} %Theorem-like environments \newtheorem{theorem}{Theorem} \newtheorem*{theorem*}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{problem}{Problem} \newtheorem{assumption}{Assumption} %space of linear operators over a Hilbert space, to be used as \lin({\cal H}) \newcommand{\lin}{{\cal L}} %%%%%%%%%% %Local definitions for quantum causal models (some require \usepackage{xspace}) %Parents and children \newcommand{\pa}{P\!A} \newcommand{\ch}{C\!H} %parent space \newcommand{\ps}{P\!S} %variable associated with parent space (lower case) \newcommand{\psv}{ps} \newcommand{\opa}{S} \newcommand{\ech}{R} %\newcommand{\opa}{E\!P\!A} %\newcommand{\ech}{E\!C\!H} % revision tools %\usepackage{comment} % comment-out piece of text \usepackage[normalem]{ulem} %need for strikethrough (\sout{}) \usepackage{cancel} % strike-out in mathmodecyan \newcommand{\cross}[1]{\textcolor{cyan}{\sout{#1}}} \newcommand{\canc}[1]{\textcolor{cyan}{\cancel{#1}}} \newcommand{\comment}[1]{\textit{\small\textcolor{red}{ [Fabio: #1]}}} \newcommand{\Scomment}[1]{\textit{\small\textcolor{cyan}{ [Sally: #1]}}}\newcommand{\cyan}[1]{\textcolor{cyan}{#1}} \newcommand{\new}[1]{\textcolor{blue}{#1}} %%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \title{Causation does not explain contextuality} \author{Sally Shrapnel} \email{[email protected]} \orcid{0000-0001-8407-7176} \affiliation{Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4072, Australia} \author{Fabio Costa} \email{[email protected]} \orcid{0000-0002-6547-6005} \affiliation{Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, 
QLD 4072, Australia}%%% \date{21st~November 2018} \begin{abstract} Realist interpretations of quantum mechanics presuppose the existence of elements of reality that are independent of the actions used to reveal them. Such a view is challenged by several no-go theorems that show quantum correlations cannot be explained by non-contextual ontological models, where physical properties are assumed to exist prior to and independently of the act of measurement. However, all such contextuality proofs assume a traditional notion of causal structure, where causal influence flows from past to future according to ordinary dynamical laws. This leaves open the question of whether the apparent contextuality of quantum mechanics is simply the signature of some exotic causal structure, where the future might affect the past or distant systems might get correlated due to non-local constraints. Here we show that quantum predictions require a deeper form of contextuality: even allowing for arbitrary causal structure, no model can explain quantum correlations from non-contextual ontological properties of the world, be they initial states, dynamical laws, or global constraints. \end{abstract} \maketitle \section*{Introduction} The appeal of an operational physical theory is that it makes as few unwarranted assumptions about nature as possible. One simply assigns probabilities to experimental outcomes, conditioned on the list of experimental procedures required to realise these outcomes. Ideally, such operational theories are \emph{minimal}: procedures that cannot be statistically discriminated are given the same representation in the theory. Quantum mechanics is an example of such a minimal operational theory: all the statistically significant information about the preparation procedure is contained in the quantum state, and the probability of an event (labelled by a Positive Operator Valued Measure (POVM) element) does not depend on any other information regarding the manner in which the measurement was achieved (such as the full POVM). However, one of the most debated questions in the foundations of the theory is whether one can go beyond this statistical level and also provide an \emph{ontological} description of some actual state of affairs that occurs during each run of an experiment. That is, a statement about the world that tells us what is responsible for the observed experimental outcomes. The task of providing such an ontological model for quantum theory has proven to be exceedingly difficult. A plethora of no-go theorems exists that describe the various natural assumptions one must forgo in order to produce an ontological model that accords with experiment. One such caveat is \emph{non-contextuality}. Ultimately an \emph{apriori} assumption, non-contextual theories posit the existence of physical properties that do not depend on the way they are measured. There is a large literature discussing the various ways one may wish to cash out this notion more precisely. Broadly speaking, non-contextuality no-go theorems fall into two distinct categories. Kochen-Specker style proofs show that quantum measurements cannot be regarded as deterministically uncovering pre-existing, or ontic, properties of systems~\cite{kochen67, bell66, cabello08}. Spekkens style proofs, on the other hand, show that one cannot explain quantum statistics via ontological properties that mirror the context-independence seen at the operational level~\cite{spekkens05, Montina2011, kunjwal2016, Mazurek2016, Schmid2017}. 
While both approaches are well justified and have led to interesting and relevant results, our own definition of non-contextuality is more closely related to the latter. This particular view of non-contextuality can more broadly be seen as an analogue of the no fine-tuning argument from causal modelling~\cite{cavalcanti2017}, an analogue of Leibniz's principle of the Identity of Indiscernibles~\cite{spekkens05, kunjwal2016}, and a methodological assumption akin to Occam's razor. Non-contextuality no-go theorems are not merely of foundational interest but can also serve as security proofs for a range of simple cryptographic scenarios~\cite{chailloux2016, spekkens09}, can herald a quantum advantage for computation~\cite{Howard2014}, and also for state discrimination~\cite{Schmid2017}. Such results, however, require the assumption of a fixed background causal structure; at the very minimum, a single causal arrow from preparation to measurement. This leaves open the question of whether one can produce a non-contextual ontological model by allowing for a suitably exotic causal structure. Some authors attempt to explain quantum correlations by positing backwards-in-time causal influences~\cite{price2012, priceWharton2015, Evans01062013, Wharton2014, Aharonov2016, Leifer2016, Sutherland17}, while others claim it is the existence of non-local constraints that does the explanatory work~\cite{carati1999nonlocality, Weinstein2009}. The rationale in both cases is that non-contextuality could emerge naturally in such models: physical properties might well be ``real'' and ``counterfactually definite'', but depend on future or distant measurements because of some physically motivated---although radically novel---causal influence. Such proposals do not fit neatly within the classical causal modelling framework, and so are not ruled out by recent work in this direction~\cite{wood2015, cavalcanti2017}, nor by any of the existing no-go theorems. In this paper, we characterise a new ontological models framework to prove that even if one allows for \emph{arbitrary} causal structure, ontological models of quantum experiments are necessarily contextual. Crucially, what is contextual is not just the traditional notion of ``state'', but any supposedly objective feature of the theory, such as a dynamical law or boundary condition. Our finding suggests that \emph{any} model that posits unusual causal relations in the hope of saving ``reality'' will necessarily be contextual. Finally, this work also represents a possible approach to how we ought to think of the generalised quantum processes of recent work~\cite{gutoski06, chiribella08, Chiribella2008, chiribella09b, Bisio2011, Bisio2014, oreshkov12, modioperational2012, Leifer2013, Ringbauer2015, pollockcomplete2015, costa2016, Allen2016, Milz2016, shrapnel2017}. It is clear that any ontological reading of such processes will have to contend with the spectre of contextuality. The paper is organised as follows. In section~\ref{OntModels} we present the traditional ontological models framework and clarify the rationale behind retrocausal explanations of quantum statistics. In section~\ref{opPrimitives} we introduce and justify the four primitive elements required to define our operational model: local regions, local controllables, outcomes and an environment. In section~\ref{opEquiv} we define the three classes of operationally indistinguishable elements: events, instruments and processes. 
In Section~\ref{ontModel} we characterise instrument and process non-contextuality according to these equivalence classes, and provide a generalised framework for a non-contextual ontological model. As this is the conceptual heart of our result, in Section~\ref{examples} we clarify the scope and applicability of this framework via three examples. Using standard quantum theory and results from previous work~\cite{oreshkov12, shrapnel2017}, in section~\ref{quantModel} we characterise an operational model that accords with the experimental predictions of quantum theory. Section~\ref{contradiction} puts these elements together to prove that one cannot produce an ontological model that is both process and instrument non-contextual and accords with the predictions of quantum theory. In Section~\ref{extension} we consider the constraints imposed on ontological models when one only assumes instrument non-contextuality. We finish with a discussion. \section{An introduction to ontological models and retrocausal approaches.}\label{OntModels} The ontological models framework assumes that systems possess well defined properties at all times~\cite{harrigan10,leifer2014quantum, Leifer2016}. The starting point is the very general claim that all experiments can be modelled operationally as sets of preparations, followed by transformations, followed by measurements, all performed upon some physical system. The set of all possible preparations, transformations and measurements is regarded as capturing the entire possibility space of any experiment and can be associated with the operational predictions of a particular theory. For example, an experiment can involve choices of possible preparation settings (labelled by the random variable $P$) and choices of possible measurement settings ($M$) with associated outcomes ($a$).\footnote{For this example we assume that any transformation between preparation and measurement is trivial.} An operational model then predicts probabilities for outcomes for all possible combinations of preparations and measurements: \begin{equation} \forall a,M,P:~p(a|M,P). \end{equation} Such probabilistic predictions should coincide with the operational predictions of the theory in question. For example, in the case of quantum theory each preparation choice is modelled as a density operator ($\rho_P$) on a Hilbert space associated to a quantum system $(\mathcal{H}_A)$. Similarly, each measurement choice $M$ is associated with a positive operator valued measure $ \{E_{a|M} \}$, whose elements correspond to particular outcomes $a$. The probabilities predicted by the theory are: \begin{equation} p(a |P, M) = \tr( \rho_P E_{a|M}). \end{equation} An \emph{ontological extension} of such an operational model further assumes that the system possesses well defined ontological properties between the time of preparation and measurement. Such properties are collectively known as the "ontic state" and typically denoted by $\lambda$. In the ontological models framework each preparation procedure $P$ is presumed to select a particular ontic state $\lambda$ according to a fixed probability distribution: $\mu_P(\lambda)$, and each measurement choice is presumed to output a particular outcome according to a fixed response function: $ \xi_{a|M}(\lambda)$. 
That is, (i) every preparation $P$ can be associated to a normalised probability distribution over the ontic state space $\mu_P(\lambda)$, such that $\int\mu_P (\lambda)d\lambda = 1$, and (ii) every measurement $M$, with outcomes $a$, can be associated to a set of response functions $\{\xi_{a|M}(\lambda)\}$ over the ontic states, satisfying $\sum_a \xi_{a|M}(\lambda) = 1$ for all $\lambda$. As the ontic states are not directly observed, the operational statistics are obtained via marginalisation and we have: \begin{equation}\label{OntMod} \forall a,M,P:~p(a|M,P) = \int \xi_{a|M}(\lambda) \mu_P(\lambda) d \lambda, \end{equation} where for quantum theory: \begin{equation} \forall a,M,P:~\tr( \rho_P E_{a|M}) = \int \xi_{a|M}(\lambda) \mu_P(\lambda) d \lambda. \end{equation} The ontological models framework has been used in numerous works to clarify the manner in which quantum theory should be considered contextual~\cite{spekkens05, spekkens08, Montina2011, kunjwal2016, Mazurek2016, Schmid2017}. The key assumption is that one can infer ontological equivalence from operational equivalence: for example, if two preparation procedures produce the same distributions over outcomes for all possible measurements, then any differences between them do not play a role in determining the ontic states of the system in question. Thus, the justification for why one can't distinguish between the two equivalent preparations at the \emph{operational} level is because there is no difference between the role the preparations play at the \emph{ontological} level. The view is that each use of a preparation device selects one from a set of possible ontic states according to exactly the same probability distribution in each run. Formally, if $\forall M$ and outcome $a$ \begin{equation} p(a|M,P_1) = p(a|M,P_2), \end{equation} % then both preparations specify the same distribution over ontic states: \begin{equation} \mu_{P_1}(\lambda) = \mu_{P_2} (\lambda). \end{equation} Similarly, if two measurements result in the same outcome statistics for all possible preparations then both measurements are represented by the same fixed response function. Formally, if $\forall P$ and outcomes $a$ \begin{equation} p(a|M_1,P) = p(a|M_2,P), \end{equation} % then both measurements specify the same distribution over ontic states: \begin{equation} \xi_{a|M_1}(\lambda) = \xi_{a|M_2}(\lambda). \end{equation} Thus, in a non-contextual ontological model one can account for operational statistics according to Eq.~\ref{OntMod}. Implicit in this model is the belief that the ontic state screens off the preparations from the measurements (a property also known as $\lambda$-mediation~\cite{Leifer2016}). In~\cite{spekkens08} it was shown that one can use the assumptions above to derive a contextuality proof: no model of the form of Eq.~\ref{OntMod} can explain the statistics of quantum theory. In this approach to contextuality~\cite{spekkens05, spekkens08, Montina2011, kunjwal2016, Mazurek2016, Schmid2017}, one assumes that ontic states determine correlations according to some fixed causal order. Formally, this is captured by Eq.~\ref{OntMod}: the preparation is assumed to cause the selection of a particular ontic state $\lambda$ according to a fixed distribution $\mu(\lambda)$, and the measurement choice does not alter this value but merely determines the outcome probability, also according to a fixed probability distribution. 
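As a simple and standard qubit illustration of such an operational equivalence (the labels $P_1$ and $P_2$ are introduced here only for concreteness), consider a preparation $P_1$ that mixes $\ket{0}$ and $\ket{1}$ with equal probability, and a preparation $P_2$ that mixes $\ket{+}$ and $\ket{-}$ with equal probability. Both are represented by the same density operator,
\begin{equation}
\rho_{P_1} = \tfrac{1}{2}\ketbra{0}{0} + \tfrac{1}{2}\ketbra{1}{1} = \tfrac{\id}{2} = \tfrac{1}{2}\ketbra{+}{+} + \tfrac{1}{2}\ketbra{-}{-} = \rho_{P_2},
\end{equation}
so that $p(a|M,P_1) = \tr\left(\tfrac{\id}{2} E_{a|M}\right) = p(a|M,P_2)$ for every measurement $M$ and outcome $a$. A non-contextual ontological model must therefore assign the same distribution over ontic states to both procedures, $\mu_{P_1}(\lambda) = \mu_{P_2}(\lambda)$, even though they are realised by physically distinct mixing procedures.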
This leaves open the question of whether one can explain the contextuality of quantum theory by postulating an alternative, retrocausal ontology: If the future can affect the past, then the state $\lambda$ could depend on the measurement setting $M$, and Eq.~\eqref{OntMod} would not be justified. Generally speaking, retrocausal approaches posit the existence of backwards-in-time causal influences to explain quantum correlations. The stated appeal of such approaches is that the consequent explanations retain some element of our classical notion of reality: local causality, determinate ontology, and counterfactual definiteness. For example, %the two state vector formalism of Aharanov and co-authors includes a backwards evolving quantum state that depends directly on the choice of future measurement. This "renders future boundary conditions as the missing source of possible causes" ~\cite{Aharonov2016}. Price and Wharton explain Bell correlations by including a "zig-zag" of causal influence, passing via hidden variables that travel backwards in time from one measurement event to the source and then forwards in time to the distant measurement event~\cite{priceWharton2015}. %For Sutherland, the hidden variables in question are particle positions: he describes a theory in which all particle positions are dependent on both past and future measurement events. These approaches all have in common the idea that future measurement events can influence past ones via a causal influence that propagates backwards through space-time. Although not explicitly stated, there is also one further assumption underlying these approaches: such causal influences follow some kind of law-like behaviour. That is, one would not expect the \emph{rules} by which such retrocausal influences propagate, or backward-in-time states evolve, to be completely ad hoc. As stated in the introduction, we follow the Spekkens-style approach and also define non-contextuality in terms of operational equivalences. Where we depart however, is in our particular choice of operational primitives. The usual primitives of preparations, transformations and measurements do not permit one to consider causal scenarios that move beyond the most simple causally ordered situations; in these models the notion of reality is defined in terms of properties that exist \emph{before} a measurement takes place. The underlying ontology is therefore assumed to follow some ordinary causal structure, akin to the directed acyclic graphs of causal models~\cite{Pearlbook}. In our model we wish to be able to consider more general situations, for example where we include \emph{any} possible global dynamics, causal structure, space-time geometry or global constraints. In order to provide this alternative perspective we consider the primitive operational elements to be sets of labelled local regions, locally controllable properties and an environment. \section{Operational primitives}\label{opPrimitives} We define an operational model of any experiment to consist of \emph{local labelled regions} ($A, B, C,\dots$) where one can perform \emph{controlled operations} that can be associated with \emph{outcomes}. The regions align with concepts such as local laboratories, communicating parties (e.g. Alice and Bob) and local space-time regions (similar, e.g., to the operational framework of~\cite{oreshkov15}). There is no \emph{apriori} assumption that these regions be "fixed" or preassigned in some manner; they are simply labels for the locus of a set of controlled operations. 
Controlled operations generalise the notion of preparations, measurements, transformations, and can include the addition or subtraction of ancillary systems. Examples include the orientation of a wave-plate, the instigation of a microwave pulse, and the use of a photo-detector. We call such local operations the \emph{local controllables}. Each local controllable is represented as $\tilde{\mathcal{\mathfrak{I}}}^X$, where the superscript $X= A, B,\dots$ labels the associated region. We consider outcomes as labels associated to the result of choosing a particular local controllable; the outcomes for region $A$ are labelled $a=0,1,2,\dots$. Examples include the number of detected photons, the result of a spin measurement or the time of arrival of a photon. We allow the outcomes to have infinite possible values as this enables us to use the same variable for local controllables that have different numbers of possible outcomes. In general however, we expect that only a finite number of such outcomes is associated with non-zero probability. Finally, we consider all the possible properties that could account for correlations between outcomes in the local regions. These include any global properties, initial states, connecting mechanisms, causal influence, or global dynamics. We call this the \emph{environment}, $\tilde{W}$. Note that in our operational model environments and local controllables are by construction always uncorrelated. That is, if we see a property change in relation to a choice of local controllable we label this as an \emph{outcome} and do not classify it as part of the environment. \begin{figure}[ht]% \centering \includegraphics[width=0.7\columnwidth]{figure1}% \caption{\textbf{Operational primitives.}}% \label{region}% \end{figure} We can thus describe an experiment by a set of regions, outcomes, local controllables and an environment. If we consider a particular run of an experiment there will in general be a collection of outcomes that occur, one for each local region. One can associate a joint probability to this set of outcomes and empirically verify probability assignments for each possible set of outcomes. An operational model for such an experiment allows one to calculate expected probabilities: \begin{equation} p(a, b, c,\dots| \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W}). \end{equation} The operational model thus specifies a distribution over outcomes for local controllables $\tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B, \dots$, and a shared environment $\tilde{W}$, Fig.~\ref{region}. Note that it should be possible to have ignorance over part of the environment and characterise this accordingly using the operational model. 
More explicitly, if $\tilde{\xi}$ represents the part of the environment about which we are ignorant, then the operational probabilities given the known part of the environment are obtained by marginalising over $\tilde{\xi}$: \begin{align}\label{marginal} p(a, b, c,\dots| \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W}) =& \int d\tilde{\xi} p(a, b, c,\dots, \tilde{\xi}| \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W})\\ \nonumber =& \int d\tilde{\xi} p(a, b, c,\dots| \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W},\tilde{\xi}) p( \tilde{\xi}| \tilde{W}), \end{align} where the second equality comes from the assumption that the local controllables are uncorrelated with the environment. As a concrete example, $\tilde{W}$ can describe the axis along which a spin-$\frac{1}{2}$ particle is prepared, while $\tilde{\xi}$ represents whether the spin is prepared aligned or anti-aligned with that axis.\footnote{Here (and again in Section~\ref{quantModel}), we take for simplicity a scenario with a single region where a measurement is performed, so the specification of a process is equivalent to the specification of a state. More generally, the variables $W$ and $\xi$ could describe quantum channels, quantum networks, or more general quantum processes.} The marginal \eqref{marginal} then describes a scenario where there is some probabilistic uncertainty of the spin's direction i.e. which value of $\xi$ occurs in any given run. Note that, for the particular case $p( \tilde{\xi}| \tilde{W})=\frac{1}{2}$, we obtain the maximally mixed state irrespective of the axis, making the variable $\tilde{W}$ redundant. Such redundancies can be taken into account via operational equivalences. \section{Operational equivalences}\label{opEquiv} We next characterise the appropriate operational equivalences in order to define our ontological model. Notationally, we omit the `tilde' for each equivalence class. \subsection{Events} We say that a pair composed of an outcome and the respective local controllable $(a, \tilde{{\mathfrak{I}}}^A)$ is operationally equivalent to the pair $(a', \tilde{{\mathfrak{I}}}'^A)$ if the joint probabilities for $a, b, c,\dots$ and $a', b, c, \dots$ are the same for all possible outcomes and local controllables in the other regions $B, C,\dots,$ and for all environments $\tilde{W}$. \begin{equation} p(a,b,c,\dots | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W}) = p(a',b,c,\dots | \tilde{\mathcal{\mathfrak{I}}}'^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W}), \end{equation} \begin{equation*} \forall (b,c,\dots,\tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W}). \end{equation*} We denote an equivalence class of such pairs of outcomes and local controllables as an \emph{event}: \begin{equation}\label{events} M^A = [(a, \tilde{\mathcal{\mathfrak{I}}}^A)]. 
\end{equation} \subsection{Instruments} We define an instrument as the list of \emph{possible events} for a local controllable $ \tilde{\mathcal{\mathfrak{I}}}^A $, where an event $M^A = [(a, \tilde{\mathcal{\mathfrak{I}}}^A)]$ is possible for $\tilde{\mathcal{\mathfrak{I}}^A}$ if \begin{equation} p(a,b,c,\dots | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W}) \neq 0, \end{equation} for some \begin{equation*} (b,c,\dots,\tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, \tilde{W}). \end{equation*} We say that $\tilde{\mathcal{\mathfrak{I}}}^A$ is equivalent to $\tilde{\mathcal{\mathfrak{I}}}'^A$ if they define the same list of possible events and we denote the equivalence class ${\mathcal{\mathfrak{I}}}^A := [\tilde{\mathcal{\mathfrak{I}}}^A] \equiv \{M_1^A, \dots, M_n^A\}$. Note that our definition allows distinct instruments to share one or more events. Note also, our definition implies that the probability for an event doesn't depend on the particular instrument $\mathcal{\mathfrak{I}}$, once we assume the event is possible given the instrument. This property we call \emph{operational instrument equivalence}.\footnote{In other work, where we are not concerned with the possibility of non-contextual hidden variable theories, we refer to this property as instrument non-contextuality~\cite{shrapnel2017}. Here we reserve the term non-contextuality to refer to an ontological model.} \subsection{Process} The process captures those physical features responsible for generating the joint statistics for a set of events, independently of the choice of local instruments. A process is defined as an equivalence class of environments, $W:= [\tilde{W}]$, where $\tilde{W}$ is equivalent to $\tilde{W}' $, if \begin{equation} p(a,b,c,\dots | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots,\tilde{W}) = p(a,b,c,\dots | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W'}), \end{equation} \begin{equation*} \forall (a, b,c,\dots,\tilde{\mathcal{\mathfrak{I}}}^B, \tilde{\mathcal{\mathfrak{I}}}^C,\dots, ). \end{equation*} A simple example is the spatio-temporal ordering of regions. It is clear that the operational statistics of events in regions A and B can be different for the following two causal orderings: (i) A is before B, (ii) B is before A; thus the respective environments, $\tilde{W}_{(i)}$ and $\tilde{W}_{(ii)}$, will not be equivalent. On the other hand, for certain experiments we would not expect any difference in statistics for a simple rotation of the whole experiment by 45 degrees; these two environments will be represented by the same process $W$. The above equivalences allow us to define a joint probability distribution over the space of \emph{events} (rather than outcomes) conditioned on \emph{instruments} (rather than local controllables) and the \emph{process} (rather than the environment). As discussed above, this distribution satisfies operational instrument equivalence, which means that the joint probability for a set of events is either zero or independent of the respective instruments. 
Therefore, it can be expressed in terms of a \emph{frame function} $f_{W}$ that maps events to probabilities and is normalised for each instrument: \begin{equation} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) = f_W(M^A, M^B,\dots) \prod_{X=A,B,\dots} \chi_{\mathfrak{I}^X}(M^X), \end{equation} where, for a set $S$, $\chi_S$ is the indicator function, $\chi_S(s)=1$ for $s\in S$ and $\chi_S(s)=0$ for $s\not\in S$. Note that the indicator functions are necessary to make the whole expression a valid probability distribution, normalised over the \emph{entire} space of events. Furthermore, and in contrast to similar expressions involving POVMs, the dependency on the instruments is crucial to allow for causal influence across the regions: Integrating over the events of, say, region $A$, can result in a marginal distribution that still depends on $A$'s instrument and displays signalling from $A$ to other regions. However, the fact that the dependency on the instruments is solely through the indicator functions tells us that the causal relations can be attributed to the particular events realised in each experimental run, rather than to the whole instruments (which include the specification of events that did not happen). In other words, the event ``screens off'' the instrument: once the event in a local region is known, further knowledge of the instrument does not allow for any better prediction about events in other regions. \section{Ontological model}\label{ontModel} The purpose of an ontological model is to introduce possible elements of reality. Typically, one assumes that the ontology is encoded in a ``state'', representing the physical properties of a system at a given time. Here we shift the focus from states to more general properties of the environment that are responsible for mediating correlations between regions. We represent the collection of all such properties by a single variable $\omega$, named the \emph{ontic process}. We wish to clarify at this point that our ontic process captures the physical properties of the world that remain invariant under our local operations. That is, although we allow local properties to \emph{change} under specific operations, we wish our \emph{ontic process} to capture those aspects of reality that are independent of this probing. The interpretation of ontic processes and the relation with the usual notion of ontic states can be seen via the examples of the following section. Our ontological model specifies a joint probability for a set of outcomes, one at each local region, given the ontic process, the environment, and the set of local controllables. This joint probability reduces to the operational joint probability when the value of the ontic process is unknown: \begin{equation} p(a,b,c,\dots, | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W}) = \int d\omega p(a,b,c,\dots, \omega | \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots\tilde{W}). 
\end{equation} There are three natural assumptions one might require of an ontological model defined according to these operational equivalences: \begin{assumption} \textbf{$\omega$-mediation.} The ontic process mediates all the correlations between regions, thus $\omega$ screens off outcomes from the environment, and we have: \begin{equation} p(a,b,c,\dots| \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots,\tilde{W}) = \int d\omega p(a,b,c,\dots | \omega, \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots) p(\omega|\tilde{W}). \end{equation} \end{assumption} \begin{assumption} \textbf{Instrument non-contextuality.} Operationally indistinguishable pairs of outcomes and local controllables should remain indistinguishable at the ontological level. That is, for operationally equivalent pairs $(a, \tilde{\mathcal{\mathfrak{I}}}^A), (a', \tilde{\mathcal{\mathfrak{I}}}'^A)$, \begin{equation} p(a,b,c,\dots | \omega, \tilde{\mathcal{\mathfrak{I}}}^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots) = p(a',b,c,\dots | \omega, \tilde{\mathcal{\mathfrak{I}}}'^A, \tilde{\mathcal{\mathfrak{I}}}^B,\dots), \end{equation} \begin{equation*} \forall (b,c,\dots,\tilde{\mathcal{\mathfrak{I}}}^B,\dots), \forall \omega. \end{equation*} This means that we can define a probability distribution on the space of events, conditioned on instruments and on the ontic process, in terms of a \emph{frame function} $f_\omega$, such that: \begin{equation} p(M^A, M^B,\dots| \omega,\mathfrak{I}^A, \mathfrak{I}^B,\dots) = \prod_{X} \chi_{\mathfrak{I}^X}(M^X) f_\omega(M^A, M^B\dots), \end{equation} where $\chi$ is the indicator function, $\chi_X(x)=1$ for $x\in X$ and $\chi_X(x)=0$ for $x\not\in X$, and $f_\omega$ maps events to probabilities: \begin{equation} f_\omega(M^A, M^B, \dots) \in [0,1], \end{equation} and is normalised for each set of events that corresponds to a particular instrument: \begin{equation} {\sum_{\substack{M^A \in \mathfrak{I}^A\\{M^B \in \mathfrak{I}^B}\\{M^C \in \mathfrak{I}^C}\\\dots}}}f_\omega(M^A, M^B,M^C, \dots) = 1. \end{equation} \end{assumption} \begin{assumption} \textbf{Process non-contextuality.} For operationally equivalent processes $\tilde{W}, \tilde{W}'$ the assumption of process non-contextuality implies: \begin{equation} p(\omega|\tilde{W}) = p(\omega|\tilde{W}'), \end{equation} and we can define a function $g_W(\omega)$ that maps ontic processes to probabilities, given each process $W$: \begin{equation} g_W(\omega) = p(\omega|\tilde{W}),\quad W=\left[\tilde{W}\right], \end{equation} which is normalised over the ontic processes: \begin{equation} \int d\omega~g_W(\omega) = 1. \end{equation} \end{assumption} For an ontological model that satisfies the above three assumptions, the operational probability can now be expressed in terms of events, instruments and processes as: \begin{equation} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) = \prod_{X=A,B,\dots} \chi_{\mathfrak{I}^X}(M^X) \int d\omega~f_\omega(M^A, M^B, \dots) g_W(\omega). \end{equation} Although ontic states, as they are usually understood, are not represented explicitly in our framework, they are not excluded. In the following section we present three examples to illustrate how such ontic states, with or without retrocausality, can be represented in our model.
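As a minimal illustration of this structure, consider the following sketch (in Python), which assumes a finite toy model with two regions, one instrument per region, and arbitrarily chosen numbers: it encodes a frame function $f_\omega$ for each of two ontic processes and a distribution $g_W$, and recovers the operational probabilities through the decomposition above.
\begin{verbatim}
import itertools

# Hypothetical labels: one instrument per region, each with two possible events.
instrument_A = {"IA": ["a0", "a1"]}
instrument_B = {"IB": ["b0", "b1"]}

# Each ontic process omega fixes a frame function f_omega: a probability for
# every joint event, normalised within each choice of instruments.
f = {
    "omega0": {("a0", "b0"): 1.0, ("a0", "b1"): 0.0,
               ("a1", "b0"): 0.0, ("a1", "b1"): 0.0},
    "omega1": {("a0", "b0"): 0.0, ("a0", "b1"): 0.0,
               ("a1", "b0"): 0.0, ("a1", "b1"): 1.0},
}

# g_W: distribution over ontic processes fixed by the operational process W.
g_W = {"omega0": 0.25, "omega1": 0.75}

def p_operational(M_A, M_B, IA="IA", IB="IB"):
    """Indicator functions for the chosen instruments times the
    omega-average of the ontic frame function."""
    if M_A not in instrument_A[IA] or M_B not in instrument_B[IB]:
        return 0.0                       # the indicator functions chi
    return sum(g_W[w] * f[w][(M_A, M_B)] for w in g_W)

total = 0.0
for M_A, M_B in itertools.product(instrument_A["IA"], instrument_B["IB"]):
    total += p_operational(M_A, M_B)
    print(M_A, M_B, p_operational(M_A, M_B))
print("normalisation:", total)           # -> 1.0
\end{verbatim}
In a sketch of this form the instruments enter only through the indicator functions, so instrument non-contextuality holds by construction; the non-trivial question, addressed below, is whether such a model can reproduce quantum statistics.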
\section{Examples}\label{examples} \subsection{Deterministic, classical models} \subsubsection{Causally-ordered models} As a first example, let us consider a classical, deterministic scenario (without retrocausality) with two regions, $A$ in the past of $B$, each delimited by a past and future space-like boundary, see Fig.~\ref{figure2a}. For a classical system, we can assign input states $\lambda_I^A$ and $\lambda_I^B$ to the past boundaries of $A$ and $B$, respectively, and output states $\lambda_O^A$ and $\lambda_O^B$ to the respective future boundaries. As measurements can be performed without disturbance on a classical system, we associate the input state in each region with the respective measurement outcome: $a\equiv \lambda_I^A$ and $b \equiv \lambda_I^B$. As local controllables we take deterministic local operations, defined as functions $f^X$ that map the input state of each region to the corresponding output: \begin{equation} \lambda_I^X\mapsto\lambda_O^X=f^X\left(\lambda_I^X\right), \end{equation} where $X$ denotes the respective local region, $A$ or $B$. Assuming ordinary dynamical laws, the input state at $B$ can depend on the output at $A$ through some function: \begin{equation}\lambda_O^A \mapsto \lambda_I^B = w^B\left(\lambda_O^A\right). \end{equation} The input state at $A$, on the other hand, does not depend on $B$, and thus has to be specified as an independent environment variable. The ontic process for this model is thus identified with the pair \begin{equation} \omega = \left(\lambda^A_I, w^B\right). \end{equation} Indeed, knowing $\omega$ and the choice of local operations is sufficient to fully determine the measured outcomes: \begin{equation} a=\lambda^A_I, \quad b= w^B\left(f^A\left(\lambda^A_I\right)\right). \end{equation} As the model is fully deterministic, and we have not introduced any redundant variables, there are no non-trivial equivalence classes. Explicitly, an event in region $A$ (and similarly for $B$) is given by the pair $\left(a,f^A\right)$, or equivalently by the input-output pair \begin{equation} \lambda^A := \left(\lambda^A_I,\lambda^A_O = f^A\left(\lambda^A_I\right)\right), \end{equation} while the instrument is given by the collection of events corresponding to a choice of operation, \begin{equation} \mathfrak{I}^A = \left\{\left(\lambda_I^A,\,f^A\left(\lambda_I^A\right)\right)\right\}_{\lambda_I^A}, \end{equation} which is just to say that the instrument can be identified with the function $f^A$. We see in this example that the ontology, as traditionally understood, lies in the event variables $\lambda$. These variables are \emph{not} independent of the local controllables, because the event at $B$ can depend on the operation performed at $A$. However, there is still an aspect of the ontology that does not depend on the operations: the initial state $\lambda^A_I$ and the functional relation $w^B$. It is this invariant aspect of the ontology that we call a process. \begin{figure}[ht]% \subfloat[\label{figure2a}]{ \includegraphics[width=0.5\columnwidth]{figure2a}}% \quad \subfloat[\label{figure2c}]{ \includegraphics[width=0.5\columnwidth]{figure2b}% } \caption{\textbf{Classical process with ontological interpretation.}{ (a) We assign input states $\lambda_I^A$ and $\lambda_I^B$ to the past boundaries of $A$ and $B$, respectively, and output states $\lambda_O^A$ and $\lambda_O^B$ to the respective future boundaries. A deterministic local operation is a function $f^X$ that maps the input state of each region to the corresponding output.
(b) An example of an ontic process is one describing classical closed time-like curves, defined by a pair of functions $\omega=\left(w^A, w^B\right)$, where $\lambda_O^B \mapsto w^A\left(\lambda_O^B\right) = \lambda_I^A$ and similarly for $w^B$. }} \end{figure} % \subsubsection{Time-travelling classical systems} General Relativity allows for space-time geometries with closed time-like curves, where a system can travel back in time and interact with its past self \cite{morris1988wormholes}, thus providing physically-motivated examples of scenarios that defy ordinary forward causality. Notably, qualitative analogies between quantum phenomena and classical time-travelling systems have been suggested~\cite{Durand2002}, making the latter an interesting test-bed for generalised ontological models. The example in the previous subsection can be readily generalised to a deterministic model of classical systems near closed time-like curves by allowing the input state at $A$ to depend on $B$ through some function $\lambda_O^B \mapsto \lambda_I^A = w^A\left(\lambda_O^B\right)$. The process is now given by two functions, $\omega\equiv\left(w^A, w^B\right)$, Fig.~\ref{figure2c}, with the causally-ordered case recovered when one of the two is a constant. Compatibility with arbitrary local operations imposes constraints on the functions $w^A, w^B$ and, in the two-region case, it turns out that one of them has in fact to be constant~\cite{Baumeler2016, baumeler2017reversible}. However, for three or more regions, it is possible to find deterministic processes, with no constant component, that are still consistent with arbitrary local operations\footnote{The incompatibility of such processes with an underlying causal order can be demonstrated rigorously by showing they can be used to violate causal inequalities~\cite{baumeler14}, device-independent constraints on probabilities imposed by a definite causal order~\cite{oreshkov12, Branciard2016}.}. Also in this case, the observed outcomes are fully determined once the process and the local operations are specified, as the unique fixed points $a\equiv \lambda_I^A, b\equiv \lambda_I^B,\dots$ of the function obtained by composing the process $\omega=\left(w^A, w^B,\dots\right)$ with the operations $\left(f^A,f^B,\dots\right)$ (see Ref.~\cite{baumeler2017reversible} for more details). Crucially, in this case the events in each region can depend on the choice of operation in all regions, $\lambda^A=\lambda^A(f^A,f^B,\dots)$. Thus, from the perspective of ordinary ontological models, time-travelling systems appear contextual, since it is impossible to assign a ``state'' to any region independently of the operations. Nonetheless, the relation between events, captured by the process, does not depend on the operations. Thus, following the terminology introduced here, models such as the above are both instrument and process non-contextual. (As in the previous causally-ordered example, there are no non-trivial equivalence classes, so non-contextuality is straightforward.) More general models of classical closed time-like curves might impose restrictions on the accessible local operations\footnote{A constraint on the accessible operations is often invoked to solve ``paradoxes'', such as a time-traveller killing their past self. Although classical studies of closed time-like curves do not support the need for this type of restriction~\cite{Friedman:1990ja, Echeverria:1991ko, Lossev1992, Novikov1992, Mikheeva1993}, it might be necessary in a general theory.
Such a restriction on an agent's actions is sometimes interpreted as a violation of ``free will''. This worry is however misplaced, since an agent can still be (or fail to be) free to perform all the physically available operations. A different set of operations would simply represent a deviation from classical physics in the local region where the agent acts.}. Even more generally, one can consider models where instruments are not associated with local input-to-output functions but with more general sets of input-output pairs, $\mathfrak{I}^A=\Lambda^A\subsetneq \Lambda^A_I\times\Lambda^A_O$, where $\Lambda^A_{I (O)}$ is the state space associated with the past (future) boundary of the local region. In such models, a choice of instrument selects which pairs of input-output states are \emph{possible}, while a deterministic process would determine, given all choices of instruments, which pairs are actually realised. Thus, in such models both the state in the past and the state in the future of a local region depend on the choice of instrument, so such models are again necessarily contextual from the point of view of traditional ontological models. Yet, they remain instrument and process non-contextual as long as deterministic processes are considered. In the above deterministic examples, $\omega$-mediation is satisfied trivially, because ontic and operational processes coincide. This can be generalised to situations where we have only partial knowledge about the environment. For example, we might not have full knowledge of the initial state, but only know the temperature $T$ of a thermal bath from which the state is extracted; or the system might get coupled to some external environment during the evolution from one region to another. In all cases, we end up with partial knowledge of the ontic process, expressed by some probability $p(\omega|W)$, where $W$ represents all relevant accessible information about the environment (the temperature of the bath or other noise parameters). The resulting probabilistic operational model naturally satisfies the property of $\omega$-mediation, because knowing the temperature or noise parameters does not provide more information than already encoded in the ontic process, namely in the underlying microstates and functional relations. Note also that our construction of an ontological model respects the mobility of the boundary between local instruments and processes that one sees in ordinary applications of quantum theory. As a simple example, consider a preparation $P$ of a quantum system, followed by a measurement $M$. This can be modelled in three different ways: (i) with $P$ as part of the environment $\tilde{W}$, and $M$ as an instrument associated to a single local region, (ii) with $P$ and $M$ as instruments in two distinct local regions, and $\tilde{W}$ capturing both a channel between preparation and measurement and any additional information about the environment, or (iii) with both $P$ and $M$ characterising the instruments in a single local region and all other information about the environment modelled as $\tilde{W}$. For classical processes characterised as causal models, such a shift in perspective is formalised by the notion of ``latent variables''~\cite{Pearlbook}. An analogous notion of ``latent laboratories'' exists for quantum processes characterised as quantum causal models, and this formal structure likewise characterises the mobility of the boundary to which we refer~\cite{costa2016}.
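The deterministic models of the previous two subsections can also be spelled out numerically. The sketch below (in Python) assumes single-bit state spaces and arbitrarily chosen functions $w^B$, $f^A$, $f^B$; it computes the events for a causally-ordered two-region process and then recovers the same events from the fixed-point characterisation, with $w^A$ taken constant, as required in the two-region case.
\begin{verbatim}
# Two regions, one bit per boundary; all functions below are hypothetical choices.

# Causally ordered process: an initial state for A plus a "wire" from A's output
# to B's input, omega = (lambda_I^A, w^B).
lambda_in_A = 0
def w_B(x):            # channel from A to B: a bit flip
    return x ^ 1

# Freely chosen local operations (deterministic instruments).
def f_A(x):            # A flips its bit
    return x ^ 1
def f_B(x):            # B does nothing
    return x

a = lambda_in_A                    # outcome in A is its input state
b = w_B(f_A(lambda_in_A))          # b = w^B(f^A(lambda_I^A))
print("causally ordered:", (a, b))

# Fixed-point form used for time-travelling processes: here w^A is constant,
# which reproduces the causal case; a non-constant w^A is only consistent with
# arbitrary operations when there are three or more regions.
def w_A(y):
    return lambda_in_A

def fixed_point():
    a, b = 0, 0
    for _ in range(10):            # iterate the composed map until it settles
        a, b = w_A(f_B(b)), w_B(f_A(a))
    return a, b

print("fixed point:", fixed_point())   # same events as the causal computation
\end{verbatim}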
\subsubsection{All-at-once stochastic models} In the above examples of time-travelling systems, the ontic process (or at least certain aspects of it) can be understood as describing the dynamical evolution of systems between regions. Some retrocausal approaches attempt to provide an ontology for quantum mechanics that does not rely on any dynamical process; rather, one should consider all relevant events in space-time ``at once''. The appearance of quantum probabilities is then justified by the fact that the information available at a given time is not sufficient to fully determine the state of the system at all times (with the missing information possibly contained in some unknown boundary condition in the future). Our framework naturally captures all such models, because an ontic process need not be interpreted as a transformation: it simply represents the rule generating all relevant events given the local operations. An instructive example is a toy model by Wharton~\cite{Wharton2014}, which represents a space-time scenario as a system in thermal equilibrium, with events at different space-time locations represented as states at different points in space. While having a clear ontological interpretation, this model offers qualitative analogies with quantum interference and, when analysed from an ordinary time-evolution perspective, displays an apparent contextuality. We show in detail in Appendix~\ref{toywharton} how (a generalisation of) Wharton's model fits within our framework and satisfies the requirements of $\omega$-mediation and instrument and process non-contextuality. The above three examples illustrate that it is indeed easy to represent many possible physical scenarios via ontological models that are both instrument and process non-contextual. Given the exotic nature of the latter two examples, it seems plausible that one could also produce such a model to explain quantum correlations. In the following sections we prove that this is not the case. \section{Quantum models}\label{quantModel} If one assumes that the results of experiments in local regions accord with quantum mechanics, then events can be associated with \emph{completely positive trace-non-increasing} (CP) maps $\mathcal{M}^A :A_I\rightarrow A_O$, where input and output spaces are the spaces of linear operators over input and output Hilbert spaces of the local region, $A_I\equiv\lin({\cal H}^{A_I})$, $A_O\equiv \lin({\cal H}^{A_O})$ respectively~\cite{chuang00}. Each set ${\mathfrak I}^A$ of CP maps that sums to a completely positive trace preserving (CPTP) map is a \emph{quantum instrument}~\cite{davies70}: \begin{equation} \tr \left[ \sum_{\mathcal{M}^A\in {\mathfrak I}^A} \mathcal{M}^A(\rho)\right] = \tr(\rho). \label{instrument} \end{equation} An instrument thus represents the collection of all possible events that can be observed given a specific choice of local controllable. % Given these definitions of events and instruments, one can predict the joint probability over possible events using a generalised form of the Born rule: \begin{align} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) =&f_W(M^A, M^B,\dots) \prod_{X=A,B,\dots} \chi_{\mathfrak{I}^X}(M^X),\\ \label{born} f_W(M^A, M^B\dots) =& \tr \left[(M^A \otimes M^B \otimes\dots) W\right], \end{align} where $M^A, M^B\dots$ are the Choi-Jamio{\l}kowski representations of the local CP maps associated to particular events, and $W$ is a positive semi-definite operator associated to the relevant process~\cite{gutoski06, chiribella09b, oreshkov12}.
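As a numerical illustration of this rule, the sketch below (in Python with NumPy) assumes the simple case of a state prepared in region $A$ and measured in region $B$, with $W$ the operator describing an identity channel from $A_O$ to $B_I$; the particular state, measurement and transpose convention for the Choi representation are illustrative choices, one of several self-consistent options.
\begin{verbatim}
import numpy as np

ket = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
I2 = np.eye(2)

def choi(channel, d_in=2):
    """Choi matrix C = sum_ij |i><j| (x) channel(|i><j|)."""
    return np.block([[channel(ket[i] @ ket[j].T.conj()) for j in range(d_in)]
                     for i in range(d_in)])

def identity_channel(rho):
    return rho

# Operator describing the process: here, an identity channel from A_O to B_I.
# (Transpose conventions for the Choi representation vary; this choice is
# self-consistent with the event representations used below.)
W = choi(identity_channel).T

# Event in A: preparation of |0><0| (its Choi matrix is the state itself).
M_A = ket[0] @ ket[0].T.conj()

# Instrument in B: measurement in the X basis and discard; the Choi matrix of
# the event with POVM element E is the transpose of E.
plus = (ket[0] + ket[1]) / np.sqrt(2)
E = {"+": plus @ plus.T.conj(), "-": I2 - plus @ plus.T.conj()}

for outcome, Eb in E.items():
    p_process = np.trace(np.kron(M_A, Eb.T) @ W).real       # generalised rule
    p_textbook = np.trace(Eb @ identity_channel(M_A)).real   # ordinary Born rule
    print(outcome, round(p_process, 6), round(p_textbook, 6))  # both 0.5
\end{verbatim}
For this causally ordered scenario the generalised rule reproduces the ordinary Born rule, as expected; the interest of the formalism lies in the fact that the same trace formula also covers processes without a definite causal order.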
We call $W$ the \emph{process matrix}, using the terminology of Ref.~\cite{oreshkov12}. It is possible to derive this trace rule for probabilities by assuming linearity~\cite{oreshkov12}, or alternatively one can \emph{derive} linearity (and the trace rule) from the assumption of operational instrument equivalence alone~\cite{shrapnel2017}. The significance of this latter derivation is that the condition of operational instrument equivalence is formally identical to that of instrument non-contextuality, with the only difference that the latter includes the ontic process. Therefore, the trace rule is the unique probability assignment consistent with the assumption of instrument non-contextuality. Assuming instrument non-contextuality at the ontological level then implies that, for each ontic process $\omega$, the corresponding frame function can be expressed as: \begin{equation}\label{framenoncontextual} f_\omega(M^A, M^B,\dots) = \tr\left[ \sigma(\omega) \mathbf{M}\right], \end{equation} where we introduced the short-hand notation $\mathbf{M} \equiv M^A \otimes M^B\otimes\dots$ and $\sigma(\omega)$ is a process matrix~\cite{shrapnel2017}. We now wish to show that the function $g_W(\omega)$ that features in our ontological model, under the assumption of process non-contextuality, can be represented as \begin{equation} \label{dualframe} g_W(\omega) = \tr \left[\eta(\omega)W\right], \end{equation} where $\{\eta(\omega)\}_{\omega \in \Omega}$, $\Omega$ being the set of ontic processes, is a quantum instrument. It is common in non-contextuality no-go theorems (as well as in the process matrix formalism) to take preservation of probabilistic mixtures as an assumption independent of non-contextuality. Here we rather derive it from our assumption of process non-contextuality. Consider two classical variables $\xi$, $W$ used to describe the process, where we already take operational equivalences into account. Following the earlier example, we can think of $W$ as describing a Cartesian axis, while $\xi$---the aspect of the process about which we are ignorant---describes whether a spin-$\frac{1}{2}$ particle is prepared aligned or anti-aligned with this axis. The operational probabilities given $W$, and the corresponding decomposition for ontological probabilities, are obtained by marginalisation: \begin{align}\label{marginalprocess} p&(M^A,M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W)\\ \nonumber =&~\int\! d\xi\,d\omega\, p(M^A,M^B,\dots|\omega, \mathfrak{I}^A, \mathfrak{I}^B,\dots, W,\xi)p(\omega| \mathfrak{I}^A, \mathfrak{I}^B,\dots,W, \xi)p(\xi|\mathfrak{I}^A, \mathfrak{I}^B,\dots,W)\\ \nonumber =&~\int\! d\xi\,d\omega\, p(M^A,M^B,\dots|\omega, \mathfrak{I}^A, \mathfrak{I}^B,\dots)p(\omega | W, \xi) p(\xi|W), \end{align} where, in the last identity, we use the fact that $p(\omega | W, \xi)$ does not depend on the local controllables (and thus on the instruments), due to the assumption of $\omega$-mediation, and that $p(\xi|\mathfrak{I}^A, \mathfrak{I}^B,\dots,W)=p(\xi|W)$, due to our assumption that the environment and local controllables (and thus process and instruments) are uncorrelated. Additionally, due to $\omega$-mediation, we no longer need to condition the probability of $M^A,M^B,\dots$ directly on $W$ and $\xi$. Now let us write $W_{\xi}$ for the process corresponding to the pair $W, \xi$.
We have \begin{equation} \label{convexity} g_{\int d{\xi}~W_{\xi}p({\xi}|W)}(\omega)= g_W(\omega) = p(\omega|W) = \int d{\xi}~p(\omega|W, \xi)\, p(\xi|W) = \int d{\xi}~ g_{W_{\xi}}(\omega)\, p({\xi}|W), \end{equation} thus $g_W(\omega)$ is convex-linear in $W$. The first identity in Eq.~\eqref{convexity} comes from the fact that probabilistic mixtures of quantum processes are represented as convex combinations, thus $W = \int d{\xi}~W_{\xi} p({\xi}|W)$. This in turn is a consequence of the trace formula for operational quantum probabilities (which is itself a consequence of operational instrument equivalence): \begin{align} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) &= \tr \left[(M^A \otimes M^B \otimes\dots) W \right] \prod_{X} \chi_{\mathfrak{I}^X}(M^X)\\ \nonumber &= \int d{\xi }~p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W,\xi) p({\xi}|W) \prod_{X} \chi_{\mathfrak{I}^X}(M^X) \\ &= \tr\left[ (M^A \otimes M^B \otimes\dots)\int d{\xi }~W_{\xi } p({\xi}|W) \right] \prod_{X} \chi_{\mathfrak{I}^X}(M^X) \end{align} for all CP maps $M^A, M^B,\dots$. Using standard linear-algebra arguments, $g_W(\omega)$ can be extended to a linear function over $W$, leading to the representation \eqref{dualframe}, $g_W(\omega)= \tr\left[ \eta(\omega)W\right]$. Positivity and normalisation of probabilities then imply \begin{align} g_W(\omega) \geq 0 &\Rightarrow \eta(\omega) \geq 0 \quad \forall \omega, \\ \label{etanormalisation} \int d\omega g_W(\omega) = 1 &\Rightarrow \tr\left[ \int d\omega \eta(\omega) W \right]= 1 \quad \forall\, W. \end{align} Operators $\eta(\omega)$ as defined above can be understood as the Choi representations of CP maps that sum up to a trace-preserving map, namely $\left\{\eta(\omega)\right\}_{\omega\in \Omega}$ defines an instrument. In general, the CP maps $\eta(\omega)$ do not have to factorise over the separate regions; therefore, it might not be possible to interpret them as local operations. This is not an obstacle, as such an interpretation is not required for the rest of the argument. \section{A quantum contradiction}\label{contradiction} To summarise the results so far, we have an operational rule for the predictions of the joint probabilities of outcomes according to quantum theory: \begin{equation} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) = \prod_{X} \chi_{\mathfrak{I}^X}(M^X)\tr\left[\mathbf{M}\,W\right]. \end{equation} We also have an ontological model for predicting the joint probabilities under the assumptions of $\omega$-mediation, instrument non-contextuality and process non-contextuality: \begin{equation}\label{inframes} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) = \prod_{X} \chi_{\mathfrak{I}^X}(M^X) \int d\omega f_\omega(M^A, M^B, \dots) g_W(\omega), \end{equation} which, given the results of the last section, becomes: \begin{equation} p(M^A, M^B,\dots|\mathfrak{I}^A, \mathfrak{I}^B,\dots, W) = \prod_{X} \chi_{\mathfrak{I}^X}(M^X)\int d\omega \left[ \tr \sigma(\omega) \mathbf{M}\right] \left[\tr \eta(\omega)W\right].
\end{equation} % If this accords with quantum predictions then we should have: \begin{equation}\label{quantont} \tr\left[\mathbf{M}\,W\right] = \int d\omega~\left[ \tr \sigma(\omega) \mathbf{M}\right] \left[\tr \eta(\omega)W\right]~\forall \mathbf{M}, W. \end{equation} It has been noted \cite{spekkens08} that a decomposition of the form \eqref{inframes} is akin to the expression of expectation values in terms of quasi-probability distributions~\cite{Wigner1932, Scully1997}. However, the non-contextuality assumptions force both $f_\omega$ and $g_W$ to be ordinary, positive probability distributions. It is well known that quantum expectation values cannot be expressed in such a way. It is however instructive to consider an explicit contradiction within the present process framework. From \eqref{quantont}, \begin{eqnarray} \tr\left[\mathbf{M}\, W\right] = \tr\left[\mathbf{M} \int d\omega~\sigma(\omega)~g_W(\omega)\right] \quad \forall \mathbf{M} \\ \rightarrow W= \int d\omega~\sigma(\omega)~g_W(\omega), \label{ned} \end{eqnarray} which follows from the fact that the operators $\mathbf{M}$ span the joint linear space $A_I\otimes A_O\otimes B_I\otimes B_O\otimes\dots$. Eq.~\eqref{ned} tells us that $W$ is a convex mixture of the operators $\sigma(\omega)$. If $W$ is extremal, namely if it cannot be decomposed into a non-trivial convex combination of other processes, then $W \propto \sigma(\omega)$ for $g_W(\omega) \neq 0$. Denoting the support of $g_W$ by $\Omega_W$, i.e., $\omega\in \Omega_W \Leftrightarrow g_W(\omega) \neq 0$, we have $W \propto \sigma(\omega)$ $\forall \omega \in \Omega_W$ for an extremal $W$. Consider now a process $W$ that can be decomposed into two distinct mixtures of two sets of extremal processes $W_j$ and $W'_k$ (we take discrete sets for simplicity): \begin{equation} W= \sum_j q_j W_j = \sum_k p_k W'_k. \end{equation} Since $g_W$ is convex-linear in $W$, we have $g_W= \sum_j q_j g_{W_j}$. This means that, for every $\omega \in \Omega_W$, there must be a $j$ such that $g_{W_j}(\omega) \neq 0$. In other words, $\Omega_W=\bigcup_j \Omega_{W_j}$. By a similar argument, we have that $\Omega_W=\bigcup_k \Omega_{W'_k}$. We thus see that each convex decomposition of $W$ into distinct extremal processes corresponds to a partition of $W$'s support into the extremal processes' supports. This in turn implies that each $\omega$ belongs to both $\Omega_{W_j}$ and $\Omega_{W'_k}$, for some $j$ and $k$. As we have seen, this would imply \begin{equation}\label{prop} \sigma(\omega)\propto W_j \propto W'_k. \end{equation}
However, one can find many examples where no process in one decomposition is proportional to any process in the other. This implies a contradiction and shows that a decomposition such as \eqref{quantont} cannot exist for all CP maps and quantum processes. As a particular example to show the above contradiction, consider a process $W$ corresponding to a quantum channel from a region with a two-level output, $A_O$, to a region with a two-level input, $B_I$: \begin{equation} W = \sum_j q_j W_j = \sum_k p_k W'_k, \end{equation} formed from the following two combinations of extremal processes (for example, with $q_j = p_k = 1/4$, in which case $W$ is the process matrix of a completely depolarising channel): \begin{align} W_1=&\Proj{\id} = \tfrac{1}{2}\left(\id +X\otimes X -Y\otimes Y+Z\otimes Z\right),\\ W_2=&\Proj{X}= \tfrac{1}{2}\left(\id +X\otimes X +Y\otimes Y-Z\otimes Z\right),\\ W_3=&\Proj{Y} = \tfrac{1}{2}\left(\id -X\otimes X -Y\otimes Y-Z\otimes Z\right),\\ W_4=&\Proj{Z} = \tfrac{1}{2}\left(\id -X\otimes X +Y\otimes Y+Z\otimes Z\right), \end{align} \begin{align} W'_1=&\Proj{U\id} = \tfrac{1}{2}\left(\id +X\otimes UXU^\dagger -Y\otimes UYU^\dagger+Z\otimes UZU^\dagger\right),\\ W'_2=&\Proj{UX}= \tfrac{1}{2}\left(\id +X\otimes UXU^\dagger +Y\otimes UYU^\dagger - Z\otimes UZU^\dagger\right),\\ W'_3=&\Proj{UY} = \tfrac{1}{2}\left(\id -X\otimes UXU^\dagger -Y\otimes UYU^\dagger-Z\otimes UZU^\dagger\right),\\ W'_4=&\Proj{UZ} = \tfrac{1}{2}\left(\id -X\otimes UXU^\dagger+Y\otimes UYU^\dagger+Z\otimes UZU^\dagger\right), \end{align} where $X, Y$ and $Z$ are the Pauli matrices, $U$ is a unitary, and we used the notation $\Proj{V}:=\sum_{rs}\ket{r}\bra{s}\otimes V \ket{r}\bra{s}V^{\dag}$ for the Choi representation of a unitary $V$. It is clear that no $W_j$ is proportional to any $W'_k$ for an appropriate choice of $U$, and we have a contradiction with~\eqref{prop}. \section{Process-contextual extensions of quantum theory}\label{extension} Contextuality proofs do not always require both preparation and measurement non-contextuality. Indeed, many no-go theorems focus on the requirement of measurement non-contextuality alone. Interestingly, even without preparation non-contextuality, measurement non-contextuality imposes strong constraints on the ontology. Essentially, any measurement non-contextual ontology must reduce to the Beltrametti-Bugajski (BB) model~\cite{beltrametti95}, which identifies elements of reality with the quantum wave function. An important consequence of this result is that no measurement non-contextual extension of quantum theory exists that can provide more accurate predictions of experimental outcomes~\cite{Montina2011}. It is thus interesting to consider dropping the requirement of process non-contextuality in our framework, leaving instrument non-contextuality as the sole requirement. It is easy to see that instrument non-contextual, process-\emph{contextual} models are possible. An example is a model where the ontic process is directly identified with the quantum process: \begin{equation} g_W\left(\omega\right) =\delta\left(W-\omega\right). \label{BB} \end{equation} Operational probabilities are then recovered simply by using the ``quantum process rule'', Eq.~\eqref{born}, for the ontic frame function: \begin{equation} f_{\omega}(M^A, M^B\dots) = \tr \left[\mathbf{M}\, \omega\right]. \end{equation} This ``crude'' ontological model is similar to the BB model. A difference is that the BB model only identifies \emph{pure} quantum states with elements of reality, while in Eq.~\eqref{BB} \emph{any} process counts as ontic, including those corresponding to mixed states or noisy channels. One could refine the above model by only allowing an appropriately defined ``pure process'' to be ontic.
(See however Ref.~\cite{Araujo2017purification} for possible ambiguities regarding such a definition.) A non-extendability result similar to that of Ref.~\cite{Montina2011} also holds in our case. As already discussed above, the only instrument non-contextual frame function must be given by Eq.~\eqref{framenoncontextual}, namely to every ontic process $\omega$ is associated a process matrix $\sigma\left(\omega\right)$. The implication is that an instrument non-contextual hidden variable cannot provide more information than that contained in a process matrix. We thus conclude that quantum mechanics admits no non-trivial, instrument non-contextual extension. Indeed, this result holds independently of any assumptions one may make about the causal structure of a possible underlying ontology. Therefore, even instrument non-contextuality alone poses strong restrictions on hidden variable models that attempt to leverage exotic causal structures to recover a non-contextual notion of reality. \section{Discussion} We have shown that it is not possible to construct an ontological model that is both instrument and process non-contextual and also accords with the predictions of quantum mechanics. We take both forms of non-contextuality to be very reasonable assumptions if one wishes some aspect of ``reality'' to be describable in a manner that is independent of the act of experimentation. Thus our work shows that models that posit unusual causal, global or dynamical relations will not solve a key quantum mystery, that of contextuality. Standard no-go theorems show that quantum theory is not consistent with ontological models where the properties of a system exist prior to and independently of the way they are measured. A possible interpretation is that properties \emph{do} exist, but they are in fact dependent on future actions. Here we have shown that hidden variable models that attempt to leverage such influence from the future have to violate some broader form of non-contextuality. This new notion of non-contextuality refers to the rules that dictate how local actions influence observed events, rather than to states and measurements. We have introduced three assumptions in order to analyse non-contextuality in such scenarios where influence from the future is possible. The core idea is captured by the assumption of $\omega$-mediation. This states that an agent's actions should affect the world according to rules or laws that do not themselves depend on such actions. Indeed, if the rules changed every time we changed how we intervened on the world, we would not call them ``rules'' to begin with. In the context of ontological models, this assumption allows one to assume that experiments uncover an aspect of nature that is unchanging. The second assumption, instrument non-contextuality, states that operationally equivalent interventions should not produce distinct effects at the ontological level. We have shown that this assumption is compatible with scenarios that \emph{would} be interpreted as contextual when viewed from an ordinary, time-oriented perspective. For example, we have illustrated that time-travelling models where states \emph{can} depend on future interventions satisfy the requirement of instrument non-contextuality.
Despite this generality, instrument non-contextuality is nonetheless sufficient to rule out all non-trivial hidden-variable extensions of quantum theory: any additional variable that could provide better predictions for quantum statistics than ordinary quantum mechanics must be instrument \emph{contextual}. Our third assumption, process non-contextuality, states that operationally equivalent arrangements of the same experiment should receive the same probabilistic assignment over ontic descriptions. Here by ``experiment'' we mean the specification of the set of conditions under which agents can operate. That is, we include in this description all aspects of a physical scenario other than the choices of settings and the observed outcomes. Such aspects include what kind of systems are involved, the laws describing such systems, boundary conditions, etc. We have shown that no ontic model that reproduces quantum predictions and satisfies our first two assumptions can also satisfy this requirement of process non-contextuality, including models that directly identify quantum objects as ontic. The distinction between background environment variables and locally controllable settings that one makes when describing experiments using our approach is of course mobile. What counts as a freely chosen parameter in one situation can count as a fixed parameter in another. Our result is robust under such a shift in perspective: no matter how we decide to describe a quantum experiment, it will not be possible to find an ontic representation for it that is both instrument and process non-contextual. Finally, we draw attention to the fact that our results rely on complete matching to the operational predictions of quantum theory. This is a recognised feature of all ontological models that rely on operational equivalence classes and leaves open the possibility that particular ontological models might allow for different, experimentally testable predictions. Thus, for proponents of particular retrocausal models, the door remains open to develop their ontology such that it predicts some deviation from quantum statistics; in the face of such a deviation, a non-contextual ontological model would remain possible. \vspace{-2pt} \begin{acknowledgments} \vspace{-2pt} We thank {\v C}aslav Brukner, Eric Cavalcanti, Ravi Kunjwal, Matthew Leifer, Gerard Milburn, Alberto Montina, David Schmid, Robert Spekkens, and Ken Wharton for helpful discussions. This work was supported by an Australian Research Council Centre of Excellence for Quantum Engineered Systems grant (CE 110001013), and by the Templeton World Charity Foundation (TWCF 0064/AB38).
F.C.\ acknowledges support through an Australian Research Council Discovery Early Career Researcher Award (DE170100712). This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. We acknowledge the traditional owners of the land on which the University of Queensland is situated, the Turrbal and Jagera people. \end{acknowledgments} %%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%% BIBLIOGRAPHY %%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%% %\bibliographystyle{linksen} %\bibliography{biblio} \providecommand{\href}[2]{#2} %\begingroup \raggedright \begin{thebibliography}{10} \bibitem{kochen67} S.~Kochen and E.~Specker, ``The problem of hidden variables in quantum mechanics,'' \href{http://dx.doi.org/10.1512/iumj.1968.17.17004}{{\em J. Math. Mech.} {\bfseries 17}, 59--87 (1967)}. \bibitem{bell66} J.~S. Bell, ``On the problem of hidden variables in quantum mechanics,'' \href{http://dx.doi.org/10.1103/RevModPhys.38.447}{{\em Rev. Mod. Phys.} {\bfseries 38}, 447--452 (1966)}. \bibitem{cabello08} A.~{Cabello}, ``{Experimentally testable state-independent quantum contextuality},'' \href{http://dx.doi.org/10.1103/PhysRevLett.101.210401}{{\em Phys. Rev. Lett.} {\bfseries 101}, 210401 (2008)}. \bibitem{spekkens05} R.~W. {Spekkens}, ``{Contextuality for preparations, transformations, and unsharp measurements},'' \href{http://dx.doi.org/10.1103/PhysRevA.71.052108}{{\em Phys. Rev.~A} {\bfseries 71}, 052108 (2005)}. \bibitem{Montina2011} Z.~Chen and A.~Montina, ``Measurement contextuality is implied by macroscopic realism,'' \href{http://dx.doi.org/10.1103/PhysRevA.83.042110}{{\em Phys. Rev. A} {\bfseries 83}, 042110 (2011)}. \bibitem{kunjwal2016} R.~Kunjwal, ``Contextuality beyond the Kochen-Specker theorem,'' \href{http://arxiv.org/abs/1612.07250}{{\ttfamily arXiv:1612.07250 [quant-ph]}}. \bibitem{Mazurek2016} M.~D. Mazurek, M.~F. Pusey, R.~Kunjwal, K.~J. Resch, and R.~W. Spekkens, ``An experimental test of noncontextuality without unphysical idealizations,'' \href{http://dx.doi.org/10.1038/ncomms11780}{{\em Nat. commun.} {\bfseries 7}, 11780 (2016)}. \bibitem{Schmid2017} D.~Schmid and R.~W. Spekkens, ``Contextual Advantage for State Discrimination,'' \href{http://dx.doi.org/10.1103/PhysRevX.8.011015}{{\em Phys. Rev. X} {\bfseries 8}, 011015 (2018)}. \bibitem{cavalcanti2017} E.~G. Cavalcanti, ``Classical Causal Models for Bell and Kochen-Specker Inequality Violations Require Fine-Tuning,'' \href{http://dx.doi.org/10.1103/PhysRevX.8.021018}{{\em Phys. Rev. X} {\bfseries 8}, 021018 (2018)}. \bibitem{chailloux2016} A.~Chailloux, I.~Kerenidis, S.~Kundu, and J.~Sikora, ``Optimal bounds for parity-oblivious random access codes,'' \href{https://doi.org/10.1088/1367-2630/18/4/045003}{{\em New\ J.\ Phys.} {\bfseries 18}, 045003 (2016)}. \bibitem{spekkens09} R.~W. Spekkens, D.~H. Buzacott, A.~J. Keehn, B.~Toner, and G.~J. Pryde, ``Preparation contextuality powers parity-oblivious multiplexing,'' \href{https://doi.org/10.1103/PhysRevLett.102.010401}{{\em Phys.\ Rev.\ Lett.} {\bfseries 102}, 010401 (2009)}. \bibitem{Howard2014} M.~Howard, J.~Wallman, V.~Veitch, and J.~Emerson, ``Contextuality supplies the `magic' for quantum computation,'' \href{http://dx.doi.org/10.1038/nature13460}{{\em Nature} {\bfseries 510}, 351--355 (2014)}. \bibitem{price2012} H.~Price, ``Does time-symmetry imply retrocausality? 
How the quantum world says “Maybe”?,'' \href{https://doi.org/10.1016/j.shpsb.2011.12.003}{{\em Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics} {\bfseries 43}, 75--83 (2012)}. \bibitem{priceWharton2015} H.~Price and K.~Wharton, ``Disentangling the Quantum World,'' \href{http://dx.doi.org/10.3390/e17117752}{{\em Entropy} {\bfseries 17}, 7752--7767 (2015)}. \bibitem{Evans01062013} P.~W. Evans, H.~Price, and K.~B. Wharton, ``New Slant on the EPR-Bell Experiment,'' \href{http://dx.doi.org/10.1093/bjps/axr052}{{\em Brit. J. Philos. Sci.} {\bfseries 64}, 297--324 (2013)}. \bibitem{Wharton2014} K.~Wharton, ``Quantum States as Ordinary Information,'' \href{http://dx.doi.org/10.3390/info5010190}{{\em Information} {\bfseries 5}, 190--208 (2014)}. \bibitem{Aharonov2016} Y.~Aharonov, E.~Cohen, and T.~Shushi, ``Accommodating Retrocausality with Free Will,'' \href{http://dx.doi.org/10.12743/quanta.v5i1.44}{{\em Quanta} {\bfseries 5}, 53--60 (2016)}. \bibitem{Leifer2016} M.~S. Leifer and M.~F. Pusey, ``Is a time symmetric interpretation of quantum theory possible without retrocausality?,'' \href{http://dx.doi.org/10.1098/rspa.2016.0607}{{\em Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences} {\bfseries 473}, (2017)}. \bibitem{Sutherland17} R.~I. Sutherland, ``How retrocausality helps,'' \href{https://doi.org/10.1063/1.4982765}{{\em AIP Conference Proceedings} {\bfseries 1841}, 020001 (2017)}. \bibitem{carati1999nonlocality} A.~Carati and L.~Galgani, ``Nonlocality of classical electrodynamics of point particles, and violation of Bell's inequalities,'' {\em Nuovo Cimento B} {\bfseries 114}, 489--500 (1999). \bibitem{Weinstein2009} S.~Weinstein, ``Nonlocality Without Nonlocality,'' \href{http://dx.doi.org/10.1007/s10701-009-9305-x}{{\em Found.\ Phys.} {\bfseries 39}, 921--936 (2009)}. \bibitem{wood2015} C.~J. Wood and R.~W. Spekkens, ``The lesson of causal discovery algorithms for quantum correlations: {Causal} explanations of {Bell}-inequality violations require fine-tuning,'' \href{http://dx.doi.org/10.1088/1367-2630/17/3/033002}{{\em New J. Phys.} {\bfseries 17}, 033002 (2015)}. \bibitem{gutoski06} G.~Gutoski and J.~Watrous, ``Toward a general theory of quantum games,'' in {\em Proceedings of 39th ACM STOC}, pp.~565--574. \newblock 2006. \newblock \href{http://arxiv.org/abs/quant-ph/0611234}{{\ttfamily arXiv:quant-ph/0611234}}. \bibitem{chiribella08} G.~Chiribella, G.~M. D'Ariano, and P.~Perinotti, ``Quantum Circuit Architecture,'' \href{http://dx.doi.org/10.1103/PhysRevLett.101.060401}{{\em Phys. Rev. Lett.} {\bfseries 101}, 060401 (2008)}. \bibitem{Chiribella2008} G.~Chiribella, G.~M. D'Ariano, and P.~Perinotti, ``Memory Effects in Quantum Channel Discrimination,'' \href{http://dx.doi.org/10.1103/PhysRevLett.101.180501}{{\em Phys. Rev. Lett.} {\bfseries 101}, 180501 (2008)}. \bibitem{chiribella09b} G.~{Chiribella}, G.~M. {D'Ariano}, and P.~{Perinotti}, ``{Theoretical framework for quantum networks},'' \href{http://dx.doi.org/10.1103/PhysRevA.80.022339}{{\em Phys. Rev.~A} {\bfseries 80}, 022339 (2009)}. \bibitem{Bisio2011} A.~Bisio, G.~Chiribella, G.~D'Ariano, and P.~Perinotti, ``Quantum networks: {General} theory and applications,'' {\em Acta Physica Slovaca. Reviews and Tutorials} {\bfseries 61}, 273--390 (2011). \href{https://arxiv.org/abs/1601.04864}{{\ttfamily arXiv:1601.04864 [quant-ph]}}. \bibitem{Bisio2014} A.~Bisio, G.~M. 
D'Ariano, P.~Perinotti, and M.~Sedlák, ``Optimal processing of reversible quantum channels,'' \href{http://dx.doi.org/10.1016/j.physleta.2014.04.042}{{\em Physics Letters A} {\bfseries 378}, 1797 -- 1808 (2014)}. \bibitem{oreshkov12} O.~{Oreshkov}, F.~{Costa}, and {\v C}.~{Brukner}, ``{Quantum correlations with no causal order},'' \href{http://dx.doi.org/10.1038/ncomms2076}{{\em Nat. Commun.} {\bfseries 3}, 1092 (2012)}. \bibitem{modioperational2012} K.~Modi, ``Operational approach to open dynamics and quantifying initial correlations,'' \href{http://dx.doi.org/10.1038/srep00581}{{\em Sci.\ Rep.} {\bfseries 2}, 581 (2012)}. \bibitem{Leifer2013} M.~S. Leifer and R.~W. Spekkens, ``Towards a formulation of quantum theory as a causally neutral theory of Bayesian inference,'' \href{http://dx.doi.org/10.1103/PhysRevA.88.052130}{{\em Phys.\ Rev.\ A} {\bfseries 88}, 052130 (2013)}. \bibitem{Ringbauer2015} M.~Ringbauer, C.~J. Wood, K.~Modi, A.~Gilchrist, A.~G. White, and A.~Fedrizzi, ``Characterizing Quantum Dynamics with Initial System-Environment Correlations,'' \href{http://dx.doi.org/10.1103/PhysRevLett.114.090402}{{\em Phys. Rev. Lett.} {\bfseries 114}, 090402 (2015)}. \bibitem{pollockcomplete2015} F.~A. Pollock, C.~Rodr\'{\i}guez-Rosario, T.~Frauenheim, M.~Paternostro, and K.~Modi, ``Non-Markovian quantum processes: Complete framework and efficient characterization,'' \href{http://dx.doi.org/10.1103/PhysRevA.97.012127}{{\em Phys. Rev. A} {\bfseries 97}, 012127 (2018)}. \bibitem{costa2016} F.~Costa and S.~Shrapnel, ``Quantum causal modelling,'' \href{https://doi.org/10.1088/1367-2630/18/6/063032}{{\em New\ J.\ Phys.} {\bfseries 18}, 063032 (2016)}. \bibitem{Allen2016} J.-M.~A. Allen, J.~Barrett, D.~C. Horsman, C.~M. Lee, and R.~W. Spekkens, ``Quantum Common Causes and Quantum Causal Models,'' \href{http://dx.doi.org/10.1103/PhysRevX.7.031021}{{\em Phys. Rev. X} {\bfseries 7}, 031021 (2017)}. \bibitem{Milz2016} S.~Milz, F.~A. Pollock, and K.~Modi, ``Reconstructing open quantum system dynamics with limited control,'' \href{http://arxiv.org/abs/1610.02152}{{\ttfamily arXiv:1610.02152 [quant-ph]}}. \bibitem{shrapnel2017} S.~Shrapnel, F.~Costa, and G.~Milburn, ``Updating the Born rule,'' \href{https://doi.org/10.1088/1367-2630/aabe12}{{\em New\ J.\ Phys.} {\bfseries 20 }, 053010 (2018)}. \bibitem{harrigan10} N.~Harrigan and R.~Spekkens, ``Einstein, Incompleteness, and the Epistemic View of Quantum States,'' \href{http://dx.doi.org/10.1007/s10701-009-9347-0}{{\em Found. Phys.} {\bfseries 40}, 125--157 (2010)}. \bibitem{leifer2014quantum} M.~S. Leifer, ``Is the quantum state real? An extended review of $\psi$-ontology theorems,'' \href{http://arxiv.org/abs/1409.1570}{{\ttfamily arXiv:1409.1570 [quant-ph]}}. \bibitem{spekkens08} R.~W. {Spekkens}, ``{Negativity and Contextuality are Equivalent Notions of Nonclassicality},'' \href{http://dx.doi.org/10.1103/PhysRevLett.101.020401}{{\em Phys. Rev. Lett.} {\bfseries 101}, 020401 (2008)}. \bibitem{Pearlbook} J.~Pearl, {\em Causality.} \newblock Cambridge University Press, 2009. \bibitem{oreshkov15} O.~Oreshkov and C.~Giarmatzi, ``Causal and causally separable processes,'' \href{http://dx.doi.org/10.1088/1367-2630/18/9/093020}{{\em New\ J.\ Phys.} {\bfseries 18}, 093020 (2016)}. \bibitem{morris1988wormholes} M.~S. Morris, K.~S. Thorne, and U.~Yurtsever, ``Wormholes, time machines, and the weak energy condition,'' \href{https://doi.org/10.1103/PhysRevLett.61.1446}{{\em Phys.\ Rev.\ Lett.} {\bfseries 61}, 1446 (1988)}. 
\bibitem{Durand2002} S.~Durand, ``An amusing analogy: modelling quantum-type behaviours with wormhole-based time travel,'' \href{http://dx.doi.org/10.1088/1464-4266/4/4/319}{{\em Journal of Optics B: Quantum and Semiclassical Optics} {\bfseries 4}, S351 (2002)}. \bibitem{Baumeler2016} {\"{A}}.~Baumeler and S.~Wolf, ``{The space of logically consistent classical processes without causal order},'' \href{http://dx.doi.org/10.1088/1367-2630/18/1/013036}{{\em New\ J.\ Phys.} {\bfseries 18}, 013036 (2016)}. \bibitem{baumeler2017reversible} {\"A}.~Baumeler, F.~Costa, T.~C. Ralph, S.~Wolf, and M.~Zych, ``Reversible time travel with freedom of choice,'' \href{http://arxiv.org/abs/1703.00779}{{\ttfamily arXiv:1703.00779 [quant-ph]}}. \bibitem{baumeler14} {\"A}.~Baumeler, A.~Feix, and S.~Wolf, ``{Maximal incompatibility of locally classical behavior and global causal order in multi-party scenarios},'' \href{http://dx.doi.org/10.1103/PhysRevA.90.042106}{{\em Phys. Rev. A} {\bfseries 90}, 042106 (2014)}. \bibitem{Branciard2016} C.~Branciard, M.~Araújo, A.~Feix, F.~Costa, and {\v C}.~Brukner, ``The simplest causal inequalities and their violation,'' \href{http://dx.doi.org/10.1088/1367-2630/18/1/013008}{{\em New\ J.\ Phys.} {\bfseries 18}, 013008 (2016)}. \bibitem{Friedman:1990ja} J.~Friedman, M.~S. Morris, I.~D. Novikov, F.~Echeverria, G.~Klinkhammer, K.~S. Thorne, and U.~Yurtsever, ``{Cauchy problem in spacetimes with closed timelike curves},'' \href{http://dx.doi.org/10.1103/PhysRevD.42.1915}{{\em Phys.\ Rev.\ D} {\bfseries 42}, 1915--1930 (1990)}. \bibitem{Echeverria:1991ko} F.~Echeverria, G.~Klinkhammer, and K.~S. Thorne, ``{Billiard balls in wormhole spacetimes with closed timelike curves: classical theory},'' \href{http://dx.doi.org/10.1103/PhysRevD.44.1077}{{\em Phys.\ Rev.\ D} {\bfseries 44}, 1077--1099 (1991)}. \bibitem{Lossev1992} A.~Lossev and I.~D. Novikov, ``The {Jinn} of the time machine: nontrivial self-consistent solutions,'' \href{http://dx.doi.org/10.1088/0264-9381/9/10/014}{{\em Class.\ Quantum Grav.} {\bfseries 9}, 2309 (1992)}. \bibitem{Novikov1992} I.~D. Novikov, ``Time machine and self-consistent evolution in problems with self-interaction,'' \href{http://dx.doi.org/10.1103/PhysRevD.45.1989}{{\em Phys.\ Rev.\ D} {\bfseries 45}, 1989--1994 (1992)}. \bibitem{Mikheeva1993} E.~V. Mikheeva and I.~D. Novikov, ``Inelastic billiard ball in a spacetime with a time machine,'' \href{http://dx.doi.org/10.1103/PhysRevD.47.1432}{{\em Phys.\ Rev.\ D} {\bfseries 47}, 1432--1436 (1993)}. \bibitem{chuang00} M.~Nielsen and I.~Chuang, {\em Quantum Computation and Quantum Information}. \newblock Cambridge University Press, 2000. \bibitem{davies70} E.~Davies and J.~Lewis, ``An operational approach to quantum probability,'' \href{http://dx.doi.org/10.1007/BF01647093}{{\em Comm. Math. Phys.} {\bfseries 17}, 239--260 (1970)}. \bibitem{Wigner1932} E.~Wigner, ``On the Quantum Correction For Thermodynamic Equilibrium,'' \href{http://dx.doi.org/10.1103/PhysRev.40.749}{{\em Phys. Rev.} {\bfseries 40}, 749--759 (1932)}. \bibitem{Scully1997} M.~Scully and M.~Zubairy, {\em Quantum Optics}. \newblock Cambridge University Press, 1997. \bibitem{beltrametti95} E.~G. Beltrametti and S.~Bugajski, ``A classical extension of quantum mechanics,'' \href{http://dx.doi.org/10.1088/0305-4470/28/12/007}{{\em J.~Phys. A: Math. Gen.} {\bfseries 28}, 3329 (1995)}. 
\bibitem{Araujo2017purification} M.~Ara{\'{u}}jo, A.~Feix, M.~Navascu{\'{e}}s, and {\v{C}}.~Brukner, ``A purification postulate for quantum mechanics with indefinite causal order,'' \href{http://dx.doi.org/10.22331/q-2017-04-26-10}{{\em {Quantum}} {\bfseries 1}, 10 (2017)}. \end{thebibliography}%\endgroup \appendix \section{Wharton's retrocausal toy model} \label{toywharton} The core idea of the model is to represent a system across space-time, analogously to the representation of a system in space in thermal equilibrium. Rather than being determined by dynamical evolution, the states at each point in space-time are known with some probability. This is similar to how macrostates provide probability distributions for microstates. In this model each event in space-time is represented as a site, labelled by the index $j$, within a lattice. At each site $j$ we can have a particle in a state $\lambda$, whose possible values are assumed to be $\pm 1$ for simplicity. The entire system across space-time is treated ``all-at-once'' in the same way one would treat a spatially extended system, where each site represents a different location in space. The system is then associated with a Hamiltonian $H=-\sum_{\langle i,j\rangle} \lambda_i \lambda_j$, where the sum is taken over nearest neighbours according to the geometry of the lattice. All we know about the system is that it is in a thermal state with inverse temperature $\beta$; thus the probability for a certain configuration $\vec{\lambda}:=\left(\lambda_1,\lambda_2,\dots\right)$ is $p(\vec{\lambda}|\beta) \propto e^{-\beta H(\vec{\lambda})}$. If we learn the state of one of the sites, we need to update the thermal distribution by conditioning on the observed value. However, since the model is supposed to represent a space-time configuration, the sites we can observe at any given time are restricted. \begin{figure}[ht]% \includegraphics[width=0.8\columnwidth]{WhartonFig.pdf}% \caption{\textbf{Wharton's toy model}~\cite{Wharton2014}. Each node $j$ represents a location in space-time where a system can be found in a state $\lambda_j$, $j=1,2,\dots$. The state of the entire system is sampled from a thermal ensemble, defined by a Hamiltonian containing interactions between nodes connected by an edge, where each node is treated as a site in a spatially distributed lattice. (a) Observing the system at a given time reveals the state at one of the nodes, e.g.\ $\lambda_1=1$, upon which the probability assignment at the other nodes has to be updated. (b) The analogue of an interference experiment is represented by the insertion of an additional node in the future, which results in a different thermal state and thus in a different probability distribution for all states. An observer at an earlier time who ignores this possibility might interpret such a dependence on future actions as a form of contextuality.}% \label{spins}% \end{figure} Retrocausality is introduced by assuming that performing a measurement at any given time can result in the introduction of a new site, thus changing the geometry of the system (Fig.~\ref{spins}). Assuming a thermal state with a given temperature, the two geometries result in different probability distributions for the microstates. If the system is interpreted as time oriented, and the influence of the future intervention is ignored, then one might be led to the conclusion that it is impossible to assign non-contextual states of reality to the system.
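The effect described in the caption of Fig.~\ref{spins} can be reproduced with a few lines of code. The sketch below (in Python) assumes a deliberately small geometry---two coupled $\pm1$ sites, with a third site optionally inserted and coupled to both---rather than Wharton's exact lattice; it shows that the inference an early observer draws about one site from another depends on whether the extra node is present.
\begin{verbatim}
import itertools
import math

beta = 1.0

def thermal_joint(edges, n):
    """Thermal distribution for n spins (+1/-1) with H = -sum_edges s_i s_j."""
    states = list(itertools.product((+1, -1), repeat=n))
    weights = [math.exp(beta * sum(s[i] * s[j] for i, j in edges))
               for s in states]
    Z = sum(weights)
    return {s: w / Z for s, w in zip(states, weights)}

def p_up_given_neighbour_up(dist):
    """p(s_0 = +1 | s_1 = +1), marginalising over any remaining sites."""
    num = sum(p for s, p in dist.items() if s[0] == +1 and s[1] == +1)
    den = sum(p for s, p in dist.items() if s[1] == +1)
    return num / den

# Geometry (a): two coupled nodes only.
p_a = p_up_given_neighbour_up(thermal_joint([(0, 1)], 2))
# Geometry (b): an extra node, inserted "in the future", coupled to both.
p_b = p_up_given_neighbour_up(thermal_joint([(0, 1), (0, 2), (1, 2)], 3))
print(p_a, p_b)   # about 0.88 vs 0.97: the inserted node changes the inference
\end{verbatim}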
The analogy is with a quantum interference experiment, where a measurement in the future is assumed to change the conditions that determine the state of the system in its past. If the influence from the future measurement is included, argues Wharton, then one might be able to recover an ontic interpretation of quantum mechanics, where the quantum state simply represents lack of information about the underlying state. This model is interesting because causal influence is not mediated by an explicit mechanism, in contrast to ordinary dynamical systems, including the time-travelling examples in the main text. Nonetheless, it is possible to fit this model into our general ontological framework, where the observed probabilities are mediated by an ontic process. Crucially, the model turns out to be both instrument and process non-contextual, showing that approaches of this type cannot reproduce the predictions of quantum theory. \subsection*{Classical systems on an arbitrary geometry} We consider a more general version of Wharton's model, with arbitrary geometry, an arbitrary set of discrete values for the states, and arbitrary local interactions. Consider a set $\mathcal{N}$ of $\left|\mathcal{N}\right|=N$ sites. Each site $j\in\mathcal{N}$ can contain a classical system whose state $\lambda_j$ can take values in some set $\mathcal{S}_j$. The state of the entire system is thus described by a vector $\vec{\lambda}\equiv\left(\lambda_1,\dots,\lambda_N\right)\in \mathcal{S}:=\bigtimes_{j\in\mathcal{N}}\mathcal{S}_j$. A Hamiltonian function $H(\vec{\lambda})$ is defined on the system. We assume that this Hamiltonian is \emph{local}, namely it is a sum of terms representing local interactions between sites. A subset of sites $e\subset \mathcal{N}$ contributing to an interaction term is called a ``hyperedge'' and the set $\mathcal{E}$ of hyperedges defines a ``hypergraph'' over $\mathcal{N}$. The Hamiltonian can thus be decomposed as \begin{equation} H=\sum_{e\in\mathcal{E}} h_e, \label{local} \end{equation} where each term $h_e$ is a function on the space $\mathcal{L}_e:=\bigtimes_{j\in e}\mathcal{S}_j$. By convention, we identify the state $\lambda_j=0$ of system $j$ with the ``empty site'', namely with no system in it. Consistency with this interpretation requires that, for every hyperedge $e$ containing $j$, \begin{equation} h_e\left(\dots,\lambda_j=0,\dots\right)=0. \label{void} \end{equation} In other words, each interaction term vanishes when one of the sites on which it acts is empty. In this way, ``different geometries'', corresponding to additional or missing sites, are simply represented as a particular choice of states in a fixed geometry. In our terminology, each site $j$ represents a (space-time) region and each state $\lambda_j$ represents an event. We can interpret each event as ``ontic''; however, since we assume that each ontic event can also be observed, ontic and operational events are identified. \paragraph*{No control.} Before considering the possibility of interventions, it is useful to see how our framework applies to the simpler scenario with no interventions. In this case, ``process'' is synonymous with ``state''. Thus, a deterministic process is simply a specific microstate $\vec{\lambda}$, while a general probabilistic process is a probability distribution $P\left(\vec{\lambda}\right)$.
For the case of a thermal state, where the only information we can access about the environment is the inverse temperature $\beta$, the operational process is probabilistic, given by the Gibbs distribution \begin{equation} p(\vec{\lambda}\mid \beta) = \frac{e^{-\beta H\left(\vec{\lambda}\right)}}{Z(\beta)},\quad Z(\beta) = \!\!\sum_{\vec{\lambda}\in\mathcal{S}} e^{-\beta H\left(\vec{\lambda}\right)}. \label{Gibbs} \end{equation} Since there are no irrelevant environment variables in this model, questions about contextuality do not arise: each value of the environment variable corresponds to just one process (i.e.\ to one probability distribution for the ``events''). Therefore, at the formal level, we could identify the operational process with the ontic process; the resulting model would be process non-contextual by construction (instrument non-contextuality is even more trivial here, because there is no choice of instruments). A more natural ontological model is a deterministic one, where each ontic process (or ontic state) is identified with one microstate $\vec{\lambda}$. As required by the general formalism, the operational process provides a probability distribution over the possible ontic processes, and knowing the ontic process makes knowledge of the operational process redundant (the ontic process ``screens off'' the operational one), in agreement with the property of $\omega$-mediation. \paragraph*{Local control.} Local instruments are defined as subsets of events and represent the possibility of local control. Thus, in general, the possible sets of instruments at a site $j\in\mathcal{N}$ corresponds to a subset $\mathfrak{I}^j\subset \mathcal{S}_j$. As a simple case-study, we consider the scenario where the only control is inserting or removing a site, as in Wharton's example. Therefore, for each region $j\in\mathcal{N}$ there are two possible instruments: \begin{equation} \mathfrak{I}^j_0:=\left\{0\right\},\qquad \mathfrak{I}^j_1:=\mathcal{S}_{j}{\setminus\left\{0\right\}}. \label{instruments} \end{equation} A prominent feature of this example is that instruments are disjoint sets, so there is never an event that belongs to two distinct instruments. This ensures the instrument non-contextuality of the model. As in the no-control case, a deterministic process corresponds to a specification of all events, while a probabilistic process corresponds to a probability distribution for the possible events. The possibility of control means that the events now can depend on the instruments, so the process must encode this dependency. Thus, a deterministic process is given by a set of functions \begin{equation} \vec{\mathfrak{I}}\equiv\left(\mathfrak{I}^1,\dots,\mathfrak{I}^N\right)\mapsto \vec{\lambda},\quad \lambda_j=\omega_j(\vec{\mathfrak{I}}),\,j\in\mathcal{N} \label{detprocess} \end{equation} such that, for each $j\in\mathcal{N}$, \begin{equation} \omega_j\left(\dots, \mathfrak{I}^j=\mathfrak{I}^j_0,\dots\right)=0,\qquad \omega_j\left(\dots, \mathfrak{I}^j=\mathfrak{I}^j_1,\dots\right)\neq 0. \label{consistency} \end{equation} Condition \eqref{consistency} simply says that if we choose to remove the system from site $j$ ($\mathfrak{I}^j=\mathfrak{I}^j_0$), then there will be no system at site $j$ ($\lambda_j=0$), while if we choose to insert the system ($\mathfrak{I}^j=\mathfrak{I}^j_1$) then the system will be there, in one of its possible states ($\lambda_j\in \mathcal{S}_{j}{\setminus\left\{0\right\}}$). 
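As the simplest illustration, take a single region, $N=1$, with $\mathcal{S}_1=\left\{0,+1,-1\right\}$. The two instruments are $\mathfrak{I}^1_0=\left\{0\right\}$ and $\mathfrak{I}^1_1=\left\{+1,-1\right\}$, and condition \eqref{consistency} leaves exactly two deterministic processes,
\begin{equation*}
\omega_1(\mathfrak{I}^1_0)=0,\quad \omega_1(\mathfrak{I}^1_1)=+1
\qquad\textrm{or}\qquad
\omega_1(\mathfrak{I}^1_0)=0,\quad \omega_1(\mathfrak{I}^1_1)=-1:
\end{equation*}
the site is empty whenever it is removed, and takes one definite value whenever it is inserted.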
For a probabilistic process, dependency on the instruments is encoded in a conditional probability distribution $p(\vec{\lambda}\mid \vec{\mathfrak{I}})$. The probabilistic version of the consistency condition \eqref{consistency} reads
\begin{align}\label{probconsistency}
P&\left(\dots, \lambda_j\neq 0,\ldots \mid \dots, \mathfrak{I}^j=\mathfrak{I}^j_0,\dots\right)=0,\\ \nonumber
P&\left(\dots, \lambda_j = 0,\ldots \mid \dots, \mathfrak{I}^j=\mathfrak{I}^j_1,\dots\right)=0.
\end{align}
A more compact way to represent a (non-contextual) process is through a frame function, which can be defined piecewise as:
\begin{equation}
f(\vec{\lambda}) := p(\vec{\lambda}\mid \vec{\mathfrak{I}}) \quad \textrm{for } \lambda_1\in\mathfrak{I}^1, \dots, \lambda_N\in\mathfrak{I}^N.
\label{frame}
\end{equation}
The consistency condition \eqref{probconsistency} is then expressed as
\begin{equation}
p(\vec{\lambda}\mid \vec{\mathfrak{I}}) = f(\vec{\lambda}) \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j).
\label{frameconsistency}
\end{equation}
Let us pause for a moment on this definition. The frame function is defined as a function $f:\mathcal{S}\rightarrow [0,1]$. That is, it assigns a probability to each $N$-tuple $\vec{\lambda}=\left(\lambda_1,\dots,\lambda_N\right)$, without needing any additional information about the instruments. In the case we are considering, different instruments are non-overlapping sets of states. Therefore, if we know the state $\lambda_j$, we automatically know the instrument $\mathfrak{I}^j$, and requiring ``independence from the instruments'' is completely trivial: either the site is there, and the instrument is $\mathfrak{I}^j_1$, or the site is not there, and $\mathfrak{I}^j = \mathfrak{I}^j_0$. Once we know the state, there is nothing more the instrument can tell us. Technically, each value $\vec{\mathfrak{I}}$ defines a subset in the domain of $f$, and the value $f$ takes in each of these subsets is given by the conditional probability \eqref{frame}. The non-overlapping of different instruments is crucial for this construction: if the same $\vec{\lambda}$ could belong to two different instruments, we would not know which value of $p(\vec{\lambda}\mid \vec{\mathfrak{I}})$ to use to define the frame function. For overlapping instruments, the existence of a frame function is equivalent to the assumption of instrument non-contextuality.
\paragraph*{Processes for the thermal state.}
Once again, the only environment variable is the inverse temperature $\beta$, which thus parametrises the operational processes. Given the above discussion, it should be clear that, for each $\beta$, we can write a conditional probability in the form \eqref{frameconsistency}, where the frame function is defined as in Eq.~\eqref{frame}, with probabilities provided by the Gibbs distribution \eqref{Gibbs}. Explicitly,
\begin{align} \label{framethermal}
f_{\beta}(\vec{\lambda}) =& \frac{e^{-\beta H(\vec{\lambda})}}{Z(\beta\mid \vec{\mathfrak{I}})}\quad \textrm{ for } \vec{\lambda}\in \vec{\mathfrak{I}},\\
\textrm{\hspace{-60pt} where } Z(\beta\mid \vec{\mathfrak{I}}):=&\!\!\sum_{\vec{\lambda}\in\vec{\mathfrak{I}}} e^{-\beta H\left(\vec{\lambda}\right)}.
\end{align}
Let us stress that, from the perspective of our framework, we might as well stop here: we already have a model that is both instrument and process non-contextual.
The point of our theorem is to see if it is possible to write a given operational model in terms of an underlying non-contextual model; if that is possible, the operational model cannot reproduce the predictions of quantum mechanics. In this case we already have a non-contextual model, so we know it cannot reproduce quantum mechanics. Note that the theorem does not rely on any interpretation we might assign to ontic processes, events etc; it is simply a statement about properties that an ontological model can or cannot have. For the sake of completeness, and since we would more naturally associate ontology with determinism, we can write explicitly how a deterministic process model looks in the present case study. Recall that a deterministic process is a (multi-valued) function $\vec{\omega}\equiv \left(\omega_1\dots \omega_N\right)$ from the instruments to the events. For notational convenience, we can identify the two possible instruments $\mathfrak{I}^j_{x_j}$ at each site $j$ with their label $x_j\in\left\{0,1\right\}$. Therefore, a choice of instruments is given by $N$ binary variables $\vec{x}\equiv\left(x_1,\dots,x_N\right)$ and a process is identified with $2^N$ $N$-tuples $\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}$, where $\vec{a}_{\vec{x}}:= \vec{\omega}(\vec{\mathfrak{I}}_{\vec{x}})$. A deterministic ontological model is thus defined by a conditional probability distribution \begin{equation} p\left(\vec{\omega}\mid\beta\right)\equiv P\left(\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} \mid\beta\right) \end{equation} that reproduces the operational probabilities via $\omega$-mediation: \begin{equation} \sum_{\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} } p\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}\right) p(\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} \mid \beta) = p\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \beta\right), \label{omegasep} \end{equation} % %\begin{equation} %\sum_{\vec{\omega}}P\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \vec{\omega}\right) p(\vec{\omega} \mid \beta) = p\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \beta\right), %\label{omegasep} %\end{equation} where the sum should be understood as \begin{equation} \sum_{\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} }\equiv \sum_{a^1_{0} \in \mathfrak{I}^1_{0}} \sum_{a^1_{1} \in \mathfrak{I}^1_{1}} \dots \sum_{a^N_{0} \in \mathfrak{I}^N_{0}} \sum_{a^N_{1} \in \mathfrak{I}^N_{1}} \end{equation} % and the ``ontic'' probabilities are given by \begin{align} \label{detprob} P\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}\right) &= \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) \sum_{\vec{x}\in \left\{0,1\right\}^N} \delta_{\vec{\lambda}\,\vec{a}_{\vec{x}}}, \\ \delta_{\vec{\lambda}\,\vec{a}_{\vec{x}}} & := \prod_{j\in \mathcal{N}} \delta_{\lambda_j\,(a_{\vec{x}})_j}. \end{align} % %\begin{equation} %P\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \vec{\omega}\right) = \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) \delta_{\lambda_j\,\omega_j(\vec{\mathfrak{I}})}. 
%\label{detprob} %\end{equation} We now show that the conditional probabilities for the ontic process in our thermal model are given by \begin{equation} P\left(\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} \mid\beta\right) = \prod_{\vec{x}\in\left\{0,1\right\}^N}f_{\beta}(\vec{a}_{\vec{x}}), \label{onticprob} \end{equation} where the operational frame function $f_{\beta}$ is given by expression \eqref{framethermal}. To see that the conditional probabilities \eqref{onticprob} provide an ontological model for the original thermal-state model, one can verify that, by putting together expressions \eqref{onticprob} and \eqref{detprob} into Eq.~\eqref{omegasep}, one indeed obtains the operational probabilities~\eqref{frameconsistency}. %, because summing over the Kronecher deltas in the deterministic probability \eqref{detprob} results in replacing each $\vec{a}_{\vec{x}}$ with the corresponding $\vec{\lambda}$ (where one also needs to use the normalisation of the frame function $f_{\beta}$ so that, for each term $\delta_{\vec{\lambda}\,\vec{a}_{\vec{x}}}$, the sum over $\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}$ only leaves . Explicitly, \begin{align*} \sum_{\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}}& p\left(\vec{\lambda}\mid \vec{\mathfrak{I}}, \left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}\right) p(\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N} \mid \beta) \\ % =& \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) \sum_{\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\in \left\{0,1\right\}^N}} \sum_{\vec{x}'\in \left\{0,1\right\}^N} \delta_{\vec{\lambda}\,\vec{a}_{\vec{x}'}} \prod_{\vec{x}\in\left\{0,1\right\}^N}f_{\beta}(\vec{a}_{\vec{x}}) \\ % =& \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) \sum_{\vec{x}'\in \left\{0,1\right\}^N} \sum_{\vec{a}_{\vec{x}'}\in\vec{\mathfrak{I}}_{\vec{x}'}} \delta_{\vec{\lambda}\,\vec{a}_{\vec{x}'}} \left[\sum_{\left\{\vec{a}_{\vec{x}}\right\}_{\vec{x}\neq \vec{x}'}}\prod_{\vec{x}\in\left\{0,1\right\}^N}f_{\beta}(\vec{a}_{\vec{x}}) \right]\\ % =& \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) \sum_{\vec{x}'\in \left\{0,1\right\}^N} \sum_{\vec{a}_{\vec{x}'}\in\vec{\mathfrak{I}}_{\vec{x}'}} \delta_{\vec{\lambda}\,\vec{a}_{\vec{x}'}} f_{\beta}(\vec{a}_{\vec{x}'}) \left[\prod_{\vec{x} \neq \vec{x}'} \sum_{\vec{a}_{\vec{x}}\in\vec{\mathfrak{I}}_{\vec{x}}} f_{\beta}(\vec{a}_{\vec{x}}) \right] \\ % =& \prod_{j\in \mathcal{N}}\chi_{\mathfrak{I}^j}(\lambda_j) f_{\beta}(\vec{\lambda}), \end{align*} where we used the normalisation of the frame function, $\sum_{\vec{\lambda}\in\vec{\mathfrak{I}}_{\vec{x}}} f_{\beta}(\vec{\lambda})=1$ for every collection of instruments $\vec{\mathfrak{I}}_{\vec{x}}\equiv \left(\mathfrak{I}^1_{x_1},\dots,\mathfrak{I}^N_{x_N}\right)$. \end{document}
\documentclass{beamer} \usepackage{paratype} \setbeamerfont{frametitle}{family=\bf} \usepackage{listings} \usepackage{pdfpages} \usepackage{verbatim} \lstset{ language=C, keepspaces=true, moredelim=**[is][\color{red}]{@}{@}, } % Beamer theme settings \usecolortheme{seagull} \usenavigationsymbolstemplate{} % no navigation buttons \usepackage[utf8]{inputenc} \title{OpenCL day 1: GPU hardware and the programming model} \author{Cosmin Oancea and Troels Henriksen} \date{January 28, 2019} \begin{document} \frame{\titlepage} \section{Introduction and Course Contents} \begin{frame} \tableofcontents[currentsection] \end{frame} \section{Hardware Trends} \begin{frame} \tableofcontents[currentsection] \end{frame} \begin{frame} \frametitle{The first computers were not this} \begin{center} \includegraphics[width=\textwidth]{img/eniac.jpg} \end{center} \end{frame} \begin{frame} \frametitle{But this} \includegraphics[width=\textwidth]{img/human-computer.jpg} \end{frame} \begin{frame} \frametitle{And if you had a larger problem} \includegraphics[width=\textwidth]{img/human-computers.jpg} \end{frame} \begin{frame} \frametitle{But then they started looking like this} \begin{center} \includegraphics[width=\textwidth]{img/eniac.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then this} \begin{center} \includegraphics[width=\textwidth]{img/pdp1.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then this} \begin{center} \includegraphics[width=\textwidth]{img/vax.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then this} \begin{center} \includegraphics[width=\textwidth]{img/early-workstation.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then this} \begin{center} \includegraphics[width=\textwidth]{img/dell.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then this} \begin{center} \includegraphics[width=\textwidth]{img/hp.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then, from around 2005} \begin{center} \includegraphics[width=0.5\textwidth]{img/hp.jpg} \includegraphics[width=0.5\textwidth]{img/hp.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then, from around 2005} \begin{center} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \end{center} \end{frame} \begin{frame} \frametitle{Then, from around 2005} \begin{center} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \includegraphics[width=0.3\textwidth]{img/hp.jpg} \end{center} Improvements in \textit{sequential performance} stalled, although computers still got smaller and faster. \end{frame} \begin{frame} \frametitle{What Changed?} \begin{itemize} \item \textit{Power complexity} $P_{dynamic} \sim Freq^3$, preventing us from increasing processor frequency. \item \textit{Memory wall}, ever-increasing performance gap between processor and memory (which means that \textit{memory} becomes bottleneck, not processor speed). \end {itemize} \end{frame} \begin{frame} \frametitle{CPU progress} \includegraphics[width=\textwidth]{img/40-years-processor-trend.png} Addressed with \textit{more cores}. 
\end{frame}

\begin{frame}
  \frametitle{The Memory Wall}

  \begin{center}
    \includegraphics[width=50ex]{img/memwall}

    Memory Wall = $\text{processor cycles} / \text{memory cycles}$
  \end{center}

  Addressed with caches (not scalable) and \textit{latency hiding}.
\end{frame}

\begin{frame}
  \frametitle{This is why GPUs are useful}

  The design of GPUs directly attacks these two problems.

  \begin{itemize}
  \item\textbf{Frequency scaling} becomes less of an issue because we can instead use thousands of (slower) cores.
  \item The \textbf{memory wall} is partially circumvented by using faster and smaller memory, but mostly by \textit{latency hiding}.  With tens of thousands of threads, we can probably find something else to do while some threads are waiting for memory!
  \end{itemize}

  Ultimately, GPUs do \textit{throughput processing}, and operations have (relatively) high latency.
\end{frame}

\begin{frame}
  \frametitle{GPUs compared to CPUs}

  \includegraphics[width=\textwidth]{img/cpu-gpu-architecture.pdf}

  \begin{itemize}
  \item GPUs have \textit{thousands} of simple cores and taking full advantage of their compute power requires \textit{tens of thousands} of threads.
  \item GPU threads are very \textit{restricted} in what they can do: no stack, no allocation, limited control flow, etc.
  \item Potentially \textit{very high performance} and \textit{lower power usage} compared to CPUs, but programming them is \textit{hard}.
  \end{itemize}
\end{frame}

\begin{frame}[fragile,t]
  \frametitle{GPUs and Memory}
  \begin{center}
    \includegraphics[height=45ex]{img/gpubandwidth.png}
  \end{center}
\end{frame}

\begin{frame}[fragile,t]
  \frametitle{GPUs and GFLOPS}
  \begin{center}
    \includegraphics[height=43ex]{img/gpugflops.png}
  \end{center}
\end{frame}

\section{The GPU Architecture}

\begin{frame}
  \tableofcontents[currentsection]
\end{frame}

\begin{frame}
  The following slides are taken from the presentation \textit{Introduction to GPU Architecture} by Ofer Rosenberg of AMD.
\end{frame}

{
\setbeamercolor{background canvas}{bg=}
\includepdf[pages=4-38]{img/Introduction-to-GPUs.pdf}
}

\begin{frame}
  \frametitle{The GPU we will be using: Radeon HD 7800\footnote{\url{https://developer.amd.com/wordpress/media/2012/12/AMD_Southern_Islands_Instruction_Set_Architecture.pdf}}}

  \includegraphics[width=\textwidth]{img/pitcairn.png}
\end{frame}

\begin{frame}
  \frametitle{Zooming in on the Compute Units}

  \includegraphics[width=\textwidth]{img/pitcairn-cus.png}

  \begin{itemize}
  \item Each vector-ALU executes a \textit{wavefront} of 64 work-items over four clock cycles.
  \item Many wavefronts in flight at once to hide latency.
  \end{itemize}
\end{frame}

\section{The OpenCL Programming Model}

\begin{frame}
  \tableofcontents[currentsection]
\end{frame}

\begin{frame}[fragile,t]
  \frametitle{GPU programming}
  \begin{center}
    \includegraphics[height=43ex]{img/MemCpy1.pdf}
  \end{center}
\end{frame}

\begin{frame}[fragile,t]
  \frametitle{GPU programming}
  \begin{center}
    \includegraphics[height=43ex]{img/MemCpy2.pdf}
  \end{center}
\end{frame}

\begin{frame}[fragile,t]
  \frametitle{GPU programming}
  \begin{center}
    \includegraphics[height=43ex]{img/MemCpy3.pdf}
  \end{center}
\end{frame}

\begin{frame}
  \frametitle{OpenCL for this course}

  \begin{itemize}
  \item OpenCL is a standard C API for programming GPUs and other ``accelerators''.
  \item OpenCL is very low-level and very boilerplate-heavy.
  \item Any real application will build domain-specific abstraction layers on top.
\item Since we want to teach you \textit{actual} OpenCL, we can't do that, but we will use a small library of abbreviations and helpers: \texttt{clutils.h} \item OpenCL comprises ordinary code running on the \textit{host} (CPU), which calls API functions to direct the \textit{device} (e.g. GPU). \end{itemize} \bigskip \hfill\includegraphics[width=0.2\textwidth]{img/opencl-logo.png} \url{https://www.khronos.org/registry/OpenCL/sdk/1.0/docs/man/xhtml/} \end{frame} \begin{frame} \frametitle{OpenCL is an SIMT model} \textit{Single Instruction Multiple Threads} means we provide a \textit{sequential function} that is executed in parallel by multiple threads (``work items'' in OpenCL). \begin{center} \includegraphics[width=0.75\textwidth]{img/ndrange.png} \end{center} Threads are arranged in \textit{workgroups}, which form an \textit{NDRange} (often called \textit{grid}). \end{frame} \begin{frame} \frametitle{OpenCL Platforms and Devices} A \textit{platform} is more like a \textit{vendor} (technically, an OpenCL backend or driver). Each platform provides access to zero or more \textit{devices}. \begin{center} \includegraphics[width=\textwidth]{img/opencl-platforms-devices.png} \end{center} To use OpenCL, we must pick a \textit{platform}, then one of its \textit{devices}, use that to create a \textit{context}, and then a \textit{command queue} to which we can finally enqueue device operations. \end{frame} \begin{frame}[fragile,fragile] \frametitle{Listing available devices (Day1/devices.c)} \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_int clGetPlatformIDs (cl_uint num_entries, cl_platform_id *platforms, cl_uint *num_platforms) \end{lstlisting} \begin{lstlisting} cl_uint num_platforms; // Find the number of platforms. OPENCL_SUCCEED( clGetPlatformIDs(0, NULL, &num_platforms)); printf("Found %d platforms\n", (int)num_platforms); \end{lstlisting} The \texttt{OPENCL\_SUCCEED()} macro translates OpenCL error codes to strings and aborts the process in case of error. Proper error handling is inherently application-specific and left as a very boring exercise. \end{frame} \begin{frame}[fragile] \begin{lstlisting} // Make room for them. cl_platform_id *all_platforms = calloc(num_platforms, sizeof(cl_platform_id)); // Fetch all the platforms. OPENCL_SUCCEED( clGetPlatformIDs(num_platforms, all_platforms, NULL)); for (unsigned int i = 0; i < num_platforms; i++) { ... } \end{lstlisting} \end{frame} \begin{frame}[fragile,fragile] \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_int clGetPlatformInfo (cl_platform_id platform, cl_platform_info param_name, size_t param_value_size, void *param_value, size_t *param_value_size_ret) \end{lstlisting} \begin{lstlisting} size_t req_bytes; char *name; // How much space do we need for the platform name? OPENCL_SUCCEED( clGetPlatformInfo(all_platforms[i], CL_PLATFORM_NAME, 0, NULL, &req_bytes)); \end{lstlisting} \end{frame} \begin{frame}[fragile] \begin{lstlisting} // Allocate space for the name and fetch it. name = malloc(req_bytes); OPENCL_SUCCEED( clGetPlatformInfo(all_platforms[i], CL_PLATFORM_NAME, req_bytes, name, NULL)); printf("Platform %d: %s\n", i, name); free(name); \end{lstlisting} \end{frame} \begin{frame}[fragile] \begin{lstlisting} // Now let us print the names of all the devices, // first we count how many of them exist. cl_uint num_devices; OPENCL_SUCCEED( clGetDeviceIDs(all_platforms[i], CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices)); // Then we make room for them. 
cl_device_id *platform_devices = calloc(num_devices, sizeof(cl_device_id)); // Then we fetch them. OPENCL_SUCCEED( clGetDeviceIDs(all_platforms[i], CL_DEVICE_TYPE_ALL, num_devices, platform_devices, NULL)); \end{lstlisting} \end{frame} \begin{frame}[fragile] \begin{lstlisting} for (unsigned int j = 0; j < num_devices; j++) { // How much space do we need for the device name? OPENCL_SUCCEED( clGetDeviceInfo(platform_devices[j], CL_DEVICE_NAME, 0, NULL, &req_bytes)); // Allocate space for the name and fetch it. name = malloc(req_bytes); OPENCL_SUCCEED( clGetDeviceInfo(platform_devices[j], CL_DEVICE_NAME, req_bytes, name, NULL)); printf("\tDevice %d: %s\n", j, name); free(name); } \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{OpenCL in Visual Studio} \begin{block}{Warning} Neither Cosmin nor I are Windows users and we are entirely unexperienced with Visual Studio. \end{block} \begin{itemize} \item Ensure the AMD OpenCL SDK is installed. \item After creating a new project, edit its properties and set... \begin{enumerate} \item \textit{C/C++$\rightarrow$SDL checks} to No. \item \textit{C/C++$\rightarrow$Additional Include Directories} add \verb!C:\Program Files (x86)\AMD APP SDK\3.0\include!. \item \textit{Linker$\rightarrow$Additional Library Directories} add \verb!C:\Program Files (x86)\AMD APP SDK\3.0\lib!. \item \textit{Linker$\rightarrow$Input$\rightarrow$Additional Dependencies} add \verb!OpenCL.lib!. \end{enumerate} \item All but step 1 can be done by using the \texttt{AMDOpenCL.props} property sheet in the Git repository. \item Make sure you are doing a 64-bit build (VS calls this ``x64''). \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Obtaining a \texttt{cl\_command\_queue} (clutils.h)} Assuming variables \texttt{platform\_index} and \texttt{device\_index}. 
\begin{lstlisting} cl_uint num_platforms; OPENCL_SUCCEED( clGetPlatformIDs(0, NULL, &num_platforms)); cl_platform_id *all_platforms = (cl_platform_id*) calloc(num_platforms, sizeof(cl_platform_id)); OPENCL_SUCCEED( clGetPlatformIDs(num_platforms, all_platforms, NULL)); assert(platform_index < num_platforms); cl_platform_id platform = all_platforms[platform_index]; \end{lstlisting} \end{frame} \begin{frame}[fragile] \begin{lstlisting} cl_uint num_devices; OPENCL_SUCCEED( clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, NULL, &num_devices)); cl_device_id *platform_devices = (cl_device_id*) calloc(num_devices, sizeof(cl_device_id)); OPENCL_SUCCEED( clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, num_devices, platform_devices, NULL)); assert(device_index < num_devices); cl_device_id device = platform_devices[device_index]; \end{lstlisting} \end{frame} \begin{frame}[fragile] \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_context clCreateContext (cl_context_properties *properties, cl_uint num_devices, const cl_device_id *devices, void *pfn_notify (...), void *user_data, cl_int *errcode_ret) \end{lstlisting} \begin{lstlisting} cl_context_properties properties[] = { CL_CONTEXT_PLATFORM, (cl_context_properties)platform, 0 }; cl_int error; cl_context ctx = clCreateContext(properties, 1, &device, NULL, NULL, &error); OPENCL_SUCCEED(error); \end{lstlisting} \end{frame} \begin{frame}[fragile,fragile] \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_command_queue clCreateCommandQueue (cl_context context, cl_device_id device, cl_command_queue_properties properties, cl_int *errcode_ret) \end{lstlisting} \begin{lstlisting} cl_command_queue queue = clCreateCommandQueue(*ctx, *device, 0, &error); OPENCL_SUCCEED(error); \end{lstlisting} Using \texttt{clutils.h}, all of the above can be replaced with: \begin{lstlisting} cl_context ctx; cl_command_queue queue; cl_device_id device; opencl_init_command_queue_default (&device, &ctx, &queue); \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{Rot-13 in OpenCL (Day1/rot13.c)} Rot-13 is a cutting edge encryption algorithm. In C, it is: \begin{lstlisting} void rot13(char *out, char *in, int n) { for (int i = 0; i < n; i++) { if (i < n) { if (in[i] >= 'a' && in[i] <= 'z') { out[i] = (in[i] - 'a' + 13) % 26 + 'a'; } else { out[i] = in[i]; } } } } \end{lstlisting} Here restricted to operate on lowercase ASCII only to ensure readable output. \end{frame} \begin{frame}[fragile] \frametitle{Loading OpenCL programs} We obtain an OpenCL \textit{program} by passing its source (written in OpenCL C) to \texttt{clBuildProgram()}. Lots of boilerplate again; let's just use \texttt{clutils.h}: \begin{lstlisting} cl_program program = opencl_build_program(ctx, device, "kernels/rot13.cl", ""); \end{lstlisting} OpenCL C is a cut-down dialect of C with many restrictions: \begin{itemize} \item No function pointers. \item No recursion. \item Limited standard library. \item No memory allocation. \item No printing to the screen. \item \textit{Etc...} \end{itemize} \end{frame} \begin{frame}[fragile] \frametitle{Kernel functions (Day1/kernels/rot13.cl)} An OpenCL C program contains kernel functions that serve as entry points: \begin{lstlisting} // Rot-13 for lowercase ASCII. 
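// One work-item handles one character; the bounds check below guards
// the extra work-items created when the global size is rounded up.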
kernel void rot13(global char *out, global char *in, int n) { int gtid = get_global_id(0); if (gtid < n) { if (in[gtid] >= 'a' && in[gtid] <= 'z') { out[gtid] = (in[gtid] - 'a' + 13) % 26 + 'a'; } else { out[gtid] = in[gtid]; } } } \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{Accesing kernels (Day1/rot13.c)} To launch a kernel on the GPU from the host, we first use \texttt{clCreateKernel()} with the \texttt{cl\_program} object we got back: \begin{lstlisting} cl_kernel rot13_k = clCreateKernel(program, "rot13", &error); OPENCL_SUCCEED(error); \end{lstlisting} \begin{itemize} \item Now we can ask the GPU to run the kernel. \item Except that GPUs have their own separate memory, so we have no data for the kernel to work on! \end{itemize} \end{frame} \begin{frame}[fragile,fragile] \frametitle{Allocating GPU memory} \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_mem clCreateBuffer(cl_context context, cl_mem_flags flags, size_t size, void *host_ptr, cl_int *errcode_ret) \end{lstlisting} \begin{lstlisting} char *string = "Hello, World!\n"; cl_int n = strlen(string); cl_mem input = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n, string, &error); cl_mem output = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n, NULL, &error); \end{lstlisting} \end{frame} \begin{frame}[fragile,fragile] \frametitle{Passing arguments to the kernel} Remember that \texttt{rot13\_k} object? We finally get to use it. \begin{lstlisting} clSetKernelArg (rot13_k, 0, sizeof(cl_mem), &output); clSetKernelArg (rot13_k, 1, sizeof(cl_mem), &input); clSetKernelArg (rot13_k, 2, sizeof(cl_int), &n); \end{lstlisting} Reminder on \texttt{Day1/kernels/rot13.cl}: \begin{lstlisting} kernel void rot13(global char *out, global char *in, int n) { ... } \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{Launching a kernel} When launching a kernel, we must specify the layout of the grid: \begin{itemize} \item The number of dimensions (1, 2, or 3). \item The size of each workgroup in each dimension. \item The total number of threads in each dimension (which must be divisible by the workgroup size in that dimension). \end{itemize} For our rot-13, we want a 1D grid with one thread per input, rounded up to the workgroup size. \begin{lstlisting} size_t local_work_size[1] = { 256 }; size_t global_work_size[1] = { div_rounding_up(n, local_work_size[0]) * local_work_size[0] }; \end{lstlisting} Workgroup size is a tunable parameter, but we'll always pick 256 for now. \end{frame} \begin{frame}[fragile,fragile] \frametitle{\texttt{clEnqueueNDRangeKernel()}} \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_int clEnqueueNDRangeKernel (cl_command_queue command_queue, cl_kernel kernel, cl_uint work_dim, const size_t *global_work_offset, const size_t *global_work_size, const size_t *local_work_size, cl_uint num_events_in_wait_list, const cl_event *event_wait_list, cl_event *event) \end{lstlisting} \begin{lstlisting} clEnqueueNDRangeKernel(queue, rot13_k, 1, NULL, global_work_size, local_work_size, 0, NULL, NULL); \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{More on command queues} \begin{itemize} \item Enqueuing a command is \textit{asynchronous}. It might start executing immediately, soon, or not at all. \item Use \texttt{clFinish()} to ensure that all operations have finished: \end{itemize} \begin{lstlisting} OPENCL_SUCCEED(clFinish(queue)); \end{lstlisting} This is also where execution errors are typically reported. 
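\end{frame}

\begin{frame}[fragile]
  \frametitle{Putting it together: one rot-13 launch (sketch)}

  A condensed recap of the host-side launch pattern from the previous
  slides. This is a sketch rather than code copied from
  \texttt{Day1/rot13.c}; it only reuses calls and helpers already
  introduced (\texttt{OPENCL\_SUCCEED()}, \texttt{div\_rounding\_up()}).

\begin{lstlisting}
// Pass the buffers and the length to the kernel.
clSetKernelArg(rot13_k, 0, sizeof(cl_mem), &output);
clSetKernelArg(rot13_k, 1, sizeof(cl_mem), &input);
clSetKernelArg(rot13_k, 2, sizeof(cl_int), &n);

// One work-item per character, rounded up to the workgroup size.
size_t local_work_size[1] = { 256 };
size_t global_work_size[1] =
  { div_rounding_up(n, local_work_size[0]) * local_work_size[0] };

// Enqueue asynchronously, then wait for completion.
OPENCL_SUCCEED(
  clEnqueueNDRangeKernel(queue, rot13_k, 1, NULL,
                         global_work_size, local_work_size,
                         0, NULL, NULL));
OPENCL_SUCCEED(clFinish(queue));
\end{lstlisting}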
\end{frame}

\begin{frame}[fragile]
  \frametitle{Reading results back from the GPU}

\begin{lstlisting}[backgroundcolor=\color{lightgray}]
cl_int clEnqueueReadBuffer (cl_command_queue command_queue,
                            cl_mem buffer,
                            cl_bool blocking_read,
                            size_t offset,
                            size_t cb,
                            void *ptr,
                            cl_uint num_events_in_wait_list,
                            const cl_event *event_wait_list,
                            cl_event *event)
\end{lstlisting}

\begin{lstlisting}
char *output_string = malloc(n + 1);
output_string[n] = '\0'; // Ensure 0-termination.
clEnqueueReadBuffer(queue, output, CL_TRUE,
                    0, n, output_string,
                    0, NULL, NULL);
printf("Result: %s\n", output_string);
\end{lstlisting}
\end{frame}

\section{Debugging and Profiling OpenCL}

\begin{frame}
  \tableofcontents[currentsection]
\end{frame}

\begin{frame}[fragile]
  \frametitle{Debugging with \texttt{oclgrind}}

  \begin{itemize}
  \item \url{https://github.com/jrprice/Oclgrind}
  \item Makes itself available as an OpenCL platform that runs your kernels in an error-checking interpreter.
  \item A lot like \texttt{valgrind}.
  \item Fairly slow, so use it on reduced workloads.
  \end{itemize}

\begin{lstlisting}[basicstyle=\small,breaklines=true]
$ oclgrind ./rot13
Using platform: Oclgrind
Using device: Oclgrind Simulator

Invalid read of size 1 at global memory address 0x100000000000e
  Kernel: rot13
  Entity: Global(14,0,0) Local(14,0,0) Group(0,0,0)
  %1 = load i8, i8 addrspace(1)* %arrayidx, align 1, !dbg !20
  At line 5 of input.cl:
    if (in[gtid] >= 'a' && in[gtid] <= 'z') {
...
\end{lstlisting}
\end{frame}

\begin{frame}[fragile]
  \frametitle{Profiling with Wall Clock Time}

  Just like how you profile anything else.

\begin{lstlisting}
// Current wall time in microseconds.
static int64_t get_wall_time(void);
\end{lstlisting}

  Use it like this:

\begin{lstlisting}
int64_t before = get_wall_time();
...
clFinish(queue);
int64_t after = get_wall_time();
printf("Took %d microseconds\n", (int)(after-before));
\end{lstlisting}

  The \lstinline{clFinish()} call is crucial as otherwise the device may still be working (remember that most enqueuings are \textit{asynchronous}).
\end{frame}

\begin{frame}[fragile]
  \frametitle{Profiling with Events}

  An event is an object that communicates the status of an OpenCL command.  Whenever we enqueue something in a command queue, we can get an event object back.

\begin{lstlisting}[backgroundcolor=\color{lightgray}]
cl_int clEnqueueNDRangeKernel (cl_command_queue command_queue,
                               cl_kernel kernel,
                               cl_uint work_dim,
                               const size_t *global_work_offset,
                               const size_t *global_work_size,
                               const size_t *local_work_size,
                               cl_uint num_events_in_wait_list,
                               const cl_event *event_wait_list,
                               @cl_event *event@)
\end{lstlisting}
\end{frame}

\begin{frame}[fragile]
  \frametitle{Retrieving Information from Events}

\begin{lstlisting}[backgroundcolor=\color{lightgray}]
cl_int clGetEventInfo (cl_event event,
                       cl_event_info param_name,
                       size_t param_value_size,
                       void *param_value,
                       size_t *param_value_size_ret)
\end{lstlisting}

\begin{lstlisting}
cl_int clGetEventProfilingInfo (cl_event event,
                                cl_profiling_info param_name,
                                size_t param_value_size,
                                void *param_value,
                                size_t *param_value_size_ret)
\end{lstlisting}

  The latter only works if \lstinline{CL_QUEUE_PROFILING_ENABLE} was passed to \lstinline{clCreateCommandQueue()}.
\end{frame}

\begin{frame}[fragile]
  \frametitle{Values for \texttt{cl\_profiling\_info}}

  \begin{description}
  \item[\texttt{CL\_PROFILING\_COMMAND\_QUEUED}]\hfill\\
    When the command was queued.
  \item[\texttt{CL\_PROFILING\_COMMAND\_SUBMIT}]\hfill\\
    When the command was sent to the device.
\item[\texttt{CL\_PROFILING\_COMMAND\_START}]\hfill\\
    When the command started executing.
  \item[\texttt{CL\_PROFILING\_COMMAND\_END}]\hfill\\
    When the command finished executing.
  \end{description}

  \begin{itemize}
  \item All produce a value of type \lstinline{cl_ulong}.
  \item \lstinline{clGetEventProfilingInfo()} returns \lstinline{CL_PROFILING_INFO_NOT_AVAILABLE} if the information is not available (yet).
  \end{itemize}
\end{frame}

\begin{frame}[fragile]
  \frametitle{Example of Profiling with Events}

\begin{lstlisting}
cl_event write_e;
clEnqueueWriteBuffer(queue, to, CL_FALSE,
                     0, n, from,
                     0, NULL, &write_e);
...
cl_ulong start, end;
clGetEventProfilingInfo
  (write_e, CL_PROFILING_COMMAND_START,
   sizeof(start), &start, NULL);
clGetEventProfilingInfo
  (write_e, CL_PROFILING_COMMAND_END,
   sizeof(end), &end, NULL);
\end{lstlisting}
\end{frame}

\begin{frame}
  \frametitle{Event Profiling versus Wall Clock Profiling}

  \begin{itemize}
  \item Event profiling is \textbf{much more fine-grained} and lets us see the per-operation runtime.
  \item Measuring per-operation with wall clock would require us to \texttt{clFinish()} after every operation, which is very slow because it prevents pipelining.
  \item Wall clock profiling tells us about \textbf{overall application performance}.  We generally cannot just sum the runtimes for each event, since the commands may overlap in time, and the events do not count host-based overheads.
  \item \textbf{Ideally, use both.}
  \end{itemize}

  However, neither of these approaches will tell us \textit{why} something is slow...
\end{frame}

\section{Coalesced Memory Accesses}

\begin{frame}
  \tableofcontents[currentsection]
\end{frame}

\begin{frame}[fragile]
  \frametitle{Summing the rows of a matrix}

  Consider summing the rows/columns of a $10000\times{}10000$ row-major matrix on CPU and GPU:

  \includegraphics[width=\textwidth]{img/rowcolumnarrays.jpg}
\end{frame}

\begin{frame}[fragile]
  \frametitle{Performance}

\begin{lstlisting}
for (int row = 0; row < n; row++) {
  cl_int sum = 0;
  for (int col = 0; col < n; col++) {
    sum += matrix[row*n+col];
  }
  sums[row] = sum;
}
\end{lstlisting}

  On the GPU, we assign one iteration of the outer loop to each thread.

  \bigskip

  \begin{tabular}{lr}
    Summing rows on CPU & $22025\mu{}s$ \\
    Summing columns on CPU & $741225\mu{}s$ \\
    Summing rows on GPU & $60461\mu{}s$ \\
    Summing columns on GPU & $6169\mu{}s$
  \end{tabular}
\end{frame}

\begin{frame}
  \frametitle{Why does this go so badly?}

  The reason is our memory access pattern -- specifically, our loads are not \textit{coalesced}.

  \begin{block}{Memory Coalescing}
    All threads within each consecutive 16-thread gang should simultaneously access consecutive elements in memory to maximise memory bus usage.
  \end{block}

  \begin{itemize}
  \item If neighboring threads access widely distant memory in the same clock cycle, the loads have to be \textit{sequentialised}, instead of all fulfilled using one (wide) memory bus operation.
  \item The HD 7800 has a memory bus width of 256 bits, so only using 32 bits per operation exploits an eighth of the bandwidth.
  \end{itemize}
\end{frame}

\begin{frame}
  \frametitle{The accesses specifically}

\begin{table}[H]
  \caption{Current accesses - this is worst case behaviour!}
  \begin{tabular}{l|llll}
    \textbf{Iteration} & \textbf{Thread $0$} & \textbf{Thread $1$} & \textbf{Thread $2$} & ... \\\hline
    0 & $\texttt{matrix}[0]$ & $\texttt{matrix}[n]$ & $\texttt{matrix}[2n]$ & ... \\
    1 & $\texttt{matrix}[1]$ & $\texttt{matrix}[n+1]$ & $\texttt{matrix}[2n+1]$ & ...
\\
    2 & $\texttt{matrix}[2]$ & $\texttt{matrix}[n+2]$ & $\texttt{matrix}[2n+2]$ & ... \\
  \end{tabular}
\end{table}

\begin{table}[H]
  \caption{These are the accesses we want}
  \begin{tabular}{l|llll}
    \textbf{Iteration} & \textbf{Thread $0$} & \textbf{Thread $1$} & \textbf{Thread $2$} & ... \\\hline
    0 & $\texttt{matrix}[0]$ & $\texttt{matrix}[1]$ & $\texttt{matrix}[2]$ & ... \\
    1 & $\texttt{matrix}[n]$ & $\texttt{matrix}[n+1]$ & $\texttt{matrix}[n+2]$ & ... \\
    2 & $\texttt{matrix}[2n]$ & $\texttt{matrix}[2n+1]$ & $\texttt{matrix}[2n+2]$ & ... \\
  \end{tabular}
\end{table}

\textit{This is the exact opposite of what we are usually taught for CPUs!}
\end{frame}

\section{Programming Exercises}

\begin{frame}
  \tableofcontents[currentsection]
\end{frame}

\begin{frame}
  \frametitle{Profiling rot-13 with wall clock and events}

  \begin{itemize}
  \item \texttt{Day1-exercises/rot13-profile-simple.c}
  \item \texttt{Day1-exercises/rot13-profile-events.c}
  \end{itemize}

  Try profiling both one and multiple kernel launches.  What do you observe?  What if you call \texttt{clFinish()} after every kernel invocation?  What if you also count the cost of copying from the CPU to the GPU?
\end{frame}

\begin{frame}
  \frametitle{Reversing a string in parallel}

  Write an OpenCL program for reversing a string.  Base it heavily on the Rot-13 program.

  Create your own Visual Studio project for it as well.
\end{frame}

\begin{frame}[fragile]
  \frametitle{Load balancing (\texttt{Day1-exercises/fibfact.c})}

  Finish the program, which is supposed to do the equivalent of:

\begin{lstlisting}
void f (int k, float *out, int *ns, int *op) {
  for (int i = 0; i < k; i++) {
    int n = ns[i];
    int x;
    if (op[i] == 1) {
      x = fib(n);
    } else {
      x = fact(n);
    }
    out[i] = x;
  }
}
\end{lstlisting}

  \begin{itemize}
  \item Where \texttt{fact()} and \texttt{fib()} are the usual factorial and Fibonacci functions.
  \item How fast does it run for various contents of \texttt{ns} and \texttt{ops}?  Can you make it faster by preprocessing these arrays?
  \end{itemize}
\end{frame}

\begin{frame}
  \frametitle{Implementing Game of Life (\texttt{Day1-exercises/life-arrays.c})}

  Conway's Game of Life is a simple 2D cellular automaton (``stencil'') that is embarrassingly parallel.  Each cell is updated based on the value of its neighbours.

  \begin{center}
    \includegraphics[width=4cm]{img/stencil.png}
  \end{center}
\end{frame}

\begin{frame}
  \frametitle{Using image objects for Game of Life (\texttt{Day1-exercises/life-images.c})}

  \begin{itemize}
  \item GPUs have special hardware for textures, and this can be used whenever we need 2D arrays with spatial locality (like in Game of Life).
  \item Instead of \texttt{clCreateBuffer()}, use \texttt{clCreateImage()}, and in the kernel use the \texttt{image2d\_t} type.
  \item Implement this as a 2D kernel in \texttt{Day1-exercises/life-images.c}.
  \item Main challenge: understand the OpenCL documentation and figure out how to represent our information in a colour channel.
\end{itemize} \end{frame} \begin{frame}[fragile,fragile,fragile] \frametitle{Help for image objects} \begin{lstlisting}[backgroundcolor=\color{lightgray}] cl_mem clCreateImage (cl_context context, cl_mem_flags flags, const cl_image_format *image_format, const cl_image_desc *image_desc, void *host_ptr, cl_int *errcode_ret) \end{lstlisting} This is probably the best image format for us: \begin{lstlisting} cl_image_format format = { .image_channel_order = CL_RGBA, .image_channel_data_type = CL_UNSIGNED_INT8 }; \end{lstlisting} \end{frame} \begin{frame}[fragile] \frametitle{Image objects inside kernels} Inside the kernel we can use these functions to read/write elements: \begin{lstlisting} unsigned int4 read_imageui(image2d_t image, sampler_t sampler, int2 coord) void write_imageui(image2d_t image, int2 coord, unsigned int4 color) \end{lstlisting} E.g. \begin{lstlisting} write_imageui(img, (int2)(x,y), (uint4)(r,g,b,a)) uint4 v = read_imageui(img, sampler, (int2)(x,y)); // v.s0, v.s1, v.s2, v.s3 \end{lstlisting} \end{frame} \begin{frame} \frametitle{Matrix multiplication} \begin{itemize} \item Implement matrix multiplication as a 2D kernel with one thread per element of the output matrix. \item \textbf{Spoiler alert:} you will find that it is slow. Why? \end{itemize} Cosmin will eventually tell you how to make it less slow. \end{frame} \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End:
\chapter{Theoretical Foundations} \label{chap:TheoreticalFoundations} \input{chapters/theoretical_foundations/sections/neural_networks} \input{chapters/theoretical_foundations/sections/object_detection} \input{chapters/theoretical_foundations/sections/embeddings} \input{chapters/theoretical_foundations/sections/evaluating_information_retrieval} \input{chapters/theoretical_foundations/sections/tracking_evaluation} \input{chapters/theoretical_foundations/sections/single_object_tracking} \input{chapters/theoretical_foundations/sections/multiple_object_tracking} \input{chapters/theoretical_foundations/sections/feature_extraction}
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{framed} \usepackage{hyperref} \usepackage{listings} \usepackage[parfill]{parskip} \usepackage{physics} \begin{document} \title{Exercise/Problem Guide for \textit{Nature of Computation} by Moore and Mertens} \author{Muthu Chidambaram} \date{Last Updated: \today} \maketitle \tableofcontents \newpage \section*{About} \begin{quote} \textit{``Computer science is no more about computers than astronomy is about telescopes.''} - Edsger Dijkstra (Maybe) \end{quote} These notes contain my solutions to some exercises and problems from the book \textit{Nature of Computation} by Christopher Moore and Stephan Mertens. \include{chapter_1} \include{chapter_2} \include{chapter_3} \include{chapter_4} \include{appendix} \end{document}
% Options for packages loaded elsewhere \PassOptionsToPackage{unicode}{hyperref} \PassOptionsToPackage{hyphens}{url} % \documentclass[ ]{article} \usepackage{amsmath,amssymb} \usepackage{lmodern} \usepackage{ifxetex,ifluatex} \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \usepackage{textcomp} % provide euro and other symbols \else % if luatex or xetex \usepackage{unicode-math} \defaultfontfeatures{Scale=MatchLowercase} \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} \fi % Use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} \IfFileExists{microtype.sty}{% use microtype if available \usepackage[]{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \makeatletter \@ifundefined{KOMAClassName}{% if non-KOMA class \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt}} }{% if KOMA class \KOMAoptions{parskip=half}} \makeatother \usepackage{xcolor} \IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available \IfFileExists{bookmark.sty}{\usepackage{bookmark}}{\usepackage{hyperref}} \hypersetup{ pdftitle={MHsampler}, pdfauthor={XC}, hidelinks, pdfcreator={LaTeX via pandoc}} \urlstyle{same} % disable monospaced font for URLs \usepackage[margin=1in]{geometry} \usepackage{color} \usepackage{fancyvrb} \newcommand{\VerbBar}{|} \newcommand{\VERB}{\Verb[commandchars=\\\{\}]} \DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} % Add ',fontsize=\small' for more characters per line \usepackage{framed} \definecolor{shadecolor}{RGB}{248,248,248} \newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} \newcommand{\AlertTok}[1]{\textcolor[rgb]{0.94,0.16,0.16}{#1}} \newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.77,0.63,0.00}{#1}} \newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\BuiltInTok}[1]{#1} \newcommand{\CharTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\CommentTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{#1}} \newcommand{\DecValTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.64,0.00,0.00}{\textbf{#1}}} \newcommand{\ExtensionTok}[1]{#1} \newcommand{\FloatTok}[1]{\textcolor[rgb]{0.00,0.00,0.81}{#1}} \newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\ImportTok}[1]{#1} \newcommand{\InformationTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.13,0.29,0.53}{\textbf{#1}}} \newcommand{\NormalTok}[1]{#1} \newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.81,0.36,0.00}{\textbf{#1}}} \newcommand{\OtherTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} \newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textit{#1}}} \newcommand{\RegionMarkerTok}[1]{#1} \newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} 
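\CommentTok{\# accept Y with probability min(1, alpha); otherwise keep the previous value}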
\newcommand{\StringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\VariableTok}[1]{\textcolor[rgb]{0.00,0.00,0.00}{#1}} \newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.31,0.60,0.02}{#1}} \newcommand{\WarningTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{\textbf{\textit{#1}}}} \usepackage{graphicx} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} % Set default figure placement to htbp \makeatletter \def\fps@figure{htbp} \makeatother \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{-\maxdimen} % remove section numbering \ifluatex \usepackage{selnolig} % disable illegal ligatures \fi \title{MHsampler} \author{XC} \date{19/07/2021} \begin{document} \maketitle \hypertarget{metropolishastings-algorithms}{% \subsection{Metropolis--Hastings Algorithms}\label{metropolishastings-algorithms}} Unlike IS sampling generating iid samples, MH generates correlated variables from MC, but it provide easier proposal when IS doesn't apply. \begin{itemize} \item only little need to be known about the target f \item Markovian property leads to efficient decompositions of high-dimensional problems in a sequence of smaller problems that are much easier to solve \item Most of your time and energy will be spent in designing and assessing your MCMC algorithms \item Incredible feature of MH: \begin{itemize} \tightlist \item for every given q(.), we can construct a Metropolis-Hasting kernel whose stationary is the target f \item \end{itemize} \end{itemize} \hypertarget{example-6.1}{% \subsection{Example 6.1}\label{example-6.1}} \begin{itemize} \tightlist \item Target distriubtion: Beta(2.7, 6.3) \item proposal dist q: unif{[}0,1{]}, means does not depend on previous step value of the chain \item MH algo \end{itemize} \begin{Shaded} \begin{Highlighting}[] \NormalTok{a }\OtherTok{\textless{}{-}} \FloatTok{2.7}\NormalTok{; b }\OtherTok{=} \FloatTok{6.3} \NormalTok{Nsim }\OtherTok{\textless{}{-}} \DecValTok{5000} \NormalTok{X }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{), Nsim) }\CommentTok{\# initialize the chain} \ControlFlowTok{for}\NormalTok{(i }\ControlFlowTok{in} \DecValTok{2}\SpecialCharTok{:}\NormalTok{Nsim) \{} \NormalTok{ Y }\OtherTok{=} \FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{) }\CommentTok{\# proposed value from q} \NormalTok{ alpha }\OtherTok{=} \FunctionTok{dbeta}\NormalTok{(Y, a, b) }\SpecialCharTok{/} \FunctionTok{dbeta}\NormalTok{(X[i}\DecValTok{{-}1}\NormalTok{], a, b) }\CommentTok{\# q are all unif[0, 1] cancell out} \NormalTok{ X[i] }\OtherTok{=}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{] }\SpecialCharTok{+}\NormalTok{ (Y }\SpecialCharTok{{-}}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{]) }\SpecialCharTok{*}\NormalTok{ (alpha }\SpecialCharTok{\textgreater{}} \FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{)) }\CommentTok{\# logical true or false} \NormalTok{\}} \FunctionTok{str}\NormalTok{(X)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## num [1:5000] 0.686 0.236 0.236 0.379 
0.379 ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{plot}\NormalTok{(X, }\AttributeTok{type =} \StringTok{"l"}\NormalTok{) }\CommentTok{\# no pattern } \end{Highlighting} \end{Shaded} \includegraphics{MH_sampler_files/figure-latex/unnamed-chunk-1-1.pdf} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{plot}\NormalTok{(X[}\DecValTok{4500}\SpecialCharTok{:}\DecValTok{4800}\NormalTok{], }\AttributeTok{type =} \StringTok{"l"}\NormalTok{) }\CommentTok{\# for some intervals of time, the sequence (X(t)) does not change because all corresponding Y\textquotesingle{}s are rejected. } \end{Highlighting} \end{Shaded} \includegraphics{MH_sampler_files/figure-latex/unnamed-chunk-1-2.pdf} Remark: \begin{itemize} \tightlist \item Those multiple occurrences of the same numerical value (rejected Y's) must be kept in the sample as such, otherwise, the validity of the approximation of f is lost! \item Consider the entire chain as a sample, its histogram properly approximates the Be(2.7, 6.3) target. \end{itemize} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{hist}\NormalTok{(X, }\AttributeTok{breaks =} \DecValTok{300}\NormalTok{)} \FunctionTok{lines}\NormalTok{(}\FunctionTok{rbeta}\NormalTok{(}\DecValTok{5000}\NormalTok{, }\FloatTok{2.7}\NormalTok{, }\FloatTok{6.3}\NormalTok{), }\AttributeTok{col =} \StringTok{"red"}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{MH_sampler_files/figure-latex/unnamed-chunk-2-1.pdf} Can checked even further using a Kolmogorov--Smirnov test of equality between the two samples: \begin{Shaded} \begin{Highlighting}[] \FunctionTok{ks.test}\NormalTok{(}\FunctionTok{jitter}\NormalTok{(X), }\FunctionTok{rbeta}\NormalTok{(}\DecValTok{5000}\NormalTok{, a, b))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## ## Two-sample Kolmogorov-Smirnov test ## ## data: jitter(X) and rbeta(5000, a, b) ## D = 0.0346, p-value = 0.005028 ## alternative hypothesis: two-sided \end{verbatim} Can also compare the mean and variance \begin{Shaded} \begin{Highlighting}[] \FunctionTok{mean}\NormalTok{(X)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.3027625 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{var}\NormalTok{(X)} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.02085043 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \CommentTok{\# theoretical } \NormalTok{(mean\_theo }\OtherTok{\textless{}{-}}\NormalTok{ a}\SpecialCharTok{/}\NormalTok{(a}\SpecialCharTok{+}\NormalTok{b))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.3 \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \NormalTok{(var\_theo }\OtherTok{\textless{}{-}}\NormalTok{ a}\SpecialCharTok{*}\NormalTok{b }\SpecialCharTok{/}\NormalTok{ ((a}\SpecialCharTok{+}\NormalTok{b)}\SpecialCharTok{\^{}}\DecValTok{2} \SpecialCharTok{*}\NormalTok{ (a}\SpecialCharTok{+}\NormalTok{b}\SpecialCharTok{+}\DecValTok{1}\NormalTok{)))} \end{Highlighting} \end{Shaded} \begin{verbatim} ## [1] 0.021 \end{verbatim} Remark: - although rbeta output look similar as MH simulation output - rbeta generates iid, but MH generates correlated samples. - so the quality of the samples are degraded, and need more simulations to achieve the same precision. - need the " effective sample size for Markov chains" (Section 8.4.3). 
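To make the loss of precision concrete, a small diagnostic sketch is given below (an added illustration, not part of the original notes; it assumes the vector \texttt{X} produced by the chunk above is still in the workspace and uses only base R). It estimates the acceptance rate of the chain and a crude effective sample size from the autocorrelation function.

\begin{verbatim}
# Proportion of iterations in which the proposal Y was accepted
# (repeated values in X correspond to rejections).
acc_rate <- mean(diff(X) != 0)
acc_rate

# Crude effective sample size: Nsim / (1 + 2 * sum of positive ACF lags).
rho <- acf(X, lag.max = 100, plot = FALSE)$acf[-1]
ess <- length(X) / (1 + 2 * sum(rho[rho > 0]))
ess
\end{verbatim}

An effective sample size well below \(Nsim = 5000\) is exactly the degradation referred to above: the correlated MH draws carry less information than the same number of iid draws from \texttt{rbeta}.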
\hypertarget{properties-of-mh}{% \subsubsection{Properties of MH}\label{properties-of-mh}} \begin{itemize} \item In symmetric case, i.e.~\(q(y|x) = q(x|y)\), the acceptance probabilty alpha only depends on the ratio \(f(y)/ f(x^{(t)})\), so alpha is independent of q \item But the performance of HM will be affected by the choice of q \begin{itemize} \tightlist \item if the supp(q) is too small, compared with the range of f, then the M chain will have difficulty to explore the range of f, and will coverge very slowly. \end{itemize} \item Another property of MH algo: it only depends on the ratios: \(f(y)/ f(x^{(t)})\), \(q(x^{(t)})|y) / q(y|x^{(t)})\) hence independent of normalizing constant. \item q may be chosen in a way that the intractable parts of f is canceled out. \end{itemize} \hypertarget{example-6.2}{% \subsubsection{Example 6.2}\label{example-6.2}} To generate a student-t random variable, (that is, when f corresponds to a t(1) density), it is possible to use a N(0, 1) candidate within a Metropolis--Hastings algorithm \begin{Shaded} \begin{Highlighting}[] \NormalTok{Nsim }\OtherTok{\textless{}{-}} \FloatTok{1e4} \NormalTok{X }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{), Nsim) }\CommentTok{\# intial value } \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{2}\SpecialCharTok{:}\NormalTok{Nsim) \{} \NormalTok{ Y }\OtherTok{\textless{}{-}} \FunctionTok{rnorm}\NormalTok{(}\DecValTok{1}\NormalTok{) }\CommentTok{\# proposal} \NormalTok{ alpha }\OtherTok{\textless{}{-}} \FunctionTok{dt}\NormalTok{(Y, }\DecValTok{1}\NormalTok{) }\SpecialCharTok{*} \FunctionTok{dnorm}\NormalTok{(X[i}\DecValTok{{-}1}\NormalTok{]) }\SpecialCharTok{/}\NormalTok{ (}\FunctionTok{dt}\NormalTok{(X[i}\DecValTok{{-}1}\NormalTok{], }\DecValTok{1}\NormalTok{) }\SpecialCharTok{*} \FunctionTok{dnorm}\NormalTok{(Y))} \NormalTok{ X[i] }\OtherTok{\textless{}{-}}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{] }\SpecialCharTok{+}\NormalTok{ (Y }\SpecialCharTok{{-}}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{]) }\SpecialCharTok{*}\NormalTok{ (alpha }\SpecialCharTok{\textgreater{}} \FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{))} \NormalTok{\}} \FunctionTok{str}\NormalTok{(X) }\CommentTok{\# num [1:10000] 0.6923} \end{Highlighting} \end{Shaded} \begin{verbatim} ## num [1:10000] 0.00831 0.21775 -0.76952 1.07552 0.93699 ... \end{verbatim} \begin{Shaded} \begin{Highlighting}[] \FunctionTok{par}\NormalTok{(}\AttributeTok{mfrow =} \FunctionTok{c}\NormalTok{(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{))} \FunctionTok{hist}\NormalTok{(X, }\AttributeTok{breaks =} \DecValTok{250}\NormalTok{)} \FunctionTok{acf}\NormalTok{(X)} \CommentTok{\# want to see the approximation to P(X \textless{} 3)} \FunctionTok{plot}\NormalTok{(}\FunctionTok{cumsum}\NormalTok{(X }\SpecialCharTok{\textless{}} \DecValTok{3}\NormalTok{) }\SpecialCharTok{/}\NormalTok{ (}\DecValTok{1}\SpecialCharTok{:}\FloatTok{1e4}\NormalTok{), }\AttributeTok{type =} \StringTok{"l"}\NormalTok{, }\AttributeTok{lwd =} \DecValTok{2}\NormalTok{)} \end{Highlighting} \end{Shaded} \includegraphics{MH_sampler_files/figure-latex/unnamed-chunk-5-1.pdf} \hypertarget{more-realistic-situation}{% \subsection{More realistic situation}\label{more-realistic-situation}} When the indept proposal q is derived from a preliminary estimation of the parameters of the model. 
- the proposal could be a normal or t distribution centered at the mle of theta and the variance - covariance matrix be the inverse of fisher informaton matrix \hypertarget{random-walk-mh}{% \subsubsection{Random walk MH}\label{random-walk-mh}} Example 6.4 formal problem of generating the normal distribution N(0, 1) based on a random walk proposal equal to the uniform distribution on {[}−δ, δ{]}. \begin{Shaded} \begin{Highlighting}[] \NormalTok{Uni\_rdwk }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(delta) \{} \NormalTok{ Nsim }\OtherTok{\textless{}{-}} \FloatTok{1e4} \NormalTok{ X }\OtherTok{\textless{}{-}} \FunctionTok{rep}\NormalTok{(}\FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{), Nsim)} \ControlFlowTok{for}\NormalTok{ (i }\ControlFlowTok{in} \DecValTok{2}\SpecialCharTok{:}\NormalTok{Nsim) \{} \NormalTok{ Y }\OtherTok{\textless{}{-}} \FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{, X[i}\DecValTok{{-}1}\NormalTok{] }\SpecialCharTok{{-}}\NormalTok{ delta, X[i}\DecValTok{{-}1}\NormalTok{] }\SpecialCharTok{+}\NormalTok{ delta)} \CommentTok{\# \textless{}{-} rnorm(1, X[i{-}1], 1) \# proposal is N(X[i{-}1], 1) } \NormalTok{ alpha }\OtherTok{\textless{}{-}} \FunctionTok{dnorm}\NormalTok{(Y) }\SpecialCharTok{/} \FunctionTok{dnorm}\NormalTok{(X[i}\DecValTok{{-}1}\NormalTok{])} \NormalTok{ X[i] }\OtherTok{=}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{] }\SpecialCharTok{+}\NormalTok{ (Y }\SpecialCharTok{{-}}\NormalTok{ X[i}\DecValTok{{-}1}\NormalTok{]) }\SpecialCharTok{*}\NormalTok{ (alpha }\SpecialCharTok{\textgreater{}} \FunctionTok{runif}\NormalTok{(}\DecValTok{1}\NormalTok{))} \NormalTok{ \}} \NormalTok{ X} \NormalTok{\}} \end{Highlighting} \end{Shaded} Calibrating the delta with 3 values: 0.1, 1, and 10 \begin{Shaded} \begin{Highlighting}[] \NormalTok{X\_0}\FloatTok{.1} \OtherTok{\textless{}{-}} \FunctionTok{Uni\_rdwk}\NormalTok{(}\FloatTok{0.1}\NormalTok{)} \NormalTok{X\_1 }\OtherTok{\textless{}{-}} \FunctionTok{Uni\_rdwk}\NormalTok{(}\DecValTok{1}\NormalTok{)} \NormalTok{X\_10 }\OtherTok{\textless{}{-}} \FunctionTok{Uni\_rdwk}\NormalTok{(}\DecValTok{10}\NormalTok{)} \FunctionTok{par}\NormalTok{(}\AttributeTok{mfrow =} \FunctionTok{c}\NormalTok{(}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{))} \CommentTok{\# plot cumsum} \NormalTok{plt\_cum }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(X, ylim) \{} \FunctionTok{plot}\NormalTok{(}\FunctionTok{cumsum}\NormalTok{(X) }\SpecialCharTok{/}\NormalTok{ (}\DecValTok{1}\SpecialCharTok{:}\FloatTok{1e4}\NormalTok{), }\AttributeTok{ylim =}\NormalTok{ ylim)} \NormalTok{\}} \FunctionTok{plt\_cum}\NormalTok{(X\_0}\FloatTok{.1}\NormalTok{, }\AttributeTok{ylim =} \FunctionTok{c}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{))} \FunctionTok{plt\_cum}\NormalTok{(X\_1, }\AttributeTok{ylim =} \FunctionTok{c}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{))} \FunctionTok{plt\_cum}\NormalTok{(X\_10, }\AttributeTok{ylim =} \FunctionTok{c}\NormalTok{(}\SpecialCharTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{))} \CommentTok{\# plot hist} \NormalTok{plt\_hist }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(X) \{} \FunctionTok{hist}\NormalTok{(X, }\AttributeTok{breaks =} \DecValTok{250}\NormalTok{)} \NormalTok{\}} \FunctionTok{plt\_hist}\NormalTok{(X\_0}\FloatTok{.1}\NormalTok{)} \FunctionTok{plt\_hist}\NormalTok{(X\_1)} \FunctionTok{plt\_hist}\NormalTok{(X\_10)} \CommentTok{\# Plot ACF} 
\NormalTok{plt\_acf }\OtherTok{\textless{}{-}} \ControlFlowTok{function}\NormalTok{(X) \{}
  \FunctionTok{acf}\NormalTok{(X)}
\NormalTok{\}}
\FunctionTok{plt\_acf}\NormalTok{(X\_0}\FloatTok{.1}\NormalTok{)}
\FunctionTok{plt\_acf}\NormalTok{(X\_1)}
\FunctionTok{plt\_acf}\NormalTok{(X\_10)}
\end{Highlighting}
\end{Shaded}

\includegraphics{MH_sampler_files/figure-latex/unnamed-chunk-7-1.pdf}

Remark:

\begin{itemize}
\tightlist
\item
  Too narrow or too wide a candidate (too small or too large a delta) results in slow convergence and high autocorrelation.
\item
  Calibrating the scale δ of the random walk is crucial to achieving a good approximation to the target distribution in a reasonable number of iterations.
\item
  In more realistic situations, this calibration becomes a challenging issue.
\end{itemize}

\hypertarget{adv-and-disadv-of-rd-walk}{%
\paragraph{Advantages and disadvantages of random walk MH}\label{adv-and-disadv-of-rd-walk}}

\begin{itemize}
\tightlist
\item
  Independent MH only applies to some specific situations, while random walk MH caters to most cases.
\item
  But random walk MH is not the most efficient choice:

  \begin{itemize}
  \tightlist
  \item
    it requires many iterations to overcome difficulties such as low-probability regions between the modal regions of f;
  \item
    due to its symmetry, it spends roughly half the simulation time revisiting regions it has already explored;
  \item
    so there exist alternatives that bypass the perfect symmetry of the random walk to gain efficiency,
  \item
    although they are not always easy to implement.
  \end{itemize}
\end{itemize}

One of these alternatives is the Langevin algorithm, which moves towards higher values of the target f by including the gradient in the proposal.

For targets with a bimodal structure, such as mixture models, the Langevin algorithm can however struggle, since a local mode can be very attractive. The function below defines one such non-trivial target: the (unnormalized) posterior of two probit regression coefficients for the \texttt{Pima.tr} data (from the \texttt{MASS} package), which can serve as the target f for these samplers.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{like}\OtherTok{=}\ControlFlowTok{function}\NormalTok{(beda)\{}
\NormalTok{mia}\OtherTok{=}\FunctionTok{mean}\NormalTok{(Pima.tr}\SpecialCharTok{$}\NormalTok{bmi)}
\FunctionTok{prod}\NormalTok{(}\FunctionTok{pnorm}\NormalTok{(beda[}\DecValTok{1}\NormalTok{]}\SpecialCharTok{+}\NormalTok{(Pima.tr}\SpecialCharTok{$}\NormalTok{bm[Pima.tr}\SpecialCharTok{$}\NormalTok{t}\SpecialCharTok{==}\StringTok{"Yes"}\NormalTok{]}\SpecialCharTok{{-}}
\NormalTok{mia)}\SpecialCharTok{*}\NormalTok{beda[}\DecValTok{2}\NormalTok{]))}\SpecialCharTok{*}
\FunctionTok{prod}\NormalTok{(}\FunctionTok{pnorm}\NormalTok{(}\SpecialCharTok{{-}}\NormalTok{beda[}\DecValTok{1}\NormalTok{]}\SpecialCharTok{{-}}\NormalTok{(Pima.tr}\SpecialCharTok{$}\NormalTok{bm[Pima.tr}\SpecialCharTok{$}\NormalTok{t}\SpecialCharTok{==}\StringTok{"No"}\NormalTok{]}
\SpecialCharTok{{-}}\NormalTok{mia)}\SpecialCharTok{*}\NormalTok{beda[}\DecValTok{2}\NormalTok{]))}\SpecialCharTok{/}\FunctionTok{exp}\NormalTok{(}\FunctionTok{sum}\NormalTok{(beda}\SpecialCharTok{\^{}}\DecValTok{2}\NormalTok{)}\SpecialCharTok{/}\DecValTok{200}\NormalTok{)}
\NormalTok{\}}
\end{Highlighting}
\end{Shaded}

\end{document}
{ "alphanum_fraction": 0.7242117555, "avg_line_length": 43.358649789, "ext": "tex", "hexsha": "4723a7cbc9a4722ae53dfccc08ec55697195269f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "f38ad4229f80cdf6e4708096ce0f4d31dda932d1", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "xc308/APTS_CIS", "max_forks_repo_path": "MH_sampler.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "f38ad4229f80cdf6e4708096ce0f4d31dda932d1", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "xc308/APTS_CIS", "max_issues_repo_path": "MH_sampler.tex", "max_line_length": 366, "max_stars_count": null, "max_stars_repo_head_hexsha": "f38ad4229f80cdf6e4708096ce0f4d31dda932d1", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "xc308/APTS_CIS", "max_stars_repo_path": "MH_sampler.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 6994, "size": 20552 }
\chapter{Knee Joint Design} % The knee joint presented in this thesis is designed to replace the current passive pin knee joint with a spring wrap clutch for the WPI LARRE exoskeleton (see \autoref{sec:larre}). % \TODO{add more in the intro section for the knee joint design} \section{Mechanical Design} The orthotic joint design proposed uses a similar idea to how a human knee joint works; a cam mechanism extends the shank link as it is rotated relative to the thigh link. The joint therefore has two degrees of freedom: rotation around the center of rotation (output shaft of the motor and gearbox) and translation in the direction of the shank. However, since there is only one actuator, the joint is underactuated; this underactuation can be taken advantage of to match a patient's knee trajectory, where the center of mass of the shank extends away from the joint center as the joint bends. For ease of assembly, the entire joint is held together by 4 M5 shoulder bolts, which also act as the axles for a total of 10 bearings. \begin{figure} [ht!] \centering \includegraphics[width=0.8\linewidth]{Figures/Design/ExoKneeExplodedView.png} \caption{Exploded view of the knee joint, with all relevant components labeled} \label{fig:KneeJointExplodedView} \end{figure} \subsubsection{Torsion Bars} The center of rotation of the joint is designed to match the axis of rotation of the actuator. The output of this actuator is directly connected to the torsion bar using M5 shoulder bolts. Each bolt is designed to support 3 bearings: 2 on the motor side and 1 on the patient side. The reduced count on the patient side allows for the torsion bar to be partially recessed in the shank link to reduce the distance between the center of mass between the patient and the joint. The 6 bearings are still able to support the forces necessary throughout a walking gait cycle. \subsubsection{Shank Links} The 2 shank links attach to the lower part of the exoskeleton, and are responsible for taking the rotational energy created by the motor and partially changing it to translational energy to help linearly extend the shank. The bearings connected to the torsion bars ride in a guide built into the shank link. This guide is slightly larger than the bearing diameter (\(~0.3mm\)) to prevent rubbing without creating much of a backlash (\(0.39^\circ\) backlash, see calculation on \autoref{eq:ShankLinkBacklash}). \begin{equation} Backlash = atan(\frac{\frac{0.3mm}{2}}{22mm}) = 0.39^\circ \label{eq:ShankLinkBacklash} \end{equation} The surface of the guide must be smooth and parallel to the axis of the bearings to avoid damaging them. Depending on the material and manufacturing method chosen, the surface may require additional machining to ensure it can match these requirements. The length of the guide must be larger than the distance between the centers of the two shoulder bolts plus the maximum distance of linear extension by the knee (\autoref{eq:ExtensionGuideLength}). For this prototype, this length was \(78mm\). \begin{equation} GuideLength \geq TorsionBarC2C + MaxKneeExtension = 44mm + 34mm = 78mm \label{eq:ExtensionGuideLength} \end{equation} The shank link is also responsible to connect to the lower part of the exoskeleton. Just like the thigh link, this is done through the universal exoskeleton connector developed throughout the WPI LARRE project \cite{SpringWrapClutchKnee}. The connection between the thigh link and the shank link is very important, as it adds torsional stability and overall rigidness to the entire joint. 
It was therefore imperative during the design process to create wide surface contact between the thigh and shank links. To reduce the energy lost to friction between these plates, 3.2mm thick Delrin\textsuperscript{\textregistered} slides were laser cut and attached to the shank link. Similarly to the torsion bar, the shank link also uses 2 shoulder bolts to clamp the two shank links on the thigh link as well as to give the bearings that ride on the knee path guide a precise surface to mount to. To maintain a consistent clamping force, lock nuts are used since they do not easily back out with movement and vibration. \subsubsection{Thigh Link} \begin{figure}[ht!] \centering \includegraphics[width=0.7\linewidth]{Figures/Design/KneeJointAssyCrossSection.png} \caption{A cross section of the knee joint in a \(0^\circ\) position} \label{fig:KneeJointCrossSection} \end{figure} The thigh link acts as the main mounting point for the joint, as well as contains the knee path guide. Just like the shank link, the thigh link has the universal exoskeleton connector used throughout the WPI LARRE project. The motor bracket is connected to the thigh link at two locations using \(20mm\diameter x 50mm\) spacers. These spacers must be strong and stiff, as they transmit the torque between the thigh and shank connector in high load situations. A potentiometer is also mounted inside the thigh link to measure the current angle of the joint, as shown in \autoref{fig:KneeJointCrossSection}. The wire connecting to it is routed through a slot in the thigh link to avoid any interference with the moving shank links. This wire comes out the top and is connected to the main controller of the exoskeleton. \subsubsection{Knee Path Guide} The knee path guide is built into the thigh link as a slot. The geometry is calculated using several point measurements connected in SolidWorks with a spline. Each point is split by 15 degrees, and calculated from a pre-determined equation. This equation can be measured from a patient knee (see \autoref{sec:KneeParams}), but throughout the design and testing of this knee joint, \autoref{eq:KneeJointGeometryEquation} from \cite{KinDynKneeJoint} is used. \autoref{fig:CenterPlateGeometry} shows the equation above overlayed on the thigh link. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{Figures/Design/KneePathGuide.png} \caption{The thigh link contains the geometry (highlighted in blue) which the bearings ride on to mimic the tibiofemoral relationship} \label{fig:CenterPlateGeometry} \end{figure} The joint is designed to be easily adaptable between patients. Therefore, the only customized part in the entire system is the thigh link which holds the knee path guide. All other parts remain the same to decrease cost and improve repairability. \subsubsection{Torque Requirements \& Actuator Selection} The design parameters specified in \autoref{sec:DesignParams} are an output of at least \(65 W\) and \(25 Nm\) at \(150^\circ/sec\). The Maxon EC90 was chosen, with a peak power output of \(90W\) and a max continuous torque of \(0.560 Nm\) at \(2510 rpm\) (see \autoref{apx:EC90Datasheet}). To match the speed and torque requirements, a \(100:1\) gearbox ratio is needed. Due to its high reduction to size ratio, a strain wave gearbox from {Harmonic Drives\texttrademark} (part number 20-100-804) was chosen. Estimated efficiency of this gearbox is roughly \(\epsilon = 90\%\). 
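As a quick back-of-the-envelope check (added here for reference; the complete calculations are in \autoref{apx:JointPowerTorqueSpeedCalcs}), the tabulated output values below follow directly from the motor data, the \(100:1\) gearbox ratio, and the assumed gearbox efficiency of \(\epsilon \approx 0.9\):

\[\tau_{output} = \tau_{input} \times 100 \times \epsilon = 0.560Nm \times 100 \times 0.9 = 50.4Nm\]
\[\omega_{output} = \frac{\omega_{input}}{100} = \frac{2510rpm}{100} = 25.1rpm = 150.6^\circ/sec\]
\[P_{output} = \epsilon \times P_{input} = 0.9 \times 90W = 81W\]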
\begin{table}
    \centering
    \begin{tabular}{||c|c|c||}
    \hline
    Input (Motor) Power & \(P_{input}\) & \(90 Watts\) \\
    \hline
    Input (Motor) Torque @ Nominal & \(\tau_{input}\) & \(0.560 Nm\) \\
    \hline
    Input (Motor) Speed @ Nominal & \(\omega_{input}\) & \(2510 rpm\) \\
    \hline
    Input (Motor) Stall Torque & \(\tau_{in\_stall}\) & \(7.480 Nm\) \\
    \hline \hline
    Gearbox Ratio & \(\frac{n_1}{n_2}\) & \(100:1\) \\
    \hline \hline
    Output Power & \(P_{output}\) & \(81 Watts\) \\
    \hline
    Output Torque @ Nominal & \(\tau_{output}\) & \(50.4 Nm\) \\
    \hline
    Output Speed @ Nominal & \(\omega_{output}\) & \(150.6^\circ/sec\) \\
    \hline
    Output Stall Torque & \(\tau_{out\_stall}\) & \(673.2 Nm\) \\
    \hline
    \end{tabular}
    \caption{Motor/gearbox specifications and output power specifications of the proposed joint. See \autoref{apx:JointPowerTorqueSpeedCalcs} for all equations and calculations used.}
    \label{table:MotorGearboxSpecs}
\end{table}

The output power of the joint is \(81 W\), with a nominal torque of \(50.4 Nm\) at \(150.6^\circ/sec\). The power, torque, and speed specifications of the joint theoretically exceed the requirements. Physical testing is needed, however, to ensure that these numbers are accurate and sufficient for a rehabilitation exoskeleton.

\subsubsection{Potentiometer and Rotary Encoder}

A potentiometer was embedded into the knee design to act as an absolute rotary encoder measuring the current angle of the joint. Its purpose is twofold: to provide an absolute angle reading at any given time and to provide rough rotary encoding when the motor is not in use (for passive experimentation). As mentioned above, the integration was designed to protect the sensitive connection points. The potentiometer chosen was the Vishay PRV6, with \(200^\circ\) of travel, a linear resistance, and \(\pm 1\%\) tolerance, which equates to a measured angle tolerance of \(\pm 2^\circ\).

The motor used also has 3 Hall sensors for pinpointing the position of the rotor relative to the stator. Since the motor has 12 poles and 3 sensors (totaling 36 pulses per revolution) as well as a 100:1 reduction through the gearbox, the Hall effect sensors can be used to create an effective 3600 pulses per revolution encoder at the joint. When used in conjunction with the absolute encoder, the encoded angle can be very precise.

% \subsubsection{Motor Analog}
% \TODO{change the name of this}
% \TODO[inline]{Talk about motor/gearbox analog}

\subsubsection{Bearings}
\label{sec:BearingsAndCalcs}

All 10 bearings used in the design are the same (for simplicity and reduction of cost): 19mm outside diameter x 6mm inside diameter x 6mm thick double shielded ball bearings (Model 626ZZ). Each is rated for \(2.6kN\) dynamic load and \(1.05kN\) static load. Before selecting these bearings, two calculations were required to ensure they could support the expected forces.

The first is the requirement of the torsion bar. Given that the maximum torque requirement for the project is \(25Nm\) and the torsion bar bolts are \(44mm\) apart center to center, \autoref{eq:TorsionBearingLoad} calculates that the total load on all 6 bearings used is \(1136N\), equaling roughly \(190N\) per bearing.

\begin{equation}
    \text{Total Load per Torsion Bar Bearing}: \frac{1}{6} \times \frac{25Nm}{44mm / 2} = \frac{1}{6} \times \frac{25Nm}{0.022m} = 189.4N
    \label{eq:TorsionBearingLoad}
\end{equation}

The second force requirement for these bearings is in the knee path cam. Each knee joint must be able to hold half of the weight requirement of \(100kg\) statically.
\autoref{eq:CamBearingLoad} demonstrates that each of the 4 bearings used in the cam will see a maximum static load of \(245N\) per bearing. \begin{equation} \text{Total Load per Cam Bearing}: \frac{1}{4} \times 100kg \times 9.81m/s = 245.3N \label{eq:CamBearingLoad} \end{equation} \section{Material Selection \& Manufacturing} The concept behind the joint is not dependent on material choice. However, when it came time to manufacture the prototypes, two materials were selected as potential options: aluminum and polylactic acid (PLA) plastic. Aluminum benefits from its strength to weight ratio and manufacturing simplicity when it is being machined. PLA plastic, on the other hand, can be injection molded or 3D printed using fused deposition modeling (FDM) printers. This makes PLA more flexible and less expensive, at the cost of softness and strength when compared to aluminum. Other plastics and metals were initially considered. Out of the 3D printable plastics that were accessible with the tools available, PLA is among the strongest, stiffest, and hardest. Other FDM 3D printable plastics considered were acrylonitrile butadiene styrene (ABS) and polyethylene terephthalate (PET). On the metals, side, steels were considered as a material option. However, its density and higher complexity to machine when compared to aluminum ruled it out as a material option. \subsubsection{Material Analysis} To decide between aluminum and PLA, the materials were analyzed in finite element analysis (FEA) simulation inside Dassault SolidWorks. \autoref{table:MaterialProperties} shows the material properties used. It is important to note that manufacturing methods were not considered in the analysis; therefore, layer adhesion was not considered when calculating the strength of the material. \begin{table} \centering \begin{tabular}{ |c|c|c| } \hline Material & Aluminum & PLA \\ \hline \hline Mass Density [$kg/m^3$] & 2700 & 1420 \\ \hline Tensile Strength [$N/mm^2$] & 124.08 & 57.3\\ \hline Yield Strength [$N/mm^2$] & 55.15 & 14.3\\ \hline Shear Modulus [$N/mm^2$] & 26000 & 55000\\ \hline \end{tabular} \caption{Material properties used when analyzing each material in FEA simulation in SolidWorks} \label{table:MaterialProperties} \end{table} \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{Figures/Design/FEA_Stress.png} \caption{Finite element analysis reported von Miss stress of knee joint manufactured from PLA. Force applied (at arrows) is 500N.} \label{fig:FEA_Stress_PLA} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{Figures/Design/FEA_Strain.png} \caption{Finite element analysis reported strain of knee joint manufactured from PLA. Force applied (at arrows) is 500N.} \label{fig:FEA_Strain_PLA} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.64\linewidth]{Figures/Design/FEA_Displacement.png} \caption{Finite element analysis reported displacement under load of knee joint manufactured from PLA. Force applied (at arrows) is 500N.} \label{fig:FEA_Displacement_PLA} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.65\linewidth]{Figures/Design/FEA_FOS.png} \caption{Finite element analysis reported factor of safety of knee joint manufactured from PLA. Force applied (at arrows) is 500N.} \label{fig:FEA_FoS_PLA} \end{figure} % \begin{figure}[ht!] % \centering % \includegraphics[width=0.8\linewidth]{Figures/Design/FEA_AL_45deg.png} % \caption{FEA of knee joint manufactured from Aluminum. 
Force applied (at arrows) is 500N, the resultant safety factor is 12.04.} % \label{fig:FEA_AL} % \end{figure} PLA was chosen as the best material for our experimentation. Software finite element analysis demonstrated that it can support the stresses required at any angle (shown in \autoref{fig:StrengthFlexion}). It can also be manufactured very quickly and easily with access to a conventional FDM 3D printer, allowing for quick revisions during the prototyping process. The final prototype was manufactured out of both aluminum and PLA; the thigh and shank links were 3D printed in PLA plastic while the torsion bar was machined out of aluminum to support the torque required. However, if this joint were to be manufactured for use outside of prototype development and clinical trials, I would recommend using aluminum, as the joint would likely be more resilient and last longer. It would also increase torsional stiffness in the joint and reduce the likelihood of the bearings creating divots in the surface of the knee path guide. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{Figures/Design/StrengthFlexionCurve.png} \caption{Analysis of the joint with major components manufactured out of PLA shows that the design is weakest at a \(90^\circ\) flexion, with a maximum static load of \(757N\) per joint.} \label{fig:StrengthFlexion} \end{figure} \subsubsection{Manufacturing} The manufacturing process between the two materials is very different as well. While there are many different ways of creating parts in either material, the research will focus on the most common ways as to be easily replicated by others if desired. As a plastic, PLA has many options for manufacturing. While PLA can be injection molded or machined, the most common use case for it is through FDM 3D printing. The accessibility and low cost at low production numbers makes this method of manufacturing the best for our use cases for the larger of our parts as well as any part that doesn't deal with big forces. For this project, a Creality Ender 3 was used to 3D print the parts required. Aluminum can also be 3D printed, but this requires some very specific tools to achieve. It can also be casted (similarly to injection molding for plastics), but this requires specific machining for the molds, and the parts still usually need to be machined to the final correct dimensions (the best option for high volume manufacturing). Therefore, all aluminum parts were designed to be manufactured using conventional lathes and mills. The manufacturing process, however, can be further simplified with access to a water jet or metal laser cutter. Such a tool can cut out all parts to a rough dimension, and a quick machining pass can finish the surfaces that need to be precise, such as the knee path guide and the slot in the shank link. If water jetting is selected as the preferred method of manufacture, parts may have a slight bevel due to the conical output of the water jet. \section{Knee Trajectory Testing} Motion capture and SolidWorks motion simulations were used to verify the joint's trajectory based on an inputted tibiofemoral trajectory. The motion simulation outputted a perfect match to the input equation (\autoref{eq:KneeJointGeometryEquation}), since the simulation platform is using a perfect model which directly inputs the equation above. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{Figures/Design/KneeTrajTest.png} \caption{Experimental setup for measuring the trajectory of the manufactured joint. 
9 motion capture dots were used to measure any movement in all 6 degrees of freedom.} \label{fig:TrajTestSetup} \end{figure} To further verify the relationship, a 10-camera Vicon Vantage 5 motion capture system was used. 9 motion capture dots were placed strategically around the knee joint. Two rigid bodies were used (each containing 4 motion capture dots) to be able to measure position and orientation in all 6 degrees of freedom. One rigid body was placed on the connector on the shank link, while the other was attached to the connector of the thigh link. A final motion capture dot was placed at the joint center. Then, the joint was manually actuated through its range while collecting data from the motion capture system. The data was processed using software tools developed in \autoref{sec:KneeParams} for measuring human tibiofemoral relationships. To ensure the data collected remained representative of the test and was not modified by any processing tools developed, only data importing tools were used. These tools simply took the raw data from the Vicon system and imported it into Python in a cleaner way. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{Figures/Design/FlexionExtensionKneeJoint.png} \caption{Results from the motion capture system demonstrates that the designed knee joint can follow a desired trajectory very closely, only deviating by \(1mm\) maximum} \label{fig:TrajTestResults} \end{figure} The motion capture analysis data in \autoref{fig:TrajTestResults} demonstrates the effectiveness of the joint; it was able to follow the desired trajectory layed out in \autoref{eq:KneeJointGeometryEquation} with minimal error (deviation from goal trajectory under \(1mm\)).
{ "alphanum_fraction": 0.7683732661, "avg_line_length": 87.7212389381, "ext": "tex", "hexsha": "4af4e8d1a4c4583be5ba4fdb464bbbdda942494f", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "db48c8df95657fab1a8c56a8906f997dfd5241a4", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "alextac98/KneeJointThesis", "max_forks_repo_path": "sections/kneedesign.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "db48c8df95657fab1a8c56a8906f997dfd5241a4", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "alextac98/KneeJointThesis", "max_issues_repo_path": "sections/kneedesign.tex", "max_line_length": 998, "max_stars_count": 1, "max_stars_repo_head_hexsha": "db48c8df95657fab1a8c56a8906f997dfd5241a4", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "alextac98/KneeJointThesis", "max_stars_repo_path": "sections/kneedesign.tex", "max_stars_repo_stars_event_max_datetime": "2021-04-25T04:44:08.000Z", "max_stars_repo_stars_event_min_datetime": "2021-04-25T04:44:08.000Z", "num_tokens": 4855, "size": 19825 }
\documentclass[a4paper,11pt]{article} \renewcommand{\baselinestretch}{1.5} %% Change the baseline stretch to 1.5 \usepackage{geometry} \geometry{ a4paper, total={170mm,257mm}, left=20mm, top=20mm, } \usepackage{wrapfig} \usepackage[utf8x]{inputenc} \usepackage{amsmath} \usepackage{amssymb} \usepackage{siunitx} \usepackage{multirow} \usepackage{colortbl} \usepackage{hhline} \usepackage{lipsum} %%% Lorem ipsum \setlength{\headheight}{30.0pt} \setlength{\footskip}{20pt} %%%%%%%%%%%%%%%%TABLE \setlength{\arrayrulewidth}{0.5mm} \setlength{\tabcolsep}{18pt} \renewcommand{\arraystretch}{1.5} %%%%%%%%%%%% \usepackage{hyperref} \hypersetup{ colorlinks=True, linkcolor={blue!20!black}, filecolor=magenta, urlcolor=cyan, } \usepackage[export]{adjustbox} \usepackage[english]{babel} \usepackage{fancyhdr} \usepackage{multicol} \pagestyle{fancy} \fancyhf{} \rhead{\textit{Pul074BEX004}} \lhead{\textit{Amrit Prasad Phuyal}} \rfoot{\thepage} \usepackage{mathpazo} % Palatino font \usepackage{graphicx} \usepackage{float} \usepackage{xcolor} \usepackage{color} \input{./AnsENV.tex} %% Answer environment \input{./QueENV.tex} %% Question Environment \input{./CoverPage.tex} %%% cover page \usepackage{tikz} \usepackage{circuitikz} \newcommand\ddfrac[2]{\frac{\displaystyle #1}{\displaystyle #2}} \include{./define.tex} %%%%%%%%%%%%%%%%%%%%%% for Proteus circuit observation supply Figure scale(1) for observation, number(2) like "a,b,c,d..", gain (3), half power freq (4), \newcommand{\Porcirobs}[4]{ %\subsubsection{Proteus Observation Figure #2} \begin{figure}[H] %%%%%%%%%%%proteus circuit \centering \includegraphics[width=\linewidth]{./FIG/P_cir_fig#2.PDF} \caption{Proteus Circuit for #2} \end{figure} \begin{figure}[H] %%%%%%%%%proteus plot and observation \centering \includegraphics[width=#1\linewidth]{./FIG/plot_Fig#2.pdf} \begin{tabular}[H]{| m{14em}| m{22em}|} \hline \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Gain } & \textbf{Half power frequency} \\ \hline #3 dB & (#4) KHz \\ \hline \end{tabular} \caption{Proteus Observation for #2} \end{figure} } \newcommand{\Pobs}[4]{ %\subsubsection{Proteus Observation Figure #2} % \begin{figure}[H] %%%%%%%%%%%proteus circuit % \centering % \includegraphics[width=\linewidth]{./FIG/P_cir_fig#2.PDF} % \caption{Proteus Circuit for #2} % \end{figure} \begin{figure}[H] %%%%%%%%%proteus plot and observation \centering \includegraphics[width=#1\linewidth]{./FIG/plot_Fig#2.pdf} \begin{tabular}[H]{| m{14em}| m{22em}|} \hline \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Gain } & \textbf{Half power frequency} \\ \hline #3 dB & (#4) KHz \\ \hline \end{tabular} \caption{Proteus Observation for #2} \end{figure} } \begin{document} %%%% COver page \CP{Filter Design}{Lab \#6}{DESIGN OF HIGHER ORDER ACTIVE \vfill FILTERS} {SHARAD KUMAR GHIMIRE} %%%%%%%%%%%%%%%%%%%% \pagenumbering{gobble} \renewcommand{\contentsname}{Table of Contents} \tableofcontents \pagebreak \listoffigures %\pagebreak \vspace{5em} \listoftables %\lstlistoflistings \pagebreak \pagenumbering{arabic} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \section{Title} {\large DESIGN OF HIGHER ORDER ACTIVE FILTERS} % Objectives \section{Objective} \begin{itemize} \item To be familiar with design of high order active filter using simulated inductors. \item To be familiar with design of high order active filter using FDNR. \item To be familiar with design of high order active filter using Leapfrog simulation. 
\end{itemize}

%Requirement
\section{Requirement}
\subsection{Proteus Design Suite}

Proteus is a simulation and design software tool developed by Labcenter Electronics for electrical and electronic circuit design. It is used to create the schematic of a circuit and to visualize its operation.

\pagebreak

% Theory section
\section{Theory}
\subsection{Generalized Impedance Converter (GIC)}

A Generalized Impedance Converter (GIC) is an active two-port network in which the input impedance is equal to the load impedance times a conversion function of the complex frequency variable $s$.

\begin{figure}[H]
    \centering
    \includegraphics[width=\linewidth]{./FIG/GIC.png}
    \caption{Generalized Impedance Converter (GIC)}
\end{figure}

\begin{equation}
    Z=Z_{in}=\frac{Z_1Z_3Z_5}{Z_2Z_4}
\end{equation}

\subsection{Simulated Inductors}
In this method of designing higher order active filters, we use simulated inductors to replace the grounded inductors of the passive circuit. In the above figure, if $Z_1=Z_2=Z_3=1 \Omega$, $Z_4= 1 F$ and $Z_5=k\Omega$, then the new value of $Z$ will be

\begin{equation*}
    Z=Z_{\text{Simulated Inductor}}=\ddfrac{1\times1\times k}{\left(\frac{1}{s}\right)\times1}=ks
\end{equation*}

\subsection{Frequency Dependent Negative Resistor (FDNR)}
Bruton's FDNR technique involves eliminating the use of inductors by scaling all impedances by the frequency dependent factor $\frac{1}{s}$, converting a resistor to a capacitor $\left(\ddfrac{R}{s}\right)$, an inductor to a resistor $(L)$, and a capacitor to a Frequency Dependent Negative Resistor (FDNR) $\left(\ddfrac{1}{s^2C}\right)$ with symbol $\|\|$. In the above figure, if $Z_1=Z_2=1$ $\Omega$, $Z_3=Z_5=1$ F and $Z_4=k$ $\Omega$ are used, then the new value of $Z$ will be

\begin{equation*}
    Z=Z_{FDNR}=\ddfrac{\left(\frac{1}{s}\right)\times\left(\frac{1}{s}\right)\times 1}{1\times k}=\frac{1}{ks^2}
\end{equation*}

\subsection{Leapfrog Simulation}
This method simulates the operation of the ladder rather than its components by modeling all circuit equations and the voltage--current relationships of the elements. The simulation involves the steps of determining the lowpass prototype, identifying the admittances and impedances of the ladder, selecting the leapfrog parameters, and simulating the circuit. If needed, frequency and magnitude scaling are performed.

%%Exercises
\pagebreak
\section{Exercises:}

\begin{figure}[H]
    \centering
    \scalebox{1.25}
    \figquestion
    \caption{Fourth order Butterworth lowpass ladder circuit}
\end{figure}

%Question 1
\begin{Q}
    { The network given in figure 1 is the fourth order Butterworth lowpass filter at normalized frequency of 1 rad/sec. From this network, design a lowpass filter having half power frequency of 20000 rad/sec using FDNR.
Realize the network and observe the magnitude response.}
\end{Q}

After applying Bruton's transformation on the above circuit we get:

\begin{figure}[H]
    \centering
    \scalebox{1.25}
    \figfdnr
    \caption{Fourth order Butterworth Lowpass Circuit using FDNR}
\end{figure}

Scaling every impedance by the frequency dependent factor $\ddfrac{1}{s}$, the element values of the circuit in Figure 2 become:

\begin{equation*}
    \begin{aligned}
     & Z'_{R\textsubscript{$1$}}=1 \text{ F} \quad & & Z'_{R\textsubscript{$2$}}=1 \text{ F} \\
     & Z'_{L\textsubscript{$1$}}=0.7654 \text{ }\Omega \quad & & Z'_{L\textsubscript{$2$}}=1.848 \text{ }\Omega \\
     & \text{(FDNR) } Z'_{C\textsubscript{$1$}}=1.848 \quad & & \text{(FDNR) }Z'_{C\textsubscript{$2$}}=0.7654 \\
    \end{aligned}
\end{equation*}

To realize the first FDNR $Z'_{C\textsubscript{$1$}}$ we use $Z_1=Z_2=1$ $\Omega$, $Z_3=Z_5=1$ F and $Z_4=k$ $\Omega$:

\begin{equation*}
    \begin{aligned}
    Z_{in} & =Z'_{C\textsubscript{$1$}}=\ddfrac{1\times\left(\frac{1}{s}\right)\times\left(\frac{1}{s}\right)}{1\times k} \\
    & \frac{1}{1.848s^2}=\frac{1}{ks^2} \\
    & \therefore k=1.848 \text{ }\Omega
    \end{aligned}
\end{equation*}

To realize the second FDNR $Z'_{C\textsubscript{$2$}}$ we use $Z_1=Z_2=1$ $\Omega$, $Z_3=Z_5=1$ F and $Z_4=k$ $\Omega$:

\begin{equation*}
    \begin{aligned}
    Z_{in} & =Z'_{C\textsubscript{$2$}}= \frac{1}{0.7654s^2}=\frac{1}{ks^2} \\
    & \therefore k=0.7654 \text{ }\Omega
    \end{aligned}
\end{equation*}

As per the question we require a half power frequency of 20000 rad/sec, so the frequency scaling factor is $K_f=20000$. The final values after frequency scaling with $K_f=20000$ and impedance scaling with $K_m=1000$ are:

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z'_{R\textsubscript{$1$}}$ & 1 $F$ & 50 $nF$ \\
    \hline
    $Z'_{R\textsubscript{$2$}}$ & 1 $F$ & 50 $nF$ \\
    \hline
    $Z'_{L\textsubscript{$1$}}$ & 0.7654 $\Omega$ & 765 $\Omega$ \\
    \hline
    $Z'_{L\textsubscript{$2$}}$ & 1.848 $\Omega$ & 1.848 $K\Omega$ \\
    \hline
    \end{tabular}
    \caption{Component Values of LPF excluding FDNRs}
\end{table}

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z_1$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_2$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_3$ & 1 $F$ & 50 $nF$ \\
    \hline
    $Z_4$ & 1.848 $\Omega$ & 1.848 $K\Omega$ \\
    \hline
    $Z_5$ & 1 $F$ & 50 $nF$ \\
    \hline
    \end{tabular}
    \caption{Component Values of First FDNR $Z'_{C\textsubscript{$1$}}$}
\end{table}

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z_1$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_2$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_3$ & 1 $F$ & 50 $nF$ \\
    \hline
    $Z_4$ & 0.7654 $\Omega$ & 765 $\Omega$ \\
    \hline
    $Z_5$ & 1 $F$ & 50 $nF$ \\
    \hline
    \end{tabular}
    \caption{Component Values of Second FDNR $Z'_{C\textsubscript{$2$}}$}
\end{table}

\Porcirobs{0.95}{low pass FDNR}{-6.02}{3.1765}
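As a quick cross-check of the tabulated values above (an added verification, using the scaling relations $R' = K_m R$ and $C' = \ddfrac{C}{K_m K_f}$ with $K_m = 1000$ and $K_f = 20000$):

\begin{equation*}
    \begin{aligned}
    C' & = \frac{1}{1000 \times 20000} = 50 \text{ nF} \\
    R' & = 1000 \times 0.7654 \text{ }\Omega \approx 765 \text{ }\Omega
    \end{aligned}
\end{equation*}

which matches the capacitor and resistor values used in the realized network.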
\pagebreak

%Question 2
\begin{Q}
    {Obtain a Highpass filter at normalized frequency of 1 rad/sec from the lowpass filter given in figure 1 using frequency transformation. From the circuit obtained, design a Highpass filter using simulated inductors. In your final design the half power frequency should be 4775 Hz and the elements should be practically realizable. Realize the filter network. Also observe and analyze the magnitude response of the filter network.}
\end{Q}

Applying the lowpass-to-highpass frequency transformation at a normalized frequency of 1 rad/sec, we get:

\begin{figure}[H]
    \centering
    \scalebox{1.25}
    \fighp
    \caption{Fourth order Highpass ladder circuit at a normalized frequency of 1 rad/sec}
\end{figure}

To realize the inductor $Z'_{C\textsubscript{$1$}}$ we use $Z_1=Z_2=Z_3=1$ $\Omega$, $Z_4=1$ F and $Z_5=k$ $\Omega$:

\begin{equation*}
    \begin{aligned}
    Z_{in} & =Z'_{C\textsubscript{$1$}}=\ddfrac{1\times1\times k}{\left(\frac{1}{s}\right)\times 1} \\
    & \Rightarrow 0.5411s=s\times k \\
    & \therefore k=0.5411 \text{ }\Omega
    \end{aligned}
\end{equation*}

Similarly, for the inductor $Z'_{C\textsubscript{$2$}}=1.3605$ we get $k=Z_5=1.3605$ $\Omega$. As we require a highpass filter with a half power frequency of $4775$ Hz, a frequency scaling factor of $K_f=\ddfrac{2\pi\times4775}{1}\approx 3\times10^4$ is used together with a magnitude scaling factor of $K_m=10^3$. Hence the final values obtained after frequency and magnitude scaling are:

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z'_{R\textsubscript{$1$}}$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z'_{R\textsubscript{$2$}}$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z'_{L\textsubscript{$1$}}$ & 1.3605 $F$ & 45.35 $nF$ \\
    \hline
    $Z'_{L\textsubscript{$2$}}$ & 0.5411 $F$ & 18.04 $nF$ \\
    \hline
    \end{tabular}
    \caption{Component Values of HPF excluding the simulated inductors}
\end{table}

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z_1$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_2$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_3$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_4$ & 1 $F$ & 33.33 $nF$ \\
    \hline
    $Z_5$ & 0.5411 $\Omega$ & 541 $\Omega$ \\
    \hline
    \end{tabular}
    \caption{Component Values of simulated inductor $Z'_{C\textsubscript{$1$}}$}
\end{table}

\begin{table}[H]
    \centering
    \begin{tabular}[H]{| m{10em}|m{10em}|m{14em}|}
    \hline
    \rowcolor[rgb]{0.569,0.647,0.947} \textbf{Component Symbol} & \textbf{Normalized value } & \textbf{Final value after scaling} \\
    \hline
    $Z_1$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_2$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_3$ & 1 $\Omega$ & 1 $K\Omega$ \\
    \hline
    $Z_4$ & 1 $F$ & 33.33 $nF$ \\
    \hline
    $Z_5$ & 1.3605 $\Omega$ & 1.36 $K\Omega$ \\
    \hline
    \end{tabular}
    \caption{Component Values of simulated inductor $Z'_{C\textsubscript{$2$}}$}
\end{table}

\Porcirobs{0.95}{high pass simulated inductor}{-6.02}{4.857}

\pagebreak

%Question 3
\begin{Q}
    {From the circuit given in figure 1, design a lowpass passive filter having half power frequency of 40000 rad/sec with practically suitable elements, using Leapfrog simulation.
Realize the filter network and observe the magnitude response of the network.} \end{Q} We can represent the figure 2 as, \begin{figure}[H] \centering \scalebox{1.5} \figleap \caption{Block diagram representation of the fourth order Butterworth lowpass filter} \end{figure} Applying voltage and nodal analysis for Figure 9 , we get, \begin{equation*} \begin{aligned} & I_1=\frac{V_1-V_3}{Z_1}=T_1(V_1-V_3) \\ & V_3=Z_2(I_1-I_2)=T_2(I_1-I_2) \\ & I_2=\frac{V_3-V_4}{Z_3}= T_3(V_3-V_4) \\ & V_4=Z_4I_2=T_4I_2 \end{aligned} \end{equation*} Where, \begin{equation*} \begin{aligned} & T_1=\frac{1}{Z_1}=\frac{1}{1+0.7654s} \\ & T_2=Z_2=\frac{1}{1.848s} \\ & T_3=\frac{1}{Z_3}=\frac{1}{1.848s} \\ & T_4=Z_4=\frac{1}{1+0.7654s} \end{aligned} \end{equation*} Let $V_{I1}=I_1$ and $V_{I2}=I_2$ and rearranging the signs we get, \begin{equation} V_{I1}=-(-T_1)(V_1-V_3) \end{equation} \begin{equation} -V_3=-T_2(V_{I1}-V_{I2}) \end{equation} \begin{equation} -V_{I2}=-(-T_3)(V_4-V_3) \end{equation} \begin{equation} V_4=(-T_4)(-V_{I2}) \end{equation} Above equation can be represented in block Diagram as, \begin{figure}[H] \centering \includegraphics[width=\linewidth]{./FIG/Block rep.jpg} \caption{Block diagram representation of circuit equations} \end{figure} \begin{figure}[H] %%%%%%%%%%%proteus circuit \centering \includegraphics[width=0.39\linewidth]{./FIG/P_cir_figlow pass leapfrog.PDF} \caption{Proteus Circuit for low pass Leapfrog} \end{figure} \Pobs{0.95}{low pass leapfrog}{0}{4.807} %section for Discussion and Conclustion \section{Discussion and Conclusion} In this lab we designed the higher order filter using Active simulation of passive circuit. We used simulated inductors, FDNR and Leapfrog Simulation to design the filter. GIC is extensively used in above discussed methods. We also simulated the circuits designed using these methods and observe its magnitude hence fulfilling our Lab objective. \end{document}
{ "alphanum_fraction": 0.6002306805, "avg_line_length": 36.4285714286, "ext": "tex", "hexsha": "3eba0a970f37ba761399ec272757f916648e38c4", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2022-01-17T12:19:26.000Z", "max_forks_repo_forks_event_min_datetime": "2021-03-19T09:04:46.000Z", "max_forks_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "amritphuyal/LATEX", "max_forks_repo_path": "Filter design/LAB6/Filter LAB 6 Amrit Prasad Phuyal.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "amritphuyal/LATEX", "max_issues_repo_path": "Filter design/LAB6/Filter LAB 6 Amrit Prasad Phuyal.tex", "max_line_length": 389, "max_stars_count": 1, "max_stars_repo_head_hexsha": "7346dc337b8d7aab2dbe81c29611ca2b069e1299", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "amritphuyal/LATEX", "max_stars_repo_path": "Filter design/LAB6/Filter LAB 6 Amrit Prasad Phuyal.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-01T08:20:34.000Z", "max_stars_repo_stars_event_min_datetime": "2020-10-01T08:20:34.000Z", "num_tokens": 5574, "size": 17340 }
\documentclass[]{book} \usepackage{lmodern} \usepackage{amssymb,amsmath} \usepackage{ifxetex,ifluatex} \usepackage{fixltx2e} % provides \textsubscript \ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex \usepackage[T1]{fontenc} \usepackage[utf8]{inputenc} \else % if luatex or xelatex \ifxetex \usepackage{mathspec} \else \usepackage{fontspec} \fi \defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase} \fi % use upquote if available, for straight quotes in verbatim environments \IfFileExists{upquote.sty}{\usepackage{upquote}}{} % use microtype if available \IfFileExists{microtype.sty}{% \usepackage{microtype} \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts }{} \usepackage[margin=1in]{geometry} \usepackage{hyperref} \hypersetup{unicode=true, pdftitle={Syllabus for the Datamining Class}, pdfauthor={Gener Avilés R}, pdfborder={0 0 0}, breaklinks=true} \urlstyle{same} % don't use monospace font for urls \usepackage{natbib} \bibliographystyle{apalike} \usepackage{longtable,booktabs} \usepackage{graphicx,grffile} \makeatletter \def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} \def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} \makeatother % Scale images if necessary, so that they will not overflow the page % margins by default, and it is still possible to overwrite the defaults % using explicit options in \includegraphics[width, height, ...]{} \setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} \IfFileExists{parskip.sty}{% \usepackage{parskip} }{% else \setlength{\parindent}{0pt} \setlength{\parskip}{6pt plus 2pt minus 1pt} } \setlength{\emergencystretch}{3em} % prevent overfull lines \providecommand{\tightlist}{% \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}} \setcounter{secnumdepth}{5} % Redefines (sub)paragraphs to behave more like sections \ifx\paragraph\undefined\else \let\oldparagraph\paragraph \renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}} \fi \ifx\subparagraph\undefined\else \let\oldsubparagraph\subparagraph \renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}} \fi %%% Use protect on footnotes to avoid problems with footnotes in titles \let\rmarkdownfootnote\footnote% \def\footnote{\protect\rmarkdownfootnote} %%% Change title format to be more compact \usepackage{titling} % Create subtitle command for use in maketitle \newcommand{\subtitle}[1]{ \posttitle{ \begin{center}\large#1\end{center} } } \setlength{\droptitle}{-2em} \title{Syllabus for the Datamining Class} \pretitle{\vspace{\droptitle}\centering\huge} \posttitle{\par} \author{Gener Avilés R} \preauthor{\centering\large\emph} \postauthor{\par} \predate{\centering\large\emph} \postdate{\par} \date{2017-02-19} \usepackage{booktabs} \usepackage{amsthm} \makeatletter \def\thm@space@setup{% \thm@preskip=8pt plus 2pt minus 4pt \thm@postskip=\thm@preskip } \makeatother \begin{document} \maketitle { \setcounter{tocdepth}{1} \tableofcontents } \chapter{Introduction}\label{introduction} This course is taught in the \textbf{\emph{Maestría y Doctorado en Ciencias e Ingeniería}} (MyDCI) programm of \emph{Facultad de Ingeniería, Arquitectura y Diseño} of \emph{Universidad Autónoma de Baja California} in the Ensenada Campus. The course is taught by \href{https://www.researchgate.net/profile/Maria_Cosio_Leon}{Dr.~María de los Ángeles Cosío León}. 
\chapter{Principal Components Analysis}\label{intro}

\section{What does PCA do?}\label{what-does-pca-do}

This method tries to explain the correlation structure of a set of predictor variables using a smaller set of linear combinations of these variables called \textbf{\emph{components}}; note that \emph{components} are not variables, but rather linear combinations of the original variables.

Given a dataset with \(m\) variables, a set of \(k\) linear combinations can be used to represent it (meaning that the \(k\) components contain almost as much information as the \(m\) variables), with \(k<<m\).

\section{PCA Step by Step}\label{pca-step-by-step}

\subsection{1. Getting the dataset and things ready.}\label{getting-the-dataset-and-things-ready.}

Before starting the process of dimensionality reduction one should make sure the data is standardized; this is done to avoid bias in the results caused by values that are too large or too small when compared to each other.

\subsection{2. Centering the points}\label{centering-the-points}

\begin{itemize}
\tightlist
\item
  The \textbf{standardization process} is accomplished when the mean for each variable \(=0\) and the standard deviation \(=1\). The following formula can be used to accomplish this process:
  \[Z_i = \frac {(X_i-\mu_i)}{\sigma_{ii}}\]
\end{itemize}

Where: \(\mu_i\) equals the mean of \(X_i\) and \(\sigma_{ii}\) equals the standard deviation of \(X_i\).

\begin{itemize}
\tightlist
\item
  If the values are given as a set of points, the process can be accomplished with the following formula:
\end{itemize}

\[x_{i,a} = x_{i,a} - \mu_a\]

This move will facilitate the calculations down the road.

\subsection{\texorpdfstring{3. Compute covariance (\(\sigma_{X,Y}\)) matrix}{3. Compute covariance (\textbackslash{}sigma\_\{X,Y\}) matrix}}\label{compute-covariance-sigma_xy-matrix}

The \textbf{covariance} is a measure of the degree to which two variables vary together. Positive covariance indicates that when one variable increases, the other tends to increase. Negative covariance indicates that when one variable increases, the other tends to decrease. The covariance measure \textbf{is not scaled}.

For example, in a \(2\times2\) matrix:

\[\begin{vmatrix} 2.0 & 0.8 \\ 0.8 & 0.6 \end{vmatrix}\]

Since the mean (\(\mu\)) is equal to \(0\) thanks to \emph{centering} the values in the previous step, the formula to calculate the covariance of the values in the matrix is:

\[cov(x_1,x_2) = \frac{1}{n}\sum_{i=1}^{n}x_{i,1}x_{i,2}\]

\textbf{The way to interpret \emph{covariance} is to understand its results as information about how one attribute changes as the other one changes.}

It is important to remember that, if we multiply a vector by the covariance matrix \(\Sigma\), the resulting vector will turn towards the direction of greatest variance. Changing the units of measure would change the results; this is an inconvenience that is addressed by calculating the \textbf{\emph{correlation coefficient \(r_{ij}\)}}:

\(r_{ij}\) scales the covariance by each of the standard deviations:

\[r_{ij} = \frac{\sigma_{ij}^2}{\sigma_{ii} \sigma_{jj}}\]

\textbf{The \(r_{ij}\) gives us a scaled value that tells us how much of a correlation exists between two variables.}
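As a quick worked check (an added illustration, treating the \(2\times2\) matrix shown above as a covariance matrix, so that \(\sigma_{11}=\sqrt{2.0}\) and \(\sigma_{22}=\sqrt{0.6}\) are the standard deviations):

\[r_{12} = \frac{\sigma_{12}^2}{\sigma_{11}\sigma_{22}} = \frac{0.8}{\sqrt{2.0}\times\sqrt{0.6}} \approx \frac{0.8}{1.095} \approx 0.73\]

so the two attributes are fairly strongly and positively correlated, and the value no longer depends on the units of measure.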
Eigenvectors + Eigenvalues}\label{eigenvectors-eigenvalues}
Define a \textbf{new set of dimensions} by:
\begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item Taking the dataset and looking for the direction along which the data show the \textbf{greatest amount of variance \(\sigma^2\)}; the line drawn along this direction will be called the \textbf{principal component 1 (PC1)}. \end{enumerate}
\[\sigma^2 = \frac{\sum(X-\mu)^2}{N}\space \space \text{or}\space \space \sigma^2 = \frac{\sum X^2}{N} - \mu^2\]
\emph{In the previous formula \(\sigma^2\) is defined as the sum of the squared distances of each term in the distribution from the mean (\(\mu\)) divided by the number of terms in the distribution (\(N\)). In simple words: \(\sigma^2\) measures \textbf{how far a set of random numbers is spread out from its mean}.}
\begin{enumerate} \def\labelenumi{\arabic{enumi}.} \setcounter{enumi}{1} \item Once PC1 is determined, the next dimension is established by drawing an \textbf{\emph{orthogonal}} (perpendicular) line in relation to PC1; the exact placement of this line is determined by the same process of finding the greatest \(\sigma^2\) in the remaining data. Once this is done, PC2 is ready. \item This will be done iteratively until all the dimensions (\(d\)) of the dataset are covered and components (\(m\)) are generated for every single \(d\). \end{enumerate}
\begin{itemize} \tightlist \item The first \(m \ll d\) components become \(m\) new dimensions. \begin{itemize} \tightlist \item Coordinates from every datapoint will be changed to these ``new'' dimensions. \end{itemize} \item \textbf{Greatest variability} is pursued to maintain the \href{https://rpubs.com/generaviles/248692}{\emph{smoothness}} assumption of dimensions. \end{itemize}
Eigenvectors and eigenvalues are mathematically expressed as: \[A \overrightarrow{v} = \lambda \overrightarrow{v}\]
where \(A\) represents a \emph{transformation}, \(\overrightarrow{v}\) is a vector (the \textbf{eigenvector}) that comes out of the matrix being analyzed, and \(\lambda\) is a scalar value (the \textbf{eigenvalue}).
\textbf{Principal components = eigenvectors with largest eigenvalues.}
\subsubsection{Finding Eigenvalues and Eigenvectors}\label{finding-eigenvalues-and-eigenvectors}
In order to exemplify the process of finding these values and vectors, the steps are presented for a \(2 \times 2\) matrix, but this can be done with any matrix of \(n \times n\) dimensions following the rules of matrix algebra.
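As a quick numerical cross-check of the eigen-decomposition just described, the \(2 \times 2\) covariance matrix from step 3 can be handed to a standard linear-algebra routine. The following is only a sketch and assumes the NumPy library is available; it is not part of the worked example that follows.
\begin{verbatim}
# Sketch: eigenvalues/eigenvectors of the 2x2 covariance matrix from step 3,
# sorted so the component with the largest variance comes first (assumes NumPy).
import numpy as np

cov = np.array([[2.0, 0.8],
                [0.8, 0.6]])

eigenvalues, eigenvectors = np.linalg.eigh(cov)   # symmetric matrix -> eigh
order = np.argsort(eigenvalues)[::-1]             # largest variance first
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]             # columns are the components

print(eigenvalues)    # variance captured by each component
print(eigenvectors)   # principal component directions
\end{verbatim}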
To begin with the example we will declare a matrix: \[A = \left[ \begin{array}{ccc} 7 & 3 \\ 3 & -1 \end{array} \right]\]
Now the steps:
\begin{enumerate} \def\labelenumi{\arabic{enumi}.}
\item \textbf{Multiply an \(n \times n\) identity matrix by the scalar \(\lambda\): \(I\lambda\)} \[\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] * \lambda = \left[ \begin{array}{cc} \lambda & 0 \\ 0 & \lambda \end{array} \right]\]
\item \textbf{Subtract the identity matrix multiple from matrix A: \(A-\lambda I\)} \[\left[ \begin{array}{cc} 7 & 3 \\ 3 & -1 \end{array} \right] - \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] = \left[ \begin{array}{cc} 7-\lambda & 3 \\ 3 & -1-\lambda \end{array} \right]\]
\item \textbf{Find the determinant of the matrix obtained in the previous step: \(det(A-\lambda I)\)} \[ det\left[ \begin{array}{cc} 7-\lambda & 3 \\ 3 & -1-\lambda \end{array} \right] = (7-\lambda)(-1-\lambda)-(3*3)\] \[= - 7 - 7 \lambda + \lambda + \lambda^2 - 9 = -16-6\lambda + \lambda^2\] \[= \lambda^2 - 6\lambda -16\]
\item \textbf{Solve for the values of \(\lambda\) that satisfy the equation \(det(A-\lambda I)=0\)} Solving for \(\lambda^2 - 6\lambda -16 = 0\) will result in: \[(\lambda-8)(\lambda+2)=0\] Therefore \(\lambda_1 = 8\) and \(\lambda_2 = -2\); \textbf{these are the eigenvalues for matrix \(A\).}
\item \textbf{Solve for the corresponding vector for each \(\lambda\)}
\end{enumerate}
\textbf{Solving for }\(\lambda = 8\)\textbf{; in this process we will call the matrix with substituted values \(B\).} \[ \left[ \begin{array}{cc} 7-(8) & 3 \\ 3 & -1-(8) \end{array} \right] = \left[ \begin{array}{cc} -1 & 3 \\ 3 & -9 \end{array} \right]\]
We will assume the following \(B \overline X = 0 \space \therefore\) \[\left[ \begin{array}{cc} -1 & 3 \\ 3 & -9 \end{array} \right] \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]\]
Applying row reduction \(3R_1 + R_2 = R_2\) to: \[\left[ \begin{array}{cc|r} -1 & 3 & 0\\ 3 & -9 & 0 \end{array} \right] = \left[ \begin{array}{cc|r} -1 & 3 & 0\\ 0 & 0 & 0 \end{array} \right] \space \therefore -x_1+3x_2 = 0\]
From the previous operation we obtain \(3x_2 = x_1\). At this point we can choose a value for either \(x\); we will go for \(x_2 = 1\) to keep it simple. \[3x_2 = x_1 \space \therefore 3(1) = x_1 \space \therefore \space x_1 = 3\]
\textbf{Now we know that the eigenvalue \(\lambda = 8\) corresponds to the eigenvector \(\overline X = (3,1)\).}
\textbf{Solving for \(\lambda = -2\), generating matrix \(C\).} \[C = \left[ \begin{array}{cc} 7-(-2) & 3 \\ 3 & -1-(-2) \end{array} \right]\] \(C\overline X = 0 \space \therefore\) \[\left[ \begin{array}{cc} 9 & 3 \\ 3 & 1 \end{array} \right] \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 0 \end{array} \right]\]
Applying row reduction \(3R_2 - R_1 = R_1\): \[\left[ \begin{array}{cc|r} 0 & 0 & 0\\ 3 & 1 & 0 \end{array} \right] \space \therefore 3x_1 + x_2 = 0\]
Assigning \(x_1 = 1\): \[x_2 = -3x_1 \space \therefore x_2 = -3(1)\]
\textbf{The eigenvalue \(\lambda = -2\) corresponds to the eigenvector \(\overline X = (1,-3)\).}
\subsection{\texorpdfstring{5. Pick \(m<d\) eigenvectors with highest eigenvalues.}{5.
Pick m\textless{}d eigenvectors with highest eigenvalues.}}\label{pick-md-eigenvectors-with-highest-eigenvalues.}
In other words, usually the \textbf{2} eigenvectors with the highest scalars, or \(\lambda\), will be selected to represent the whole dataset as \emph{Principal Component 1} and \emph{Principal Component 2}.
\subsection{6. Project datapoints to those eigenvectors.}\label{project-datapoints-to-those-eigenvectors.}
The analyst or the algorithm then has to project the datapoints onto this new set of dimensions so they can be analyzed.
\subsection{7. Perform analysis as needed according to study.}\label{perform-analysis-as-needed-according-to-study.}
\section{Pros and Cons of PCA}\label{pros-and-cons-of-pca}
This algorithm, like any other, is better suited for specific circumstances and performs poorly in others. The following list tries to summarize this idea:
\subsubsection{\texorpdfstring{\textbf{Pros}}{Pros}}\label{pros}
\begin{itemize} \tightlist \item Reduction in size of data. \item Allows estimation of probabilities in high-dimensional data. \item It renders a set of components that are uncorrelated. \end{itemize}
\subsubsection{\texorpdfstring{\textbf{Cons}}{Cons}}\label{cons}
\begin{itemize} \tightlist \item It has a high computational cost, therefore it cannot be applied to very large datasets. \item Not good when working with fine-grained classes. \end{itemize}
\chapter{Literature}\label{literature}
Here is a review of existing methods.
\chapter{Methods}\label{methods}
We describe our methods in this chapter.
\chapter{Applications}\label{applications}
Some \emph{significant} applications are demonstrated in this chapter.
\section{Example one}\label{example-one}
\section{Example two}\label{example-two}
\chapter{Final Words}\label{final-words}
We have finished a nice book.
\bibliography{packages,book}
\end{document}
{ "alphanum_fraction": 0.7186319621, "avg_line_length": 33.1685649203, "ext": "tex", "hexsha": "cb7f7849f4d763a13e44b07db5b3899957afb60e", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "6f332c9766d0fc0cf56184738cce4f7250cb66d0", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "generaviles/algoritmosDatosFisiologicos", "max_forks_repo_path": "_book/bookdown-demo.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "6f332c9766d0fc0cf56184738cce4f7250cb66d0", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "generaviles/algoritmosDatosFisiologicos", "max_issues_repo_path": "_book/bookdown-demo.tex", "max_line_length": 130, "max_stars_count": null, "max_stars_repo_head_hexsha": "6f332c9766d0fc0cf56184738cce4f7250cb66d0", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "generaviles/algoritmosDatosFisiologicos", "max_stars_repo_path": "_book/bookdown-demo.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 4651, "size": 14561 }
\documentclass[floatfix,reprint,nofootinbib,amsmath,amssymb,epsfig,pre,floats,letterpaper,groupedaffiliation]{revtex4} \usepackage{amsmath} \usepackage{amssymb} \usepackage{graphicx} \usepackage{epstopdf} \usepackage{dcolumn} \usepackage{bm} \usepackage{listings} \usepackage{xcolor} \usepackage{inconsolata} \colorlet{punct}{red!60!black} \definecolor{background}{HTML}{EEEEEE} \definecolor{delim}{RGB}{20,105,176} \colorlet{numb}{magenta!60!black} \lstdefinelanguage{generic}{ basicstyle=\fontsize{8}{10}\ttfamily, showstringspaces=false, breaklines=true, literate= *{0}{{{\color{numb}0}}}{1} {1}{{{\color{numb}1}}}{1} {2}{{{\color{numb}2}}}{1} {3}{{{\color{numb}3}}}{1} {4}{{{\color{numb}4}}}{1} {5}{{{\color{numb}5}}}{1} {6}{{{\color{numb}6}}}{1} {7}{{{\color{numb}7}}}{1} {8}{{{\color{numb}8}}}{1} {9}{{{\color{numb}9}}}{1} {:}{{{\color{punct}{:}}}}{1} {,}{{{\color{punct}{,}}}}{1} {\{}{{{\color{delim}{\{}}}}{1} {\}}{{{\color{delim}{\}}}}}{1} {[}{{{\color{delim}{[}}}}{1} {]}{{{\color{delim}{]}}}}{1}, } \newcommand{\beq}{\begin{equation}} \newcommand{\eeq}{\end{equation}} \newcommand{\e}{\mathrm{e}} \newcommand{\la}{\langle} \newcommand{\ra}{\rangle} \begin{document} \title{Off-chain Limit Orders for the LMSR/LS-LMSR} \author{Jack Peterson\\\emph{www.augur.net}} \maketitle The user wants to make a buy limit order on outcome $i$. The user enters: \begin{itemize} \item Limit price $p(q_i)$. \item Total number of shares to buy, $N$. \item Price cap for the stop order, $\xi$. This is the maximum price the user is willing to accept for this order. \end{itemize} What is the maximum number of shares ($n$) the user can buy such that the price $p(q_i+n)$ does not rise above the price cap ($\xi$)? \beq\label{eq:solve_for_n} p\left(q_i+n\right)=\xi \eeq To solve Eq.~\ref{eq:solve_for_n} for $n$, the price function $p$ must be specified. First, use the LMSR's simple price function: \beq\label{eq:price_LMSR} p\left(q_i\right)=\frac{\e^{\beta q_i}}{\sum_{j} \e^{\beta q_j}}, \eeq making the substitution $\beta\equiv b^{-1}$ for readability.\footnote{$p(q_i)$ and $b(q_i)$ are written as univariate functions because only $q_i$ is varied here.} Plug Eq.~\ref{eq:price_LMSR} into Eq.~\ref{eq:solve_for_n} and rearrange to solve for $n$: \beq\label{eq:solve_for_n_LMSR} n = -q_i + \frac{1}{\beta} \log\left(\frac{\xi}{1-\xi} \sum_{j\ne i} \e^{\beta q_j}\right). \eeq If $n \ge N$, this is just a stop order: it converts to a market order, is completely filled by the automated market maker, and nothing further happens. If $n < N$, a market order for $n$ shares is submitted and filled by the market maker (bringing the price to $\xi$). This leaves $N-n$ total shares in the user's limit order. The order remains open until the market maker's price again drops to the limit price $p(q_i)$. The LS-LMSR's price function can also be used, although since it is more complicated there is not a closed-form expression for $n$ like Eq.~\ref{eq:solve_for_n_LMSR}. The LS-LMSR's price function is \beq\label{eq:price_LSLMSR} p\left(q_i\right) = \alpha \log\left(\sum_j \e^{q_j/b(q_i)}\right) + \frac{\displaystyle \e^{q_i/b(q_i)} \sum_j q_j - \sum_j q_j \e^{q_j/b(q_i)}}{\displaystyle\sum_j q_j \sum_j \e^{q_j/b(q_i)}}, \eeq where $b\left(q_i\right) \equiv \alpha \sum_j q_j$. 
Plug Eq.~\ref{eq:price_LSLMSR} into Eq.~\ref{eq:solve_for_n}, noting that buying $n$ shares of outcome $i$ also changes $b$:
\beq
b\left(q_i + n\right) = b\left(q_i\right) + \alpha n
\eeq
\beq\label{eq:solve_for_n_LSLMSR}
\alpha \log\left(\e^{\frac{q_i + n}{b(q_i+n)}} + \sum_{j\ne i} \e^{\frac{q_j}{b(q_i+n)}}\right) + \frac{ \displaystyle \e^{\frac{q_i+n}{b(q_i+n)}} \sum_{j\ne i} q_j - \sum_{j\ne i} q_j \e^{\frac{q_j}{b(q_i+n)}} }{ \displaystyle \left(n+\sum_j q_j\right) \left(\e^{\frac{q_i+n}{b(q_i+n)}} + \sum_{j\ne i} \e^{\frac{q_j}{b(q_i+n)}}\right) } = \xi
\eeq
Eq.~\ref{eq:solve_for_n_LSLMSR} is then numerically solved for $n$.

\subsection*{Example implementation (Matlab)}

\begin{lstlisting}[language=generic]
% price cap
xi = 0.3;

% LMSR
beta = 1;
q = 10*ones(1,5);
i = 1;
qj = [q(1:i-1) q(i+1:end)];
n = -q(i) + log(xi*sum(exp(beta*qj))/(1 - xi)) / beta
q(i) = q(i) + n;
p_lmsr = exp(beta*q) / sum(exp(beta*q))

% LS-LMSR
clear n
a = 0.0079;
q = 10*ones(1,5);
i = 1;
qj = [q(1:i-1) q(i+1:end)];
F = @(n) a*log(exp((q(i) + n)/a/(n + sum(q))) + sum(exp(qj/a/(n + sum(q))))) + ...
    (exp((q(i) + n)/a/(n + sum(q)))*sum(qj) - sum(qj.*exp(qj/a/(n + sum(q))))) / ...
    ((n + sum(q))*(exp((q(i) + n)/a/(n + sum(q))) + sum(exp(qj/a/(n + sum(q)))))) - xi;
n0 = fsolve(F, 0.05)
q(i) = q(i) + n0;
b = a*sum(q);
p_lslmsr = a*log(sum(exp(q/b))) + ...
    (exp(q/b)*sum(q) - sum(q.*exp(q/b))) / sum(q) / sum(exp(q/b))
\end{lstlisting}

\newpage

\subsection*{Example implementation (JavaScript)}

\begin{lstlisting}[language=generic]
#!/usr/bin/env node

var Decimal = require("decimal.js");
var fzero = require("fzero");

// MSR parameters
var q = [new Decimal(10),      // outcome 1 shares
         new Decimal(10),      // outcome 2 shares
         new Decimal(10),      // outcome 3 shares
         new Decimal(10),      // outcome 4 shares
         new Decimal(10)];     // outcome 5 shares
var i = 1;                     // outcome to trade
var a = new Decimal("0.0079"); // LS-LMSR alpha
var xi = new Decimal("0.3");   // price cap

// LS-LMSR objective function (Eq. 6)
function f(n) {
    n = new Decimal(n);
    var numOutcomes = q.length;
    var qj = new Array(numOutcomes);
    var sum_q = new Decimal(0);
    for (var j = 0; j < numOutcomes; ++j) {
        qj[j] = q[j];
        sum_q = sum_q.plus(q[j]);
    }
    qj.splice(i, 1);
    var q_plus_n = n.plus(sum_q);
    var b = a.times(q_plus_n);
    var exp_qi = q[i].plus(n).dividedBy(b).exp();
    var exp_qj = new Array(numOutcomes);
    var sum_qj = new Decimal(0);
    var sum_exp_qj = new Decimal(0);
    var sum_qj_x_expqj = new Decimal(0);
    for (j = 0; j < numOutcomes - 1; ++j) {
        sum_qj = sum_qj.plus(qj[j]);
        exp_qj[j] = qj[j].dividedBy(b).exp();
        sum_exp_qj = sum_exp_qj.plus(exp_qj[j]);
        sum_qj_x_expqj = sum_qj_x_expqj.plus(qj[j].times(exp_qj[j]));
    }
    return a.times(q[i].plus(n).dividedBy(b).exp().plus(sum_exp_qj).ln()).plus(
        exp_qi.times(sum_qj).minus(sum_qj_x_expqj).dividedBy(
            q_plus_n.times(exp_qi.plus(sum_exp_qj))
        ).minus(xi)
    );
}

console.log(fzero(f, [0.001, 10]).solution);
\end{lstlisting}
\end{document}
{ "alphanum_fraction": 0.6192667587, "avg_line_length": 36.6235955056, "ext": "tex", "hexsha": "3ae023a98748725403550ed797268944aa15ec97", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2016-11-28T02:06:49.000Z", "max_forks_repo_forks_event_min_datetime": "2016-11-28T02:06:49.000Z", "max_forks_repo_head_hexsha": "df5da408f6176f6eaa1d96be35a806e66783b58f", "max_forks_repo_licenses": [ "AAL" ], "max_forks_repo_name": "EdgeApp/augur.js", "max_forks_repo_path": "scripts/limitorders.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "df5da408f6176f6eaa1d96be35a806e66783b58f", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "AAL" ], "max_issues_repo_name": "EdgeApp/augur.js", "max_issues_repo_path": "scripts/limitorders.tex", "max_line_length": 425, "max_stars_count": 1, "max_stars_repo_head_hexsha": "df5da408f6176f6eaa1d96be35a806e66783b58f", "max_stars_repo_licenses": [ "AAL" ], "max_stars_repo_name": "Airbitz/augur.js", "max_stars_repo_path": "scripts/limitorders.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-28T02:53:27.000Z", "max_stars_repo_stars_event_min_datetime": "2021-05-28T02:53:27.000Z", "num_tokens": 2421, "size": 6519 }
\documentclass{svproc}
%\documentclass{article}
\def\UrlFont{\rmfamily}
\usepackage{graphicx}
%\usepackage{subcaption}
\usepackage{bm}
%\usepackage{geometry}
\usepackage{float}
\usepackage{caption}
\usepackage{pdfpages}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{multicol}
\usepackage{color}
\doublespacing
\usepackage[margin=1.1in]{geometry}
% to typeset URLs, URIs, and DOIs
\usepackage{url}
\usepackage{threeparttable}
\usepackage[bottom]{footmisc}
\usepackage{adjustbox}
\usepackage{multirow}
\usepackage{makecell}
\usepackage{caption}
\usepackage{subfig}
\def\UrlFont{\rmfamily}
\raggedbottom
\newenvironment{centermath} {\begin{center}$\displaystyle} {$\end{center}}
\newcommand\scalemath[2]{\scalebox{#1}{\mbox{\ensuremath{\displaystyle #2}}}}
\begin{document}
\mainmatter % start of a contribution
%
\title{Solar Analysis}
%
\titlerunning{Solar Analysis} % abbreviated title(for running head)
% also used for the TOC unless
% \toctitle is used
\author{Jacob Merrell}
\institute{}
%
%%%% list of authors for the TOC(use if author list has to be modified)
\maketitle
\begin{abstract}
Solar energy is environmentally friendly, but can it also be cost-efficient? This study focuses on power bill data collected from one solar panel owner. The study confirms that solar power helps save money, and estimates that, on average, the savings from using the solar panel will cover the costs of the installation after about 8 years. We explore ways to account for the correlation in the data, and to also incorporate temperature into the model.
\end{abstract}
\section{Introduction}
Ideally energy would be efficient, cheap, and environmentally friendly. Finding the right balance is a difficult question for scientists to answer. Sunlight is available to all, provided the weather is right. Harnessing the power of solar energy is an environmentally friendly alternative to other, more pollutant-producing energy sources. The purpose of this analysis is to: (1) predict how much power bills are going to be in the future, (2) estimate how much someone can save, on average, using solar energy, and (3) determine how long it will take to earn back the money used to install solar equipment. Regression techniques that handle correlated data will be used to address the goals of the analysis.
\section{Exploratory Data Analysis}
\begin{center}
\includegraphics [height=7.5cm]{solar_data.pdf}
\end{center}
The graph above shows the subject's power bill over time. There is a sizable drop in the power bill after switching to solar energy. Exploring the data shows that ordinary regression methods are inadequate for these data. There appears to be a seasonal trend in the power bill. This makes sense because in very cold and very hot months, more energy is used to maintain the climate inside the home. The dataset includes 51 months of power bills from the same subject.
\section{Model Selection}
The model used for this analysis is
\begin{equation}
Y \sim N(X\beta, \sigma^2R)
\end{equation}
In the model $Y$ denotes the vector of power bill amounts, and the $X$ matrix contains the observed values of the explanatory variables at the date of each power bill. The $\beta$ vector contains the model coefficients, $\sigma^2$ is the variance of the residuals, and $R$ is the variance-covariance matrix. The variance-covariance matrix in this model is structured as AR(1). This model assumes that the correlation of each observation with the previous observation is constant and equal to $\rho$.
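As an illustration only (the paper's own analysis cites R), the sketch below shows one way such an AR(1) correlation structure could be encoded and used to obtain a generalized least squares fit. It assumes NumPy, and the design matrix $X$, response $y$, and $\rho$ passed to the function are placeholders rather than the actual study data.
\begin{verbatim}
# Sketch: GLS estimate of beta under Y ~ N(X beta, sigma^2 R), where R is the
# AR(1) correlation matrix R[i, j] = rho**|i - j|.  Assumes NumPy; X, y, and
# rho are placeholders for the actual design matrix, power bills, and
# estimated correlation.
import numpy as np

def ar1_gls(X, y, rho):
    n = len(y)
    idx = np.arange(n)
    R = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) correlation
    R_inv = np.linalg.inv(R)
    XtRX = X.T @ R_inv @ X
    beta_hat = np.linalg.solve(XtRX, X.T @ R_inv @ y)
    resid = y - X @ beta_hat
    sigma2_hat = (resid @ R_inv @ resid) / (n - X.shape[1])
    return beta_hat, sigma2_hat
\end{verbatim}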
It makes sense to use an AR(1) model since the data are measured in discrete and consistent time intervals. The effects being measured in the model are solar v. non-solar power bills, the interaction between solar status and season (summer or winter), and the interaction between season and temperature. Temperature was not originally given in the dataset, but was gathered from the US Climate Data website. The reason for gathering this data was to see the effect extreme temperatures had on the power bill.
The model assumptions are normality of the standardized residuals, homoscedasticity, and that the data are multivariate normally distributed. Normality of the residuals can be checked by plotting the residuals to verify if they seem normally distributed. Homoscedasticity is verified through a graph of the fitted values v. the residuals: the variability, or jitter, of the residual values about 0 should be similar across all fitted values. The model follows a multivariate normal distribution. The model will help us achieve our goals by estimating coefficients and allowing us to make predictions of the power bill amount in each month. The model will also help us capture the seasonal trends inherent in the data.
\section{Model Justification}
Since the data are correlated, using the residuals from the model without any adjustment can lead to incorrect conclusions about the assumptions. To correct for this issue, we used a decorrelated regression model. After undergoing the decorrelation transformation, the residuals and fitted values can be used to verify whether or not the assumptions hold. The assumptions for the model are explored in the graphs below.
\begin{center}
\includegraphics [height=7.5cm]{assump.pdf}
\end{center}
The histogram of the standardized residuals shows a normal distribution. The fitted v. residual plot shows that the variance of the residuals about the 0 line is about the same. All assumptions hold for this study.
\section{Performance Evaluation}
The model had an adjusted R-squared of 0.9376. This means 93.76\% of the variation in the power bill is explained by the model. To test the predictive power of the model, we used test and training data to cross-validate. The test and training datasets were chosen to preserve the time series; in this case choosing random points would interfere with the correlated structure inherent in the data. The training data was used to predict the test data. The RMSE, bias, and 95\% prediction interval coverage were calculated. This process was repeated several times to get more reliable results. On average, the RMSE was \$27.21; this means the predictions were off by \$27.21 on average. The 95\% confidence interval for the RMSE was (\$6.74, \$54.48). The bias showed that we were underestimating the power bill by \$0.67 on average, and almost 95\% of all observed power bill values were contained within the prediction intervals. Even though the coverage was very good, the average prediction window was \$104.57. This means most of the predictions for a power bill during a given month would have been in a ballpark of plus or minus \$50 away from the point estimate.
\section{Results}
The graph below shows the fitted values from the model compared to the actual power bill values.
\begin{center}
\includegraphics [height=7.5cm]{solar_fit.pdf}
\end{center}
While the model uses an AR(1) covariance structure, the model estimate for $\rho$ is very small (.0377).
This means the effects and interactions included in the model account for most of the correlation inherent in the data. To achieve the goals of the study we used the model to predict the amount of money spent on power bills over the next 10 years. Then we compared the amount of money spent per month when using solar power v. the amount of money spent on power when not using solar energy. The average monthly savings when using solar energy is \$86.69. The 95\% confidence interval for the monthly savings is (\$80.41,\$92.97). In this study, the subject using solar power in their home paid \$8,000 (after government subsidies) to install the solar panels. Assuming the average monthly savings, it would take 7.69 years of cumulative saving to cover the installation costs. The 95\% confidence interval for the amount of time it would take to recover the initial investment is (7.17,8.29).
\section{Conclusion}
Using solar power helps save \$86.69 on average each month. However, the RMSE is fairly large given the size of the monthly power bill; predictions for individual months will vary much more than average yearly totals or average seasonal totals. Assuming an initial \$8,000 investment, it will take 7.69 years on average for the investment in solar panels to pay off. All power bill data in this study was gathered from one homeowner. One improvement to be made for future studies is to include more solar panel owners across a broad range of geographical locations. Another element not incorporated into the study was the impact of inflation or changing utility costs on the power bill amount. The data overall could have used more explanatory variables to explain the variation in the power bill. If one plans on living in their home for an extended period of time, solar energy is a cost-effective and environmentally friendly way of using electricity.
\newpage
\begin{thebibliography}{6}
\bibitem{R} R: A Language and Environment for Statistical Computing. R Core Team. R Foundation for Statistical Computing, Vienna, Austria (2017). URL: https://www.R-project.org/
\end{thebibliography}
\end{document}
{ "alphanum_fraction": 0.7853891177, "avg_line_length": 68.8507462687, "ext": "tex", "hexsha": "3cfab04fcf43d3022055d248c4f6fcc8b17746b8", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "5c71914201531289e4614337ba308268579dfc8b", "max_forks_repo_licenses": [ "BSD-3-Clause", "MIT" ], "max_forks_repo_name": "jmmerrell/jmmerrell.github.io", "max_forks_repo_path": "solar_AR1/solar_project.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "5c71914201531289e4614337ba308268579dfc8b", "max_issues_repo_issues_event_max_datetime": "2021-07-18T20:48:10.000Z", "max_issues_repo_issues_event_min_datetime": "2021-07-18T20:48:08.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause", "MIT" ], "max_issues_repo_name": "jmmerrell/jmmerrell.github.io", "max_issues_repo_path": "solar_AR1/solar_project.tex", "max_line_length": 1157, "max_stars_count": null, "max_stars_repo_head_hexsha": "5c71914201531289e4614337ba308268579dfc8b", "max_stars_repo_licenses": [ "BSD-3-Clause", "MIT" ], "max_stars_repo_name": "jmmerrell/jmmerrell.github.io", "max_stars_repo_path": "solar_AR1/solar_project.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 2045, "size": 9226 }
\documentclass[12pt]{article}
\usepackage[ a4paper, headsep=1.5cm, headheight=30pt, left=2.5cm,right=2.5cm,top=4cm,bottom=3cm]{geometry}
\usepackage{fancyhdr}
\usepackage{enumitem}
\usepackage{amsmath}
\usepackage{siunitx}
\usepackage{graphicx}
\usepackage[font=footnotesize]{caption}
\usepackage{float}
\graphicspath{ {./figures/} }
\begin{document}
\begin{titlepage}
\vspace*{2cm}
\LARGE GCfit
\vspace{2cm}
\Huge \textbf{Globular Cluster Observation Data}
\vspace{2cm}
\LARGE Data File Catalog
\vspace{1.5cm}
\vfill
Version 2
January 18, 2022
\end{titlepage}
\section{Introduction}
All datasets are stored in a `Hierarchical Data Format' (HDF5) file. A data group\footnote{Contrary to HDF standards, in all project documentation a `Dataset' does not refer to the typical HDF dataset, but is analogous to a specific HDF group while `Variable's are most analogous to HDF datasets. In this document, the standard HDF group/dataset notation will be used.} must contain all data representing a single observational product, that is, all datasets associated with a single physical process, from a single source, along with all relevant metadata.
All data corresponding to a single data group should exist within the relevant `key' group (given below) under the file root group, which corresponds to a physical process or observable. If multiple groups exist covering the same observation type (Ex: multiple different sources observing proper motion profiles), then those groups must exist as further subgroups within the `key' group (Ex: \texttt{/proper\_motion/sourceA/} and \texttt{/proper\_motion/sourceB/}). However, all subgroups must exist within the key at the same level. No unequal nesting or shared space is allowed.
Each group has a number of required datasets, which are detailed below. Each dataset may have required supplementary datasets as well, such as uncertainties. Each dataset may also require certain metadata fields, such as unit names, to be stored as attributes on the dataset itself.
\section{Attributes}
Overall cluster attributes and metadata are stored as attributes to the file root group. Certain attributes are required for fitting certain observables. Some attributes, when required for model creation, are given default values if they do not exist in the file. All attributes stored must correspond to the units given below as they will be assumed at runtime.
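For illustration, the root-group attributes described here could be read back with a few lines of Python. This is only a sketch: it assumes the \texttt{h5py} package, the file name is a placeholder, and the attribute names and defaults are those listed in the table below.
\begin{verbatim}
# Sketch: read cluster-level attributes from the root group of a data file.
# Assumes h5py; "cluster.hdf5" is a placeholder file name, and the defaults
# mirror the table below.
import h5py

with h5py.File("cluster.hdf5", "r") as f:
    age  = f.attrs.get("age", 12)       # Gyr, defaulted if absent
    FeHe = f.attrs.get("FeHe", -1.00)   # dex, defaulted if absent
    RA   = f.attrs["RA"]                # degrees, required for mass functions
    keys = list(f.keys())               # one `key' group per observable
\end{verbatim}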
\begin{center} \begin{table}[H] \begin{tabular}{ | c | c | c | c | c | } \hline Variable & Attribute Name & Notes & Default Value & Units \\ \hline\hline Galactic Longitude & \texttt{l} & Required for pulsar fitting & N/A & degrees \\ \hline Galactic Latitude & \texttt{b} & Required for pulsar fitting & N/A & degrees \\ \hline Right Ascension & \texttt{RA} & Required for mass function fitting & N/A & degrees \\ \hline Declination & \texttt{DEC} & Required for mass function fitting & N/A & degrees \\ \hline Metallicity [Fe/He] & \texttt{FeHe} & Defines mass function evolution & -1.00 & dex \\ \hline Age & \texttt{age} & Defines mass function evolution & 12 & Gyr \\ \hline Total Proper Motion & \texttt{\(\mu\)} & Required for pulsar fitting & N/A & mas/yr \\ \hline Total escape rate \(\dot{N}\) & \texttt{Ndot} & Defines mass function evolution & 0 & \\ \hline \end{tabular} \end{table} \end{center} % TODO document the sources of ^ somehow (and units as well), within the file \newpage \input{DP_initials} \newpage \section{Data Products} * denotes required fields \input{DP_pulsar.tex} \input{DP_numberdensity.tex} \input{DP_propermotion.tex} \input{DP_velocitydispersion.tex} \input{DP_massfunction.tex} \end{document}
{ "alphanum_fraction": 0.7189762151, "avg_line_length": 29.0827067669, "ext": "tex", "hexsha": "bfede602674001f75920c338c47ddafe80ce29c3", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-02-07T21:21:24.000Z", "max_forks_repo_forks_event_min_datetime": "2022-02-07T21:21:24.000Z", "max_forks_repo_head_hexsha": "f26f004bb9caf7429cbae23a6ca559fad42f3498", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "pjs902/GCfit", "max_forks_repo_path": "docs/source/raws/DPC/DPC.tex", "max_issues_count": 3, "max_issues_repo_head_hexsha": "f26f004bb9caf7429cbae23a6ca559fad42f3498", "max_issues_repo_issues_event_max_datetime": "2022-03-29T17:44:09.000Z", "max_issues_repo_issues_event_min_datetime": "2022-02-09T16:04:32.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "pjs902/GCfit", "max_issues_repo_path": "docs/source/raws/DPC/DPC.tex", "max_line_length": 80, "max_stars_count": 1, "max_stars_repo_head_hexsha": "f26f004bb9caf7429cbae23a6ca559fad42f3498", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "pjs902/GCfit", "max_stars_repo_path": "docs/source/raws/DPC/DPC.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-22T17:54:07.000Z", "max_stars_repo_stars_event_min_datetime": "2022-01-22T17:54:07.000Z", "num_tokens": 1022, "size": 3868 }
%%
%% This file is part of ICTP RegCM.
%%
%% ICTP RegCM is free software: you can redistribute it and/or modify
%% it under the terms of the GNU General Public License as published by
%% the Free Software Foundation, either version 3 of the License, or
%% (at your option) any later version.
%%
%% ICTP RegCM is distributed in the hope that it will be useful,
%% but WITHOUT ANY WARRANTY; without even the implied warranty of
%% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
%% GNU General Public License for more details.
%%
%% You should have received a copy of the GNU General Public License
%% along with ICTP RegCM. If not, see <http://www.gnu.org/licenses/>.
%%
\section*{Acknowledgements}
This paper is dedicated to all those who have contributed to the growth of the RegCM system over the past 20+ years, the members (800+) of the RegCNET, and the ICTP.
% vim: tabstop=8 expandtab shiftwidth=2 softtabstop=2
{ "alphanum_fraction": 0.7306079665, "avg_line_length": 39.75, "ext": "tex", "hexsha": "c38e0ceb592f426e5a5d81704769b05a24897a9c", "lang": "TeX", "max_forks_count": 17, "max_forks_repo_forks_event_max_datetime": "2021-11-14T06:55:20.000Z", "max_forks_repo_forks_event_min_datetime": "2019-06-10T12:49:05.000Z", "max_forks_repo_head_hexsha": "bda1c78790f0a1501916d0979b843216a08b2cef", "max_forks_repo_licenses": [ "AFL-1.1" ], "max_forks_repo_name": "taobrienlbl/RegCM", "max_forks_repo_path": "Doc/ReferenceManual/ack.tex", "max_issues_count": 9, "max_issues_repo_head_hexsha": "bda1c78790f0a1501916d0979b843216a08b2cef", "max_issues_repo_issues_event_max_datetime": "2021-09-24T11:26:46.000Z", "max_issues_repo_issues_event_min_datetime": "2020-02-20T06:43:03.000Z", "max_issues_repo_licenses": [ "AFL-1.1" ], "max_issues_repo_name": "taobrienlbl/RegCM", "max_issues_repo_path": "Doc/ReferenceManual/ack.tex", "max_line_length": 73, "max_stars_count": 27, "max_stars_repo_head_hexsha": "bda1c78790f0a1501916d0979b843216a08b2cef", "max_stars_repo_licenses": [ "AFL-1.1" ], "max_stars_repo_name": "taobrienlbl/RegCM", "max_stars_repo_path": "Doc/ReferenceManual/ack.tex", "max_stars_repo_stars_event_max_datetime": "2021-11-15T08:55:01.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-23T08:36:25.000Z", "num_tokens": 240, "size": 954 }
\SetAPI{J-C} \section{ValueHolders} \label{feature:ValueHolderContainer} \ClearAPI \TODO
{ "alphanum_fraction": 0.8068181818, "avg_line_length": 17.6, "ext": "tex", "hexsha": "2a6ae36d0a0f5bd0cac4403c7678a3078dca300b", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2022-01-08T12:54:51.000Z", "max_forks_repo_forks_event_min_datetime": "2018-10-28T14:05:27.000Z", "max_forks_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "Dennis-Koch/ambeth", "max_forks_repo_path": "doc/reference-manual/tex/feature/ValueHolderContainer.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_issues_repo_issues_event_max_datetime": "2022-01-21T23:15:36.000Z", "max_issues_repo_issues_event_min_datetime": "2017-04-24T06:55:18.000Z", "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "Dennis-Koch/ambeth", "max_issues_repo_path": "doc/reference-manual/tex/feature/ValueHolderContainer.tex", "max_line_length": 36, "max_stars_count": null, "max_stars_repo_head_hexsha": "8552b210b8b37d3d8f66bdac2e094bf23c8b5fda", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "Dennis-Koch/ambeth", "max_stars_repo_path": "doc/reference-manual/tex/feature/ValueHolderContainer.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 29, "size": 88 }
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage{algpseudocode} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{bm} \usepackage{booktabs} \usepackage{caption} \usepackage{color} \usepackage{commath} \usepackage{empheq} \usepackage{epsfig} \usepackage{framed} \usepackage{graphicx} \usepackage{grffile} \usepackage{listings} \usepackage{mathtools} \usepackage{pdfpages} \usepackage{pgfplots} \usepackage{siunitx} \usepackage{wrapfig} \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=black, filecolor=black, urlcolor=black, citecolor=black, } % Command "alignedbox{}{}" for a box within an align environment % Source: http://www.latex-community.org/forum/viewtopic.php?f=46&t=8144 \newlength\dlf % Define a new measure, dlf \newcommand\alignedbox[2]{ % Argument #1 = before & if there were no box (lhs) % Argument #2 = after & if there were no box (rhs) & % Alignment sign of the line { \settowidth\dlf{$\displaystyle #1$} % The width of \dlf is the width of the lhs, with a displaystyle font \addtolength\dlf{\fboxsep+\fboxrule} % Add to it the distance to the box, and the width of the line of the box \hspace{-\dlf} % Move everything dlf units to the left, so that & #1 #2 is aligned under #1 & #2 \boxed{#1 #2} % Put a box around lhs and rhs } } % Default fixed font does not support bold face \DeclareFixedFont{\ttb}{T1}{txtt}{bx}{n}{12} % for bold \DeclareFixedFont{\ttm}{T1}{txtt}{m}{n}{12} % for normal \DeclareMathOperator{\atantwo}{atan2} \DeclareMathOperator{\acos}{acos} \def\du#1{\underline{\underline{#1}}} \definecolor{codegreen}{rgb}{0,0.6,0} \definecolor{codegray}{rgb}{0.5,0.5,0.5} \definecolor{codepurple}{rgb}{0.58,0,0.82} \definecolor{backcolour}{rgb}{0.95,0.95,0.92} \lstdefinestyle{mystyle}{ backgroundcolor=\color{backcolour}, commentstyle=\color{codegreen}, keywordstyle=\color{magenta}, numberstyle=\tiny\color{codegray}, stringstyle=\color{codepurple}, basicstyle=\footnotesize, breakatwhitespace=false, breaklines=true, captionpos=b, keepspaces=true, numbers=left, numbersep=5pt, showspaces=false, showstringspaces=false, showtabs=false, tabsize=2 } \lstset{style=mystyle} \author{John Karasinski} \title{Space Station Remote Manipulator System (SSRMS) Kinematics} \begin{document} \maketitle \tableofcontents \clearpage \section{Introduction} \begin{figure}[b!] \includegraphics[width=\textwidth]{ssrms.jpg} \caption{Orbital ATK's Cygnus cargo spacecraft is captured using the Canadarm2 robotic arm on the International Space Station~\cite{ssrms_cc}.} \label{ssrms_image} \end{figure} The Space Station Remote Manipulator System (SSRMS) was designed and built to construct the International Space Station (ISS) and grapple with visiting space vehicles. The SSRMS is a seven degree of freedom (DoF) manipulator consisting entirely of revolute joints. The arm is symmetric, consisting of a 3DoF (roll, yaw, pitch) ``shoulder'', an ``elbow'' pitch joint, and a 3DoF (pitch, yaw, roll) ``wrist''. Due to this symmetric structure, the arm has the ability to ``walk'' along the station, greatly increasing it's available working space. The arm can lock the wrist to a grapple fixture, then disconnect the shoulder (which becomes the new wrist) to walk along the station. See Figure~\ref{ssrms_image} for a picture of the arm grappling a visiting Cygnus vehicle. The SSRMS is operating from one of the two Robotic Work Stations (RWS) located in either the Cupola or the Destiny module on the ISS. 
While operating the arm from the RWS, astronauts commonly lock one of the shoulder joints, which allows for more predictable movement of the arm. The shoulder roll is the most commonly locked joint during training and operation of the arm~\cite{astro_emails}. For this reason, we will consider the shoulder roll joint (the first joint of the arm) to be locked at a fixed angle for the majority of this report. We will begin by defining the D-H parameters and the transformation matrices for each joint, solve for the inverse kinematics, and then work through two different Jacobians.
\section{Finite Kinematic Analysis}
\subsection{Denavit-Hartenberg Parameters}
\begin{figure}[b!]
\includegraphics[width=\textwidth]{dh.jpg}
\caption{The Denavit-Hartenberg (D-H) parameters for the Space Station Remote Manipulator System (SSRMS)~\cite{xu2014analytical}.}
\label{dh_params}
\end{figure}
The Denavit-Hartenberg parameters form a minimal representation of a kinematic linkage of a robotic arm. These four parameters are the joint angle, $\theta$, the link twist angle, $\alpha$, the link length, $a$, and the joint offset, $d$. These parameters are identified by inspection, and are based off the coordinate frames and lengths defined in Figure~\ref{dh_params}. The resulting D-H parameters are presented in Table~\ref{dhparams}. The parameters are plugged into the generic D-H transformation matrix, see Equation~\ref{dh_matrix}, where $\phi$ denotes the joint angle. This equation transforms positions and rotations from the $i^{th}$ to the $(i+1)^{th}$ coordinate frames.
\begin{align}
T_{i, i+1} &= \left[\begin{matrix} \cos{\left (\phi \right )} & - \cos{\left (\alpha \right )} \sin{\left (\phi \right )} & \sin{\left (\alpha \right )} \sin{\left (\phi \right )} & a \cos{\left (\phi \right )}\\ \sin{\left (\phi \right )} & \cos{\left (\alpha \right )} \cos{\left (\phi \right )} & - \sin{\left (\alpha \right )} \cos{\left (\phi \right )} & a \sin{\left (\phi \right )}\\ 0 & \sin{\left (\alpha \right )} & \cos{\left (\alpha \right )} & d\\ 0 & 0 & 0 & 1 \end{matrix}\right]
\label{dh_matrix}
\end{align}
\begin{table}[h]
\centering
\begin{tabular}{c|*{4}{c}}
\toprule
$i$ & $\theta_i$ (deg) & $\alpha_i$ (deg) & $a_i$ & $d_i$ \\
\midrule
1 & 90 & 90 & 0 & $d_1$ \\
2 & 90 & 90 & 0 & $d_2$ \\
3 & 0 & 0 & $a_3$ & $d_3$ \\
4 & 0 & 0 & $a_4$ & 0 \\
5 & 180 & 90 & 0 & 0 \\
6 & -90 & 90 & 0 & $d_6$ \\
7 & 180 & 90 & 0 & $d_7$ \\
\bottomrule
\end{tabular}
\caption{The Denavit-Hartenberg parameters for the SSRMS. These parameters are the joint angle, $\theta$, the link twist angle, $\alpha$, the link length, $a$, and the joint offset, $d$.
These $\theta_i$s give the initial or ``zero-displacement'' configuration, but each $\theta_i$ is modeled as an individual variable below.} \label{dhparams} \end{table} The resulting seven matrices are therefore \begin{align*} T_{01} &= \left[\begin{matrix}\cos{\left (\theta_{1} \right )} & 0 & \sin{\left (\theta_{1} \right )} & 0\\\sin{\left (\theta_{1} \right )} & 0 & - \cos{\left (\theta_{1} \right )} & 0\\0 & 1 & 0 & d_{1}\\0 & 0 & 0 & 1\end{matrix}\right] &&T_{12} = \left[\begin{matrix}\cos{\left (\theta_{2} \right )} & 0 & \sin{\left (\theta_{2} \right )} & 0\\\sin{\left (\theta_{2} \right )} & 0 & - \cos{\left (\theta_{2} \right )} & 0\\0 & 1 & 0 & d_{2}\\0 & 0 & 0 & 1\end{matrix}\right] \\ T_{23} &= \left[\begin{matrix}\cos{\left (\theta_{3} \right )} & - \sin{\left (\theta_{3} \right )} & 0 & a_{3} \cos{\left (\theta_{3} \right )}\\\sin{\left (\theta_{3} \right )} & \cos{\left (\theta_{3} \right )} & 0 & a_{3} \sin{\left (\theta_{3} \right )}\\0 & 0 & 1 & d_{3}\\0 & 0 & 0 & 1\end{matrix}\right] &&T_{34} = \left[\begin{matrix}\cos{\left (\theta_{4} \right )} & - \sin{\left (\theta_{4} \right )} & 0 & a_{4} \cos{\left (\theta_{4} \right )}\\\sin{\left (\theta_{4} \right )} & \cos{\left (\theta_{4} \right )} & 0 & a_{4} \sin{\left (\theta_{4} \right )}\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right] \\ T_{45} &= \left[\begin{matrix}\cos{\left (\theta_{5} \right )} & 0 & \sin{\left (\theta_{5} \right )} & 0\\\sin{\left (\theta_{5} \right )} & 0 & - \cos{\left (\theta_{5} \right )} & 0\\0 & 1 & 0 & 0\\0 & 0 & 0 & 1\end{matrix}\right] &&T_{56} = \left[\begin{matrix}\cos{\left (\theta_{6} \right )} & 0 & \sin{\left (\theta_{6} \right )} & 0\\\sin{\left (\theta_{6} \right )} & 0 & - \cos{\left (\theta_{6} \right )} & 0\\0 & 1 & 0 & d_{6}\\0 & 0 & 0 & 1\end{matrix}\right] \\ T_{67} &= \left[\begin{matrix}\cos{\left (\theta_{7} \right )} & 0 & \sin{\left (\theta_{7} \right )} & 0\\\sin{\left (\theta_{7} \right )} & 0 & - \cos{\left (\theta_{7} \right )} & 0\\0 & 1 & 0 & d_{7}\\0 & 0 & 0 & 1\end{matrix}\right] \end{align*} \subsubsection{Direct Kinematics} Once these seven matrices are defined, it is often desirable to be able to translate directly from the initial coordinate frame to the final end effector frame. This is easily found by multiplying the successive matrices together to form $T_{07}$, see Equation~\ref{direct}. 
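The same seven transforms, and their product, can also be generated programmatically. The following is only a sketch, assuming the SymPy library is available; the symbol names mirror Table~\ref{dhparams} and Equation~\ref{dh_matrix}:
\begin{lstlisting}[language=Python]
# Sketch: build the seven symbolic D-H transforms of Table 1 and compose them.
# Assumes SymPy; alpha = 90 deg is encoded as sp.pi/2.
import sympy as sp

def dh(theta, alpha, a, d):
    """Generic D-H transform of Equation (1)."""
    ct, st = sp.cos(theta), sp.sin(theta)
    ca, sa = sp.cos(alpha), sp.sin(alpha)
    return sp.Matrix([[ct, -ca*st,  sa*st, a*ct],
                      [st,  ca*ct, -sa*ct, a*st],
                      [0,      sa,     ca,    d],
                      [0,       0,      0,    1]])

t1, t2, t3, t4, t5, t6, t7 = sp.symbols('theta1:8')
a3, a4, d1, d2, d3, d6, d7 = sp.symbols('a3 a4 d1 d2 d3 d6 d7')

# One row per joint: (theta, alpha, a, d), following Table 1.
rows = [(t1, sp.pi/2, 0, d1), (t2, sp.pi/2, 0, d2),
        (t3, 0, a3, d3),      (t4, 0, a4, 0),
        (t5, sp.pi/2, 0, 0),  (t6, sp.pi/2, 0, d6),
        (t7, sp.pi/2, 0, d7)]

T07 = sp.eye(4)
for row in rows:
    T07 = T07 * dh(*row)
T07 = sp.trigsimp(T07)   # compare with the entries listed below
\end{lstlisting}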
Multiplying these matrices together yields
\begin{align}
T_{07} =& T_{01} T_{12} T_{23} T_{34} T_{45} T_{56} T_{67} \label{direct} \\
T_{07}[1, 1] &= u_x = \left(- s_{1} c_{345} + s_{345} c_{1} c_{2}\right) s_{7} + \left(s_{1} s_{345} c_{6} + s_{2} s_{6} c_{1} + c_{1} c_{2} c_{6} c_{345}\right) c_{7} \nonumber \\
T_{07}[2, 1] &= u_y = \left(s_{1} s_{345} c_{2} + c_{1} c_{345}\right) s_{7} + \left(s_{1} s_{2} s_{6} + s_{1} c_{2} c_{6} c_{345} - s_{345} c_{1} c_{6}\right) c_{7} \nonumber \\
T_{07}[3, 1] &= u_z = \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) c_{7} + s_{2} s_{7} s_{345} \nonumber \\
T_{07}[4, 1] &= 0 = 0 \nonumber \\
T_{07}[1, 2] &= v_x = s_{1} s_{6} s_{345} - s_{2} c_{1} c_{6} + s_{6} c_{1} c_{2} c_{345} \nonumber \\
T_{07}[2, 2] &= v_y = - s_{1} s_{2} c_{6} + s_{1} s_{6} c_{2} c_{345} - s_{6} s_{345} c_{1} \nonumber \\
T_{07}[3, 2] &= v_z = s_{2} s_{6} c_{345} + c_{2} c_{6} \nonumber \\
T_{07}[4, 2] &= 0 = 0 \nonumber \\
T_{07}[1, 3] &= w_x = \left(s_{1} c_{345} - s_{345} c_{1} c_{2}\right) c_{7} + \left(s_{1} s_{345} c_{6} + s_{2} s_{6} c_{1} + c_{1} c_{2} c_{6} c_{345}\right) s_{7} \nonumber \\
T_{07}[2, 3] &= w_y = - \left(s_{1} s_{345} c_{2} + c_{1} c_{345}\right) c_{7} + \left(s_{1} s_{2} s_{6} + s_{1} c_{2} c_{6} c_{345} - s_{345} c_{1} c_{6}\right) s_{7} \nonumber \\
T_{07}[3, 3] &= w_z = \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) s_{7} - s_{2} s_{345} c_{7} \nonumber \\
T_{07}[4, 3] &= 0 = 0 \nonumber \\
T_{07}[1, 4] &= p_x = a_{3} s_{1} s_{3} + a_{3} c_{1} c_{2} c_{3} + a_{4} s_{1} s_{34} + a_{4} c_{1} c_{2} c_{34} + d_{2} s_{1} + d_{3} s_{2} c_{1} - d_{6} s_{1} c_{345} + d_{6} s_{345} c_{1} c_{2} \nonumber \\
&\phantom{= a_b =} + d_{7} s_{1} s_{6} s_{345} - d_{7} s_{2} c_{1} c_{6} + d_{7} s_{6} c_{1} c_{2} c_{345} \nonumber \\
T_{07}[2, 4] &= p_y = a_{3} s_{1} c_{2} c_{3} - a_{3} s_{3} c_{1} + a_{4} s_{1} c_{2} c_{34} - a_{4} s_{34} c_{1} - d_{2} c_{1} + d_{3} s_{1} s_{2} + d_{6} s_{1} s_{345} c_{2} + d_{6} c_{1} c_{345} \nonumber \\
&\phantom{= a_b =} - d_{7} s_{1} s_{2} c_{6} + d_{7} s_{1} s_{6} c_{2} c_{345} - d_{7} s_{6} s_{345} c_{1} \nonumber \\
T_{07}[3, 4] &= p_z = a_{3} s_{2} c_{3} + a_{4} s_{2} c_{34} + d_{1} - d_{3} c_{2} + d_{6} s_{2} s_{345} + d_{7} s_{2} s_{6} c_{345} + d_{7} c_{2} c_{6} \nonumber \\
T_{07}[4, 4] &= 1 = 1 \nonumber
\end{align}
\subsection{Joint/Shape Matrices}
We can similarly use joint and shape matrices to arrive at these $T$ matrices. Shape matrices allow for a more general approach compared to D-H matrices, and ``avoid the difficulties that sometimes arise in the use of D-H matrices''~\cite{uicker2013matrix}. For easier readability, we relabel the joints from $1-7$ to $A-G$. All of the joints of the SSRMS are revolute.
A general revolute joint, $h$, can be modeled with the joint matrix of
\begin{align*}
\Phi_h \left( \phi_h \right) = \left[\begin{matrix} \cos\left( \phi_h \right) & -\sin\left( \phi_h \right) & 0 & 0 \\ \sin\left( \phi_h \right) & \cos\left( \phi_h \right) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix}\right]
\end{align*}
We can construct the seven $T$ matrices by
\begin{align}
T_{i,i+1} &= S_{i, j} J_j S_{i+1,j}^{-1} \label{joint_eq} \\
T_{01} &= S_{0A} J_A S_{1A}^{-1} \nonumber \\
T_{12} &= S_{1B} J_B S_{2B}^{-1} \nonumber \\
T_{23} &= S_{2C} J_C S_{3C}^{-1} \nonumber \\
T_{34} &= S_{3D} J_D S_{4D}^{-1} \nonumber \\
T_{45} &= S_{4E} J_E S_{5E}^{-1} \nonumber \\
T_{56} &= S_{5F} J_F S_{6F}^{-1} \nonumber \\
T_{67} &= S_{6G} J_G S_{7G}^{-1} \nonumber
\end{align}
For joints $J_A, J_B, J_C, J_D, J_E, J_F,$ and $J_G$, we also define two shape matrices. Note that the shape matrices of the purely planar joints $C$ and $D$ are pure translations.
\begin{align*}
\phantom{T_{01}}& && S_{0A} = I, S_{1A} = \begin{bmatrix*}[c] 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -d_1 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, \\
S_{1B} &= I, S_{2B} = \begin{bmatrix*}[c] 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -d_2 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, && S_{2C} = I, S_{3C} = \begin{bmatrix*}[c] 1 & 0 & 0 & -a_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -d_3 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, \\
S_{3D} &= I, S_{4D} = \begin{bmatrix*}[c] 1 & 0 & 0 & -a_4 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, && S_{4E} = I, S_{5E} = \begin{bmatrix*}[c] 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, \\
S_{5F} &= I, S_{6F} = \begin{bmatrix*}[c] 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -d_6 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}, && S_{6G} = I, S_{7G} = \begin{bmatrix*}[c] 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & -d_7 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix*}
\end{align*}
Multiplying out these shape and joint matrices according to Equation~\ref{joint_eq} yields the same $T$ matrices as obtained using the Denavit-Hartenberg parameters.
\subsection{Inverse Kinematics}
\subsubsection{Method}
The SSRMS has eight solutions of joint angles for any given pose~\cite{xu2014analytical}. In their 2014 paper, Xu et al. show how to solve for these configurations using a series of flags labeled ``SHOULDER'', ``WRIST'', and ``ELBOW''. After locking the first rotary joint at a known angle, there is a pair of solutions for the second joint, known as the ``SHOULDER'' configuration. With the first two joints known, it is then possible to solve for the final two joints, giving the ``WRIST'' configuration. Finally, the middle three joints can be solved for, giving the ``ELBOW'' configuration.
This technique was inspired by a 1984 paper by Lee and Ziegler which used this geometric approach to solve for the inverse kinematics of PUMA robots~\cite{lee1984geometric}. In their paper, Lee and Ziegler define 4 ``indicators'' (``ARM'', ``ELBOW'', ``WRIST'', and ``FLIP'') based off the geometric configuration of the PUMA robot. These indicators were used to provide consistent solutions for the PUMA robots during movement through their workspace. Lee and Ziegler presented an algorithmic approach which was programmed to show how their method could be used in practice. An example of two of these indicators is shown in Figure~\ref{puma}.
For the SSRMS solution presented below, the SHOULDER, ELBOW, and WRIST configuration indicators take on the follow values~\cite{xu2014analytical} \begin{align*} \text{SHOULDER}&=\begin{cases} +1, & \text{right shoulder}.\\ -1, & \text{left shoulder}. \end{cases}\\ \text{ELBOW}&=\begin{cases} +1, & \text{outside elbow}.\\ -1, & \text{inside elbow}. \end{cases}\\ \text{WRIST}&=\begin{cases} +1, & \text{wrist down}.\\ -1, & \text{wrist up}. \end{cases} \end{align*} \begin{figure}[b!] \begin{centering} \includegraphics[width=0.5\textwidth]{puma.jpg} \caption{Definition of two of the PUMA robotic arm configurations, taken from Lee and Ziegler~\cite{lee1984geometric}.} \label{puma} \end{centering} \end{figure} \subsubsection{SSRMS Solution} In general, for a known end effector pose, we can define~\cite{mae225_notes} \begin{align*} T_{07} &= \left[\begin{matrix} u_x & v_x & w_x & p_x \\ u_y & v_y & w_y & p_y \\ u_z & v_z & w_z & p_z \\ 0 & 0 & 0 & 1 \\ \end{matrix}\right]\\ &= T_{01} T_{12} T_{23} T_{34} T_{45} T_{56} T_{67} \end{align*} Premultiplying both sides by $T_{01}^{-1}$ yields, \begin{align*} T_{01}^{-1} T_{07} = T_{12} T_{23} T_{34} T_{45} T_{56} T_{67} \end{align*} Equating each element $(i,j)$ on both the left and right hand sides yields: \begin{align} u_x c_1 + u_y s_1 &= \left(s_{2} s_{6} + c_{2} c_{6} c_{345}\right) c_{7} + s_{7} s_{345} c_{2} \label{eq1} \\ v_x c_1 + v_y s_1 &= - s_{2} c_{6} + s_{6} c_{2} c_{345} \label{eq5} \\ w_x c_1 + w_y s_1 &= \left(s_{2} s_{6} + c_{2} c_{6} c_{345}\right) s_{7} - s_{345} c_{2} c_{7} \label{eq7} \\ p_x c_1 + p_y s_1 &= a_{3} c_{2} c_{3} + a_{4} c_{2} c_{34} + d_{3} s_{2} + d_{6} s_{345} c_{2} - d_{7} s_{2} c_{6} + d_{7} s_{6} c_{2} c_{345} \label{eq3} \\ u_z &= \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) c_{7} + s_{2} s_{7} s_{345} \label{eq2} \\ v_z &= s_{2} s_{6} c_{345} + c_{2} c_{6} \label{eq6} \\ w_z &= \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) s_{7} - s_{2} s_{345} c_{7} \label{eq8} \\ - d_{1} + p_z &= a_{3} s_{2} c_{3} + a_{4} s_{2} c_{34} - d_{3} c_{2} + d_{6} s_{2} s_{345} + d_{7} s_{2} s_{6} c_{345} + d_{7} c_{2} c_{6} \label{eq4} \\ u_x s_1 - u_y c_1 &= - s_{7} c_{345} + s_{345} c_{6} c_{7} \label{th51}\\ v_x s_1 - v_y c_1 &= s_{6} s_{345} \label{th53} \\ w_x s_1 - w_y c_1 &= s_{7} s_{345} c_{6} + c_{7} c_{345} \label{th52} \\ p_x s_1 - p_y c_1 &= a_{3} s_{3} + a_{4} s_{34} + d_{2} - d_{6} c_{345} + d_{7} s_{6} s_{345} \\ 0 &= 0 \\ 0 &= 0 \\ 0 &= 0 \\ 1 &= 1 \end{align} where we have defined $s_i = \sin{i}, c_i = \cos{i}, s_{ij} = \sin{\left(i+j\right)}, c_{ij} = \cos{\left(i+j\right)}, s_{ijk} = \sin{\left(i+j+k\right)}$ and $c_{ijk} = \cos{\left(i+j+k\right)}$. 
Manipulating the equations, we take $\left(Eq.~\ref{eq1} \right) s_2 - \left(Eq.~\ref{eq2} \right) c_2$ and simplify, producing \begin{align} \left(u_x c_1 + u_y s_1\right) s_2 - u_z c_2 &= s_{6} c_{7} \label{eqc3} \end{align} Similarly, we can do $\left(Eq.~\ref{eq3} \right) s_2-\left(Eq.~\ref{eq4}\right) c_2$ and simplify, which results in \begin{align} \left(p_x c_1 + p_y s_1 \right) s_2 - \left(- d_{1} + p_z \right) c_2 &= d_3 - c_6 d_7 \label{eqc1} \end{align} We can also subtract $\left(Eq.~\ref{eq6} \right) c_2 - \left(Eq.~\ref{eq5} \right) s_2$ \begin{align} v_z c_2 - \left( v_x c_1 + v_y s_1 \right) s_2 &= c_{6} \label{eqc2} \end{align} Finally, we can also subtract $\left(Eq.~\ref{eq7} \right) s_2 - \left(Eq.~\ref{eq8} \right) c_2$ \begin{align} \left(w_x c_1 + w_y s_1\right) s_2 - w_z c_2 &= s_{6} s_{7} \label{eqc4} \end{align} Rearranging Equations~\ref{eqc1} and \ref{eqc2} to be equal to $c_6$ and equating the two yields \begin{align} -d_3 &= \left( \left( v_x d_7 - p_x \right) c_1 + \left( v_y d_7 - p_y \right) s_1 \right) s_2 + \left(-v_z d_7 - d_{1} + p_z \right) c_2 \end{align} Locking the shoulder roll angle to a known angle, $\boxed{\theta_1 = \beta}$, we can solve for $\theta_2$, \begin{align} \boxed{\theta_2 = \mbox{SHOULDER} \cdot \acos \left( \dfrac{d_3}{\sqrt{h_1^2 + q_1^2}} \right) + \atantwo(q_1,h_1)} \end{align} where \begin{align} h_1 &= \left(-v_z d_7 - d_{1} + p_z \right) \\ q_1 &= \left( \left( v_x d_7 - p_x \right) c_{\beta} + \left( v_y d_7 - p_y \right) s_{\beta} \right) \end{align} With $\theta_1$ and $\theta_2$ now known, $\theta_6$ can be solved using Equation~\ref{eqc2}, \begin{align} \boxed{\theta_6 = \mbox{WRIST} \cdot \acos \left(v_z c_2 - \left( v_x c_1 + v_y s_1 \right) s_2 \right)} \end{align} And we can then combine Equations~\ref{eqc3} and~\ref{eqc4}, yielding \begin{align} \boxed{\theta_7 = \atantwo \left( \dfrac{\left(u_x c_1 + u_y s_1\right) s_2 - u_z c_2}{s_6}, \dfrac{\left(w_x c_1 + w_y s_1\right) s_2 - w_z c_2}{s_6}\right)} \end{align} With the shoulder and wrist joints resolved, we can now solve for the middle joints. 
We now take \begin{align*} \left(T_{12}^{-1} \right) \left(T_{17}\right) \left(T_{67}^{-1}\right) \left(T_{56}^{-1}\right) = \left(T_{23}\right) \left(T_{34}\right) \left(T_{45}\right) \end{align*} Taking the left and right hand side $\left(1, 4\right)$ and $\left(2, 4 \right)$ elements from the resulting matrix yields \begin{align} a_{3} c_{3} + a_{4} c_{34} &= d_{6} \left(w_z s_{2} + c_{2}\left(w_x c_{1} + w_y s_{1}\right) \right) c_{7} \nonumber \\ &\phantom{=} - d_{6} \left(u_z s_{2} + c_{2}\left(u_x c_{1} + u_y s_{1} \right) \right) s_{7} \nonumber \\ &\phantom{=} - d_{7} \left(v_z s_{2} + c_{2}\left(v_x c_{1} + v_y s_{1} \right) \right) \nonumber \\ &\phantom{=}+ \left(- d_{1} + p_z\right) s_{2} + c_{2}\left(p_x c_{1} + p_y s_{1}\right) \label{mj1} \\ a_{3} s_{3} + a_{4} s_{34} &= - d_{2} + d_{6} \left(w_x s_{1} - w_y c_{1}\right) c_{7} - d_{6} \left(u_x s_{1} - u_y c_{1}\right) s_{7} \nonumber \\ &\phantom{=} - d_{7} \left(v_x s_{1} - v_y c_{1}\right) + p_x s_{1} - p_y c_{1} \label{mj2} \end{align} $\theta_4$ is then solved by combining the above two equations, resulting in \begin{align} \boxed{\theta_4 = \mbox{ELBOW} \cdot \acos \left( \dfrac{X^2 + Y^2 - a_3^2 - a_4^2}{2 a_3 a_4} \right)} \end{align} where \begin{align*} X &= d_{6} \left( \left(w_z s_{2} + c_{2}\left(w_x c_{1} + w_y s_{1}\right) \right) c_{7} - \left(u_z s_{2} + c_{2}\left(u_x c_{1} + u_y s_{1} \right) \right) s_{7} \right) \nonumber \\ &\phantom{=}- d_{7} \left(v_z s_{2} + c_{2}\left(v_x c_{1} + v_y s_{1} \right) \right) + \left(- d_{1} + p_z\right) s_{2} + c_{2}\left(p_x c_{1} + p_y s_{1}\right) \\ Y &= - d_{2} + d_{6} \left(w_x s_{1} - w_y c_{1}\right) c_{7} - d_{6} \left(u_x s_{1} - u_y c_{1}\right) s_{7} - d_{7} \left(v_x s_{1} - v_y c_{1}\right) + p_x s_{1} - p_y c_{1} \end{align*} Substituting the solution into $\theta_4$ and Equations~\ref{mj1} and~\ref{mj2} and combining yields \begin{align*} \boxed{\theta_3 = \atantwo \left(Y \left( a_3 + a_4 c_4 \right) - X a_4 s_4, X \left(a_3 + a_4 c_4 \right) + Y a_4 s_4 \right)} \end{align*} Subtracting $(Eq.~\ref{th52}) c_7$ and $(Eq.~\ref{th51}) s_7$ yields \begin{align*} c_{345} &= \left(w_x s_1 - w_y c_1\right) c_7 - \left(u_x s_1 - u_y c_1 \right) s_7 \end{align*} And from Equation~\ref{th53} we have \begin{align*} s_{345} = \dfrac{v_x s_1 - v_y c_1 }{s_{6}} \end{align*} which we can combine to solve for the last joint \begin{align*} \theta_5 &= \left(\theta_3 + \theta_4 + \theta_5 \right) - \left(\theta_3 + \theta_4 \right) \\ \alignedbox{\theta_5}{=\atantwo \left(s_{345}, c_{345} \right) - \left(\theta_3 + \theta_4 \right)} \end{align*} \subsection{Numerical Example} For practical purposes, the link length and offset values can be set to \begin{align*} a_{3} = 2.30, a_{4} = 2.30, d_{1} = 0.65, d_{2} = 0.30, d_{3} = 0.90, d_{6} = 0.30, d_{7} = 0.65 \end{align*} Note that $a_3=a_4, d_1=d_7, \text{and } d_2=d_6$, as the arm is symmetric. 
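As a quick numeric sanity check of the direct kinematics, the chain of Equation~\ref{dh_matrix} transforms can be evaluated with a short script. This is only a sketch, assuming NumPy, using the link values just listed and the zero-displacement angles of Table~\ref{dhparams}:
\begin{lstlisting}[language=Python]
# Sketch: evaluate T07 numerically for the zero-displacement configuration.
# Assumes NumPy; link values are those given above.
import numpy as np

def dh(theta, alpha, a, d):
    """Numeric D-H transform of Equation (1); angles in radians."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -ca*st,  sa*st, a*ct],
                     [st,  ca*ct, -sa*ct, a*st],
                     [0.,     sa,     ca,    d],
                     [0.,     0.,     0.,   1.]])

a3 = a4 = 2.30
d1, d2, d3, d6, d7 = 0.65, 0.30, 0.90, 0.30, 0.65
thetas = np.radians([90, 90, 0, 0, 180, -90, 180])  # Table 1 zero configuration
alphas = np.radians([90, 90, 0, 0, 90, 90, 90])
a_len  = [0, 0, a3, a4, 0, 0, 0]
d_off  = [d1, d2, d3, 0, 0, d6, d7]

T07 = np.eye(4)
for th, al, a, d in zip(thetas, alphas, a_len, d_off):
    T07 = T07 @ dh(th, al, a, d)
print(np.round(T07, 3))  # compare with the matrix given in the next example
\end{lstlisting}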
As an example, plugging in these values and the initial angles given in Table~\ref{dhparams} into Equation~\ref{direct} yields
\begin{align*}
T_{07} &= \left[\begin{matrix}
0 & 0 & 1 & 0.6 \\
1 & 0 & 0 & 0.9 \\
0 & 1 & 0 & 5.9 \\
0 & 0 & 0 & 1 \\
\end{matrix}\right] \\
\end{align*}
As another example, given the end effector pose
\begin{align*}
T_{07} &= \left[\begin{matrix}
0.8021 & 0.1217 & 0.5846 & 2.4790 \\
-0.5859 & 0.3495 & 0.7311 & -2.4734 \\
-0.1154 & 0.9290 & 0.3517 & -0.4927 \\
0 & 0 & 0 & 1
\end{matrix}\right]
\end{align*}
and locking the first joint variable $\theta_1 = \beta = 60^\circ$, we can solve for the eight possible configurations of the arm; see Table~\ref{ik_res}. The equations used to solve for these values are taken from above, and the code used to generate these results is in the Appendix.
\begin{table}[h]
\centering
\begin{tabular}{*{10}{r}}
\toprule
S & E & W & $\theta_1$ & $\theta_2$ & $\theta_3$ & $\theta_4$ & $\theta_5$ & $\theta_6$ & $\theta_7$ \\
\midrule
1 & 1 & 1 & 60.000 & -20.268 & 64.074 & 79.722 & -149.770 & 138.205 & -77.426 \\
1 & 1 & -1 & 60.000 & -20.268 & 58.153 & 99.444 & 16.428 & -138.205 & 102.573 \\
1 & -1 & 1 & 60.000 & -20.268 & 143.797 & -79.722 & -70.048 & 138.205 & -77.426 \\
-1 & 1 & 1 & 60.000 & -109.087 & 35.576 & 79.140 & -119.938 & 49.659 & -85.275 \\
1 & -1 & -1 & 60.000 & -20.268 & 157.598 & -99.444 & 115.872 & -138.205 & 102.573 \\
-1 & 1 & -1 & 60.000 & -109.087 & 23.189 & 100.025 & 51.565 & -49.659 & 94.724 \\
-1 & -1 & 1 & 60.000 & -109.087 & 114.717 & -79.140 & -40.797 & 49.659 & -85.275 \\
-1 & -1 & -1 & 60.000 & -109.087 & 123.214 & -100.025 & 151.590 & -49.659 & 94.724 \\
\bottomrule
\end{tabular}
\caption{The eight possible configurations for locking the first joint, where S, E, and W stand for ``SHOULDER'', ``ELBOW'', and ``WRIST'', respectively.}
\label{ik_res}
\end{table}

\section{Differential Kinematic Analysis}
\subsection{Method 1: Kinematic Jacobian}
The first Jacobian is based on a kinematic approach.
Where $\hat{z}_i$ is taken from the last column of $T_{0i}$, and can be defined~\cite{mae225_notes} \begin{align*} T_{0i} &= \left[\begin{matrix} & \du{\Theta}_i & & \vdots & a_i \\ & \hdots & & & \hdots \\ 0 & 0 & 0 & \vdots & 1 \end{matrix}\right] \\ {\Theta}_i &= \left[\begin{matrix} & & \\ x_i & y_i & z_i \\ & & \end{matrix}\right] \\ \hat{z}_i &= \left( \prod_{i=0}^n {\Theta}_i \right) z_i \end{align*} and $\vec{r}_i$ is defined \begin{align*} \vec{r}_i = \sum_{i=0}^n \vec{a}_i \end{align*} With these definitions, we can find the Jacobian via \begin{align*} \dot{\vec{P}} &= \sum_{i=0}^{n} \left( \hat{z}_i \times \vec{r}_i \right) \dot{\theta}_i \\ \vec{w} &= \sum_{i=0}^{n} \dot{\theta}_i \hat{z}_i \\ {J}_K \dot{q} &= \left[\begin{matrix} \underline{\dot{P}} \\ \underline{\vec{w}} \end{matrix}\right] \\ \left[\begin{matrix} \hat{z}_1 \times \vec{r}_1 & \hat{z}_2 \times \vec{r}_2 & \cdots & \hat{z}_7 \times \vec{r}_7 \\ \hat{z}_1 & \hat{z}_2 & \cdots & \hat{z}_7 \end{matrix}\right] \left[\begin{matrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \vdots \\ \dot{\theta}_7 \\ \end{matrix}\right] &= \left[\begin{matrix} \underline{\dot{P}}_{EE} \\ \underline{w}_{EE} \\ \end{matrix}\right] \\ \end{align*} The results of these equations yield in the kinematic Jacobian, \begin{align*} J_{11} &= - a_{3} s_{1} c_{2} c_{3} + a_{3} s_{3} c_{1} - a_{4} s_{1} c_{2} c_{34} + a_{4} s_{34} c_{1} + d_{2} c_{1} - d_{3} s_{1} s_{2} \\ &\phantom{= }- d_{6} s_{1} s_{345} c_{2} - d_{6} c_{1} c_{345} + d_{7} s_{1} s_{2} c_{6} - d_{7} s_{1} s_{6} c_{2} c_{345} + d_{7} s_{6} s_{345} c_{1} \\ J_{21} &= a_{3} s_{1} s_{3} + a_{3} c_{1} c_{2} c_{3} + a_{4} s_{1} s_{34} + a_{4} c_{1} c_{2} c_{34} + d_{2} s_{1} + d_{3} s_{2} c_{1} \\ &\phantom{= }- d_{6} s_{1} c_{345} + d_{6} s_{345} c_{1} c_{2} + d_{7} s_{1} s_{6} s_{345} - d_{7} s_{2} c_{1} c_{6} + d_{7} s_{6} c_{1} c_{2} c_{345} \\ J_{31} &= 0 \\ J_{41} &= 0 \\ J_{51} &= 0 \\ J_{61} &= 1 \\ \\ J_{12} &= - \left(a_{3} s_{2} c_{3} + a_{4} s_{2} c_{34} - d_{3} c_{2} + d_{6} s_{2} s_{345} + d_{7} s_{2} s_{6} c_{345} + d_{7} c_{2} c_{6}\right) c_{1} \\ J_{22} &= - \left(a_{3} s_{2} c_{3} + a_{4} s_{2} c_{34} - d_{3} c_{2} + d_{6} s_{2} s_{345} + d_{7} s_{2} s_{6} c_{345} + d_{7} c_{2} c_{6}\right) s_{1} \\ J_{32} &= a_{3} c_{2} c_{3} + a_{4} c_{2} c_{34} + d_{3} s_{2} + d_{6} s_{345} c_{2} - d_{7} s_{2} c_{6} + d_{7} s_{6} c_{2} c_{345} \\ J_{42} &= s_{1} \\ J_{52} &= - c_{1} \\ J_{62} &= 0 \end{align*} \begin{align*} J_{13} &= a_{3} s_{1} c_{3} - a_{3} s_{3} c_{1} c_{2} + a_{4} s_{1} c_{34} - a_{4} s_{34} c_{1} c_{2} + d_{6} s_{1} s_{345} \\ &\phantom{= }+ d_{6} c_{1} c_{2} c_{345} + d_{7} s_{1} s_{6} c_{345} - d_{7} s_{6} s_{345} c_{1} c_{2} \\ J_{23} &= - a_{3} s_{1} s_{3} c_{2} - a_{3} c_{1} c_{3} - a_{4} s_{1} s_{34} c_{2} - a_{4} c_{1} c_{34} + d_{6} s_{1} c_{2} c_{345} \\ &\phantom{= }- d_{6} s_{345} c_{1} - d_{7} s_{1} s_{6} s_{345} c_{2} - d_{7} s_{6} c_{1} c_{345} \\ J_{33} &= \left(- a_{3} s_{3} - a_{4} s_{34} + d_{6} c_{345} - d_{7} s_{6} s_{345}\right) s_{2} \\ J_{43} &= s_{2} c_{1} \\ J_{53} &= s_{1} s_{2} \\ J_{63} &= - c_{2} \\ \\ J_{14} &= a_{4} s_{1} c_{34} - a_{4} s_{34} c_{1} c_{2} + d_{6} s_{1} s_{345} + d_{6} c_{1} c_{2} c_{345} + d_{7} s_{1} s_{6} c_{345} - d_{7} s_{6} s_{345} c_{1} c_{2} \\ J_{24} &= - a_{4} s_{1} s_{34} c_{2} - a_{4} c_{1} c_{34} + d_{6} s_{1} c_{2} c_{345} - d_{6} s_{345} c_{1} - d_{7} s_{1} s_{6} s_{345} c_{2} - d_{7} s_{6} c_{1} c_{345} \\ J_{34} &= \left(- a_{4} s_{34} + d_{6} c_{345} - d_{7} s_{6} s_{345}\right) s_{2} \\ J_{44} &= 
s_{2} c_{1} \\ J_{54} &= s_{1} s_{2} \\ J_{64} &= - c_{2} \\ \\ J_{15} &= d_{6} s_{1} s_{345} + d_{6} c_{1} c_{2} c_{345} + d_{7} s_{1} s_{6} c_{345} - d_{7} s_{6} s_{345} c_{1} c_{2} \\ J_{25} &= d_{6} s_{1} c_{2} c_{345} - d_{6} s_{345} c_{1} - d_{7} s_{1} s_{6} s_{345} c_{2} - d_{7} s_{6} c_{1} c_{345} \\ J_{35} &= \left(d_{6} c_{345} - d_{7} s_{6} s_{345}\right) s_{2} \\ J_{45} &= s_{2} c_{1} \\ J_{55} &= s_{1} s_{2} \\ J_{65} &= - c_{2} \\ \\ J_{16} &= d_{7} \left(s_{1} s_{345} c_{6} + s_{2} s_{6} c_{1} + c_{1} c_{2} c_{6} c_{345}\right) \\ J_{26} &= d_{7} \left(s_{1} s_{2} s_{6} + s_{1} c_{2} c_{6} c_{345} - s_{345} c_{1} c_{6}\right) \\ J_{36} &= d_{7} \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) \\ J_{46} &= - s_{1} c_{345} + s_{345} c_{1} c_{2} \\ J_{56} &= s_{1} s_{345} c_{2} + c_{1} c_{345} \\ J_{66} &= s_{2} s_{345} \\ \\ J_{17} &= 0 \\ J_{27} &= 0 \\ J_{37} &= 0 \\ J_{47} &= \left(s_{1} s_{345} + c_{1} c_{2} c_{345}\right) s_{6} - s_{2} c_{1} c_{6} \\ J_{57} &= \left(s_{1} c_{2} c_{345} - s_{345} c_{1}\right) s_{6} - s_{1} s_{2} c_{6} \\ J_{67} &= s_{2} s_{6} c_{345} + c_{2} c_{6} \\ \end{align*} \subsection{Method 2: Geometric Jacobian} The geometric Jacobian is formed from a linearized form of the kinematic equations~\cite{mae225_notes}. We first form our $^i D$ matrices from \begin{align*} ^i D = T_{i-1} Q_i T_{i-1}^{-1} \end{align*} And $T_{0}= {I}, T_{i} = \prod_0^n A_n,$ where $A_n$ are the matrices formed by the D-H parameters. Where, as all our joints are revolute, the derivative operator matrix is \begin{align*} Q &= \left[\begin{matrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{matrix}\right] \end{align*} Selecting elements from these $^i D$ matrices, we form the Jacobian via \begin{align*} J_G &= \left[\begin{matrix} ^0D_{14} & ^1D_{14} & ^2D_{14} & ^3D_{14} & ^4D_{14} & ^5D_{14} & ^6D_{14} \\ ^0D_{24} & ^1D_{24} & ^2D_{24} & ^3D_{24} & ^4D_{24} & ^5D_{24} & ^6D_{24} \\ ^0D_{34} & ^1D_{34} & ^2D_{34} & ^3D_{34} & ^4D_{34} & ^5D_{34} & ^6D_{34} \\ ^0D_{32} & ^1D_{32} & ^2D_{32} & ^3D_{32} & ^4D_{32} & ^5D_{32} & ^6D_{32} \\ ^0D_{13} & ^1D_{13} & ^2D_{13} & ^3D_{13} & ^4D_{13} & ^5D_{13} & ^6D_{13} \\ ^0D_{21} & ^1D_{21} & ^2D_{21} & ^3D_{21} & ^4D_{21} & ^5D_{21} & ^6D_{21} \\ \end{matrix}\right] \end{align*} And \begin{align*} \dot{x} &= J_G \dot{q} \\ \left[\begin{matrix} \mu_x \\ \mu_y \\ \mu_z \\ \omega_x \\ \omega_y \\ \omega_z \\ \end{matrix}\right] &= \left[\begin{matrix} ^0D_{14} & ^1D_{14} & ^2D_{14} & ^3D_{14} & ^4D_{14} & ^5D_{14} & ^6D_{14} \\ ^0D_{24} & ^1D_{24} & ^2D_{24} & ^3D_{24} & ^4D_{24} & ^5D_{24} & ^6D_{24} \\ ^0D_{34} & ^1D_{34} & ^2D_{34} & ^3D_{34} & ^4D_{34} & ^5D_{34} & ^6D_{34} \\ ^0D_{32} & ^1D_{32} & ^2D_{32} & ^3D_{32} & ^4D_{32} & ^5D_{32} & ^6D_{32} \\ ^0D_{13} & ^1D_{13} & ^2D_{13} & ^3D_{13} & ^4D_{13} & ^5D_{13} & ^6D_{13} \\ ^0D_{21} & ^1D_{21} & ^2D_{21} & ^3D_{21} & ^4D_{21} & ^5D_{21} & ^6D_{21} \\ \end{matrix}\right] \left[\begin{matrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \\ \dot{\theta}_4 \\ \dot{\theta}_5 \\ \dot{\theta}_6 \\ \dot{\theta}_7 \\ \end{matrix}\right] \end{align*} Solving for these $^i D$ matrices and selecting the identified elements results in \begin{align*} J_{11} &= 0 && J_{12} = d_{1} c_{1} &&& J_{13} = &&&& - d_{1} s_{1} s_{2} + d_{2} c_{1} c_{2} \\ J_{21} &= 0 && J_{22} = d_{1} s_{1} &&& J_{23} = &&&& d_{1} s_{2} c_{1} + d_{2} s_{1} c_{2} \\ J_{31} &= 0 && J_{32} = 0 &&& J_{33} = &&&& d_{2} s_{2} \\ J_{41} &= 0 && J_{42} = s_{1} &&& J_{43} = &&&& s_{2} 
c_{1} \\ J_{51} &= 0 && J_{52} = - c_{1} &&& J_{53} = &&&& s_{1} s_{2} \\ J_{61} &= 1 && J_{62} = 0 &&& J_{63} = &&&& - c_{2} \\ \end{align*} \begin{align*} J_{14} &= - a_{3} s_{1} c_{3} + a_{3} s_{3} c_{1} c_{2} - d_{1} s_{1} s_{2} + d_{2} c_{1} c_{2} \\ J_{24} &= a_{3} s_{1} s_{3} c_{2} + a_{3} c_{1} c_{3} + d_{1} s_{2} c_{1} + d_{2} s_{1} c_{2} \\ J_{34} &= \left(a_{3} s_{3} + d_{2}\right) s_{2} \\ J_{44} &= s_{2} c_{1} \\ J_{54} &= s_{1} s_{2} \\ J_{64} &= - c_{2} \\ \\ J_{15} &= - a_{3} s_{1} c_{3} + a_{3} s_{3} c_{1} c_{2} - a_{4} s_{1} c_{34} + a_{4} s_{34} c_{1} c_{2} - d_{1} s_{1} s_{2} + d_{2} c_{1} c_{2} \\ J_{25} &= a_{3} s_{1} s_{3} c_{2} + a_{3} c_{1} c_{3} + a_{4} s_{1} s_{34} c_{2} + a_{4} c_{1} c_{34} + d_{1} s_{2} c_{1} + d_{2} s_{1} c_{2} \\ J_{35} &= \left(a_{3} s_{3} + a_{4} s_{34} + d_{2}\right) s_{2} \\ J_{45} &= s_{2} c_{1} \\ J_{55} &= s_{1} s_{2} \\ J_{65} &= - c_{2} \\ \\ J_{16} &= - \left(d_{1} c_{2} - d_{3}\right) \left(s_{1} s_{345} + c_{1} c_{2} c_{345}\right) - \left(a_{3} c_{45} + a_{4} c_{5} + d_{1} s_{2} c_{345} + d_{2} s_{345}\right) s_{2} c_{1} \\ J_{26} &= - \left(d_{1} c_{2} - d_{3}\right) \left(s_{1} c_{2} c_{345} - s_{345} c_{1}\right) - \left(a_{3} c_{45} + a_{4} c_{5} + d_{1} s_{2} c_{345} + d_{2} s_{345}\right) s_{1} s_{2} \\ J_{36} &= a_{3} c_{2} c_{45} + a_{4} c_{2} c_{5} + d_{2} s_{345} c_{2} + d_{3} s_{2} c_{345} \\ J_{46} &= - s_{1} c_{345} + s_{345} c_{1} c_{2} \\ J_{56} &= s_{1} s_{345} c_{2} + c_{1} c_{345} \\ J_{66} &= s_{2} s_{345} \\ \\ J_{17} &= \left(\left(s_{1} s_{345} + c_{1} c_{2} c_{345}\right) c_{6} + s_{2} s_{6} c_{1}\right) \left(a_{3} s_{45} + a_{4} s_{5} + d_{1} s_{2} s_{345} - d_{2} c_{345} + d_{6}\right) \\ &\phantom{=}+ \left(s_{1} c_{345} - s_{345} c_{1} c_{2}\right) \left(a_{3} c_{6} c_{45} + a_{4} c_{5} c_{6} + d_{1} s_{2} c_{6} c_{345} - d_{1} s_{6} c_{2} + d_{2} s_{345} c_{6} + d_{3} s_{6}\right) \\ J_{27} &= \left(\left(s_{1} c_{2} c_{345} - s_{345} c_{1}\right) c_{6} + s_{1} s_{2} s_{6}\right) \left(a_{3} s_{45} + a_{4} s_{5} + d_{1} s_{2} s_{345} - d_{2} c_{345} + d_{6}\right) \\ &\phantom{=}- \left(s_{1} s_{345} c_{2} + c_{1} c_{345}\right) \left(a_{3} c_{6} c_{45} + a_{4} c_{5} c_{6} + d_{1} s_{2} c_{6} c_{345} - d_{1} s_{6} c_{2} + d_{2} s_{345} c_{6} + d_{3} s_{6}\right) \\ J_{37} &= \left(s_{2} c_{6} c_{345} - s_{6} c_{2}\right) \left(a_{3} s_{45} + a_{4} s_{5} + d_{1} s_{2} s_{345} - d_{2} c_{345} + d_{6}\right) - \\ &\phantom{=}\left(a_{3} c_{6} c_{45} + a_{4} c_{5} c_{6} + d_{1} s_{2} c_{6} c_{345} - d_{1} s_{6} c_{2} + d_{2} s_{345} c_{6} + d_{3} s_{6}\right) s_{2} s_{345} \\ J_{47} &= s_{1} s_{6} s_{345} - s_{2} c_{1} c_{6} + s_{6} c_{1} c_{2} c_{345} \\ J_{57} &= - s_{1} s_{2} c_{6} + s_{1} s_{6} c_{2} c_{345} - s_{6} s_{345} c_{1} \\ J_{67} &= s_{2} s_{6} c_{345} + c_{2} c_{6} \\ \end{align*} \clearpage \section{Conclusions} In this report we have completed a kinematic analysis of the Space Station Robotic Manipulator with a locked shoulder roll joint. Transformation matrices were found using both Denavit-Hartenberg parameters and shape matrices. The direct kinematic solution was identified using the seven transformation matrices. The inverse kinematic solution found by Xu et al. was verified and several inaccuracies were fixed~\cite{xu2014analytical}. Using the resulting inverse kinematic solution along with three configuration indicators, a Python program was written to generate eight solutions for an example pose, matching the solution found by Xu et al. 
A differential kinematic analysis was conducted, resulting in two Jacobians. The kinematic Jacobian, $J_K$, was generated using $\hat{z}$ and $\vec{r}$ derived from the Denavit-Hartenberg matrices. A geometric Jacobian, $J_G$, was calculated using a different approach involving derivative matrices. A symbolic Python program was written to calculate and compare the numerical efficiency of these two Jacobians. $J_K$ required a total of 4833 operations per calculation, or
\begin{verbatim}
269*ADD + 1210*COS + 1754*MUL + 100*NEG + 1106*SIN + 394*SUB
\end{verbatim}
while $J_G$ required a total of 1138 operations per calculation, or
\begin{verbatim}
207*ADD + 225*COS + 351*MUL + 33*NEG + 4*POW + 207*SIN + 111*SUB
\end{verbatim}
Based on this simple analysis, it appears that the geometric Jacobian technique produced a more efficient Jacobian in this case.

% \nocite{*}
\bibliography{bib}
\bibliographystyle{plain}

\clearpage
\appendix
\section{Appendix: Code}
To confirm that the algorithms written here were correct, I used the Stanford Arm examples in class. I verified that the results produced with the Stanford Arm D-H parameters were correct according to the results presented in lecture\footnote{Note that the Stanford Arm D-H parameters presented on 3/6 are different than those presented on 3/13. On 3/6 the Stanford Arm had $d_1=0$.}.
\lstinputlisting[language=Python]{analysis.py}
\end{document}
{ "alphanum_fraction": 0.5974317383, "avg_line_length": 50.2581521739, "ext": "tex", "hexsha": "9c2512ccb62ef7f6e93a4b8989605be5f71048f5", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "a35b4c7ac63def67910666708fcf4abb089ea142", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "karasinski/MAE-225", "max_forks_repo_path": "Project/report.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "a35b4c7ac63def67910666708fcf4abb089ea142", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "karasinski/MAE-225", "max_issues_repo_path": "Project/report.tex", "max_line_length": 301, "max_stars_count": 2, "max_stars_repo_head_hexsha": "a35b4c7ac63def67910666708fcf4abb089ea142", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "karasinski/MAE-225", "max_stars_repo_path": "Project/report.tex", "max_stars_repo_stars_event_max_datetime": "2020-10-07T22:14:45.000Z", "max_stars_repo_stars_event_min_datetime": "2018-01-25T01:24:46.000Z", "num_tokens": 15779, "size": 36990 }
\clearpage\pagenumbering{arabic}
\chapter{Overview}\label{Overview}
Coast is a platform for developing and deploying World Wide Web applications. It provides an extensible framework, reusable components, and a configuration mechanism that is used to create web applications. For deployment, it provides an efficient and scalable server and additional flexible communication infrastructure.\\
\\
In this chapter we give an overview of the environment and the challenges posed by server programming and web application programming.

\section{WWW Environment}
The World Wide Web is based on the Hypertext Transfer Protocol (HTTP). It is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless protocol that can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.

\begin{figure}[hbt]
\centering
\includegraphics[width=0.6\hsize]{chap01/request_reply_protocol}
\caption{Request reply protocol.}
\label{fig:request_reply_protocol}
\end{figure}

An important type of content transferred is HTML. It is the lingua franca for publishing hypertext on the World Wide Web. It is a non-proprietary format based upon SGML, and can be created and processed by a wide range of tools, from simple plain-text editors (where you type it in from scratch) to sophisticated WYSIWYG authoring tools. HTML uses tags such as \texttt{<h1>} and \texttt{</h1>} to structure text into headings, paragraphs, lists, hypertext links, etc.

\section{Web Applications}
Very soon, people started to publish not only static files but also dynamically generated content using several extension technologies (CGI\footnote{CGI: Common Gateway Interface, standard way of extending web server functionality with dynamic behavior.}, ISAPI\footnote{ISAPI: Internet Server API, API definition for in-process extensions of web servers.}, NSAPI\footnote{NSAPI: Netscape Server API, Netscape's flavour of extension API.}). WebApplications proved to be an effective vehicle for deploying information and functionality, tying together existing IS worlds with a universal UI tool, the web browser. But as the Web evolved from its origins as a simple delivery mechanism — suitable for small files and simple applications — to a platform for complex applications, Web servers and the stateless HTTP proved very limiting. New, robust, scalable architectures and a new class of servers were required to support a high-volume, distributed, and heterogeneous application environment.\\
\\
To overcome the limitations of HTTP-based page serving, a WebApplication Server has to implement additional features.

\subsection{Enabling web access means integration}
Implementing WebApplications means accessing data sources and legacy applications, since it is neither possible nor useful to re-implement all existing database systems and applications at once to enable web access. Enabling web services mostly requires the integration of several databases or existing systems. The mix of services a WebApplication consists of, as well as their implementation, can change quickly and often. A vast range of technologies exists to access the functionality of existing systems.
Needless to say, all of them have bugs and problems now and then.\\
\\
Essentially, two problems exist:
\begin{itemize}
\item Knowing the details of the system-level API and using it correctly.
\item Finding and correcting problems in case of failures.
\end{itemize}
A programmer should not have to care about the details of system-level integration.\\
\\
It should be possible to integrate and manipulate backend systems without impact on the logic part of the application on a syntactic level.

\subsection{Session Management}
HTTP is a stateless protocol: all information needed to produce a reply is contained in the request. When implementing a WebApplication, this is either not feasible or inconvenient for the user. To build sophisticated and stable interactions into web applications, we need to keep state. Saving all state in the request implies overhead and is limited by most browsers and intermediate proxy servers. Losing the state, e.g. because of size limitations, produces unexpected results. Although web servers have already defined authentication and authorization schemas based on Access Control Lists (ACL) or the Lightweight Directory Access Protocol (LDAP), these are mostly too coarse-grained or inflexible for WebApplications. Calculating the user's authentication and authorization information on every request is inefficient and puts unnecessary load on machines. Storing authentication and authorization information between requests in a session store makes these checks more efficient and enables better security, since authentication credentials need not be included in every request. Sometimes there is also a need to load a lot of data to initialize the application context on a per-user basis. It is faster and more efficient to do this once and keep it than to do it on every request. Session information keeps the state needed to operate the WebApplication in meaningful ways on a per-user basis. Session information also has to be released at some point. Since no strict interaction pattern can be enforced on the web, this is best handled by a timeout mechanism. Using a session after it has been freed, e.g. via a bookmark, requires re-establishing the same session state as when the bookmark information was generated. Re-establishing lost information can be used to automatically enforce re-authentication rules for secure parts of a WebApplication.

\subsection{Role-based access control}
Navigation usually depends on the credentials (e.g. authentication and authorization) of a user. Navigation paths are grouped into categories (e.g. public, customer, administration, etc.). By using a specific category, the user assumes a role in the system, e.g. he uses the system as a guest. This means he has the access rights a guest has. If the pages a guest can access are not restricted, everybody can access those pages and we do not even need to care about the authentication of the user. As soon as the user wants to access a restricted area, we need to authenticate and authorize the user. This can be done in any way you like and, if successful, the user assumes the new role needed to visit the restricted area. Depending on the web application's type, the same user can assume several roles.\\
\\
The access control takes place on two levels:
\begin{itemize}
\item The role level checks whether the session has the correct role with regard to the requested service.
\item The intra-role level checks whether the session has the right to access this service, assuming the role-level check succeeded. This allows for fine-granular service authorization schemes.
\end{itemize}
Services are only executed if both of those checks succeed. In the failure case, appropriate action takes place, be it a request for re-authentication or sending back an error message instead of a normal service reply.

\subsection{Dynamic content expansion}
Web applications are based on serving HTML pages. Instead of serving static content or simple dynamic content (like the time of day or the number of accesses), these pages are highly dynamic and change frequently. Most web application servers have some sort of dynamic page rendering using HTML templates and a macro expansion mechanism. The rendering of a page takes place in several steps. First, the overall template has to be evaluated; then all the dynamic content macros are expanded. Execution of the macros accesses data that is provided by backend systems. What if those systems fail to deliver the data as expected? Ideally, it is possible to separate the layout of an application from dynamically generated content. It is even possible to change the layout and structure of the whole application based on the state of the data provided. So if backend systems fail, we can respond with a completely different page than when they are up and running.

\section{COAST Concepts}
COAST has a tool set to implement a wide range of applications, but it is geared toward the implementation of TCP-based request-reply servers implementing a web application. This kind of server is, on the one hand, an efficient web server that serves static page content like images and help pages and, on the other hand, an application that generates HTML output dynamically using the available backend data sources. In the following sections we shed light on the concepts implemented in the COAST environment.

\subsection{General Server issues}
This section describes the challenges posed by implementing server software. Apart from functional requirements, a server always has to strive to fulfill technical requirements like high availability, fast response time, high throughput, and ease of administration. Ideally, a server never crashes, never leaks, and is able to maintain a certain level of response time at peak loads.

\subsubsection{Servers and Request processing}
A server is a running process providing a set of services. It is known by a DNS name (implying an IP address) and a well-known port. Clients requesting service connect to the server and establish a TCP connection. Over this connection data is sent and received. A server should not be blocked while request processing is in progress. There exist two basic approaches to achieving this goal. Several processes can be spawned that act as servers (multiprocessing). This solution is stable but produces overhead when information has to be shared between the processes. The other solution spawns threads (multithreading) in one process, which is dangerous in the case of errors, but more efficient with regard to information sharing. COAST uses multithreading in one process.

\subsubsection{Services and Service Dispatching}
A request is always associated with a service. A service groups requests by any criteria useful for the purpose of the application (e.g. all publicly accessible local files, or all requests getting their input from database xy). Services read input and send replies. However, to find out which service to use, we already have to read some of the supplied input. We call this service dispatching. It is done by a specialised component that selects the service. Service dispatching can be done using any suitable attributes present in the request (e.g. Port and Address, Source Address, or URI Prefix), as the following sketch illustrates.
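For illustration only, the following Python-style sketch shows one way a dispatcher could select a service by URI prefix. The names are hypothetical and do not correspond to COAST's actual C++ interfaces.
\begin{verbatim}
class ServiceDispatcher:
    def __init__(self):
        self.services = {}                 # URI prefix -> service handler

    def register(self, prefix, service):
        self.services[prefix] = service

    def dispatch(self, request):
        # Prefer the most specific (longest) matching prefix.
        for prefix in sorted(self.services, key=len, reverse=True):
            if request["uri"].startswith(prefix):
                return self.services[prefix]
        return self.services.get("/")      # fall back to a default service
\end{verbatim}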
\subsubsection{Programming in an MT-Server}
Programming in a multi-threaded server allows for efficient sharing of data. Simply assign a value to a variable seen by more than one thread, and it is usable by all threads that have access to it. But this is only half of the story. On the other hand, you have to make sure that under no circumstances are shared values accessed unprotected if they can be changed by any side. And when you start locking code to achieve this, all the nasty things called deadlock, live-lock, and starvation can show up and threaten the quality of the server. Multi-threading is difficult and error-prone to program. The code has to be correct in a concurrent world. Therefore, an important goal of the architecture of COAST is to relieve the application programmer of the MT burden as far as possible. A lot of shared objects hold read-only data. Every request handled has a context object that handles access to read/write stores. The context object is a local variable of the executing thread, so it needs no locking. Essentially, only session information is shared and read/write. For this reason, all accesses to session information and the management of sessions have to be carefully locked. But any extension that does not follow the predefined path has to consider concurrency aspects carefully.

\subsection{Web application issues}
Web applications refine and add to the requirements we have for a server application. The protocol used is HTTP; therefore, a web application lives in a request-reply world. A web application can be seen as a sequence of pages visited by a user. Each page defines some follow-up pages with links. There are fast entry points in the form of bookmarks, and forward and backward chains in the form of the back and forward buttons in the browser. At each point a user is allowed to visit some follow-up pages based on his authentication credentials. With this prerequisite we are able to define an enhanced model of request processing.

\subsubsection{Request handling cycle}
A WebApplication service predefines several steps of processing. Each step has its own goal and different objects involved.
\begin{figure}[hbt] \centering \begin{tikzpicture} \colorlet{good}{green!75!black} \colorlet{bad}{red} \colorlet{neutral}{black!60} \colorlet{none}{white} \draw node[text centered, text width=3cm]{WebApplication Service}; \begin{scope}[line width=4mm,rotate=270] \draw[ good, decoration={markings, mark=at position 1 with {\arrow{stealth}}}, postaction={decorate} ] (-123:2cm) arc (-123:-101:2cm); \draw[good!60!white] (-36:2cm) arc (-36:-101:2cm); \draw[neutral] (-36:2cm) arc (-36:36:2cm); \draw[bad!60!white] (36:2cm) arc (36:93:2cm); \newcount\mycount \foreach \angle in {0,72,...,3599} { \mycount=\angle\relax \divide\mycount by 10\relax \draw[black!15,thick] (\the\mycount:18mm) -- (\the\mycount:22mm); } \draw (0:2.2cm) node[below] {``ok'': 10 (20\%)}; \draw (165:2.2cm) node[above] {none: 20 (40\%)}; \draw (-111:2.2cm) node[left] {``very good'': 3 (6\%)}; \draw (-68:2.2cm) node[left] {``good'': 9 (18\%)}; \draw (65:2.2cm) node[right] {``bad'': 8 (16\%)}; \draw (93:2.2cm) node[right] {``very bad'': 0 (0\%)}; \end{scope} \draw[gray] (0,0) circle (2.2cm) circle (1.8cm); \end{tikzpicture} \caption{WebApplication‘s request processing cycle.} \label{fig:webservicerequestcycle} \end{figure} \begin{figure}[hbt] \centering \begin{tikzpicture} \colorlet{good}{green!75!black} \colorlet{bad}{red} \colorlet{neutral}{black!60} \colorlet{none}{white} \draw node[text centered, text width=3cm]{WebApplication Service}; \begin{scope}[line width=4mm,rotate=90] \draw [ good!60!white, decoration={markings, mark=at position 1 with { \node [single arrow,fill=good!60!white, inner sep=0pt, transform shape] {};}}, postaction={decorate} ] { (-45:2cm) arc (-45:-90:2cm) }; \draw [ good, decoration={markings, mark=at position 1 with { \node [single arrow,fill=good, inner sep=0pt, transform shape] {};}}, postaction={decorate} ] { (0:2cm) arc (0:-45:2cm) }; \draw decorate [ decoration={text along path,text={this is getting silly}}, ] { (0:2cm) arc (0:-45:2cm) }; \end{scope} \end{tikzpicture} \caption{WebApplication‘s request processing cycle.} \label{fig:webservicerequestcycleX} \end{figure} Analyze request and select the service: The incoming request is parsed as far as necessary to select the service (e.g. the HTTP-Header). WebDisplay Web application requests contain four standard elements of information: session id, role, page and action. They are all used in the WebDisplay request processing cycle. The Web application service is session based, therefore we select also the session defined by the request’s sessionid or create a new one. 
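These steps can be summarized in a schematic sketch. This is plain Python pseudocode with invented names and data, not COAST's actual C++ API; it only illustrates the order of the session, role, and rendering steps described in this chapter.
\begin{verbatim}
import time, uuid

SESSIONS = {}                              # session id -> session state
ALLOWED = {"guest": {"Home", "Login"}, "customer": {"Home", "Account"}}

def handle_request(req):
    # 1. Analyze the request: session id, role, page and action are expected.
    sid = req.get("session_id")

    # 2. Select the session named in the request, or create a new one.
    if sid not in SESSIONS:
        sid = str(uuid.uuid4())
        SESSIONS[sid] = {"role": "guest", "last": time.time()}
    session = SESSIONS[sid]
    session["last"] = time.time()          # keep the idle timeout alive

    # 3. Role-level check: may this role request this page at all?
    if req.get("page") not in ALLOWED.get(session["role"], set()):
        return {"session_id": sid, "page": "Login"}   # force re-authentication

    # 4. Execute the requested action and render the reply page.
    return {"session_id": sid, "page": req["page"], "body": "<html>...</html>"}
\end{verbatim}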
\begin{figure}[hbt] \centering \includegraphics[width=0.5\hsize]{chap01/Acceptor} \caption{Acceptor.} \label{fig:acceptor} \end{figure} \begin{figure}[hbt] \centering \includegraphics[width=0.9\hsize]{chap01/test} \caption{MscGen example.} \label{fig:mscgen_example} \end{figure} \begin{figure}[hbt] \centering %% Start of dot diagram \Dot[width=0.35\textwidth]{BuildDependencies} { margin="0 0 0 0"; rankdir=BT; node [shape=Mrecord]; prepare [label="prepare",fillcolor="/spectral11/3",style=filled]; compile [label="compile",fillcolor="/spectral11/4",style=filled]; compileT [label="compile-tests",fillcolor="/spectral11/1",style=filled]; runtest [label="run tests",fillcolor="/spectral11/5",style=filled]; runapp [label="run app",fillcolor="/spectral11/9",style=filled]; package [label="create pkg",fillcolor="/spectral11/6",style=filled]; deploy [fillcolor="/spectral11/7",style=filled]; edge [arrowhead=vee]; compile->prepare; compileT->compile; runapp->compile; runtest->compileT; package->runtest; deploy->package; }{digraph} %% End of dot diagram \caption{Graphviz - inline dot example.} \label{fig:graphviz_inline_sample} \end{figure}
{ "alphanum_fraction": 0.7762366808, "avg_line_length": 51.373088685, "ext": "tex", "hexsha": "4614e99a564a2bdb06bc3e1ffe7ee085616d2dc4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "37d933c5fe2e0ce9a801f51b2aa27c7a18098511", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "zer0infinity/CuteForCoast", "max_forks_repo_path": "manual/chapters/chap01.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "37d933c5fe2e0ce9a801f51b2aa27c7a18098511", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "zer0infinity/CuteForCoast", "max_issues_repo_path": "manual/chapters/chap01.tex", "max_line_length": 105, "max_stars_count": null, "max_stars_repo_head_hexsha": "37d933c5fe2e0ce9a801f51b2aa27c7a18098511", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "zer0infinity/CuteForCoast", "max_stars_repo_path": "manual/chapters/chap01.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3950, "size": 16799 }
% !TEX root = ../thesis.tex \section{The Dataflow Graph Design} \label{sec:vg:dataflow} \vspace{-10pt} Dataflow operators are instantiated and connected by the Reactive Vega \emph{parser}, which traverses a declarative specification containing definitions for input datasets, visual encoding rules, and interaction primitives as described in~\cref{sec:vg:lang}. When data tuples are observed, or when interaction events occur, they are propagated (or ``\emph{pulsed}'') through the graph with each operator being evaluated in turn. Propagation ends at the renderer. \vspace{-10pt} \subsection{Data, Interaction, and Scene Graph Operators} \vspace{-7pt} Reactive Vega's dataflow operators fall into one of three categories: input data processing, interaction handling, or scene graph construction. \vspace{-10pt} \subsubsection{Processing Input Data} \vspace{-10pt} Reactive Vega parses each dataset definition and constructs a corresponding branch in the dataflow graph. These branches comprise input and output nodes connected by a pipeline of data transformation operators. Input nodes receive raw tuples as a linear stream (tree and graph structures are supported via parent-child or neighbor pointers, respectively). Upon data source updates, tuples are flagged as either \emph{added}, \emph{modified}, or \emph{removed}, and each tuple is given a unique identifier. Data transformation operators use this metadata to perform targeted computation and, in the process, may derive new tuples from existing ones. Derived tuples retain access to their ``parent'' via prototypal inheritance\,---\,operators need not propagate unrelated upstream changes. Some operators require additional inspection of tuple state. Consider an aggregate operator that calculates running statistics over a dataset (e.g., mean and variance). When the operator observes added or removed tuples, the statistics can be updated based on the current tuple values. With modified tuples, the previous value must be subtracted from the calculation and the new value added. Correspondingly, tuples include a \texttt{previous} property. Writes to a tuple attribute are done through a setter function that copies current values to the \texttt{previous} object. \subsubsection{Handling Interaction} \vspace{-10pt} Reactive Vega instantiates an event listener node for each low-level event type required by the visualization (e.g., \texttt{mousedown} or \texttt{touchstart}). These nodes are directly connected to dependent signals as specified by event selectors~\cite{reactive-vega-model}. In the case of ordered selectors (e.g., ``drag'' events specified by \texttt{[mousedown, mouseup] > mousemove}), each constituent event is connected to an automatically created anonymous signal; an additional anonymous signal connects them to serve as a gatekeeper, and only propagates the final signal value when appropriate. Individual signals can be dependent on multiple event nodes and/or other signals, and value propagation follows E-FRP's two-phase update (\secref{sec:propagation}). \vspace{-10pt} \subsubsection{Constructing the Scene Graph} \vspace{-10pt} To construct the scene graph, Reactive Vega follows a process akin to the Protovis bind-build-evaluate pipeline~\cite{heer:protovisjava}. When a declarative specification is parsed, Reactive Vega traverses the mark hierarchy to \emph{bind} property definitions: property sets are compiled into encoding functions and stored with the specification. 
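As a rough illustration of this idea, a compiled encoding function might look like the following sketch. This is illustrative Python with invented names, not Reactive Vega's actual (JavaScript) implementation.
\begin{verbatim}
def compile_encoder(property_set):
    # property_set maps visual channels to {"field": ...} or {"value": ...},
    # optionally naming a scale to pass the raw value through.
    def encode(tup, scales):
        out = {}
        for channel, spec in property_set.items():
            raw = tup[spec["field"]] if "field" in spec else spec["value"]
            scale = scales.get(spec.get("scale"))
            out[channel] = scale(raw) if scale else raw
        return out
    return encode

# A hypothetical property set for a bar mark.
encode_bar = compile_encoder({
    "x":    {"scale": "xs", "field": "category"},
    "y":    {"scale": "ys", "field": "amount"},
    "fill": {"value": "steelblue"},
})
\end{verbatim}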
At run-time, \emph{build} and \emph{evaluate} operators are created for each bound mark. The build operator performs a data join~\cite{bostock:d3} to generate one scene graph element (or ``mark'') per tuple in the backing dataset, and the evaluate operator runs the appropriate encoding functions. A downstream \emph{bounds} operator calculates the bounding boxes of generated marks. For a nested scene graph to be rendered correctly, the order of operations is critical: parent marks must be built and encoded before their children, but the bounds of the children must be calculated before their parents. The resultant scene graph, as seen in \cref{fig:vg:groupedBar}, exhibits an alternating structure, with individual mark elements grouped under a sentinel mark specification node. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{groupedBar} \caption{A grouped bar chart (top), with the underlying scene graph (bottom), and corresponding portion of the dataflow graph (right).} \label{fig:vg:groupedBar} \end{figure} Scene graph elements are also modeled as data tuples and can serve as the input data for downstream visual encoding primitives. This establishes a \emph{reactive geometry} that accelerates common layout tasks, such as label positioning, and expands the expressiveness of the specification language. As marks can be run through subsequent data transformations, higher-level layout algorithms (e.g., those that require a pre-computed initial layout~\cite{flexbox}) are now supported in a fully declarative fashion. \subsection{Changesets and Materialization} \vspace{-7pt} All data does not flow through the system at all times. Instead, operators receive and transmit \emph{changesets}. A changeset consists of observed tuples, new signal values, and updates to other dependencies that have transpired since the last render event. The propagation of a changeset begins in response to streaming tuples or user interaction. The corresponding input node creates a fresh changeset, and populates it with the detected update. As the changeset flows through the graph, operators use it to perform targeted recomputation, and may augment it in a variety of ways. For example, a \texttt{Filter} operator might remove tuples from a changeset if they do not meet the filter predicate, or may mark modified tuples as \texttt{added} if they previously had been filtered. A Cartesian product operator, on the other hand, replaces incoming tuples with the cross-product with another data stream. Some operators, however, require a complete dataset. For example, a windowed-join requires access to all tuples in the current windows of the joined data sources. For such scenarios, special \emph{collector} operators (akin to \emph{views}~\cite{abadi:aurora} or \emph{synopses}~\cite{arasu:stream} in streaming databases) materialize the data currently in a branch. In order to mitigate the associated time and memory expenses, Reactive Vega automatically shares collectors between dependent operators. Upon instantiation, such operators must be annotated as requiring a collector; at run-time they can then request a complete dataset from the scheduler. Finally, if animated transitions are specified, a changeset contains an interpolation queue to which mark evaluators add generated mark instances; the interpolators are then run when the changeset is evaluated by the renderer. 
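To make the earlier \texttt{Filter} example concrete, the following sketch shows how an operator can use the added/modified/removed metadata of a changeset to perform targeted recomputation. The field names and classes are assumptions for illustration, not Reactive Vega's actual data structures.
\begin{verbatim}
class ChangeSet(object):
    def __init__(self):
        self.add, self.mod, self.rem = [], [], []
        self.signals = {}                  # updated signal values, if any

def filter_operator(predicate, passed_ids, pulse):
    # passed_ids records which tuple ids currently satisfy the predicate.
    out = ChangeSet()
    out.signals = pulse.signals
    for t in pulse.add:
        if predicate(t):
            passed_ids.add(t["_id"])
            out.add.append(t)
    for t in pulse.mod:
        now, was = predicate(t), t["_id"] in passed_ids
        if now and was:
            out.mod.append(t)              # still passes: forward as modified
        elif now:
            passed_ids.add(t["_id"])
            out.add.append(t)              # newly passes: forward as added
        elif was:
            passed_ids.discard(t["_id"])
            out.rem.append(t)              # no longer passes: forward as removed
    for t in pulse.rem:
        if t["_id"] in passed_ids:
            passed_ids.discard(t["_id"])
            out.rem.append(t)
    return out
\end{verbatim}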
\vspace{-10pt} \subsection{Coordinating Changeset Propagation} \label{sec:propagation} \vspace{-7pt} A centralized dataflow graph scheduler is responsible for dispatching changesets to appropriate operators. The scheduler ensures that changeset propagation occurs in topological order so that an operator is only evaluated after all of its dependencies are up-to-date. This schedule prevents wasteful intermediary computation or momentary inconsistencies, known as \emph{glitches}~\cite{cooper:embedding}. Centralizing this responsibility, rather than delegating it to operators, enables more aggressive pruning of unnecessary computation as described in~\secref{sec:pruning}. The scheduler has access to the full graph structure and, thus, more insight into the state of individual operators and propagation progress. When an interaction event occurs, however, an initial non-topological update of signals is performed. Dependent signals are reevaluated according to their specification order. As a result, signals may use prior computed values of their dependencies, which will subsequently be updated. This process mimics E-FRP's two-phase update~\cite{wan:efrp}, and is necessary to enable expressive signal composition. Once all necessary signals have been reevaluated, a changeset with the new signal values is propagated to the rest of the dataflow graph. \subsection{Pushing Internal and Pulling External Changesets} \vspace{-7pt} Two types of edges connect operators in the dataflow graph. The first connects operators that work with the same data; for example a pipeline of data transformation operators for the same data source, or a mark's build and evaluate operators. Changesets are pushed along these edges, and operators use and augment them directly. The second type of edge connects operators with external dependencies (e.g., other data sources, signals, and scale functions). As these edges connect disparate data spaces, they cannot directly connect operators with their dependencies. To do so would result in operators performing computation over mismatched data types. Instead, external dependencies are connected to their dependents' nearest upstream \texttt{Collector} node, and changesets that flow along these edges are flagged as \emph{reflow changesets}. When a \texttt{Collector} receives a reflow changeset, it propagates its tuples forward, flagging them as modified. The dependents now receive correct input data. Reflow changes also flow along edges that connect signals to other signals. However, as these edges exist in scalar data space, \texttt{Collector} nodes are not needed. This hybrid push/pull system enables a complex web of interdependent operators while reducing the complexity of individual elements. For example, regardless of whether a signal parameterizes data transforms or visual encoding primitives, it simply needs to output a reflow changeset. Without such a system in place, the signal would instead have to construct a different changeset for each dependency edge it was a part of, and determine the correct dataset to supply. \Cref{fig:vg:teaser,fig:vg:groupedBar,fig:vg:scenegraph} use filled and unfilled arrows for internal and external connections, respectively. \vspace{-10pt} \subsection{Dynamically Restructuring the Graph} \vspace{-7pt} To support streaming nested data structures, operators can dynamically restructure the graph at runtime by extending new branches, or pruning existing ones, based on observed data. 
These dataflow branches model their corresponding hierarchies as standard relations, thereby enabling subsequent operators to remain agnostic to higher-level structure. For example, a \texttt{Facet} operator partitions tuples by key fields; each partition then propagates down a unique, dynamically-constructed dataflow branch, which can include other operators such as \texttt{Filter} or \texttt{Sort}. In order to maintain interactive performance, new branches are queued for evaluation as part of the same cycle they were created in. To ensure changeset propagation continues to occur in topological order, operators are given a \emph{rank} upon instantiation to uniquely identify their place in the ordering. When new edges are added to the dataflow graph, the ranks are updated such that an operator's rank is always greater than those of its dependencies. When the scheduler queues operators for propagation, it also stores the ranks it observes. Before propagating a changeset to an operator, the scheduler compares the operator's current rank to the stored rank. If the ranks match, the operator is evaluated; if the ranks do not match, the graph was restructured and the scheduler requeues the operator. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{scenegraph} \caption{Dataflow operators responsible for scene graph construction are dynamically instantiated at run-time, a process that results in \cref{fig:vg:groupedBar}. (a) At compile-time, a branch corresponding to the root scene graph node is instantiated. (b-c) As the changeset (blue) propagates through nodes, group-mark builders instantiate builders for their children. Parent and child builders are temporarily connected (dotted lines) to ensure children are built in the same cycle. (d-e) When the changeset propagates to the children, the temporary connection is replaced with a connection to the mark's backing data source (also blue).} \label{fig:vg:scenegraph} \end{figure} Scene graph operators are the most common source of graph restructuring, as building a nested scene graph is entirely data-driven. Dataflow branches for child marks (consisting of build-evaluate-bound chains) cannot be instantiated until the parent mark instances have been generated. As a result, only a single branch, corresponding to the root node of the scene graph, is constructed at compile-time. As data streams through the graph, or as interaction events occur, additional branches are created to build and encode corresponding nested marks. To ensure their marks are rendered in the same propagation cycle, new branches are temporarily connected to their parents. These connections are subsequently removed so that children marks will only be rebuilt and re-encoded when their backing data source updates. \Cref{fig:vg:scenegraph} provides a step-by-step illustration of how scene graph operators are constructed during a propagation cycle for the grouped bar chart in \cref{fig:vg:groupedBar}.
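For intuition, the rank-based scheduling described above can be sketched as follows. This is illustrative Python, with \texttt{rank}, \texttt{evaluate}, and \texttt{dependents} standing in for scheduler internals; it is not Reactive Vega's actual implementation.
\begin{verbatim}
import heapq, itertools

def propagate(source, changeset, rank, evaluate, dependents):
    # rank(op) returns an operator's current topological rank, evaluate(op, pulse)
    # returns its outgoing changeset, dependents(op) lists downstream operators.
    counter = itertools.count()            # tie-breaker for equal ranks
    queue = [(rank(source), next(counter), source, changeset)]
    while queue:
        r, _, op, pulse = heapq.heappop(queue)
        if r != rank(op):                  # graph was restructured: requeue
            heapq.heappush(queue, (rank(op), next(counter), op, pulse))
            continue
        out = evaluate(op, pulse)          # targeted recomputation
        for nxt in dependents(op):
            heapq.heappush(queue, (rank(nxt), next(counter), nxt, out))
\end{verbatim}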
{ "alphanum_fraction": 0.8117445157, "avg_line_length": 53.8232931727, "ext": "tex", "hexsha": "dea92e7d7438d8f5777fc76121415cd18258de00", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403", "max_forks_repo_licenses": [ "CC-BY-3.0" ], "max_forks_repo_name": "arvind/thesis", "max_forks_repo_path": "vega-arch/dataflow.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-3.0" ], "max_issues_repo_name": "arvind/thesis", "max_issues_repo_path": "vega-arch/dataflow.tex", "max_line_length": 80, "max_stars_count": 5, "max_stars_repo_head_hexsha": "c79821f947743769bcdd130bb96a488372dfc403", "max_stars_repo_licenses": [ "CC-BY-3.0" ], "max_stars_repo_name": "arvind/thesis", "max_stars_repo_path": "vega-arch/dataflow.tex", "max_stars_repo_stars_event_max_datetime": "2019-02-22T00:18:21.000Z", "max_stars_repo_stars_event_min_datetime": "2017-08-26T08:44:57.000Z", "num_tokens": 2933, "size": 13402 }
The formal design of algorithmic assurances is still an emerging field. Consequently, there are many opportunities for further research along different lines. This section outlines some promising directions for future work.

\vspace{-0.15 in}
\subsection{Properties of Assurances}
\label{sec:assurance_props_future}
Figure~\ref{fig:refined_trust} gives some hints about how designers might be able to fully characterize the properties of AIA assurances. In this survey we investigate, in some detail, `Level of Integration'. However, all of the other grayed-out boxes in Figure~\ref{fig:refined_trust} have open questions that should still be investigated.

\input{ass_st.tex}
\input{ass_cc.tex}
\input{ass_ei.tex}
\input{ass_tt.tex}

\subsection{Trust and Distrust}
The treatment of assurances in this survey is based, in part, on a model of interpersonal trust. For completeness, it will be important to further investigate \textit{distrust}, as reviewed and discussed by \citet{Lewicki1998-ox}, and formalized in \citet{McKnight2001-gz}. Low trust is not the same as distrust, and low distrust is not the same as trust. \citet{McKnight2001-gz} suggest that `the emotional intensity of distrust distinguishes it from trust', and they explain that distrust comes from emotions like wariness, caution, and fear -- whereas trust stems from emotions like hope, safety, and confidence. Trust and distrust are orthogonal elements that define a person's TRB towards a trustee. Since distrust was not considered here, it is not clear to what extent the human-AIA trust model remains effective in the presence of user wariness, caution, or fear. Questions for future work include: to what extent can behaviors driven by distrust be isolated from those originating from trust? How can those behaviors be detected to begin with? And in what circumstances is the extra effort necessary?

\subsection{Human Limitations}
Dealing with human users requires consideration of their cognitive limitations. For instance, cognitive biases known as `framing effects' (reacting to the same choice in different ways depending on how it is presented) will be important to consider when designing usable AIAs that must make decisions under uncertainty~\cite{Freedy2007-sg,Riley1996-qm}. The existence of framing effects is not surprising to those familiar with cognitive science, but they will likely be unanticipated phenomena for many AIA system designers. Other related cognitive biases and limitations, such as `recency effects' (being biased in making choices based on recent experience), `focusing effects' (being biased in choice selection based on a single aspect of a correlated event), or `normalcy biases' (failure to consider situations that have never occurred before), are also important to consider. Besides cognitive biases, humans are also limited in their ability to understand certain kinds of information. Communities that investigate how probabilistic and statistical explanations can be presented to humans will have many insights that are relevant for AIA designers and assurance design \cite{Rouse1986-dz,Wallace2001-fm,Kuhn1997-qc,Lomas2012-ie,Swartout1983-ko}. But it is not immediately clear which methods are most appropriate for application in assurance design, or how they might be applied. For instance, can the AIA detect when cognitive limitations are affecting TRBs? What other user limitations need to be characterized?

\input{perception_mediums.tex}
\input{obs_effects.tex}
{ "alphanum_fraction": 0.814624393, "avg_line_length": 134.6538461538, "ext": "tex", "hexsha": "552bd27a002f496a7f98977b90c87f9ba06c3602", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4411dda583ba35cb688105c274f40d329548ec94", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "bisraelsen/OnAssurances", "max_forks_repo_path": "future_work.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4411dda583ba35cb688105c274f40d329548ec94", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "bisraelsen/OnAssurances", "max_issues_repo_path": "future_work.tex", "max_line_length": 1109, "max_stars_count": null, "max_stars_repo_head_hexsha": "4411dda583ba35cb688105c274f40d329548ec94", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "bisraelsen/OnAssurances", "max_stars_repo_path": "future_work.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 746, "size": 3501 }
\chapter*{Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgments}

Humans cannot live on work alone, and without a good outlet for frustration and pent-up energy I would probably be on a lot more governmental watchlists. Credit must first be given to Matt Bershady; without your expert guidance and truly shocking (dangerous?) willingness to listen to what I had to say, none of this would have been possible. You always pushed me just past my comfort zone and told me \emph{what} to do, not \emph{how} to do it (until I did it wrong a few times). The past six years rank extremely high in the epochs of my development as a legit scientist, and I expect them to remain there for a long time.

Beneath the Ivory Tower, in the beautiful slums of grad school, the passage of time has been eased immeasurably by many of my fellow travelers. To Andrew and Corey, thanks for being great research partners. To Danielle and Britt, your feline obsession did little to temper the pleasure of spending my first 2 years in your company. To the Atom: Anna, Chris, Jenna, and John; thanks for giving me a couch that I could use to get away from the cat talk. That and a lot of snacks and scientific insight. Special mention is gladly given to Jenna, who unwittingly accompanied me through the last 10 years. From throwing lemons at East to biking to Michael's; it's been real. To Diego, DK, Julie, and Max, the Young Blood that keeps me energized, thanks for reducing the thickness of my jade layer with your youthful enthusiasm and adorable 1st-year problems. Me g\`usta. To Claire and Elijah; sometimes a trip down the hall was all I needed, and you tolerated my interruptions with grace and good humor.

It would be impossible to overstate the importance of my family to my apparent success. Unwavering support, unconditional love, and copious amounts of Night Train are all crucial ingredients in the foundation of this work. To Mary, you have been with me through the thickest and the thinnest, and for that I will be forever grateful. You drive me to be the person you think I am and are always willing to forgive my failings. To Anna, the best student I ever had, your steadfast determination to move forward despite fear and uncertainty is a continuing inspiration.

Thank you so much to The Begowatts, members past and present; you gave me an outlet for the squishier parts of my brain and something to do when I just wanted to hit something. A very special thanks to the City Bar for being a refuge from the storm. Kevin and MJ, you are the best bartenders I've ever had the privilege to buy beer from; thanks for providing a welcoming environment and slinging some wicked suds. And finally, without music to dull the baser parts of my psyche, none of this would have happened. In the end, recognition must be given to The Boss for reminding me that being earnest can be cool, The Band for so much joy (and especially ``Ophelia''), NRW\footnote{\url{https://www.youtube.com/channel/UCAZ77vdqYbuGbCAWB62WbqQ}} for a great background, and B{\"O}C for being with me from the beginning.
{ "alphanum_fraction": 0.7918962723, "avg_line_length": 52.2881355932, "ext": "tex", "hexsha": "a6ee080145d47778fa8fa0935cf20ed767335e20", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "eigenbrot/eigenbrot-thesis", "max_forks_repo_path": "Acknowledgments.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "eigenbrot/eigenbrot-thesis", "max_issues_repo_path": "Acknowledgments.tex", "max_line_length": 76, "max_stars_count": null, "max_stars_repo_head_hexsha": "113dfb95996777e2b36785d7ee80a824a671ab09", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "eigenbrot/eigenbrot-thesis", "max_stars_repo_path": "Acknowledgments.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 750, "size": 3085 }
\section{Decimal and Binary Math}
The next few sections introduce some math ideas that may or may not be familiar to you. In order to really understand how computers work, you need to understand several math concepts that are not hard, but may be unfamiliar. The first concept is the \emph{exponent}.

\subsection*{Bases and Exponents}
Exponentiation is a mathematical operation, written as $b^n$, involving two numbers: the base, $b$, and the exponent, $n$. Exponentiation means ``repeated multiplication of the base.''

\noindent That is, $b^n$ is the product you get by multiplying $n$ copies of the base together:
\begin{equation*}
b^{n}=\underbrace{b\times \cdots \times b}_{n}
\end{equation*}
\noindent In this case, $b^n$ is called the ``$n$-th power of $b$'', or ``$b$ raised to the power $n$.'' So 3 to the 4\textsuperscript{th} power looks like this:
\begin{equation*}
3^4 ~=~ 3 \times 3 \times 3 \times 3 ~=~ 9 \times 9 ~=~ 81
\end{equation*}
Put another way: the exponent is the small number written at the upper right hand side of another number, and it tells you how many copies of that number to multiply together -- one copy, two copies, three copies, or more (that count is the $n$ in the description above). It is possible to use fractions as exponents, but we won't talk about that here. The base can be any number, though most people think most easily in base-10. So in base-10, 10 to the power 2, or 10 to the second power, means $10 \times 10$.

\subsection*{Order of Magnitude (Columns)}
When counting up from zero, by one, in base 10, you eventually get to 9. In order to count any higher, you must ``carry the one'' over to the next column and reset the counter in the ``ones'' column (the place that counts by one, from zero to nine). When you move from the ``\emph{ones}'' column to the ``\emph{tens}'' column, the next column to the left represents the next \emph{order of magnitude} of the base. ``Magnitude'' means ``size'', and in math it refers, specifically, to moving from counting by one number (the base) to counting by the base times itself -- first twice, then three times, then more. For base-10, you count from 0 to 9 in the right-most column, then from 10 to 90 in the next column to the left, then from 100 to 900 in the next column to the left, then from 1000 to 9000, and so on. Each time the counter gets full (reaches 9), you cannot represent any more of the quantity being counted without moving to the next-largest order of magnitude. That is, if your display only shows two digits, once you count past 99, you have no idea if the number is 0, or 100, or 200, or 900, or ten thousand, or 4 billion.

\newpage
\subsection*{Squares}
Shapes that are squares have four sides with the same length; that is, the length and width are the same. \emph{Squaring a number} is multiplying the number times itself, just like, in a square, both sides have the same length. The most common way you will see a ``squared number'' described is with a little 2 up above the number, like $2^2$, or $3^2$, or $10^2$. That smaller `2' is the exponent that we discussed above.
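For example, to square the number 7 you multiply it times itself: $7^2 = 7 \times 7 = 49$. The little 2 tells you that exactly two sevens get multiplied together.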
\medskip

Squaring the number 10 (that is, multiplying ten times itself) gives: $10 \times 10 = 100$

\bigskip

\begin{tabular}{l m{0.75in} l l }
\blockline{1}{0.5} & $1 \times 1 = $ & \blockline{1}{0.5} & $=1^2$ \\ \\
\blockline{2}{0.5} & $2 \times 2 = $ & \makeplate{2}{1}{0.5} & $=2^2$ \\ \\
\blockline{3}{0.5} & $3 \times 3 = $ & \makeplate{3}{1}{0.5} & $=3^2$\\ \\
\blockline{4}{0.5} & $4 \times 4 = $ & \makeplate{4}{1}{0.5} & $=4^2$ \\ \\
\blockline{5}{0.5} & $5 \times 5 = $ & \makeplate{5}{1}{0.5} & $=5^2$ \\ \\
\blockline{6}{0.5} & $6 \times 6 = $ & \makeplate{6}{1}{0.5} & $=6^2$ \\
\end{tabular}

\newpage
\subsection*{Cubes}

A cube has six sides (faces), and every edge has the same length, in all \emph{three} dimensions. When cubing a number, you multiply the number times itself, and then times itself \emph{again}, because all three dimensions are the same (they have the same value). The most common way you will see a ``cubed number'' described is with a little 3 up above the number, like $2^3$, or $3^3$, or $10^3$. As above, the exponent tells you how many copies of the number you multiply together (here, that means three).

\medskip

Cubing the number 10 gives: $10 \times 10 \times 10 = 1000$

\bigskip

\begin{tabular}{m{1.1in} m{1.0in} m{1.45in} m{1.9in}}
\blockline{1}{0.5} & $1 \times 1 \times 1 = $ & \blockline{1}{0.5} & One times itself is one.\\ \\
\blockline{2}{0.5} & $2 \times 2 \times 2 = $ & \makeplate{2}{2}{0.5} & Two sets of four, or \newline$4+4$ (that is, $2^2 + 2^2$) \\ \\
\blockline{3}{0.5} & $3 \times 3 \times 3 = $ & \makeplate{3}{3}{0.5} & Three sets of nine, or \newline$9+9+9$\newline Put another way:\newline $9 \times 3 = 27$ \\ \\
\blockline{4}{0.5} & $4 \times 4 \times 4 = $ & \makeplate{4}{4}{0.5} & Cubes get big quickly---\newline Here's four sets of 16. \newline$16 \times 4 = 64$ \\ \\
\blockline{5}{0.5} & $5 \times 5 \times 5 = $ & \makeplate{5}{5}{0.5} & Five sets of 25. \newline$25 \times 5 = 125$ \\ \\
\end{tabular}

\newpage
\subsection*{Multipliers and Prefixes}

In systems of measurement, when talking about, say, weight, height, or pressure, there are base units and there are ways to refer to these base units in large multiples, or in tiny fractions. You're probably a little taller than one \emph{meter} in height. And it takes one hundred \emph{centimeters} to add up to one meter. The prefix ``centi-'' means it takes one hundred of \emph{these} to add up to one \emph{base unit}, which in this case is a meter.

The same goes for weight. You probably weigh between 20 and 40 \emph{kilograms}. The prefix ``kilo-'' means \emph{one thousand} of whatever the base unit is. Put another way, grams are pretty small amounts of weight, so measuring things like people or cars is impractical if we use grams, because cars weigh millions of grams. Medicines, on the other hand, are usually measured in quantities called \emph{milligrams} -- each milligram is only $\frac{1}{1000}$ (one one-thousandth) of a gram. Don't be confused into thinking ``milli-'' means ``million'' --- ``mega'' means ``million.''

When thinking about computers, you will often hear these prefixes used for things like memory and storage capacity. A kilobyte is 1,000 bytes (a \emph{byte} is 8 individual bits, and each bit is the smallest unit of computing -- it can only represent 1 or 0). A megabyte is 1,000 kilobytes (or, 1,000,000 bytes). A gigabyte is one \emph{billion} bytes, and a terabyte is one \emph{trillion} bytes.
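\medskip

Here is one quick worked example, just to show that the prefixes are really nothing more than multiplication (the file in this example is made up; the numbers are only there for illustration). Suppose a picture file is 4 megabytes. ``Mega-'' means million, so the picture is

\begin{equation*}
4 \times 1{,}000{,}000 = 4{,}000{,}000~\textrm{bytes},
\end{equation*}

and since every byte is 8 bits, that is $4{,}000{,}000 \times 8 = 32{,}000{,}000$ bits of storage.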
\newpage
\subsection*{Representing Numbers in Decimal}

\emph{Decimal} means ``with tens.'' You've always been taught to count in the decimal system -- by ten. When you are counting by ones, once you get past 9, you reset the right-most number to zero and add that ten to the next column, which is the \emph{tens} column (so you only increment that counter by one, since you are adding \emph{one} ``ten'' to the total). Once you fill up 9 ``tens'' (90) and count up past 9 in the ``ones'' column (that is, you add one to 99), you have to set the ones column and the tens column to zero, and add one to the ``hundreds'' column. You \emph{carry} the ten to the left from the ones column, and then carry the hundred to the left again, to the next-largest column. One hundred is ten tens, one thousand is 100 tens, and so forth.

\noindent It looks like this:

\bigskip

\begin{tabular}{l l l l l | l r }
\rot{ten thousands} & \rot{thousands} & \rot{hundreds} & \rot{tens} & \rot{ones} & \multicolumn{2}{c}{Number} \\
\hline
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 0 && 0 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 1 && 1 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 2 && 2 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 3 && 3 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 4 && 4 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 5 && 5 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 6 && 6 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 7 && 7 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 8 && 8 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 9 && 9 \\
{\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 1 & 0 && 10 \\
\end{tabular}

\bigskip

When you get to the bottom of the column (that is, when the counter reaches 9), you reset the counter for that column to zero, and add one to the next column over (carry). So you go from 9 to 10, or 19 to 20, or 99 to 100. So any column can only hold between 0 and 9 before you have to carry over to the next \emph{order of magnitude}, which for a decimal system is \emph{ten times the size of one step in the column}. When you run out of room in the ones column, you go to the \emph{tens} column, where a single step is worth ten times as much as a single step (from 1 to 2, or from 8 to 9) in the column to the right of it. Whenever you run out of room in a column, you carry to the next order of magnitude: you go from the ones, to the tens, to the hundreds (100 is $10 \times 10$), to the thousands (1000 is $10 \times 100$), and so on.

To express a number, we count up the amount of each order of magnitude and add them all together. The number 628 is made of $(6 \times 100) + (2 \times 10) + (8 \times 1)$, for instance.

\newpage
Here are a few examples of numbers expressed in columns.
For each one, you count up how many units are in each column, work out what that count is worth at that order of magnitude, and add the values together to get the number being described:

\bigskip

\begin{tabular}{p{2.7in} | l l l l l l | l l r }
\hline
\textbf{Text Description} & & \rot{ten thousands} & \rot{thousands} & \rot{hundreds} & \rot{tens} & \rot{ones} && \multicolumn{2}{c}{\textbf{Number}} \\[\sep]
\hline
& && & & & & &&\\[-2mm]
no ten-thousands, no thousands, no hundreds, no tens, and no ones && {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 0 &&& 0 \\[\widesep]
no ten-thousands, no thousands, no hundreds, no tens, and one ones && {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 1 &&& 1 \\[\widesep]
no ten-thousands, no thousands, no hundreds, no tens, and two ones && {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 2 &&& 2 \\[\widesep]
no ten-thousands, no thousands, no hundreds, three tens, and no ones && {\color{lightgray}0} & {\color{lightgray}0} & {\color{lightgray}0} & 3 & 0 &&& 30 \\[\widesep]
no ten-thousands, no thousands, four hundreds, no tens, and no ones && {\color{lightgray}0} & {\color{lightgray}0} & 4 & 0 & 0 &&& 400 \\[\widesep]
no ten-thousands, no thousands, five hundreds, five tens, and five ones && {\color{lightgray}0} & {\color{lightgray}0} & 5 & 5 & 5 &&& 555 \\[\widesep]
no ten-thousands, six thousands, five hundreds, no tens, and two ones && {\color{lightgray}0} & 6 & 5 & 0 & 2 & & & 6502 \\[\widesep]
six ten-thousands, eight thousands, no hundreds, three tens, and no ones && 6 & 8 & 0 & 3 & 0 & & & 68030 \\[\widesep]
\hline
\end{tabular}

\bigskip

\stbox{6.0in}{\emph{Exercise:} What if, in the above table, you counted past 99,999? What number would you see? That is, what do you see if there is not a ``hundred-thousands'' column? What happened to the hundred thousand that should be counted? Do you need to know how many hundred thousands are in the number? How would you handle the need for a larger number, if you have it? (Introduces the concept of \emph{overflow}.)}

\newpage
\subsection*{Representing Numbers in Binary}

Computers only understand ``1'' and ``0'' -- because they can only sense whether the electricity in a wire is \emph{on} (having a detectable voltage greater than zero) or \emph{off} (having a reference voltage that is basically at ``zero volts,'' or ``ground''). That means that a computer, in order to add, subtract, or store information, has to express \emph{literally anything and everything it can handle} in terms of either ones or zeroes. This system is called \emph{binary}, because computers only understand two ``states'' -- on, or off.

Instead of ``base-10'' counting (where moving to the next column happens when you pass 9), binary is ``base-2'' counting; the numbers move to the next column when the number passes 1---because {\color{webblue}\href{https://www.youtube.com/watch?v=MOn_ySghN2Y}{there's no such number as ``two''}} if you're a computer!

In order to add numbers in binary, we can't count to ten. We must count to one, and then, if the resulting number is greater than one, carry the digit to the next column. Counting in binary is interesting because it is different, and because it introduces us to a new way of doing math.

In order to add two numbers, computers have to do the following (a short program at the end of this chapter sketches these same steps):

\be
\+ store each number in a binary format;
\+ compare each column (ones, twos, fours, eights, etc.)
of each number and see if the numbers in that column add up to more than one;
\+ carry the ``one'' to the next column (the next \emph{order of magnitude}, which is two times the previous column). That is, if the number goes from one to two, the ``one'' will go to zero and the ``two'' will be moved -- and added -- to the next column over to the left;
\+ keep on adding and carrying digits until the addition is complete;
\+ count up the values of each order of magnitude (either one, or none) and add them together; and
\+ report the new number as a [binary] result.
\ee

\begin{minipage}[c]{6.5in}
\begin{center}
\textbf{Binary and decimal representation for each number from 0 to 15:}

\smallskip

\begin{tabular}{p{1.25in} | p{0.10in} p{0.10in} p{0.10in} p{0.10in} | l r}
\hline\\[\negsep]
\textbf{Description} & \rot{eights} & \rot{fours} & \rot{twos} & \rot{ones} && \textbf{{\color{webblue}base 10}}\\[\sep]
\hline\\[\negsep]
$0 + 0 + 0 + 0$ & 0 & 0 & 0 & 0 && {\color{webblue}0} \\ \grr
$0 + 0 + 0 + 1$ & 0 & 0 & 0 & 1 && {\color{webblue}1} \\
$0 + 0 + 2 + 0$ & 0 & 0 & 1 & 0 && {\color{webblue}2} \\ \grr
$0 + 0 + 2 + 1$ & 0 & 0 & 1 & 1 && {\color{webblue}3} \\
$0 + 4 + 0 + 0$ & 0 & 1 & 0 & 0 && {\color{webblue}4} \\ \grr
$0 + 4 + 0 + 1$ & 0 & 1 & 0 & 1 && {\color{webblue}5} \\
$0 + 4 + 2 + 0$ & 0 & 1 & 1 & 0 && {\color{webblue}6} \\ \grr
$0 + 4 + 2 + 1$ & 0 & 1 & 1 & 1 && {\color{webblue}7} \\
$8 + 0 + 0 + 0 $ & 1 & 0 & 0 & 0 && {\color{webblue}8} \\ \grr
$8 + 0 + 0 + 1 $ & 1 & 0 & 0 & 1 && {\color{webblue}9} \\
$8 + 0 + 2 + 0 $ & 1 & 0 & 1 & 0 && {\color{webblue}10} \\ \grr
$8 + 0 + 2 + 1 $ & 1 & 0 & 1 & 1 && {\color{webblue}11} \\
$8 + 4 + 0 + 0 $ & 1 & 1 & 0 & 0 && {\color{webblue}12} \\ \grr
$8 + 4 + 0 + 1 $ & 1 & 1 & 0 & 1 && {\color{webblue}13} \\
$8 + 4 + 2 + 0 $ & 1 & 1 & 1 & 0 && {\color{webblue}14} \\ \grr
$8 + 4 + 2 + 1 $ & 1 & 1 & 1 & 1 && {\color{webblue}15} \\
\hline
\end{tabular}
\end{center}

\stbox{6.0in}{\emph{Problem 1:} what happens in the above table if you add 1 to 15? What number would the computer report if it only has four columns to use? (As above with counting past 99,999, this problem refers to \emph{overflow}). }

\stbox{6.0in}{\emph{Problem 2:} Let's say the machine runs by itself at some regular speed, and adds one to the number each second. Let's also say you are using this counter as a way to control a single blinking light, since ``1'' means ``there is electricity available to that wire'' and so a ``1'' would turn the light on. Look at each column of numbers (that is, each \emph{order of magnitude!}). Remembering that ``a line that has a voltage'' is a 1 and ``no voltage'' (usually called ``ground'') is a zero, which line (column) would make the light blink fastest? Which line would blink the slowest?
}
\end{minipage}

\bigskip

\begin{tabular}{ llll llll llll llll l}
\multicolumn{17}{c}{\textbf{How Computers Count to 2,020}}\\[\sep]
\hline\\[\negsep]
% 2020: 0111 1110 0100
$2^{15}$ & $2^{14}$ & $2^{13}$ & $2^{12}$ & $2^{11}$ & $2^{10}$ & $2^9$ & $2^8$ & $2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$ & \\
\hline\\[\negsep]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & zero \\ \grr
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 2019 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 2020 \\
%\\[\sep]
\hline
\end{tabular}

\bigskip

\begin{tabular}{ llll llll llll llll l}
\multicolumn{17}{c}{\textbf{How Computers Count to 65,535}}\\[\sep] \grr
\hline\\[\negsep] \grr
$2^{15}$ & $2^{14}$ & $2^{13}$ & $2^{12}$ & $2^{11}$ & $2^{10}$ & $2^9$ & $2^8$ & $2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$ & \\
\hline\\[\negsep]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & zero \\ \grr
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 32,767 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 32,768 \\ \grr
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 65,535 \\[\sep]
\hline
\end{tabular}

\bigskip

If the computer counts up from zero, then once it fills every column up through $2^{14}$ (that is, once it reaches 32,767), the next count carries over into the $2^{15}$ (32,768) column.

\stbox{6.0in}{\emph{Useless skill:} Did you know you can count to 31 on one hand? You have five fingers, and each finger can be open or closed, and $2^5$ is 32. Use your thumb for the ``0 or 1'' ($2^0$, ``two to the zeroth power'') column; the digit to its left represents two to the first power (the ``twos digit''); the next digit to the left represents two to the second power (the ``fours digit''); and so on. Watch out for ``4'' though!}

\begin{tabular}{llll llll l}
\rot{128} & \rot{64} & \rot{32} & \rot{sixteen} & \rot{eight} & \rot{four} & \rot{two} & \rot{one} & \\[\sep]
\hline\\[\negsep]
$2^7$ & $2^6$ & $2^5$ & $2^4$ & $2^3$ & $2^2$ & $2^1$ & $2^0$ & \textbf{Result} \\[\sep]
\hline\\[\negsep]
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\color{webblue}\textbf{0}} \\ \grr
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & {\color{webblue}\textbf{2}} ($2 + 0$) \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & {\color{webblue}\textbf{3}} ($2 + 1$) \\ \grr
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & {\color{webblue}\textbf{4}} ($4 + 0 + 0$) \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & {\color{webblue}\textbf{7}} ($4 + 2 + 1$) \\ \grr
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & {\color{webblue}\textbf{8}} ($8 + 0 + 0 + 0$) \\
\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \makeblank{1.5in} \\ \grr
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & \makeblank{1.5in} \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & \makeblank{1.5in} \\ \grr
0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & \makeblank{1.5in} \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & \makeblank{1.5in} \\ \grr
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & \makeblank{1.5in} \\
0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & \makeblank{1.5in} \\ \grr
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \makeblank{1.5in} \\[\sep]
\hline
\end{tabular}

\bigskip

\stbox{6.0in}{\emph{Tip:} if you have a sequence of all ones, like 7 or 15 or 31, rather than adding up each binary column, you can just subtract one from the next largest column base number.
So \texttt{0111} equals \texttt{1000} minus one.}

\vfill

\stbox{4.25in}{\noindent{\color{red}\textbf{Joke:}}
\medskip
\noindent\emph{Remember:} There are only 10 types of people in the world:\\
those who understand binary, and those who don't.\\
\bigskip
(If the joke needs explanation, write 10 like \texttt{0010} and then figure out its value in binary)}
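\newpage
\subsection*{Extra: A Tiny Binary Adding Program}

If you have a computer with Python on it, here is a tiny program that follows the binary addition steps described earlier in this chapter: it stores each number as binary columns, adds the columns one at a time, and carries a one into the next column whenever a column fills up. This program is an extra illustration added to the chapter, not part of the original lesson, and the function names are just made up for this sketch.

\begin{verbatim}
# A small sketch of binary addition, done column by column.

def to_columns(number, width=8):
    """Store a number as a list of binary columns, ones column first."""
    columns = []
    for _ in range(width):
        number, bit = divmod(number, 2)  # divide by two, keep the remainder
        columns.append(bit)
    return columns

def add_binary(a, b, width=8):
    """Add two numbers the way the chapter describes: compare and carry."""
    columns_a = to_columns(a, width)
    columns_b = to_columns(b, width)
    result = []
    carry = 0
    for i in range(width):
        total = columns_a[i] + columns_b[i] + carry  # 0, 1, 2 or 3
        result.append(total % 2)   # what stays in this column
        carry = total // 2         # what gets carried to the next column
    # If carry is still 1 here, the answer does not fit in `width`
    # columns: that is the "overflow" from Problem 1 above.
    return result

answer = add_binary(5, 3)
# Print with the biggest column on the left, like the tables above.
print("".join(str(bit) for bit in reversed(answer)))  # 00001000, which is 8
\end{verbatim}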
{ "alphanum_fraction": 0.6484232868, "avg_line_length": 58.8928571429, "ext": "tex", "hexsha": "93af801fae187d4d2fc9aa3322f35dd3c7879b59", "lang": "TeX", "max_forks_count": 4, "max_forks_repo_forks_event_max_datetime": "2019-11-17T05:31:36.000Z", "max_forks_repo_forks_event_min_datetime": "2017-11-14T04:40:14.000Z", "max_forks_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "jessehamner/TechMillForKids", "max_forks_repo_path": "chapters/math.tex", "max_issues_count": 6, "max_issues_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_issues_repo_issues_event_max_datetime": "2021-05-25T19:21:58.000Z", "max_issues_repo_issues_event_min_datetime": "2017-03-10T21:46:26.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "jessehamner/TechMillForKids", "max_issues_repo_path": "chapters/math.tex", "max_line_length": 1128, "max_stars_count": 28, "max_stars_repo_head_hexsha": "f064bf1408537f71e4e7dc14f02a8e7e20c2af3a", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "jessehamner/TechMillForKids", "max_stars_repo_path": "chapters/math.tex", "max_stars_repo_stars_event_max_datetime": "2022-01-25T09:31:54.000Z", "max_stars_repo_stars_event_min_datetime": "2017-11-13T21:45:08.000Z", "num_tokens": 7205, "size": 19788 }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Twenty Seconds Resume/CV
% LaTeX Template
% Version 1.1 (8/1/17)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Carmine Spagnuolo ([email protected]) with major modifications by
% Vel ([email protected])
%
% License:
% The MIT License (see included LICENSE file)
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%----------------------------------------------------------------------------------------
%	PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------

\documentclass[letterpaper]{twentysecondcv} % a4paper for A4

%----------------------------------------------------------------------------------------
%	PERSONAL INFORMATION
%----------------------------------------------------------------------------------------

% If you don't need one or more of the below, just remove the content leaving the command, e.g. \cvnumberphone{}
\profilepic{cv_pic} % Profile picture
\cvname{Benjamin Schmidt} % Your name
\cvjobtitle{M. Sc. Researcher} % Job title/career
\cvdate{20 August 1988} % Date of birth
\cvaddress{Kaiserin-Augusta-Allee 43 \newline 10589 Berlin} % Short address/location, use \newline if more than 1 line is required
\cvnumberphone{+49 (0) 173 9145137} % Phone number
\cvsite{github.com/benatouba/} % Personal website
\cvmail{[email protected]} % Email address

%----------------------------------------------------------------------------------------

\begin{document}

%----------------------------------------------------------------------------------------
%	ABOUT ME
%----------------------------------------------------------------------------------------

\aboutme{} % To have no About Me section, just remove all the text and leave \aboutme{}

%----------------------------------------------------------------------------------------
%	SKILLS
%----------------------------------------------------------------------------------------

% Skill bar section, each skill must have a value between 0 an 6 (float)
% \skills{{Bash/4},{MATLAB/2},{LaTeX/3},{R/5},{Python3 (Scientific)/5}}
\skillslogo{{bash-logo}, {python-logo},{r-logo},{latex-logo}}
\skillslogosmall{{matlab-logo}, {numpy-logo}, {scipy-logo}, {git-logo}}

%------------------------------------------------

% Skill text section, each skill must have a value between 0 an 6
% \langs{{French/2},{English/4},{German/6}}
\langs{{French/2},{English/5},{German/6}}

%----------------------------------------------------------------------------------------

% Skill bar section, each skill must have a value between 0 an 6 (float)
\interests{{Computer programming} \newline {Hiking} \newline {Sports} \newline {Music} \newline {Photography} }

\makeprofile % Print the sidebar

%----------------------------------------------------------------------------------------
%	PUBLICATIONS
%----------------------------------------------------------------------------------------

%\section{Publications}

%\begin{twentyshort} % Environment for a short list with no descriptions
%	\twentyitemshort{1865}{Chapter One, Down the Rabbit Hole.}
%	\twentyitemshort{1865}{Chapter Two, The Pool of Tears.}
%	\twentyitemshort{1865}{Chapter Three, The Caucus Race and a Long Tale.}
%	\twentyitemshort{1865}{Chapter Four, The Rabbit Sends a Little Bill.}
%	\twentyitemshort{1865}{Chapter Five, Advice from a Caterpillar.}
%	%\twentyitemshort{<dates>}{<title/description>}
%\end{twentyshort}
%----------------------------------------------------------------------------------------
%	AWARDS
%----------------------------------------------------------------------------------------

%\section{Awards}

%\begin{twentyshort} % Environment for a short list with no descriptions
%	\twentyitemshort{1987}{All-Time Best Fantasy Novel.}
%	\twentyitemshort{1998}{All-Time Best Fantasy Novel before 1990.}
%	%\twentyitemshort{<dates>}{<title/description>}
%\end{twentyshort}

%----------------------------------------------------------------------------------------
%	EXPERIENCE
%----------------------------------------------------------------------------------------

\section{Experience}
\newline
\begin{twenty} % Environment for a list with descriptions
\twentyitem{08/19-present}{Technische Universität Berlin}{Research Assistant}{
	\textbf{Chair:} Climatology \newline
	\textbf{Project:} Q-TiP - Quaternary Tipping Points of Lake Systems in the Arid Zone of Central Asia. \newline
	\textbf{Tasks:} Downscaling and analysis of present-day and Pliocene climatological data using the Weather Research and Forecasting (WRF) model and Python. \newline}
\twentyitem{10/19-present}{Freie Universität Berlin}{Lecturer}{
	\textbf{Chair:} Applied Physical Geography \newline
	\textbf{Subjects:} Introduction to Statistics, Statistics with R \newline}
\twentyitem{02/19-07/19}{Potsdam Institute for Climate Impact Research}{Research Assistant}{
	\textbf{Group:} ISIMIP \newline
	\textbf{Project:} ISI-CFACT: Producing counterfactual climatological data from past datasets for the ISIMIP project. \newline
	\textbf{Tasks:} Design and application of an algorithm to construct counterfactual climate data from large datasets in Python. \newline}
\twentyitem{02/17-01/19}{Humboldt-Universität zu Berlin}{Student collaborator}{
	\textbf{Chair:} Climate Geography \newline
	\textbf{Project:} Hitzewellen in Berlin – Klimaprojektionen: Untersuchung der räumlichen und zeitlichen Variation der Hitzebelastung in Berlin (heat waves in Berlin – climate projections: investigating the spatial and temporal variation of heat stress in Berlin). \newline
	\textbf{Tasks:} Application of the mesoscale climate model COSMO-CLM with the urban canopy layer ``Double Canyon Energy Parameterization'' scheme. Assistance with data acquisition and analysis. \newline}
\twentyitem{08/15-09/16}{Christian-Albrechts-Universität zu Kiel}{Student collaborator}{
	\textbf{Chair:} Geophysics \newline
	\textbf{Project:} BORA - Berechnung von Offshore-Rammschall (calculation of offshore pile-driving noise) \newline
	\textbf{Tasks:} Assistance with the analysis of data signals using MATLAB and with the setup for data acquisition. \newline}
%\twentyitem{<dates>}{<title>}{<location>}{<description>}
\end{twenty}

%----------------------------------------------------------------------------------------
%	EDUCATION
%----------------------------------------------------------------------------------------

\section{Education}

\begin{twenty} % Environment for a list with descriptions
%\twentyitem{since 1865}{Ph.D. {\normalfont candidate in Computer Science}}{Wonderland}{\emph{A Quantified Theory of Social %Cohesion.}}
\twentyitem{10/16-03/19}{Humboldt-Universität zu Berlin}{Grade: 1.2}{Master of Science in Global Change Geography - The Physical Geography of Human-Environment Systems. \newline \newline This master's degree program examines current research questions, approaches and insights regarding the interactions between environment and society in the context of global change.
	\newline Extra Courses: Arctic Seismic Exploration at UNIS - University Centre in Svalbard \newline }
\twentyitem{10/11-09/16}{Christian-Albrechts-Universität zu Kiel}{Grade: 2.2}{Bachelor of Science in Physics of the Earth's System - Meteorology, Oceanography, Geophysics. \newline \newline A bachelor's program covering the physical geosciences, taught in equal parts by CAU Kiel and GEOMAR - Helmholtz Centre for Ocean Research Kiel. \newline}
\twentyitem{08/95-06/08}{Freie Waldorfschule Flensburg}{Grade: 1.8}{A levels/Abitur - Specialising in mathematics and geography. \newline International experience: A semester abroad in Pretoria, Republic of South Africa}
%\twentyitem{<dates>} {} {<title>}{<location>}{<description>}
\end{twenty}

%----------------------------------------------------------------------------------------
%	SECOND PAGE EXAMPLE
%----------------------------------------------------------------------------------------

%\newpage % Start a new page
%\makeprofile % Print the sidebar

%\section{Other information}

%\subsection{Review}

%Alice approaches Wonderland as an anthropologist, but maintains a strong sense of noblesse oblige that comes with her class status. She has confidence in her social position, education, and the Victorian virtue of good manners. Alice has a feeling of entitlement, particularly when comparing herself to Mabel, whom she declares has a ``poky little house," and no toys. Additionally, she flaunts her limited information base with anyone who will listen and becomes increasingly obsessed with the importance of good manners as she deals with the rude creatures of Wonderland. Alice maintains a superior attitude and behaves with solicitous indulgence toward those she believes are less privileged.

%\section{Other information}

%\subsection{Review}

%Alice approaches Wonderland as an anthropologist, but maintains a strong sense of noblesse oblige that comes with her class status. She has confidence in her social position, education, and the Victorian virtue of good manners. Alice has a feeling of entitlement, particularly when comparing herself to Mabel, whom she declares has a ``poky little house," and no toys. Additionally, she flaunts her limited information base with anyone who will listen and becomes increasingly obsessed with the importance of good manners as she deals with the rude creatures of Wonderland. Alice maintains a superior attitude and behaves with solicitous indulgence toward those she believes are less privileged.

%----------------------------------------------------------------------------------------

\end{document}
{ "alphanum_fraction": 0.610989011, "avg_line_length": 50.5555555556, "ext": "tex", "hexsha": "d0bcf68dfecab7817cb015ac9bef49f624b1b327", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "67d94a04d173ebdf04684644218fa752f2cf2143", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "benatouba/CV", "max_forks_repo_path": "template.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "67d94a04d173ebdf04684644218fa752f2cf2143", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "benatouba/CV", "max_issues_repo_path": "template.tex", "max_line_length": 696, "max_stars_count": 1, "max_stars_repo_head_hexsha": "67d94a04d173ebdf04684644218fa752f2cf2143", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "benatouba/CV", "max_stars_repo_path": "template.tex", "max_stars_repo_stars_event_max_datetime": "2019-11-22T18:17:08.000Z", "max_stars_repo_stars_event_min_datetime": "2019-11-22T18:17:08.000Z", "num_tokens": 2084, "size": 9555 }
\lab{Getting Started}{Getting Started}

The individual mandatory assignments (called ``labs'' in the following) in the second part of DM561 aim to introduce you to applications of Linear Algebra and to train your Python programming skills. There will be weekly programming tasks for which you have to submit a solution.

\section*{Submitting Assignments} % ===========================================

\subsection*{Labs}

Every lab has a corresponding specifications file with some code to get you started and to make your submission compatible with automated test drivers. The template code will be provided via the IMADA Git Server at \url{https://git.imada.sdu.dk}. How to proceed in detail is described below.

To submit a lab, modify the provided specifications file and use \texttt{git} to submit your solution (discussed in the next section). The submissions will be graded automatically. For example, the first assignment (which will not be graded, and is just used to introduce you to the procedure of how you should submit your solutions and how your solution is graded) has the specifications file \texttt{asg0-onlytesting/onlytesting.py}. To complete that assignment, provide your implementation in the file \texttt{asg0-onlytesting/asg0.py} and submit it via \texttt{git}. After grading, you will get access to a file called \texttt{asg0-onlytesting/grade.txt} with your score and some feedback.

Each assignment will have a formal deadline. However, we will provide testing results once every hour (at minute 00), so that you can improve your code. The grading will be based on your implementation at the exact time of the deadline of the assignment.

\begin{warn}
Do \textbf{not} move or rename the lab folders or the enclosed specifications files; if you do, the test drivers will not be able to find your assignment. Do \textbf{not} edit the file \texttt{grade.txt}. This file is overwritten when you pull with git from the remote server and must stay unchanged.
\end{warn}

\newpage

\section*{Setup} % ============================================================

\begin{warn}
We strongly recommend using a Unix-based operating system (Mac or Linux) for the labs. Unix has a true bash terminal, works well with git and Python, and is the preferred platform for computational and data scientists. It is possible to do this curriculum with Windows, but expect some road bumps along the way. We will ensure that all the exercises can be solved in the IMADA Computer Lab. You can use your own environment, but you should not expect that we are able to answer your environment-specific questions.
\end{warn}

Code has to be submitted using git.

\subsection*{Setup With Git} % ------------------------------------------------

\emph{Git} is a program that manages updates between an online code repository and the copies of the repository, called \emph{clones}, stored locally on computers. Git is installed in the IMADA Computer Lab. The instructions given below should be enough for the needs of this course; the tutorials linked below provide much more information than you will need. Nevertheless, git is an industry-standard collaboration tool, and being able to use it efficiently is an asset. If you decide to use your own computer and git is not already installed on it, you can download it at \url{http://git-scm.com/downloads} (or use the installation procedure of your specific system). If you have never used git, you might want to read a few of the following resources.
\begin{itemize}
\item Official git tutorial: \url{https://git-scm.com/docs/gittutorial}
\item Bitbucket git tutorials: \url{https://www.atlassian.com/git/tutorials}
\item GitHub git cheat sheet: \href{https://education.github.com/git-cheat-sheet-education.pdf}{\texttt{https://education.github.com/github-git-cheat-sheet.pdf}}
\item GitLab git tutorial: \url{https://docs.gitlab.com/ce/gitlab-basics/start-using-git.html}
\item Codecademy git lesson: \url{https://www.codecademy.com/learn/learn-git}
\item Training video series by GitHub: \href{https://www.youtube.com/playlist?list=PLg7s6cbtAD15G8lNyoaYDuKZSKyJrgwB-}{\texttt{https://www.youtube.com/playlist?list=PLg7.../}}
\end{itemize}

There are many websites for hosting online git repositories. IMADA has its own server for hosting git repositories at \url{https://git.imada.sdu.dk}. While this is not needed for submitting your code, you can log in to the webpage using your university account name and the same password as you use for reading your mail or logging into Blackboard. Choose ``SDU'' as the authentication source. See Fig.~\ref{fig:gogs} for an example. Via the webpage you will always be able to see the state of your code that will be used for auto-grading.

\begin{figure}
\includegraphics[width=\textwidth]{gitea_login}
\caption{IMADA Git Server (Gitea) login page}
\label{fig:gogs}
\end{figure}

\begin{enumerate}
\item \emph{Clone your existing repository}. Usually, you would have to create a repository first. However, we have already created a repository for each student of DM561, so you will not have to create one yourself; you only have to clone it.

\item \emph{Connect your folder to the new repository}. \label{step:connect-folder}\label{step:download-data}
In a shell application (Terminal on Linux or Mac, or Git Bash (\url{https://gitforwindows.org/}) on Windows), enter the following commands (we will use a student with the username ``username'' as an example; of course, you have to change this).

\begin{lstlisting}
# Navigate to the folder where you want to store your files
$ <b<cd>b> /path/to/folder # cd means 'change directory'.

# Make sure you are in the right place.
$ <b<pwd>b> # pwd means 'print working directory'.
/path/to/folder

# Clone the repository we provided
$ git clone https://git.imada.sdu.dk/DM561_2021/username-repo.git
Cloning into 'username-repo'...
Username for 'https://git.imada.sdu.dk': username
Password for 'https://[email protected]': ********
remote: Counting objects: 48, done.
remote: Compressing objects: 100% (44/44), done.
remote: Total 48 (delta 16), reused 0 (delta 0)
Unpacking objects: 100% (48/48), done.

$ ls
username-repo

$ cd username-repo
$ ls -rtl
-rw------- 1 username username   66 Oct 31 19:48 README.md
-rw------- 1 username username  134 Oct 31 19:48 Info.md
drwx------ 2 username username 4096 Oct 31 19:48 asg0-onlytesting

# Record your credentials (has to be done once only).
$ git config --local user.name "Firstname Surname"
$ git config --local user.email "[email protected]"
\end{lstlisting}

\item \emph{Install Python package dependencies}. \label{step:install-dependencies}
Some of the labs require third-party Python packages that you might not have installed on your system.
If they are missing, you will see an error message similar to

\begin{lstlisting}
$ python test.py
Traceback (most recent call last):
  File "test.py", line 1, in <module>
    import matplotlib
ImportError: No module named matplotlib
\end{lstlisting}

You can easily install missing packages via

\begin{lstlisting}
$ pip3 install --user matplotlib
Collecting matplotlib
Downloading https://files.pythonhosted.org/packages/b2/58/5842588fa67b45ffb451c4c98eda283c0c42b8f2c5e503e4f6d9ff3c3a63/matplotlib-3.0.1-cp35-cp35m-manylinux1_x86_64.whl (12.9MB)
[...]
\end{lstlisting}

\end{enumerate}

%\subsection*{Setup Without Git} % ---------------------------------------------

Note that you must have git installed in order to i) get the data files for each lab and ii) submit your solution. Git is installed in the Computer Lab --- if you want to install it within your own environment, \url{http://git-scm.com/downloads} is a good starting point.

\section*{Using Git} % ========================================================

Git manages the history of a file system through \emph{commits}, or checkpoints. Use \li{git status} to see the files that have been changed since the last commit. These changes are then moved to the (local) \emph{staging area} (a list of files for the next commit) with \li{git add <filename(s)>}. Record the changes in the staging area with the command \li{git commit -m "<A brief message describing the changes>"}.

All of these commands are done within a ``clone'' of the repository, which is stored somewhere on a computer. This repository must be manually synchronized with the remote repository server via two other git commands: \li{git pull}, to pull updates from the web to the computer; and \li{git push}, to push updates from the computer to the git server.

In a nutshell, for the labs in DM561 you usually have to modify only one file. This file first has to be added to the staging area, then it has to be committed, and then it has to be pushed to the remote server. In order to get your grade, you have to pull the corresponding file from the server after we have tested your solution and created the grading file.

\begin{table}[H]
\begin{tabular}{l|l}
Command & Explanation \\ \hline
\li{git status} & Display the staging area and untracked changes. \\
\li{git pull} & Pull changes from the online repository. \\
\li{git push} & Push changes to the online repository. \\
\li{git add <filename(s)>} & Add a file or files to the staging area. \\
\li{git commit -m "<message>"} & Save the changes in the staging area with a given message. \\
\end{tabular}
\caption{Most common git commands needed for DM561.}
\end{table}

\begin{table}[H]
\begin{tabular}{l|l}
Command & Explanation \\ \hline
\li{git add -u} & Add all modified, tracked files to the staging area. \\
\li{git checkout -- <filename>} & Revert changes to an unstaged file since the last commit. \\
\li{git reset HEAD -- <filename>} & Remove a file from the staging area. \\
\li{git diff <filename>} & See the changes to an unstaged file since the last commit. \\
\li{git diff --cached <filename>} & See the changes to a staged file since the last commit. \\
\li{git config --local <option>} & Record your credentials (\li{user.name}, \li{user.email}, etc.). \\
\end{tabular}
\caption{Some more git commands.}
\end{table}

\begin{info}
When pulling updates with \li{git pull origin master}, your terminal may sometimes display the following message.
\begin{lstlisting}
Merge branch 'master' of https://git.imada.sdu.dk/<name>/<repo> into master

# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
#
# Lines starting with '#' will be ignored, and an empty message aborts
# the commit.
~
~
\end{lstlisting}

This means that someone else (the grading system) has pushed a commit (e.g., the file containing your grades) that you do not yet have, while you have also made one or more commits locally that they (the grading system) do not have. This screen, displayed in \emph{vim} (\url{https://en.wikipedia.org/wiki/Vim_(text_editor)}), is asking you to enter a message (or use the default message) to create a \emph{merge commit} that will reconcile both changes. To close this screen and create the merge commit, type \li{:wq} and press \li{enter}.
\end{info}

% TODO: git staging area diagram.

\subsection*{Example Work Session}

Cloning the repository and recording your name and email have to be done only once. The work session below assumes this has already been done.

Short version:

\begin{lstlisting}
$ <b<cd>b> ~/Desktop/Student-Materials/
$ git pull                           # Pull updates.

# Make changes to a file (in this example onlytesting.py)

# Record the changes in git.
$ git add onlytesting.py             # Track changes.
$ git commit -m "Made some changes." # Commit changes.
$ git push                           # Push updates.
\end{lstlisting}

Long version:

\begin{lstlisting}
# Navigate to the clone of the repository.
$ <b<cd>b> ~/Desktop/Student-Materials/

# Pull any updates from the online repository (such as preliminary feedback and grading), if they exist.
$ git pull
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
From https://git.imada.sdu.dk/DM561-2021/username-repo
   6dde06d..e24cee5  master     -> origin/master
Updating 6dde06d..e24cee5
Fast-forward
 asg0-onlytesting/grade.txt | 33 +++++----------------------------
 1 file changed, 5 insertions(+), 28 deletions(-)

# It seems someone graded your solution, and you would find the result in the file asg0-onlytesting/grade.txt

# Work on the labs. For example, modify asg0-onlytesting/onlytesting.py

$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

	modified:   asg0-onlytesting/onlytesting.py

no changes added to commit (use "git add" and/or "git commit -a")

$ git add asg0-onlytesting/onlytesting.py
$ git status
On branch master
Your branch is up to date with 'origin/master'.

Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

	modified:   asg0-onlytesting/onlytesting.py

# Commit the changes to the repository with an informative message.
$ git commit -m "Made some changes"
[master 72a5ab3] Made some changes
 1 file changed, 1 insertion(+)
<<[master fed9b34] Made some changes
 1 file changed, 10 insertion(+) 1 deletion(-)>>

# Push the changes to the online repository.
$ git push
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 373 bytes | 373.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0)
To https://git.imada.sdu.dk/DM561-2021/username-repo.git
   e24cee5..72a5ab3  master -> master

# The changes have been saved and the online repository updated.
$ git status
On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean
\end{lstlisting}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End:
{ "alphanum_fraction": 0.7312666948, "avg_line_length": 49.2249134948, "ext": "tex", "hexsha": "1d12857a1b8fbbfd52485b5ee4a4c9a3382867b2", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_forks_repo_licenses": [ "CC0-1.0" ], "max_forks_repo_name": "DM561/dm561.github.io", "max_forks_repo_path": "acme-material/Labs/Appendices/Setup/SetupStudent.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_issues_repo_issues_event_max_datetime": "2021-03-31T19:00:36.000Z", "max_issues_repo_issues_event_min_datetime": "2019-10-18T19:57:53.000Z", "max_issues_repo_licenses": [ "CC0-1.0" ], "max_issues_repo_name": "DM561/dm561.github.io", "max_issues_repo_path": "acme-material/Labs/Appendices/Setup/SetupStudent.tex", "max_line_length": 524, "max_stars_count": 1, "max_stars_repo_head_hexsha": "216e8f41007f4f4fbd174c529f543b20bb477702", "max_stars_repo_licenses": [ "CC0-1.0" ], "max_stars_repo_name": "DM561/dm561.github.io", "max_stars_repo_path": "acme-material/Labs/Appendices/Setup/SetupStudent.tex", "max_stars_repo_stars_event_max_datetime": "2019-04-13T13:22:41.000Z", "max_stars_repo_stars_event_min_datetime": "2019-04-13T13:22:41.000Z", "num_tokens": 3574, "size": 14226 }
\documentclass[journal]{packages/vgtc}                % final (journal style)
%\documentclass[review,journal]{packages/vgtc}        % review (journal style)
%\documentclass[widereview]{packages/vgtc}            % wide-spaced review
%\documentclass[preprint,journal]{packages/vgtc}      % preprint (journal style)
%\documentclass[electronic,journal]{packages/vgtc}    % electronic version, journal

%% Uncomment one of the lines above depending on where your paper is
%% in the conference process. ``review'' and ``widereview'' are for review
%% submission, ``preprint'' is for pre-publication, and the final version
%% doesn't use a specific qualifier. Further, ``electronic'' includes
%% hyperreferences for more convenient online viewing.

%% Please use one of the ``review'' options in combination with the
%% assigned online id (see below) ONLY if your paper uses a double blind
%% review process. Some conferences, like IEEE Vis and InfoVis, have NOT
%% in the past.

%% Please note that the use of figures other than the optional teaser is not permitted on the first page
%% of the journal version. Figures should begin on the second page and be
%% in CMYK or Grey scale format, otherwise, colour shifting may occur
%% during the printing process. Papers submitted with figures other than the optional teaser on the
%% first page will be refused.

%% These three lines bring in essential packages: ``mathptmx'' for Type 1
%% typefaces, ``graphicx'' for inclusion of EPS figures. and ``times''
%% for proper handling of the times font family.

% TVCG Packages
\usepackage{mathptmx}
\usepackage{graphicx}
\usepackage{times}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{amsmath}
\usepackage{caption}
\usepackage{subcaption}
\usepackage[utf8]{inputenc}

% Import other packages
\input{packages.tex}

% Import Figures
\input{figures.tex}

% Import Tables
\input{tables.tex}

%% We encourage the use of mathptmx for consistent usage of times font
%% throughout the proceedings. However, if you encounter conflicts
%% with other math-related packages, you may want to disable it.

%% This turns references into clickable hyperlinks.
\usepackage[bookmarks,backref=true,linkcolor=black]{hyperref} %,colorlinks
\hypersetup{
  pdfauthor = {},
  pdftitle = {},
  pdfsubject = {},
  pdfkeywords = {},
  colorlinks=true,
  linkcolor= black,
  citecolor= black,
  pageanchor=true,
  urlcolor = blue,
  plainpages = false,
  linktocpage
}

%% If you are submitting a paper to a conference for review with a double
%% blind reviewing process, please replace the value ``0'' below with your
%% OnlineID. Otherwise, you may safely leave it at ``0''.
\onlineid{0}

%% declare the category of your paper, only shown in review mode
\vgtccategory{Research}

%% allow for this line if you want the electronic option to work properly
\vgtcinsertpkg

%% In preprint mode you may define your own headline.
%\preprinttext{To appear in an IEEE VGTC sponsored conference.}

%% Paper title.
\title{The new best paper award}

%% This is how authors are specified in the journal style
%% indicate IEEE Member or Student Member in form indicated below
\author{Author's Name}
\authorfooter{
%% insert punctuation at end of each item
\item
 % Lane Harrison is with Worcester Polytechnic Institute. E-mail: [email protected].
 Authors' info. Email.
\item
 Authors' info. Email.
}

%other entries to be set up for journal
\shortauthortitle{Author \MakeLowercase{\textit{et al.}}: Title}

%% Abstract section.
\abstract{
Your abstract is great.
} % end of abstract

%% Keywords that describe your work.
%% Will show as 'Index Terms' in journal
%% please capitalize first letter and insert punctuation after last keyword
\keywords{Perception, Visualization, Evaluation.}

%% ACM Computing Classification System (CCS).
%% See <http://www.acm.org/class/1998/> for details.
%% The ``\CCScat'' command takes four arguments.
\CCScatlist{ % not used in journal version
  \CCScat{K.6.1}{Management of Computing and Information Systems}%
{Project and People Management}{Life Cycle};
  \CCScat{K.7.m}{The Computing Profession}{Miscellaneous}{Ethics}
}

%% Uncomment below to include a teaser figure.
% Teaser should be jnds for -pcp versus +pcp (distinguishable; not distinguishable; target; not distinguishable; distinguishable)
\teaser{
 \centering
 \includegraphics[width=\textwidth]{img/teaser}
 \caption{Every paper should have a teaser.}
 \label{fig:teaser}
}

%% Uncomment below to disable the manuscript note
%\renewcommand{\manuscriptnotetxt}{}

%% Copyright space is enabled by default as required by guidelines.
%% It is disabled by the 'review' option or via the following command:
% \nocopyrightspace

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%% START OF THE PAPER %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

%% The ``\maketitle'' command must be the first command after the
%% ``\begin{document}'' command. It prepares and prints the title block.

%% the only exception to this rule is the \firstsection command
\firstsection{Introduction}

\maketitle

%% \section{Introduction} %for journal use above \firstsection{..} instead

\input{body.tex}

%% if specified like this the section will be committed in review mode
\acknowledgments{
Text.
}

\bibliographystyle{abbrv}
\bibliography{paper.bib}

\end{document}
{ "alphanum_fraction": 0.7245011086, "avg_line_length": 32.6024096386, "ext": "tex", "hexsha": "560bfb753b8de5d95f746be9f7b79a80a58915b7", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "c7a7f6a98136dafb2a75776de1ae5b606f1d88a6", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "wmjpillow/SHMetroVIS", "max_forks_repo_path": "paper/paper.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "c7a7f6a98136dafb2a75776de1ae5b606f1d88a6", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "wmjpillow/SHMetroVIS", "max_issues_repo_path": "paper/paper.tex", "max_line_length": 129, "max_stars_count": 3, "max_stars_repo_head_hexsha": "c7a7f6a98136dafb2a75776de1ae5b606f1d88a6", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "wmjpillow/SHMetroVIS", "max_stars_repo_path": "paper/paper.tex", "max_stars_repo_stars_event_max_datetime": "2021-05-27T13:05:50.000Z", "max_stars_repo_stars_event_min_datetime": "2019-09-11T05:39:35.000Z", "num_tokens": 1311, "size": 5412 }
\section{Case detection and isolation}
\label{cdr}

\subsection{Determining the proportion of cases detected}
We calculate a time-varying case detection rate (CDR), defined as the proportion of all symptomatic cases (clinical strata 2 to 5) that are detected (clinical strata 3 to 5). This proportion is informed by the number of tests performed, using the following formula:
\[ \mathit{CDR}(t) = 1 - e^{-\mathit{shape} \times \mathit{tests}(t)} \]
where $t$ is the time in days from the 31\textsuperscript{st} December 2019 and $\mathit{tests}(t)$ is the number of tests per capita performed on that date. To determine the value of the \textit{shape} parameter, we solve this equation under the assumption that a certain daily testing rate $\mathit{tests}(t)$ is associated with a certain $\mathit{CDR}(t)$. Solving for \textit{shape} yields:
\[ \mathit{shape} = \frac{-\log\big(1 - \mathit{CDR}(t)\big)}{\mathit{tests}(t)} \]
That is, if we assume that a certain daily per capita testing rate is associated with a certain proportion of symptomatic cases detected, we can determine \textit{shape}. As this relationship is not well understood and is unlikely to be consistent across settings, we vary the \textit{CDR} associated with a given per capita testing rate during uncertainty and calibration analyses. Although the assumed \textit{CDR} value can be varied widely, the purpose of this approach is to incorporate changes in the case detection rate that reflect the historical profile of testing capacity over time. Once the \textit{CDR} is known, the proportion of persons entering clinical stratum 3 is calculated from it, along with the proportion of all incident cases hospitalised (strata 4 and 5).

\subsection{Isolation of detected cases}
As described in Section \ref{clin} above, as infected persons progress from the early to the late stage of active COVID-19, infectiousness is reduced for those in the detected strata (3 to 5) to reflect case isolation.
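To give a worked illustration of the \textit{shape} calculation described in the first subsection above (the numbers here are purely illustrative and are not taken from any calibrated model): if a testing rate of one test per 1{,}000 people per day ($\mathit{tests}(t) = 0.001$) were assumed to correspond to a CDR of 20\%, then
\[ \mathit{shape} = \frac{-\log(1 - 0.2)}{0.001} \approx 223 . \]
With this value of \textit{shape}, doubling the testing rate to $\mathit{tests}(t) = 0.002$ would imply $\mathit{CDR}(t) = 1 - e^{-223 \times 0.002} \approx 0.36$, i.e.\ around 36\% of symptomatic cases detected.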
{ "alphanum_fraction": 0.7874601488, "avg_line_length": 94.1, "ext": "tex", "hexsha": "0fa35d3dd9ec40c2c43088b20d593cfda227f707", "lang": "TeX", "max_forks_count": 10, "max_forks_repo_forks_event_max_datetime": "2021-08-19T16:19:03.000Z", "max_forks_repo_forks_event_min_datetime": "2020-04-24T00:38:00.000Z", "max_forks_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_forks_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_forks_repo_name": "emmamcbryde/AuTuMN-1", "max_forks_repo_path": "docs/tex/tex_descriptions/models/covid_19/detection.tex", "max_issues_count": 96, "max_issues_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_issues_repo_issues_event_max_datetime": "2022-03-31T01:48:46.000Z", "max_issues_repo_issues_event_min_datetime": "2020-01-29T05:10:29.000Z", "max_issues_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_issues_repo_name": "emmamcbryde/AuTuMN-1", "max_issues_repo_path": "docs/tex/tex_descriptions/models/covid_19/detection.tex", "max_line_length": 384, "max_stars_count": 14, "max_stars_repo_head_hexsha": "b1e7de15ac6ef6bed95a80efab17f0780ec9ff6f", "max_stars_repo_licenses": [ "BSD-2-Clause-FreeBSD" ], "max_stars_repo_name": "emmamcbryde/AuTuMN-1", "max_stars_repo_path": "docs/tex/tex_descriptions/models/covid_19/detection.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-09T03:38:35.000Z", "max_stars_repo_stars_event_min_datetime": "2020-03-11T06:15:30.000Z", "num_tokens": 440, "size": 1882 }
\chapter{SIREn Query Parser}{
The query parser is used in SIREn in order to allow people to interact with its index. However, we must keep in mind that Sindice, which uses SIREn for searching and browsing purposes, is also meant to be used as a web service by machines. In this section, we present a query parser that matches triple patterns in datasets before describing a parser that matches entities.
}
\label{chap:siren-extension}

\input{conclusion/siren}

\chapter{Conclusion}{
In recent years, the amount of semantic data has increased considerably. More and more web sites are exporting their data using RDF, as the power of the semantic web is increasingly attractive: being able to efficiently search multiple data sources for specific information, and to retrieve relevant results afterwards.
The semantic web is a challenging and expanding research domain. The Web as we know it is undergoing a radical change, and the semantic web is contributing a lot to it. People are publishing RDF data following the best practices of Linked Data. The Linking Open Data Cloud is a huge gathering of inter-connected semantic data sources. Applications can use this linked data and provide a concrete benefit to the way we use the Web. \emph{Sig.ma} is a use case for a mashup application that shows the power of the semantic web. Its purpose is to search across many data sources, and to provide information organized around \emph{entities}, e.g., a product, a person or any other concept.
Being able to use this collection of knowledge efficiently and at web scale, so that applications like Sig.ma are viable, is an important and challenging matter. Sig.ma is built on \emph{Sindice}, a web service that provides search and retrieval capabilities over semantic data. The web service uses \emph{SIREn} at its core, an Information Retrieval search engine, to answer an \emph{information need} and to retrieve \emph{relevant} documents.
In this report we presented data structures that are commonly used in Information Retrieval search engines. We also discussed some of the key points for optimizing these structures in order to obtain more efficient and scalable IR search engines like SIREn. We presented AFOR, a compression method that provides both fast compression (which speeds up index updates) and fast decompression (which increases query throughput), while still achieving a high compression ratio. We proposed SkipBlock, a novel self-indexing model in which carefully chosen configurations provide faster random lookups over an inverted list and a more compact structure than the original Skip List model.}
\label{chap:conclusion}

\section{Summary of the Report}

The flow of this report reflects the flow of a research work by
\begin{inparaenum}[(1)]
\item describing a problem and why current related work does not answer the requirements;
\item proposing a solution to the problem; and
\item proving the previous claims by performing comparative benchmarks.
\end{inparaenum}
Following this pattern, we proposed two novel structures, both of which share the common goal of reducing the amount of data read in order to increase the IO throughput.

\begin{description}
\item[Compression Technique] Compression techniques do not only aim at reducing storage space, but also at reducing IO access time. Reading and writing less data from and to disk with a high-performance algorithm reduces the time wasted on IO access. Thus the performance of operations that directly depend on the data being processed is improved.
We proposed \emph{AFOR}, a new compression class that can increase query throughput compared to other state-of-the-art algorithms thanks to a more ``close-to-data'' compression.
\item[Self-Indexing Technique] Query processing returns relevant information by applying operations on the inverted lists. However, not all the data held in the inverted lists is necessary, so reading or decoding such data is wasted time. Self-indexing is a technique that allows skipping over portions of the inverted lists that are unnecessary for the processing of a query. We proposed a new self-indexing model, called \emph{SkipBlock}, that aims to improve on the original Skip List model by taking into consideration the compression algorithm used on the inverted lists. We will also present at \emph{The $33^{rd}$ European Conference on Information Retrieval} (ECIR)\footnote{ECIR: \url{http://www.ecir2011.dcu.ie/}} the paper which introduced the model (Appendix~\ref{app:SkipBlock-paper}).
\end{description}

\section{Future Work}

The Information Retrieval domain for Semantic Data covers not only what has been presented in this report but a wider area of problems. In the coming months, I will stay within DERI and work on different subjects. In this section I list a number of possible directions for future work.
\begin{itemize}
\item Finalize the AFOR implementation to bring it to a production-ready state.
\item Continue the research on SkipBlock, which will consist of optimizing search strategies and finding better-adapted ones.
\item Implement a novel SIREn index structure that will improve query processing performance.
\item Start researching dynamic query processing.
\end{itemize}

\section{Personal Benefits}

My internship at DERI has been a source of new knowledge in many areas. I was able to deepen not only my skills in computer programming but also my scientific knowledge.
\begin{description}
\item[Computer Skills] Thanks to this internship I was able to improve my skills in different programming languages: in Java, since our project SIREn uses it; in scripting languages such as Bash or Ruby, when writing benchmark automation scripts; and in a text stream editor like \emph{Sed}, very convenient for automating operations on large files such as logging or benchmarking result files.
\\*[\parskip]
Because SIREn is a system built to be highly efficient and scalable over millions of entity descriptions, any code used within it has to be well written. The engineer must take care of memory and CPU consumption so that the best performance is reached. Concerning Java, there are many classes available to help the developer; for instance, it is better to use the class \texttt{StringBuilder}\footnote{\texttt{StringBuilder}: \url{http://tinyurl.com/3xbkvw6}} when operating on very large strings than to use the \texttt{String} type.
\\*[\parskip]
Finally, it is important to comment the code written, not only for people using it later on but also for ourselves, since it helps to keep track of its structure and of the possible optimizations. As part of commenting the code, writing meaningful descriptions in SVN logs helps to keep track of what was done and the reasons for some changes.
\item[Engineering Skills] For the last months of my internship I was given a project (the SkipBlock model) to work on alone. This experience showed me the different points to take care of when managing a project, such as coordinating the development against a deadline.
Also, when implementing a solution, there is sometimes a difference between the model and the actual results of the implementation. This leads to the necessity of making decisions in order to understand why this is so and to be able to explain the reasons clearly. Moreover, it is frequent when implementing under deadline pressure that the code is not optimized. In the short term this is not a problem, but in the long term it becomes one as \emph{technical debt}\footnote{Technical Debt: \url{http://www.martinfowler.com/bliki/TechnicalDebt.html}}, since messy code will eventually require re-factoring.
\item[Research Skills] DERI made me aware of the challenges that we can expect from the research environment, such as research-dependent implementations which change a project's flow, since time constraints cannot be put on tasks whose difficulty and problems are still unknown. Moreover, working on the SIREn project allowed me to take part in the publication process of scientific papers. These points, as well as the scientific domain of DERI (Semantic Web) and of the project (IR plus highly efficient structures), gave me the desire to keep on working in the DI2 team for the following year.
\item[Scientific Knowledge] Thanks to this internship I was able to gain more knowledge of the interesting Information Retrieval domain. Moreover, the internship made me aware of the Semantic Web infrastructure and of all the possibilities it provides.
\item[Human Skills] A project cannot be successful unless the communication between members is clear and efficient. A good communication flow allows members to know the big picture of the research, and what each individual should be working on. DERI is a multi-cultural research institute, with people from all around the world. This working environment was a good basis for improving my English.
\end{description}
%&LaTeX
\section{Audio \& Image Compression}

\subsection{Lab Background: sound and images in MATLAB}

MATLAB has a number of functions that can be used to read and write sound and image files as well as manipulate and display them. See the \texttt{help} for each function for details. Ones that you'll likely find useful are:
\begin{description}
\item[audiorecorder] Perform real-time audio capture. This may or may not work on your system; it is critically dependent on your hardware and MATLAB's support thereof. You may find it simpler to record and edit audio files with other software and then save them as \texttt{.wav} files to load into MATLAB (yes, that's right, MATLAB won't read \texttt{mp3} files).
\item[sound] Play a vector as a sound. This allows you to create arbitrary waveforms mathematically (e.g., individual sinusoids, sums of sinusoids) and then play them through your speakers. This is the function to use if your values are scaled within the range of -1 to +1.
\item[soundsc] This function works like \texttt{sound}, but first it scales the vector values to fall within the range of -1 to +1. Much of the time, you won't have your waveforms pre-scaled, and you'll use this function.
\item[imread] Read image from graphics file. MATLAB's image file support is much better than its audio file support. This function (and \texttt{imwrite}) supports many file types.
\item[imwrite] Write image to graphics file.
\end{description}

Within MATLAB, audio is a 1-D vector (stereo is $n \times 2$, but we won't deal with stereo) and grey-scale images are 2-D arrays (color images are 3-D arrays). For simplicity's sake, make sure any sound files you use are mono.

\subsubsection{Data Types}

MATLAB supports a number of data types, and the type that the file I/O functions return often depends on the kind of file read. Regardless, almost all of the MATLAB functions you'll use to process data return doubles. For example, consider the following commands that read a color image, convert it to grey-scale, and display it:
\begin{verbatim}
>> A = imread('/tmp/ariel.jpg');
>> B = mean(A,3);
>> imagesc(B)
>> colormap(gray)
>> axis equal
>> whos
  Name      Size         Bytes  Class

  A       100x55x3       16500  uint8 array
  B       100x55         44000  double array
\end{verbatim}
The (color) image is read into \texttt{A}, which, as you can see from the output of the \texttt{whos} command, is a 100x55x3 unsigned, 8-bit integer array (the third dimension for the red, green, and blue image color components). I convert the image to grey-scale by taking the mean of the three colors at each pixel (the overall brightness), producing a \texttt{B} array that is \texttt{double}. The \texttt{imagesc} function displays the image, the \texttt{colormap} function determines how the array values translate into screen colors, and the \texttt{axis equal} command ensures that the pixels will be square on the screen.

Much of the time, we will just perform our calculations using \texttt{double} data types. However, we can use functions like \texttt{uint8} to convert our data. Note that the sound playing functions don't seem to like integer vectors.

At the very least, you can get real audio files from \url{http://faculty.washington.edu/stiber/pubs/Signal-Computing/}; I assume that you'll have no trouble locating interesting images (please keep your work G-rated). Don't use ones that are too big; there will be a 2MB size limit on E-Submit for your entire compressed file.
Write answers to any questions in the steps below in a file \texttt{answers.txt}, \texttt{answers.doc}, or something similarly named. \subsection{Lossless image coding} The simplest way to compress images is \emph{run-length coding} (RLE), a form of repetitive sequence compression in which multiple pixels with the same values are converted into a count and a value. To implement this, we need to reserve a special image value --- one that will never be used as a pixel value --- as a flag to indicate that the next two numbers are a (count, value) pair, rather than just a couple pixels. Let's apply RLE to three different kinds of images: color photographs, color drawings, and black-and-white images (e.g., a scan of text). You can choose images you like, or get these from the book web site: \url{http://faculty.washington.edu/stiber/pubs/Signal-Computing/ariel.jpg} (color photo), \url{http://faculty.washington.edu/stiber/pubs/Signal-Computing/cartoon.png} (color drawing), and \url{http://faculty.washington.edu/stiber/pubs/Signal-Computing/text.png} (black-and-white). \paragraph{Step 1.1} Write a MATLAB script to read an image in, convert it to grey-scale if the image array is 3-D, and scale the image values so they are in the range [1, 255] (note the absence of zero). Save your script as \texttt{step11.m}. Verify that this has worked by using the \texttt{min} and \texttt{max} functions. Convert the results to \texttt{uint8} and display each using \texttt{imagesc}. Save the resulting images as \texttt{step11a.jpg}, \texttt{step11b.jpg}, and \texttt{step11c.jpg}. If you used your own images, name them \texttt{step11a-in.jpg}, \texttt{step11b-in.jpg}, and \texttt{step11c-in.jpg}. \paragraph{Step 1.2} At this point, you should have in each case a 2-D array with values in the range [1, 255] inclusive. To simplify matters, we will treat each array as though it were one-dimensional. This is easy in MATLAB, as we can index a 2-D array with a single index ranging from 1 to $N \times M$ (the array size). Write a RLE function (save it as \texttt{RLE.m}) that takes in a 2-D array and scans it for runs of pixels with the same value, producing a RLE vector on its output. When fewer than four pixels in a row have the same value, they should just appear in the vector. When four or more (up to 255) pixels in a row have the same value, they should be replaced with three vector elements: a zero (indicating that the next two elements are a run code, rather than ordinary pixel values), a count (should contain 4 to 255), and the pixel value for the run. Verify that your RLE function works by implementing a RLD function (RLE decoder, saved as \texttt{RLD.m}) that takes in a RLE vector, $N$, and $M$ and outputs a 2-D $N \times M$ array. The RLD output should be identical to the RLE input (subtracting them should produce an array of zeros); verify that this is the case. For each image type, compute the compression factor by dividing the number of elements in the RLE vector by the number of elements in the original array. What compression factors do you get for each image? \subsection{Lossy audio coding: DPCM} In \emph{differential pulse-code modulation} (DPCM), we encode the differences between signal samples in a limited number of bits. In this section, you'll take an audio signal, apply DPCM with differing numbers of bits, see how much space is saved, and hear if and how the sound is modified. 
A slight complicating factor is that all of the data will be represented using \texttt{double}; however, we will limit the values that are stored in each double to integers in the range $[0, 2^{\mathrm{bits}}-1]$ (where ``bits'' is the number of bits we're using).

\paragraph{Step 2.1}
Find a sound to work with; you can use \url{http://faculty.washington.edu/stiber/pubs/Signal-Computing/amoriole2.mat} if you like. Likely, it will have values that are not integers; convert the values to 16-bit integer values by scaling (to $[0, 2^{16}-1]$) and rounding (reminder: the vector's type will still be \texttt{double}; we're just changing the \emph{values} in each element to be integers in that range). Verify that this conversion produces no audible change in the sound. What is the MATLAB code to do this initial quantization?

\paragraph{Step 2.2}
Now write a DPCM function, saved as \texttt{DPCM.m}, that takes the sound vector and the number of bits for each difference and outputs a DPCM-coded vector. Note that the MATLAB \texttt{diff} function will compute the differences between elements of a vector. Your DPCM function should:
\begin{enumerate}
\item compute the differences between the samples,
\item limit each difference value to be in the range $[-(2^{\mathrm{bits}-1}-1), 2^{\mathrm{bits}-1}-1]$, producing a quantized vector and a vector of ``residues'' (you will generate two vectors). For each quantized difference, the ``residues'' vector will be zero if the difference, $\Delta x_i$, is within the above range. Otherwise, it will contain the amount that $\Delta x_i$ exceeds that range (i.e., the difference between $\Delta x_i$ and its quantized value, either $-(2^{\mathrm{bits}-1}-1)$ or $2^{\mathrm{bits}-1}-1$).
\item ``make up for'' each nonzero value in the ``residues'' vector. For each such value, scan the quantized difference vector from that index onward, and modify its entries, up to the quantization limits above, until all of the residue has been ``used up''.
\item The final result is a single coded vector that your function should return.
\end{enumerate}
For example, let's say that we're using 4-bit DPCM and that some sequence of differences is $(\Delta x_1=10, \Delta x_2=5, \Delta x_3=-3)$. The range of quantized differences is $[-7, +7]$, and so the quantized differences are $(7, 5, -3)$ and the residues are $(3, 0, 0)$. Since the first residue is nonzero, we proceed to modify quantized differences starting with the second one, until we've added three to them. The resulting final quantized differences are $(7, 7, -2)$.
Save your original sound vector as \texttt{step24-in.mat} and \texttt{step24-in.wav} and the IDPCM output sounds as \texttt{step24-15.mat} and \texttt{step24-15.wav}, \texttt{step24-14.mat} and \texttt{step24-14.wav}, \texttt{step24-12.mat} and \texttt{step24-12.wav}, and \texttt{step24-10.mat} and \texttt{step24-10.wav}. See the MATLAB \texttt{save} command for how to save individual variables in \texttt{.mat} files.

For extra credit, investigate how few bits you can use to encode the sound and still detect some aspect of the original sound. Is this surprising to you?

\subsection{Lossy image coding: JPEG}

\paragraph{Step 3.1}
In this sequence of steps, we will use frequency-dependent quantization, similar to that used in JPEG, to compress an image. Start with your grey-scale, continuous-tone image from step 1.1 (if you used a color image, convert it to grey-scale as you did in step 1.1). The MATLAB image processing or signal processing toolboxes are needed to have access to DCT functions, so we'll use the \verb|fft2()| and \verb|ifft2()| functions instead. To do a basic test of these functions, write a script that loads your image, converting it to grey-scale if necessary, and then computes its 2-D FFT using \verb|fft2()|. The resulting matrix has complex values, which we will need to preserve. Display the magnitude of the FFT (remember to use the \verb|abs| function to get the magnitude of a complex number) using \verb|imagesc()|. Do this for each image type. Can you relate any features in the FFT to characteristics in the original image?

\paragraph{Step 3.2}
Use \verb|ifft2()| to convert the FFT back and plot the result versus the original grey-scale image (use \verb|imagesc()|) to check that everything is working fine. Analyze the difference between the two images (i.e., actually subtract them) to satisfy yourself that any changes are merely small errors attributable to finite machine precision. Repeat this process for the other images.

\paragraph{Step 3.3}
Let's quantize the image's spectral content. First, find the number of zero elements in the FFT, using something like \verb|origZero=length(find(abs(a)==0));|, where \verb|a| is the FFT. \emph{Remember to exclude the DC value in the \texttt{fft2} output in figuring out this range.} Then, zero out additional frequency components by zeroing out all those with magnitudes below some threshold. You'll want to set the threshold somewhere between the min and max magnitudes of \verb|a|, which you can get as \verb|mn=min(min(abs(a)));| and \verb|mx=max(max(abs(a)));|. Let's make four tests, with thresholds 5\%, 10\%, 20\%, and 50\% of the way between the min and max, i.e., \verb|th=0.05*(mx-mn)+mn;|. Zero out all FFT values below the threshold using something like:
\begin{verbatim}
b = a;
b(find(abs(a)<th)) = 0;
\end{verbatim}
You can count the number of elements thresholded by finding the number of zero elements in \verb|b| at this point and subtracting the number that were originally zero (i.e., \verb|origZero|). This is an estimate of the amount the image could be compressed with an entropy coder. Express the number of thresholded elements as a fraction of the total number of pixels in the original image and make a table or plot of this value versus threshold level.

\textit{Extra credit.} Note that the FFT may have very high values for just a few elements, and low values for others. You might plot a histogram to verify this. Not including the DC value likely will eliminate the highest value in the FFT.
However, some of the other values may still be large enough to produce too large of a range. Can you suggest an approach that will take this into account? How does the JPEG algorithm deal with or avoid this problem? \paragraph{Step 3.4} Now we will see the effect of this thresholding on image quality. Convert the thresholded FFT back to an image using something like \verb|c = abs(ifft2(b));|. For each type of image and each threshold value, plot the original image and the final processed image. Compute the mean squared error (MSE) between the original and reconstructed image (mean squared error for matrices can be computed as \verb|mean(mean((a-c).^2))|). What can you say about the effects on the image and MSE? Collect your code together as a script to automate the thresholding and reconstruction, so you can easily compute MSE for a number of thresholds. Plot MSE vs. threshold percentage (just as you plotted fraction of pixels thresholded vs. threshold in step 3.3). % LocalWords: WebQ MATLAB DSP
\documentclass[preprint]{sigplanconf} \usepackage{amssymb} \usepackage{amsthm} \usepackage{breakurl} % Not needed if you use pdflatex only. \usepackage{color} \usepackage{epsfig} \usepackage{esvect} \usepackage{listings} \usepackage{mathpartir} \usepackage{MnSymbol} \usepackage{multirow} \usepackage{rotating} \lstdefinestyle{C++}{language=C++,% showstringspaces=false, columns=fullflexible, escapechar=@, basicstyle=\sffamily, % commentstyle=\rmfamily\itshape, moredelim=**[is][\color{white}]{~}{~}, morekeywords={concept,requires,noexcept}, literate={[<]}{{\textless}}1 {[>]}{{\textgreater}}1 % {<}{{$\langle$}}1 {>}{{$\rangle$}}1 % {<=}{{$\leq$}}1 {>=}{{$\geq$}}1 {==}{{$==$}}2 {!=}{{$\neq$}}1 % {=>}{{$\Rightarrow\;$}}1 {->}{{$\rightarrow{}$}}1 % {<:}{{$\subtype{}\ $}}1 {<-}{{$\leftarrow$}}1 % {s1;}{{$s_1$;}}3 {s2;}{{$s_2$;}}3 {s3;}{{$s_3$;}}3 {s4;}{{$s_4$;}}3 {s5;}{{$s_5$;}}3 {s6;}{{$s_6$;}}3 {s7;}{{$s_7$;}}3 {sn;}{{$s_n$;}}3 {si;}{{$s_i$;}}3% {P1}{{$P_1$}}2 {P2}{{$P_2$}}2 {P3}{{$P_3$}}2 {P4}{{$P_4$}}2 {P5}{{$P_5$}}2 {P6}{{$P_6$}}2 {P7}{{$P_7$}}2 {Pn}{{$P_n$}}2 {Pi}{{$P_i$}}2% {D1}{{$D_1$}}2 {D2}{{$D_2$}}2 {D3}{{$D_3$}}2 {D4}{{$D_4$}}2 {D5}{{$D_5$}}2 {D6}{{$D_6$}}2 {D7}{{$D_7$}}2 {Dn}{{$D_n$}}2 {Di}{{$D_i$}}2% {e1}{{$e_1$}}2 {e2}{{$e_2$}}2 {e3}{{$e_3$}}2 {e4}{{$e_4$}}2% {E1}{{$E_1$}}2 {E2}{{$E_2$}}2 {E3}{{$E_3$}}2 {E4}{{$E_4$}}2% {m_e1}{{$m\_e_1$}}4 {m_e2}{{$m\_e_2$}}4 {m_e3}{{$m\_e_3$}}4 {m_e4}{{$m\_e_4$}}4% {Divide}{{Divide}}6 % {Match}{{\emph{Match}}}5 % {Case}{{\emph{Case}}}4 % {Qua}{{\emph{Qua}}}3 % {When}{{\emph{When}}}4 % {Otherwise}{{\emph{Otherwise}}}9 % {EndMatch}{{\emph{EndMatch}}}8 % {CM}{{\emph{CM}}}2 {KS}{{\emph{KS}}}2 {KV}{{\emph{KV}}}2 % {EuclideanDomain}{\concept{EuclideanDomain}}{15} % {LazyExpression}{\concept{LazyExpression}}{14} % {Polymorphic}{\concept{Polymorphic}}{11} % {Convertible}{\concept{Convertible}}{11} % {Integral}{\concept{Integral}}8 % {SameType}{\concept{SameType}}8 % {Pattern}{\concept{Pattern}}7 % {Regular}{\concept{Regular}}7 % {Object}{\concept{Object}}6 % {Field}{\concept{Field}}5 % } \lstset{style=C++} \lstdefinestyle{Caml}{language=Caml,% morekeywords={when} } \lstdefinestyle{Haskell}{language=Haskell,% morekeywords={out,view,real} } \DeclareRobustCommand{\Cpp}{C\texttt{++}} \DeclareRobustCommand{\code}[1]{{\lstinline[breaklines=false,escapechar=@]{#1}}} \DeclareRobustCommand{\codebr}[1]{{\lstinline[breaklines=true]{#1}}} \DeclareRobustCommand{\codehaskell}[1]{{\lstinline[breaklines=false,language=Haskell]{#1}}} \DeclareRobustCommand{\codeocaml}[1]{{\lstinline[breaklines=false,language=Caml]{#1}}} \DeclareRobustCommand{\concept}[1]{{\small\textsc{#1}}} \newcommand{\exclude}[1]{} \newcommand{\halfline}{\vspace{-1.5ex}} \newtheorem{lemma}{Lemma} \newtheorem{theorem}{Theorem} \newtheorem{corollary}{Corollary} %% grammar commands \newcommand{\Rule}[1]{{\rmfamily\itshape{#1}}} \newcommand{\Alt}{\ensuremath{|}} \newcommand{\is}{$::=$} \newcommand{\subtype}{\textless:} \newcommand{\lazyevals}{\Downarrow} \newcommand{\evals}{\Rightarrow} \newcommand{\evalspp}{\Rightarrow^+} \newcommand{\DynCast}[2]{\ensuremath{dc\langle{#1}\rangle({#2})}} \newcommand{\nullptr}{\ensuremath{\bot}} \newcommand{\True}{\ensuremath{\mathsf{true}}} \newcommand{\False}{\ensuremath{\mathsf{false}}} \newcommand{\Wildcard}{\ensuremath{\mathit{\bf wildcard}}} \newcommand{\Value}[1]{\ensuremath{\mathit{\bf value}\langle{#1}\rangle}} \newcommand{\Variable}[1]{\ensuremath{\mathit{\bf variable}\langle{#1}\rangle}} \newcommand{\ExprU}[2]{\ensuremath{\mathit{\bf expr}\langle{#1},{#2}\rangle}} 
\newcommand{\ExprB}[3]{\ensuremath{\mathit{\bf expr}\langle{#1},{#2},{#3}\rangle}} \newcommand{\ExprK}[3]{\ensuremath{\mathit{\bf expr}\langle{#1},{#2},\cdots,{#3}\rangle}} \newcommand{\Guard}[2]{\ensuremath{\mathit{\bf guard}\langle{#1},{#2}\rangle}} \newcommand{\Cnstr}[3]{\ensuremath{\mathit{\bf ctor}\langle{#1},{#2},\cdots,{#3}\rangle}} \newcommand{\f}[1]{{ {{#1\%}}}} \newcommand{\s}[1]{{ {\bf \underline{#1\%}}}} \newcommand{\n}[1]{{ {\bf ~ ~ ~ ~ }}} \newcommand{\Opn}{{\scriptsize {\bf Open}}} \newcommand{\Cls}{{\scriptsize {\bf Tag}}} \newcommand{\Unn}{{\scriptsize {\bf Union}}} \input{data2} \newsavebox{\sembox} \newlength{\semwidth} \newlength{\boxwidth} \newcommand{\Sem}[1]{% \sbox{\sembox}{\ensuremath{#1}}% \settowidth{\semwidth}{\usebox{\sembox}}% \sbox{\sembox}{\ensuremath{\left[\usebox{\sembox}\right]}}% \settowidth{\boxwidth}{\usebox{\sembox}}% \addtolength{\boxwidth}{-\semwidth}% \left[\hspace{-0.3\boxwidth}% \usebox{\sembox}% \hspace{-0.3\boxwidth}\right]% } \newcommand{\authormodification}[2]{{\color{#1}#2}} \newcommand{\ys}[1]{\authormodification{blue}{#1}} \newcommand{\bs}[1]{\authormodification{red}{#1}} \newcommand{\gdr}[1]{\authormodification{magenta}{#1}} \begin{document} %\conferenceinfo{DSL 2011}{Bordeaux, France} %\copyrightyear{2011} %\copyrightdata{[to be supplied]} \titlebanner{Technical Report} % These are ignored unless \preprintfooter{Y.Solodkyy, G.Dos Reis, B.Stroustrup: An Elegant and Efficient Pattern Matching Library for C++} % 'preprint' option specified. \title{An Elegant and Efficient Pattern Matching Library for C++} %\subtitle{your \code{visit}, Jim, is not \code{accept}able anymore} \subtitle{\code{accepting} aint no \code{visit}ors} \authorinfo{Yuriy Solodkyy\and Gabriel Dos Reis\and Bjarne Stroustrup} {Texas A\&M University\\ Texas, USA} {\{yuriys,gdr,bs\}@cse.tamu.edu} \maketitle \begin{abstract} Pattern matching is an abstraction mechanism that greatly simplifies code. We present functional-programming-style pattern matching for C++ implemented as a library. The library provides a uniform notation for matching against open hierarchy of run-time polymorphic classes as well as closed set of classes (including classes tagged by user and discriminated unions) for which compile-time polymorphism can be used. The library integrates well with programming styles supported by C++, in particular it supports virtual and repeated multiple inheritance and can be used in generic code. Our library equals or outperforms the visitor design pattern, as commonly used for pattern-matching scenarios in C++, and for many use cases it equals or outperforms equivalent code in languages with built-in pattern matching. Our solution better addresses more problems than the visitor design pattern does: it is non-intrusive and does not have extensibility restrictions. It also avoids control inversion and can be used in pattern-matching scenarios that visitors are ill suited for. Code using patterns is significantly more concise and easier to comprehend than alternative solutions in C++. Implementing pattern matching as a library allows us to experiment with syntax, implementation algorithms, and use while preserving benefit from the performance and portability provided by industrial compilers and support tools. The solution approach can be reused in other object-oriented languages to implement \emph{type switching}, \emph{type testing}, \emph{pattern matching} and \emph{multiple dispatch} efficiently. 
The library was motivated by and is used for applications involving large, typed, abstract syntax trees.
\end{abstract}

\category{D}{3}{3}

\terms Languages, Design

\keywords Pattern Matching, Type Switching, Visitor Design Pattern, Expression Problem, Memoization, C++

\section{Introduction} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:intro}

Pattern matching is an abstraction supported by many programming languages. It allows the user to tersely describe a (possibly infinite) set of values accepted by the pattern. A \emph{pattern} represents a predicate on values, and is usually much more concise and readable than the equivalent predicate spelled out as imperative code. Pattern matching was popularized by the functional programming community, most notably by Hope\cite{BMS80}, ML\cite{ML90}, Miranda\cite{Miranda85} and Haskell\cite{Haskell98Book}, for providing syntax very close to mathematical notation. From there, it has found its way into many imperative programming languages, e.g.\ Pizza\cite{Odersky97pizzainto}, Scala\cite{Scala2nd}, Fortress\cite{RPS10}, as well as dialects of Java\cite{Liu03jmatch:iterable,HydroJ2003}, C++\cite{Prop96}, Eiffel\cite{Moreau:2003} and others.

It is relatively easy to provide a form of pattern matching when designing a new language, but to introduce it into a language in widespread use is a challenge. The obvious utility of the feature may be compromised by the need to fit into the language's syntax, semantics, and tool chains. A prototype implementation requires more effort than for an experimental language and is harder to get into use because mainstream users are unwilling to try non-portable, non-standard, and unoptimized features. To balance the utility and effort we decided to take the Semantically Enhanced Library Language (SELL) approach\cite{SELL}. We provide the general-purpose programming language with a library, extended with tool support. This will typically (as in this case) not provide 100\% of the functionality that a language extension would provide, but it allows experimentation and special-purpose use with existing compilers and tool chains. With pattern matching, we managed to avoid external tool support by relying on some pretty nasty macro hacking to provide a conventional and convenient interface to an efficient library implementation. Our current solution is a proof of concept that sets a minimum threshold for performance, brevity, clarity and usefulness of a language solution for pattern matching in C++. It provides full functionality, so we can experiment with the use of pattern matching in C++ and with language alternatives.

To give an idea of what our library offers, consider an example from a domain where pattern matching is considered to provide terseness and clarity -- compiler construction.
Consider for example a simple language of expressions: \begin{lstlisting} @$exp$ \is{} $val$ \Alt{} $exp+exp$ \Alt{} $exp-exp$ \Alt{} $exp*exp$ \Alt{} $exp/exp$@ \end{lstlisting} \noindent An OCaml data type describing this grammar as well as a simple evaluator of expressions in it, can be declared as following: \begin{lstlisting}[language=Caml,keepspaces,columns=flexible] type expr = Value of int | Plus of expr * expr | Minus of expr * expr | Times of expr * expr | Divide of expr * expr ;; let rec eval e = match e with Value v -> v | Plus (a, b) -> (eval a) + (eval b) | Minus (a, b) -> (eval a) - (eval b) | Times (a, b) -> (eval a) * (eval b) | Divide (a, b) -> (eval a) / (eval b) ;; \end{lstlisting} \noindent The corresponding C++ data types would most likely be parameterized, but for now we will just use simple classes: \begin{lstlisting}[keepspaces,columns=flexible] struct Expr { virtual @$\sim$@Expr() {} }; struct Value : Expr { int value; }; struct Plus : Expr { Expr* exp1; Expr* exp2; }; struct Minus : Expr { Expr* exp1; Expr* exp2; }; struct Times : Expr { Expr* exp1; Expr* exp2; }; struct Divide : Expr { Expr* exp1; Expr* exp2; }; \end{lstlisting} \noindent Using our library, we can express matching about as tersely as OCaml: \begin{lstlisting}[keepspaces,columns=flexible] int eval(const Expr* e) { Match(e) { Case(Value, n) return n; Case(Plus, a, b) return eval(a) + eval(b); Case(Minus, a, b) return eval(a) - eval(b); Case(Times, a, b) return eval(a) * eval(b); Case(Divide, a, b) return eval(a) / eval(b); } EndMatch } \end{lstlisting} \noindent To make the example fully functional we need to provide mappings of binding positions to corresponding class members: \begin{lstlisting}[keepspaces,columns=flexible] template <> struct bindings<Value> { CM(0,Value::value); }; template <> struct bindings<Plus> { CM(0,Plus::exp1); ... CM(1,Plus::exp2); }; template <> struct bindings<Divide> { CM(0,Divide::exp1); CM(1,Divide::exp2); }; \end{lstlisting} \noindent This binding code would be implicitly provided by the compiler had we chosen that implementation strategy. The syntax is provided without any external tool support. Instead we rely on a few C++0x features~\cite{C++0x}, template meta-programming, and macros. It runs about as fast as the OCaml version (\textsection\ref{sec:ocaml}), and, depending on the usage scenario, compiler and underlying hardware, comes close or outperforms the handcrafted C++ code based on the \emph{visitor design pattern} (\textsection\ref{sec:eval}). \subsection{Motivation} The ideas and the library presented here, were motivated by our rather unsatisfactory experiences working with various C++ front-ends and program analysis frameworks~\cite{Pivot09,Phoenix,Clang,Lise}. The problem was not in the frameworks per se, but in the fact that we had to use the \emph{visitor design pattern}~\cite{DesignPatterns1993} to inspect, traverse, and elaborate abstract syntax trees of their target languages. We found visitors unsuitable to express our application logic, surprisingly hard to teach students, and slow. We found dynamic casts in many places, often nested, because users wanted to answer simple structural questions without having to resort to visitors. Users preferred shorter, cleaner, and more direct code to visitors, even at a high cost in performance (assuming that the programmer knew the cost). The usage of \code{dynamic\_cast} resembled the use of pattern matching in functional languages to unpack algebraic data types. 
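For illustration only, the style of code we kept running into can be sketched on the \code{Expr} classes defined above (it is not code from any of the frameworks mentioned): an \code{eval} written as a cascade of \code{dynamic\_cast}s in place of visitors or pattern matching.
\begin{lstlisting}[keepspaces,columns=flexible]
// Sketch: answering structural questions with nested dynamic_cast
int eval(const Expr* e)
{
    if (const Value* v = dynamic_cast<const Value*>(e))
        return v->value;
    if (const Plus* p = dynamic_cast<const Plus*>(e))
        return eval(p->exp1) + eval(p->exp2);
    if (const Minus* m = dynamic_cast<const Minus*>(e))
        return eval(m->exp1) - eval(m->exp2);
    if (const Times* t = dynamic_cast<const Times*>(e))
        return eval(t->exp1) * eval(t->exp2);
    if (const Divide* d = dynamic_cast<const Divide*>(e))
        return eval(d->exp1) / eval(d->exp2);
    // no known alternative matched
    throw std::invalid_argument("unknown Expr variant");
}
\end{lstlisting}
\noindent Each alternative here costs a run-time cast, and the chain has to be maintained by hand as the hierarchy grows; it is exactly this kind of code that the \emph{Match} statement above expresses more directly.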
Thus, our initial goal was to develop a domain-specific library for C++ to express various predicates on tree-like structures as elegantly as is done in functional languages. This grew into a general high-performance pattern-matching library. The library is the latest in a series of 5 libraries. The earlier versions were superseded because they failed to meet our standards for notation, performance, or generality. Our standard is set by the principle that a fair comparison must be against the gold standard in a field. For example, if we work on a linear algebra library, we must compare to Fortran or one of the industrial C++ libraries, rather than Java or C. For pattern matching we chose optimized OCaml as our standard for closed (compile-time polymorphic) sets of classes and C++ for uses of the visitor pattern. For generality and simplicity of use, we deemed it essential to do both with a uniform syntax.

\subsection{Expression Problem}
\label{sec:exp}

Functional languages allow for the easy addition of new functions on existing data types, but fall short in extending data types themselves (e.g., with new constructors), which requires modifying the source code. Object-oriented languages, on the other hand, make data type extension trivial through inheritance, but the addition of new functions that work on these classes typically requires changes to the class definition. This dilemma was first discussed by Cook~\cite{Cook90} and then accentuated by Wadler~\cite{exprproblem} under the name \emph{expression problem}. Quoting Wadler: \emph{``The Expression Problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts)''}.

To better understand the problem, note that classes differ from algebraic data types in two important ways: they are \emph{extensible}, since new variants can be added by inheriting from the base class, as well as \emph{hierarchical} and thus \emph{non-disjoint}, since variants can be inherited from other variants and form a subtyping relation between themselves~\cite{Glew99}. This is not the case with traditional algebraic data types in functional languages, where the set of variants is \emph{closed}, while the variants are \emph{disjoint}. Some functional languages, e.g.
ML2000~\cite{ML2000} and Moby~\cite{Moby}, were experimenting with \emph{hierarchical extensible sum types}, which are closer to object-oriented classes than algebraic data types are, but, interestingly, they did not provide pattern matching facilities on them!

Zenger and Odersky later refined the expression problem in the context of independently extensible solutions~\cite{fool12} as a challenge to find an implementation technique that satisfies the following requirements:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item \emph{Extensibility in both dimensions}: It should be possible to add new data variants, while adapting the existing operations accordingly. It should also be possible to introduce new functions.
\item \emph{Strong static type safety}: It should be impossible to apply a function to a data variant which it cannot handle.
\item \emph{No modification or duplication}: Existing code should neither be modified nor duplicated.
\item \emph{Separate compilation}: Neither datatype extensions nor addition of new functions should require re-typechecking the original datatype or existing functions. No safety checks should be deferred until link or runtime.
\item \emph{Independent extensibility}: It should be possible to combine independently developed extensions so that they can be used jointly.
\end{itemize}

\noindent Object-oriented languages further complicate the matter with the fact that data variants are not necessarily disjoint and may form subtyping relationships between themselves. We thus introduced an additional requirement based on the Liskov substitution principle~\cite{Lis87}:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item \emph{Substitutability}: Operations expressed on more general data variants should be applicable to more specific ones that are in a subtyping relation with them.
\end{itemize}

\noindent We will refer to a solution that satisfies all of the above requirements as \emph{open}. Numerous solutions have been proposed for dealing with the expression problem in both functional and object-oriented camps, but notably very few are truly open, while none has made its way into one of the mainstream languages. We refer the reader to Zenger and Odersky's original manuscript for a discussion of the approaches~\cite{fool12}. Interestingly, most of the discussed object-oriented solutions focused on the visitor design pattern~\cite{DesignPatterns1993}, which even today seems to be the most commonly used approach to dealing with the expression problem in practice.

\subsection{Visitor Design Pattern}
\label{sec:vdp}
The \emph{visitor design pattern}~\cite{DesignPatterns1993} was devised to solve the problem of extending existing classes with new functions in object-oriented languages. Consider the above Expr example and imagine that in addition to evaluation we would also like to provide pretty-printing of expressions. A typical object-oriented approach would be to introduce a virtual function \\ \code{virtual void print() const = 0;} inside the abstract base class \code{Expr}, which will be implemented correspondingly in all the derived classes. This works well as long as we know all the required operations on the abstract class in advance. Unfortunately, this is very difficult to achieve in reality as the code evolves, especially in a production environment. To put this in context, imagine that after the above interface with pretty-printing functionality has been deployed, we decided that we need similar functionality that saves the expression in XML format. Adding a new virtual function implies modifying the base class and creating a versioning problem with the code that has been deployed already using the old interface.

To alleviate this problem, the Visitor Design Pattern separates the \emph{commonality} of all such future member-functions from their \emph{specifics}. The former deals with identifying the most-specific derived class of the receiver object known to the system at the time the base class was designed. The latter provides implementation of the required functionality once the most-specific derived class has been identified. The interaction between the two is encoded in a protocol that fixes a \emph{visitation interface} enumerating all known derived classes on one side and a dispatching mechanism that guarantees to select the most-specific case with respect to the dynamic type of the receiver in the visitation interface. An implementation of this protocol for our Expr example might look like the following:

\begin{lstlisting}
// Forward declaration of known derived classes
struct Value; struct Plus; ... struct Divide;
@\halfline@
// Visitation interface
struct ExprVisitor
{
    virtual void visit(const Value&) = 0;
    virtual void visit(const Plus&) = 0;
    ... // One virtual function per each known derived class
    virtual void visit(const Divide&) = 0;
};
@\halfline@
// Abstract base and known derived classes
struct Expr { virtual void accept(ExprVisitor&) const = 0; };
struct Value : Expr { ...
    void accept(ExprVisitor& v) const { v.visit(*this); }
};
struct Plus : Expr { ...
    void accept(ExprVisitor& v) const { v.visit(*this); }
};
\end{lstlisting}

\noindent Note that even though implementations of \code{accept} member-functions in all derived classes are syntactically identical, a different \code{visit} is called. We rely here on the overload resolution mechanism of C++ to pick the most specialized \code{visit} member-function applicable to the static type of \code{*this}.
A user can now implement new functions by overriding \code{ExprVisitor}'s functions. For example:

\begin{lstlisting}
std::string to_str(const Expr* e) // Converts expressions to string
{
    struct ToStrVisitor : ExprVisitor
    {
        void visit(const Value& e) { result = std::to_string(e.value); }
        ...
        void visit(const Divide& e)
        { result = to_str(e.exp1) + '/' + to_str(e.exp2); }
        std::string result;
    } v;
    e->accept(v);
    return v.result;
}
\end{lstlisting}

\noindent The function \code{eval} we presented above, as well as any new function that we would like to add to \code{Expr}, can now be implemented in much the same way, without the need to change the base interface. This flexibility does not come for free, though, and we would like to point out some pros and cons of this solution.

The most important advantage of the visitor design pattern is the {\bf possibility to add new operations} to the class hierarchy without the need to change the interface. Its second most-quoted advantage is {\bf speed} -- the overhead of two virtual function calls incurred by the double dispatch present in the visitor design pattern is often negligible on modern architectures. Yet another advantage that often remains unnoticed is that the above solution achieves extensibility of functions with {\bf library-only means} by using facilities already present in the language.

Nevertheless, there are quite a few disadvantages. The solution is {\bf intrusive}, since we had to inject syntactically the same definition of the \code{accept} method into every class participating in visitation. It is also {\bf specific to hierarchy}, as we had to declare a visitation interface specific to the base class. The amount of {\bf boilerplate code} required by the visitor design pattern cannot go unnoticed. It also increases with every argument that has to be passed into the visitor to be available during the visitation. This aspect can be seen in the example from \textsection\ref{sec:xmpl} where we have to store both functors inside the visitor. More importantly, visitors {\bf hinder extensibility} of the class hierarchy: new classes added to the hierarchy after the visitation interface has been fixed will be treated as their most derived base class present in the interface. A solution to this problem has been proposed in the form of \emph{Extensible Visitors with Default Cases}~\cite[\textsection 4.2]{Zenger:2001}; however, the solution, after remapping it onto C++, has problems of its own, discussed in detail in related work in \textsection\ref{sec:rw}.
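For concreteness, the function \code{eval} from the introduction can indeed be re-implemented over the same visitation interface in much the same way as \code{to\_str}; the following is only a sketch that mirrors the pattern already shown, not additional library code:

\begin{lstlisting}
int eval(const Expr* e) // Visitor-based counterpart of the eval shown earlier
{
    struct EvalVisitor : ExprVisitor
    {
        void visit(const Value& e)  { result = e.value; }
        void visit(const Plus& e)   { result = eval(e.exp1) + eval(e.exp2); }
        void visit(const Minus& e)  { result = eval(e.exp1) - eval(e.exp2); }
        void visit(const Times& e)  { result = eval(e.exp1) * eval(e.exp2); }
        void visit(const Divide& e) { result = eval(e.exp1) / eval(e.exp2); }
        int result; // the result has to be stored in the visitor, not returned
    } v;
    e->accept(v);
    return v.result;
}
\end{lstlisting}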
Once all the boilerplate related to visitors has been written and the visitation interface has been fixed, we are still left with some annoyances incurred by the pattern. One of them is the necessity to work with the control inversion that visitors put in place. Because of it we have to save any local state and any arguments that some of the \code{visit} callbacks might need from the calling environment. Similarly, we have to save the result of the visitation, as we cannot assume that all the visitors that will potentially be implemented on a given hierarchy will use the same result type. Using visitors in a generic algorithm requires even more precautions. We summarize these visitor-related issues in the following motivating example, followed by an illustration of a pattern-matching solution to the same problem enabled with our library.

\subsection{Motivating Example}
\label{sec:xmpl}

While comparing generic programming facilities available to functional and imperative languages (mainly Haskell and C++), Dos Reis and J\"arvi present the following example in Haskell describing a sum functor\cite{DRJ05}:

\begin{lstlisting}[language=Haskell]
data Either a b = Left a | Right b
@\halfline@
eitherLift :: (a -> c) -> (b -> d) -> Either a b -> Either c d
eitherLift f g (Left x)  = Left (f x)
eitherLift f g (Right y) = Right (g y)
\end{lstlisting}

\noindent In simple words, the function \codehaskell{eitherLift} above takes two functions and an object and, depending on the actual type constructor the object was created with, calls the first or second function on the embedded value, encoding the result correspondingly. Its equivalent in C++ is not straightforward. Idiomatic, type-safe handling of discriminated unions in C++ typically assumes use of the \emph{Visitor Design Pattern}\cite{DesignPatterns1993}.
\begin{lstlisting} template <class X, class Y> class Either; template <class X, class Y> class Left; template <class X, class Y> class Right; @\halfline@ template <class X, class Y> struct EitherVisitor { virtual void visit(const Left<X,Y>&) = 0; virtual void visit(const Right<X,Y>&) = 0; }; @\halfline@ template <class X, class Y> struct Either { virtual @$\sim$@Either() {} virtual void accept(EitherVisitor<X,Y>& v) const = 0; }; @\halfline@ template <class X, class Y> struct Left : Either<X,Y> { const X& x; Left(const X& x) : x(x) {} void accept(EitherVisitor<X,Y>& v) const { v.visit(*this); } }; @\halfline@ template <class X, class Y> struct Right : Either<X,Y> { const Y& y; Right(const Y& y) : y(y) {} void accept(EitherVisitor<X,Y>& v) const { v.visit(*this); } }; \end{lstlisting} \noindent The code above defines the necessary parameterized data structures as well as a correspondingly parameterized visitor class capable of introspecting it at run-time. The authors agree with us \emph{``The code has a fair amount of boilerplate to simulate pattern matching...''}\cite{DRJ05} The actual implementation of \codehaskell{lift} in C++ now amounts to declaring and invoking a visitor: \begin{lstlisting} template <class X, class Y, class S, class T> const Either<S,T>& eitherLift(const Either<X,Y>& e, S f(X), T g(Y)) { typedef S (*F)(X); typedef T (*G)(Y); struct Impl : EitherVisitor<X,Y> { F f; G g; const Either<S,T>* value; Impl(F f, G g) : f(f), g(g), value() {} void visit(const Left<X,Y>& e) { value = left<S,T>(f(e.x)); } void visit(const Right<X,Y>& e) { value = right<S,T>(g(e.y)); } }; Impl vis(f, g); e.accept(vis); return *vis.value; } \end{lstlisting} \noindent The same function expressed with our pattern-matching facility seems to be much closer to the original Haskell definition: \begin{lstlisting}[keepspaces,columns=flexible] template <class X, class Y, class S, class T> const Either<S,T>* lift(const Either<X,Y>& e, S f(X), T g(Y)) { Match(e) Case(( Left<X,Y>), x) return left<S,T>(f(x)); Case((Right<X,Y>), y) return right<S,T>(g(y)); EndMatch } \end{lstlisting} \noindent This is also as fast as the visitor solution, but unlike the visitors based approach it neither requires \code{EitherVisitor}, nor any of the injected \code{accept} member-functions. We do require binding definitions though to be able to bind variables \code{x} and \code{y}: %@\footnote{We need to take the first argument in parentheses to avoid interpretation of comma in template argument list by the preprocessor}@ %\footnote{Definitions of obvious functions \code{left} and \code{right} have %been ommitted in both cases.} \begin{lstlisting}[keepspaces,columns=flexible] template <class X, class Y> struct bindings<Left<X,Y>> { CM(0, Left<X,Y>::x); }; template <class X, class Y> struct bindings<Right<X,Y>> { CM(0,Right<X,Y>::y); }; \end{lstlisting} \noindent Note that these binding definitions are made once for all possible instantiations with the use of partial template specialization in C++ and would not be needed if we implemented pattern matching in a compiler rather than a library. \subsection{Summary} The contributions of the paper are twofold and can be summarized as following: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item We present techniques based on memoization (\textsection\ref{sec:copc}) and class precedence list (\textsection\ref{sec:cotc}) that can be used to implement type switching efficiently based on the run-time type of the argument. 
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item The techniques come close to and often outperform their de facto contender -- the visitor design pattern -- without sacrificing extensibility (\textsection\ref{sec:eval}).
\item They work in the presence of multiple inheritance, including repeated and virtual inheritance, as well as in generic code (\textsection\ref{sec:vtblmem}).
\item The solution is open by construction (\textsection\ref{sec:poets}), non-intrusive, and avoids the control inversion typical for visitors.
\item It applies to polymorphic (\textsection\ref{sec:vtp}-\ref{sec:vtblmem}) and tagged (\textsection\ref{sec:cotc}) class hierarchies through a unified syntax~\cite{AP}.
\item Our memoization device (\textsection\ref{sec:memdev}) generalizes to other languages and can be used to implement type switching (\textsection\ref{sec:vtblmem}), type testing (\textsection\ref{sec:poets},\cite[\textsection 4.7]{TR}), predicate dispatch (\textsection\ref{sec:memdev}), and multiple dispatch (\textsection\ref{sec:cc}) efficiently.
\item We list conditions under which virtual table pointers, commonly used in C++ implementations, uniquely identify the exact subobject within the most derived type (\textsection\ref{sec:vtp}).
\item We also build an efficient cache indexing function for virtual table pointers that minimizes the amount of conflicts (\textsection\ref{sec:sovtp},\ref{sec:moc},\cite[\textsection 4.3.5]{TR}).
\end{itemize}
\item We present functional-style pattern matching for C++ built as a library employing the above technique. Our solution:
\begin{itemize}
\item Is open, non-intrusive and avoids the control inversion typical for visitors.
\item Can be applied retroactively to any polymorphic or tagged class hierarchy.
\item Provides a unified syntax for various encodings of extensible hierarchical datatypes in C++.
\item Generalizes the controversial n+k patterns by leaving semantic choices to the user.
\item Supports a limited form of views.
\item Is simpler to use than conventional object-oriented or union-based alternatives.
\item Improves performance compared to alternatives in real applications.
\end{itemize}
\end{itemize}

\noindent Our technique can be used in a compiler and/or library setting to implement facilities that depend on the dynamic type or run-time properties of objects: e.g.\ type switching, type testing, pattern matching, predicate dispatch, multi-methods etc. We also look at different approaches to encoding algebraic data types in C++ and present a unified pattern-matching syntax that works uniformly with all of them. We generalize Haskell's n+k patterns\cite{haskell98} to any invertible operations. Semantic issues that typically accompany n+k patterns are handled transparently by forwarding the problem into the concepts domain, thanks to the fact that we work in a library setting. We also provide support for views in a form that resembles extractors in Scala. A practical benefit of our solution is that it can be used right away with any compiler with decent support of C++0x, without requiring the installation of any additional tools or preprocessors. The solution is a proof of concept that sets a minimum threshold for the performance, brevity, clarity and usefulness of a language solution for open type switching in C++.

The rest of this paper is structured as follows.
In Section~\ref{sec:bg}, we present evolution of pattern matching in different languages, presenting informally through example commonly used terminology and semantics of various pattern-matching constructs. Section~\ref{sec:adt} presents various approaches that are taken in C++ to encoding algebraic data types. Sections~\ref{sec:syn} and~\ref{sec:sem} describe the syntax and semantics of our pattern matching facilities. Sections~\ref{sec:slv} and~\ref{sec:view} discuss approach taken by our library in handling generalized n+k patterns and views. Section~\ref{sec:impl} discusses techniques that makes type switching, used as a back-bone of the match statement, efficient, while section~\ref{sec:eval} provides its performance evaluation against some common alternatives. Section~\ref{sec:rw} discusses related work, and section~\ref{sec:cc} concludes by discussing some future directions and possible improvements. \section{Background} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{sec:bg} Pattern matching in the context of a programming language was first introduced in a string manipulation language SNOBOL\cite{SNOBOL64}. Its fourth reincarnation SNOBOL4 had patterns as first-class data types providing operations of concatenation and alternation on them\cite{SNOBOL71}. The first reference to a pattern-matching construct that resembles the one found in statically typed functional languages today is usually attributed to Burstall and his work on structural induction\cite{Burstall69provingproperties}. In the context of object-oriented programming, pattern matching has been first explored in Pizza programming language\cite{Odersky97pizzainto}. These efforts have been continued in Scala\cite{Scala2nd} and together with notable work of Burak Emir on \emph{Object-Oriented Pattern Matching}\cite{EmirThesis} have resulted in incorporation of pattern matching into the language. %The first tree based pattern matching methods were found in Fred McBride's %extension of LISP in 1970. %ML and Haskell further popularized pattern matching ... Pattern matching has been closely related to \emph{algebraic data types} and \emph{equational reasoning} since the early days of functional programming. In languages like ML and Haskel an \emph{Algebraic Data Type} is a data type each of whose values is picked from a disjoint sum of (possibly recursive) data types, called \emph{variants}. Each of the variants is marked with a unique symbolic constant called \emph{constructor}, while the set of all constructors of a given type is called \emph{signature}. Constructors provide a convenient way of creating a value of its variant type as well as a way of discriminating its variant type from the algebraic data type through pattern matching. Algebraic data type \codeocaml{expr} from Section~\ref{sec:intro} consists of 5 variants, marked with constructors \codeocaml{Value}, \codeocaml{Plus}, \codeocaml{Minus}, \codeocaml{Times} and \codeocaml{Divide} respectively. Constructor \codeocaml{Value} expects a value of type \codeocaml{int} during construction, as well as any pattern that admits values of type \codeocaml{int} during decomposition through pattern matching. Similarly, the other four constructors expect a value of a cartesian product of two \codeocaml{expr} types during construction, as well as any pattern that would admit a value of such type during decomposition. 
Algebraic data types can be parameterized and recursive, as demonstrated by the following Haskell code that defines a binary tree parameterized on type \codehaskell{k} of keys and type \codehaskell{d} of data stored in the nodes: \begin{lstlisting}[language=Haskell] data Tree k d = Node k d (Tree k d) (Tree k d) | Leaf \end{lstlisting} \noindent Naturally, they can be decomposed in a generic algorithm like the function \code{find} below, defined through case analysis on the tree's structure: \begin{lstlisting}[language=Haskell] find :: (Ord k) => k -> Tree k d -> Maybe d find i Leaf = Nothing find i (Node key item left right) = if i == key then Just item else if i [<] key then find i left else find i right \end{lstlisting} \noindent The set of values described by such an algebraic data type is defined inductively as the least set closed under constructor functions of its variants. Algebraic data types draw their name from the practice of using case distinction in mathematical function definitions and proofs that involve \emph{algebraic terms}. One of the main differences of algebraic data types from classes in object-oriented languages is that an algebraic data type definition is \emph{closed} because it fixes the structure of its instances once and for all. Once we have listed all the variants a given algebraic data type may have we cannot extend it with new variants without modifying its definition. This is not the case in object-oriented languages, where classes are \emph{open} to extension through subclassing. Notable exceptions to this restriction in functional community are \emph{polymorphic variants} in OCaml\cite{garrigue-98} and \emph{open data types} in Haskell\cite{LohHinze2006}, which allow addition of new variants later. These extensions, however, are simpler than object-oriented extensions as neither polymorphic variants nor open data types form subtyping relation between themselves: open data types do not introduce any subtyping relation, while the subtyping relation on polymorphic variants is a \emph{semantic subtyping} similar to that of XDuce\cite{HosoyaPierce2000}, which is based on the subset relation between values of the type. In either case they maintain the important property that each value of the underlying algebraic data type belongs to exactly one disjoint subset tagged with a constructor. The \emph{nominative subtyping} of object-oriented languages does not usually have this disjointness making classes effectively have multiple types. In particular, the case of disjoint constructors can be seen as a degenerated case of a flat class hierarchy among the multitude of possible class hierarchies. Closedness of algebraic data types is particularly useful for reasoning about programs by case analysis and allows the compiler to perform an automatic \emph{incompleteness} check -- test of whether a given \emph{match statement} covers all possible cases. Similar reasoning about programs involving extensible data types is more involved as we are dealing with potentially open set of variants. \emph{Completeness} check in such scenario reduces to checking presence of a case that handles the static type of the subject. Absence of such a case, however, does not necessarily imply incompleteness, only potential incompleteness, as the answer will depend on the actual set of variants available at run-time. A related notion of \emph{redundancy} checking arises from the tradition of using \emph{first-fit} strategy in pattern matching. 
It warns the user of any \emph{case clause} inside a match statement that will never be entered because of a preceding one being more general. Object-oriented languages, especially C++, typically prefer \emph{best-fit} strategy (e.g. for overload resolution and class template specialization) because it is not prone to errors where semantics of a statement might change depending on the ordering of preceding definitions. The notable exception in C++ semantics that prefers the \emph{first-fit} strategy is ordering of \code{catch} handlers of a \code{try}-block. Similarly to functional languages the C++ compiler will perform \emph{redundancy} checking on catch handlers and issue a warning that lists the redundant cases. We use this property of the C++ type system to perform redundancy checking of our match statements in \textsection\ref{sec:redun}. The patterns that work with algebraic data types we have seen so far are generally called \emph{tree patterns} or \emph{constructor patterns}. Their analog in object-oriented languages is often referred to as \emph{type pattern} since it may involve type testing and type casting. Special cases of these patterns are \emph{list patterns} and \emph{tuple patterns}. The former lets one split a list into a sequence of elements in its beginning and a tail with the help of list constructor \codehaskell{:} and an empty list constructor \codehaskell{[]} e.g. \codehaskell{[x:y:rest]}. The latter does the same with tuples using tuple constructor \codehaskell{(,,...,)} e.g. \codehaskell{([x:xs],'b',(1,2.0),"hi",True)}. Pattern matching is not used solely with algebraic data types and can equally well be applied to built-in types. The following Haskell code defines factorial function in the form of equations: \begin{lstlisting}[language=Haskell] factorial 0 = 1 factorial n = n * factorial (n-1) \end{lstlisting} \noindent Here 0 in the left hand side of the first \emph{equation} is an example of a \emph{value pattern} (also known as \emph{constant pattern}) that will only match when the actual argument passed to the function factorial is 0. The \emph{variable pattern} \codehaskell{n} (also referred to as \emph{identifier pattern}) in the left hand side of the second equation will match any value, \emph{binding} variable \codehaskell{n} to that value in the right hand side of equation. Similarly to variable pattern, the \emph{wildcard pattern} \codehaskell{_} will match any value, neither binding it to a variable nor even obtaining it. Value patterns, variable patterns and wildcard patterns are generally called \emph{primitive patterns}. Patterns like variable and wildcard patterns that never fail to match are called \emph{irrefutable}, in contrast to \emph{refutable} patterns like value patterns, which may fail to match. In Haskell 98\cite{Haskell98Book} the above definition of factorial could also be written as: \begin{lstlisting}[language=Haskell] factorial 0 = 1 factorial (n+1) = (n+1) * factorial n \end{lstlisting} \noindent The \codehaskell{(n+1)} pattern in the left hand side of equation is an example of \emph{n+k pattern}. According to its informal semantics ``Matching an $n+k$ pattern (where $n$ is a variable and $k$ is a positive integer literal) against a value $v$ succeeds if $v \ge k$, resulting in the binding of $n$ to $v-k$, and fails otherwise''\cite{haskell98}. 
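\noindent Looking ahead, the same equational style can be rendered with the facilities our library provides later in \textsection\ref{sec:syn}. The following is only a hedged sketch modeled on the \code{fib} example of that section; \code{variable<int>}, \code{Match} and \code{Qua} are introduced there, and the body deliberately uses the plain subject \code{n} to avoid any arithmetic on expression-template variables:

\begin{lstlisting}[keepspaces,columns=flexible]
int factorial(int n)
{
    variable<int> m;          // expression-template variable (see @\textsection@\ref{sec:syn})
    Match(n)
      Qua(int, 0)   return 1;              // value pattern: matches only 0
      Qua(int, m+1) return n*factorial(m); // generalized n+k pattern: binds m to n-1
    EndMatch
}
\end{lstlisting}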
n+k patterns were introduced into Haskell to let users express inductive functions on natural numbers in much the same way as functions defined through case analysis on algebraic data types. Besides the succinct notation, such a language feature could facilitate automatic termination proofs for such functions by the compiler. Peano numbers, used as an analogy to the algebraic data type representation of natural numbers, are not always the best abstraction for representing other mathematical operations, however. This, together with the numerous ways of defining the semantics of generalized n+k patterns, was among the reasons why the feature was never generalized in Haskell to other kinds of expressions, even though there were plenty of known applications. Moreover, numerous debates over the semantics and usefulness of the feature resulted in n+k patterns being removed from the language altogether in the Haskell 2010 standard\cite{haskell2010}.

A generalization of n+k patterns, called \emph{application patterns}, has been studied by Nikolaas N. Oosterhof in his Master's thesis\cite{OosterhofThesis}. Application patterns essentially treat n+k patterns as equations, while matching against them attempts to solve or validate the equation.

While n+k patterns were something very few languages had, another common feature of many programming languages with pattern matching is guards. A \emph{guard} is a predicate attached to a pattern that may make use of the variables bound in it. The result of its evaluation determines whether the case clause and the body associated with it will be \emph{accepted} or \emph{rejected}. The following OCaml code for the $exp$ language from Section~\ref{sec:intro} defines the rules for factorizing expressions $e_1e_2+e_1e_3$ into $e_1(e_2+e_3)$ and $e_1e_2+e_3e_2$ into $(e_1+e_3)e_2$ with the help of guards spelled out after the keyword \codeocaml{when}:

\begin{lstlisting}[language=Caml,keepspaces,columns=flexible]
let factorize e =
  match e with
    Plus(Times(e1,e2), Times(e3,e4)) when e1 = e3 -> Times(e1, Plus(e2,e4))
  | Plus(Times(e1,e2), Times(e3,e4)) when e2 = e4 -> Times(Plus(e1,e3), e4)
  | e -> e
;;
\end{lstlisting}

\noindent One may wonder why we could not simply write the above case clause as \codeocaml{Plus(Times(e,e2), Times(e,e4))} to avoid the guard. Patterns that permit the use of the same variable multiple times are called \emph{equivalence patterns}, while the requirement of the absence of such patterns in a language is called \emph{linearity}. Neither OCaml nor Haskell supports such patterns, while Miranda\cite{Miranda85} as well as Tom's pattern-matching extension to C, Java and Eiffel\cite{Moreau:2003} support \emph{non-linear patterns}.

The example above illustrates yet another common pattern-matching facility -- \emph{nesting of patterns}. In general, a constructor pattern composed of a linear vector of (distinct) variables is called a \emph{simple pattern}. A constructor pattern whose arguments are not all variables is called a \emph{nested pattern}. Using nested patterns, a single expression in the case clause lets us test that the top-level expression is tagged with the \codeocaml{Plus} constructor and that both of its arguments are tagged with the \codeocaml{Times} constructor, binding their respective arguments (or potentially pattern matching further). Note that the visitor design pattern does not provide this level of flexibility and each of the nested tests might have required a new visitor to be written.
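\noindent To appreciate what such a nested test costs in plain C++, the following hedged sketch hand-codes the first rewrite rule above over a polymorphic \code{Expr} hierarchy. The member names \code{e1} and \code{e2} for the operands of \code{Plus} and \code{Times} are our assumption, and the guard is approximated with pointer identity:

\begin{lstlisting}[keepspaces,columns=flexible]
const Expr* factorize(const Expr* e)
{
    if (const Plus* p = dynamic_cast<const Plus*>(e))               // Plus(...)
        if (const Times* a = dynamic_cast<const Times*>(p->e1))     //   Times(e1,e2)
            if (const Times* b = dynamic_cast<const Times*>(p->e2)) //   Times(e3,e4)
                if (a->e1 == b->e1)                                  // when e1 = e3
                    return new Times(a->e1, new Plus(a->e2, b->e2));
    return e;
}
\end{lstlisting}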
Nesting of patterns like the one above is typically where users resort to \emph{type tests} and \emph{type casts}, which in the case of C++ can be combined into a single call to \code{dynamic_cast}. Related to nested patterns are \emph{as-patterns} that help one take a value apart while still maintaining its integrity. The following rule could have been a part of a hypothetical rewriting system in OCaml similar to the one above. Its intention is to rewrite expressions of the form $\frac{e_1/e_2}{e_3/e_4}$ into $\frac{e_1}{e_2}\frac{e_4}{e_3} \wedge e_2\neq0 \wedge e_3\neq0 \wedge e_4\neq0$.

\begin{lstlisting}[language=Caml]
| Divide(Divide(_,e2) as x, Divide(e3,e4)) -> Times(x, Divide(e4, e3))
\end{lstlisting}

\noindent We introduced the name ``x'' as a synonym for the result of matching the entire sub-expression \codeocaml{Divide(_,e2)} in order to refer to it in the right-hand side of the case clause without recomposing it. We omitted the conjunction of the relevant non-zero checks for brevity; one can see, however, that it would need access to \codeocaml{e2}.

Decomposing algebraic data types through pattern matching has an important drawback that was originally spotted by Wadler\cite{Wadler87}: it exposes the concrete representation of an abstract data type, which conflicts with the principle of \emph{data abstraction}. To overcome the problem he proposed the notion of \emph{views}, which represent conversions between different representations that are implicitly applied during pattern matching. As an example, imagine polar and cartesian representations of complex numbers. A user might choose the polar representation as the concrete representation for the abstract data type \codeocaml{complex}, treating the cartesian representation as a view, or vice versa:\footnote{We use the syntax from Wadler's original paper for this example}

\begin{lstlisting}[language=Haskell,columns=flexible]
complex ::= Pole real real
view complex ::= Cart real real
   in (Pole r t) = Cart (r * cos t) (r * sin t)
  out (Cart x y) = Pole (sqrt(x^2 + y^2)) (atan2 x y)
\end{lstlisting}

\noindent The operations can then be implemented in whichever representation is most suitable, while the compiler implicitly converts between representations when needed:

\begin{lstlisting}[language=Haskell,columns=flexible]
add  (Cart x1 y1) (Cart x2 y2) = Cart (x1 + x2) (y1 + y2)
mult (Pole r1 t1) (Pole r2 t2) = Pole (r1 * r2) (t1 + t2)
\end{lstlisting}

\noindent The idea of views was later adopted in various forms in several languages: Haskell\cite{views96}, Standard ML\cite{views98}, Scala (in the form of \emph{extractors}\cite{EmirThesis}) and F$\sharp$ (under the name of \emph{active patterns}\cite{Syme07}).

%Views in functional programming languages [92, 71] are conversions from one data type to
%another that are implicitly applied in pattern matching. They play a role similar to extractors
%in Scala, in that they permit to abstract from the concrete data-type of the matched objects.
%However, unlike extractors, views are anonymous and are tied to a particular target data
%type.

Logic programming languages like Prolog take pattern matching to an even greater level. The main difference between pattern matching in logic languages and functional languages is that functional pattern matching is a ``one-way'' matching where patterns are matched against values, possibly binding some variables in the pattern along the way.
Pattern matching in logic programming is ``two-way'' matching based on \emph{unification} where patterns can be matched against other patterns, possibly binding some variables in both patterns and potentially leaving some variables \emph{unbound} or partially bound -- i.e. bound to patterns. A hypothetical example of such functionality can be matching a pattern \codeocaml{Plus(x,Times(x,1))} against another pattern \codeocaml{Plus(Divide(y,2),z)}, which will result in binding \codeocaml{x} to a \codeocaml{Divide(y,2)} and \codeocaml{z} to \codeocaml{Times(Divide(y,2),1)} with \codeocaml{y} left unbound, leaving both \codeocaml{x} and \codeocaml{z} effectively a pattern. \subsection{Expression Templates} Interestingly enough C++ has a pure functional sublanguage in it that has a striking similarity to ML and Haskell. The sublanguage in question is template facilities of C++ that has been shown to be Turing complete\cite{veldhuizen:templates_turing_complete}. Haskell definition of \code{factorial} we saw earlier can be rewritten in template sublanguage of C++ as following: \begin{lstlisting} template <int N> struct factorial { enum { result = N*factorial<N-1>::result }; }; template <> struct factorial<0> { enum { result = 1 }; }; \end{lstlisting} \noindent One can easily see similarity with equational definitions in Haskell, with the exception that more specific cases (specialization for 0) have to follow the general definition in C++. The main difference between Haskell definition and its C++ counterpart is that the former describes computations on \emph{run-time values}, while the latter can only work with \emph{compile-time values}. Turns out we can even express our $exp$ language using this functional sublanguage: \begin{lstlisting} template <class T> struct value { value(const T& t) : m_value(t) {} T m_value; }; template <class T> struct variable { variable() : m_value() {} T m_value; }; template <typename E1, typename E2> struct plus { plus(const E1& e1, const E2& e2) : m_e1(e1), m_e2(e2) {} const E1 m_e1; const E2 m_e2; }; // ... definitions of other expressions \end{lstlisting} \noindent The idea is that expressions can be composed out of subexpressions, whose shape (type) is passed as arguments to above templates. Explicit description of such expressions is very tedious however and is thus never expressed directly, but as a result of corresponding operations: \begin{lstlisting}[keepspaces,columns=flexible] template <typename T> value<T> val(const T& t) { return value<T>(t); } template <typename E1, typename E2> plus<E1,E2> operator+(const E1& e1, const E2& e2) { return plus<E1,E2>(e1,e2); } \end{lstlisting} \noindent With this, one can now capture various expressions as following: \begin{lstlisting} variable<int> v; auto x = v + val(3); \end{lstlisting} \noindent The type of variable \code{x} -- \code{plus<variable<int>,value<int>>} -- captures the structure of the expression, while the values inside of it represent various subexpressions the expression was created with. Such an expression can be arbitrarily, but finitely nested. Note that value 3 is not added to the value of variable \code{v} here, but the expression \code{v+3} is recorded, while the meaning to such expression can be given differently in different contexts. A general observation is that only the shape of the expression becomes fixed at compile time, while the values of variables involved in it can be changed arbitrarily at run time, allowing for \emph{lazy evaluation} of the expression. 
Polymorphic function \code{eval} below implements just that: \begin{lstlisting}[keepspaces,columns=flexible] template <typename T> T eval(const value<T>& e) { return e.m_value; } template <typename T> T eval(const variable<T>& e) { return e.m_value; } template <typename E1, typename E2> auto eval(const plus<E1,E2>& e) -> decltype(eval(e.m_e1) + eval(e.m_e2)) { return eval(e.m_e1) + eval(e.m_e2); } \end{lstlisting} \noindent One can now modify value of the variable \code{v} and re-evaluate expression as following: \begin{lstlisting} v = 7; // assumes overloading of assignment int r = eval(x); // returns 10 \end{lstlisting} \noindent The above technique for lazy evaluation of expressions was independently invented by Todd Veldhuizen and David Vandevoorde and is generally known in the C++ community by the name \emph{Expression Templates} that Todd coined\cite{Veldhuizen95expressiontemplates, vandevoorde2003c++}. Note again how implementation of \code{eval} resembles equations in Haskell that decompose an algebraic data type. The similarities are so striking that there were attempts to use Haskell as a pseudo code language for template metaprogramming in C++\cite{Milewski11}. A key observation in this analogy is that partial and explicit template specialization of C++ class templates are similar to defining equations for Haskell functions. Variables introduced via template clause of each equation serve as \emph{variable patterns}, while the names of actual templates describing arguments serve as \emph{variant constructors}. An important difference between the two is that Haskell's equations use \emph{first-fit} strategy making order of equations important, while C++ uses \emph{best-fit} strategy, thus making the order irrelevant. Patterns expressed this way can be arbitrarily nested as long as they can be expressed in terms of the types involved and not the values they store. Using the above example, for instance, it is very easy to specialize \code{eval} for an expression of form $c_1*x+c_2$ where $c_i$ are some (not known) constant values and $x$ is any variable. Specializing for a concrete instance of that expression $2*x+3$ will be much harder, because in the representation we chose values 2 and 3 become run-time values and thus cannot participate in compile-time computations anymore. In this case we could have devised a template that allocates a dedicated type for each constant making such value part of the type: \begin{lstlisting}[keepspaces,columns=flexible] template <class T, T t> struct constant {}; template <typename T, T t> T eval(const constant<T,t>& e) { return t; } template <typename E> auto eval(const times<constant<int,0>,E>& e) -> decltype(eval(e.m_e2)) { return (decltype(eval(e.m_e2)))(0); } template <typename E> auto eval(const times<E,constant<int,0>>& e) -> decltype(eval(e.m_e1)) { return (decltype(eval(e.m_e1)))(0); } \end{lstlisting} \noindent Here the first equation for \code{eval} describes the necessary general case for handling expressions of type \code{constant<T,t>}, while the other two are redundant cases that can be seen as an optimization detecting expressions of the form $e*0$ and $0*e$ for any arbitrary expression $e$ and returning 0 without actually computing $e$. Unfortunately, a similar pattern to detect expressions of the form $x-x$ for any variable $x$ cannot be expressed because expression templates are blind to object identity and can only see their types. 
This means that expression templates of the form $x-y$ are indistinguishable at compile time from expressions of the form $x-x$ because their types are identical. Nevertheless, despite all their limitations, expression templates provide an extremely powerful abstraction mechanism, which we use to express the pattern language of our SELL. Coincidentally, we employ the compile-time pattern-matching facility already supported by C++ as a meta-language to implement its run-time counterpart.

%\section{Pattern Matching for C++} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\label{sec:pm}

\section{Algebraic Data Types in C++}
\label{sec:adt}

C++ does not have direct support for algebraic data types, but they can usually be emulated in a number of ways. A pattern-matching solution that strives to be general will have to account for different encodings and be applicable to all of them. Consider an ML data type of the form:

\begin{lstlisting}[language=ML,keepspaces,columns=flexible,escapechar=@]
datatype DT = @$C_1$@ of {@$L_{11}:T_{11},...,L_{1m}:T_{1m}$@}
            | ...
            | @$C_k$@ of {@$L_{k1}:T_{k1},...,L_{kn}:T_{kn}$@}
\end{lstlisting}

\noindent There are at least three different ways to represent it in C++. Following Emir, we will refer to them as \emph{encodings}~\cite{EmirThesis}:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Polymorphic Base Class (or \emph{polymorphic encoding} for short)
\item Tagged Class (or \emph{tagged encoding} for short)
\item Discriminated Union (or \emph{union encoding} for short)
\end{itemize}

\noindent In the polymorphic and tagged encodings, the base class \code{DT} represents the algebraic data type, while derived classes represent its variants. The only difference between the two is that in the polymorphic encoding the base class has virtual functions, while in the tagged encoding it has a dedicated member of integral type whose value uniquely identifies the variant -- the derived class. The first two encodings are inherently \emph{open} because the classes can be arbitrarily extended through subclassing. The last encoding is inherently \emph{closed} because we cannot add more members to the union without modifying its definition.

%In order to be able to provide a common syntax for these representations, we
%need to understand better similarities and differences between them. Before we
%look into them let's fix some terminology.

When we deal with pattern matching, the static type of the original expression we are matching is not necessarily the same as the type of the expression we match it against. We call the original expression the \emph{subject} and its static type the \emph{subject type}. We call the type we are trying to match the subject against a \emph{target type}. In the simplest case, detecting whether the subject is an instance of the target type (or of a type derived from it) is all we want to know. We refer to such a use-case as \emph{type testing}. In the next simplest case, besides testing, we might want to obtain a pointer or a reference to the subject as the target type, since casting it to such a type may involve a non-trivial computation that only a compiler can safely generate. We refer to such a use-case as \emph{type identification}. Type identification of a given subject against multiple target types is typically referred to as \emph{type switching}.
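\noindent A minimal illustration of these three use-cases in plain C++ might look as follows; the \code{Shape} hierarchy and the area computation are hypothetical and serve only to fix the terminology:

\begin{lstlisting}[keepspaces,columns=flexible]
struct Shape  { virtual @$\sim$@Shape() {} };
struct Circle : Shape { double radius; };
struct Square : Shape { double side;   };

double area(const Shape& subject)      // static type of the subject: Shape
{
    // mere type testing would be: dynamic_cast<const Circle*>(&subject) != nullptr
    // type identification: test and obtain a pointer to the target type
    if (const Circle* c = dynamic_cast<const Circle*>(&subject))
        return 3.141592653589793*c->radius*c->radius;
    // repeating the above for several target types amounts to type switching
    if (const Square* s = dynamic_cast<const Square*>(&subject))
        return s->side*s->side;
    return 0.0;                        // the subject was neither
}
\end{lstlisting}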
Once we have uncovered the target type, we may want to be able to decompose it \emph{structurally} (when the target type is a \emph{structured} data type like an array, tuple or class) or \emph{algebraically} (when the target type is a scalar data type like \code{int} or \code{double}). Structural decomposition in our library can be performed with the help of \emph{tree patterns}, while algebraic decomposition can be done with the help of \emph{generalized n+k patterns}.

\subsection{Polymorphic Base Class}
\label{sec:pbc}

In this encoding the user declares a polymorphic base class \code{DT} that will be extended by classes representing all the variants. The base class might declare several virtual functions that will be overridden by the derived classes, for example the \code{accept} member-function used in the visitor design pattern.

\begin{lstlisting}[keepspaces,columns=flexible]
class DT { virtual @$\sim$@DT() {} };
class @$C_1$@ : public DT {@$T_{11} L_{11}; ... T_{1m} L_{1m};$@}
...
class @$C_k$@ : public DT {@$T_{k1} L_{k1}; ... T_{kn} L_{kn};$@}
\end{lstlisting}

To uncover the actual variant of such an algebraic data type, the user might use \code{dynamic_cast} to query one of the $k$ expected run-time types (an approach used by Rose\cite{SQ03}) or she might employ a visitor design pattern devised for this algebraic data type (an approach used by Pivot\cite{Pivot09} and Phoenix\cite{Phoenix}). The most attractive feature of this approach is that it is truly open, as we can extend the classes at will (leaving the orthogonal issues of visitors aside).

\subsection{Tagged Class}
\label{sec:tc}

This encoding is similar to the \emph{Polymorphic Base Class} in that we use derived classes to encode the variants. The main difference is that the user designates a member in the base class whose value uniquely determines the most derived class a given object is an instance of. The constructors of each variant $C_i$ are responsible for properly initializing the dedicated member with the unique value $c_i$ associated with that variant. Clang\cite{Clang}, among others, uses this approach.

\begin{lstlisting}[keepspaces,columns=flexible]
class DT { enum kinds {@$c_1, ..., c_k$@} m_kind; };
class @$C_1$@ : public DT {@$T_{11} L_{11}; ... T_{1m} L_{1m};$@}
...
class @$C_k$@ : public DT {@$T_{k1} L_{k1}; ... T_{kn} L_{kn};$@}
\end{lstlisting}

In such a scenario the user might use a simple switch statement to uncover the variant, combined with a \code{static_cast} to properly cast the pointer or reference to the object. People might prefer this encoding to the one above for performance reasons, as it makes it possible to avoid virtual dispatch altogether. Note, however, that once we allow for extensions and do not limit ourselves to encoding algebraic data types only, it also has a significant drawback compared to the previous approach: we can easily check that a given object belongs to the most derived class, but we cannot say much about whether it belongs to one of that class's base classes. A visitor design pattern can be implemented to take care of this problem, but the control inversion that comes along with it will certainly diminish the convenience of having just a switch statement. Besides, the forwarding overhead might negate some of the performance benefits originally gained by putting a dedicated member into the base class.

\subsection{Discriminated Union}
\label{sec:du}

This encoding is popular in projects that are either implemented in C or originated from C before coming to C++.
It involves a type that contains a union of its possible variants, discriminated with a dedicated value stored as a part of the structure. The approach is used by EDG front-end\cite{EDG} and many others. \begin{lstlisting}[keepspaces,columns=flexible] struct DT { enum kinds {@$c_1, ..., c_k$@} m_kind; union { struct @$C_1$@ {@$T_{11} L_{11}; ... T_{1m} L_{1m};$@} @$C_1$@; ... struct @$C_k$@ {@$T_{k1} L_{k1}; ... T_{kn} L_{kn};$@} @$C_k$@; }; }; \end{lstlisting} As before, the user can use a switch statement to identify the variant $c_i$ and then access its members via $C_i$ union member. This approach is truly closed, as we cannot add new variants to the underlying union without modifying class definition. Note also that in this case both subject type and target types are the same and we use an integral constant to distinguish which member(s) of the underlying union is active now. In the other two cases the type of a subject is a base class of the target type and we use either run-time type information or the integral constant associated by the user with the target type to uncover the target type. \section{Pattern Matching Syntax} \label{sec:syn} Figure~\ref{syntax} presents the syntax enabled by our SELL in an abstract syntax form rather than traditional EBNF in order to better describe compositions allowed by the library. In particular, the allowed compositions depend on the C++ type of the entities being composed, so we need to include it in the notation. We do make use of several non-terminals from the C++ grammar in order to put the use of our constructs into context. % TODO: %() Function call %[] Array subscripting %* Indirection (dereference) %& Address-of %sizeof Size-of \begin{figure} \begin{center} \begin{tabular}{rp{0em}cl} \Rule{match statement} & $M$ & \is{} & \code{Match(}$e$\code{)} $\left[C s^*\right]^*$ \code{EndMatch} \\ \Rule{case clause} & $C$ & \is{} & \code{Case(}$T\left[,x\right]^*$\code{)} \\ & & \Alt{} & \code{Qua(} $T\left[,\omega\right]^*$\code{)} \\ & & \Alt{} & \code{Otherwise(}$\left[,x\right]^*$\code{)} \\ \Rule{target expression} & $T$ & \is{} & $\tau$ \Alt{} $l$ \Alt{} $\nu$ \\ \Rule{view} & $\nu$ & \is{} & \code{view<}$\tau,l$\code{>} \\ \Rule{match expression} & $m$ & \is{} & $\pi(e)$ \\ \Rule{pattern} & $\pi$ & \is{} & $\_$ \Alt{} $\eta$ \Alt{} $\varrho$ \Alt{} $\mu$ \Alt{} $\varsigma$ \Alt{} $\chi$ \\ \Rule{extended pattern} & $\omega$ & \is{} & $\pi$ \Alt{} $c$ \Alt{} $x$ \\ \Rule{tree pattern} & $\mu$ & \is{} & \code{match<}$\nu|\tau\left[,l\right]$\code{>(}$\omega^*$\code{)} \\ \Rule{guard pattern} & $\varrho$ & \is{} & $\pi \models \xi$ \\ \Rule{n+k pattern} & $\eta$ & \is{} & $\chi$ \Alt{} $\eta \oplus c$ \Alt{} $c \oplus \eta$ \Alt{} $\ominus \eta$ \Alt{} $(\eta)$ \Alt{} $\_$ \\ \Rule{wildcard pattern} & $\_^{wildcard}$ \\ \Rule{variable pattern} & $\chi$ & \is{} & $\kappa$ \Alt{} $\iota$ \\ \Rule{value pattern} & $\varsigma^{value\langle\tau\rangle}$ \\ \Rule{xt variable} & $\kappa^{variable\langle\tau\rangle}$ \\ \Rule{xt reference} & $\iota^{var\_ref\langle\tau\rangle}$ \\ \Rule{xt expression} & $\xi$ & \is{} & $\chi$ \Alt{} $\xi \oplus c$ \Alt{} $c \oplus \xi$ \Alt{} $\ominus \xi$ \Alt{} $(\xi)$ \Alt{} $\xi \oplus \xi$ \\ \Rule{layout} & $l$ & \is{} & $c^{int}$ \\ \Rule{unary operator} & $\ominus$ & $\in$ & $\lbrace*,\&,+,-,!,\sim\rbrace$ \\ \Rule{binary operator} & $\oplus$ & $\in$ & $\lbrace*,/,\%,+,-,\ll,\gg,\&,\wedge,|,$ \\ & & & $<,\leq,>,\geq,=,\neq,\&\&,||\rbrace$ \\ \Rule{type-id} & $\tau$ & & C++\cite[\textsection A.7]{C++0x} \\ 
\Rule{statement}           & $s$      &       & C++\cite[\textsection A.5]{C++0x} \\
\Rule{expression}          & $e^\tau$ &       & C++\cite[\textsection A.4]{C++0x} \\
\Rule{constant-expression} & $c^\tau$ &       & C++\cite[\textsection A.4]{C++0x} \\
\Rule{identifier}          & $x^\tau$ &       & C++\cite[\textsection A.2]{C++0x} \\
\end{tabular}
\end{center}
\caption{Syntax enabled by our pattern-matching library}
\label{syntax}
\end{figure}

\noindent {\bf Match statement} is an analog of the switch statement that allows case clauses to be used as its case statements. We require it to be terminated with a dedicated \code{EndMatch} macro in order to properly close the syntactic structure introduced with \code{Match} and followed by the \code{Case}, \code{Qua} and \code{Otherwise} macros. The match statement accepts subjects of pointer and reference types, treating them uniformly in case clauses. This means that the user does not have to mention \code{*,&} or any of the \code{const,volatile}-qualifiers when specifying target types. Passing \code{nullptr} as a subject is considered \emph{ill-formed}, however -- a choice we have made for performance reasons. Examples of the match statement have already been presented in \textsection\ref{sec:intro} and \textsection\ref{sec:xmpl}.

We support three kinds of {\bf case clauses}: the \code{Case}-\emph{clause}, the \code{Qua}-\emph{clause} and the \code{Otherwise}-\emph{clause}, also called the \emph{default clause}. \code{Case} and \code{Qua} clauses are refutable and both take a target expression as their first argument. The \code{Otherwise} clause is irrefutable and can occur at most once among the clauses; its target type is the subject type. \code{Case} and \code{Otherwise} clauses additionally take a list of identifiers that will be treated as variable patterns implicitly introduced into the clause's scope and bound to the corresponding members of their target type. The \code{Qua} clause permits nested patterns as its arguments, but naturally requires all the variables used in the patterns to be explicitly pre-declared.

Even though our default clause is not required to be the last clause of the match statement, we strongly encourage the user to place it last (hence the choice of name -- otherwise). Placing it at the beginning or in the middle of a match statement will only work as expected with the \emph{tagged class} and \emph{discriminated union} encodings, which use \emph{the-only-fit-or-default} strategy for choosing cases. The \emph{polymorphic base class} encoding uses the \emph{first-fit} strategy, and thus an irrefutable default clause will effectively hide all subsequent case clauses, making them redundant. As we show in \textsection\ref{}, the switch between the \emph{polymorphic base class} and \emph{tagged class} encodings can be made simply by adding or removing a single definition, which may inadvertently change the semantics of those match statements in which the default clause was not placed last. When the default clause takes optional variable patterns, it behaves in exactly the same way as a \code{Case} clause whose target type is the subject type.

{\bf Target expression} used by the case clauses can be either a target type, a constant value representing a \emph{layout} (\textsection\ref{sec:bnd}), or a \emph{view} type combining the two (\textsection\ref{sec:view}). A constant value is only allowed for the union encoding of algebraic data types, in which case the library assumes the target type to be the subject type.
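\noindent To make the clause forms concrete before we continue with the remaining syntactic categories, here is a small hedged sketch of an evaluator over a polymorphic \code{Expr} hierarchy with variants \code{Value}, \code{Plus} and \code{Times}; it assumes bindings for these classes have been provided (\textsection\ref{sec:bnd}) with the two operands bound in positions 0 and 1 and \code{Value}'s integer payload in position 0:

\begin{lstlisting}[keepspaces,columns=flexible]
int evaluate(const Expr& e)
{
    Match(e)
      Case(Value, n)    return n;                           // n bound to the stored int
      Case(Plus,  a, b) return evaluate(*a)+evaluate(*b);   // a,b bound to the operands
      Case(Times, a, b) return evaluate(*a)*evaluate(*b);
      Otherwise()       return 0;                           // default clause, placed last
    EndMatch
}
\end{lstlisting}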
{\bf Views} in our library are represented by instantiations of a template class \code{view<T,l>} that takes a target type and a layout, combining the two into a new type. Our library transparently treats this new type as the original combination of target type and layout. Views are discussed in detail in~\textsection\ref{sec:view}.

{\bf Match expression} can be seen as an inline version of a match statement with a single \code{Qua}-clause. Once a pattern is created, it can be applied to an expression in order to check whether that expression matches the pattern, possibly binding some variables in it. The result of the application is always of type \code{bool}, except for the tree pattern, where it is a value convertible to \code{bool}. The actual value in that case is a pointer to the target type \code{T} in case of a successful match and \code{nullptr} otherwise. Match expressions will most commonly be used to quickly decompose a possibly nested expression with a tree pattern, as in the example below. Under the hood, however, they are the most heavily used expressions and are what makes our library composable.

{\bf Pattern} summarizes \emph{applicative patterns} -- patterns that can be used in a match expression as described above. For convenience, this category is extended with $c$ and $x$ to form an {\bf extended pattern} -- a pattern that can be used as an argument of a \emph{tree pattern} or a \code{Qua} clause. Extended patterns let us use constants as \emph{value patterns} and regular C++ variables as \emph{variable patterns} inside these constructs. The library implicitly recognizes them and transforms them into $\varsigma$ and $\iota$ respectively. This transformation is further explained in~\textsection\ref{sec:aux} with the $\stackrel{flt}{\vdash}$ rule set.

{\bf Tree pattern} takes a target type and an optional layout as its template arguments, which uniquely determine a concrete decomposition scheme for the type. Any nested sub-patterns are taken as run-time arguments. Besides applicative patterns, we allow constants and regular C++ variables to be passed as arguments to a tree pattern; they are treated as \emph{value patterns} and \emph{variable patterns} respectively. Tree patterns can be arbitrarily nested. The following example reimplements \code{factorize} from \textsection\ref{sec:bg} in C++ enhanced with our SELL:

\begin{lstlisting}
const Expr* factorize(const Expr* e)
{
    const Expr *e1, *e2, *e3, *e4;
    if (match<Plus>(match<Times>(e1,e2),match<Times>(e3,e4))(e))
        if (e1 == e3) return new Times(e1, new Plus(e2,e4));
        else
        if (e2 == e4) return new Times(new Plus(e1,e3), e4);
    return e;
}
\end{lstlisting}

\noindent The above example instantiates a nested pattern and then immediately applies it to the value \code{e} to check whether the value matches the pattern. If it does, the application binds the local variables to the corresponding sub-expressions, making them available inside the \code{if} statement. Examples like this are known to be a weak spot of the visitor design pattern, and we invite the reader to implement this example with visitors in order to compare both solutions.

{\bf Guard patterns} in our SELL consist of two expressions separated by operator \code{|=}\footnote{Operator \code{|=} defining the guard was chosen arbitrarily from those that have relatively low precedence in C++. This was done to allow most of the other operators to be used inside the condition without parentheses.}: an expression being matched (left operand) and a condition (right operand).
The right operand is allowed to make use of the variable bound in the left operand. When used on arguments of a tree pattern, the condition is also allowed to make use of any variables bound by preceding argument positions. Naturally, a guard pattern that follows a tree pattern may use all the variables bound by the tree pattern. Consider for example decomposition of a color value, represented as a three-byte RGB triplet: \begin{lstlisting}[keepspaces,columns=flexible] variable<double> r,g,b; auto p = match<RGB>( 255*r, 255*g |= g [<] r, 255*b |= b [<] g+r ) |= r+g+b <= 0.5; \end{lstlisting} \noindent Note that C++ standard leaves the order of evaluation of functions arguments unspecified\cite[\textsection 8.3.6]{C++0x}, while we seem to rely here on \emph{left-to-right} order. The reason we can do this lays in the fact that for the purpose of pattern matching all the sub-expressions are evaluated lazily and the unspecified order of evaluation refers only to the order in which corresponding expression templates are created. The actual evaluation of these expression templates happens later when the pattern is applied to an expression and since at that point we have an entirely built expression at hand, we ourselves enforce its evaluation order. Another important bit about our guards that has to be kept in mind is that guards depend on lazy evaluation and thus expression templates. This is why the variables mentioned inside a guard pattern must be of type \code{variable<T>} and not just \code{T}. Failing to declare them as such will result in eager evaluation of guard expression as a normal C++ expression. This will usually go unnoticed at compile time, while surprising at run time, especially to novices. We chose to provide a syntax for guards to be specified on arguments of a tree pattern in addition to after the pattern in order to detect mismatches early without having to compute and either bind or match subsequent arguments. {\bf n+k patterns} are essentially a subset of \emph{xt expressions} with at most one non-repeated variable in them. This allows for expressions like $2x+1$, but not for $x+x+1$, which even though is semantically equivalent to the first one, will not be accepted by our library as an \emph{n+k pattern}. Expressions \code{255*r}, \code{255*g} and \code{255*b} in the example above were instances of our \emph{generalized n+k patterns}. Informally they meant the following: the value we are matching against is of the form $255*x$, what is the value of $x$? Since color components were assumed to be byte values in the range $\left[0-255\right]$ the user effectively gets normalized RGB coordinates in variables \code{r}, \code{g} and \code{b} ranging over $\left[0.0-1.0\right]$. n+k pattern are visually appealing in the sense that they let us write code very close to mathematical notations often used in literature. Consider the definition of fast Fibonacci algorithm taken almost verbatim from the book. Function \code{sqr} here returns a square of its argument. \begin{lstlisting}[keepspaces,columns=flexible] int fib(int n) { variable<int> m; Match(n) Qua(int,1) return 1; Qua(int,2) return 1; Qua(int,2*m) return sqr(fib(m+1)) - sqr(fib(m-1)); Qua(int,2*m+1) return sqr(fib(m+1)) + sqr(fib(m)); EndMatch } \end{lstlisting} {\bf Wildcard pattern} in our library is represented by a predefined global variable \code{_} of a dedicated type \code{wildcard} bearing no state. Wildcard pattern is accepted everywhere where a \emph{variable pattern} $\chi$ is accepted. 
The important difference from a use of an unused variable is that no code is executed to obtain a value for a given position and copy that value into a variable. The position is ignored altogether and the pattern matching continues. There are two kinds of {\bf variable patterns} in our library: \emph{xt variable} and \emph{xt reference}. {\bf xt variable} stands for \emph{expression-template variable} and refers to variables whose type is \code{variable<T>} for any given type \code{T}. {\bf xt reference} is an \emph{expression-template variable reference} whose type is \code{var_ref<T>}. The latter is never declared explicitly, but is implicitly introduced by the library to wrap regular variables in places where our syntax accepts $x$. Both variable kinds are terminal symbols in our SELL letting us build expression templates of them. The main difference between the two is that {\bf xt variable} $\kappa$ maintains a value of type \code{T} as its own state, while {\bf xt reference} $\iota$ only keeps a reference to a user-declared variable of type \code{T}. Besides the difference in where the state is stored, both kinds of variables have the same semantics and we will thus refer to them as $\chi$. We will also use $\chi^\tau$ to mean either $\kappa^{variable\langle\tau\rangle}$ or $\iota^{var\_ref\langle\tau\rangle}$ since the type of the actual data $\tau$ is what we are interested in, while the fact that it was wrapped into \code{variable<>} or \code{var_ref<>} can be implicitly inferred from the meta-variable $\chi$. We would also like to point out that in most cases a variable pattern can be used as regular variable in the same context where a variable of type \code{T} can. In few cases where this does not happen implicitly, the user might need to put an explicit cast to \code{T&} or \code{const T&}. {\bf Value pattern} is similarly never declared explicitly and is implicitly introduced by the library in places where $c$ is accepted. {\bf xt expression} refers to an \emph{expression-template expression} -- a non-terminal symbol in our expression language built by applying a given operator to argument expression templates. We use this syntactic category to distinguish lazily evaluated expressions introduced by our SELL from eagerly evaluated expressions, directly supported by C++. {\bf Layout} is an enumerator that user can use to define alternative bindings for the same class. When layout is not mentioned, the default layout is used, which is the only required layout a user has to define if she wishes to make use of bindings. We discuss layouts in details in \textsection\ref{sec:bnd}. {\bf Binary operator} and {\bf unary operator} name a subset of C++ operators we make use of and provide support for in our pattern-matching library. The remaining syntactic categories refer to non-terminals in the C++ grammar bearing the same name. {\bf Identifier} will only refer to variable names in our SELL, even though it has a broader meaning in the C++ grammar. {\bf Expression} subsumes any valid C++ expression. We use expression $e^\tau$ to refer to a C++ expression, whose result type is $\tau$. {\bf Constant-expression} is a subset of the above restricted to only expression computable at compile time. {\bf Statement} refers to any valid statement allowed by the C++ grammar. Our match statement $M$ would have been extending this grammar rule with an extra case should it have been defined in the grammar directly. {\bf Type-id} represents a type expression that designates any valid C++ type. 
We use this meta-variable in the superscript of other meta-variables in order to indicate the C++ type of the entity they represent.

\subsection{Bindings Syntax}
\label{sec:bnd}

Structural decomposition in functional languages is done with the help of a constructor symbol and a list of patterns in positions that correspond to the arguments of that constructor. C++ allows for multiple constructors in a class, often overloaded for different types but the same arity. The implicit nature of variable patterns, which match ``any value'', will thus not help in disambiguating such constructors, unless the user explicitly declares the variables, thus fixing their types. Besides, C++ does not have the means for general compile-time reflection, so a library like ours cannot automatically enumerate all the constructors present in a class. This is why we decided to separate \emph{construction} of objects from their \emph{decomposition} through pattern matching with \emph{bindings}.

%Similarly to constructors, a class may have multiple deconstructors. Unlike
%constructors, deconstructors are named differently.

The following grammar defines the syntax of a sublanguage our user will use to specify the decomposition of various classes for pattern matching:\footnote{We reuse several meta-variables introduced in the previous grammar}

\begin{figure}
\begin{center}
\begin{tabular}{lp{1em}cl}
\Rule{bindings}           &          & \is{} & $\delta^*$ \\
\Rule{binding definition} & $\delta$ & \is{} & \code{template <}$\left[\vec{p}\right]$\code{>} \\
                          &          &       & \code{struct bindings<} $\tau[$\code{<}$\vec{p}$\code{>}$]\left[,l\right]$\code{>} \\
                          &          &       & \code{\{} $\left[ks\right]\left[kv\right]\left[bc\right]\left[cm^*\right]$ \code{\};} \\
\Rule{class member}       & $cm$     & \is{} & \code{CM(}$c^{size\_t},q$\code{);} \\
\Rule{kind selector}      & $ks$     & \is{} & \code{KS(}$q$\code{);} \\
\Rule{kind value}         & $kv$     & \is{} & \code{KV(}$c$\code{);} \\
\Rule{base class}         & $bc$     & \is{} & \code{BC(}$\tau$\code{);} \\
\Rule{template-parameter-list} & $\vec{p}$ & & C++\cite[\textsection A.12]{C++11} \\
\Rule{qualified-id}            & $q$       & & C++\cite[\textsection A.4]{C++11} \\
\end{tabular}
\end{center}
\caption{Syntax used to provide bindings for a concrete class hierarchy}
\label{fig:bindings}
\end{figure}

\noindent Any type $\tau$ may have an arbitrary number of \emph{bindings} associated with it, distinguished through the \emph{layout} parameter $l$. The \emph{default binding}, which omits the layout parameter, is implicitly associated with the layout whose value is equal to the predefined constant \code{default_layout = size_t(}$\sim$\code{0)}. User-defined layouts should not reuse this dedicated value. A \emph{Binding definition} consists of either a full or a partial specialization of the template class:

\begin{lstlisting}
template <typename T, size_t l = default_layout>
struct bindings;
\end{lstlisting}

\noindent The body of the class consists of a sequence of specifiers, which generate the definitions necessary for the library code to query the bindings. Note that binding definitions made this way are \emph{non-intrusive}, since the original class definition is not touched. They also respect \emph{encapsulation}, since only the public members of the target type will be accessible from within the \code{bindings} specialization. A \emph{Class Member} specifier \code{CM(}$c,q$\code{)} takes a (zero-based) binding position $c$ and a qualified identifier $q$ and specifies the member whose value will be used to bind the variable in position $c$ of $\tau$'s decomposition under this \emph{binding definition}.
The qualified identifier is allowed to be of one of the following kinds:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item A data member of the target type
\item A nullary member-function of the target type
\item A unary external function taking the target type by pointer, reference or value
\end{itemize}

\noindent The following example definition provides bindings for the standard library type \code{std::complex<T>}:

\begin{lstlisting}[keepspaces,columns=flexible]
template <typename T>
struct bindings<std::complex<T>>
{
    CM(0, std::complex<T>::real);
    CM(1, std::complex<T>::imag);
};
\end{lstlisting}

\noindent It states that, when pattern matching against \code{std::complex<T>} for any given type \code{T}, the result of invoking the member-function \code{real()} should be used to obtain the value for the first pattern-matching position and \code{imag()} for the second position. In the presence of several overloaded members with the same name but different arity, \code{CM} will unambiguously pick the one that falls into one of the three categories of accepted members listed above. In the example above, the nullary \code{T std::complex<T>::real() const} is preferred to the unary \code{void std::complex<T>::real(T)}. Note that the binding definition above is made once for all instantiations of \code{std::complex} and can be fully or partially specialized for cases of interest. Non-parameterized classes will fully specialize the \code{bindings} trait to define their own bindings. Using the \code{CM} specifier, a user defines the semantic functors $\Delta_i^{\tau,l},i=1..k$ we introduce in \textsection\ref{sec:semme} as follows:

\begin{lstlisting}
template <> struct bindings<@$\tau$@>
{
    CM(0, @$\tau$@::member_for_position_0);
    ...
    CM(k, @$\tau$@::member_for_position_k);
};
\end{lstlisting}

A \emph{Kind Selector} specifier \code{KS(}$q$\code{)} is used to specify a member of the subject type that will uniquely identify the variant for the \emph{tagged} and \emph{union} encodings. The member $q$ can be of any of the three categories listed for \code{CM}, but is required to return an \emph{integral type}. A \emph{Kind Value} specifier \code{KV(}$c$\code{)} is used by the \emph{tagged} and \emph{union} encodings to specify a constant $c$ that uniquely identifies the variant. A \emph{Base Class} specifier \code{BC(}$\tau$\code{)} is used by the \emph{tagged} encoding to specify an immediate base class of the class whose bindings we define. A helper \code{BCS(}$\tau*$\code{)} specifier can also be used to specify the exact topologically sorted list of base classes (\textsection\ref{sec:cotc}).

A \emph{Layout} parameter $l$ can be used to define multiple bindings for the same target type. This is particularly essential for the \emph{union} encoding, where the types of the variants are the same as the type of the subject and thus layouts become the only way to associate variants with position bindings. For this reason we require binding definitions for the \emph{union} encoding to use the same constant $l$ both as the kind value specified with \code{KV(l)} and as the layout parameter of \code{bindings}.

%The above definition will fall short, however, when we have to define bindings
%for an algebraic data type encoded as \emph{discriminated union}.
%In this case
%we have a single target type and multiple bindings associated with it. To
%account for this, we recognize that a class (whether discriminated union or not)
%may have alternative bindings associated with it and thus introduce an optional
%integral parameter called \emph{layout} that can be passed to \code{bindings} to
%distinguish various binding layouts. Classes that are not instances of our
%discriminated union encoding are free to choose whatever unique constants they
%feel appropriate to define alternative layouts. We require, however, that classes
%representing discriminated union encoding use constants that correspond to kinds
%supported by the union and only access members within layout that are valid for
%its kind.

Consider, for example, the following discriminated union describing various shapes:

\begin{lstlisting}[keepspaces,columns=flexible]
struct cloc { double first; double second; };
struct ADTShape
{
    enum shape_kind {circle, square, triangle} kind;
    union
    {
        struct { cloc center;     double radius; }; // circle
        struct { cloc upper_left; double size; };   // square
        struct { cloc first, second, third; };      // triangle
    };
};

template <> struct bindings<ADTShape>
{
    KS(ADTShape::kind); // Kind Selector
};

template <> struct bindings<ADTShape, ADTShape::circle>
{
    KV(ADTShape::circle); // Kind Value
    CM(0, ADTShape::center);
    CM(1, ADTShape::radius);
};
\end{lstlisting}

\noindent The \code{KS} specifier within the default bindings for \code{ADTShape} tells the library that the value of the \code{ADTShape::kind} member, extracted from the subject at run time, should be used to obtain a unique value that identifies the variant. The binding definition for the \code{circle} variant then uses the same constant \code{ADTShape::circle} both as the value of the layout parameter of the \code{bindings<T,l>} trait and in the \code{KV(l)} specifier to indicate its \emph{kind value}. Had the shapes been encoded with a \emph{Tagged Class}, the bindings for the base class \code{Shape} would have contained a \code{KS(Shape::kind)} specifier, while the derived classes \code{Circle}, \code{Square} and \code{Triangle}, representing the corresponding variants, would have had \code{KV(Shape::circle)} etc. specifiers in their binding definitions. These variant classes could have additionally defined a few alternative layouts for themselves, in which case the numbers for the layout parameter could have been chosen arbitrarily.

\section{Pattern Matching Semantics}
\label{sec:sem}

We use natural semantics\cite{Kahn87} (big-step operational semantics) to describe our pattern-matching semantics. As with the syntax, we do not formalize the semantics of the entire language, but concentrate only on presenting the relevant parts of our extension. We assume the entire state of the program is modeled by an environment $\Gamma$, which we can query as $\Gamma(x)$ to get the value of a variable $x$. In addition to the meta-variables we have seen already, meta-variables $u,v$ and $b^{bool}$ range over values. We make the simplifying assumption that all values of user-defined types are represented via variables of reference types and that there exists a non-throwing operation \DynCast{\tau}{v} that can test whether an object of a given type is an instance of another type, returning a proper reference to it or a dedicated value \nullptr{} that represents \code{nullptr}. Intuitively, the semantics of such references is that of pointers in C++, which are implicitly dereferenced. We describe our semantics with several rule sets that deal with different parts of our syntax.
\subsection{Semantics of Matching Expressions} \label{sec:semme} The rule set in Figure~\ref{exprsem} deals with pattern application $\pi(e)$, which essentially performs matching of a pattern $\pi$ against expression $e$. The judgements are of the form $\Gamma\vdash \pi(e) \evals v,\Gamma'$ that can be interpreted as given an environment $\Gamma$, pattern application $\pi(e)$ results in value $v$ and environment $\Gamma'$. When we use $\evalspp$ instead of $\evals$ we simply pointing out that corresponding evaluation rule comes from the C++ semantics and not from our rules. \begin{figure} \begin{mathpar} \inferrule[Wildcard] {} {\Gamma\vdash \_(e) \evals true,\Gamma} \inferrule[Value] {\Gamma\vdash e \evalspp v,\Gamma_1 \\ \Gamma_1\stackrel{eval}{\vdash} \varsigma \evals u,\Gamma_2} {\Gamma\vdash \varsigma^\tau(e^\tau) \evals (u==v),\Gamma_2} \inferrule[Variable] {\Gamma\vdash e \evalspp v,\Gamma_1 \\ \Gamma_1 \vdash \DynCast{\tau_1}{v} \evalspp u,\Gamma_2} {\Gamma\vdash \chi^{\tau_1}(e^{\tau_2}) \evals (u \neq \nullptr{}),\Gamma_2[\chi\leftarrow u]} \end{mathpar} \begin{mathpar} \inferrule[n+k Binary Left] {\Gamma\vdash e \evalspp v_1,\Gamma_1 \\ \Gamma_1\vdash \Psi_\oplus^\tau[v_1](\bullet,c) \evalspp \langle b_2,v_2\rangle,\Gamma_2 \\ \Gamma_2\vdash \eta(v_2) \evals b_3,\Gamma_3} {\Gamma\vdash (\eta^\tau \oplus c)(e) \evals (b_2 \wedge b_3),\Gamma_3} \end{mathpar} \begin{mathpar} \inferrule[n+k Binary Right] {\Gamma\vdash e \evalspp v_1,\Gamma_1 \\ \Gamma_1\vdash \Psi_\oplus^\tau[v_1](c,\bullet) \evalspp \langle b_2,v_2\rangle,\Gamma_2 \\ \Gamma_2\vdash \eta(v_2) \evals b_3,\Gamma_3} {\Gamma\vdash (c \oplus \eta^\tau)(e) \evals (b_2 \wedge b_3),\Gamma_3} \end{mathpar} \begin{mathpar} \inferrule[n+k Unary] {\Gamma\vdash e \evalspp v_1,\Gamma_1 \\ \Gamma_1\vdash \Psi_\ominus^\tau[v_1](\bullet) \evalspp \langle b_2,v_2\rangle,\Gamma_2 \\ \Gamma_2\vdash \eta(v_2) \evals b_3,\Gamma_3} {\Gamma\vdash (\ominus \eta^\tau)(e) \evals (b_2 \wedge b_3),\Gamma_3} \end{mathpar} \begin{mathpar} \inferrule[Guard] {\Gamma\vdash e \evalspp v_1,\Gamma_1 \\ \Gamma_1\vdash \pi(v_1) \evals b_2,\Gamma_2 \\ \Gamma_2\stackrel{eval}{\vdash} \xi \evals b_3,\Gamma_3} {\Gamma\vdash (\pi \models \xi)(e) \evals (b_2 \wedge b_3),\Gamma_3} \end{mathpar} \begin{mathpar} \inferrule[Tree-Nullptr] {\Gamma \vdash e \evalspp v,\Gamma_0 \\ \Gamma_0 \vdash \DynCast{\tau}{v} \evalspp \nullptr{},\Gamma_1} {\Gamma\vdash ($match$\langle\tau\left[,l\right]\rangle(\omega_1,...,\omega_k))(e) \evals \nullptr{},\Gamma_1} \end{mathpar} \begin{mathpar} \inferrule[Tree-False] {\Gamma \vdash e \evalspp v,\Gamma_0 \\ \Gamma_0 \vdash \DynCast{\tau}{v} \evalspp u^{\tau},\Gamma_1 \\\\ \Gamma_1 \vdash \Delta_1 ^{\tau,l}(u) \evalspp v_1, \Gamma_1' \\ \Gamma_1' \stackrel{flt}{\vdash} \omega_1 \evals \pi_1 \\ \Gamma_1' \vdash \pi_1(v_1) \evals true, \Gamma_2 \\\\ \Gamma_2 \vdash \Delta_2 ^{\tau,l}(u) \evalspp v_2, \Gamma_2' \\ \Gamma_2' \stackrel{flt}{\vdash} \omega_2 \evals \pi_2 \\ \Gamma_2' \vdash \pi_2(v_2) \evals true, \Gamma_3 \\\\ \cdots \\\\ \Gamma_{i-1}\vdash \Delta_{i-1}^{\tau,l}(u) \evalspp v_{i-1},\Gamma_{i-1}' \\ \Gamma_{i-1}'\stackrel{flt}{\vdash} \omega_{i-1} \evals \pi_{i-1}\\ \Gamma_{i-1}'\vdash \pi_{i-1}(v_{i-1}) \evals true, \Gamma_i \\\\ \Gamma_i \vdash \Delta_i ^{\tau,l}(u) \evalspp v_i, \Gamma_i' \\ \Gamma_i' \stackrel{flt}{\vdash} \omega_i \evals \pi_i \\ \Gamma_i' \vdash \pi_i(v_i) \evals false,\Gamma_{i+1} \\\\ } {\Gamma\vdash ($match$\langle\tau\left[,l\right]\rangle(\omega_1,...,\omega_k))(e) \evals 
\nullptr{},\Gamma_{i+1}}
\end{mathpar}
\begin{mathpar}
\inferrule[Tree-True]
 {\Gamma   \vdash e \evalspp v,\Gamma_0 \\ \Gamma_0 \vdash \DynCast{\tau}{v} \evalspp u^{\tau},\Gamma_1 \\\\
  \Gamma_1 \vdash \Delta_1 ^{\tau,l}(u) \evalspp v_1, \Gamma_1' \\ \Gamma_1' \stackrel{flt}{\vdash} \omega_1 \evals \pi_1 \\ \Gamma_1' \vdash \pi_1(v_1) \evals true, \Gamma_2 \\\\
  \Gamma_2 \vdash \Delta_2 ^{\tau,l}(u) \evalspp v_2, \Gamma_2' \\ \Gamma_2' \stackrel{flt}{\vdash} \omega_2 \evals \pi_2 \\ \Gamma_2' \vdash \pi_2(v_2) \evals true, \Gamma_3 \\\\
  \cdots \\\\
  \Gamma_{k-1}\vdash \Delta_{k-1}^{\tau,l}(u) \evalspp v_{k-1},\Gamma_{k-1}' \\ \Gamma_{k-1}'\stackrel{flt}{\vdash} \omega_{k-1} \evals \pi_{k-1}\\ \Gamma_{k-1}'\vdash \pi_{k-1}(v_{k-1}) \evals true, \Gamma_k \\\\
  \Gamma_k \vdash \Delta_k ^{\tau,l}(u) \evalspp v_k, \Gamma_k' \\ \Gamma_k' \stackrel{flt}{\vdash} \omega_k \evals \pi_k \\ \Gamma_k' \vdash \pi_k(v_k) \evals true, \Gamma_{k+1} \\\\
 }
 {\Gamma\vdash ($match$\langle\tau\left[,l\right]\rangle(\omega_1,...,\omega_k))(e) \evals u^{\tau},\Gamma_{k+1}}
\end{mathpar}
\caption{Semantics of match-expressions}
\label{exprsem}
\end{figure}

Matching a wildcard pattern against an expression always succeeds, without changes to the environment (\RefTirName{Wildcard}). Matching a value pattern against an expression succeeds only if the result of evaluating that expression is the same as the constant (\RefTirName{Value}). Matching against a variable always succeeds when the type of the expression $e$ is the same as the variable's value type $\tau$. When the types are different, the library will try to use \code{dynamic_cast<}$\tau$\code{>(e)} to see whether the dynamic type of the expression can be cast to $\tau$. If it can, matching succeeds, binding the variable to the result of the \code{dynamic_cast}. If it cannot, matching fails (\RefTirName{Variable}).

Our semantics for generalized n+k patterns draws on the notion of \emph{backward collecting semantics} used in \emph{abstract interpretation}\cite{CousotCousot92-1}. In general, the backward collecting semantics $Baexp\Sem{A}(E)R$ of an expression $A$ defines the subset of possible environments $E$ such that the expression may evaluate, without producing a runtime error, to a value belonging to a given set $R$:

\begin{lstlisting}
@$Baexp\Sem{A}(E)R = \{x \in E | A(x) \in R\}$@
\end{lstlisting}

This can be interpreted as follows: given a set $E$ from which the arguments of an expression $A$ draw their values, as well as a set $R$ of acceptable (not all possible) results, find the largest subset $X \subseteq E$ of arguments on which evaluating $A$ yields only results in $R$.

Intuitively, n+k patterns like $f(x,y)=v$ relate a known result of a given function application to its arguments (hence the analogy with backward collecting semantics). The case where multiple unknown arguments are matched against a single result should not be immediately discarded, as there are known n-ary functions whose inverse is unique. An example of such a function is the Cantor pairing function, which defines a bijection between $\mathbb{N}\times\mathbb{N}$ and $\mathbb{N}$. Even when such mappings are not one-to-one, their restriction to a given argument often is. Most generalizations of n+k patterns seem to agree on the following rules:

\begin{itemize}
\item Absence of a solution that would result in a given value should be indicated through rejection of the pattern.
\item Presence of a unique solution should be indicated by acceptance of the pattern and binding of the corresponding variables to the solution.
\end{itemize}

\noindent
As to the case when multiple solutions are possible, several alternatives can be viable:

\begin{itemize}
\item Reject the pattern.
\item Accept, binding to either an arbitrary or some normalized solution.
\item When the set of solutions is guaranteed to be finite -- accept, binding the solutions to a set variable.
\item When the set of solutions is guaranteed to be enumerable -- accept, binding the solutions to a generator capable of enumerating them all.
\end{itemize}

\noindent
We believe that, depending on the application, any of these semantic choices can be valid, which is why we prefer not to make such a choice for the user, but rather provide him with the means to decide himself. This is why our semantics for matching against a generalized n+k pattern depends on a family of user-defined functions

\begin{lstlisting}
@$\Psi_f^\tau:\tau_r\times\tau_1\times...\times1\times...\times\tau_k\rightarrow bool\times\tau$@
\end{lstlisting}

\noindent
such that $\Psi_f^\tau[r](c_1,...,\bullet,...,c_k)$ for a given function

\begin{lstlisting}
@$f:\tau_1,...,\tau,...,\tau_k\rightarrow\tau_r$@
\end{lstlisting}

\noindent
should return a pair composed of a boolean indicating acceptance of the pattern $f(c_1,...,x^\tau,...,c_k)=r$ and a solution with respect to $x$ when the match is reported successful. The symbol $\bullet$ indicates the position in the argument list for which a solution is sought; the rest of the arguments are known values. We describe how the user can supply the function $\Psi_f^\tau$ for his own operation $f$ in \textsection\ref{sec:slv}.

The only difference between the three rules defining the semantics of n+k patterns is the arity of the root operator and the position in the argument list in which the only non-value pattern was spotted (\RefTirName{n+k Binary Left}, \RefTirName{n+k Binary Right} and \RefTirName{n+k Unary}). We use the user-defined $\Psi_f^\tau$ to solve for a given argument of a given operator and then recurse to match the corresponding sub-expression. Note that when the user has not provided $\Psi_f^\tau$ for the argument in which a solution is sought, the rule is rejected.

Evaluation of a guard pattern first tries to match the left-hand side of the guard expression, usually binding a variable in it, and then, if the match was successful, lazily evaluates its right-hand side to make sure the new value of the bound variable is used. The result of evaluating the right-hand side, converted to \code{bool}, is reported as the result of matching the entire pattern (\RefTirName{Guard}).

Matching of a tree pattern begins with evaluating the subject expression and casting it dynamically to the target type $\tau$ when the subject type is not $\tau$. When the \code{dynamic_cast} fails, returning \code{nullptr}, the pattern is rejected (\RefTirName{Tree-Nullptr}). Once a value of the target type has been uncovered, we proceed with matching the arguments left-to-right. For each argument we first translate the \emph{extended pattern} $\omega$ accepted by the tree pattern into an \emph{application pattern} $\pi$ to get rid of the syntactic convenience we allow on arguments of tree patterns. Using the target type and the optional layout provided by the user, we obtain the value that should be bound in the $i^{th}$ position of the decomposition. Again we rely on a family of user-provided functions $\Delta_i^{\tau,l}(u)$ that take an object instance and return the value bound in the $i^{th}$ position of its $l^{th}$ layout. Specification of the function $\Delta$ by the user is discussed in~\textsection\ref{sec:bnd}.
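As a small worked illustration, consider the bindings for \code{std::complex<T>} from \textsection\ref{sec:bnd}, under which the first and second positions are obtained by invoking \code{real()} and \code{imag()} respectively, i.e. $\Delta_1^{\tau,l}(u)=$\code{u.real()} and $\Delta_2^{\tau,l}(u)=$\code{u.imag()} for $\tau=$\code{std::complex<double>} and the default layout $l$. Applying the match-expression $match\langle\tau\rangle(a,b)$ with variable patterns $a$ and $b$ to a subject whose value is $3+4i$ then proceeds by rule \RefTirName{Tree-True}: the cast succeeds, $a$ is bound to $\Delta_1^{\tau,l}(u)=3$, $b$ is bound to $\Delta_2^{\tau,l}(u)=4$, and the whole pattern application yields $u$ itself.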
Here we would like to point out, however, that the number of argument patterns passed to the tree pattern can be smaller than the number of binding positions defined by a specific layout. The remaining argument positions are implicitly assumed to be filled with the wildcard pattern. When we have the value for the $i^{th}$ position of the object's decomposition, we match it against the pattern specified in the $i^{th}$ position, and if the value is accepted we move on to matching the next argument (\RefTirName{Tree-False}). Only when all the argument patterns have been successfully matched does the matching succeed, returning a pointer to the target type, which in C++ can be used everywhere a boolean expression is expected. Returning a pointer instead of just a boolean value gives us functionality similar to that of \emph{as patterns}, while maintaining composability with the rest of the library (\RefTirName{Tree-True}).

\subsection{Semantics of Match Statement}
\label{sec:semms}

Our second rule set deals with the semantics of a \emph{match statement}. The judgements are of the form $\Gamma\vdash s \evals u,\Gamma'$ on statements, including the match statement, and are slightly extended for case clauses to $\Gamma\vdash_v C \evals u,\Gamma'$, with the value $v$ of the subject passed along from the match statement to the clauses. We also use a small helper function $TL(t,\tau_s)$ defined on a target expression $t \in T$ and the subject's type $\tau_s$:

\begin{eqnarray*}
TL(\tau,\tau_s)                     &=& \langle \tau, default\_layout \rangle \\
TL(l,\tau_s)                        &=& \langle \tau_s, l \rangle \\
TL(view\langle\tau,l\rangle,\tau_s) &=& \langle \tau, l \rangle
\end{eqnarray*}

\noindent
The function essentially disambiguates between the three kinds of target expressions and returns the combination of target type and layout used in each case.
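For instance, for a subject of type $\tau_s$ equal to \code{std::complex<double>} and the \code{polar} layout introduced in \textsection\ref{sec:view}, the three cases yield $TL(\tau_s,\tau_s)=\langle\tau_s,default\_layout\rangle$, $TL(polar,\tau_s)=\langle\tau_s,polar\rangle$ and $TL(view\langle\tau_s,polar\rangle,\tau_s)=\langle\tau_s,polar\rangle$ respectively.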
%\begin{figure} \begin{mathpar} \inferrule[Match-True] {\Gamma \vdash e \evalspp v,\Gamma_1 \\ v \neq \nullptr \\ \Gamma_1 \vdash_v C_1 \evals false,\Gamma_2 \\ \Gamma_2 \vdash_v C_2 \evals false,\Gamma_3 \\ \cdots \\ \Gamma_{i-1}\vdash_v C_{i-1}\evals false,\Gamma_i \\ \Gamma_i \vdash_v C_i \evals true, \Gamma_{i+1} \\ \Gamma_{i+1}\vdash \vec{s}_i \evalspp u,\Gamma' } {\Gamma\vdash Match(e) \left[C_i \vec{s}_i\right]^*_{i=1..n} EndMatch \evals u,\Gamma'$\textbackslash$\{x | x \not\in \Gamma_i\}} \inferrule[Match-False] {\Gamma \vdash e \evalspp v,\Gamma_1 \\ v \neq \nullptr \\ \Gamma_1 \vdash_v C_1 \evals false,\Gamma_2 \\ \Gamma_2 \vdash_v C_2 \evals false,\Gamma_3 \\ \cdots \\ \Gamma_{n-1}\vdash_v C_{n-1}\evals false,\Gamma_n \\ \Gamma_n \vdash_v C_n \evals false,\Gamma_{n+1} } {\Gamma\vdash Match(e) \left[C_i \vec{s}_i\right]^*_{i=1..n} EndMatch \evals false,\Gamma_{n+1}} \inferrule[Qua] {TL(t,\sigma)=\langle \tau,l \rangle \\ \Gamma \vdash $match$\langle\tau,l\rangle(\vec{\omega})(v) \evals u,\Gamma' \\ \Gamma'' = (u \neq \nullptr\ ?\ \Gamma'[$matched$^\tau\rightarrow u] : \Gamma')} {\Gamma \vdash_{v^\sigma} Qua(t,\vec{\omega}) \evals u,\Gamma''} \inferrule[Case] {\Delta_i^t : \tau \rightarrow \tau_i, i=1..k \\ \Gamma[x_i^{\tau_i}\rightarrow\tau_i()]_{i=1..k} \vdash_v Qua(t,x_1,...,x_k)(v) \evals u,\Gamma' \\ \Gamma'' = (u \neq \nullptr\ ?\ \Gamma' : \Gamma'$\textbackslash$\{x_i | i=1..k\})} {\Gamma \vdash_v Case(t,x_1,...,x_k) \evals u,\Gamma''} \inferrule[Otherwise] {\Gamma \vdash Case(\tau,\vec{x})(v) \evals true,\Gamma'} {\Gamma \vdash_{v^\tau} Otherwise(\vec{x}) \evals true,\Gamma'} \end{mathpar} %\caption{Semantics of match-statement} %\label{stmtsem} %\end{figure} \noindent Evaluation of a match statement begins with evaluation of subject expression, which is not allowed to result in \code{nullptr}. This value is passed along to each of the clauses. The clauses are evaluated in their lexical order until the first one that is not rejected. Statements associated with it are evaluated to form the outcome of the match statement. The resulting environment makes sure that local variables introduced by case clauses are not available after the match statement (\RefTirName{Match-True}). When none of the clauses were accepted, which is only possible when default clause was not specified, the resulting environment might still be different from the initial environment because of variables bound in partial matches during evaluation of clauses (\RefTirName{Match-False}). Evaluation of a \code{Qua}-clause is equivalent to evaluation of a corresponding match-expression on a tree pattern. Successful match will introduce a variable \code{matched} of type $\tau\&$ that is bound to subject properly casted to the target type $\tau$ into the local scope of the clause. Evaluation of \code{Case}-clauses amounts of evaluation of \code{Qua}-clauses in the environment extended with variables passed as arguments to the clause. The variables introduced by the \code{Case}-clause have the static type of values bound in corresponding positions, which ensures that variable patterns will be irrefutable (\RefTirName{Case}). In practice, the variables are of reference type so that no unnecessary copying is happening. 
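Before turning to the default clause, the following schematic sketch shows the statement forms the above rules describe; the class names are illustrative only, and the concrete clause spellings follow the syntax of \textsection\ref{sec:syn}:

\begin{lstlisting}[keepspaces,columns=flexible]
Match(subject)              // subject is evaluated once; clauses are tried in lexical order
  Qua(Circle, _, r)   ...   // tree pattern over target type Circle: wildcard and variable r
  Case(Square, ul, s) ...   // fresh variables ul and s, irrefutable once the cast succeeds
  Otherwise()         ...   // never rejected
EndMatch
\end{lstlisting}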
Evaluation of default clause cannot fail because there is no \code{dynamic_cast} involved neither for the subject nor for implicit local variables: for the former the target type is by definition the subject type, while for the latter the type is chosen to be the type of expected values (\RefTirName{Otherwise}). \subsection{Auxiliary Rules} \label{sec:aux} The next rule set deals with evaluation of expression templates referred to from the previous rule sets via $\stackrel{eval}{\vdash}$. The judgements are of the form $\Gamma\stackrel{eval}{\vdash} \xi \evals v,\Gamma'$ that can be interpreted as given an environment $\Gamma$, evaluation of an expression template $\xi$ results in value $v$ and environment $\Gamma'$. We refer here to an unspecified semantic function $\Sem{o}$ that represents C++ semantics of operation $o$ as specified by the C++ standard. \begin{mathpar} \inferrule[Var] {} {\Gamma\stackrel{eval}{\vdash} \chi \evals \Gamma(\chi),\Gamma} \inferrule[Unary] {\Gamma\stackrel{eval}{\vdash} \xi \evals v,\Gamma_1} {\Gamma\stackrel{eval}{\vdash} \ominus \xi \evals \Sem{\ominus} v,\Gamma_1} \inferrule[Binary] {\Gamma\stackrel{eval}{\vdash} \xi_1 \evals v_1,\Gamma_1 \\ \Gamma_1\stackrel{eval}{\vdash} \xi_2 \evals v_2,\Gamma_2} {\Gamma\stackrel{eval}{\vdash} \xi_1 \oplus \xi_2 \evals v_1\Sem{\oplus}v_2,\Gamma_2} \inferrule[Binary-Left] {\Gamma\stackrel{eval}{\vdash} \xi \evals v,\Gamma_1} {\Gamma\stackrel{eval}{\vdash} \xi \oplus c \evals v\Sem{\oplus}c,\Gamma_1} \inferrule[Binary-Right] {\Gamma\stackrel{eval}{\vdash} \xi \evals v,\Gamma_1} {\Gamma\stackrel{eval}{\vdash} c \oplus \xi \evals c\Sem{\oplus}v,\Gamma_1} \end{mathpar} \noindent The rules are quite simple so we do not elaborate them in details. The reason we have two separate rules for the case when one of the arguments is constant expression stems from the idiomatic use of expression templates enabling direct use of constants in operations that already involve expression template arguments. The next set of rules describes transformation of extended patterns into applicative patterns to get rid of syntactic sugar enabled by extended patterns. \begin{mathpar} \inferrule[Filter-Pattern] {} {\Gamma\stackrel{flt}{\vdash} \pi \evals \pi} \inferrule[Filter-Variable] {} {\Gamma\stackrel{flt}{\vdash} x \evals \iota(x)} \inferrule[Filter-Constant] {} {\Gamma\stackrel{flt}{\vdash} c \evals \varsigma(c)} \end{mathpar} \section{Generalized n+k Patterns} \label{sec:slv} Intuitively n+k patterns like $f(x,y)=v$ relate a known result of a given function application to its arguments. The case where multiple unknown arguments are matched against a single result should not be immediately discarded as there are known n-ary functions whose inverse is unique. An example of such function is Cantor pairing function that defines bijection between $\mathbb{N}\times\mathbb{N}$ and $\mathbb{N}$. Even when such mappings are not one-to-one, their restriction to a given argument often is. Most generalizations of n+k patterns seem to agree on the following rules: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item Absence of solution that would result in a given value should be indicated through rejection of the pattern. \item Presence of a unique solution should be indicated with acceptance of the pattern and binding of corresponding variables to the solution. 
\end{itemize}

\noindent
When multiple solutions are possible, returning a set or an enumerator is not usually considered due to differences in types: a variable $x$ representing a solution to $f(x)=c$ is intuitively expected to have the same type as the argument of $f$, so that $f$ can be applied to $x$. Rejecting the pattern might be a plausible approach in some applications where multiple solutions are treated as ambiguity. However, it is incapable of distinguishing the absence of a solution from an ambiguous solution, which is often desired. Binding to an arbitrary solution when several exist may be sensitive to which solution is chosen: some applications might prefer the smallest/largest one, some the smallest positive etc. We believe that, depending on the application, different semantic choices can be valid, which is why we prefer not to make such a choice for the user, but rather provide him with the means to decide.

In fact we go even further and do not require the values bound by our generalized n+k pattern to be a solution to the corresponding equation. We do this for several reasons:

\begin{itemize}
\item Due to numeric errors, truncation and integer overflow we will rarely obtain the exact algebraic solution.
\item Curve fitting can be seen as pattern matching in some application domains.
\item Sometimes we might be interested in matching against a projection of a value onto some base, and the obtained result may not necessarily yield a solution to matching against the original value.
\end{itemize}

Consider matching an expression $x+1$ with a variable $x$ of type \code{char} ranging over $[-128,127]$ against the value -128. Should the value 127 be considered a solution, since 127+1 overflows in \code{char}, resulting in -128? From the mathematical point of view it should not, but a particular application might accept such a solution for the sake of performance. Similarly, matching $3*y$ against 1.0 with a variable $y$ of type \code{double} will result in a number that is slightly different from $\frac{1}{3}$. Should such a match be accepted, the next logical request a user might have is to be able to match an expression of the form $n/m$ for integer variables $n$ and $m$ against a value 3.1415926 and get the closest fraction to it. Matching against such a pattern does not have to be imprecise, as one can match it against an object of a class representing rational numbers.

Taking the argument of precision even further, one may want to be able to do curve fitting with generalized n+k patterns for the sake of expressive syntax. Consider an object that contains a sampling of some random variable. A hypothetical match statement might query:

\begin{lstlisting}[keepspaces,columns=flexible]
match (random_variable)
{
    case Gaussian(@$\mu,\sigma^2$@): ...
    case Poisson(@$\lambda$@):       ...
    case Bernoulli(@$p$@):           ...
}
\end{lstlisting}

The fitting error threshold in such a scenario can either be global or passed as a parameter into the expressions we are matching against: e.g. \code{case Gaussian(}$0.01,\mu,\sigma^2$\code{):}. Again, the fitting does not have to be imprecise. Consider a library dealing with polynomials of arbitrary degree. Given a general polynomial, we might want to be able to check a few special cases for which analytical solutions to some larger question exist:

\begin{lstlisting}[keepspaces,columns=flexible]
match (polynomial)
{
    case a*X^1 + 1: ...
    case 2*X^2 + b*X^1 + c: ...
}
\end{lstlisting}

In such a scenario X is not a variable but a placeholder value of a kind that lets us identify the degree whose coefficient is sought. A simpler example of this kind is the decomposition of a complex number with Euler's notation $a+b*i$ for scalar variables $a$ and $b$. With variables, such a generalized n+k pattern is irrefutable for all complex numbers, but when a more specific form is queried (e.g. $3+b*i$) a given complex number may fail to match such a pattern. While matching, we will project such a complex number along its real and imaginary components and will try matching the operands of the addition using those projections. Solutions obtained along each projection may not necessarily combine into the final solution.

What all these examples have in common is not necessarily solving the equation that a generalized n+k pattern represents, but the fact that we associate certain notations with the mathematical entities they represent. The parameters of those expressions are typically associated with the parameters of the underlying mathematical object, and we perform a decomposition of that object into parts. The structure of the expression tree is an analog of a constructor symbol in structural decomposition, while its leaves are placeholders for parameters to be matched against or inferred from the mathematical object in question. Algebraic decomposition is to mathematical entities what views are to algebraic data types.

Consider for example an object representing a 2D line. At different parts of the program we might need to decompose that line differently (hypothetical syntax):

\begin{lstlisting}[keepspaces,columns=flexible]
if (line matches m*X + c) ...                        // slope-intercept form
if (line matches a*X + b*Y = c) ...                  // linear equation form
if (line matches (Y-y0)*(x1-x0)=(y1-y0)*(X-x0)) ...  // two-points form
\end{lstlisting}

As before, X and Y are not variables, but syntactic entities that let us properly decompose the parts. Matching against the slope-intercept notation will not be able to decompose a line of the form $y=c$, but otherwise still looks like solving an equation (even though quantified over all X). The other two notations include an equality sign in their expressions, which strengthens our argument that we decompose against a known notation (as opposed to solving some equation).

\subsubsection{Solvers}

The above class essentially defines the forward semantics of a family of operations. To define its backward semantics for use in n+k patterns, the user defines \emph{solvers} by overloading a function

\begin{lstlisting}
template <LazyExpression E, typename S>
bool solve(const E&, const S&);
\end{lstlisting}

The first argument of the function takes an expression template representing the expression we are matching against, while the second argument represents the expected result. The following example defines a generic solver for multiplication by a constant:

\begin{lstlisting}
template <LazyExpression E, typename T>
    requires Field<E::result_type>
bool solve(const expr<multiplication,E,value<T>>& e,
           const E::result_type& r)
{
    return solve(e.m_e1,r/eval(e.m_e2));
}
@\halfline@
template <LazyExpression E, typename T>
    requires Integral<E::result_type>
bool solve(const expr<multiplication,E,value<T>>& e,
           const E::result_type& r)
{
    T t = eval(e.m_e2);
    return r%t == 0 && solve(e.m_e1,r/t);
}
\end{lstlisting}

\noindent
Note that we overload not only on the structure of the expression, but also on the properties of its result type (or any other type involved).
In particular when the type of the result of the sub-expression models \code{Field} concept, we can rely on presence of unique inverse and simply call division without any additional checks. A similar overload for integral multiplication additionally checks that result is divisible by the constant, before generically forwarding the matching to the first argument of multiplication. This last overload combined with a similar solver for addition of integral types is everything the library needs to properly handle the definition of the \code{fib} function from \textsection\ref{sec:syn}. A solver capable of decomposing a complex value using the Euler's notation is very easy to define by fixing the structure of expression: \begin{lstlisting} template <LazyExpression E1, LazyExpression E2> requires SameType<E1::result_type,E2::result_type> bool solve(const expr<addition, expr<multiplication,E1,value<complex<E1::result_type>>>, E2 >& e, const complex<E1::result_type>& r); \end{lstlisting} \section{Views} \label{sec:view} Support of multiple bindings through layouts in our library effectively enables a facility similar to Wadler's \emph{views}. Reconsider example from \textsection\ref{sec:bg} that discusses cartesian and polar representations of complex numbers, demonstrating the notion of view. The same example recoded with our SELL looks as following: \begin{lstlisting}[keepspaces,columns=flexible] // Introduce layouts enum { cartesian = default_layout, polar }; @\halfline@ // Define bindings with them template <typename T> struct bindings<std::complex<T>> { CM(0,std::real<T>); CM(1,std::imag<T>); }; template <typename T> struct bindings<std::complex<T>, polar> { CM(0,std::abs<T>); CM(1,std::arg<T>); }; @\halfline@ // Define views template <typename T> using Cartesian = view<std::complex<T>>; template <typename T> using Polar = view<std::complex<T>, polar>; @\halfline@ std::complex<double> c; double a,b,r,f; @\halfline@ if (match<std::complex<double>>(a,b)(c)) // default if (match< Cartesian<double>>(a,b)(c)) // same as above if (match< Polar<double>>(r,f)(c)) // view \end{lstlisting} \noindent The C++ standard effectively enforces the standard library to use cartesian representation\cite[\textsection26.4-4]{C++11}. Knowing that, we choose the \code{cartesian} layout to be default, with \code{polar} being an alternative layout for complex numbers. We then define bindings for each of these layouts as well as introduce template aliases (an analog of typedefs for parameterized classes) for each of the views. Template class \code{view<T,l>} defined by the library provides a way to bind together a target type with one of its layouts into a single type. This type can be used everywhere in the library where an original target type was expected, while the library will take care of decoding the type and layout from the view and passing them along where needed. The first two match expressions are the same and incur no run-time overhead since they use default layout of the underlying type. The third match expression will implicitly convert cartesian representation into polar, thus incurring some overhead. This overhead would have been present in code that depends on polar coordinates anyways, since the user would have had to invoke the corresponding functions manually. \section{Match Statement} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{sec:impl} Our implementation of pattern matching expressions follows the naive way of essentially interpreting them through backtracking. 
On the one hand, this was a consequence of working in a library setting, where code transformations are much harder to achieve. On the other hand, from the very beginning we were trying to find an expressive alternative to object decomposition with either nested dynamic casts or the visitor design pattern, and thus were not concerned with pattern matching on multiple arguments, where a decision-tree approach becomes more efficient. Dealing with a single argument certainly leaves fewer choices for optimization, but does not eliminate them, as repeated use of a constructor-pattern with the same target type but different argument patterns essentially leads to the same inefficiencies. To tackle this issue in a library setting we rely on, and give more control to, the library user. For example, we fix the order of evaluation, but let guard-patterns be placed directly on the arguments of a constructor-pattern to let the user benefit from the conciseness of expression while holding a grip on performance. Similarly, we added \code{Alt} sub-clauses to the \code{Qua}-clause to syntactically separate fast type switching from slow sequential evaluation of pattern matching expressions. The fall-through behavior of the \code{Match}-statement allows the user to achieve the same effect directly with \code{Qua}-clauses; however, the performance overhead involved justified the addition of what is otherwise syntactic sugar.

The interpretation of pattern matching expressions with expression templates follows very closely the composition of expressions described by the abstract syntax in~\textsection\ref{sec:syn}, as well as their application to the subject expression described by the evaluation rules in \textsection\ref{sec:semme}. This section thus mainly concentrates on the efficient implementation of a match statement, as well as on unifying its syntax across the three encodings of algebraic data types outlined in \textsection\ref{sec:adt}. The discussion will largely focus on devising an efficient \emph{type-switch}, which is then used by our library as a backbone for the general match statement presented in~\textsection\ref{sec:semms}.

By encoding algebraic data types with classes we alter their semantics in two important ways: we make them \emph{extensible}, as new variants can be added by simply deriving from the base class, as well as \emph{hierarchical}, as variants can be inherited from other variants and thus form a subtyping relation between themselves~\cite{Glew99}. This is not the case with traditional algebraic data types in functional languages, where the set of variants is \emph{closed}, while the variants are \emph{disjoint}. Some functional languages, e.g. ML2000~\cite{ML2000} and Moby~\cite{Moby}, experimented with \emph{hierarchical extensible sum types}, which are closer to object-oriented classes than algebraic data types are, but, interestingly, they did not provide pattern matching facilities on them.

Working within a multi-paradigm programming language like C++, we will not be looking at algebraic data types in the closed form in which they are present in functional languages, but rather in the open/extensible form discussed by Zenger~\cite{Zenger:2001}, Emir~\cite{EmirThesis}, L\"oh~\cite{LohHinze2006}, Glew~\cite{Glew99} and others. We will thus assume an object-oriented setting where new variants can be added later and form subtyping relations with each other, including through multiple inheritance. We will look separately at the polymorphic and tagged class encodings, as our handling of these two encodings is significantly different.
Before we look into these differences in greater detail, however, we would like to look at the problem of type switching without a specific implementation in mind, as well as at the properties we would like to seek in such an implementation.

\subsection{Type Switch}

Functional languages use pattern matching to perform case analysis on a given algebraic data type. In this section we will try to generalize this construct to case analysis of hierarchical and extensible data types. The presence of such a construct allows for external function definitions by detaching a particular case analysis from the hierarchy it is performed on.

Consider a class \code{B} and a set of classes \code{Di} directly or indirectly inherited from it. An object is said to be of the \emph{most derived type} \code{D} if it was created by explicitly calling a constructor of that type. The inheritance relation on classes induces a subtyping relation on them, which in turn allows objects of a derived class to be used in places where an object of a base class is expected. The type of the variable or parameter referencing such an object is called the \emph{static type} of the object. When an object is passed by reference or by pointer, we might end up in a situation where the static type of an object is different from its most derived type, with the latter necessarily being a subtype of the former. The most derived class, along with all its base classes that are not base classes of the static type, are typically referred to as the \emph{dynamic types} of an object. At each program point the compiler knows the static type of an object, but not its dynamic types.

By a \emph{type switch} we mean a control structure that takes either a pointer or a reference to an object, called the \emph{subject}, and is capable of uncovering a reference or a pointer to a full object of a type present in the list of case clauses. Similar control structures exist in many programming languages and date back to at least Simula's Inspect statement~\cite{Simula67}. Consider an object of (most derived) type \code{D}, pointed to by a variable of static type \code{B*}: e.g. \code{B* base = new D;}. A hypothetical type switch statement, not currently supported by C++, can look as follows:

\begin{lstlisting}
switch (base)
{
    case D1: s1; ...
    case Dn: sn;
}
\end{lstlisting}

\noindent
and can be given numerous plausible semantics:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item \emph{First-fit} semantics will evaluate the first statement $s_i$ such that $D_i$ is a base class of $D$
\item \emph{Best-fit} semantics will evaluate the statement corresponding to the most derived base class $D_i$ of $D$ if it is unique (subject to ambiguity)
\item \emph{The-only-fit} semantics will only evaluate statement $s_i$ if $D_i=D$.
\item \emph{All-fit} semantics will evaluate all statements $s_i$ whose guard type $D_i$ is a subtype of $D$ (order of execution has to be defined)
\item \emph{Any-fit} semantics might choose non-deterministically one of the statements enabled by all-fit
\end{itemize}

\noindent
The list is not exhaustive and, depending on the language, any of these semantics or a combination thereof might be a plausible choice. Functional languages, for example, often prefer first-fit, while object-oriented languages would typically be inclined towards best-fit semantics. The-only-fit semantics is traditionally seen in procedural languages like C and Pascal to deal with discriminated union types.
All-fit and any-fit semantics might be seen in languages based on predicate dispatching~\cite{ErnstKC98} or guarded commands~\cite{EWD:EWD472}, where a predicate can be seen as a characteristic function of a type, while logical implication can be seen as subtyping. \subsection{Open and Efficient Type Switching} \label{sec:poets} The fact that algebraic data types in functional languages are closed allows for their efficient implementation. The traditional compilation scheme assigns unique tags to every variant of the algebraic data type and pattern matching is then simply implemented with a jump table over all tags. A number of issues in object-oriented languages makes this extremely efficient approach infeasible: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item Extensibility \item Subtyping \item Multiple inheritance \item Separate compilation \item Dynamic linking \end{itemize} \noindent Unlike functional style algebraic data types, classes are \emph{extensible} whereby new variants can be arbitrarily added to the base class in the form of derived classes. Such extension can happen in a different translation unit or a static library (subject to \emph{separate compilation}) or a dynamically linked module (subject to \emph{dynamic linking}). Separate compilation effectively implies that all the derived classes of a given class will only be known at link time, postponing thus any tag-allocation related decisions until then. The Presence of dynamic linking effectively requires the compiler to assume that the exact derived classes will only be known at run time, and not even at start-up time. %and thus any tag allocation scheme should on one hand assume presence of %unknown tags and on the other -- the necessity of maintaing the same tags for %the commonly seen classes of each dynamic module. The \emph{subtyping} relation that comes along with extensibility through subclassing effectively gives every class multiple types -- its own and the types of all its base classes. In such a scenario it is natural to require that type switching can be done not only against the exact dynamic type of an object, but also against any of its base classes (subject to our substitutability requirement). This in itself is not a problem for functional-style tag allocation as long as the set of all derived classes is known, since the compiler can partition tags of all the derived classes according to chosen semantics based on classes mentioned in case clauses. Unfortunately this will not work in the presence of dynamic linking as there might be new derived classes with tags not known at the time of partitioning and thus not mentioned in the generated jump table. \emph{Multiple inheritance} complicates things further by making each class potentially belong to numerous unrelated hierarchies. Any tag allocation scheme capable of dealing with multiple inheritance will either have to assure that generated tags satisfy properties of each subhierarchy independently or use different tags for different subhierarchies. Multiple inheritance also introduces such a phenomenon as \emph{cross-casting}, whereby a user may request to cast pointers between unrelated classes, since they can potentially become base classes of a later defined class. From an implementation point of view this means that not only do we have to be able to check that a given object belongs to a given class (type testing), but also be able to find a correct offset to it from a given base class (type casting). 
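The following minimal sketch (the class names are ours and purely illustrative) shows the cross-casting phenomenon and why it requires both a type test and an offset adjustment:

\begin{lstlisting}
struct A { virtual ~A() {} };   // two initially unrelated polymorphic classes
struct B { virtual ~B() {} };
struct C : A, B {};             // a later-defined class deriving from both
@\halfline@
A* a = new C;                   // static type A*, most derived type C
B* b = dynamic_cast<B*>(a);     // cross-cast: succeeds because the complete object
                                // is a C, and the result is adjusted by the offset
                                // of the B subobject within C
\end{lstlisting}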
While looking at various schemes for implementing type switching we noted down a few questions that might help evaluate and compare solutions: \begin{enumerate} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \item Can the solution handle base classes in case clauses? \item Will it handle the presence of base and derived classes in the same match statement? \item Will it work with derived classes coming from a DLL? \item Can it cope with multiple inheritance (repeated, virtual)? \item Can independently developed DLLs that either extend classes involved in type switching or do type switching themselves be loaded together without any integration efforts? \item Are there any limitations on the number and or shape of class extensions? \item What is the complexity of performing matching, based on the number of case clauses and the number of possible types? \end{enumerate} The number of possible types in the last question refers to the number of subtypes of the static type of the subject, not all the types in the program. Several solutions discussed below depend on the number of case clauses in the match statement, which raises the question of how many such clauses a typical program might have. The C++ pretty-printer for Pivot we implemented using our pattern matching techniques originally had 8 match statements with 5, 7, 8, 10, 15, 17, 30 and 63 case clauses each. While experimenting with probability distributions of various classes to minimize the number of conflicts (see \textsection\ref{sec:moc}), we had to associate probabilities with classes and implemented it with a match statement over all 160 nodes in the Pivot's class hierarchy. With Pivot having the smallest number of node kinds among the compiler frameworks we had a chance to work with, we expect a similar or larger number of case clauses in other compiler applications. Instead of starting with an efficient solution and trying to make it open, let us start with an open solution and try to make it efficient. An obvious solution that will pass the above checklist can look like the following: \begin{lstlisting} if (D1* derived = dynamic_cast<D1*>(base)) { s1; } else if (D2* derived = dynamic_cast<D2*>(base)) { s2; } else ... if (Dn* derived = dynamic_cast<Dn*>(base)) { sn; } \end{lstlisting} \noindent Despite the obvious simplicity, its main drawback is performance: a typical implementation of \code{dynamic_cast} might take time proportional to the distance between base and derived classes in the inheritance tree~\cite{XXXXX}. What is worse, is that the time to uncover the type in the $i^{th}$ case clause is proportional to $i$, while failure to match will always take the longest. This linear increase can be seen in the Figure~\ref{fig:DCastVis1}, where the above cascading-if was applied to a flat hierarchy encoding an algebraic data type with 100 variants. The same type-switching functionality implemented with the visitor design pattern took only 28 cycles regardless of the case.\footnote{Each case $i$ was timed multiple times, thus turning the experiment into a repetitive benchmark described in \textsection\ref{sec:eval}. In a more realistic setting, represented by random and sequential benchmarks, the cost of double dispatch was varying between 52 and 55 cycles.} This is more than 3 times faster than the 93 cycles it took to uncover even the first case with \code{dynamic_cast}, while it took 22760 cycles to uncover the last. 
\begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{DCast-vs-Visitors1.png} \caption{Type switching based on na\"ive techniques} \label{fig:DCastVis1} \end{figure} When the class hierarchy is not flat and has several levels, the above cascading-if can be replaced with a decision tree that tests base classes first and thus eliminates many of the derived classes from consideration. This approach is used by Emir to deal with type patterns in Scala~\cite[\textsection 4.2]{EmirThesis}. The intent is to replace a sequence of independent dynamic casts between classes that are far from each other in the hierarchy with nested dynamic casts between classes that are close to each other. Another advantage is the possibility to fail early: if the type of the subject does not match any of the clauses, we will not have to try all the cases. A flat hierarchy, which will likely be formed by the leaves in even a multi-level hierarchy, will not be able to benefit from this optimization and will effectively degrade to the above cascading-if. Nevertheless, when applicable, the optimization can be very useful and its benefits can be seen in Figure~\ref{fig:DCastVis1} under ``Decision-Tree + dynamic\_cast''. The class hierarchy for this timing experiment formed a perfect binary tree with classes number 2*N and 2*N+1 derived from a class with number N. The structure of the hierarchy also explains the repetitive pattern of timings. The above solution either in a form of cascading-if or as a decision tree can be significantly improved by lowering the cost of a single \code{dynamic_cast}. We devised an asymptotically constant version of this operator that we call \code{memoized_cast} in \textsection\ref{sec:memcast}. As can be seen from the graph titled ``Cascading-If + memoized\_cast'', it speeds up the above cascading-if solution by a factor of 18 on average, as well as outperforms the decision-tree based solution with dynamic\_cast for a number of case clauses way beyond those that can happen in a reasonable program. We leave the discussion of the technique until \textsection\ref{sec:memcast}, while we keep it in the chart to give perspective on an even faster solution to dynamic casting. The slowest implementation in the chart based on exception handling facilities of C++ is discussed in \textsection\ref{sec:xpm}. The approach of Gibbs and Stroustrup~\cite{FastDynCast} employs divisibility of numbers to obtain a tag allocation scheme capable of performing type testing in constant time. Extended with a mechanism for storing offsets required for this-pointer adjustments, the technique can be used for extremely fast dynamic casting on quite large class hierarchies. The idea is to allocate tags for each class in such a way that tag of a class D is divisible by a tag of a class B if and only if class D is derived from class B. For comparison purposes we hand crafted this technique on the above flat and binary-tree hierarchies and then redid the timing experiments from Figure~\ref{fig:DCastVis1} using the fast dynamic cast. The results are presented in Figure~\ref{fig:DCastVis2}. For reference purposes we retained ``Visitor Design Pattern'' and ``Cascading-If + memoized\_cast'' timings from Figure~\ref{fig:DCastVis1} unchanged. Note that the Y-axis has been scaled-up 140 times, which is why the slope of ``Cascading-If + memoized\_cast'' timings is so much steeper. 
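To make the idea concrete, here is a minimal sketch of divisibility-based type testing; the tag values are purely illustrative, and the actual scheme of Gibbs and Stroustrup additionally stores the offsets needed for this-pointer adjustment:

\begin{lstlisting}
// Hierarchy: B is the root; D1 and D2 derive from B; D12 derives from D1.
// Each class multiplies its parent's tag by a fresh prime.
enum tags : unsigned { tag_B = 2, tag_D1 = 2*3, tag_D2 = 2*5, tag_D12 = 2*3*7 };
@\halfline@
inline bool is_derived_from(unsigned derived_tag, unsigned base_tag)
{
    return derived_tag % base_tag == 0; // constant-time type test
}
\end{lstlisting}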
\begin{figure}[htbp] \centering \includegraphics[width=0.47\textwidth]{DCast-vs-Visitors2.png} \caption{Type switching based on the fast dynamic cast of Gibbs and Stroustrup~\cite{FastDynCast}} \label{fig:DCastVis2} \end{figure} As can be seen from the figure the use of our memoized\_cast implementation can get close in terms of performance to the fast dynamic cast, especially when combined with decision trees. An important difference that cannot be seen from the chart, however, is that the performance of memoized\_cast is asymptotic, while the performance of fast dynamic cast is guaranteed. This happens because the implementation of memoized\_cast will incur an overhead of a regular dynamic\_cast call on every first call with a given most derived type. Once that class is memoized, the performance will remain as shown. Averaged over all calls with a given type we can only claim we are asymptotically as good as fast dynamic cast. Unfortunately fast dynamic casting is not truly open to fully satisfy our checklist. The structure of tags required by the scheme limits the number of classes it can handle. A 32-bit integer is estimated to be able to represent 7 levels of a class hierarchy that forms a binary tree (255 classes), 6 levels of a similar ternary tree hierarchy (1093 classes) or just one level of a hierarchy with 9 base classes -- multiple inheritance is the worst case scenario of the scheme that quickly drains its allocation possibilities. Besides, similarly to other tag allocation schemes, presence of class extensions in \emph{Dynamically Linked Libraries} (DLLs) will likely require an integration effort to make sure different DLLs are not reusing prime numbers in a way that might result in an incorrect dynamic cast. A number of other constant-time techniques for class-membership testing is surveyed by Gil and Zibin~\cite[\textsection 4]{PQEncoding}. They are intended for type testing, and thus will have to be combined with decision trees for type switching, resulting in similar to fast dynamic cast performance. They too assume access to the entire class hierarchy at compile time and thus are not open. In view of the predictably-constant dispatching overhead of the visitor design pattern, it is clear that any open solution that will have a non-constant dispatching overhead will have a poor chance of being adopted. Multi-way switch on sequentially allocated tags~\cite{Spuler94} was one of the few techniques that could achieve constant overhead, and thus compete with and even outperform visitors. Unfortunately the scheme has problems of its own that make it unsuitable for truly open type-switching and here is why. %To better understand the problem let us look at some existing solutions to type %switching that we found to be used in practice. %From our experience on this project we have noticed that we can only compete %with visitors when switch statements are implemented with a jump table. As soon %as compiler was putting even a single branch into the decision tree of cases, %the performance was degraded significantly. From this perspective we do not %regard solutions based on decision trees as efficient, since they do not let us %compete compete with the visitors solution. The simple scheme of assigning a unique tag per variant (instantiatable class here) will not pass our first question because the tags of base and derived classes will have to be different if the base class can be instantiated on its own. 
In other words we will not be able to land on a case label of a base class, while having a derived tag only. The already mentioned partitioning of tags of derived classes based on the classes in case clauses also will not help as it assumes knowledge of all the classes and thus fails extensibility through DLLs. In practical implementations hand crafted for a specific class hierarchy, tags often are not chosen arbitrarily, but to reflect the subtyping relation of the underlying hierarchy. Switching on base classes in such a setting will typically involve a call to some function $f$ that converts derived class' tag into a base class' tag. An example of such a scheme would be having a certain bit in the tag set for all the classes derived from a given base class. Unfortunately this solution creates more problems than it solves. First of all the solution will not be able to recognize an exceptional case where most of the derived classes should be handled as a base class, while a few should be handled specifically. Applying the function $f$ puts several different types into an equivalence class with their base type, making them indistinguishable from each other. Secondly, the assumed structure of tags is likely to make the set of tags sparse, effectively forcing the compiler to use a decision tree instead of a jump table to implement the switch. Even though conditional jump is reported to be faster than indirect jump on many computer architectures~\cite[\textsection 4]{garrigue-98}, this did not seem to be the case in our experiments. Splitting of a jump table into two with a condition, that was sometimes happening because of our case label allocation scheme, was resulting in a noticeable degradation of performance in comparison to a single jump table. Besides, as was seen in the scheme of Gibbs and Stroustrup, the assumed structure of tags can also significantly decrease the number of classes a given allocation scheme can handle. It is also interesting to note that even though their scheme can be easily adopted for type switching with decision trees, it is not easily adoptable for type switching with jump tables: in order to obtain tags of base classes we will have to decompose the derived tag into primes and then find all the dividers of the tag present in case clauses. To summarize, truly open and efficient type switching is a non-trivial problem. The approaches we found in the literature were either open or efficient, but not both. Efficient implementation was typically achieved by sealing the class hierarchy and using a jump table on sequential tags. Open implementations were resorting to type testing and decision trees, which was not efficient. We are unaware of any efficient tag allocation scheme that can be used in a truly open scenario. %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5555 %\noindent %We chose to give it a first-fit semantics in our library as it was resembling %pattern matching facilities of other languages and was the most intuitive. The %following code can be generated to implement it: % %\begin{lstlisting} %if (D1* derived = dynamic_cast<D1*>(base)) { s1; } else %if (D2* derived = dynamic_cast<D2*>(base)) { s2; } else %... %if (Dn* derived = dynamic_cast<Dn*>(base)) { sn; } %\end{lstlisting} %\noindent %Note that leaving \code{else} out will effectively turn it into an all-fit %statement with enabled statements executed in lexicographical order. 
%
%The above code is easy to understand but is extremely inefficient as for an
%object of dynamic type $D_i$ we will have to perform $i-1$ dynamic casts that
%fail first. The diagram below compares the times spent by visitors and the above
%type switch statement to uncover the $i^{th}$ case. We postpone the discussion
%of \code{memoized_cast} until section \textsection\ref{}, here we would only
%like to notice that even though faster than the actual dynamic cast it also bears
%a linear coefficient, not present in visitors.

\section{Solution for Polymorphic Classes}
\label{sec:copc}

Our handling of type switches for the polymorphic and tagged encodings differs, with each approach having its pros and cons, described in detail in \textsection\ref{sec:cmp}. In this section we will concentrate on the truly open type switch for the polymorphic encoding. The type switch for the tagged encoding (\textsection\ref{sec:cotc}) is simpler and more efficient; however, making it open would eradicate its performance advantages. The difference in performance is the price we pay for keeping the solution open.

The core of the proposal relies on two key aspects of C++ implementations:

\begin{enumerate}
\item a constant-time access to the virtual table pointer embedded in an object of dynamic class type;
\item injectivity of the relation between an object's inheritance path and the virtual table pointer extracted from that object.
\end{enumerate}

\subsection{Virtual Table Pointers}
\label{sec:vtp}

Before we discuss our solution we would like to talk about certain properties of the C++ run-time system that we rely on. In particular, we show that under certain conditions the compiler cannot share the same virtual tables between different classes or subobjects of the same class. This allows us to use virtual table pointers to \emph{uniquely} identify the subobjects within the most derived class.

Strictly speaking, the C++ standard~\cite{C++0x} does not require implementations to use any specific technique (e.g. virtual tables) to implement virtual functions; however, interoperability requirements have forced many compiler vendors to design a set of rules called the Common Vendor Application Binary Interface (the C++ ABI)~\cite{C++ABI}. Most C++ compilers today follow these rules, with the notable exception of Microsoft Visual C++. The technique presented here will work with any C++ compiler that follows the C++ ABI. Microsoft's own ABI is not publicly available and thus we cannot formally verify that it satisfies our requirements. Nevertheless, we did run numerous experiments with various class hierarchies and have sufficient confidence that our approach can be used in Visual C++. This is why we include experimental results for this compiler as well.

Besides single inheritance, which is supported by most object-oriented languages, C++ supports multiple inheritance of two kinds: repeated and virtual (shared). \emph{Repeated inheritance} creates multiple independent subobjects of the same type within the most derived type. \emph{Virtual inheritance} creates only one shared subobject, regardless of the inheritance paths. Because of this peculiarity of the C++ type system it is not sufficient to talk only about the static and dynamic types of an object -- one has to talk about a \emph{subobject} of a certain static type accessible through a given inheritance path within a dynamic type.
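A minimal sketch of the two kinds of multiple inheritance (class names are ours):

\begin{lstlisting}
struct A { virtual ~A() {} };
struct B : A {};  struct C : A {};
struct D : B, C {};          // repeated: D contains two distinct A subobjects
@\halfline@
struct X : virtual A {};  struct Y : virtual A {};
struct Z : X, Y {};          // virtual (shared): Z contains a single A subobject
\end{lstlisting}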
\begin{figure}[tbp]
\centering
\includegraphics[width=0.47\textwidth]{Hierarchies.png}
\caption{Single inheritance, repeated multiple inheritance and virtual multiple inheritance}
\label{fig:hierarchy}
\end{figure}

\noindent
Note that the figure above portrays the subobject relation, not the inheritance relation. The notion of subobject has been formalized before~\cite{RF95,WNST06,RDL11}. We follow here the presentation of Ramananandro et al.~\cite{RDL11}. A base class subobject of a given complete object is represented by a pair $\sigma = \langle h,l\rangle$ with $h \in \{\mathrm{Repeated},\mathrm{Shared}\}$ representing the kind of inheritance (single inheritance is $\mathrm{Repeated}$ with one base class) and $l$ representing the path in a non-virtual inheritance graph. They introduce a predicate $C\leftY\sigma\rightY A$ meaning that $\sigma$ designates a subobject of static type $A$ within the most derived object of type $C$.

A class that declares or inherits a virtual function is called a \emph{polymorphic class}~\cite[\textsection 10.3]{C++0x}. The C++ ABI in turn defines a \emph{dynamic class} to be a class requiring a virtual table pointer (because it or its bases have one or more virtual member functions or virtual base classes). A polymorphic class is thus a dynamic class by definition. A \emph{virtual table pointer} (vtbl-pointer) is a member of the object's layout pointing to a virtual table. A \emph{virtual table} is a table of information used to dispatch virtual functions, to access virtual base class subobjects, and to access information for \emph{RunTime Type Identification} (RTTI). Because of repeated inheritance, an object of a given type may have several vtbl-pointers in it. Each such pointer corresponds to one of the polymorphic base classes. Given an object $a$ of static type $A$ that has $k$ vtbl-pointers in it, we will use the same notation we use for regular fields to refer to them: $a.\textit{vtbl}_i$.

A \emph{primary base class} for a dynamic class is the unique base class (if any) with which it shares the virtual table pointer at offset 0. The data layout procedure for non-POD types described in \textsection2.4 of the C++ ABI~\cite{C++ABI} requires a dynamic class either to allocate its vtbl-pointer at offset 0 or to share the virtual table pointer of its primary base class, which is by definition at offset 0. For our purpose this means that we can rely on a virtual table pointer always being present at offset 0 for all dynamic classes, and thus for all polymorphic classes.

\begin{lemma}
In an object layout that adheres to the C++ ABI, a polymorphic class always has a virtual table pointer at offset 0.
\label{lem:vtbl}
\end{lemma}

\noindent
Knowing how to extract a vtbl-pointer, and knowing that all objects of the same most derived type share the same vtbl-pointers, the idea is to use their values to uniquely identify the type and the subobject within it. Unfortunately, nothing in the C++ ABI states that these pointers should be unique. A popular optimization technique lets the compiler share the virtual table of a derived class with its primary base class as long as the derived class does not override any virtual functions. Use of such an optimization would violate the uniqueness of vtbl-pointers; however, we show below that in the presence of RTTI, a C++ ABI-compliant implementation is guaranteed to have different values of vtbl-pointers in different subobjects.
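To illustrate the property we are about to prove, the following self-contained sketch (hypothetical classes, relying on the ABI object layout rather than on anything guaranteed by standard C++) reads the vtbl-pointers of the two \code{A}-subobjects of the same complete object; under an ABI-compliant compiler with RTTI enabled the two printed values differ, because they identify different subobjects:

\begin{lstlisting}
#include <cstdint>
#include <iostream>

struct A { virtual ~A() {} }; // hypothetical hierarchy with repeated
struct B : A {};              // multiple inheritance: D has two
struct C : A {};              // distinct A-subobjects
struct D : B, C {};

// Reads the vtbl-pointer assumed by the C++ ABI to reside at offset 0
inline std::intptr_t vtbl_of(const void* p)
{
    return *reinterpret_cast<const std::intptr_t*>(p);
}

int main()
{
    D d;
    A* a1 = static_cast<B*>(&d); // A-subobject reached through B
    A* a2 = static_cast<C*>(&d); // A-subobject reached through C
    std::cout << std::hex << vtbl_of(a1) << ' ' << vtbl_of(a2) << '\n';
}
\end{lstlisting}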
%C++ standard requires an argument of \code{dynamic_cast} to be a pointer to or
%an lvalue of a polymorphic type when performing \emph{downcast} -- a cast from
%base to derived~\cite[\textsection 5.2.7-6]{C++0x}. We can thus always safely
%extract virtual table pointer from offset 0 of any valid argument to
%\code{dynamic_cast}.

%Similarly, each class that has virtual member functions or virtual bases has an
%associated set of virtual tables. There may be multiple virtual tables for a
%particular class, if it is used as a base class for other classes. However, the
%virtual table pointers within all the objects (instances) of a particular
%most-derived class point to the same set of virtual tables.

The exact content of the virtual table is not important for our discussion, but we would like to point out a few fields in it. The following definitions are copied verbatim from the C++ ABI~\cite[\textsection 2.5.2]{C++ABI}:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item The \emph{typeinfo pointer} points to the typeinfo object used for RTTI. It is always present.
\item The \emph{offset to top} holds the displacement to the top of the object from the location within the object of the virtual table pointer that addresses this virtual table, as a \code{ptrdiff_t}. It is always present.
\item \emph{Virtual Base (vbase) offsets} are used to access the virtual bases of an object. Such an entry is added to the derived class object address (i.e. the address of its virtual table pointer) to get the address of a virtual base class subobject. Such an entry is required for each virtual base class.
\end{itemize}

\noindent
Given a virtual table pointer \code{vtbl}, we will refer to these fields as \code{rtti(vtbl)}, \code{off2top(vtbl)} and \code{vbase(vtbl)} respectively. We will also assume the presence of a function $\mathit{offset}(\sigma)$ that defines the offset of the base class identified by the end of the path $\sigma$ within the class identified by its first element.

\begin{theorem}
In an object layout that adheres to the C++ ABI with runtime type information present, equality of the virtual table pointers of two objects of the same static type implies that both belong to subobjects with the same inheritance path within the same most derived type.
\begin{eqnarray*}
\forall a_1, a_2 : A\ |\ a_1\in C_1\leftY\sigma_1\rightY A \wedge a_2\in C_2\leftY\sigma_2\rightY A \\
a_1.\textit{vtbl}_i = a_2.\textit{vtbl}_i \Rightarrow C_1 = C_2 \wedge \sigma_1 = \sigma_2
\end{eqnarray*}
\label{thm:vtbl}
\end{theorem}

\begin{proof}
Let us first assume that $a_1.\textit{vtbl}_i = a_2.\textit{vtbl}_i$ but $C_1 \neq C_2$. In this case we have \code{rtti}$(a_1.\textit{vtbl}_i) = $\code{rtti}$(a_2.\textit{vtbl}_i)$. By definition \code{rtti}$(a_1.\textit{vtbl}_i) = C_1$ while \code{rtti}$(a_2.\textit{vtbl}_i) = C_2$, which contradicts the assumption $C_1 \neq C_2$. Thus $C_1 = C_2 = C$.

Let us now assume that $a_1.\textit{vtbl}_i = a_2.\textit{vtbl}_i$ but $\sigma_1 \neq \sigma_2$. Let $\sigma_i=\langle h_i,l_i\rangle, i=1,2$. If $h_1 \neq h_2$ then one of them refers to a virtual base while the other refers to a repeated one. Assuming $h_1$ refers to a virtual path, \code{vbase}$(a_1.\textit{vtbl}_i)$ has to be defined inside the virtual table according to the ABI, while \code{vbase}$(a_2.\textit{vtbl}_i)$ must not be. This would again contradict the assumption that both vtbl-pointers refer to the same virtual table. We thus have $h_1 = h_2 = h$.
If $h = \mathrm{Shared}$ then there is only one path to such an $A$ in $C$, which would contradict $\sigma_1 \neq \sigma_2$. If $h = \mathrm{Repeated}$ then we must have $l_1 \neq l_2$. In this case let $k$ be the first position in which they differ: $l_1^j=l_2^j \forall j<k \wedge l_1^k\neq l_2^k$. Since our class $A$ is a base class for classes $l_1^k$ and $l_2^k$, both of which are in turn base classes of $C$, the object identity requirement of C++ implies that the relevant subobjects of type $A$ have different offsets within class $C$: $\mathit{offset}(\sigma_1)\neq \mathit{offset}(\sigma_2)$. However, $\mathit{offset}(\sigma_1)=$\code{off2top}$(a_1.\textit{vtbl}_i)=$\code{off2top}$(a_2.\textit{vtbl}_i)=\mathit{offset}(\sigma_2)$ since $a_1.\textit{vtbl}_i = a_2.\textit{vtbl}_i$, which contradicts the fact that the offsets are different.
\end{proof}

\noindent
The converse is not true in general, as there may be duplicate virtual tables for the same type present at run time. This happens in many C++ implementations in the presence of DLLs, since a class compiled into both the executable and a DLL it loads may have a copy of its virtual tables in each of the two binaries. Note also that we require both static types to be the same. Dropping this requirement and claiming that equality of vtbl-pointers also implies equality of the static types would not be true in general, because a derived class shares its vtbl-pointer with its primary base class (see Lemma~\ref{lem:vtbl}). The theorem could be reformulated to state that one static type will necessarily have to be a subtype of the other. The current formulation is sufficient for our purposes, while the reformulation would have required a more elaborate discussion of the algebra of subobjects~\cite{RDL11}, which we touch only briefly.

\begin{corollary}
The result of a \code{dynamic_cast} can be reapplied to a different instance from within the same subobject.

$\forall A,B \forall a_1, a_2 : A\ |\ a_1.\textit{vtbl}_i = a_2.\textit{vtbl}_i \Rightarrow$ \\
\code{dynamic_cast<B>}$(a_1).\textit{vtbl}_j = $\code{dynamic_cast<B>}$(a_2).\textit{vtbl}_j \vee$ \\
\code{dynamic_cast<B>}$(a_1)$ throws $\wedge$ \code{dynamic_cast<B>}$(a_2)$ throws.
\label{crl:vtbl}
\end{corollary}

\noindent
During construction and destruction of an object, the value of a given vtbl-pointer may change. In particular, the value will reflect the dynamic type of the object as being the type of its fully constructed part only. However, this does not affect our reasoning, since during such a transition we also treat the object as having the type of its fully constructed base only. Such an interpretation is in line with the C++ semantics for virtual function calls and the use of RTTI during construction and destruction of an object. Once the complete object is fully constructed, the value of the vtbl-pointer remains the same for the lifetime of the object.

\subsection{Memoization Device}
\label{sec:memdev}

Let us look at a slightly more general problem than type switching. Consider a generalization of the switch statement that takes predicates on a subject as its clauses and executes the first statement $s_i$ whose predicate is enabled:

\begin{lstlisting}
switch (x)
{
    case P1(x): s1;
    ...
    case Pn(x): sn;
}
\end{lstlisting}

\noindent
Assuming that the predicates depend only on $x$ and do not involve any side effects, we can be sure that the next time we come to such a switch with the same value, the same predicate will be enabled first.
Thus, we would like to avoid evaluating the predicates and jump straight to the statement guarded by the first enabled one. In a way, we would like the switch to memoize which case is enabled for a given value of $x$. The idea is to generate a simple cascading-if statement interleaved with jump targets and with instructions that associate the original value with the enabled target. The code before the statement looks up whether the association for a given value has already been established and, if so, jumps directly to the target; otherwise the sequential execution of the cascading-if is started. To ensure that the actual code associated with the predicates remains unaware of this optimization, the code preceding it after the jump target must re-establish any invariant guaranteed by sequential execution (\textsection\ref{sec:vtblmem}).

The above code can easily be produced in a compiler setting, but producing it in a library setting is a challenge. Inspired by Duff's Device~\cite{Duff}, we devised a construct that we call the \emph{Memoization Device}, which does just that in standard C++:

\begin{lstlisting}
typedef decltype(x) T;
static std::unordered_map<T,int> jump_target_map;

switch (int& target = jump_target_map[x])
{
default: // entered when we have not seen x yet
    if (P1(x)) { target = 1; case 1: s1; } else
    if (P2(x)) { target = 2; case 2: s2; } else
    ...
    if (Pn(x)) { target = @$n$@; case @$n$@: sn; } else
        target = @$n+1$@;
case @$n+1$@: ; // none of the predicates is true on x
}
\end{lstlisting}

\noindent
The static \code{jump_target_map} hash table is allocated upon the first entry to the function. The map is initially empty and, following its semantics, a request for a key $x$ not yet in the map results in the allocation of a new entry whose associated data is default-initialized (to 0 for \code{int}). Since there is no case label 0 in the switch, the default case is taken, which, in turn, initiates the sequential execution of the interleaved cascading-if statement. The sequential execution keeps checking the predicates $P_j(x)$ until the first predicate $P_i(x)$ that returns true. By assigning $i$ to \code{target} we effectively associate $i$ with $x$, since \code{target} is just a reference to \code{jump_target_map[x]}. This association makes sure that the next time we are called with the value $x$ we jump directly to the label $i$. When none of the predicates returns true, we record the absence of an enabled predicate by associating $x$ with $n+1$, so that the next time we can jump directly to the end of the switch on $x$.

The above construct effectively gives the entire statement first-fit semantics. In order to evaluate all the statements whose predicates are true, and thus give the construct all-fit semantics, we may want to preserve the fall-through behavior of the switch. In this case we can still skip the initial predicates that return false and start from the first successful one.
This can easily be achieved by removing all the \code{else} keywords, thus making the if-statements independent, and by wrapping each assignment to \code{target} in a condition that makes sure only the first successful predicate executes it:

\begin{lstlisting}
if (Pi(x)) { if (target == 0) target = @$i$@; case @$i$@: si; }
\end{lstlisting}

\noindent
Note that the protocol that has to be maintained by this structure does not depend on the actual values of the case labels. We only require them to be distinct from each other and from the predefined default value (0 in our case). The default clause can be replaced with a case clause for the predefined value; however, keeping the default clause results in faster code. A more important performance consideration is to keep the label values close to each other. Not following this rule may result in the compiler choosing a decision tree over a jump table to implement the switch, which in our experience significantly degrades performance.

The first-fit semantics is not an inherent property of the memoization device, however. Assuming that the conditions are either mutually exclusive or imply one another, we can build a decision-tree-based memoization device that effectively has \emph{most-specific} semantics -- an analog of best-fit semantics in predicate dispatching~\cite{ErnstKC98}. Imagine that the predicates with numbers $2i$ and $2i+1$ are mutually exclusive and each implies the predicate with number $i$, i.e. $\forall x \in \mathsf{Domain}(P)$

\begin{eqnarray*}
P_{2i+1}(x)\rightarrow P_i(x) \wedge P_{2i}(x)\rightarrow P_i(x) \wedge \neg(P_{2i+1}(x) \wedge P_{2i}(x))
\end{eqnarray*}

\noindent
The following decision-tree-based memoization device executes the statement $s_i$ associated with the \emph{most-specific} predicate $P_i$ (i.e. the predicate that implies all the other predicates true on $x$) that evaluates to true, or skips the entire statement if none of the predicates is true on $x$ (the dedicated label 8 records the latter case, keeping it distinct from the default value 0):

\begin{lstlisting}
switch (int& target = jump_target_map[x])
{
default:
    if (P1(x)) {
        if (P2(x)) {
            if (P4(x)) { target = 4; case 4: s4; } else
            if (P5(x)) { target = 5; case 5: s5; } else
                       { target = 2; case 2: s2; }
        } else
        if (P3(x)) {
            if (P6(x)) { target = 6; case 6: s6; } else
            if (P7(x)) { target = 7; case 7: s7; } else
                       { target = 3; case 3: s3; }
        } else         { target = 1; case 1: s1; }
    } else             { target = 8; case 8: ; } // no predicate is true on x
}
\end{lstlisting}

\noindent
An example of predicates that satisfy this condition are class-membership tests, where the truth of a predicate that tests membership in a derived class implies the truth of a predicate that tests membership in its base class. Our library solution prefers the simpler cascading-if approach only because the necessary structure of the code can be laid out directly with macros. A compiler solution would use the decision-tree approach whenever possible to lower the cost of the first match from linear in the number of case clauses to logarithmic, as seen in Figure~\ref{fig:DCastVis1}. When the predicates do not satisfy the implication or mutual-exclusion properties mentioned above, a compiler of a language based on predicate dispatching would typically issue an ambiguity error. Some languages might choose to resolve the ambiguity according to lexical or some other ordering. In any case, the presence of ambiguities or their resolution has nothing to do with the memoization device itself. The latter only helps optimize the execution once a particular choice of semantics has been made and the code implementing it has been laid out.
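To make the mechanics of the device concrete, here is a minimal, self-contained variant of the cascading-if form dispatching on two hypothetical integer predicates (\code{is_negative} and \code{is_even} are illustrative and not part of the library; a catch-all clause replaces the empty ``no match'' clause):

\begin{lstlisting}
#include <cstdio>
#include <unordered_map>

inline bool is_negative(int x) { return x < 0; }      // hypothetical predicate
inline bool is_even(int x)     { return x % 2 == 0; } // hypothetical predicate

// First-fit predicate switch with memoized jump targets: the predicates
// are evaluated only the first time a particular value of x is seen.
void classify(int x)
{
    static std::unordered_map<int,int> jump_target_map;

    switch (int& target = jump_target_map[x])
    {
    default: // x seen for the first time: run the cascading-if
        if (is_negative(x)) { target = 1;
    case 1: std::printf("%d is negative\n", x);             } else
        if (is_even(x))     { target = 2;
    case 2: std::printf("%d is even\n", x);                 } else
                            { target = 3;
    case 3: std::printf("%d is odd and non-negative\n", x); }
    }
}
\end{lstlisting}

\noindent
Calling \code{classify(7)} twice, the second call jumps directly to the third clause without re-evaluating either predicate.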
The main advantage of the memoization device is that it can be built around almost any code, provided that we can re-establish the invariants guaranteed by sequential execution. Its main disadvantage is the size of the hash table, which grows proportionally to the number of different values seen. Fortunately, the values can often be grouped into equivalence classes such that values in the same class enable the same predicates. The map can then associate the equivalence class of a value with a target instead of associating the value itself with it. The next subsection does exactly that for polymorphic objects.

\subsection{Vtable Pointer Memoization}
\label{sec:vtblmem}

The memoization device can almost immediately be used for multi-way type testing by using \code{dynamic_cast<Di>} as the predicate $P_i$. This cannot be considered a type switching solution, however, as one would also expect to have a reference to the subject cast to the uncovered type. Using a \code{static_cast<Di>} upon a successful type test would have been a solution if we did not have multiple inheritance; it certainly can be used as such in languages with only single inheritance. For a fully functional C++ solution, we combine the memoization device with the properties of virtual table pointers into a \emph{Vtable Pointer Memoization} technique.

We saw that vtbl-pointers uniquely determine the subobject within an object (Theorem~\ref{thm:vtbl}), while the result of a \code{dynamic_cast} can be reapplied from the same subobject (Corollary~\ref{crl:vtbl}). The idea is thus to group all the objects according to the value of their vtbl-pointer and to associate both the jump target and the required offset with it through the memoization device:

\begin{lstlisting}
typedef std::pair<ptrdiff_t,size_t> type_switch_info;
static std::unordered_map<intptr_t, type_switch_info> jump_target_map;
intptr_t vtbl = *reinterpret_cast<const intptr_t*>(p);
type_switch_info& info = jump_target_map[vtbl];
const void* tptr;
switch (info.second) ...
\end{lstlisting}

\noindent
We use the virtual table pointer extracted from the polymorphic object pointed to by \code{p} as the key for the association. The value stored along with the key now keeps both the jump target for the switch and the memoized offset for the dynamic cast. The code for the $i^{th}$ case evaluates the required offset on the first entry and associates it, together with the target, with the vtbl-pointer of the subject. The call to \code{adjust_ptr<Di>} re-establishes the invariant that \code{matched} is a properly cast reference to the \code{Di} part of the subject \code{p}.

%The condition of the inner if-statement is only needed to implement the
%sequential all-fit semantics and can be removed when fall-through behavior is
%not required.

\begin{lstlisting}
if (tptr = dynamic_cast<const Di*>(p))
{
    if (info.second == 0) // supports fall-through
    {
        info.first  = intptr_t(tptr)-intptr_t(p); // offset
        info.second = @$i$@;                      // jump target
    }
case @$i$@: // @$i$@ is a constant here - clause's position in switch
    auto matched = adjust_ptr<Di>(p,info.first);
    si;
}
\end{lstlisting}

\noindent
The main condition remains the same. We keep checking whether \code{info.second} is still 0 because we allow fall-through semantics here, letting the user break out of the switch when needed. Upon the first entry we compute the offset that the dynamic cast performed and save it, together with the target, in the entry associated with the virtual table pointer.
On subsequent entries we jump directly to the case label and restore the invariant of \code{matched} being a properly cast reference to the derived object. The use of the dynamic cast makes a huge difference in comparison to the use of the static cast we dismissed above. First of all, the C++ type system is much more restrictive about static casts, and many cases where a static cast is not allowed can still be handled by a dynamic cast. Examples of these include downcasting from an ambiguous base class or cross-casting between unrelated base classes.

An important benefit we get from this optimization is that we no longer store the actual values (pointers to objects) in the hash table, but group them into equivalence classes based on their virtual table pointers. The number of such pointers in a program is always bounded by $O(|A|)$, where $A$ is the static type of the subject and $|A|$ is the number of classes directly or indirectly derived from $A$. The linear coefficient hidden in the big-O notation reflects the possibility of multiple vtbl-pointers in derived classes due to the use of multiple inheritance.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.47\textwidth]{DCast-vs-Visitors3.png}
\caption{Time to uncover i\textsuperscript{th} case. X-axis - case i; Y-axis - cycles per iteration}
\label{fig:DCastVis3}
\end{figure}

The most important benefit of this optimization, however, is the constant time on average needed to dispatch to each of the case clauses, regardless of their position in the type switch. The net effect of this optimization can be seen in Figure~\ref{fig:DCastVis3}. We can see that the time does not increase with the position of the case we are handling. The spikes represent unrelated activity on the computer during measurement and are present in both measurements. The constant time on average comes from the average complexity of accessing an element of an \code{unordered_map}, whose worst-case complexity can be proportional to the size of the map. We show in the next section, however, that most of the time we bypass the traditional access to the elements of the map, because, as is, the type switch is still about 50\% slower than the visitor design pattern.

Note that we can apply the reasoning of \textsection\ref{sec:memdev} and change the first-fit semantics of the resulting match statement into best-fit semantics simply by replacing the underlying cascading-if structure with a decision tree. A compiler implementation of a type switch based on Vtable Pointer Memoization will certainly take advantage of this optimization to cut down the cost of the first run on a given vtbl-pointer, when the actual memoization happens.

\subsubsection{Structure of Virtual Table Pointers}
\label{sec:sovtp}

Virtual table pointers are not entirely random addresses in memory and have a certain structure when we look at groups of those associated with classes related by inheritance. Let us first look at some vtbl-pointers that were present in some of our tests. The 32-bit pointers are shown in binary form (lower bits on the right) and are sorted in ascending order:

\begin{verbatim}
00000001001111100000011001001000
00000001001111100000011001011100
00000001001111100000011001110000
...
00000001001111100000011111011000
00000001001111100000011111101100
\end{verbatim}

Virtual table pointers are not constant values and are not even guaranteed to be the same between different runs of the same application.
Techniques like \emph{address space layout randomization} or simple \emph{rebasing} of the entire module are likely to change these values. The relative distances between them are likely to remain the same, though, as long as they come from the same module. Comparing all the vtbl-pointers that come through a given match statement, we can trace at run time the set of bits in which they do and do not differ. For the above example the result may look like \texttt{00000001001111100000X11XXXXXXX00}, where the positions marked with X represent bits that differ in some vtbl-pointers.

The common bits on the right come from the virtual table size and alignment requirements, and, depending on the compiler, configuration, and class hierarchy, their number can easily vary from 2 to 6 bits. Because the vtbl-pointer under the C++ ABI points into an array of function pointers, the 4-byte alignment requirement for those pointers on a 32-bit architecture is what makes at least the last 2 bits 0. For our purpose the exact number of common bits on the right is not important, as we evaluate this number at run time based on the vtbl-pointers seen so far. Here we would only like to point out that there will be some number of common bits on the right.

Another observation we made during our experiments with the vtbl-pointers of various existing applications was that the values of the pointers were changing more frequently in the lower bits than in the higher ones. We believe that this was happening because programmers tend to group multiple derived classes in the same translation unit, so the compiler was emitting their virtual tables close to each other as well. Note that derived classes that do not introduce their own virtual functions (even if they override some existing ones) are likely to have virtual tables of the same size as their base class. Even when they do add new virtual functions, the size of their virtual tables can only increase relative to that of their base classes. This is why the difference between many consecutive vtbl-pointers that came through a given match statement was usually constant or varied only slightly.

The changes in the higher bits were typically due to separate compilation and especially due to dynamically loaded modules. When a DLL is loaded, it may have its own copies of vtables for classes that are also used in other modules, in addition to vtables for classes it introduces. Comparing all the vtbl-pointers coming only from that DLL, we can get a different pattern \texttt{01110011100000010111XXXXXXXXX000}, and when the comparison is done over all the loaded modules, the pattern will likely become something like \texttt{0XXX00X1X0XXXXXX0XXXXXXXXXXXXX00}. Overall this did not change the general tendency we saw: lower bits were changing more frequently than higher ones, with the exception of the lowest common bits, of course. These observations made virtual table pointers of classes related by inheritance ideally suitable for indexing -- the values obtained by throwing away the common bits on the right were compactly distributed in small disjoint ranges.
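For illustration, the number of common low bits can be estimated at run time by accumulating the bitwise differences of the vtbl-pointers seen so far; the sketch below (a hypothetical helper, much simpler than the library's actual bookkeeping) captures the idea:

\begin{lstlisting}
#include <cstdint>

// Tracks which bits differ among the vtbl-pointers observed so far
struct vtbl_bit_tracker
{
    std::intptr_t first = 0; // first vtbl-pointer observed
    std::intptr_t diff  = 0; // bits that differed in at least one pointer

    void observe(std::intptr_t vtbl)
    {
        if (first == 0) first = vtbl;
        else            diff |= first ^ vtbl;
    }

    // Number of low bits common to all observed pointers, i.e. how far
    // they can be shifted right without conflating distinct pointers.
    int common_low_bits() const
    {
        if (diff == 0) return 0; // fewer than two distinct pointers seen
        int k = 0;
        for (std::intptr_t d = diff; (d & 1) == 0; d >>= 1) ++k;
        return k;
    }
};
\end{lstlisting}

\noindent
Shifting every observed vtbl-pointer right by this amount yields the compact values mentioned above.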
We use those values to address a cache built on top of the hash table, in order to eliminate the hash table lookup in most of the cases. The important guarantee about the validity of the cached hash table references comes from the C++0x standard, which states that ``insert and emplace members shall not affect the validity of references to container elements''~\cite[\textsection 23.2.5(13)]{C++0x}. Depending on the number of actual collisions that happen in the cache, our vtable pointer memoization technique can come close to, and even outperform, the visitor design pattern. The numbers are, of course, averaged over many runs, as the first run on every vtbl-pointer takes the amount of time shown in Figure~\ref{fig:DCastVis1}. We did, however, test our technique on real code and can confirm that it performs well in real-world use cases.

The information about jump targets and necessary offsets is just an example of the information we may want to associate with, and access via, virtual table pointers. Our implementation of \code{memoized_cast} from \textsection\ref{sec:memcast} effectively reuses this general data structure with a different type of element values. We thus created a generic reusable class \code{vtblmap<T>} that maps vtbl-pointers to elements of type T. We will refer to the combined cache and hash-table data structure, extended with the logic for minimizing conflicts presented below, as a \emph{vtblmap} data structure.

\subsubsection{Minimization of Conflicts}
\label{sec:moc}

The small number of cycles that the visitor design pattern needs to uncover a type does not let us put overly sophisticated cache-indexing mechanisms into the critical path of execution. This is why we limit our indexing function to shift and masking operations and choose the size of the cache to be a power of 2. Throughout this section, we call a \emph{collision} the run-time condition in which the cache entry of an incoming vtbl-pointer is occupied by another vtbl-pointer. A collision requires the vtblmap to fetch the data associated with the new vtbl-pointer from the slower hash table and, under certain conditions, to reconfigure the cache for better performance. We call a \emph{conflict} the different run-time condition under which a given cache configuration maps two or more vtbl-pointers to the same cache location. The presence of a conflict does not necessarily imply the presence of collisions, but collisions can only happen when there is a conflict. In the rest of this section we devise a mechanism that tries to minimize the number of conflicts in the hope that it will also decrease the number of actual collisions.

Given $n$ vtbl-pointers, we can always find a cache size that renders no conflicts between them. The necessary size of such a cache, however, can be too big to justify the memory use. This is why, in our current implementation, we always consider only 2 different cache sizes: $2^k$ and $2^{k+1}$, where $2^{k-1} < n \leq 2^k$. This guarantees that the cache size is never more than 4 times bigger than the minimum required. During our experiments we noticed that often the change in the lowest differing bit happens only in a few vtbl-pointers, which effectively cuts the available cache space in half. To overcome this problem, we let the number of bits by which we shift the vtbl-pointer vary further and compute it in a way that minimizes the number of conflicts.
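To make the interplay between the cache and the hash table concrete, a stripped-down analog of such a structure could look as follows (a sketch only: the library's actual \code{vtblmap} additionally tracks collision statistics and recomputes $l$ and $k$ as described next):

\begin{lstlisting}
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simplified, hypothetical analog of vtblmap<T>: a direct-mapped cache
// of size 2^k in front of a hash table, indexed by (vtbl >> l) & (2^k-1)
template <typename T>
struct simple_vtblmap
{
    struct entry { std::intptr_t vtbl = 0; T* data = nullptr; };

    std::unordered_map<std::intptr_t,T> table;
    std::vector<entry> cache;
    int                shift; // l: number of irrelevant low bits
    std::size_t        mask;  // 2^k - 1

    simple_vtblmap(int l, int k)
        : cache(std::size_t(1) << k), shift(l), mask((std::size_t(1) << k) - 1) {}

    T& get(const void* p) // p points to a polymorphic object
    {
        std::intptr_t vtbl = *reinterpret_cast<const std::intptr_t*>(p);
        entry& e = cache[(std::size_t(vtbl) >> shift) & mask];
        if (e.vtbl != vtbl)        // collision or first sighting:
        {                          // fall back to the hash table
            e.data = &table[vtbl]; // unordered_map references stay valid
            e.vtbl = vtbl;
        }
        return *e.data;
    }
};
\end{lstlisting}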
To avoid doing any computations in the critical path, \code{vtblmap} only recomputes the optimal shift and the size of the cache when an actual collision happens. In order to avoid constant recomputations when conflicts are unavoidable, we add the additional restriction of recomputing the optimal parameters only if the number of vtbl-pointers in the \code{vtblmap} has increased since the last recomputation. Since the number of vtbl-pointers is of the order $O(|A|)$, where $A$ is the static type of all the vtbl-pointers coming through a \code{vtblmap}, this restriction ensures that reconfigurations will not happen infinitely often.

To minimize the number of recomputations even further, our library communicates to the \code{vtblmap}, through its constructor, the number of case clauses in the underlying match statement. We use this number as an estimate of the expected size of the \code{vtblmap} and pre-allocate the cache according to it. The cache is still allowed to grow based on the actual number of vtbl-pointers that come through the \code{vtblmap}, but it never shrinks below the initial value. This improvement significantly reduces the number of collisions at early stages, as well as the number of possibilities we have to consider during reconfiguration.

When recomputing the optimal parameters, the above logic of \code{vtblmap} always chooses a configuration that renders no conflicts whenever such a configuration exists. When this is not possible, it is natural to prefer collisions to happen on less frequent vtbl-pointers. We studied the frequency of vtbl-pointers that come through various match statements of a C++ pretty-printer that we implemented on top of the Pivot framework~\cite{Pivot09} using our pattern-matching library. We ran the pretty-printer on a set of C++ standard library headers and then ranked all the classes from the most frequent to the least frequent ones, on average. The resulting probability distribution is shown with a thicker line in Figure~\ref{fig:PowerLaw}.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.47\textwidth]{std-lib-power-law-distributions.png}
\caption{Probability distribution of various nodes in the Pivot framework}
\label{fig:PowerLaw}
\end{figure}

Note that the Y-axis uses a logarithmic scale, suggesting that the resulting probability follows a power-law distribution. This is likely specific to our application; nevertheless, the picture demonstrates that the frequency of certain classes can be larger than the combined frequency of all the other classes. In our case, the two most frequent classes represented the use of a variable in a program, and their combined frequency was larger than the frequency of all the other nodes. Naturally, we would like to avoid conflicts on such classes in the cache when possible.

Let us assume that a given \code{vtblmap} contains a set of vtbl-pointers $V = \{v_1, ... , v_n\}$ with known probabilities $p_i$ of occurring. For a cache of size $2^k$ and a shift by $l$ bits we get a cache-indexing function $f_{lk} : V \rightarrow [0..2^k-1]$ defined as $f_{lk}(v_i) = (v_i \gg l) \& (2^k-1)$. To calculate the probability of conflict for given parameters $l$ and $k$, let us consider the $j^{th}$ cache cell and the subset $V^j_{lk}=\{v \in V | f_{lk}(v)=j\}$. When the size of this subset $m=|V^j_{lk}|$ is greater than 1, we have a potential conflict, as a subsequent request for a vtbl-pointer $v''$ may differ from the vtbl-pointer $v'$ currently stored in cell $j$.
Considering only this cell, the probability of not having a conflict is the probability that both values $v''$ and $v'$ are the same:
\begin{eqnarray*}
P(v''=v')=\sum\limits_{v_i \in V^j_{lk}}P(v''=v_i)P(v'=v_i)=\sum\limits_{v_i \in V^j_{lk}}P^2(v_i|V^j_{lk})=\\
=\sum\limits_{v_i \in V^j_{lk}}\frac{P^2(v_i)}{P^2(V^j_{lk})}=
\sum\limits_{v_i \in V^j_{lk}}\frac{p_i^2}{(\sum\limits_{v_{i'} \in V^j_{lk}}p_{i'})^2}=
\frac{\sum\limits_{v_i \in V^j_{lk}}p_i^2}{(\sum\limits_{v_{i} \in V^j_{lk}}p_{i})^2}
\end{eqnarray*}
The probability of having a conflict among the vtbl-pointers of a given cell is thus one minus the above value:
\begin{eqnarray*}
P(v''\neq v')=1-\frac{\sum\limits_{v_i \in V^j_{lk}}p_i^2}{(\sum\limits_{v_{i} \in V^j_{lk}}p_{i})^2}
\end{eqnarray*}
To obtain the probability of conflict for an arbitrary vtbl-pointer, and not just one from a given cell, we sum the above per-cell probabilities of conflict weighted by the probability of a vtbl-pointer falling into that cell:
\begin{eqnarray*}
P_{lk}^{conflict}=\sum\limits_{j=0}^{2^k-1}P(V^j_{lk})(1-\frac{\sum\limits_{v_i \in V^j_{lk}}p_i^2}{(\sum\limits_{v_{i} \in V^j_{lk}}p_{i})^2})=\\
=\sum\limits_{j=0}^{2^k-1}(\sum\limits_{v_{i} \in V^j_{lk}}p_{i})(1-\frac{\sum\limits_{v_i \in V^j_{lk}}p_i^2}{(\sum\limits_{v_{i} \in V^j_{lk}}p_{i})^2})
\end{eqnarray*}
Our reconfiguration algorithm then iterates over the possible values of $l$ and $k$ and chooses those that minimize the overall probability of conflict $P_{lk}^{conflict}$.

The only data still missing are the actual probabilities $p_i$ used in the above formula. They can be approximated in many different ways. Besides the probability distribution over all the tests, Figure~\ref{fig:PowerLaw} also shows the probability of a given node in each of the tests. The X-axis in this case represents the ordering of all the nodes according to their overall rank across all the tests combined. As can be seen from the picture, the shape of each specific test's distribution still mimics the overall probability distribution. With this in mind, we can simply let the user assign probabilities to each of the classes in the hierarchy and use these values during reconfiguration. The practical problem we came across with this solution was that we wanted these probabilities to be inheritable: Pivot separates interface and implementation classes, and we preferred the user to define the probabilities on interfaces rather than on implementation classes. The easiest way to do so was to write a dedicated function that returns the probabilities using a match statement. Unfortunately, such a function introduces a lot of overhead: ideally it is called only a few times (since we try to minimize the amount of reconfiguration) and thus never gets to use memoized jumps, relying on the slow cascading-if instead.

A simpler and likely more precise way of estimating $p_i$ is to count the frequency of each vtbl-pointer directly inside the \code{vtblmap}. This introduces the overhead of an increment into the critical path of execution, but according to our tests it degraded the overall performance by only 1--2\%. In return, it compensated with a smaller number of conflicts and thus a potential performance gain. We leave the choice of whether the library should count the frequency of each vtbl-pointer to the user of the library, as the concrete choice may be advantageous on some class hierarchies and disadvantageous on others. Figure~\ref{fig:Collisions} compares the number of collisions when frequency information is and is not used.
The data was gathered from 312 tests on multiple match statements present in Pivot's C++ pretty-printer when it was run over the standard library headers. In 122 of these tests both schemes had 0 conflicts, and those tests are thus not shown on the graph. The remaining tests were ranked by the number of conflicts in the scheme that does not utilize frequency information.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.47\textwidth]{CollisionsWithAndWithoutFrequencies.png}
\caption{Decrease in the number of collisions when probabilities of nodes are taken into account}
\label{fig:Collisions}
\end{figure}

As can be seen from the graph, both schemes render quite a low number of collisions, given that there were about 57000 calls in the rightmost test, which had the largest number of conflicts. Taking into account that the Y-axis has a logarithmic scale, the use of frequency information in many cases decreased the number of conflicts by a factor of 2. The handful of cases where the use of frequencies increased the number of conflicts can be explained by the fact that the optimal values are not recomputed after each conflict, but only after several conflicts and only if the number of vtbl-pointers in the vtblmap has increased. These extra conditions sacrifice the optimality of the parameters at any given time in exchange for fewer recomputations. By varying the number of conflicts we are willing to tolerate before reconfiguration, we can decrease the number of conflicts by increasing the number of recomputations and vice versa. In our experience, however, the drop in the number of conflicts did not translate into a proportional drop in execution time, while the number of reconfigurations was proportional to the increase in execution time. This is why we chose to tolerate a relatively large number of conflicts before recomputation, just to keep the number of recomputations low.

\section{Solution for Tagged Classes}
\label{sec:cotc}

The memoization device outlined in \textsection\ref{sec:memdev} can, in principle, also be applied to tagged classes. The dynamic cast would be replaced by a small compile-time template meta-program that checks whether the class associated with the given tag is derived from the target type of the case clause. If so, a static cast can be used to obtain the offset. Despite its straightforwardness, we felt that it should be possible to do better than the general solution, given that each class is already identified by a dedicated constant known at compile time.

As we mentioned in \textsection\ref{sec:poets}, the nominal subtyping of C++ effectively gives every class multiple types. The idea is thus to associate with the type not only its most derived tag, but also the tags of all its base classes. In a compiler implementation such a list can be stored inside the virtual table of a class, while in our library solution it is shared between all the instances with the same most derived tag through a less efficient global map associating the tag with its tag list. The list of tags is topologically sorted according to the subtyping relation and terminates with a dedicated value distinct from all the tags. We call such a list a \emph{Tag Precedence List} (TPL), as it resembles the \emph{Class Precedence List} (CPL) of the object-oriented descendants of Lisp (e.g. Dylan, Flavors, LOOPS, and CLOS), used there for the \emph{linearization} of class hierarchies.
The classes in a CPL are ordered from most specific to least specific, with siblings listed in the \emph{local precedence order} -- the order of the direct base classes used in the class definition. The TPL is just an implementation detail, and the only reason we distinguish it from the CPL is that in C++ classes are often separated into interface and implementation classes, and it may happen that the user associates the same tag with an interface class and several implementation classes.

The type switch below, built on top of a hierarchy of tagged classes, proceeds as a regular switch on the subject's tag. If the jump succeeds, we have found an exact match; otherwise, we get into the default clause, which obtains the next tag in the tag precedence list and jumps back to the beginning of the switch statement for a rematch:

\begin{lstlisting}
const size_t* taglist = 0;
size_t attempt = 0;
size_t tag = object->tag;
ReMatch:
switch (tag)
{
default:
    if (!taglist) taglist = get_taglist(object->tag);
    tag = taglist[++attempt];
    goto ReMatch;
case end_of_list: break;
case bindings<D1>::kind_value: s1; break;
...
case bindings<Dn>::kind_value: sn; break;
}
\end{lstlisting}

\noindent
The above structure, which we call a \emph{TPL Dispatcher}, lets us dispatch to case clauses of the most derived class with the overhead of initializing two local variables, compared to a switch on a sealed hierarchy. Dispatching to a case clause of a base class takes time roughly proportional to the distance between the matched base class and the most derived class in the inheritance graph. When none of the base class tags is matched, we necessarily reach the end\_of\_list marker in the tag precedence list and thus exit the loop. Our library automatically builds the \code{get_taglist} function based on the \code{BC} or \code{BCS} specifiers that the user provides in the bindings (\textsection\ref{sec:bnd}). To make the behavior closer to that of the type switch for polymorphic classes, each case clause also binds a variable \code{matched} referring to the subject cast to the target type; conceptually, the clause for a class \code{Di} is laid out as follows:

\begin{lstlisting}
if (is_derived_from<Di>(object))
{
case bindings<Di>::kind_value:
    auto matched = static_cast<Di*>(object);
    si;
}
\end{lstlisting}

\section{(Ab)using Exceptions for Type Switching}
\label{sec:xpm}

Several authors have noted the relationship between exception handling and type switching before~\cite{Glew99,ML2000}. Not surprisingly, the exception handling mechanism of C++ can be abused to implement the first-fit semantics of a type switch statement. The idea is to harness the fact that catch-handlers in C++ essentially use first-fit semantics to decide which one is going to handle a given exception. The only problem is to raise an exception with a static type equal to the dynamic type of the subject. To do this, we employ the \emph{polymorphic exception} idiom~\cite{PolyExcept}, which introduces a virtual function \code{virtual void raise() const = 0;} into the base class, overridden by each derived class in syntactically the same way: \code{throw *this;}. The \code{Match}-statement then simply calls \code{raise} on its subject, while the case clauses are turned into catch-handlers. The exact name of the function is not important and is communicated to the library as a \emph{raise selector} with the \code{RS} specifier, in the same way the \emph{kind selector} and \emph{class members} are (\textsection\ref{sec:bnd}). The \code{raise} member function can be seen as an analog of the \code{accept} member function in the visitor design pattern, whose main purpose is to discover the subject's most specific type.
The analog of the call to \code{visit} that communicates that type is replaced, in this scheme, by the exception-unwinding mechanism.

Just because we can abuse the exception handling mechanism to obtain the desired control flow does not mean we should. In the table-driven approach commonly used in high-performance implementations of exception handling, the speed of handling an exception is sacrificed to provide zero execution-time overhead for when exceptions are not thrown~\cite{Schilling98}. Using exception handling to implement type switching reverses the common and exceptional cases, significantly degrading performance. As can be seen in Figure~\ref{fig:DCastVis1}, matching the type of the first case clause with the polymorphic-exception approach takes more than 7000 cycles, and the cost grows linearly with the position of the case clause in the match statement, making it the slowest approach. The numbers illustrate why exception handling should only be used to deal with exceptional, and not common, cases. Despite its total impracticality, the approach fits well into our unified syntax (\textsection\ref{sec:unisyn}) and gave us a very practical idea of harnessing a C++ compiler to do \emph{redundancy checking} at compile time.

\subsection{Redundancy Checking}
\label{sec:redun}

As discussed in \textsection\ref{sec:bg}, redundancy checking is only applicable to the first-fit semantics of the match statement, and warns the user of any case clause that will never be entered because a preceding one is more general. We provide a library-configuration flag which, when defined, effectively turns the entire match statement into a try-catch block with handlers accepting the target types of the case clauses. This forces the compiler to issue a warning when a more general catch-handler precedes a more specific one, effectively performing redundancy checking for us, e.g.:

\begin{lstlisting}
filename.cpp(55): warning C4286: 'ipr::Decl*' : is caught by base class ('ipr::Stmt*') on line 42
\end{lstlisting}

\noindent
Note that the message contains both the line number of the redundant case clause (55) and the line number of the case clause that makes it redundant (42). Unfortunately, the flag cannot always be enabled, as the case labels of the underlying switch statement have to be eliminated in order to render a syntactically correct program. Nevertheless, we found the redundancy checking facility of the library extremely useful when rewriting visitor-based code: even though the order of overrides in a visitor's implementation does not matter, for some reason the more general ones tended to come before the more specific ones in the code we looked at. Perhaps programmers are inclined to follow the class declaration order when defining and implementing visitors.

A related check, \emph{completeness checking} -- a test of whether a given match statement covers all possible cases -- needs to be reconsidered for extensible data types like classes, since one can always add new variants to them. Completeness checking in this case may simply become equivalent to ensuring that there is either a default clause in the type switch or a clause with the static type of the subject as a target type. In fact, our library has an analog of a default clause called the \code{Otherwise}-clause, which is implemented under the hood exactly as a regular case clause with the subject's static type as a target type.
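For illustration, the polymorphic exception idiom that underlies both the exception-based type switch and this redundancy check can be sketched as follows (hypothetical classes; the library generates the equivalent structure from its macros):

\begin{lstlisting}
#include <iostream>

// Hypothetical hierarchy employing the polymorphic exception idiom
struct Shape          { virtual ~Shape() {} virtual void raise() const = 0; };
struct Circle : Shape { void raise() const override { throw *this; } };
struct Square : Shape { void raise() const override { throw *this; } };

// First-fit type switch implemented with catch-handlers
void dispatch(const Shape& s)
{
    try { s.raise(); }  // throws with the dynamic type of s
    catch (const Circle&) { std::cout << "circle\n"; }
    catch (const Square&) { std::cout << "square\n"; }
    catch (const Shape&)  { std::cout << "some other shape\n"; }
}
\end{lstlisting}

\noindent
Placing the \code{Shape} handler before the \code{Circle} handler is what would trigger a warning like the one quoted above, which is exactly the effect the redundancy-checking flag exploits.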
\section{Unified Syntax}
\label{sec:unisyn}

The discussion in this section is irrelevant for a compiler implementation; nevertheless, we include it because some of the challenges we came across, as well as the techniques we used to overcome them, may show up in other active libraries. The problem is that, when working in a library setting, the set of properties we can automatically infer about the user's class hierarchy, the match statement, its clauses, etc.\ is much more limited than what a compiler can infer. On the one hand, such additional information may let us generate better code; on the other hand, it is important not to overburden the user's syntax with every bit of information she could possibly provide to help us do so. Some examples of information we can use to generate better code even in the library setting include:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item The encoding we are dealing with (\textsection\ref{sec:adt})
\item The shape of the class hierarchy: flat or deep, single or multiple inheritance, etc.
\item The number of clauses in the match statement
\item The presence of an \code{Otherwise}-clause in the match statement
\item The presence of extensions in dynamically linked libraries
\end{itemize}

We try to infer this information when we can, but otherwise resort to a usually slower default that works in all or most of the cases. The major source of inefficiency comes from the fact that macro resolution happens before any meta-programming techniques can be employed, and thus the macros have to generate a syntactic structure that can essentially handle all the cases, as opposed to the exact case at hand.

Each of the macros involved in rendering the syntactic structure of a match statement (e.g. \code{Match}, \code{Case}, \code{Otherwise}) has a version identified by a suffix that is specific to a combination of encoding and shape of the class hierarchy. By default the macros resolve to a unified version that infers the encoding with a template meta-program, but this resolution can be overridden with a configuration flag selecting a more specific version when all the match statements in the user's program satisfy the requirements of that version. The user can also pinpoint a specific match statement with the most applicable version, but we discourage such use, as the performance differences are not big enough to justify the exposure of these details.

To better understand what is going on, consider the following examples. Case labels for the polymorphic base class encoding can be arbitrary (but preferably sequential) numbers, while the case labels for the tagged class and discriminated union encodings are the actual kind values associated with concrete variants. The discriminated union and tagged class encodings can use both types (views in the case of unions) and kind values to identify the target variant, while the polymorphic base class encoding can only use types for that. The latter encoding requires the allocation of a static vtblmap in each match statement, not needed by any other encoding, while the tagged class encoding on a non-flat hierarchy requires the use of the default label of the generated switch statement as well as a dedicated case label distinct from all kind values (\textsection\ref{sec:cotc}).
When merging these and other requirements into the syntactic structure of a unified version capable of handling any encoding, we essentially always have to reserve the default label (and thus not use it to generate the \code{Otherwise}-clause), allocate an extra dedicated case label, introduce the loop over base classes used by the tagged class encoding, etc. This is a clear overhead for the handling of the discriminated union encoding, whose syntactic structure only involves a simple switch over kind values with a default label implementing \code{Otherwise}. To minimize the effects of this overhead, we rely on the compiler's optimizer to inline the calls specific to each encoding and either to remove branching on conditions that are always true after inlining or to eliminate dead code behind conditions that are always false after inlining. Luckily for us, today's compilers do a great job at exactly that, rendering our unified version only slightly less efficient than the specialized ones. These differences can best be seen in Figure~\ref{relperf} under the corresponding entries of the \emph{Unified} and \emph{Specialized} columns.

\section{Memoized Dynamic Cast}
\label{sec:memcast}

We saw in Corollary~\ref{crl:vtbl} that the result of a \code{dynamic_cast} can be reapplied to a different instance from within the same subobject. This leads to the simple idea of memoizing the results of \code{dynamic_cast} and then reusing them on subsequent casts. In what follows we deal only with the pointer version of the operator, since the version for references, which has slightly different semantics, can easily be implemented in terms of the pointer one.

The \code{dynamic_cast} operator in C++ involves two arguments: a value argument representing an object of a known static type, and a type argument denoting the run-time type we are querying. Its behavior is twofold: on the one hand it has to determine when the object's most derived type is not a subtype of the queried type (or when the cast is ambiguous), while on the other hand it has to produce the offset by which to adjust the value argument when it is. We mimic the syntax of \code{dynamic_cast} by defining:

\begin{lstlisting}
template <typename T, typename S>
inline T memoized_cast(S* p);
\end{lstlisting}

\noindent
which lets the user replace all the uses of \code{dynamic_cast} in a program with \code{memoized_cast} with a simple:

\begin{lstlisting}
#define dynamic_cast memoized_cast
\end{lstlisting}

\noindent
It is important to stress that the offset is not a function of the source and target types of the \code{dynamic_cast} operator, which is why we cannot simply memoize the outcome inside the individual instantiations of \code{memoized_cast}. The use of repeated multiple inheritance results in classes having several different offsets associated with the same pair of source and target types, depending on which subobject the cast is performed from. According to Corollary~\ref{crl:vtbl}, however, the offset is a function of the target type and the value of the vtbl-pointer stored in the object, because the vtbl-pointer uniquely determines the subobject within the most derived type. Our memoization of the results of \code{dynamic_cast} should thus be specific to a vtbl-pointer and a target type. The easiest way to achieve this would be to use a dedicated global \code{vtblmap<std::ptrdiff_t>} (\textsection\ref{sec:sovtp}) per instantiation of \code{memoized_cast}.
This, however, would create an unnecessarily large number of vtblmap structures, many of which would duplicate information and repeat work already done. This is because instantiations of \code{memoized_cast} with the same target but different source types can share their vtblmap structures, since the vtbl-pointers of different source types are necessarily different according to Theorem~\ref{thm:vtbl}. Even though the above solution can easily be improved to allocate a single vtblmap per target type, an average application may still have a lot of different target types. This is especially true for applications that use our \code{Match} statement, since we use \code{dynamic_cast} under the hood in each case clause. Indeed, our C++ pretty-printer was creating 160 vtblmaps, each of relatively small size, which increased the executable size quite significantly because of the numerous instantiations and noticeably slowed down compilation.

To overcome the problem, we turn each target type into a run-time index and allocate a single \code{vtblmap<std::vector<std::ptrdiff_t>>} that associates vtbl-pointers with a vector of offsets indexed by the target type. The slight performance overhead brought by this improvement is specific to our library solution and would not be present in a compiler implementation. In return we get a much smaller memory footprint, which can be made even smaller once we recognize that global type indexing may effectively enumerate target classes that will never appear in the same \code{Match} statement. This results in entries in the vector of offsets that are never used. Our actual solution uses a separate indexing of target types for each source type they are used with, and also allocates a different \code{vtblmap<std::vector<std::ptrdiff_t>>} for each source type. This lets us minimize unused entries within the offset vectors by making sure that only the plausible target types for a given source type are indexed. This solution should be suitable for most applications, since we expect a fairly small number of source types for the \code{dynamic_cast} operator and a much larger number of target types. For the unlikely case of a small number of target types and a large number of source types, we let the user revert, with a library configuration switch, to the behavior discussed above that allocates a single \code{vtblmap} per target type.

The use of \code{memoized_cast} to implement the \code{Match}-statement potentially reuses the results of \code{dynamic_cast} computations across multiple independent match statements. This amortizes the cost of the expensive first call with a given vtbl-pointer even further, across all the match statements in the program. The above define, with which a user can easily turn all dynamic casts into memoized casts, can also be used to speed up existing code that uses dynamic casting, without any refactoring overhead.

%\subsection{Discussion}
%\label{sec:dsc}
%Let us look at both our techniques in the context of Zenger and Odersky
%challenge to independently extensible solutions of extension problem discussed
%in \textsection\ref{sec:exp}.
%\begin{itemize}
%\item Extensibility in both dimensions: \\
%    %It should be possible to add new data variants, while adapting the
%    %existing operations accordingly. It should also be possible to introduce
%    %new functions.
% Our techniques allow one to extend data with subclassing as well as % introduce new functions through a match statement on corresponding % encoding. The existing operations %\item Strong static type safety: \\ % %It should be impossible to apply a function to a data variant, which it % %cannot handle. %\item No modification or duplication: \\ % %Existing code should neither be modified nor duplicated. %\item Separate compilation: \\ % %Neither datatype extensions nor addition of new functions should require % %re-typechecking the original datatype or existing functions. No safety % %checks should be deferred until link or runtime. %\item Independent extensibility: \\ % %It should be possible to combine independently developed extensions so % %that they can be used jointly. %\end{itemize} % \section{Evaluation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \label{sec:eval} \begin{figure*} \begin{tabular}{@{}c@{ }l||@{ }r@{}@{ }r@{}@{ }r@{}|@{ }r@{}@{ }r@{}@{ }r@{}||@{ }r@{}@{ }r@{}@{ }r@{}|@{ }r@{}@{ }r@{}@{ }r@{}||@{ }r@{}@{ }r@{}@{ }r@{}|@{ }r@{}@{ }r@{}@{ }r@{}} \hline % ----------------------------------------------------------------------------------------------------------------------------------------- \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & & \multicolumn{6}{c||}{G++/32 on Windows Laptop} & \multicolumn{6}{c||}{MS Visual C++/32} & \multicolumn{6}{c}{MS Visual C++/64} \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Syntax & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c||}{Specialized} & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c||}{Specialized} & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c}{Specialized} \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Encoding & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Repetitive &\gwNGPp&\gwNGKp&\gwNGUp&\gwNSPp&\gwNSKp&\gwNSUp&\vwNGPp&\vwNGKp&\vwNGUp&\vwNSPp&\vwNSKp&\vwNSUp&\vxNGPp&\vxNGKp&\vxNGUp&\vxNSPp&\vxNSKp&\vxNSUp \\ & Sequential &\gwNGPq&\gwNGKq&\gwNGUq&\gwNSPq&\gwNSKq&\gwNSUq&\vwNGPq&\vwNGKq&\vwNGUq&\vwNSPq&\vwNSKq&\vwNSUq&\vxNGPq&\vxNGKq&\vxNGUq&\vxNSPq&\vxNSKq&\vxNSUq \\ & Random &\gwNGPn&\gwNGKn&\gwNGUn&\gwNSPn&\gwNSKn&\gwNSUn&\vwNGPn&\vwNGKn&\vwNGUn&\vwNSPn&\vwNSKn&\vwNSUn&\vxNGPn&\vxNGKn&\vxNGUn&\vxNSPn&\vxNSKn&\vxNSUn \\ \hline % ------------------------------------------------------------------------------------------------------------------------------------------ \multirow{3}{*}{\begin{sideways}{\tiny Forward}\end{sideways}} & Repetitive &\gwYGPp&\gwYGKp&\gwYGUp&\gwYSPp&\gwYSKp&\gwYSUp&\vwYGPp&\vwYGKp&\vwYGUp&\vwYSPp&\vwYSKp&\vwYSUp&\vxYGPp&\vxYGKp&\vxYGUp&\vxYSPp&\vxYSKp&\vxYSUp \\ & Sequential &\gwYGPq&\gwYGKq&\gwYGUq&\gwYSPq&\gwYSKq&\gwYSUq&\vwYGPq&\vwYGKq&\vwYGUq&\vwYSPq&\vwYSKq&\vwYSUq&\vxYGPq&\vxYGKq&\vxYGUq&\vxYSPq&\vxYSKq&\vxYSUq \\ & Random 
&\gwYGPn&\gwYGKn&\gwYGUn&\gwYSPn&\gwYSKn&\gwYSUn&\vwYGPn&\vwYGKn&\vwYGUn&\vwYSPn&\vwYSKn&\vwYSUn&\vxYGPn&\vxYGKn&\vxYGUn&\vxYSPn&\vxYSKn&\vxYSUn \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & & \multicolumn{6}{c||}{G++/32 on Linux Desktop} & \multicolumn{6}{c||}{MS Visual C++/32 with PGO} & \multicolumn{6}{c}{MS Visual C++/64 with PGO} \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Syntax & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c||}{Specialized} & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c||}{Specialized} & \multicolumn{3}{c|}{Unified} & \multicolumn{3}{c}{Specialized} \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Encoding & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn & \Opn & \Cls & \Unn \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- \hline % ----------------------------------------------------------------------------------------------------------------------------------------- & Repetitive &\glNGPp&\glNGKp&\GwNGUp&\glNSPp&\glNSKp&\GwNSUp&\VwNGPp&\VwNGKp&\VwNGUp&\VwNSPp&\VwNSKp&\VwNSUp&\VxNGPp&\VxNGKp&\VxNGUp&\VxNSPp&\VxNSKp&\VxNSUp \\ & Sequential &\glNGPq&\glNGKq&\GwNGUq&\glNSPq&\glNSKq&\GwNSUq&\VwNGPq&\VwNGKq&\VwNGUq&\VwNSPq&\VwNSKq&\VwNSUq&\VxNGPq&\VxNGKq&\VxNGUq&\VxNSPq&\VxNSKq&\VxNSUq \\ & Random &\glNGPn&\glNGKn&\GwNGUn&\glNSPn&\glNSKn&\GwNSUn&\VwNGPn&\VwNGKn&\VwNGUn&\VwNSPn&\VwNSKn&\VwNSUn&\VxNGPn&\VxNGKn&\VxNGUn&\VxNSPn&\VxNSKn&\VxNSUn \\ \hline % ------------------------------------------------------------------------------------------------------------------------------------------ \multirow{3}{*}{\begin{sideways}{\tiny Forward}\end{sideways}} & Repetitive &\glYGPp&\glYGKp&\GwYGUp&\glYSPp&\glYSKp&\GwYSUp&\VwYGPp&\VwYGKp&\VwYGUp&\VwYSPp&\VwYSKp&\VwYSUp&\VxYGPp&\VxYGKp&\VxYGUp&\VxYSPp&\VxYSKp&\VxYSUp \\ & Sequential &\glYGPq&\glYGKq&\GwYGUq&\glYSPq&\glYSKq&\GwYSUq&\VwYGPq&\VwYGKq&\VwYGUq&\VwYSPq&\VwYSKq&\VwYSUq&\VxYGPq&\VxYGKq&\VxYGUq&\VxYSPq&\VxYSKq&\VxYSUq \\ & Random &\glYGPn&\glYGKn&\GwYGUn&\glYSPn&\glYSKn&\GwYSUn&\VwYGPn&\VwYGKn&\VwYGUn&\VwYSPn&\VwYSKn&\VwYSUn&\VxYGPn&\VxYGKn&\VxYGUn&\VxYSPn&\VxYSKn&\VxYSUn \\ \hline % ----------------------------------------------------------------------------------------------------------------------------------------- \hline % ---------------------------------------------------------------------------------------------------------------------------------- & & \multicolumn{6}{c||}{ } & \multicolumn{12}{c}{Windows Laptop} \\ \hline % ---------------------------------------------------------------------------------------------------------------------------------- \end{tabular} \caption{Relative performance of type switching versus visitors. Numbers in regular font (e.g. \f{67}), indicate that our type switching is faster than visitors by corresponding percentage. Numbers in bold font (e.g. 
\s{14}) indicate that visitors are faster by the corresponding percentage.}
\label{relperf}
\end{figure*}

In this section, we evaluate the performance of our solution in comparison to its de-facto contender -- the visitor design pattern. We also compare the performance of some typical use cases expressed with our solution and with OCaml. Our evaluation methodology consists of several benchmarks that we believe represent various possible uses of objects inspected with either visitors or pattern matching.

The \emph{repetitive} benchmark performs multiple calls on different objects of the same most derived type. This scenario happens in an object-oriented setting when a group of polymorphic objects is created and passed around (e.g. numerous particles of a given kind in a particle simulation system). We include it because double dispatch becomes about twice as fast (27 vs. 53 cycles) in this scenario compared to the others, due to cache and call-target prediction mechanisms. The \emph{sequential} benchmark effectively uses an object of each derived type only once and then moves on to an object of a different type. The cache is typically reused the least in this scenario. The scenario is typical of lookup tables, where each entry is implemented with a different derived class. The \emph{random} benchmark is the most representative, as it makes calls on randomly chosen objects of random types, which is probably the most common usage scenario in the real world. The \emph{forwarding} benchmark is not a benchmark on its own, but rather a combinator that can be applied to any of the above scenarios. It refers to the common technique used by visitors where, for class hierarchies with multiple levels of inheritance, the \code{visit} method of a derived class provides a default implementation that forwards to its immediate base class, which, in turn, may forward to its own base class, etc. The use of forwarding in visitors is a way to achieve substitutability, which in type switches corresponds to the use of base classes in the case clauses. This approach is used in Pivot, whose AST hierarchy consists of 154 node kinds, of which only 5 must be handled, while the rest forward to them whenever a visit method for them is not overridden. The class hierarchy for the non-forwarding tests was a flat hierarchy with 100 derived classes, encoding an algebraic data type. The class hierarchy for the forwarding tests had two levels of inheritance with 5 intermediate base classes and 95 derived ones. Each benchmark was tested with either \emph{unified} or \emph{specialized} syntax, each of which included tests on polymorphic (\emph{Open}) and tagged (\emph{Tag}) encodings. Specialized syntax avoids generating the unnecessary syntactic structure used to unify the syntax, and thus produces faster code. We include it in our results because a compiler implementation of type switching would only generate the most suitable code.
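To make the forwarding scenario concrete, the following sketch (with hypothetical node and visitor classes invented purely for illustration) shows the idiom: each \code{visit} method provides a default that forwards to the \code{visit} method for its immediate base class, so a concrete visitor only overrides the handlers it cares about and still receives the more derived node kinds through forwarding.

\begin{lstlisting}
// Hypothetical hierarchy used only for illustration.
struct Expr       { virtual ~Expr() {} };
struct BinaryExpr : Expr       {};
struct PlusExpr   : BinaryExpr {};

struct Visitor
{
    virtual ~Visitor() {}
    virtual void visit(const Expr&)         {}  // root: default is to do nothing
    virtual void visit(const BinaryExpr& e) { visit(static_cast<const Expr&>(e)); }
    virtual void visit(const PlusExpr& e)   { visit(static_cast<const BinaryExpr&>(e)); }
};

struct Printer : Visitor
{
    using Visitor::visit;
    // Handles BinaryExpr and, via forwarding, PlusExpr as well.
    virtual void visit(const BinaryExpr&) { /* print any binary node */ }
};
\end{lstlisting}

\noindent Because the forwarding calls are virtual, a \code{PlusExpr} visited through a \code{Printer} eventually reaches \code{Printer}'s handler for \code{BinaryExpr}, at the cost of the extra virtual calls this benchmark measures.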
The benchmarks were executed in the following configurations, referred to as \emph{Linux Desktop} and \emph{Windows Laptop} respectively:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item Dell Dimension\textsuperscript{\textregistered} desktop with Intel\textsuperscript{\textregistered} Pentium\textsuperscript{\textregistered} D (Dual Core) CPU at 2.80 GHz; 1GB of RAM; Fedora Core 13
      \begin{itemize}
      \setlength{\itemsep}{0pt}
      \setlength{\parskip}{0pt}
      \item G++ 4.4.5 executed with -O2
      \end{itemize}
\item Sony VAIO\textsuperscript{\textregistered} laptop with Intel\textsuperscript{\textregistered} Core\texttrademark i5 460M CPU at 2.53 GHz; 6GB of RAM; Windows 7 Professional
      \begin{itemize}
      \setlength{\itemsep}{0pt}
      \setlength{\parskip}{0pt}
      \item G++ 4.5.2 / MinGW executed with -O2; x86 binaries
      \item MS Visual C++ 2010 Professional x86/x64 binaries with profile-guided optimizations
      \end{itemize}
\end{itemize}

\noindent The code on the critical path of our type switch implementation benefits significantly from branch hinting, as some branches are much more likely than others. We use the branch hinting facilities of GCC to tell the compiler which branches are more likely, but, unfortunately, Visual C++ does not have similar facilities. The official way suggested by Microsoft to achieve the same effect is to use \emph{Profile-Guided Optimization} and let the compiler gather statistics on each branch. This is why the results for Visual C++ reported here are those obtained with profile-guided optimizations enabled. The slightly less-favorable-for-us results without profile-guided optimizations can be found in the accompanying technical report~\cite{TR}.

%The results of optimizing code created with Visual C++ by using profile
%guided optimizations as currently Visual C++ does not have means for branch
%hinting, which are supported by G++ and proven to be very effective in few
%cruicial places. Profile guided optimization in Visual C++ lets compiler find
%out experimentally what we would have otherwise hinted, even though this
%includes other optimizations as well.

We compare the performance of our solution relative to the performance of visitors in Figure~\ref{relperf}. The values are given as percentages of performance increase against the slower technique. Numbers in regular font represent cases where our type switching was faster than visitors. Numbers in bold indicate cases where visitors were faster. From the numbers, we can see that type switching wins by a good margin in the presence of at least one level of forwarding in visitors. Using type switching on closed hierarchies is also a definite winner. From the table it may seem that Visual C++ generates worse code than GCC, but remember that these numbers are relative, and thus the ratio depends on both the performance of virtual calls and the performance of switch statements. Visual C++ was generating faster virtual function calls, while GCC was generating faster switch statements, which is why the relative performance seems much more favorable for us in the case of GCC. Similarly, the x64 code is only relatively slower: the actual time spent for both visitors and type switching was smaller than that for x86, but it was much smaller for visitors than for type switching, which resulted in worse relative performance.

\subsection{Vtable Pointer Memoization vs.
TPL Dispatcher}
\label{sec:cmp}

With a few exceptions for x64, it can be seen from Figure~\ref{relperf} that the performance of the TPL dispatcher (the Tag column) dominates the performance of the vtable pointer memoization approach (the Open column). We believe that the difference, often significant, is the price one pays for the true openness of the vtable pointer memoization solution. Unfortunately, the TPL dispatcher is not truly open. The use of tags, even if they were allocated by the compiler, may require integration efforts to ensure that different DLLs have not reused the same tags. Randomization of tags, similar to a proposal of Garrigue~\cite{garrigue-98}, will not eliminate the problem and will surely replace jump tables in switches with decision trees. This will likely significantly degrade the numbers in the Tag column of Figure~\ref{relperf}, since the tags in our experiments were all sequential. Besides, the TPL dispatcher approach relies on a static cast to obtain the proper reference once the most specific case clause has been found. As we described in \textsection\ref{sec:vtblmem}, this has severe limitations in the presence of multiple inheritance, and thus is not as versatile as the other solution. Overcoming this problem will require either the use of \code{dynamic_cast} or techniques similar to those we used in vtable pointer memoization. This will likely degrade the performance numbers for the Tag column even further. Note also that the vtable pointer memoization approach can be used to implement both first-fit and best-fit semantics, while the TPL dispatcher is only suitable for best-fit semantics. Their complexity guarantees also differ: vtable pointer memoization is constant on average, and slow on the first call. The tag-list approach is logarithmic in the size of the class hierarchy on average (assuming a balanced hierarchy), including on the first call.

\subsection{Comparison with OCaml}
\label{sec:ocaml}

We now compare our solution to the built-in pattern-matching facility of OCaml~\cite{OPM01}. In this test, we timed a small OCaml application performing our sequential benchmark on an algebraic data type of 100 variants. The corresponding C++ applications worked with a flat class hierarchy of 100 derived classes. The difference between the C++ applications lies in the encoding (Open/Tag/Kind) and the syntax (Unified/Special) used. The Kind encoding is the same as the Tag encoding, but it does not require substitutability, and thus can be implemented with a direct switch on tags. It is only supported through the specialized syntax in our library, as it differs from the Tag encoding only semantically. The optimized OCaml compiler \texttt{ocamlopt.opt} that we used to compile the code can be based on different toolsets on some platforms, e.g. Visual C++ or GCC on Windows. To make the comparison fair we had to make sure that the same toolset was used to compile the C++ code. We ran the tests on both of the machines described above using the following configurations:

\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item The tests on the Windows 7 laptop were all based on the \emph{Visual C++ toolset} and used \texttt{ocamlopt.opt} version 3.11.0.
\item The tests on the Linux desktop were all based on the \emph{GCC toolset} and used \texttt{ocamlopt.opt} version 3.11.2.
\end{itemize}

\noindent The timing results presented in Figure~\ref{fig:OCamlComparison} are averaged over 101 measurements and show the number of seconds it took to perform a million decompositions within our sequential benchmark.

\begin{figure}[htbp]
\centering
\includegraphics[width=0.47\textwidth]{OCamlComparison.png}
\caption{Performance comparison of various encodings and syntax against OCaml code}
\label{fig:OCamlComparison}
\end{figure}

We can see that the use of specialized syntax on a closed/sealed hierarchy can match the speed of, and even be four times faster than, the code generated by the native OCaml compiler. Once we go for an open solution, we become about 30-50\% slower.

\subsection{Qualitative Comparison}
\label{sec:qualcmp}

For this experiment we reimplemented a visitor-based C++ pretty printer for Pivot's IPR using our pattern matching library. Most of the rewrite was performed by sed-like replaces that converted visit methods into corresponding case clauses. In several cases we had to manually reorder case clauses to avoid redundancy, as visit methods for base classes typically came before those for derived classes, while for pattern matching we needed them to come after. The redundancy checking support in the library, discussed in \textsection\ref{sec:redun}, was invaluable in finding all such cases. During this refactoring we made several simplifications that became obvious in the pattern-matching code but were not obvious in the visitor code because of its inversion of control. Simplifications that were applicable to the visitor code were eventually integrated into it as well, to make sure we do not compare algorithmically different code. In any case we made sure that both approaches, regardless of simplifications, produced byte-for-byte the same output as the original pretty printer we started from. The executable for the pattern-matching approach was smaller than that for visitors, and so was the source code. We extracted from both sources the functionality that was common to them and placed it in a separate translation unit to make sure it does not participate in the comparison. We kept, however, all the comments that were equally applicable to the code in either approach. Note that the visitors involved in the pretty printer above did not use forwarding: since all the C++ constructs were handled by the printer, every statically possible visit method was overridden based on the static type of the argument.

%Listing parameter for a case clause always causes access to member. Best hope is
%that compiler will eliminate it if it is not needed. At the moment we do not
%have means to detect empty macro arguments or \_.

%To be continued...

In general, based on our rewriting experience, we would not recommend rewriting existing visitor code with pattern matching, for the simple reason that the pattern matching code will likely follow the structure already set by the visitors. Pattern matching was most effective when writing new code, where we could design the structure of the code with the pattern matching facility in our toolbox.
\section{Related Work} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:rw}

There are two main approaches to compiling pattern matching code: the first is based on \emph{backtracking automata} and was introduced by Augustsson~\cite{}; the second is based on \emph{decision trees} and is attributed in the literature to Dave MacQueen and Gilles Kahn in their implementation of the Hope compiler~\cite{}. The backtracking approach usually generates smaller code, while the decision-tree approach produces faster code by ensuring that each primitive test is only performed once. Neither approach specifically addresses type patterns or type switching; both simply assume the presence of a primitive operation capable of performing type tests. The memoization device we propose is not specifically concerned with compiling pattern matching and can be used independently. In particular, it can be combined with either the backtracking or the decision-tree approach to avoid repeated decisions on a datum that has already been seen.

%xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

\emph{Extensible Visitors with Default Cases}~\cite[\textsection 4.2]{Zenger:2001} attempt to solve the extensibility problem of visitors; however, the solution, after remapping it onto C++, has problems of its own. The visitation interface hierarchy can easily be grown linearly (adding new cases for the new classes in the original hierarchy each time), but independent extensions by different authorities require a developer's intervention to unify them all before they can be used together. This may not be feasible in environments that use dynamic linking. To avoid writing even more boilerplate code in new visitors, the solution would require the use of virtual inheritance, which typically has the overhead of an extra memory dereference. On top of the double dispatch already present in the visitor pattern, the solution incurs two additional virtual calls and a dynamic cast for each level of visitor extension. The additional double dispatch is incurred by forwarding default handling from a base visitor to a derived one, while the dynamic cast is required for safety and can be replaced with a static cast when the visitation interface is guaranteed to be grown linearly (extended by one authority only). Yet another virtual call is required to be able to forward computations on subcomponents of tree-like structures to the most derived visitor. This last function lets one avoid the necessity of using the heap to allocate a temporary visitor through the \emph{Factory Design Pattern}~\cite{DesignPatterns1993} used in the \emph{Extensible Visitor} solution originally proposed by Krishnamurthi, Felleisen and Friedman~\cite{Krishnamurthi98}.

In order to address the expression problem in Haskell, L\"{o}h and Hinze proposed to extend its type system with open data types and open functions~\cite{LohHinze2006}. Their solution allows the user to mark top-level data types and functions as open and then provide concrete variants and overloads anywhere in the program. Open data types are extensible but not hierarchical, which largely avoids the problems discussed here. The semantics of open extension is given by transformation into a single module, where all the definitions are seen in one place. This is a significant limitation of their approach that prevents it from being truly open, since it essentially assumes a whole-program view, which excludes any extension via DLLs.
As is the case with many other implementations of open extensions, the authors rely on a closed world for an efficient implementation: in their implementation, \emph{``data types can only be entirely abstract (not allowing pattern matching) or concrete with all constructors with the reason being that pattern matching can be compiled more efficiently if the layout of the data type is known completely''}. The authors also believe that \emph{there are no theoretical difficulties in lifting this restriction, but it might imply a small performance loss if closed functions pattern match on open data types}. Our work addresses exactly this problem, showing that it is not only theoretically possible but also practically efficient, and in application to a broader problem.

Polymorphic variants in OCaml~\cite{garrigue-98} allow the addition of new variants later. They are simpler, however, than object-oriented extensions, as they do not form subtyping between variants themselves, but only between combinations of them. This makes for an important distinction between \emph{extensible sum types} like polymorphic variants and \emph{extensible hierarchical sum types} like classes. An important property of extensible sum types is that each value of the underlying algebraic data type belongs to exactly one disjoint subset, tagged with a constructor. The \emph{nominative subtyping} of object-oriented languages does not usually have this disjointness, making classes effectively have multiple types. In particular, the case of disjoint constructors can be seen as a degenerate case of a flat class hierarchy among the multitude of possible class hierarchies.

\emph{Tom} is a pattern-matching compiler that can be used together with Java, C or Eiffel to bring a common pattern matching and term rewriting syntax into these languages~\cite{Moreau:2003}. It works as a preprocessor that transforms syntactic extensions into imperative code in the target language. Tom is quite transparent as to the concrete target language used and can potentially be extended to other target languages besides the three supported now. In particular, it never uses any semantic information of the target language during the compilation process, and it does not inspect or modify the source-language part (the preprocessor is only aware of parentheses and block delimiters of the source language). Tom has a sublanguage called Gom that can be used to define algebraic data types in a uniform manner, which the preprocessor then transforms into concrete definitions in the target language. Alternatively, the user can provide mappings to their own data structures that the preprocessor will use to generate the code. In comparison to our approach, Tom has much bigger goals. The combination of pattern matching, term rewriting and strategies turns Tom into a tree-transformation language similar to Stratego/XT, XDuce and others. The main emphasis is on expressivity and speed of development, which often makes one wonder about the run-time complexity of the generated code. Tom's approach is also prone to the general problems of any preprocessor-based solution~\cite[\textsection 4.3]{SELL}. For example, when several preprocessors have to be used together, each independent extension may not be able to understand the other's syntax, making it impossible to form a toolchain. The library approach we follow avoids most of these problems by relying only on a standard C++ compiler. It also lets us employ the semantics of the language within patterns: e.g.
our patterns work directly on the underlying user-defined data structures, largely avoiding abstraction penalties. A tighter integration with the language semantics also makes our patterns first-class citizens that can be composed and passed to other functions. The approach we take to type switching can also be used by Tom's preprocessor to implement type patterns efficiently; at present, Tom's handling of them is based on the highly inefficient \code{instanceof} operator and its equivalents in the other target languages.

Pattern matching in Scala~\cite{Scala2nd} also allows type patterns and thus type switching. The language supports extensible and hierarchical data types, but their handling in type-switching constructs varies. Sealed classes are handled with an efficient switch over all tags, since sealed classes cannot be extended. Classes that are not sealed are handled with a combination of an \code{InstanceOf} operator and a decision tree~\cite{EmirThesis}.

%An example would be our generalized n+k patterns where we
%can turn any invertible function even user defined into a pattern.

There have been previous attempts to use pattern matching with the Pivot framework that we used to experiment with our library. In his dissertation, Pirkelbauer devised a pattern language capable of representing various entities in a C++ program. The patterns were then translated with a tool into a set of visitors implementing the underlying pattern matching semantics~\cite{PirkelbauerThesis}. Earlier, Cook et al.\ used expression templates to implement a query language for Pivot's Internal Program Representation~\cite{iql04}. The principal difference of their work from ours is that the authors were essentially creating a pattern matcher for a given, concrete class hierarchy, which let them take into account the semantics of the entities represented by the classes in that hierarchy. Our approach is parameterized over the class hierarchy and thus provides lower-level pattern-matching functionality that simplifies work with that hierarchy. One can think of it as a generalized \code{dynamic_cast}.

\section{Future Work} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:fw}

In the future we would like to provide an efficient multi-threaded implementation of our library, as it currently relies heavily on static variables and global state, which will cause problems in a multi-threaded environment. The match statement that we presented here deals with only one subject at the moment, but we believe that our memoization device, along with the vtable pointer memoization technique we presented, can cope reasonably efficiently with multiple subjects. Their support will make our library more general by addressing asymmetric multiple dispatch. We would also like to experiment with other kinds of cache indexing functions in order to decrease the frequency of conflicts, especially those coming from the use of dynamically-linked libraries.

Containers as described by the standard C++ library do not have the implicit recursive structure present in lists, sequences and other recursive data structures of functional languages. Viewing them as such through views will likely incur a significant performance overhead that is not usually affordable in the kinds of applications C++ is used for.
We would therefore like to experiment with pattern matching alternatives that will let us work with STL containers efficiently, yet as expressively as in functional languages.

\section{Conclusions} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:cc}

Type switching is an open alternative to the visitor design pattern that overcomes the restrictions, inconveniences, and difficulties in teaching and use typically associated with it. Our implementation comes close to or outperforms the visitor design pattern, even in a library setting using a production-quality compiler, where the performance baseline is already very high. We describe three techniques that can be used to implement type switching, type testing, pattern matching, predicate dispatching, and other facilities that depend on the run-time type of an argument, and demonstrate their efficiency.

The \emph{Memoization Device} is an optimization technique that maps run-time values to execution paths, allowing shortcuts to be taken on subsequent runs with the same value. The technique does not require code duplication and in typical cases adds only a single indirect assignment to each of the execution paths. It can be combined with other compiler optimizations and is particularly suitable for use in a library setting. \emph{Vtable Pointer Memoization} is a technique based on the memoization device that exploits the uniqueness of virtual table pointers not only to speed up execution, but also to properly uncover the dynamic type of an object. This technique is the backbone of our fast type switch as well as of the memoized dynamic cast optimization. The \emph{TPL Dispatcher} is yet another technique that can be used to implement best-fit type switching on tagged classes. The technique has its pros and cons in comparison to vtable pointer memoization, which we discuss in the paper. These techniques can be used in both compiler and library settings, and support separate compilation and dynamic linking well. They are open to class extensions and interact well with other C++ facilities such as multiple inheritance and templates. The techniques are not specific to C++ and can be adopted in other languages for similar purposes.

Using the above techniques, we implemented a library for efficient type switching in C++. We used the library to rewrite existing code that relied heavily on visitors, and discovered that the resulting code became much shorter, simpler, and easier to maintain and comprehend.

\bibliographystyle{abbrvnat}
\bibliography{mlpatmat}

\end{document}
{ "alphanum_fraction": 0.7385493494, "avg_line_length": 55.9203152364, "ext": "tex", "hexsha": "926e36861e57b25f6134f381eafea46ced02b2d6", "lang": "TeX", "max_forks_count": 108, "max_forks_repo_forks_event_max_datetime": "2021-11-18T11:06:59.000Z", "max_forks_repo_forks_event_min_datetime": "2015-02-13T17:39:07.000Z", "max_forks_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "akrzemi1/Mach7", "max_forks_repo_path": "media/papers/TR/mlpatmat-all-in-one.tex", "max_issues_count": 62, "max_issues_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_issues_repo_issues_event_max_datetime": "2021-11-14T22:02:14.000Z", "max_issues_repo_issues_event_min_datetime": "2015-01-12T07:59:17.000Z", "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "akrzemi1/Mach7", "max_issues_repo_path": "media/papers/TR/mlpatmat-all-in-one.tex", "max_line_length": 217, "max_stars_count": 1310, "max_stars_repo_head_hexsha": "eef288eb9fe59712ff153dd70791365391b7b118", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "akrzemi1/Mach7", "max_stars_repo_path": "media/papers/TR/mlpatmat-all-in-one.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-18T04:44:01.000Z", "max_stars_repo_stars_event_min_datetime": "2015-01-04T03:44:04.000Z", "num_tokens": 62176, "size": 255444 }
\input{../header.tex}

\title{\vspace{-2cm}INF3490/INF4490 Exercises - Support Vector Machines}
\author{Eivind Samuelsen\input{../author_footnote.tex}}
\date{}

% Removing paragraph indents is sometimes useful:
\setlength\parindent{0pt}

% ==============================================================================
% ================================= DOCUMENT ===================================
\begin{document}
\renewcommand\marginsymbol[1][0pt]{%
\tabto*{0cm}\makebox[-1cm][c]{$\mathbb{P}$}\tabto*{\TabPrevPos}}

\maketitle

\input{../intro.tex}

\section{SVM vs MLP}
What advantages and disadvantages do support vector machines (SVMs) have compared to multilayer perceptrons (MLPs)? What problems do they both suffer from?

\section{Kernel functions}
What is a kernel function? Which are the most common kernel functions, and roughly what kinds of transformations do they correspond to?

\section{Soft Margins}
What two factors must be balanced when using an SVM with a soft margin?

\section{Ensemble}
Try to come up with a few cases where, when using an ensemble of classifiers, it would be fine to just pick the most popular class, and cases where you would want a majority in favor of a single class or even full consensus.

\section{Principal Component Analysis}
What is the motivation behind principal component analysis?

\section{Covariance}
Work out the covariance between the x and y dimensions of the following 2-dimensional data set. Describe what the results indicate about the data.

\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c|c}
Index & 1 & 2 & 3 & 4 & 5 \\
\hline
x & 10 & 39 & 19 & 23 & 28 \\
y & 43 & 13 & 32 & 21 & 20 \\
\hline
\end{tabular}
\caption{Two-dimensional data set}
\label{tab:cov}
\end{table}

\input{../contact.tex}

\end{document}
% ==============================================================================
{ "alphanum_fraction": 0.6434599156, "avg_line_length": 35.7735849057, "ext": "tex", "hexsha": "5dcf740d1f417045c1d9c701a836fdb4e852609f", "lang": "TeX", "max_forks_count": 15, "max_forks_repo_forks_event_max_datetime": "2021-03-15T12:12:50.000Z", "max_forks_repo_forks_event_min_datetime": "2016-10-31T12:30:37.000Z", "max_forks_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "olehermanse/INF3490-PythonAI", "max_forks_repo_path": "material/week8/inf3490-ex8.tex", "max_issues_count": 2, "max_issues_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_issues_repo_issues_event_max_datetime": "2017-08-29T00:28:54.000Z", "max_issues_repo_issues_event_min_datetime": "2016-10-20T09:36:19.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "olehermanse/INF3490-PythonAI", "max_issues_repo_path": "material/week8/inf3490-ex8.tex", "max_line_length": 226, "max_stars_count": 16, "max_stars_repo_head_hexsha": "8b813345264513a57934317b01e1311628dc5b01", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "mpambasange/MachineLearning", "max_stars_repo_path": "material/week8/inf3490-ex8.tex", "max_stars_repo_stars_event_max_datetime": "2022-02-15T20:56:07.000Z", "max_stars_repo_stars_event_min_datetime": "2016-09-01T08:50:59.000Z", "num_tokens": 483, "size": 1896 }
\chapter{Background}

\section{GPU Computing}

To understand how GPU software works, it’s important to first understand GPU hardware, since the code used to program GPUs aligns very closely with the underlying hardware. Today’s GPUs embody many fundamental design decisions that may seem unfamiliar when compared to those of a CPU architecture. This thesis uses AMD's GCN architecture as an example for many of the GPU metrics and comparisons. This is because the GCN architecture is an extremely prevalent GPU architecture, and it is the main architecture currently supported by AMD's ROCm ecosystem. This includes the Radeon Vega Pro cards with the 2017 GCN 5 architecture that were used for the tests and experiments in this thesis. As of November 2021, ROCm has expanded its list of supported devices to include the AMD Radeon Pro W6800 GPU, which was released in 2020 and uses the newer RDNA 2 architecture \cite{rocmCompatibility}. We hope that in the near future, ROCm will continue to evolve and support new GPU architectures and devices.

\subsection{Modern GPU Architecture}

The fundamental building blocks of all modern GPU designs are clusters of processing elements grouped together with resources. AMD refers to these processing groups as Compute Units (CUs), which can be equated with Streaming Multiprocessors (SMs) on CUDA-capable GPUs \cite{amdConferenceTalk}. AMD's GCN architecture groups many of these CUs into processors called Shader Engines that are each managed by their own Workload Manager, as seen in Figure \ref{gcn2}. Each Compute Unit has its own resources such as a scalar unit for flow control, cache memory, registers, and more. The main computation unit of each CU is a collection of vector units, or Streaming Processors in CUDA terminology, that share an instruction cache. These vector units can be conceptually thought of as SIMD (Single Instruction/Multiple Data) units that are capable of performing floating point calculations, since each SIMD contains its own floating point unit. They are capable of treating arrays of data as single elements across which an operation is applied. In AMD’s GCN architecture, each Compute Unit is divided into four SIMD units, where each unit is 16 lanes wide and therefore capable of simultaneously executing a single operation across 16 work items \cite{gcnWhitepaper}. This gives a throughput of 64 single-precision operations per cycle on each CU.

\begin{figure}[hbtp]
\includegraphics[width=\textwidth]{figures/gcn2.png}
\centering
\caption{A GCN-based AMD GPU that groups Compute Units into Shader Engines that are each managed by a Workload Manager \cite{amdConferenceTalk}.}
\label{gcn2}
\end{figure}

\quad Threads within a GPU are not scheduled individually. Instead, threads are grouped into units called wavefronts on AMD devices or warps on CUDA devices. Wavefront size is a property of the hardware architecture: wavefronts are 64 threads wide on GCN-based architectures, for example, while warps are 32 threads wide on NVIDIA's CUDA-capable architectures. Wavefronts are executed on a single SIMD in four consecutive cycles. A one-cycle instruction is therefore executed in four batches of 16 lanes on one SIMD unit to cover all 64 threads of an AMD wavefront. This hierarchy of CUs, SIMD units, wavefronts, and threads is illustrated in Figure \ref{gcn1}.
\begin{figure}[hbtp]
\includegraphics[width=70mm,scale=0.5]{figures/gcn1.png}
\centering
\caption{The contents of a Compute Unit are divided into a group of resources and SIMD units, which are each in turn separated into wavefronts and threads \cite{gpuSortPerformance}.}
\label{gcn1}
\end{figure}

\quad GPU architectures are further complicated by their multi-tier hierarchy of memory. Visible to all CUs on the entire GPU are gigabytes of Graphics Double Data Rate (GDDR) synchronous DRAM and, on newer devices, sometimes High-Bandwidth Memory (HBM). This memory is collectively referred to as `global memory'. When compared to CPU DRAM, the off-chip global memory of GPUs is designed for high bandwidth due to the characteristics of the data commonly used for GPU acceleration. Unfortunately, global memory is still subject to long latencies, which is a fundamental property of the memory type \cite{greenBook, pycuda}. This latency can often act as a bottleneck for GPU acceleration. PCIe controllers handle transfers across the PCIe bus to and from host memory, and some devices have Infinity Fabric Controllers that can manage communication with other GPUs on the system. The inclusion of DMA engines allows for asynchronous memory transfers between the device and the host, or between multiple devices. This layout of memory controllers and engines is illustrated in Figure \ref{gcn3}. At a finer level, each Compute Unit also has its own scratch-pad memory called the Local Data Share (LDS), which is typically 64KB on AMD and NVIDIA architectures. This data share is shared across SIMD units, and it can be used for communication between threads. Accessing shared memory is much faster than accessing global memory, so a variety of techniques exist to help capitalize on this data share during execution.

\begin{figure}[hbtp]
\includegraphics[width=\textwidth]{figures/gcn3.png}
\centering
\caption{A high level overview of the memory architecture on a GCN-based AMD GPU \cite{amdConferenceTalk}.}
\label{gcn3}
\end{figure}

\subsection{GPU Software}

When writing applications that must run on GPUs, code is split into two main parts: host code and device code. Host code is the normal part of an application that runs on the CPU and is written in a standard language such as C++. Device code, on the other hand, consists of functions that are written to run on the GPU's SIMD units. The entry points to device code are functions called kernels. Kernels make use of memory buffers that are allocated on the device from host code in order to manipulate a set of data. This means that most GPU-accelerated applications use a repeating pattern of allocating space on the device, copying data into the memory buffer, running a device kernel, and copying the data back to the host.

\quad When GPU kernels are launched, they make use of the underlying architecture’s SIMD units to execute the work across many parallel workers, which are often referred to as threads. These threads are organized into groups called workgroups on AMD devices or thread blocks on CUDA-compatible devices. All threads within a workgroup exist on the device and the CU at the same time. These workgroups are made up of multiple wavefronts as discussed earlier, and GCN hardware supports up to 16 wavefronts per workgroup. Finally, workgroups are organized into a grid of multiple workgroup blocks, as shown in Figure \ref{gridthreadblock}. The number and organization of the blocks and grid are under programmer control within the size limits allowed by the architecture.
Blocks can be organized logically into one-, two-, or three-dimensional grids, a choice usually dictated by the topology of the data. Kernels executing on images, for example, often work well with two-dimensional grids.

\begin{figure}[hbtp]
\includegraphics[width=100mm]{figures/gridblockthread.png}
\centering
\caption{GPU kernels divide data elements into logical groupings of grids, threads, and blocks \cite{greenBook}.}
\label{gridthreadblock}
\end{figure}

\quad The hierarchy of grids, blocks, and threads closely matches the organization of the underlying GPU hardware. Blocks are dynamically scheduled onto compute units, and all threads in a block execute on the same compute unit. This allows threads to share LDS memory and L1 cache. The downside to this hierarchy and memory model is that it adds an extra layer of complexity for the programmer. The grid and block size parameters must be chosen to suit both the data and the device’s architecture, and kernel code should be designed to take advantage of the faster LDS and cache memory when possible in order to avoid slower accesses to global memory.

\quad The final software detail of GPU programming that needs to be discussed is streams. Streams are a way of dividing up the resources of the device for further parallel execution. Streams are queues of tasks that are guaranteed to complete in order on a given stream, and each stream is allowed to overlap and run concurrently with other streams on the same device. The exception to this rule is a special stream known as the null-stream. Tasks in the null-stream are not allowed to overlap with any tasks on any other stream. They only begin execution once all tasks enqueued on all other streams have completed. Blocking calls like synchronous memory copies always happen on the null-stream.

\section{ROCm}

For a long time, NVIDIA has continued to dominate the GPU industry. As of the second quarter of 2021, NVIDIA is estimated to hold 83\% of the discrete GPU market versus AMD's 17\% \cite{marketshare}. In the world of GPGPU computing, this dominance has come from its proprietary software included in the CUDA Toolkit, which allows for the creation of GPU-accelerated software for embedded systems, workstations, data centers, cloud platforms, and HPC computers \cite{cuda}. This flexibility comes from its compiler, NVCC, which leverages the widely used LLVM infrastructure to allow developers to write kernels in a modified C++ syntax and compile them for execution on NVIDIA devices \cite{nvcc}. On top of this, CUDA's early entry and dominance in the industry has given it the advantage of a strong community and ecosystem. The Radeon Open eCosystem (ROCm) is AMD's answer to CUDA, which it describes as its ``open software platform for GPU-accelerated computing'' \cite{rocm}. Launched as part of AMD's ``Boltzmann Initiative'' in 2015, ROCm aims to solve new problems in GPU computing while maintaining an open source and multi-platform identity. This ecosystem provides a wide range of programming models and languages to choose from. Among those is a C++ dialect called HIP (the Heterogeneous-Computing Interface for Portability) that provides many APIs and interfaces that mirror CUDA's \cite{hip}. It even includes a tool called HIPify that does most, if not all, of the work of converting CUDA programs to HIP. Unlike CUDA, which is exclusive to NVIDIA devices, HIP allows for portability across platforms at the expense of a few API limitations \cite{hipfaq}.
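As a concrete illustration of this host/device split and of HIP's CUDA-like interface, the following minimal sketch allocates a device buffer, copies data to it, launches a one-dimensional kernel, and copies the result back. The kernel, array size, and launch configuration are arbitrary choices for this example rather than code from any particular project.

\begin{verbatim}
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Device code: each thread scales one element of the array.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard threads that fall past the end of the data
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    // Allocate a buffer in device (global) memory and copy the input into it.
    float* device = nullptr;
    hipMalloc(&device, n * sizeof(float));
    hipMemcpy(device, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch a one-dimensional grid of 256-thread blocks on the null-stream.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    hipLaunchKernelGGL(scale, dim3(blocks), dim3(threadsPerBlock), 0, 0,
                       device, 2.0f, n);

    // Copy the results back to the host and release the device buffer.
    hipMemcpy(host.data(), device, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(device);

    std::printf("host[0] = %f\n", host[0]);
    return 0;
}
\end{verbatim}

The two \texttt{hipMemcpy} calls here are blocking and therefore run on the null-stream; an asynchronous variant of the same pattern would use \texttt{hipMemcpyAsync} together with an explicitly created stream.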
\begin{figure}[hbtp]
\includegraphics[width=\textwidth]{figures/ROCm_Stack.png}
\centering
\caption{An overview of the various systems that make up ROCm's foundation, including compilers for both GCN and LLVM-based device runtimes \cite{rocmDocs}.}
\label{rocm1}
\end{figure}

\quad ROCm uses what is called the ROCr runtime, which is itself based on the Heterogeneous System Architecture (HSA) Runtime API. The runtime is language-independent, which allows it to serve many GPU-compatible languages, including AMD's Heterogeneous Compute Compiler (HCC), which provides full control over AMD devices, and the Heterogeneous-Computing Interface for Portability (HIP), which specializes in cross-platform compatibility with both AMD and NVIDIA. To service these languages, the ROCm stack includes both GCN and LLVM compiler toolchains to compile GPU code for all compatible devices. This setup is illustrated in Figure \ref{rocm1}. ROCm also provides a variety of tools for supporting multiple GPUs. The ROCK kernel driver includes what is known as ROCmRDMA, which allows third-party kernel drivers to use direct GPU memory access (DMA) and enables DMA-based peer-to-peer data exchanges between devices over PCI Express. ROCm also uses the Unified Communication X (UCX) library for both inter-node and intra-node communication, as well as the open source MPI implementation OpenMPI.

\section{Python}

\subsection{Python language}

Python is a dynamically-typed high-level scripting language that has become increasingly popular in recent years. According to Stack Overflow's 2021 Developer Survey, Python gained in popularity over the previous year to become the third most popular language of 2021, with more than 48\% of developers saying they use Python, and the sixth most loved language, with more than 67\% of developers expressing interest in continuing to develop with it \cite{soSurvey}. Python has become immensely popular in areas of computing such as machine learning, data analytics, and scientific computing. One of the biggest advantages Python provides is its high level of abstraction and low verbosity, which makes it one of the most concise programming languages -- even when compared with functional languages \cite{rosetta}. This allows developers to focus more on high-level designs and processes and less on syntax \cite{pycuda}. When it comes to scientific computing, for example, it has been shown that Python's high level interfaces can help reduce the amount of time that scientists and researchers have to spend writing code \cite{gpucomppy}.

\quad Python also comes with the advantage of a large computing ecosystem and many open source libraries available for developers to use. Many libraries for scientific computing have historically been focused on raw performance and achieve this through lower level languages such as Fortran, C, and C++ \cite{pythonEcosystem}. Performance and ease-of-use do not have to be mutually exclusive, however. More recently, library designers have been writing performance-driven code in existing lower-level languages while designing interfaces for this code in a thin layer of Python function wrappers. The language Python is now becoming synonymous with its massive collection of community-driven libraries and frameworks that tackle just about every computing need imaginable, the most popular of which are open source, open to contributors, and free for programmers to use.
\subsection{CPython}

CPython is the original reference implementation of the Python language. As the name suggests, CPython is implemented in the language C. This is the reason it is so easy for developers to extend the Python language through extension modules written in C or C-compatible languages. Although Python is often described simply as an interpreted language, the CPython implementation includes both a compiler and an interpreter. The compiler generates an AST from the Python source code and compiles it down into bytecode instructions. This bytecode is executed by CPython's stack-based virtual machine, a giant evaluation loop that terminates once the Python program should stop for any reason.

\quad A very important part of the CPython runtime to understand is the way it handles concurrency. When it comes to maintaining thread state, the CPython interpreter was created with what is known as the ``Global Interpreter Lock'', or GIL. It imposes the restriction that only one thread within a given Python process is allowed to execute Python bytecode at a time. This means that multi-threading is often restricted to tasks such as IO operations that don't require access to CPython objects, functions, or memory, which all require the GIL to be held \cite{cpythonthreads}. Python programmers therefore often have to turn to multiprocessing to achieve parallelism. Since each process runs in its own instance of the Python interpreter and occupies its own region of memory, each Python process has a distinct GIL instance.

\quad Another distinctive feature of CPython is how it handles Python types. It's easy to generalize Python by saying the language has no types. But in reality, within the CPython implementation, everything has one type -- the ``Python Object'' type. All values, error types, functions, and more are represented as objects that can be stored and passed around. To complicate things further, every Python object has to be carefully reference-counted for the Python garbage collector to work. When an object is shared or duplicated, this count is incremented. Each time a reference is no longer needed, the count is manually decremented within CPython code. Once the count reaches zero, the Python interpreter knows that the object is no longer needed and its resources can safely be deallocated.
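As a small illustration of what this manual bookkeeping looks like from the C side, the following sketch shows a hypothetical extension function (the module and function names are invented for this example) that builds the list \texttt{[42]} while incrementing and decrementing reference counts by hand:

\begin{verbatim}
#include <Python.h>

/* Build and return the list [42], managing reference counts manually. */
static PyObject* make_list(PyObject* self, PyObject* args)
{
    PyObject* list = PyList_New(0);          /* new reference (refcount == 1) */
    if (!list) return NULL;

    PyObject* value = PyLong_FromLong(42);   /* another new reference */
    if (!value) { Py_DECREF(list); return NULL; }

    /* PyList_Append does not steal the reference: the list takes its own
       reference to value, so we must release ours afterwards. */
    if (PyList_Append(list, value) < 0) {
        Py_DECREF(value);
        Py_DECREF(list);
        return NULL;
    }
    Py_DECREF(value);   /* drop our reference; the list still holds one */

    return list;        /* ownership of this reference passes to the caller */
}

static PyMethodDef methods[] = {
    {"make_list", make_list, METH_NOARGS, "Return the list [42]."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "refcount_demo", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_refcount_demo(void)
{
    return PyModule_Create(&moduledef);
}
\end{verbatim}

Once the caller drops its last reference to the returned list, the count reaches zero and CPython deallocates the object, exactly as described above.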
{ "alphanum_fraction": 0.810574854, "avg_line_length": 193.630952381, "ext": "tex", "hexsha": "d5f39a32f5184859ec89e63b827265765ccc2fb4", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "8c1bba10a4a037a88ad639c8b0c4c10f12762349", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "jasbury1/millipyde", "max_forks_repo_path": "thesis/src/chapters/background.tex", "max_issues_count": 8, "max_issues_repo_head_hexsha": "8c1bba10a4a037a88ad639c8b0c4c10f12762349", "max_issues_repo_issues_event_max_datetime": "2021-12-04T06:17:50.000Z", "max_issues_repo_issues_event_min_datetime": "2021-06-01T00:58:22.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "jasbury1/millipyde", "max_issues_repo_path": "thesis/src/chapters/background.tex", "max_line_length": 1684, "max_stars_count": null, "max_stars_repo_head_hexsha": "8c1bba10a4a037a88ad639c8b0c4c10f12762349", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "jasbury1/millipyde", "max_stars_repo_path": "thesis/src/chapters/background.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3357, "size": 16265 }
\section{A Hybrid Compact Neural Architecture for Visual Place Recognition}

\emph{IEEE ROBOTICS AND AUTOMATION LETTERS, VOL. 5, NO. 2, APRIL 2020} {[}4{]}

\subsection{Introduction}\label{header-n182}

Performing visual place recognition (VPR) reliably is a challenge for any robotic system or autonomous vehicle operating over long periods in real-world environments. Convolutional neural networks (CNNs) have been applied to the field of VPR with great success, typically using dedicated hardware: GPUs. However, classical CNNs neglect any temporal information between consecutive images. In contrast, sequence-based algorithms, such as SeqSLAM, match two or more sequences of images to perform VPR.

Two main kinds of deep learning models can be used to capture sequence patterns: \emph{computer-science-oriented} and \emph{neuroscience-oriented} models. In recent research, recurrent neural networks (RNNs) have been used to reproduce the multi-scale spatial representation of an environment. While the results are promising, these computer-science-oriented systems have been tested only in small synthetic environments, and their integration with neuroscience-oriented recurrent models such as continuous attractor neural networks (CANNs) is not well explored.

An attractor network is a network of nodes (i.e. neurons), often recurrently connected, whose time dynamics settle to a stable pattern. A pattern can be stationary, time-varying (i.e. cyclic), or chaotic. The particular pattern a network settles to is called its \emph{attractor}. In neuroscience theory, different kinds of attractor neural networks have been associated with different functions, such as memory, motor behavior, and classification. In more detail, a continuous attractor network is a special type of attractor network, which models a non-linear dynamical system. A dynamical system consists of a \emph{state space}, whose coordinates describe the state at any instant, and a \emph{dynamical rule} that specifies the immediate future of all state variables. For example, the state of a pendulum is its angle and angular velocity, and the evolution rule is Newton's equation $F = ma$. An \emph{attractor} can be discrete (a discrete set of points) or continuous (a continuous object embedded in the state space).

\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{images/continuousattractor.jpg}
\caption{A system (the yellow ball) with a continuous attractor (the blue surface)}
\end{figure}

In this work, the authors propose a hybrid neural network that incorporates both computer-science- and neuroscience-oriented models to perform the VPR task. Their approach comprises two key components: FlyNet, a compact neural network, and a 1-d CANN as a temporal model that encodes sequences of images to perform appearance-invariant VPR using real data. The resulting FlyNet+CANN model achieves competitive AUC results, but with far fewer parameters, minimal training time and a smaller computational footprint than conventional deep learning and algorithmic-based approaches.

\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{images/flynetcann.png}
\caption{FlyNet+CANN hybrid neural architecture}
\end{figure}

\subsection{Previous work}\label{header-n187}

To design deep-learning-based models for VPR it is necessary to explore how this activity is performed by mammalian brains and to take inspiration from it. RatSLAM is an example: it performs visual SLAM using mechanisms inspired by the rodent brain.
\subsection{Previous work}\label{header-n187}

To design deep-learning-based models for VPR it is necessary to explore how this activity is performed by mammalian brains and take inspiration from it. RatSLAM is an example: this method performs visual SLAM by implementing mechanisms used by the rodent brain.

Other models perform VPR by taking inspiration from insect brains, such as those of ants, bees, and flies, which exhibit a great capacity to navigate. Place recognition in insects is, however, most likely mediated by processing within the \emph{mushroom bodies} (MB), a pair of structures involved in classification, learning, and recognition of both olfactory and visual information. Their structure is similar to a multi-layer perceptron (MLP) network, which receives massive input signals from the sensory lobes. These impressive capabilities, achieved with relatively small brains, make them attractive models for roboticists. For FlyNet, the authors take inspiration from algorithmic insights found in the fruit fly olfactory neural circuit and investigate how it can be integrated with recurrent networks for the VPR task.

Classical CNN models for image recognition perform well, but they also have undesirable characteristics: these networks are difficult to deploy on a real robot due to their size and complexity. In contrast, the authors propose the use of compact neural models such as FlyNet to alleviate these requirements. To access and exploit the power of temporal information, researchers have developed a range of RNNs. Another approach, implemented by RatSLAM, incorporates multi-dimensional CANN models with pre-assigned weights and structure. There are also non-neural techniques, like SeqSLAM, that match sequences of pre-processed frames to provide an estimate of place. In this work, the authors develop a new bio-inspired, hybrid neural network for VPR tasks based on insect brain architectures such as FlyNet, which is extremely compact and can incorporate the filtering capabilities of a 1-d CANN to achieve competitive localization results.

\subsection{Proposed method}\label{header-n189}

\subsubsection{FlyNet algorithm}\label{header-n190}

The FlyNet proposed in this work is inspired by the \emph{fly algorithm}. The small brain of Drosophila identifies odors by assigning similar neural activity patterns to similar input odors. The neural network is composed of 4 layers (the input layer, two hidden layers, and the output layer) and works as follows. A binary, sparse random matrix (\emph{random projection}) connects the input layer to the second layer: each neuron receives and sums about 10\% of the input neurons. The same mechanism connects the second and third layers, but the number of neurons in the third layer is the same as in the output layer. Finally, using a WTA (winner-take-all) circuit, the third layer's neurons are mapped to the output layer, setting the 5\% of neurons with the highest values to 1 and the rest to 0. The output layer thus generates a specific binary identifier for the input odor. The \emph{FlyNet Algorithm} (FNA) proposed in this work is a mapping of the fly algorithm for vision purposes. The only difference is the WTA circuit, which is set to keep the 50\% of neurons with the highest values.

\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{images/fly.png}
\caption{The fly algorithm's network architecture. The random projection is shown only for the connection between the two hidden layers, but the whole input layer is connected to the first hidden layer using the same mechanism}
\end{figure}
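A minimal sketch of these two steps, the sparse binary random projection and the winner-take-all circuit, is shown below. The layer sizes follow the text, but the random seed and the input frame are placeholders; this is an illustration rather than the authors' implementation.

\begin{verbatim}
# Sketch of the FlyNet Algorithm (FNA): sparse binary random projection
# followed by a winner-take-all (WTA) step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(n_in, n_out, density=0.10):
    # Binary sparse matrix: each output unit sums ~10% of the input units.
    return (rng.random((n_out, n_in)) < density).astype(np.float32)

def wta(x, keep=0.50):
    # Set the `keep` fraction of units with the highest values to 1, rest to 0.
    k = max(1, int(keep * x.size))
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

frame = rng.random(32 * 64).astype(np.float32)   # flattened 32x64 gray frame
W1 = sparse_projection(32 * 64, 64)              # input -> first hidden layer
W2 = sparse_projection(64, 64)                   # first -> second hidden layer
tag = wta(W2 @ (W1 @ frame))                     # binary place descriptor
\end{verbatim}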
\subsubsection{FlyNet models}\label{header-n193}

The authors implement a range of VPR models using the FNA together with a module that provides temporal filtering capabilities. These network models are the following:

\begin{itemize}
\item
  \textbf{FlyNet:} it is composed of the FNA and terminates with a fully connected (FC) network. Its architecture is a three-layer MLP with 64--64--1000 units respectively, where the first two layers make up the FNA and the last one forms the FC network.
\item
  \textbf{FlyNet+SeqSLAM:} it incorporates the SeqSLAM algorithm on top of the single-frame FlyNet network, so that it can be compared with the other temporal models below.
\item
  \textbf{FlyNet+RNN:} a purely neural model that incorporates an RNN on top of FlyNet and terminates with another FC layer. Its architecture is the same as FlyNet (the FC layers have 100 units), with 512 recurrent units.
\item
  \textbf{FlyNet+CANN:} it incorporates a variation of the 1-dimensional CANN architecture proposed in RatSLAM on top of the FlyNet network. The CANN layer has 1002 units.
\end{itemize}

\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{images/flynetmodels.png}
\caption{FlyNet models}
\end{figure}

\subsection{Experiments}\label{header-n205}

\subsubsection{Dataset and data preprocessing}\label{header-n206}

To evaluate the capabilities of the proposed FlyNet-based models, the authors conduct extensive experiments on two of the most widespread benchmarks used in VPR, the \emph{Nordland} and \emph{Oxford RobotCar} datasets. Nordland includes extreme seasonal changes across spring, summer, fall, and winter, captured during a train journey in northern Norway. The summer traversal is used for training and the remaining ones for testing. The Oxford RobotCar dataset provides over 100 traverses with different lighting (e.g. day, night) and weather (e.g. direct sun, overcast) conditions, recorded during car rides through the city of Oxford. The images are pre-processed before being used by the models: the FlyNet baselines convert the images into single-channel (gray-scale) frames normalized between {[}0, 1{]}, and then resize them to $32 \times 64$.

\subsubsection{Experiments evaluation}\label{header-n208}

The authors train and test the four FlyNet models in order to find the best one and compare it with other existing state-of-the-art techniques. In particular, these methods are \emph{SeqSLAM} (without the FNA attached), \emph{LoST-X}, and \emph{Multi-Process Fusion} (MPF).

\paragraph{Metrics}\label{header-n210}

VPR models' performance is evaluated using precision-recall (PR) curves and area under the curve (AUC) metrics. The tolerance used to consider a query place a correct match is being within 20 frames of the ground-truth location for the Nordland dataset, and up to 50 meters (10 frames) away from the ground truth for the Oxford RobotCar dataset.

\paragraph{Comparison of FlyNet to Other Neural Networks}\label{header-n212}

FlyNet (alone) is compared with four other single-frame models: a simple FC network, an FC network with dropout, a CNN, and an implementation of the NetVLAD method. The FC network has the same architecture as FlyNet: a three-layer MLP with 64--64--1000 neurons respectively. The FC network with dropout is the same as the previous one, but with dropout rates of 90\% and 50\% for the first and second layers, respectively, in order to approximate the FlyNet sparsity and allow a fair comparison. The CNN model has 2 convolutional layers, while the NetVLAD output representation dimensionality is reduced from 4096 to 64 to be comparable in size with FlyNet.
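As an aside, the following is a rough sketch of how such a tolerance-based PR/AUC evaluation could be computed, assuming a frame-aligned ground truth and the availability of scikit-learn. It is not the authors' evaluation code, and the similarity matrix here is random stand-in data rather than model output.

\begin{verbatim}
# Rough sketch of the tolerance-based PR/AUC evaluation described above.
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

def evaluate(similarity, tolerance=20):
    best_ref = similarity.argmax(axis=1)          # best match per query
    scores = similarity.max(axis=1)               # confidence of each match
    ground_truth = np.arange(similarity.shape[0]) # query i matches reference i
    correct = np.abs(best_ref - ground_truth) <= tolerance
    precision, recall, _ = precision_recall_curve(correct, scores)
    return auc(recall, precision)

sim = np.random.rand(1000, 1000)                  # stand-in similarity matrix
print(f"AUC: {evaluate(sim):.3f}")
\end{verbatim}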
\newpage

\subsection{Experimental results}\label{header-n214}

\subsubsection{FlyNet vs. Other Single-Frame Networks}\label{header-n215}

FlyNet is directly competitive with both FC networks, despite having over 3 times fewer parameters (64 k vs. 199 k). For the CNN and NetVLAD models, with 6 and 234 times more parameters than FlyNet respectively, the larger the model, the better the results obtained. Under \emph{small environmental changes} (e.g. summer to fall) both networks achieved over 70\% AUC. However, under \emph{extreme visual changes} (e.g. summer to winter) all these models show relatively similar results, below 12\% AUC, except for NetVLAD with 20\% AUC.

\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/flynetothermodels.png}
\caption{Comparison of FlyNet (alone) to other single-frame neural networks. AUC results across different models on the Nordland dataset (left). Average accuracy over 10 training experiments vs. number of epochs for FlyNet (middle) and a fully connected (FC) network with dropout (right).}
\end{figure}

\subsubsection{FlyNet models evaluation}\label{header-n218}

Although there are significant performance differences at the single-frame matching level, the figure below shows that these differences shrink significantly when sequence-based filtering techniques are used. With FlyNet+SeqSLAM, the performance of FlyNet (alone) was significantly improved. Similarly, the RNN layer on top of FlyNet improved these results even further. However, integrating the output of FlyNet with a 1-d CANN outperformed these models, even under extreme environmental changes: this is the best model.

\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{images/flynetcomparemodels.png}
\caption{AUC results of the four FlyNet models}
\end{figure}

\newpage

\subsubsection{Best model vs. state-of-the-art methods}\label{header-n221}

MPF performs better, being able to recall almost all places at 100\% precision on both the fall and winter testing traverses. FlyNet+CANN achieves state-of-the-art results, comparable with SeqSLAM and MPF, on all these tested traverses.

\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{images/bestmodelvsothers.png}
\caption{AUC results of the state-of-the-art methods measured on the two datasets}
\end{figure}

Similarly, PR performance on the Oxford RobotCar dataset is shown in the following figure. FlyNet+CANN not only achieves state-of-the-art results comparable with the other methods, but it also maintains its PR performance under extreme environmental changes (e.g. overcast to night), as shown in the bottom-right side of the figure.

\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{images/bestmodelvsothersPR.png}
\caption{PR results of the state-of-the-art methods measured on the two datasets}
\end{figure}

\subsubsection{Computational performance}\label{header-n226}

The processing time required by the hybrid model to perform appearance-invariant VPR is compared to that of the state-of-the-art methods in terms of running time for (1) feature extraction, (2) visual place matching between query and reference traverses, and (3) average place recognition time for a single query image from a 1000-image reference database. The average time (3) is calculated as (feature extraction (1) + place matching (2))/1000. Processing time results on the Nordland dataset are reported in the following table. FlyNet+CANN can be up to 6.5, 310, and 1.5 times faster than MPF, LoST-X, and SeqSLAM, respectively.
\begin{longtable}[]{@{}llll@{}}
\caption{Processing time comparison on the Nordland dataset}\tabularnewline
\toprule
\textbf{Method} & \textbf{Feature extraction} & \textbf{Place matching} &
\textbf{Avg. time (fps)}\tabularnewline
\midrule
\endhead
\textbf{FlyNet+CANN} & \textbf{35 sec} & \textbf{25 sec} & \textbf{0.06 sec
(16.66)}\tabularnewline
MPF & 1.9 min & 4.6 min & 0.39 sec (2.56)\tabularnewline
LoST-X & 110 min & 200 min & 18.6 sec (0.05)\tabularnewline
SeqSLAM & 50 sec & 40 sec & 0.09 sec (11.11)\tabularnewline
\bottomrule
\end{longtable}

The following figure compares the networks' complexity with the results obtained, showing the AUC metric for the most challenging appearance change (day to night). The best model proposed in this work obtains the best results with the minimum number of parameters.

\begin{figure}[h!]
\centering
\includegraphics[width=1\linewidth]{images/flynetothermodeldimensions.png}
\caption{Oxford RobotCar AUC performance vs. Network Size. Comparison for the most challenging appearance change (day to night)}
\end{figure}

\subsection{Conclusions}\label{header-n256}

The FlyNet+CANN model achieves competitive visual localization results compared to existing deep learning and algorithmic-based VPR techniques, but with significantly fewer parameters, a smaller footprint, and reduced processing time. The authors demonstrate that, by taking inspiration from the biological brain, it is possible to build sample-efficient, high-performing VPR models. FlyNet has the same number of layers and the same sparse structure found in the fly olfactory neural circuit. Although the fly brain expands the dimensionality of its inputs by a factor of forty, the authors show experimentally that even when this dimensionality is reduced, FlyNet's training accuracy remains around 96\%. At the same time, FlyNet+CANN enables the use of a relatively low-performance but fast network to obtain better VPR results, and it is also able to generalize across challenging environmental changes.
{ "alphanum_fraction": 0.8000619579, "avg_line_length": 50.1242236025, "ext": "tex", "hexsha": "a66459b706fd3a3791b4436488be425aded97b2a", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "521eed39147a4ec22a55dcf7df1b3210bbe8f30b", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "micheleantonazzi/intelligent-systems", "max_forks_repo_path": "article_3.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "521eed39147a4ec22a55dcf7df1b3210bbe8f30b", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "micheleantonazzi/intelligent-systems", "max_issues_repo_path": "article_3.tex", "max_line_length": 289, "max_stars_count": null, "max_stars_repo_head_hexsha": "521eed39147a4ec22a55dcf7df1b3210bbe8f30b", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "micheleantonazzi/intelligent-systems", "max_stars_repo_path": "article_3.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 3922, "size": 16140 }
%%% % \makeatletter % \renewcommand\thesection{\@arabic\c@section} % % \@addtoreset{equation}{section} % % \@addtoreset{figure}{section} % % \@addtoreset{table}{section} % % \@addtoreset{Problem}{section} % \makeatother % \titleformat{\section} % {\sectionFontShape\sectionFontSize\center} % {\scshape \S~\thesection.} % {1ex minus .1ex} % {} \titlecontents{chapter}[0pt]% {\vspace{1ex minus .1ex}}% {\chaptername~\thecontentslabel.\quad}% {}% {\titlerule*[1pc]{.}\contentspage} % \titlecontents{section}[0pt]% % {}% % {\S~\thecontentslabel.\quad}% % {..}% % {\titlerule*[1pc]{.}\contentspage} \makeatletter \renewcommand\chapter{\if@openright\cleardoublepage\else\clearpage\fi% \thispagestyle{fancy}% \global\@topnum\z@% \@afterindentfalse% \secdef\@chapter\@schapter} \makeatother %%% Local Variables: %%% mode: latex %%% TeX-master: "default" %%% coding: utf-8-unix %%% End:
{ "alphanum_fraction": 0.6782122905, "avg_line_length": 21.3095238095, "ext": "tex", "hexsha": "0a991ac78b822da9c7ab126ec58a2e6960917fad", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "bae7977afde0e5dbc6f4a0da8d6e1a742da2acf9", "max_forks_repo_licenses": [ "CC-BY-4.0" ], "max_forks_repo_name": "yamadharma/kermit", "max_forks_repo_path": "phdthesr/doc/examples/diploma/preamble-local.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "bae7977afde0e5dbc6f4a0da8d6e1a742da2acf9", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "CC-BY-4.0" ], "max_issues_repo_name": "yamadharma/kermit", "max_issues_repo_path": "phdthesr/doc/examples/diploma/preamble-local.tex", "max_line_length": 70, "max_stars_count": 1, "max_stars_repo_head_hexsha": "bae7977afde0e5dbc6f4a0da8d6e1a742da2acf9", "max_stars_repo_licenses": [ "CC-BY-4.0" ], "max_stars_repo_name": "yamadharma/kermit", "max_stars_repo_path": "phdthesr/doc/examples/diploma/preamble-local.tex", "max_stars_repo_stars_event_max_datetime": "2020-07-10T15:30:09.000Z", "max_stars_repo_stars_event_min_datetime": "2020-07-10T15:30:09.000Z", "num_tokens": 314, "size": 895 }
%% bare_jrnl_compsoc.tex %% V1.3 %% 2007/01/11 %% by Michael Shell %% See: %% http://www.michaelshell.org/ %% for current contact information. %% %% This is a skeleton file demonstrating the use of IEEEtran.cls %% (requires IEEEtran.cls version 1.7 or later) with an IEEE Computer %% Society journal paper. %% %% Support sites: %% http://www.michaelshell.org/tex/ieeetran/ %% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/ %% and %% http://www.ieee.org/ %%************************************************************************* %% Legal Notice: %% This code is offered as-is without any warranty either expressed or %% implied; without even the implied warranty of MERCHANTABILITY or %% FITNESS FOR A PARTICULAR PURPOSE! %% User assumes all risk. %% In no event shall IEEE or any contributor to this code be liable for %% any damages or losses, including, but not limited to, incidental, %% consequential, or any other damages, resulting from the use or misuse %% of any information contained here. %% %% All comments are the opinions of their respective authors and are not %% necessarily endorsed by the IEEE. %% %% This work is distributed under the LaTeX Project Public License (LPPL) %% ( http://www.latex-project.org/ ) version 1.3, and may be freely used, %% distributed and modified. A copy of the LPPL, version 1.3, is included %% in the base LaTeX documentation of all distributions of LaTeX released %% 2003/12/01 or later. %% Retain all contribution notices and credits. %% ** Modified files should be clearly indicated as such, including ** %% ** renaming them and changing author support contact information. ** %% %% File list of work: IEEEtran.cls, IEEEtran_HOWTO.pdf, bare_adv.tex, %% bare_conf.tex, bare_jrnl.tex, bare_jrnl_compsoc.tex %%************************************************************************* % *** Authors should verify (and, if needed, correct) their LaTeX system *** % *** with the testflow diagnostic prior to trusting their LaTeX platform *** % *** with production work. IEEE's font choices can trigger bugs that do *** % *** not appear when using other class files. *** % The testflow support page is at: % http://www.michaelshell.org/tex/testflow/ % Note that the a4paper option is mainly intended so that authors in % countries using A4 can easily print to A4 and see how their papers will % look in print - the typesetting of the document will not typically be % affected with changes in paper size (but the bottom and side margins will). % Use the testflow package mentioned above to verify correct handling of % both paper sizes by the user's LaTeX system. % % Also note that the "draftcls" or "draftclsnofoot", not "draft", option % should be used if it is desired that the figures are to be displayed in % draft mode. % % The Computer Society usually requires 12pt for submissions. % \documentclass[12pt,journal,compsoc]{IEEEtran} % % If IEEEtran.cls has not been installed into the LaTeX system files, % manually specify the path to it like: % \documentclass[12pt,journal,compsoc]{../sty/IEEEtran} % Some very useful LaTeX packages include: % (uncomment the ones you want to load) % *** MISC UTILITY PACKAGES *** % %\usepackage{ifpdf} % Heiko Oberdiek's ifpdf.sty is very useful if you need conditional % compilation based on whether the output is pdf or dvi. 
% usage: % \ifpdf % % pdf code % \else % % dvi code % \fi % The latest version of ifpdf.sty can be obtained from: % http://www.ctan.org/tex-archive/macros/latex/contrib/oberdiek/ % Also, note that IEEEtran.cls V1.7 and later provides a builtin % \ifCLASSINFOpdf conditional that works the same way. % When switching from latex to pdflatex and vice-versa, the compiler may % have to be run twice to clear warning/error messages. % Code listings \newcommand{\code}[1]{\texttt{#1}} % *** CITATION PACKAGES *** % \ifCLASSOPTIONcompsoc % IEEE Computer Society needs nocompress option % requires cite.sty v4.0 or later (November 2003) % \usepackage[nocompress]{cite} \usepackage{cite} \else % normal IEEE \usepackage{cite} \fi % cite.sty was written by Donald Arseneau % V1.6 and later of IEEEtran pre-defines the format of the cite.sty package % \cite{} output to follow that of IEEE. Loading the cite package will % result in citation numbers being automatically sorted and properly % "compressed/ranged". e.g., [1], [9], [2], [7], [5], [6] without using % cite.sty will become [1], [2], [5]--[7], [9] using cite.sty. cite.sty's % \cite will automatically add leading space, if needed. Use cite.sty's % noadjust option (cite.sty V3.8 and later) if you want to turn this off. % cite.sty is already installed on most LaTeX systems. Be sure and use % version 4.0 (2003-05-27) and later if using hyperref.sty. cite.sty does % not currently provide for hyperlinked citations. % The latest version can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/cite/ % The documentation is contained in the cite.sty file itself. % % Note that some packages require special options to format as the Computer % Society requires. In particular, Computer Society papers do not use % compressed citation ranges as is done in typical IEEE papers % (e.g., [1]-[4]). Instead, they list every citation separately in order % (e.g., [1], [2], [3], [4]). To get the latter we need to load the cite % package with the nocompress option which is supported by cite.sty v4.0 % and later. Note also the use of a CLASSOPTION conditional provided by % IEEEtran.cls V1.7 and later. % *** GRAPHICS RELATED PACKAGES *** % \ifCLASSINFOpdf \usepackage[pdftex]{graphicx} % declare the path(s) where your graphic files are \graphicspath{{./img/}} % and their extensions so you won't have to specify these with % every instance of \includegraphics \DeclareGraphicsExtensions{.pdf,.jpeg,.png} \else % or other class option (dvipsone, dvipdf, if not using dvips). graphicx % will default to the driver specified in the system graphics.cfg if no % driver is specified. \usepackage[dvips]{graphicx} % declare the path(s) where your graphic files are \graphicspath{{./img/}} % and their extensions so you won't have to specify these with % every instance of \includegraphics \DeclareGraphicsExtensions{.eps} \fi % graphicx was written by David Carlisle and Sebastian Rahtz. It is % required if you want graphics, photos, etc. graphicx.sty is already % installed on most LaTeX systems. The latest version and documentation can % be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/graphics/ % Another good source of documentation is "Using Imported Graphics in % LaTeX2e" by Keith Reckdahl which can be found as epslatex.ps or % epslatex.pdf at: http://www.ctan.org/tex-archive/info/ % % latex, and pdflatex in dvi mode, support graphics in encapsulated % postscript (.eps) format. 
pdflatex in pdf mode supports graphics % in .pdf, .jpeg, .png and .mps (metapost) formats. Users should ensure % that all non-photo figures use a vector format (.eps, .pdf, .mps) and % not a bitmapped formats (.jpeg, .png). IEEE frowns on bitmapped formats % which can result in "jaggedy"/blurry rendering of lines and letters as % well as large increases in file sizes. % % You can find documentation about the pdfTeX application at: % http://www.tug.org/applications/pdftex % *** MATH PACKAGES *** % %\usepackage[cmex10]{amsmath} % A popular package from the American Mathematical Society that provides % many useful and powerful commands for dealing with mathematics. If using % it, be sure to load this package with the cmex10 option to ensure that % only type 1 fonts will utilized at all point sizes. Without this option, % it is possible that some math symbols, particularly those within % footnotes, will be rendered in bitmap form which will result in a % document that can not be IEEE Xplore compliant! % % Also, note that the amsmath package sets \interdisplaylinepenalty to 10000 % thus preventing page breaks from occurring within multiline equations. Use: %\interdisplaylinepenalty=2500 % after loading amsmath to restore such page breaks as IEEEtran.cls normally % does. amsmath.sty is already installed on most LaTeX systems. The latest % version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/amslatex/math/ % *** SPECIALIZED LIST PACKAGES *** % %\usepackage{algorithmic} % algorithmic.sty was written by Peter Williams and Rogerio Brito. % This package provides an algorithmic environment fo describing algorithms. % You can use the algorithmic environment in-text or within a figure % environment to provide for a floating algorithm. Do NOT use the algorithm % floating environment provided by algorithm.sty (by the same authors) or % algorithm2e.sty (by Christophe Fiorio) as IEEE does not use dedicated % algorithm float types and packages that provide these will not provide % correct IEEE style captions. The latest version and documentation of % algorithmic.sty can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/algorithms/ % There is also a support site at: % http://algorithms.berlios.de/index.html % Also of interest may be the (relatively newer and more customizable) % algorithmicx.sty package by Szasz Janos: % http://www.ctan.org/tex-archive/macros/latex/contrib/algorithmicx/ % *** ALIGNMENT PACKAGES *** % %\usepackage{array} % Frank Mittelbach's and David Carlisle's array.sty patches and improves % the standard LaTeX2e array and tabular environments to provide better % appearance and additional user controls. As the default LaTeX2e table % generation code is lacking to the point of almost being broken with % respect to the quality of the end results, all users are strongly % advised to use an enhanced (at the very least that provided by array.sty) % set of table tools. array.sty is already installed on most systems. The % latest version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/required/tools/ %\usepackage{mdwmath} %\usepackage{mdwtab} % Also highly recommended is Mark Wooding's extremely powerful MDW tools, % especially mdwmath.sty and mdwtab.sty which are used to format equations % and tables, respectively. The MDWtools set is already installed on most % LaTeX systems. 
The lastest version and documentation is available at: % http://www.ctan.org/tex-archive/macros/latex/contrib/mdwtools/ % IEEEtran contains the IEEEeqnarray family of commands that can be used to % generate multiline equations as well as matrices, tables, etc., of high % quality. %\usepackage{eqparbox} % Also of notable interest is Scott Pakin's eqparbox package for creating % (automatically sized) equal width boxes - aka "natural width parboxes". % Available at: % http://www.ctan.org/tex-archive/macros/latex/contrib/eqparbox/ % *** SUBFIGURE PACKAGES *** \ifCLASSOPTIONcompsoc \usepackage[tight,normalsize,sf,SF]{subfigure} \else \usepackage[tight,footnotesize]{subfigure} \fi % subfigure.sty was written by Steven Douglas Cochran. This package makes it % easy to put subfigures in your figures. e.g., "Figure 1a and 1b". For IEEE % work, it is a good idea to load it with the tight package option to reduce % the amount of white space around the subfigures. Computer Society papers % use a larger font and \sffamily font for their captions, hence the % additional options needed under compsoc mode. subfigure.sty is already % installed on most LaTeX systems. The latest version and documentation can % be obtained at: % http://www.ctan.org/tex-archive/obsolete/macros/latex/contrib/subfigure/ % subfigure.sty has been superceeded by subfig.sty. \ifCLASSOPTIONcompsoc \usepackage[caption=false]{caption} \usepackage[font=normalsize,labelfont=sf,textfont=sf]{subfig} \else \usepackage[caption=false]{caption} \usepackage[font=footnotesize]{subfig} \fi % subfig.sty, also written by Steven Douglas Cochran, is the modern % replacement for subfigure.sty. However, subfig.sty requires and % automatically loads Axel Sommerfeldt's caption.sty which will override % IEEEtran.cls handling of captions and this will result in nonIEEE style % figure/table captions. To prevent this problem, be sure and preload % caption.sty with its "caption=false" package option. This is will preserve % IEEEtran.cls handing of captions. Version 1.3 (2005/06/28) and later % (recommended due to many improvements over 1.2) of subfig.sty supports % the caption=false option directly: \ifCLASSOPTIONcompsoc \usepackage[caption=false,font=normalsize,labelfont=sf,textfont=sf]{subfig} \else \usepackage[caption=false,font=footnotesize]{subfig} \fi % The latest version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/subfig/ % The latest version and documentation of caption.sty can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/caption/ % *** FLOAT PACKAGES *** % %\usepackage{fixltx2e} % fixltx2e, the successor to the earlier fix2col.sty, was written by % Frank Mittelbach and David Carlisle. This package corrects a few problems % in the LaTeX2e kernel, the most notable of which is that in current % LaTeX2e releases, the ordering of single and double column floats is not % guaranteed to be preserved. Thus, an unpatched LaTeX2e can allow a % single column figure to be placed prior to an earlier double column % figure. The latest version and documentation can be found at: % http://www.ctan.org/tex-archive/macros/latex/base/ %\usepackage{stfloats} % stfloats.sty was written by Sigitas Tolusis. This package gives LaTeX2e % the ability to do double column floats at the bottom of the page as well % as the top. (e.g., "\begin{figure*}[!b]" is not normally possible in % LaTeX2e). 
It also provides a command: %\fnbelowfloat % to enable the placement of footnotes below bottom floats (the standard % LaTeX2e kernel puts them above bottom floats). This is an invasive package % which rewrites many portions of the LaTeX2e float routines. It may not work % with other packages that modify the LaTeX2e float routines. The latest % version and documentation can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/sttools/ % Documentation is contained in the stfloats.sty comments as well as in the % presfull.pdf file. Do not use the stfloats baselinefloat ability as IEEE % does not allow \baselineskip to stretch. Authors submitting work to the % IEEE should note that IEEE rarely uses double column equations and % that authors should try to avoid such use. Do not be tempted to use the % cuted.sty or midfloat.sty packages (also by Sigitas Tolusis) as IEEE does % not format its papers in such ways. %\ifCLASSOPTIONcaptionsoff % \usepackage[nomarkers]{endfloat} % \let\MYoriglatexcaption\caption % \renewcommand{\caption}[2][\relax]{\MYoriglatexcaption[#2]{#2}} %\fi % endfloat.sty was written by James Darrell McCauley and Jeff Goldberg. % This package may be useful when used in conjunction with IEEEtran.cls' % captionsoff option. Some IEEE journals/societies require that submissions % have lists of figures/tables at the end of the paper and that % figures/tables without any captions are placed on a page by themselves at % the end of the document. If needed, the draftcls IEEEtran class option or % \CLASSINPUTbaselinestretch interface can be used to increase the line % spacing as well. Be sure and use the nomarkers option of endfloat to % prevent endfloat from "marking" where the figures would have been placed % in the text. The two hack lines of code above are a slight modification of % that suggested by in the endfloat docs (section 8.3.1) to ensure that % the full captions always appear in the list of figures/tables - even if % the user used the short optional argument of \caption[]{}. % IEEE papers do not typically make use of \caption[]'s optional argument, % so this should not be an issue. A similar trick can be used to disable % captions of packages such as subfig.sty that lack options to turn off % the subcaptions: % For subfig.sty: % \let\MYorigsubfloat\subfloat % \renewcommand{\subfloat}[2][\relax]{\MYorigsubfloat[]{#2}} % For subfigure.sty: % \let\MYorigsubfigure\subfigure % \renewcommand{\subfigure}[2][\relax]{\MYorigsubfigure[]{#2}} % However, the above trick will not work if both optional arguments of % the \subfloat/subfig command are used. Furthermore, there needs to be a % description of each subfigure *somewhere* and endfloat does not add % subfigure captions to its list of figures. Thus, the best approach is to % avoid the use of subfigure captions (many IEEE journals avoid them anyway) % and instead reference/explain all the subfigures within the main caption. % The latest version of endfloat.sty and its documentation can obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/endfloat/ % % The IEEEtran \ifCLASSOPTIONcaptionsoff conditional can also be used % later in the document, say, to conditionally put the References on a % page by themselves. % *** PDF, URL AND HYPERLINK PACKAGES *** % \usepackage{url} % url.sty was written by Donald Arseneau. It provides better support for % handling and breaking URLs. url.sty is already installed on most LaTeX % systems. 
The latest version can be obtained at: % http://www.ctan.org/tex-archive/macros/latex/contrib/misc/ % Read the url.sty source comments for usage information. Basically, % \url{my_url_here}. % *** Do not adjust lengths that control margins, column widths, etc. *** % *** Do not use packages that alter fonts (such as pslatex). *** % There should be no need to do such things with IEEEtran.cls V1.6 and later. % (Unless specifically asked to do so by the journal or conference you plan % to submit to, of course. ) % correct bad hyphenation here \hyphenation{op-tical net-works semi-conduc-tor} \begin{document} % % paper title % can use linebreaks \\ within to get better formatting as desired \title{Development of a payment channel over the Bitcoin network} % % % author names and IEEE memberships % note positions of commas and nonbreaking spaces ( ~ ) LaTeX will not break % a structure at a ~ so this keeps an author's name from being broken across % two lines. % use \thanks{} to gain access to the first footnote area % a separate \thanks must be used for each paragraph as LaTeX2e's \thanks % was not built to handle multiple paragraphs % % %\IEEEcompsocitemizethanks is a special \thanks that produces the bulleted % lists the Computer Society journals use for "first footnote" author % affiliations. Use \IEEEcompsocthanksitem which works much like \item % for each affiliation group. When not in compsoc mode, % \IEEEcompsocitemizethanks becomes like \thanks and % \IEEEcompsocthanksitem becomes a line break with idention. This % facilitates dual compilation, although admittedly the differences in the % desired content of \author between the different types of papers makes a % one-size-fits-all approach a daunting prospect. For instance, compsoc % journal papers have the author affiliations above the "Manuscript % received ..." text while in non-compsoc journals this is reversed. Sigh. \author{David Lozano Jarque,~\IEEEmembership{Undergraduate student,~\url{UAB.cat}}% <-this % stops a space \IEEEcompsocitemizethanks{ \IEEEcompsocthanksitem \protect\\ % note need leading \protect in front of \\ to get a newline within \thanks as % \\ is fragile and will error, could use \hfil\break instead. E-mail: [email protected] \IEEEcompsocthanksitem Specialized in Information Technologies \IEEEcompsocthanksitem Tutored by Joan Herrera Joancomartí (\url{dEIC.UAB.cat}) \IEEEcompsocthanksitem Course 2016-2017}% <-this % stops a space \thanks{Manuscript written on June 2017, Engineering School (\url{UAB.cat})}} % note the % following the last \IEEEmembership and also \thanks - % these prevent an unwanted space from occurring between the last author name % and the end of the author line. i.e., if you had this: % % \author{....lastname \thanks{...} \thanks{...} } % ^------------^------------^----Do not want these spaces! % % a space would be appended to the last name and could cause every name on that % line to be shifted left slightly. This is one of those "LaTeX things". For % instance, "\textbf{A} \textbf{B}" will typeset as "A B" not "AB". To get % "AB" then you have to do: "\textbf{A}\textbf{B}" % \thanks is no different in this regard, so shield the last } of each \thanks % that ends a line with a % and do not let a space in before the next \thanks. % Spaces after \IEEEmembership other than the last one are OK (and needed) as % you are supposed to have spaces between the names. For what it is worth, % this is a minor point as most people would not even notice if the said evil % space somehow managed to creep in. 
% The paper headers \markboth{Final computer science degree project, Engineering School, Autonomous University of Barcelona (UAB.cat)}% {Shell \MakeLowercase{\textit{et al.}}: Development of a payment channel over the Bitcoin network} % The only time the second header will appear is for the odd numbered pages % after the title page when using the twoside option. % % *** Note that you probably will NOT want to include the author's *** % *** name in the headers of peer review papers. *** % You can use \ifCLASSOPTIONpeerreview for conditional compilation here if % you desire. % The publisher's ID mark at the bottom of the page is less important with % Computer Society journal papers as those publications place the marks % outside of the main text columns and, therefore, unlike regular IEEE % journals, the available text space is not reduced by their presence. % If you want to put a publisher's ID mark on the page you can do it like % this: %\IEEEpubid{0000--0000/00\$00.00~\copyright~2007 IEEE} % or like this to get the Computer Society new two part style. %\IEEEpubid{\makebox[\columnwidth]{\hfill 0000--0000/00/\$00.00~\copyright~2007 IEEE}% %\hspace{\columnsep}\makebox[\columnwidth]{Published by the IEEE Computer Society\hfill}} % Remember, if you use this you must call \IEEEpubidadjcol in the second % column for its text to clear the IEEEpubid mark (Computer Society jorunal % papers don't need this extra clearance.) % use for special paper notices %\IEEEspecialpapernotice{(Invited Paper)} % for Computer Society papers, we must declare the abstract and index terms % PRIOR to the title within the \IEEEcompsoctitleabstractindextext IEEEtran % command as these need to go into the title area created by \maketitle. \IEEEcompsoctitleabstractindextext{% \begin{abstract} %\boldmath Bitcoin is a decentralized digital cryptocurrency that allows payments between users without the need of a central authority. Despite the potential of the technology, in the past years, the scaling debate has been the main focus of development as because of the internal details of implementation of the technology, the network can not process and store the highly increasing demand of transactions in the public ledger, also called the \textit{blockchain}. A solution for this is reducing the need of transactions with off-chain payment channels, that can be able to process thousands of micropayment transactions between two nodes so that most transactions do not appear in the blockchain but if they did would be valid, using the Bitcoin scripting language and some game theory techniques. With payment channels, only the setup and closure transactions would appear in the blockchain and all the payment transactions would be temporary and stored just by the nodes of the channel, relieving the Bitcoin blockchain transaction rate. This project consists in designing and implementing a bidirectional payment channel by using the combination of two unidirectional payment channels. \end{abstract} % IEEEtran.cls defaults to using nonbold math in the Abstract. % This preserves the distinction between vectors and scalars. However, % if the journal you are submitting to favors bold math in the abstract, % then you can use LaTeX's standard command \boldmath at the very start % of the abstract to achieve this. Many IEEE journals frown on math % in the abstract anyway. In particular, the Computer Society does % not want either math or citations to appear in the abstract. % Note that keywords are not normally used for peerreview papers. 
\begin{IEEEkeywords} Cryptocurrency, Bitcoin, scaling, Payment channel, Bidirectional payment channel \end{IEEEkeywords}} % make the title area \maketitle % To allow for easy dual compilation without having to reenter the % abstract/keywords data, the \IEEEcompsoctitleabstractindextext text will % not be used in maketitle, but will appear (i.e., to be "transported") % here as \IEEEdisplaynotcompsoctitleabstractindextext when compsoc mode % is not selected <OR> if conference mode is selected - because compsoc % conference papers position the abstract like regular (non-compsoc) % papers do! \IEEEdisplaynotcompsoctitleabstractindextext % \IEEEdisplaynotcompsoctitleabstractindextext has no effect when using % compsoc under a non-conference mode. % For peer review papers, you can put extra information on the cover % page as needed: % \ifCLASSOPTIONpeerreview % \begin{center} \bfseries EDICS Category: 3-BBND \end{center} % \fi % % For peerreview papers, this IEEEtran command inserts a page break and % creates the second title. It will be ignored for other modes. \IEEEpeerreviewmaketitle \section{Introduction} % Computer Society journal papers do something a tad strange with the very % first section heading (almost always called "Introduction"). They place it % ABOVE the main text! IEEEtran.cls currently does not do this for you. % However, You can achieve this effect by making LaTeX jump through some % hoops via something like: % %\ifCLASSOPTIONcompsoc % \noindent\raisebox{2\baselineskip}[0pt][0pt]% % {\parbox{\columnwidth}{\section{Introduction}\label{sec:introduction}% % \global\everypar=\everypar}}% % \vspace{-1\baselineskip}\vspace{-\parskip}\par %\else % \section{Introduction}\label{sec:introduction}\par %\fi % % Admittedly, this is a hack and may well be fragile, but seems to do the % trick for me. Note the need to keep any \label that may be used right % after \section in the above as the hack puts \section within a raised box. % The very first letter is a 2 line initial drop letter followed % by the rest of the first word in caps (small caps for compsoc). % % form to use if the first word consists of a single letter: % \IEEEPARstart{A}{demo} file is .... % % form to use if you need the single drop letter followed by % normal text (unknown if ever used by IEEE): % \IEEEPARstart{A}{}demo file is .... % % Some journals put the first two words in caps: % \IEEEPARstart{T}{his demo} file is .... % % Here we have the typical use of a "T" for an initial drop letter % and "HIS" in caps to complete the first word. \IEEEPARstart{B}{itcoin} is a cryptocurrency that first appeared in a cryptography mailing list\cite{bitcoin-mailing-list-post:online} with a post by an anonymous user who called himself ``Satoshi Nakamoto`` and defined in a whitepaper\cite{bitcoin-whitepaper:online} the first decentralized cryptocurrency. It allowed direct peer to peer digital currency transactions without the need of a central authority in which users trust for validating those transactions. Instead, each peer can validate those transactions using cryptography (technically validating digital signatures) and after that generate a block of transactions including them. An action (validate transactions and generating blocks) whose reward is retrieving newly generated currency, aiming peers to secure the network. 
To decide which node can generate (also called \textit{mine}) the next block of transactions, and to reach consensus, nodes are challenged to solve a cryptographic riddle called \textit{proof of work}\cite{bitcoin-wiki-proof-of-work:online}. Nodes that try to solve this challenge and generate (also called \textit{solve}) new blocks in order to receive a reward for their work are called \textit{miners}.
\\\\
All the transactions ever made, grouped in a structure called a \textit{block}, are stored forming a chain. This chain is stored in a distributed, append-only database that each (full) network node keeps, called the \textit{blockchain}. The name comes from the fact that each block is chained to the previous one, creating a non-modifiable chain of blocks, by using hash functions that link each block to its predecessor through its hash.

\subsection{The \textit{blockchain} limits}
At a high level, this is how Bitcoin works. The problem comes with the public ledger or \textit{blockchain}, which stores absolutely all transactions ever performed. With an average block size of nearly 1MB\cite{blockchain-block-size:online} (as this is the hardcoded size limit for a block), containing approximately 2,000 transactions\cite{blockchain-tx-per-block:online}, and with a block appearing every 10 minutes, this distributed database grows by approximately 50GB every year\cite{blockchain-size:online}. The block size limit is fixed at 1MB, and the difficulty of solving new blocks using the proof-of-work algorithm\cite{bitcoin-wiki-proof-of-work:online} is dynamically adjusted so that new blocks appear approximately every 10 minutes. These rules are fixed in the protocol, and therefore in the code the software nodes run, so they cannot be changed without everyone agreeing; otherwise the change could lead to a chain split\cite{chain-split:online}.\\\\

\subsection{The scaling problem}
As Bitcoin gains popularity among more users, more transactions are created that need to be handled and stored in the blockchain. Due to the limits described above, not all transactions can be handled immediately, and the number of transactions delayed until the network can process them is increasing every day. There are several active proposals\cite{segwit-org:online,bitcoin-unlimited:online} to change those limits and allow more transactions to be handled but, until a solution is agreed upon and activated by the whole Bitcoin ecosystem (users, developers and miners), another long-term solution is being proposed: reducing the number of transactions needed to perform payments.

\subsection{Payment channels}
This is where payment channels\cite{bitcoin-wiki-payment-channels:online} come in, allowing two or more users that need a constant flow of transactions to pay each other instantly, without waiting for the confirmation of each transaction in the blockchain. They operate by privately exchanging transactions that do not appear in the blockchain, also called \textit{off-chain transactions}. Only the opening and closure transactions of the channel need to appear in the blockchain, therefore reducing the number of transactions sent to it and relieving the blockchain of load. The opening transaction locks some funds into a smart contract, and the closure transaction returns those funds according to the payments performed in the channel.
\\\\
In the event of any dispute over the closure transaction, the trick is that the privately exchanged off-chain transactions could be sent to the blockchain and they would be valid.
Therefore, once broadcast, they would produce the same distribution of funds as the closure transaction. Those payment transactions, however, are kept private by each node unless the channel needs to be closed and there is no mutual agreement between the peers. Each payment transaction replaces the previous one through incentives, so only the latest payment needs to be kept, allowing a high rate of transactions between the nodes of the payment channel. The payment channel has to be secure by design and implemented with a secure protocol, so that no party of the payment channel can steal or lock the other party's funds (i.e. act maliciously).

\section{Bitcoin and Smart Contracts}
As said before, Bitcoin maintains a decentralized, consensual database of transactions that transfer units of currency (bitcoins\cite{bitcoin-capitalization:online}) between users. To understand how currency units are moved, we need to understand what a transaction is at a low, technical level of detail.

\subsection{Bitcoin transactions}
A Bitcoin transaction is just an array of bytes that specifies some inputs and some outputs, prefixed by a version field and suffixed with a field named \textit{nLocktime} that we will talk about later.

\begin{figure}[h]
\includegraphics[width=\linewidth]{img/tx_format.png}
\centering{
\caption{Transaction binary format}}
\end{figure}

What every transaction does is spend a previously generated output by specifying in an input a pointer to that previous output, also called a \textit{UTXO} (unspent transaction output). A \textit{UTXO} refers to a transaction id and an output index of that transaction that has not yet been spent by any other transaction\footnote{If all transactions have to spend a previous output, when are the first outputs generated? There is a special transaction with no inputs that generates currency units. It is valid only once per block and is used by the miner to collect the reward (whose value must match the reward exactly) received when solving a new block}. Some data (most of the time ECDSA signatures) follows the \textit{UTXO} reference in order to authorize spending the funds; this data is called the \textit{scriptSig}. The spent inputs are moved by specifying a new set of outputs. Each output must specify the value of currency units moved to it and the conditions under which it can be spent. Those conditions are placed in a field called \textit{scriptPubKey}, as initially funds were always paid to a public key, so only the owner of its paired private key could spend them.
\\\\
The spending data (the \textit{scriptSig}) and the spending conditions (the \textit{scriptPubKey}) are specified using a scripting language designed exclusively for the Bitcoin protocol\cite{bitcoin-wiki-script:online}.
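As a rough illustration of this structure, a minimal sketch is shown below. The field names are illustrative and this is not the actual binary serialization of a Bitcoin transaction.

\begin{verbatim}
# Rough sketch of the transaction structure described above (illustrative
# field names; not the actual Bitcoin binary serialization).
from dataclasses import dataclass
from typing import List

@dataclass
class TxInput:
    prev_txid: str        # id of the transaction holding the spent UTXO
    prev_vout: int        # index of that output in the previous transaction
    script_sig: bytes     # data (e.g. signatures) authorizing the spend

@dataclass
class TxOutput:
    value: int            # amount (in satoshis) moved to this output
    script_pubkey: bytes  # conditions under which the output can be spent

@dataclass
class Transaction:
    version: int
    inputs: List[TxInput]
    outputs: List[TxOutput]
    n_locktime: int = 0   # earliest time/height at which the tx is valid
\end{verbatim}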
\subsection{Bitcoin scripting language}
One of the strengths of Bitcoin is its stack-based scripting language, as it allows specifying how funds can be transferred by writing scripts in the Bitcoin scripting language\cite{bitcoin-wiki-script:online}. For a transaction to be valid, each input must refer to a valid, unspent \textit{UTXO}, and the execution of the input's \textit{scriptSig} followed by the referenced output script (the \textit{scriptPubKey}) must end successfully with a non-empty stack. Also, the sum of the outputs' values must not be greater than the sum of the inputs' values\footnote{The difference between the sum of the inputs' values and the sum of the outputs' values, if greater than 0, is called the transaction \textbf{fee}, and is awarded along with the block reward to the node that includes the transaction in a block}. The scripting language basically reads 1-byte opcodes that can push data onto the stack, perform arithmetical and logical operations, and perform cryptographic operations such as ECDSA signature checks and hash functions, among others.
\\\\
The most widely used script set to move funds is called P2PKH (pay-to-public-key-hash). This kind of script set uses the following scripts to move funds:
\begin{itemize}
	\item \textbf{\textit{scriptPubKey}} Specifies the hash of an ECDSA public key and requires a signature made with the public key whose hash matches the specified one
	\item \textbf{\textit{scriptSig}} Must contain a valid signature followed by the public key used to create that signature (whose hash must match the one specified in the \textit{scriptPubKey})
\end{itemize}
A Bitcoin address is then the hash of a public key as needed in the mentioned \textit{scriptPubKey}\footnote{Technically, the address is prefixed by a version byte and suffixed with a 4-byte SHA-256 checksum of that hash, all encoded in base58 for visualization purposes. The version byte helps identify the address script set (P2PKH or P2SH) and the network}. These addresses are commonly used to pay units of currency between users, who reveal their addresses in order to be paid.\\\\
But, as said before, the Bitcoin scripting language allows us to write any script to specify the spending conditions (the \textit{scriptPubKey}) and any script to provide the data needed to spend according to those conditions (the \textit{scriptSig}). This is where the script set called P2SH (pay-to-script-hash) comes in. This payment method allows us to create a smart contract by defining a script that specifies the conditions to spend the output (called the \textit{redeemScript}) and creating an output paying to the hash of this script:
\begin{itemize}
	\item \textbf{\textit{scriptPubKey}} Specifies the hash of the smart contract (defined in a \textit{redeemScript}) that must be executed to spend the funds
	\item \textbf{\textit{scriptSig}} Contains the data needed by the \textit{redeemScript} in order for it to execute successfully, along with the \textit{redeemScript} itself.
\end{itemize}
As we can see, to spend a P2SH UTXO we must reveal the \textit{redeemScript} and often also supply the data the script needs to be spent, such as signatures (multisig P2SH), a hash preimage, or whatever the \textit{redeemScript} we design needs to execute successfully\footnote{Although we could technically specify any output and input script, so that the transaction is valid whenever the input script followed by the output script executes successfully, if we use neither P2PKH nor P2SH our transaction would be non-standard and probably not accepted by the network nodes because of the Bitcoin protocol implementation\cite{bitcoin-se-non-standard:online}}.
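To illustrate how this scriptSig-then-scriptPubKey evaluation works, the following is a toy stack machine running a P2PKH-style script. The signature and hash checks are stubbed out; real nodes verify ECDSA signatures and hash public keys with RIPEMD160(SHA256(...)) under the full consensus rules.

\begin{verbatim}
# Toy evaluation of scriptSig followed by scriptPubKey for a P2PKH output.
# check_sig and hash160 are stubs: real validation uses ECDSA and
# RIPEMD160(SHA256(pubkey)) under Bitcoin's consensus rules.
def run_script(script, stack,
               check_sig=lambda sig, pub: True,
               hash160=lambda pub: b"pubkey-hash"):
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_HASH160":
            stack.append(hash160(stack.pop()))
        elif op == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():
                return False
        elif op == "OP_CHECKSIG":
            pub, sig = stack.pop(), stack.pop()
            stack.append(check_sig(sig, pub))
        else:                              # everything else is a data push
            stack.append(op)
    return bool(stack) and bool(stack[-1]) # non-empty stack, truthy top

script_sig = [b"<signature>", b"<pubkey>"]
script_pubkey = ["OP_DUP", "OP_HASH160", b"pubkey-hash",
                 "OP_EQUALVERIFY", "OP_CHECKSIG"]
stack = []
run_script(script_sig, stack)              # push signature and public key
print(run_script(script_pubkey, stack))    # True: the output can be spent
\end{verbatim}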
\section{Unidirectional Payment Channels}
Having understood how Bitcoin transactions work and how we can develop smart contracts using the Bitcoin scripting language, we can introduce unidirectional payment channels. This kind of payment channel basically enables transactions between two users, one of them (the payer) incrementally paying some amounts to the other one (the payee). We will call Alice the payer and Bob the payee.
\\\\Also called simple micropayment channels, they were first defined by Mike Hearn and Jeremy Spilman\cite{bitcoin-wiki-contract:online}. With the activation of the CLTV opcode in the Bitcoin scripting language through BIP-65\cite{bip-65:online}, however, those channels were improved to avoid transaction malleability\cite{bitcoin-wiki-tx-malleability:online}, simplifying the channel structure.
\subsection{The scheme}
Every channel has three phases:
\begin{enumerate}
	\item \textbf{Funding:} where Alice, also called the funder, puts some units of currency she owns into a smart contract (we use a P2SH output paying to a \textit{redeemScript} hash). The transaction performing this operation is called the \textit{funding transaction}. This smart contract must lock the funds for a certain period of time in order to prevent Alice from spending the channel funds before the channel gets closed. The time at which the funds become unlocked and available to Alice again is called the expiry time of the channel. This way we ensure Alice cannot move these funds until the channel's expiry time, so Bob can retrieve his payments before that time with any of the payment transactions signed by both of them.
	\item \textbf{Payments:} where Alice creates and signs transactions spending the funding transaction UTXO that incrementally pay more to Bob (via a P2PKH \textit{scriptPubKey}). Bob just keeps the transaction that pays the most to him, as only one of all the payment transactions can be valid, because all of them spend the same UTXO (and only one transaction can spend a UTXO). This is why the channel is unidirectional: Bob will keep the transaction that pays the most to him because of the economic incentive. None of these transactions is released to the blockchain until the channel closure, when Bob adds his signature, if he agrees with the transaction output (it moves the funds to his P2PKH address, for instance), and releases the transaction to the network. A multisignature scheme (also called \textit{multisig})\cite{bitcoin-wiki-multisig:online} is necessary to ensure Bob cannot perform any payment by himself and Alice cannot return the funds to herself.
	\item \textbf{Closure:} this can happen for two reasons:
	\begin{itemize}
		\item \textbf{Graceful close} Bob broadcasts the latest received payment transaction (signed by both Alice and Bob) to spend the funding transaction UTXO, closing the channel, as the funding transaction UTXO cannot be spent again. This must be performed by Bob before the channel's expiry time in order to ensure that the funds can be retrieved, since after that time Alice can move the funds back to herself.
		\item \textbf{Expiry date} If Bob does not cooperate, when the expiry time arrives Alice can safely recover her funds just by performing a P2PKH transaction spending the funding transaction UTXO.
	\end{itemize}
\end{enumerate}
To sum up, the scheme is to create a funding transaction locking a certain amount of Alice's funds in a smart contract that allows spending them either:
\begin{itemize}
	\item a) Using a \textit{multisig} scheme: Alice creates a transaction paying Bob, signs it, and sends it to him.
When Bob receives the partially signed transaction, if he agrees with the specified output (probably his P2PKH address), he simply signs it and waits to broadcast it before the expiry time, making the payment effective
\item b) By Alice alone, after a certain time (otherwise, if Bob did not collaborate and never performed the \textit{multisig}, the funds could be locked forever)
\end{itemize}
This can be achieved either by creating a smart funding transaction that includes the expiry time condition, or by creating a \textit{multisig} funding transaction together with a refund transaction, signed by both parties, that can be spent after a certain time using the mentioned CLTV opcode defined in BIP-65\cite{bip-65:online}. For this project we opted for a single smart funding transaction, in order to simplify the process and avoid transaction malleability. Ultimately, the most important part is the funding smart contract: it must allow a refund after a certain time, so that Alice can recover the funds if Bob does not collaborate, and it must also allow paying incremental amounts to Bob.
\subsection{The smart contract}
In order to create a transaction that spends some of Alice's funds into a smart contract output that can be spent either with a \textit{multisig} or with a single signature after a certain time, our proposal\footnote{Along with Carlos Gonzalez Cebrecos} was to create a transaction funding this \textit{redeemScript}:
\begin{center}
\code{OP\_IF <time> OP\_CHECKLOCKTIMEVERIFY OP\_DROP <PubKeyAlice\_1> OP\_CHECKSIG OP\_ELSE OP\_2 <PubKeyAlice\_2> <PubKeyBob> OP\_2 OP\_CHECKMULTISIG OP\_ENDIF}
\end{center}
Note that Alice owns the private keys of both \code{<PubKeyAlice\_(1/2)>} and Bob holds the private key of \code{<PubKeyBob>}.
\subsection{Channel operations}
With this smart contract script, we could then create and test all the transactions needed by the channel (a possible shape of the corresponding input scripts is sketched after this list):
\begin{itemize}
\item \textbf{Funding}: A transaction spending an input that refers to one of Alice's P2PKH UTXOs, with a P2SH output paying to the hash of the redeem script above
\item \textbf{Payment}: A transaction signed by both parties (first signed by Alice and then sent to Bob, missing only his signature to become valid) that spends the redeem script through the \code{OP\_CHECKMULTISIG} branch by providing an \code{OP\_FALSE}, and whose outputs are two P2PKH outputs: one paying some amount to Bob and one returning the remainder to Alice. Each payment transaction must pay Bob more than the previous one, as Bob will always keep the one that pays him the most. If we wanted Bob to be able to receive less than in a previous transaction, we would need a bidirectional payment channel.
\item \textbf{Graceful closure}: A payment transaction acts as a closure once it is signed by Bob and broadcast to the network. Bob has to send it before the expiry time; otherwise Alice could use the \textit{refund transaction}, which would invalidate all payment transactions, since the funds they spend would already have been spent by the \textit{refund transaction}.
\item \textbf{Closure by expiry time}: Also called the \textit{refund transaction}. A transaction signed by Alice, with its \code{nLockTime}\footnote{The \code{nLockTime} transaction field must be used for the script's \code{OP\_CHECKLOCKTIMEVERIFY} to work as specified in BIP-65\cite{bip-65:online}} field set after the \code{<time>} specified in the script, in other words, after the channel expiry time. In this way Alice spends the funding transaction with just her signature, selecting the first branch of the redeem script with an \code{OP\_TRUE}; its output is a P2PKH output paying to a public key whose private key she owns.
\end{itemize}
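For illustration, the input scripts (\textit{scriptSig}) that spend this P2SH output could take the following shape, consistent with the description above (this is a sketch of the push order rather than normative byte-level detail):
\begin{center}
\code{Payment (multisig branch): OP\_0 <SigAlice\_2> <SigBob> OP\_FALSE <redeemScript>}\\
\code{Refund (after <time>): <SigAlice\_1> OP\_TRUE <redeemScript>}
\end{center}
The leading \code{OP\_0} in the payment case is the extra dummy element consumed by \code{OP\_CHECKMULTISIG}, and the last element in both cases is the serialized \textit{redeemScript} that P2SH requires to be revealed.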
\subsection{The protocol}
All these transactions must be created following a secure protocol that lets both users create and operate the channel safely without having to trust each other. The protocol to establish a unidirectional payment channel between Alice (the payer) and Bob (the payee) is the following:
\begin{figure}[h]
\begin{center}
\includegraphics[height=8.5cm]{unidir-pc}
\caption{Unidirectional payment channel protocol}
\end{center}
\end{figure}
Alice, as the payer, requests the opening of a channel and specifies its funds (the maximum amount Alice can pay Bob) and its expiry date. If Bob agrees to the channel creation, he sends his public key so that Alice can create the funding transaction. Once the funding transaction is created, Alice sends it to Bob along with the redeem script, so he can keep track of it and verify that the contract is correct. Bob then sends an acknowledgement to Alice if he wants to proceed with the channel opening (signed with Bob's key, so Alice can verify the acknowledgement is genuine). Alice can finally broadcast the funding transaction. Once the transaction is confirmed, Alice can create payment transactions, sign them and send them to Bob privately. When the expiry date approaches, Bob closes the channel by broadcasting the latest payment transaction he received. If Bob does not collaborate, Alice can create and broadcast her refund transaction after the channel expiry time.
\section{Bidirectional payment channels}
The problem with the channels above is that only Alice can pay incremental amounts of currency to Bob. What if we want the channel to be duplex, so that both parties can send currency in both directions? In this work we followed the solution proposed by Christian Decker and Roger Wattenhofer\cite{decker2015fast}, which is basically to build a duplex payment channel out of two unidirectional payment channels linked together, one in each direction. Another popular proposal is the Lightning Network, which uses a more complex structure to build duplex payment channels based on hash-based smart contracts\cite{poon2015bitcoin}.
\subsection{The scheme}
As said previously, the idea is to use two unidirectional payment channels, one in each direction, so that payments can flow both ways. To do that, the funding transaction must have two (or more) inputs and two outputs: one (or more) input and one output per user. The value spent by Alice's input, minus fees, becomes the value of the first output, which pays to the same redeem script as in the unidirectional channel; this is the channel Alice uses to pay Bob. The second input (the one after Alice's last input) and the second output are constructed with the same scheme for Bob to pay Alice. The rest of the payment channel works the same way as a unidirectional channel, with each payment transaction spending one output or the other depending on whether Alice is paying Bob or vice versa.
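Schematically, the duplex funding transaction described above can be pictured as follows. The snippet is a purely illustrative Python sketch with hypothetical labels; it is not the data model of our framework:
\begin{verbatim}
# Conceptual structure of the duplex funding transaction (illustrative only)
duplex_funding_tx = {
    "inputs": [
        {"owner": "Alice", "spends": "Alice's P2PKH UTXO(s)"},  # signed by Alice
        {"owner": "Bob",   "spends": "Bob's P2PKH UTXO(s)"},    # signed by Bob
    ],
    "outputs": [
        # Channel Alice -> Bob: same redeem script as the unidirectional case
        {"value": "Alice's input value - fees",
         "scriptPubKey": "P2SH(redeemScript_AB)"},
        # Channel Bob -> Alice: the symmetric redeem script
        {"value": "Bob's input value - fees",
         "scriptPubKey": "P2SH(redeemScript_BA)"},
    ],
}
\end{verbatim}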
\subsection{The protocol}
In order to create the duplex payment channel securely, the following protocol must be followed:
\begin{figure}[h]
\begin{center}
\includegraphics[height=10.5cm]{bidir-pc}
\caption{Bidirectional payment channel protocol}
\end{center}
\end{figure}
We can see the protocol is similar to the unidirectional one. In this example Alice initiates the request to create the payment channel, but Bob could also send the request, inverting the communication roles until the payments phase. The basic protocol consists of Alice sending Bob the channel creation request, as in the unidirectional channel, with the funds Alice wishes to be able to pay Bob, but also including her public key so that Bob can later verify the funding transaction. If Bob agrees to the channel creation, he replies with his public key, used to create the output for Alice paying Bob, and with the funds Bob wants to use to fund his channel to pay Alice. Once Alice has all the data, she can create the funding transaction with the two outputs and her input(s), and sign her input(s)\footnote{Indicating \code{SIGHASH\_ALL}, meaning that she signs the transaction containing both outputs}. Alice sends the partially signed transaction (along with the redeem scripts) to Bob. Bob checks that the transaction is correct and adds his signed input, returning the fully signed transaction to Alice as the final acknowledgement of the channel creation. Once Alice receives the transaction, she checks that it is valid and broadcasts it to the network. Now payments can be performed by creating off-chain payment transactions, spending Alice's output to pay Bob and Bob's output to pay Alice, in the same way as in unidirectional channels. To close the channel, both Bob and Alice have to broadcast the latest payment transactions they received before the channel expiry. If a party does not collaborate, each of them can broadcast their respective refund transaction.
\subsection{Channel operations}
The same operations used for the unidirectional payment channel (funding, payment, graceful closure and closure by expiry time) remain valid, although the funding transaction is slightly different, with an added input and output for the channel in the opposite direction.
\subsection{Channel reset}
It can happen that either Alice or Bob spends all the funds they own in the channel paying the other user. In that case the channel needs to be reset, so that the funds received from the other party can be used to continue paying them. A solution for this is also described by C. Decker and R. Wattenhofer\cite{decker2015fast}: the invalidation tree, built from what are called atomic multiparty opt-in transactions.
\subsubsection{\ \ \ \ \ Atomic multiparty opt-in}
These meta-transactions are a model for creating transactions that fund smart contracts (one or more outputs) but, instead of being funded by one or more inputs with a P2PKH \textit{scriptSig} owned by a user, claim a multisig \textit{P2SH} output that has not been signed yet. This makes it possible to first design the smart contract and, once all parties agree, sign a transaction spending one or more P2PKH outputs to fund the multisig output claimed by the opt-in transaction; at that point both transactions have funded the smart contract securely, regardless of the order of the signatures.
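Note that the ordering matters: the opt-in transaction is designed and signed first, and only then is the transaction that actually funds the claimed multisig output signed and broadcast. The following purely illustrative Python sketch (hypothetical labels, not our framework's API) summarizes the pattern just described:
\begin{verbatim}
# Step 1: agree on and sign the opt-in transaction. It spends a 2-of-2
# multisig P2SH output that does not exist on the blockchain yet.
optin_tx = {
    "inputs":  ["2-of-2 multisig P2SH output of setup_tx (not broadcast yet)"],
    "outputs": ["smart contract output(s), e.g. the channel redeem scripts"],
}

# Step 2: once every party holds the signed opt-in transaction, sign and
# broadcast the transaction that funds the multisig output it claims.
setup_tx = {
    "inputs":  ["Alice's P2PKH UTXO(s)", "Bob's P2PKH UTXO(s)"],
    "outputs": ["2-of-2 multisig P2SH output spent by optin_tx"],
}
\end{verbatim}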
\subsubsection{\ \ \ \ \ Locktime incentives}
The previous transaction model is not necessary for a simple duplex payment channel, but it can be used if we want the channel to be resettable. Creating another smart contract with different conditions (for instance, different amounts) that spends the opt-in transaction but has a lower locktime than the previous smart contract transaction (i.e., it becomes valid at a time closer to the present) makes the new transaction the effectively valid one by incentive: the old one has a larger locktime, so the new one can be spent first. For the locktime incentive to invalidate previous transactions, the new locktime must be lower than the channel's expiry time. Consequently, renewing the expiry time is the only channel parameter change that cannot be achieved with this kind of incentive. We can also chain opt-in transactions to form what is called an invalidation tree, where invalidation is performed by specifying lower timelocks on each new branch of transactions to invalidate the previous ones.
\subsubsection{\ \ \ \ \ Invalidation trees}
Chaining multiple atomic multiparty opt-in transactions into a tree, together with increasingly lower locktimes, can be used to invalidate old tree branches: branches whose transactions have a larger locktime are replaced by branches with a lower locktime, because the latter can be released to the network earlier. To safely invalidate old branches, each new locktime must be lower than the previous (higher) one by a time increment large enough for the transaction to be confirmed on the network, in order to avoid attacks (for instance, an increment of 3 or 4 blocks). These trees also slightly change the closure-by-expiry-time operation: refund transactions are now needed, because we cannot encode two output values in a single smart contract output. These refund transactions are created in the protocol along with the channel's funding operations. In the case of a graceful closure, a single transaction spending the funding transaction with the final balance can be sent if both parties agree, or the whole latest valid tree branch if they do not.
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{invalidation-trees}
\caption{An example of invalidation trees using atomic multiparty opt-in transactions and locktime incentives}
\end{center}
\end{figure}
\section{The implementation}
In order to implement the bidirectional payment channel, we first surveyed which Python libraries were available to develop smart contracts and, therefore, transactions containing them. We found that no object-oriented, well-documented library was available to create non-typical transactions (P2SH with a custom \code{redeemScript}). Because of that, we implemented a new library / framework to easily create customized transactions, based on the Bitcoin protocol documentation and its implementation details\cite{bitcoin-org-developer:online, bitcoin-wiki-proto:online}.
\subsection{Our Bitcoin framework}
To implement the framework, we created a series of modules and classes oriented towards the puzzle-friendliness property: every object / class must be able to be serialized into, and deserialized from, an array of bytes compatible with the Bitcoin protocol. To save time, we implemented only the modules and classes strictly necessary for the development of this project.
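As a minimal sketch of this serialize / deserialize design principle, consider the following Python example. The class and method names are hypothetical and do not correspond to the actual API of the framework developed for this project; only the idea (objects mapping to and from protocol-compatible byte arrays) is illustrated:
\begin{verbatim}
# Illustrative sketch only: hypothetical names, not the framework's API.
import struct

class VarInt:
    """Bitcoin variable-length integer (single-byte case only)."""

    def __init__(self, value):
        self.value = value

    def serialize(self):
        # Values below 0xfd fit in one byte; larger encodings are
        # omitted to keep the sketch short.
        assert self.value < 0xfd
        return struct.pack("<B", self.value)

    @classmethod
    def deserialize(cls, data):
        # Return the parsed object plus the unconsumed bytes, so that
        # fields can be chained when parsing a whole transaction.
        value = struct.unpack("<B", data[:1])[0]
        return cls(value), data[1:]

if __name__ == "__main__":
    raw = VarInt(2).serialize()          # -> b'\x02'
    obj, rest = VarInt.deserialize(raw)
    assert obj.value == 2 and rest == b""
\end{verbatim}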
With this framework we also hope to lay the foundations of a well-designed, usable and easy-to-understand Bitcoin Python library that helps new developers create smart contracts on the Bitcoin network.
\subsection{Development progress}
The framework started with the ability to create an empty valid transaction; after that, all the necessary fields were implemented, composing each field out of subfields to provide the mentioned puzzle-friendliness property. The most recently developed part of the framework is its support for the Bitcoin scripting language (still under constant development), implementing the opcodes needed to create the smart contracts.\\\\ Once the framework was able to create valid transactions (which required special attention to the cryptographic functions and their serialization), we tested basic P2PKH transactions created with it and a P2SH \textit{multisig} transaction. After that, the \code{OP\_CHECKLOCKTIMEVERIFY} opcode was implemented and tested, and a unidirectional payment channel was created.\\\\ Once the development and testing needed to create valid and functional unidirectional payment channels was finished, I started developing the bidirectional payment channel as specified in the previous chapter of this document.
\subsection{The duplex channel implementation}
The implemented channel provides a command line interface to operate a channel with the following operations. No communication layer has been implemented, in order to focus on the channel's security rather than on automation and ease of use.
\begin{itemize}
\item \textbf{Funding:} Generates the funding transaction and an invalidation tree of transactions with a given depth, accepting parameters to set the channel funds, the expiry time and the public keys of the invalidation tree's P2SH scripts. For the invalidation to be secure, the P2SH script hashes have to differ so that they cannot collide with another node of the tree; for this purpose, the implementation appends numbers at the end of each \textit{redeemScript} that change its hash but not its functionality. Once the funding transaction and the first invalidation tree branch have been created, the refund transaction is created as well, with its timelock set to the expiry time. All these transactions (except the funding one) are signed by the user who creates the channel, who is assumed to also hold the channel details, since the software does not implement external communications. After that, they can be sent to the other user, who can use the \code{bitcoin-cli} utility from Bitcoin Core\cite{bitcoin-core} to sign all of them (see the example after this list) and return them to the creator, who can finally sign and broadcast the funding transaction.
\item \textbf{Payment:} With the previous transactions stored, and given the payment channel UTXO and the private key, both users can use the payment operation to generate payment transactions until the unidirectional payment channel of each of them is exhausted.
\item \textbf{Reset:} Given the same parameters as the funding operation, but specifying a reset operation and the previously used timelock, it generates another branch of transactions with the new funds provided.
\end{itemize}
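As an illustration of the signing step mentioned in the funding operation, the counterparty can sign a received raw transaction with Bitcoin Core. The exact arguments depend on the Bitcoin Core version and on whether the keys and redeem scripts are known to the local wallet; the call below is only the simplest, wallet-based form:
\begin{center}
\code{bitcoin-cli signrawtransaction <raw transaction hex>}
\end{center}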
\subsubsection{\ \ \ \ \ Usage}
The script syntax is the following:\\
\code{python -m src <operation> [arguments]}\\
The optional argument \code{-h} shows how to specify the operation (currently \code{fund, reset}) and the rest of the arguments.
\subsubsection{\ \ \ \ \ Future work and research lines}
\textbf{Usability}\\
Operating the current channel requires deep knowledge of the Bitcoin technology and protocol, as users have to sign some transactions manually and broadcast them. In order to make these bidirectional payment channels accessible to a wider audience, the software should be automated to perform all operations through a graphical user interface. This interface should also hide the channel's complexities: mainly storing the transaction tree, letting both users communicate to agree on the channel parameters, broadcasting transactions to the network and handling private and public keys.\\\\
\textbf{Multihop payment channels}\\
Using HTLCs (Hashed Time-Locked Contracts\cite{HashedTi42:online}), the implementation could be extended to perform off-chain payments across multiple existing payment channels, in a similar way to how the Lightning Network implements it\cite{poon2015bitcoin}.
\section{Conclusion}
Bitcoin has great potential: it is the first decentralized cryptocurrency and is currently regarded as an important store of economic value. Despite that, the scaling problem makes Bitcoin grow more slowly than desired. One solution is to relieve Bitcoin's blockchain of transactions by using payment channels with \textit{off-chain transactions} between payment service providers\cite{bitcoin-psp:online}. The bidirectional payment channel described and implemented in this project makes it possible to create simple and secure\footnote{Until SegWit\cite{segwit-org:online} is activated, only unidirectional payment channels and bidirectional payment channels without the reset operation are secure, because chains of off-chain transactions can be vulnerable to transaction malleability issues\cite{bitcoin-wiki-tx-malleability:online}} bidirectional payment channels by combining unidirectional payment channels, atomic multiparty opt-in transactions and invalidation trees with locktime incentives. Although the channel's duration decreases with each reset operation, the structure is far simpler than other solutions such as the Lightning Network\cite{poon2015bitcoin} and requires less data exchange. Furthermore, if both parties cooperate during the channel creation, the bidirectional payment channel becomes very simple to operate, with the disadvantage of having to broadcast the whole valid tree branch if the final balance of the channel is not mutually agreed. If the current project is implemented securely once SegWit\cite{segwit-org:online} is activated, Bitcoin will eventually be able to provide a high throughput of instant, low-fee transactions without worrying about its scalability.
% use section* for acknowledgement
\ifCLASSOPTIONcompsoc
  % The Computer Society usually uses the plural form
  \section*{Acknowledgments}
\else
  % regular IEEE prefers the singular form
  \section*{Acknowledgment}
\fi
This work would not have been possible without the collaboration of Carlos Gonzalez Cebrecos, another student carrying out a similar project, who co-developed our Bitcoin framework and helped resolve doubts about Bitcoin and related technologies, and without our tutors Jordi Herrera Joancomarti, Sergi Delgado and Cristina Perez, who introduced us to the Bitcoin world.

% Can use something like this to put references on a page
% by themselves when using endfloat and the captionsoff option.
\ifCLASSOPTIONcaptionsoff
  \newpage
\fi

% references section
\bibliography{bibliography}{}
\bibliographystyle{ieeetr}

% that's all folks
\end{document}
{ "alphanum_fraction": 0.7827197245, "avg_line_length": 71.9087093389, "ext": "tex", "hexsha": "587a4ab47404867eaec8fc32a5aa20ab78d9aaeb", "lang": "TeX", "max_forks_count": 1, "max_forks_repo_forks_event_max_datetime": "2022-03-23T09:37:40.000Z", "max_forks_repo_forks_event_min_datetime": "2022-03-23T09:37:40.000Z", "max_forks_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679", "max_forks_repo_licenses": [ "Apache-2.0" ], "max_forks_repo_name": "davidlj95/smart-payment-channel", "max_forks_repo_path": "whitepaper/main.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "Apache-2.0" ], "max_issues_repo_name": "davidlj95/smart-payment-channel", "max_issues_repo_path": "whitepaper/main.tex", "max_line_length": 1722, "max_stars_count": 3, "max_stars_repo_head_hexsha": "51acef689eca781cce18d72c167bb0dcfe8cf679", "max_stars_repo_licenses": [ "Apache-2.0" ], "max_stars_repo_name": "davidlj95/smart-payment-channel", "max_stars_repo_path": "whitepaper/main.tex", "max_stars_repo_stars_event_max_datetime": "2022-03-23T09:37:30.000Z", "max_stars_repo_stars_event_min_datetime": "2017-12-24T07:43:51.000Z", "num_tokens": 15821, "size": 68529 }
\documentclass[parskip=half, twocolumn, 13pt]{scrartcl} \renewcommand*{\familydefault}{\ttdefault} \usepackage{fontmfizz} \begin{document} \section*{fontmfizz} \mfThreedprint \quad \char`\\mfThreedprint \\ \mfAlpinelinux \quad \char`\\mfAlpinelinux \\ \mfAngular \quad \char`\\mfAngular \\ \mfAngularAlt \quad \char`\\mfAngularAlt \\ \mfAntenna \quad \char`\\mfAntenna \\ \mfApache \quad \char`\\mfApache \\ \mfArchlinux \quad \char`\\mfArchlinux \\ \mfAws \quad \char`\\mfAws \\ \mfAzure \quad \char`\\mfAzure \\ \mfBackbone \quad \char`\\mfBackbone \\ \mfBlackberry \quad \char`\\mfBlackberry \\ \mfBomb \quad \char`\\mfBomb \\ \mfBootstrap \quad \char`\\mfBootstrap \\ \mfC \quad \char`\\mfC \\ \mfCassandra \quad \char`\\mfCassandra \\ \mfCentos \quad \char`\\mfCentos \\ \mfClojure \quad \char`\\mfClojure \\ \mfCodeigniter \quad \char`\\mfCodeigniter \\ \mfCodepen \quad \char`\\mfCodepen \\ \mfCoffeeBean \quad \char`\\mfCoffeeBean \\ \mfCplusplus \quad \char`\\mfCplusplus \\ \mfCsharp \quad \char`\\mfCsharp \\ \mfCss \quad \char`\\mfCss \\ \mfCssthree \quad \char`\\mfCssthree \\ \mfCssthreeAlt \quad \char`\\mfCssthreeAlt \\ \mfDthree \quad \char`\\mfDthree \\ \mfDatabase \quad \char`\\mfDatabase \\ \mfDatabaseAlt \quad \char`\\mfDatabaseAlt \\ \mfDatabaseAlttwo \quad \char`\\mfDatabaseAlttwo \\ \mfDebian \quad \char`\\mfDebian \\ \mfDocker \quad \char`\\mfDocker \\ \mfDreamhost \quad \char`\\mfDreamhost \\ \mfElixir \quad \char`\\mfElixir \\ \mfElm \quad \char`\\mfElm \\ \mfErlang \quad \char`\\mfErlang \\ \mfExherbo \quad \char`\\mfExherbo \\ \mfFedora \quad \char`\\mfFedora \\ \mfFireAlt \quad \char`\\mfFireAlt \\ \mfFreebsd \quad \char`\\mfFreebsd \\ \mfFreecodecamp \quad \char`\\mfFreecodecamp \\ \mfGentoo \quad \char`\\mfGentoo \\ \mfGhost \quad \char`\\mfGhost \\ \mfGit \quad \char`\\mfGit \\ \mfGnome \quad \char`\\mfGnome \\ \mfGo \quad \char`\\mfGo \\ \mfGoAlt \quad \char`\\mfGoAlt \\ \mfGoogle \quad \char`\\mfGoogle \\ \mfGoogleAlt \quad \char`\\mfGoogleAlt \\ \mfGoogleCode \quad \char`\\mfGoogleCode \\ \mfGoogleDevelopers \quad \char`\\mfGoogleDevelopers \\ \mfGradle \quad \char`\\mfGradle \\ \mfGrails \quad \char`\\mfGrails \\ \mfGrailsAlt \quad \char`\\mfGrailsAlt \\ \mfGrunt \quad \char`\\mfGrunt \\ \mfGulp \quad \char`\\mfGulp \\ \mfGulpAlt \quad \char`\\mfGulpAlt \\ \mfHadoop \quad \char`\\mfHadoop \\ \mfHaskell \quad \char`\\mfHaskell \\ \mfHeroku \quad \char`\\mfHeroku \\ \mfHtml \quad \char`\\mfHtml \\ \mfHtmlfive \quad \char`\\mfHtmlfive \\ \mfHtmlfiveAlt \quad \char`\\mfHtmlfiveAlt \\ \mfIphone \quad \char`\\mfIphone \\ \mfJava \quad \char`\\mfJava \\ \mfJavaBold \quad \char`\\mfJavaBold \\ \mfJavaDuke \quad \char`\\mfJavaDuke \\ \mfJavascript \quad \char`\\mfJavascript \\ \mfJavascriptAlt \quad \char`\\mfJavascriptAlt \\ \mfJetty \quad \char`\\mfJetty \\ \mfJquery \quad \char`\\mfJquery \\ \mfKde \quad \char`\\mfKde \\ \mfLaravel \quad \char`\\mfLaravel \\ \mfLineGraph \quad \char`\\mfLineGraph \\ \mfLinuxMint \quad \char`\\mfLinuxMint \\ \mfLooking \quad \char`\\mfLooking \\ \mfMagento \quad \char`\\mfMagento \\ \mfMariadb \quad \char`\\mfMariadb \\ \mfMaven \quad \char`\\mfMaven \\ \mfMicroscope \quad \char`\\mfMicroscope \\ \mfMobileDevice \quad \char`\\mfMobileDevice \\ \mfMobilePhoneAlt \quad \char`\\mfMobilePhoneAlt \\ \mfMobilePhoneBroadcast \quad \char`\\mfMobilePhoneBroadcast \\ \mfMongodb \quad \char`\\mfMongodb \\ \mfMssql \quad \char`\\mfMssql \\ \mfMysql \quad \char`\\mfMysql \\ \mfMysqlAlt \quad \char`\\mfMysqlAlt \\ \mfNetbsd \quad \char`\\mfNetbsd \\ \mfNginx 
\quad \char`\\mfNginx \\ \mfNginxAlt \quad \char`\\mfNginxAlt \\ \mfNginxAlttwo \quad \char`\\mfNginxAlttwo \\ \mfNodejs \quad \char`\\mfNodejs \\ \mfNpm \quad \char`\\mfNpm \\ \mfObjc \quad \char`\\mfObjc \\ \mfOpenshift \quad \char`\\mfOpenshift \\ \mfOracle \quad \char`\\mfOracle \\ \mfOracleAlt \quad \char`\\mfOracleAlt \\ \mfOsx \quad \char`\\mfOsx \\ \mfPerl \quad \char`\\mfPerl \\ \mfPhoneAlt \quad \char`\\mfPhoneAlt \\ \mfPhoneGap \quad \char`\\mfPhoneGap \\ \mfPhoneRetro \quad \char`\\mfPhoneRetro \\ \mfPhp \quad \char`\\mfPhp \\ \mfPhpAlt \quad \char`\\mfPhpAlt \\ \mfPlayframework \quad \char`\\mfPlayframework \\ \mfPlayframeworkAlt \quad \char`\\mfPlayframeworkAlt \\ \mfPlone \quad \char`\\mfPlone \\ \mfPostgres \quad \char`\\mfPostgres \\ \mfPostgresAlt \quad \char`\\mfPostgresAlt \\ \mfPython \quad \char`\\mfPython \\ \mfRaspberrypi \quad \char`\\mfRaspberrypi \\ \mfReactjs \quad \char`\\mfReactjs \\ \mfRedhat \quad \char`\\mfRedhat \\ \mfRedis \quad \char`\\mfRedis \\ \mfRuby \quad \char`\\mfRuby \\ \mfRubyOnRails \quad \char`\\mfRubyOnRails \\ \mfRubyOnRailsAlt \quad \char`\\mfRubyOnRailsAlt \\ \mfRust \quad \char`\\mfRust \\ \mfSass \quad \char`\\mfSass \\ \mfSatellite \quad \char`\\mfSatellite \\ \mfScala \quad \char`\\mfScala \\ \mfScalaAlt \quad \char`\\mfScalaAlt \\ \mfScript \quad \char`\\mfScript \\ \mfScriptAlt \quad \char`\\mfScriptAlt \\ \mfShell \quad \char`\\mfShell \\ \mfSitefinity \quad \char`\\mfSitefinity \\ \mfSolaris \quad \char`\\mfSolaris \\ \mfSplatter \quad \char`\\mfSplatter \\ \mfSpring \quad \char`\\mfSpring \\ \mfSuse \quad \char`\\mfSuse \\ \mfSvg \quad \char`\\mfSvg \\ \mfSymfony \quad \char`\\mfSymfony \\ \mfTomcat \quad \char`\\mfTomcat \\ \mfUbuntu \quad \char`\\mfUbuntu \\ \mfUnity \quad \char`\\mfUnity \\ \mfWireless \quad \char`\\mfWireless \\ \mfWordpress \quad \char`\\mfWordpress \\ \mfXeleven \quad \char`\\mfXeleven \\ \end{document}
{ "alphanum_fraction": 0.6927292656, "avg_line_length": 37.7517241379, "ext": "tex", "hexsha": "f6accedb4affcbe19ebe1a2c642902cc34ce934f", "lang": "TeX", "max_forks_count": 3, "max_forks_repo_forks_event_max_datetime": "2020-11-14T22:32:02.000Z", "max_forks_repo_forks_event_min_datetime": "2015-01-19T14:47:31.000Z", "max_forks_repo_head_hexsha": "4a9f0076552de6cbb8266650ca9b33feaceeb4f2", "max_forks_repo_licenses": [ "MIT" ], "max_forks_repo_name": "kdungs/latex-fontmfizz", "max_forks_repo_path": "fontmfizz.tex", "max_issues_count": 4, "max_issues_repo_head_hexsha": "4a9f0076552de6cbb8266650ca9b33feaceeb4f2", "max_issues_repo_issues_event_max_datetime": "2020-10-23T07:26:08.000Z", "max_issues_repo_issues_event_min_datetime": "2015-12-16T08:59:18.000Z", "max_issues_repo_licenses": [ "MIT" ], "max_issues_repo_name": "kdungs/latex-fontmfizz", "max_issues_repo_path": "fontmfizz.tex", "max_line_length": 63, "max_stars_count": 5, "max_stars_repo_head_hexsha": "4a9f0076552de6cbb8266650ca9b33feaceeb4f2", "max_stars_repo_licenses": [ "MIT" ], "max_stars_repo_name": "kdungs/latex-fontmfizz", "max_stars_repo_path": "fontmfizz.tex", "max_stars_repo_stars_event_max_datetime": "2020-05-10T12:09:37.000Z", "max_stars_repo_stars_event_min_datetime": "2015-12-16T08:57:41.000Z", "num_tokens": 1995, "size": 5474 }
\section{Example} Text here \gls{reference} \glspl{reference2} \must{Something really important!} \should{something quite important} \fbr{Insert figure here} \begin{lstlisting} CODE \end{lstlisting}
{ "alphanum_fraction": 0.7696078431, "avg_line_length": 15.6923076923, "ext": "tex", "hexsha": "d7c072911f3f1c19593ac4c72c1885f2f3044f10", "lang": "TeX", "max_forks_count": null, "max_forks_repo_forks_event_max_datetime": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_head_hexsha": "4c332b9f9fb4b38b2dec0aff42234c866b518991", "max_forks_repo_licenses": [ "BSD-3-Clause" ], "max_forks_repo_name": "WarwickRSE/MaterialTemplates", "max_forks_repo_path": "NotesSource/Chapter.tex", "max_issues_count": null, "max_issues_repo_head_hexsha": "4c332b9f9fb4b38b2dec0aff42234c866b518991", "max_issues_repo_issues_event_max_datetime": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_licenses": [ "BSD-3-Clause" ], "max_issues_repo_name": "WarwickRSE/MaterialTemplates", "max_issues_repo_path": "NotesSource/Chapter.tex", "max_line_length": 69, "max_stars_count": null, "max_stars_repo_head_hexsha": "4c332b9f9fb4b38b2dec0aff42234c866b518991", "max_stars_repo_licenses": [ "BSD-3-Clause" ], "max_stars_repo_name": "WarwickRSE/MaterialTemplates", "max_stars_repo_path": "NotesSource/Chapter.tex", "max_stars_repo_stars_event_max_datetime": null, "max_stars_repo_stars_event_min_datetime": null, "num_tokens": 57, "size": 204 }